Geoffrey Hinton, a scientist known for his early work in artificial intelligence, has warned once again that people may be underestimating the risks linked to the technology. In a recent interview, he reflected on how the systems he helped create have moved in a direction he did not fully anticipate. His concern, he said, has grown stronger over time, as the capabilities of these models continue to evolve at a speed that makes it difficult for even experts to stay ahead.
Image: DOAC / YT
He believes that while current developments pose many challenges, future versions of AI could present dangers that are far more serious. Among the long-term possibilities he has considered is one in which machines develop the ability to make decisions beyond human control. This would not necessarily happen through a dramatic event but could take shape gradually as the systems become more autonomous and more complex.
One of the points he raised was that human beings have never had to deal with an intelligence that surpasses their own. Because of that, he said, it remains difficult to predict how things might unfold once that threshold is crossed. In his view, there is no clear plan for what to do if such systems begin to act in ways we cannot manage.
Despite the abstract nature of those risks, Hinton also discussed more immediate problems. He pointed out that AI tools can now be used by individuals with harmful intentions, especially those with access to biological or technical knowledge. For example, someone could exploit AI to create new viruses or run targeted cyberattacks. According to him, the cost of causing disruption has dropped, while the scale of damage has increased.
He also referred to the use of artificial intelligence in political contexts. The risk, he said, comes from its ability to shape public opinion without being noticed. This includes tampering with elections, reinforcing online echo chambers, and spreading misinformation at a speed that manual systems could not match. He noted that once these tools become part of daily political messaging, it becomes harder to know whether any campaign is entirely organic.
During the conversation, Hinton suggested that people might consider building skills that machines are less likely to replicate soon. Tasks that require physical precision and experience, such as plumbing, could remain in demand longer than those that rely on digital output alone. In his view, systems may excel at language or calculation, but they still struggle to deal with the physical world in real time.
He also shared doubts about the idea of slowing down development. Even if one country decides to pause, others may continue without hesitation. Because of that, he does not expect a coordinated slowdown to happen any time soon. The competition between nations and companies, he said, is moving faster than regulators can respond.
For now, his goal remains to raise awareness. While he once focused on pushing boundaries in machine learning, he now spends more time trying to highlight the risks that may lie just ahead. Whether people choose to act on those warnings, he admits, is something he cannot control.