AI’s riding high these days, yet its blueprint was drawn long before the digital age hit its stride.
The idea of creating a machine that works and thinks like a human is a mid-20th-century one that evolved over decades to reach its present stage. Like any other technology, it went through its ups and downs before finally transforming the tech world. AI is a prime example of science fiction becoming reality.
The computer scientist Alan Turing was the first to propose the idea of a thinking machine, in his 1950 paper Computing Machinery and Intelligence. His work opened the door to AI research for others. In 1956, at the Dartmouth Summer Research Project workshop, the term Artificial Intelligence was coined, establishing AI as an academic field that researchers would passionately pursue in the years to come.
Years of research followed. In 1957, Frank Rosenblatt developed the Perceptron, a simple neural network that could recognize patterns, and in 1966 Joseph Weizenbaum at MIT built ELIZA, a chatbot that was an early milestone in natural language processing (NLP). Around the same time, work began on Shakey the Robot, the first mobile robot capable of autonomous navigation, decision-making and logical reasoning. These first-phase AI systems were simple and could perform only limited functions.
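Rosenblatt’s Perceptron is simple enough to sketch in a few lines. The snippet below is an illustrative modern reconstruction, not Rosenblatt’s original hardware design: a single weight vector is nudged toward each misclassified example until the data (here, the logical AND function) is separated.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron: learns a linear boundary for separable data."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Update weights only when the prediction is wrong
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Learn the logical AND function (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if x @ w + b > 0 else 0 for x in X]
print(preds)  # [0, 0, 0, 1]
```

The perceptron convergence theorem guarantees this update rule finds a separating boundary whenever one exists, which is exactly why its failure on non-separable problems like XOR later became a famous criticism.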
The 1970s saw a decline in interest in AI because of its flaws and its inability to solve complex problems. In its initial phase, AI failed to live up to people's expectations, and funding for AI research fell significantly during that decade.
Unfortunately, the breakthroughs of the 1980s were still not enough to attract more research funding or to convince the world. Expectations of AI were unrealistic at the time, and the limited functions it could perform were too costly. These factors led to another pause in AI research for some years, often called the second AI winter.
Machine learning reached a more advanced level in the 1990s, when AI models began to be trained on data sets. This period saw the development of Support Vector Machines (SVMs), ensemble methods for decision trees such as bagging and boosting, and Reinforcement Learning. These techniques made AI capable of solving more complex problems, so its reach across industries grew: it was used for facial recognition, fraud detection and document classification. From there onwards, AI became more advanced with each passing year.
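The ensemble idea behind bagging can be sketched briefly. The following is a toy illustration with invented data, not a production method: many weak learners (one-level decision "stumps" here) are trained on bootstrap resamples of the data and combined by majority vote.

```python
import random

def fit_stump(data):
    """Fit the best single-feature threshold split (a one-level decision tree)."""
    best = None
    for f in range(len(data[0][0])):
        for thresh in sorted({x[f] for x, _ in data}):
            for sign in (1, -1):
                preds = [1 if sign * (x[f] - thresh) > 0 else 0 for x, _ in data]
                acc = sum(p == y for p, (_, y) in zip(preds, data))
                if best is None or acc > best[0]:
                    best = (acc, f, thresh, sign)
    _, f, thresh, sign = best
    return lambda x: 1 if sign * (x[f] - thresh) > 0 else 0

def bagging(data, n_models=25, seed=0):
    """Train stumps on bootstrap resamples; predict by majority vote."""
    rng = random.Random(seed)
    models = [fit_stump([rng.choice(data) for _ in data]) for _ in range(n_models)]
    return lambda x: int(sum(m(x) for m in models) > n_models / 2)

# Toy data: label is 1 when the first feature exceeds 0.5
data = [((0.1, 0.9), 0), ((0.2, 0.1), 0), ((0.7, 0.3), 1), ((0.9, 0.8), 1)]
predict = bagging(data)
preds = [predict(x) for x, _ in data]
print(preds)
```

Averaging over resamples reduces the variance of the individual learners, which is the core reason these 1990s ensemble techniques outperformed any single decision tree.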
In the first two decades of the 21st century, better computational power enabled many breakthroughs in deep learning, neural networks and training methods. In 2006, Geoffrey Hinton developed Deep Belief Networks, which used unsupervised layer-by-layer pretraining to train deep neural networks, showing others a way to build deeper models. The introduction of the Transformer architecture then took NLP to the next level with its self-attention mechanism. These steps led to language models, like GPT, that could generate remarkably human-like text.
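The self-attention mechanism at the heart of the Transformer can be sketched in a few lines of NumPy. This is a minimal single-head version with random illustrative weights; a real Transformer adds multiple heads, masking and learned parameters.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax per row
    return weights @ V                               # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))   # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Each output row is a blend of every token's value vector, weighted by how relevant the other tokens are to it; this is what lets the model capture context across an entire sentence at once.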
Now, in 2025, large language models (LLMs) are the foundation of AI. Trained on vast datasets, these models can respond to questions much like a human, understanding context and linguistic nuance with an accuracy that has stunned the world. They are also creative: modern AI models such as ChatGPT and DALL-E can generate original text, images and video. The convenience these models bring means they are influencing every field of life, from writing to science, which has also worried creative professionals around the world.
Research in artificial intelligence is still ongoing. Researchers are now striving to reach artificial general intelligence (AGI), a level at which AI could do any intellectual task a human can, whether in art or science. Some researchers have also coined the term artificial superintelligence (ASI) for the level beyond AGI, at which AI could do tasks that remain impossible even for humans. This last stage is theoretical, but judging by AI's evolution over the decades and what it can do today, it would be no surprise if it eventually reaches AGI and ASI.
When it comes to the future of tech, AI’s calling the shots and running the show. Just like no field or aspect of life could escape the revolution brought by computers in the 1980s and 1990s, no one and nothing will escape the benefits or harms of artificial intelligence.
Image: DIW-Aigen
Read next:
• Study Finds Openness to AI’s Utility But Concern Grows Over Chatbots Replacing Real Human Relationships
• Industrial Age to Tech Age, The Changes and The Cycle of Innovation (infographic)
• New Report Shows that Energy Consumption by Data Centers is Going to Get Doubled by 2030