Like any complex technology, Artificial Intelligence has its roots in a number of fields. From philosophy to computer science, mathematics to linguistics, tracing the history of AI and automation is a difficult business. The field was officially named in the 1950s, but ideas about automated machines have existed since long before then. This is a history of the development of Artificial Intelligence from some of its earliest philosophical and theoretical inceptions through to modern day technology.
Ancient Automatons
Our fascination with automatons goes back a long way. Scholars have argued that the Ancient Greeks proposed automatic servants as a utopian alternative to human slaves, in one of the earliest examples of technosolutionism I’ve seen yet. The myths of Daedalus, recounted by Plato through the voice of Socrates, describe the inventor creating “animate statues”. Heron of Alexandria, writing around 60 AD, described steam-powered automata, engines, and wind-powered machines (he also purportedly invented the world’s first vending machine).
Skipping forward a few centuries, in the Byzantine Empire, the Emperor Constantine VII hired craftsmen from Baghdad to create enthralling golden automata to impress his guests. Similarly, the Banū Mūsā brothers, a group of 9th-century Muslim inventors, created a number of automata, including mechanical birds that could sing and move their wings.
As we approach more recent years, the pace of invention and the passion for automata both accelerate. From Leonardo da Vinci’s robotic knight in the late 15th Century to the development of self-driving cars and advanced AI technology in the 21st Century, we are obsessed with automation.

Although many of these inventions seem like vanity projects or interesting but useless distractions, the ideas that were generated alongside them – including advances in mathematics, engineering, philosophy, and science – were incredibly important. Of all these inventions, one stands out as the obvious predecessor to modern computing: Charles Babbage’s Difference Engine.
The impossible machine and the first programmer
Babbage’s engines (Difference Engines number 1 and 2, and his Analytical Engine) represent some of the “greatest intellectual achievements of the 19th century”. Although Babbage found it impossible to build his machines – due to the cost and the materials required – they did inspire the world’s first computer programmer.
Ada Lovelace was the daughter of the famous poet Lord Byron, though her mother, Annabella Milbanke Byron, separated from her father and Ada never knew him. She first met Charles Babbage through a mutual friend in 1833. The two exchanged ideas via correspondence, and, even though the machine was never actually built, Lovelace wrote programs for the Analytical Engine, including an algorithm which could be used to compute the Bernoulli numbers.
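Lovelace set out that computation in her famous “Note G” as a table of operations for the Analytical Engine. As a purely modern illustration – my own sketch, not her notation or numbering – the same numbers can be generated in a few lines of Python using the standard recurrence for the Bernoulli numbers:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions (B_1 = -1/2 convention)."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        # Recurrence: B_m = -1/(m+1) * sum_{j<m} C(m+1, j) * B_j
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

print(bernoulli(8))  # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```

Lovelace, of course, had no high-level language to lean on: she had to express each operation as an explicit instruction for a machine that existed only on paper.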
The first AI Summer
Although these early technologies experimented with automation, and even anticipated some of the elements of modern computers, it is not until 1950 that we see the beginnings of the field we now call AI. In his paper ‘Computing Machinery and Intelligence’, Alan Turing proposed a test of whether a machine could convincingly display intelligent behaviour. The “imitation game” – now more commonly known as the Turing Test – pits a machine against a human. To pass the Turing Test, a machine must be able to hold a conversation with a human evaluator and convince the evaluator that it is a human rather than a machine. Turing’s work became one of the cornerstones of computer science.
In 1956, at a Dartmouth College conference, John McCarthy coined the term “Artificial Intelligence” to describe a new field bringing together computer science and mathematics, drawing in fellow attendees such as Claude Shannon, famous for Information Theory, and Marvin Minsky, who went on to co-found MIT’s Artificial Intelligence Laboratory.
The period from 1956 to 1973 is referred to as the “first AI Summer” due to an increase in research, funding, and government interest in AI. The period saw many notable achievements, including Joseph Weizenbaum’s ELIZA, the first chatbot, and John McCarthy’s creation of LISP, a programming language which is still in use today.
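ELIZA worked by matching keywords in the user’s input and echoing fragments back as canned questions. A toy sketch of that idea – my own Python approximation, not Weizenbaum’s original MAD-SLIP code or his DOCTOR script – looks something like this:

```python
import re
import random

# Toy ELIZA-style rules: match a keyword pattern, echo part of the input back.
RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
]

def respond(text):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please, go on."  # fallback when no rule matches

print(respond("I am feeling anxious about the future"))
# e.g. "Why do you think you are feeling anxious about the future?"
```

Crude as it is, this keyword-and-template trick was enough to convince some of Weizenbaum’s own users that they were talking with a sympathetic listener.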
The period came to an end in 1973, when the “Lighthill Report” cast damning aspersions on the ability of AI researchers to achieve some of their grandest claims, including modelling the human mind.

What’s in a name?
The term “Artificial Intelligence” fell out of favour after the Lighthill Report, but that doesn’t mean the field disappeared. Research into Machine Learning, Neural Networks, Natural Language Processing, and other areas continued throughout the 1970s and 1980s.
Although “AI” had become associated with the hype and unfulfilled promises of these earlier technologies, there were still great strides forward. From advanced mathematics like Hidden Markov Models to Kunihiko Fukushima’s Neocognitron, the forerunner of today’s Convolutional Neural Networks, development continued towards the kinds of AI we are more familiar with today.
In the 1980s AI entered a brief “second Summer” on the back of expert systems and renewed interest in neural networks, but it couldn’t shake the disappointments of previous eras. Despite the efforts of researchers and technology companies, the field entered its second Winter and research once again became more conservative.
Garry Kasparov versus The Machines
It wasn’t mathematics or computer science that ultimately lifted AI out of its second Winter and back into the public eye: it was chess. Garry Kasparov’s book Deep Thinking details the long and complicated journey towards IBM’s Deep Blue, the first computer system to defeat a reigning world chess champion in a match under standard tournament conditions.
Deep Blue isn’t the only AI to have grabbed headlines by beating humans at their own games. IBM’s Watson beat human champions at Jeopardy! in 2011, Google DeepMind’s AlphaGo defeated the European Go champion Fan Hui in 2015 (and world champion Lee Sedol the following year), and in 2022 Meta’s CICERO model reached human-level performance against human players in the strategy game Diplomacy.
These public victories have helped to improve the popular image of AI, leading to our current state.

The Modern Age of AI
Over the past two decades, there has been a significant increase in funding and research for AI and ML projects. The rapid advancement of technology – smartphones, the internet, and social media – has enabled leading tech companies such as Google, Microsoft, Amazon, and Meta to develop powerful AI systems that drive their predictive engines, search tools, and business models. These companies have access to vast amounts of data, which is crucial for machine learning algorithms.
The shift from symbolic methods to neural networks that began in the 1980s laid the groundwork for the current AI revolution, and with the availability of massive datasets and mass surveillance, these algorithms have become truly useful. As a result, AI can now be found in a variety of everyday technologies, ranging from cars to refrigerators. Despite controversies over whether these systems really constitute intelligence, it is unlikely at this stage that we will experience another AI winter.
The future of AI remains uncertain. While some experts worry about the potential harm that Artificial General Intelligence or Artificial Super Intelligence could cause to humanity, others, such as futurist Ray Kurzweil, are optimistic about the positive impact that AI could have on society. There is a growing consensus among experts that finding a balance between the dystopian and utopian perspectives of AI is crucial.
In research, the merging of symbolic AI and neural networks is giving rise to neuro-symbolic AI, an approach which aims to combine the formal logic and reasoning of symbolic AI with the data modelling and adaptability of neural networks. With the growing use of AI in industries such as finance and healthcare, the field is expected to continue to grow and evolve in the coming years.
Current AI models such as OpenAI’s GPT and Google’s LaMDA are dominating the headlines. At the time of writing, Google is about to make a huge announcement regarding the integration of an AI chatbot named “Bard” into its search engine, while Microsoft is set to incorporate GPT into both Bing and its Office products.
Artificial Intelligence has had a long and complex history since the term was coined in the 1950s, but there is no doubt that we are in a time of rapid development and change. The AI arms race between huge corporations like Google and Microsoft will accelerate the pace. We just need to make sure that we don’t get swept up in another cycle of AI hype and lose sight of the very real ethical and social concerns of these technologies.