The History of AI

October 31, 2025 – Generative AI has a fairly short history; the technology was first introduced in the 1960s in the form of chatbots. It is a form of artificial intelligence that can now produce high-quality text, images, video, audio, and synthetic data in seconds. However, it wasn’t until 2014, when the generative adversarial network (GAN) was introduced, that generative AI evolved to the point of creating images, videos, and audio that seem like authentic recordings of real people. Today, generative AI is a major component of ChatGPT and its variants.
The 1950s
Generative AI is based on machine learning and deep learning algorithms. The first machine learning program was developed by Arthur Samuel in 1952 for playing checkers – he also coined the phrase “machine learning.”
The first trainable “neural network,” called the Perceptron, was developed in 1957 by Frank Rosenblatt, a psychologist at Cornell University. The Perceptron’s design resembled modern neural networks, but it had only a single layer of adjustable weights and thresholds between the input and output layers. The system fell out of favor because training was too time-consuming and a single layer limited what it could learn.
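The perceptron’s learning rule is simple enough to sketch in a few lines of Python. This is a modern illustration, not Rosenblatt’s original implementation; the AND-gate data, epoch count, and learning rate are illustrative choices:

```python
def train_perceptron(samples, epochs=20, lr=1):
    """Single layer of adjustable weights plus a threshold (bias)."""
    n_inputs = len(samples[0][0])
    w = [0] * n_inputs
    b = 0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire only if the weighted sum crosses the threshold.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - y
            # Nudge each weight in the direction that reduces the error.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Logical AND is linearly separable, so a single layer can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The single-layer limitation mentioned above is visible here: this same rule can never learn a function like XOR, whose classes no single line can separate.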
The 1960s and 1970s
The first historical example of generative AI was called ELIZA, which could also be considered an early chatbot. It was created in 1966 by Joseph Weizenbaum. ELIZA was a conversational computer program that responded to a human in natural language, with responses designed to sound empathetic.
During the 1960s and ’70s, the groundwork research for computer vision and basic pattern recognition was carried out. Facial recognition took a dramatic leap forward when Ann B. Lesk, Leon D. Harmon, and A. J. Goldstein significantly increased its accuracy (Man-Machine Interaction in Human-Face Identification, 1972). The team developed 21 specific markers, including features such as lip thickness and hair color, to automatically identify faces.
In 1970, Seppo Linnainmaa published the reverse mode of automatic differentiation, the method underlying backpropagation. Backpropagation propagates errors backward through a network as part of the learning process. The steps involved are:
- The error is computed at the output end
- The error is sent backward through the network
- The error is distributed across the network’s layers, adjusting weights for training and learning
(Backpropagation is used in training deep neural networks.)
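Linnainmaa’s result concerned automatic differentiation in general rather than neural networks, but the three steps above are what backpropagation does in a network today. Below is a minimal plain-Python sketch for a hypothetical two-input network with one sigmoid hidden layer; the network shape and weight values are arbitrary illustrative choices, not taken from any historical system:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, b1, w2, b2):
    """Forward pass: input -> one sigmoid hidden layer -> sigmoid output."""
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(len(x))) + b1[j])
         for j in range(len(b1))]
    o = sigmoid(sum(w2[j] * h[j] for j in range(len(h))) + b2)
    return h, o

def loss(x, target, w1, b1, w2, b2):
    """Squared error between the network output and the target."""
    _, o = forward(x, w1, b1, w2, b2)
    return 0.5 * (o - target) ** 2

def backprop(x, target, w1, b1, w2, b2):
    """Gradients of the loss for every weight, via the three steps above."""
    h, o = forward(x, w1, b1, w2, b2)
    # Step 1: compute the error signal at the output end (chain rule
    # through the output sigmoid).
    delta_o = (o - target) * o * (1.0 - o)
    grad_w2 = [delta_o * h[j] for j in range(len(h))]
    grad_b2 = delta_o
    # Steps 2-3: send the error backward and distribute it across the
    # hidden layer, scaling by each connection's weight.
    delta_h = [delta_o * w2[j] * h[j] * (1.0 - h[j]) for j in range(len(h))]
    grad_w1 = [[delta_h[j] * x[i] for i in range(len(x))] for j in range(len(h))]
    grad_b1 = delta_h
    return grad_w1, grad_b1, grad_w2, grad_b2

# Tiny fixed network: 2 inputs, 2 hidden units, 1 output (illustrative values).
w1 = [[0.15, -0.2], [0.4, 0.1]]
b1 = [0.35, 0.35]
w2 = [0.5, -0.3]
b2 = 0.6
```

Training a deep network repeats this forward/backward cycle over many examples, subtracting a small multiple of each gradient from its weight after every pass.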
