Artificial intelligence is a term that has been around for a long time. It was first used to describe machines capable of performing tasks by mimicking human intelligence, a very broad definition that covers many different types of AI. We’re going to look at a brief history of AI and what it means today.

1950

The British mathematician Alan Turing published “Computing Machinery and Intelligence” in Mind, a peer-reviewed journal. In it, he introduced the Turing test as a means of determining whether a machine can think.

To this day, this question of whether a machine can think has underpinned most AI research.

1955 

The Logic Theorist, created by Allen Newell, Herbert A. Simon, and Cliff Shaw, is considered one of the first artificial intelligence programs. It was designed to prove mathematical theorems.

1956

The Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) was held in 1956 and hosted by John McCarthy and Marvin Minsky.

McCarthy, Minsky, and other scientists, AI researchers, and mathematicians met at Dartmouth College to discuss the future of artificial intelligence. It was at this conference that the term “artificial intelligence” was officially coined.

1958

Lisp is a programming language created by John McCarthy in 1958. It has long been used in artificial intelligence research and was the native language of the Lisp machine, a class of computer systems popular in the 1980s. The name “Lisp” comes from the phrase “LISt Processing.”

1959

Arthur Samuel coined the term “machine learning.” Machine learning is the practice of training computers to learn from data without being explicitly programmed.
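
To make that idea concrete, here is a minimal sketch in plain Python: rather than hand-coding the rule y = 2x, the program recovers it from example data by gradient descent. The data, step size, and iteration count are illustrative, not from Samuel’s work.

```python
# A toy example of "learning without explicit programming":
# fit the unknown rule behind (x, y) examples instead of coding it.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]       # secretly generated by y = 2x

w = 0.0                         # the parameter the machine learns
for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad            # nudge w to reduce the error

print(round(w, 3))              # ~2.0: the rule was learned, not programmed
```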

1963

MIT received $2.2 million from DARPA to fund its research on AI. The Defense Advanced Research Projects Agency (DARPA) is an agency of the United States Department of Defense responsible for developing new military technologies.

1964

STUDENT, an early milestone in natural language processing, was a program written in Lisp by Daniel G. Bobrow to solve algebra word problems.

1965

Moore’s law is named after Gordon Moore, who observed in 1965 that the number of transistors in a dense integrated circuit doubles roughly every two years.
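
To see how quickly that doubling compounds, here is a quick back-of-the-envelope sketch; the 1971 baseline of 2,300 transistors (roughly the Intel 4004) is used purely for illustration.

```python
# Moore's law as arithmetic: one doubling per two-year step.
transistors = 2_300                  # ~Intel 4004, 1971 (illustrative baseline)
for year in range(1971, 2001, 2):
    print(year, f"{transistors:,}")
    transistors *= 2                 # apply one two-year doubling
```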

Around the same time, Edward Feigenbaum introduced the expert system. Expert systems are computer programs that emulate human expertise in complex fields, solving problems with a knowledge base of facts and rules plus an inference engine that reasons over them. They are an early example of successful artificial intelligence.
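
As a rough illustration of that architecture, here is a minimal sketch with a hypothetical knowledge base of if-then rules and a simple forward-chaining inference engine; real expert systems such as Feigenbaum’s DENDRAL were far more elaborate.

```python
# A toy expert system: known facts, if-then rules, and an inference
# engine that keeps firing rules until nothing new can be derived.
facts = {"fever", "cough"}                      # hypothetical inputs

rules = [
    ({"fever", "cough"}, "flu_suspected"),      # IF fever AND cough THEN ...
    ({"flu_suspected"}, "recommend_rest"),
]

def infer(facts, rules):
    """Forward chaining: fire any rule whose conditions are all known."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(set(facts), rules))   # derives flu_suspected, then recommend_rest
```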

1966

ELIZA, a natural language processing computer program, was created by Joseph Weizenbaum in 1966. The program was designed to simulate a conversation with a human being. It was the world’s first chatbot and a precursor to the chatbots we use today.
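
ELIZA worked largely by matching keyword patterns and reflecting the user’s own words back as questions. The two rules below are a sketch in that spirit, not Weizenbaum’s actual script.

```python
# An ELIZA-style exchange: match a pattern, echo the captured words.
import re

rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
]

def respond(utterance):
    for pattern, template in rules:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."               # generic fallback

print(respond("I am worried about computers"))
# Why do you say you are worried about computers?
```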

1968

SHRDLU, created at MIT by Terry Winograd, is considered an early success in AI natural language understanding: it could carry out commands and answer questions about a simulated “blocks world.”

1972

In 1972, the logic programming language Prolog was designed to tackle natural language processing, specifically computational linguistics. It has since found many other applications as well.

1973

The first full-scale anthropomorphic robot, WABOT-1, was created at Waseda University in Japan. It was capable of performing simple tasks such as walking, talking, and interacting with people.

1974 – 1980

DARPA cut back its funding of AI research. In the UK, AI budgets and research were also severely reduced following the critical Lighthill report.

This reduction in government funding, together with waning public interest in AI, signaled the beginning of the first AI winter.

1982

The Japanese government started the Fifth Generation Computer Systems (FGCS) project, whose goal was to advance AI through logic programming. The project lasted 11 years, and over $416 million was invested.

1987 – 1993

In 1987, funding was reduced once again in the AI field, beginning the second AI winter. It came to an end in 1993.

1988

In 1988, Judea Pearl published Probabilistic Reasoning in Intelligent Systems, the book that brought probabilistic reasoning into the AI mainstream and revolutionized the field.

1997

In 1997, IBM’s Deep Blue chess computer beat world champion Garry Kasparov 3.5–2.5.

1997 also saw Dragon Systems release NaturallySpeaking 1.0, the world’s first publicly available continuous speech recognition program.

This was also the year that Sepp Hochreiter and Jürgen Schmidhuber published LSTM (Long Short-Term Memory), a recurrent neural network (RNN) architecture. It is commonly used for handwriting and speech recognition, as well as for detecting anomalies in network traffic.
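
For a sense of what the architecture does, here is a minimal single time-step LSTM cell in NumPy, with randomly initialized weights purely for illustration; real systems use a deep learning framework rather than a hand-rolled cell.

```python
# One LSTM step: gates decide what to forget, what to store,
# and what to emit, giving the network a long-lived memory.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # all four gates in one pre-activation
    f = sigmoid(z[0:H])               # forget gate
    i = sigmoid(z[H:2*H])             # input gate
    o = sigmoid(z[2*H:3*H])           # output gate
    g = np.tanh(z[3*H:4*H])           # candidate cell update
    c = f * c_prev + i * g            # cell state: the "long-term" memory
    h = o * np.tanh(c)                # hidden state: the "short-term" output
    return h, c

# Toy run: input size 3, hidden size 4, a sequence of 5 random inputs.
rng = np.random.default_rng(0)
H, X = 4, 3
W, U = rng.normal(size=(4 * H, X)), rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, X)):
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```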

1998

Kismet, the AI robot that can recognize and simulate human emotions, was created by Dr. Cynthia Breazeal at MIT.

2002

iRobot created the world’s first autonomous AI vacuum cleaner, the Roomba.

2006

Oren Etzioni, Michele Banko, and Michael Cafarella wrote the paper “Machine Reading,” on the autonomous, unsupervised understanding of text.

2011

Apple’s AI assistant Siri was released in 2011.

This was also the year that IBM’s AI computer, Watson, won Jeopardy! by beating champions Ken Jennings and Brad Rutter.

2014

Microsoft and Amazon released their own AI assistants, Cortana and Alexa, respectively.

2018

Alibaba’s language processing AI beat humans on the Stanford reading comprehension test (SQuAD).

Google created BERT (Bidirectional Encoder Representations from Transformers), a machine learning technique for natural language processing. Over the next couple of years, Google would integrate it heavily into Search to better determine the intent behind users’ queries.
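
As a small example of the technique, the sketch below uses the Hugging Face transformers library (assuming it and a backend such as PyTorch are installed) to have a pretrained BERT fill in a masked word using context from both sides; the prompt is illustrative.

```python
# BERT's masked-language objective in action: guess the hidden word
# from bidirectional context. Requires: pip install transformers torch
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for guess in unmasker("Artificial intelligence will [MASK] the world."):
    print(guess["token_str"], round(guess["score"], 3))
```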

2019

2019 saw the completion and full release of OpenAI’s GPT-2 (Generative Pre-trained Transformer 2) language model. GPT models have since become the engines behind most of the best AI writing software.


What does the future have in store for AI?

Having looked at a brief history of artificial intelligence, I believe it’s safe to say that AI will be integrated into almost every aspect of our lives and will play a far more significant role than ever before. We can already perform deep learning tasks quickly and efficiently on large datasets, and I believe we are not yet aware of what the possibilities really are.

As technology is evolving rapidly, it will be interesting to see how we can develop and apply AI in the coming decades.

Where will AI technology take us next?
