We are writing this post to share a brief history of AI development: when, where, and how it all started. We try to explain the context in plain language that everyone can understand, just as we aim to eliminate the barriers that keep businesses from applying AI.
AI Spring: The birth of AI
In the US:
In 1942, Isaac Asimov, an American science fiction writer, published his short story Runaround, which features the first explicit statement of the “Three Laws of Robotics”: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Although it is difficult to pinpoint the exact roots of AI, Asimov’s work inspired generations of scientists in robotics, AI, and computer science.
In the UK:
Around the same time in Europe, Alan Turing, an English mathematician and computer scientist, developed “The Bombe” for the British government to decipher the coded messages used by the German military during WWII (this story was dramatized in The Imitation Game, with Benedict Cumberbatch starring as Turing). Later, in 1950, Turing published “Computing Machinery and Intelligence”, which described both how to create intelligent machines and how to test their intelligence. Turing is widely considered the father of theoretical computer science and artificial intelligence.
In Sweden:
Speaking of WWII: in Sweden, where Labelf is based, Arne Beurling was an extraordinary professor and mathematician who deciphered Nazi Germany’s codes and cipher machines. Working with only pen and paper, Beurling spent just two weeks solving them, and his work was later used to decipher the teletypewriter traffic passing through Sweden. When asked how he had done the “impossible”, Beurling refused to reveal his methods, saying only, “A magician does not reveal his secrets”.
The official founding of AI as a field:
In 1956, John McCarthy, an American computer scientist, organized the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College. Together with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, McCarthy coined the term “Artificial Intelligence”. The workshop marked the beginning of the AI spring and brought together researchers from different fields to “create a new research area aimed at building machines able to simulate human intelligence”.
AI Summer and Winter: The ups and downs of AI
Following the Dartmouth Conference (1956), AI made significant progress and substantial funding was invested in AI research. ELIZA, an early natural language processing (NLP) program developed by Joseph Weizenbaum at MIT, is a good example: machines could not only play chess but also hold simple conversations with humans. Such projects drew generous funding to AI research. In a 1970 interview with Life Magazine, Marvin Minsky claimed that a machine with the general intelligence of an average human being could be developed within three to eight years.
However, faith in AI development was not always strong, and much of the optimism proved unwarranted. In 1973, after British mathematician James Lighthill and the British Science Research Council questioned the over-optimism of AI research, the British government stopped funding AI research at all but three universities. The US Congress likewise criticized the high spending on AI research and cut its funding. The term “AI winter” came to describe these periods of reduced interest and funding that followed developers’ over-promises, massive media hype, and unrealistic user expectations.
The next boom came in the 1980s, especially with the rise of “expert systems”. Expert systems were enormously successful, and corporations around the world began developing and deploying them, mostly within in-house AI departments. While AI research focused mainly on knowledge-based systems and knowledge engineering, neural networks and connectionism also attracted renewed attention. Funding returned to AI research, most noticeably in Japan and the UK, marking the second AI summer.
The cycle of highs and lows repeated itself. In the late 1980s, the specialized AI hardware market suddenly collapsed: desktop computers began to outperform the far more expensive LISP machines, which were built to run LISP, the preferred language for AI research. Expert systems, for their part, came to be seen as too expensive to maintain and useful only in a few narrow contexts. AI progress once again failed to meet expectations, and the Japanese government ended its Fifth Generation Computer project in 1992, after ten years of research and development.
Reference
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14.
Beckman, B. (2002). Codebreakers: Arne Beurling and the Swedish Crypto Program during World War II. American Mathematical Society.