An early glimpse of AI appeared in the 1939 film The Wizard of Oz, with the Tin Man, a machine brimming with emotion despite having no heart. The character got many engineers thinking about AI, and in 1950 Alan Turing, a young British mathematician, published a paper proposing that machines could use available information and reasoning to solve problems and make decisions.
Although the idea was introduced in 1950, the pursuit of AI stalled because computers could not yet store commands. In 1955, Allen Newell, Cliff Shaw, and Herbert Simon created the Logic Theorist, a program designed to mimic the problem-solving skills of a human; it was presented at the Dartmouth Summer Research Project on Artificial Intelligence the following year.
From 1957 to 1974, AI moved closer to reality. It became more feasible with machines such as the IBM 3850 mass storage system and Scelbi's 8H computer, and with the creation of the Silver Arm, according to the Computer History Museum. These advances convinced government agencies to fund AI research at several institutions; the government specifically wanted machines that could translate spoken language and process data at high speed. Optimism about AI ran high, and expectations ran even higher.
In 1970, Marvin Minsky told Life Magazine, "From three to eight years we will have a machine with the general intelligence of an average human being." The foundation was set, but there was still a long way to go before language processing, abstract thinking, and self-recognition became achievable.
Breaking through the first layer of AI revealed a pile of obstacles. The biggest was the lack of computational power: computers could neither store enough information nor process it fast enough. Hans Moravec, then a doctoral student at Stanford, stated that "computers were still millions of times too weak to exhibit intelligence." As progress stalled, funding dried up, and research slowed to a drip for ten years.
An expansion of the algorithmic toolkit and a boost in funding reignited AI in the 1980s. John Hopfield and David Rumelhart popularized "deep learning" techniques that allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems, which mimicked the decision-making of a human expert: the program would ask specialists in a field how they would respond to given scenarios, and once their responses had been captured for each scenario, the system could advise non-experts.
Around the same time, Japan launched its Fifth Generation Computer Project, investing four hundred million dollars in AI and expert systems. Its goals were to advance AI, apply logic programming, and revolutionize computer processing. None of these goals was met, although the project inspired many young, talented engineers and scientists. When its funding ended, interest in AI dwindled again.
Strangely, AI thrived even without funding or public enthusiasm. In 1997, world chess champion Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing program. It was the first time a reigning world chess champion lost to a computer, and it marked a giant step toward artificially intelligent decision-making programs. The same year, speech recognition software was implemented on Windows, a branch toward the spoken-language translation the government had yearned for. It began to seem that machines could do anything.
We have advanced further still: even facial emotion is now being mimicked and put to practical use, as shown by Milo, a robot developed by RoboKind to help children with autism spectrum disorders learn about emotional expression and empathy.
It all started with a film showcasing a machine filled with emotion, which inspired brilliant engineers and spread the idea of building robots capable of doing what we can do. That spark set off a chain reaction of advancement leading to where we are now. AI is branching in all directions, including transportation, healthcare, manufacturing, education, customer service, and media. Yet this is only the beginning; many more developments are on the horizon.