
In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis.
By the 1950s, we had a generation of scientists, mathematicians, and philosophers for whom the concept of artificial intelligence (or AI) was culturally familiar. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing reasoned that humans use available information, together with reason, to solve problems and make decisions, so why couldn’t machines do the same? This was the logical framework of his 1950 paper, “Computing Machinery and Intelligence,” in which he discussed how to build intelligent machines and how to test their intelligence. Unfortunately, talk is cheap. First, computers needed to change fundamentally: before 1949, they could only execute commands, not store them, and so lacked a key prerequisite for intelligence. Second, computing was extremely expensive.
In 1956, the proof of concept arrived in the form of the Logic Theorist, a program funded by the Research and Development (RAND) Corporation and designed to mimic the problem-solving skills of a human. Considered by many to be the first artificial intelligence program, it was presented in 1956 at the historic Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) conference. Attendees left whole-heartedly aligned with the sentiment that AI was achievable, and the event catalyzed the next twenty years of AI research.
From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. These early successes convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in machines that could transcribe and translate spoken language, as well as in high-throughput data processing. Optimism was high and expectations were even higher.
Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn’t store enough information or process it fast enough.
In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques, which allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert and were widely adopted in industry. Unfortunately, most of the era’s ambitious goals were not met, though it could be argued that the effort inspired a talented young generation of engineers and scientists.
During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved.
In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer, and it served as a huge step toward artificially intelligent decision-making programs. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, this time in the direction of spoken-language interpretation. It seemed that there wasn’t a problem machines couldn’t handle. Even human emotion was fair game, as evidenced by Kismet, a robot that could recognize and display emotions.
It turns out that the fundamental limit of computer storage that was holding us back 30 years earlier was no longer a problem. Moore’s Law, the observation that the number of transistors on a chip, and with it computer memory and speed, doubles roughly every two years, had finally caught up with and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie in 2017.
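To get a feel for why three decades of doubling quietly dissolves a hardware bottleneck, here is a minimal back-of-the-envelope sketch in Python. The two-year doubling period and the 30- and 50-year horizons are illustrative assumptions for the arithmetic, not historical measurements:

```python
# Back-of-the-envelope: compounding under a Moore's-Law-style doubling.
# Assumption (illustrative only): capability doubles once every 2 years.

DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Return how many times more capable hardware becomes after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Roughly the span from the 1960s storage bottleneck to Deep Blue (1997):
print(f"After 30 years: ~{growth_factor(30):,.0f}x")  # ~32,768x
# Extending another 20 years, toward AlphaGo's 2017 match with Ke Jie:
print(f"After 50 years: ~{growth_factor(50):,.0f}x")  # ~33,554,432x
```

The point is that exponential compounding, rather than any single breakthrough, is what erased the storage ceiling of the early decades.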
Artificial intelligence is everywhere. We now live in the age of “big data,” an age in which we have the capacity to collect amounts of information too vast and cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries, such as technology, banking, marketing, and entertainment.
We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force.
So what is in store for the future? In the immediate future, AI language processing is a major frontier, and we can also expect to see driverless cars on the road. In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities in all tasks.
Even if the capability is there, ethical questions would serve as a strong barrier against its realization.
When that time comes (and ideally well before it does), we will need to have a serious conversation about machine policy and ethics (ironically, both fundamentally human subjects). For now, we’ll allow AI to steadily improve and run amok in society.
Credit: “The History of Artificial Intelligence” (https://sitn.hms.harvard.edu/) and “Technological Change” (https://ourworldindata.org/technological-change).