Divergent Perspectives on the Future of Artificial Intelligence
Jinho Kim, PhD
Representative, Swiss School of Management, Liaison Office in Seoul / www.ssmseoul.kr
As AI systems such as the conversational agent ChatGPT evolve rapidly, controversy has grown over whether artificial intelligence can reason like a human.
Recently, scientists at Microsoft claimed that AI is showing early sparks of Artificial General Intelligence (AGI), a stage at which it could learn and infer on its own without human intervention. Indeed, many people who have interacted with ChatGPT wonder whether the Singularity has finally arrived. The Singularity, or more precisely the Technological Singularity, refers to a future point at which technology changes so rapidly that human life as we know it is irrevocably altered. Beyond this point, humans would no longer be able to understand or keep up with technology. Put simply, once AI possesses intelligence similar or superior to humans', humanity could become enslaved by machines or face extinction. The renowned futurist Ray Kurzweil, famous for predicting the explosive growth of the web, robotic prosthetics, and the advent of self-driving cars, has predicted very specifically that the Singularity will occur in 2045.
So when will the Singularity happen?
There are conflicting views: some say it is coming soon, while others argue it will never arrive and there is nothing to worry about. The ongoing debate between Facebook founder Mark Zuckerberg and Tesla CEO Elon Musk starkly illustrates this divide. Musk warned on Twitter, “If you’re not concerned about AI safety, you should be. AI poses a far greater risk than North Korea’s nuclear and missile threats.” He has consistently argued that unless AI is preemptively regulated, it will seriously threaten humanity’s survival and future, as in the movie ‘Terminator.’ In contrast, Zuckerberg has criticized such pessimists as “very irresponsible,” arguing that AI can be put to life-saving use in services such as disease diagnosis and autonomous cars. Musk retorted that “Zuckerberg’s understanding of AI is limited.”
The late physicist Stephen Hawking was also deeply pessimistic about AI. He acknowledged that the AI developed so far has been useful, but argued that the emergence of strong AI with intelligence surpassing humans’ is inevitable and could be the worst event in human history. He even warned that if humanity cannot colonize another planet within the next 100 years, it may face extinction. Likewise, Dr. Geoffrey Hinton, widely known as the ‘Godfather of Deep Learning,’ warned on leaving Google that he regrets part of his life’s work and that AI chatbots pose “quite scary” dangers.
Those who fear that AI will be a catastrophe for humanity are convinced that “strong AI” with intelligence similar or superior to humans’ will appear in the near future. Many experts, however, believe the opposite: that strong AI will never appear. They regard AI as a “smart servant that performs specific tasks well” and believe that, properly used, it can make life convenient without cause for much worry. This perspective is grounded in Moravec’s paradox. Hans Moravec, a roboticist and professor at Carnegie Mellon University, observed that it is comparatively easy to make computers perform at adult level on intelligence tests or board games like checkers, but difficult or impossible to give them the perceptual and motor skills of a one-year-old. In other words, tasks that are hard for humans, such as intelligence tests or chess, are easy for machines, while perceptual and motor tasks that are easy for humans remain beyond them. In the same vein, the Harvard cognitive psychologist Steven Pinker noted that the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. Problems thought to be very difficult to implement in AI, such as finding the best move in Go (a hugely complex calculation), turned out to be easy, which is why AlphaGo was eventually built; problems thought to be easy, such as perceiving or moving, turned out to be very difficult. Hence, although the Go program AlphaGo could be created, strong AI cannot, because we cannot build AI with the basic perceptual and motor abilities that a one-year-old possesses.
Professor Moravec attributes this asymmetry to evolution. All human functions were learned biologically over a long evolutionary process, then preserved, improved, and optimized by natural selection. The older a human function is, the longer it has been refined by natural selection, which is why people perform these ancient functions unconsciously and almost effortlessly.
Human functions shaped by billions of years of evolution include recognizing faces, moving through space, inferring the motives behind others’ actions, catching a thrown ball, recognizing a speaker by voice, setting goals, and paying attention to interesting things. Most of these concern perception, movement, and social relationships. Because they were optimized over such a long evolutionary history, roboticists cannot reverse-engineer their underlying principles, and AI that implements them cannot be built.
Abstract thinking, by contrast, is a very recent human development, a few thousand years old at most. People must work hard to do it, and precisely because it is deliberate and explicit, its technical principles are well understood, making it comparatively easy to implement in AI. These recently acquired functions include mathematics, engineering, games, law, medicine, finance, administration, logical and scientific reasoning, and even opening a can. They have not yet evolved to fit the human body and brain well, so humans perform them poorly, while AI delivers impressive performance.
In summary, thanks to billions of years of gradual evolution, humans easily perform everyday actions such as walking, feeling, hearing, seeing, and communicating, whereas relatively recently acquired abstract thinking, such as calculation, demands considerable time and energy. For AI the situation is reversed: everyday human actions are very difficult, while mathematical calculation and logical analysis are now easy to implement. AI effortlessly does things far too hard for humans, such as computing astronomical numbers or solving complex equations. It was once assumed that AI would just as easily master all the everyday actions humans perform unconsciously, but it turned out there is no known way to teach computers to do so. In short, the longer a human function took to develop over the course of evolution, the more difficult or impossible it is for AI to implement.
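The contrast can be made concrete with a few lines of Python (an illustrative sketch, not from the original article; the function names in the final comment are hypothetical placeholders, not real APIs):

```python
# Moravec's paradox in miniature: exact arithmetic on astronomical
# numbers, daunting for a human, is trivial for a computer.
import math

# 1000 factorial is a number with 2,568 decimal digits; a human could
# not compute it in a lifetime, but a machine finishes instantly.
n = math.factorial(1000)
print(len(str(n)))  # prints 2568

# The reverse side of the paradox: there is no comparably short program
# for what any one-year-old does effortlessly. Hypothetical calls like
#   recognize_face(image)  or  catch_thrown_ball(trajectory)
# have no few-line implementation after decades of robotics research.
```

The point of the sketch is the asymmetry itself: the "hard" task fits in two lines, while the "easy" tasks appear only as comments, because no short program for them exists.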