Are we chasing a mirage in Artificial General Intelligence?

Leading Big Tech firms are making rapid advances in Generative AI, with applications across a wide spectrum of business and industry. Many of these firms are also pursuing the goal of developing Artificial General Intelligence (AGI). How will a full-fledged AGI differ from the currently booming Generative AI? How will AGI benefit humanity, and what risks are we likely to confront? How close are we to developing AGI? Will we end up creating a monstrous super-human in the form of AGI? Or is AGI a mirage, unlikely ever to be achieved?

This article begins with a discussion of what we mean by AGI and how rapid advances in GenAI could accelerate the race towards achieving it. This is followed by a brief overview of what it takes to develop a full-fledged AGI and when, if at all, that is likely to happen. Finally, weighing the potential benefits and risks of a full-fledged AGI, it considers whether AGI is a mirage we are chasing, and whether it is worth chasing at all.

Prior to the recent disruptive development of Generative AI, many successful use cases of traditional AI were already at work. These included customer service chatbots, voice assistants, recommendation engines, AI-powered business analytics and image/facial recognition systems. Advances in neural networks and deep learning were a major breakthrough in these pursuits. Generative AI, however, is the new kid on the block, and its almost universal applicability has brought about a paradigm shift in the way AI is used.

What do we mean by AGI?

According to McKinsey, “AGI is a theoretical AI system with capabilities that rival those of a human. AGI, once realized, would replicate human-like cognitive abilities including reasoning, problem solving, perception, learning and language comprehension.” Randy Bean goes a step further, describing AGI as that state of AI development where “it can perform all human cognitive skills better than the smartest human”. The charter of OpenAI describes AGI as “a highly autonomous system that outperforms humans”. Once realized to its full potential, AGI would adapt to any situation, think and perform any intellectual task as well as a human or better, and learn from its own mistakes. It would have the flexibility to adapt to novel situations, making it capable of solving even new problems in unfamiliar contexts. A fully developed AGI would possess abstract thinking, background knowledge, common sense, an understanding of cause and effect, and transfer learning.

While describing various forms of AI, authors make a distinction between weak (or narrow) AI and strong AI. Weak AI, more accurately called Artificial Narrow Intelligence (ANI), refers to AI systems designed to handle specific tasks. Owing to this task specificity, ANI already has applications in narrow, well-defined domains. Examples include “Deep Blue”, developed by IBM for chess; “IBM Watson”, which famously competed on the quiz show Jeopardy!; and “AlphaGo”, developed by DeepMind (an Alphabet subsidiary) to play the Chinese board game Go. Strong AI, on the other hand, possesses the attributes of generality, versatility and self-awareness. AGI therefore falls in the category of strong AI; indeed, it can be visualized as the strongest form of AI. However, there is no single definition or understanding of AGI, and researchers have proposed several approaches to developing it.

What will it take to realize AGI?

Several critical capabilities need to be developed to realize AGI. These include sensory perception, both visual and auditory; natural language understanding; problem solving; fine motor skills; navigation; creativity; and social and emotional engagement. To make this happen, firms pursuing AGI will need to develop advanced algorithms and new robotic approaches capable of learning quickly from the environment. To some extent, this is being achieved by creating Large Behavioral Models (LBMs), trained on large data sets of observed human actions and movements. AGI must have “the capability of object recognition, like in a two-year-old baby, the language understanding capability of a four-year-old kid and the manual dexterity of a six-year-old child”. To make all this happen, researchers will need to address the exponential growth in data volume and the generation of new data sources. Moreover, GPUs far more powerful than today's will be needed.

Has Generative AI accelerated the race towards reaching AGI?

In the recent past, the content generation capability of Generative AI in the form of text, images and videos has taken the world by storm. Generative AI exhibits some characteristics of “generality”, as its content generation capability has been successfully applied across diverse industries and functions. However, Generative AI is still not human-like in terms of creativity, logical reasoning and sensory perception. It lacks a real understanding of context and is not capable of adapting to and learning across varied domains. It can nevertheless be seen as a vital bridge between ANI and AGI and a stepping stone towards the more human-like intelligence expected of AGI.

Is AGI really round the corner?

Views about the timeline for realizing a full-fledged AGI range from highly cautious to highly optimistic. Several experts have offered predictions on when AGI is likely to emerge. Geoffrey Hinton, widely known as the Godfather of AI, predicted in 2023 that AGI could be realized within 5 to 20 years. Louis Rosenberg, a long-time AI researcher, predicted its realization by 2030. Patrick Winston, the late MIT professor, predicted 2040. Ray Kurzweil, a highly esteemed computer scientist, predicted 2045. Yet another expert, Jürgen Schmidhuber, co-founder of a Swiss AI company, predicted that AGI would be fully realized by 2050.

In a survey conducted in 2012-13 with 550 participants, nearly 90% of respondents opined that AGI would be realized by 2075. In another survey, carried out in 2017, 45% of respondents predicted its realization before 2060; 34% predicted its realization after 2060; and 21% predicted that a full-fledged AGI would never be realized. In a more recent survey with 1700 participants, a majority predicted that AGI would be realized before 2060. Rodney Brooks, Panasonic Professor of Robotics (emeritus) at MIT, predicted that AGI will not arrive until the year 2300. According to Richard Sutton, Professor of Computer Science at the University of Alberta, there is a 25% probability that AGI will be realized by 2030, a 50% probability that it will be realized by 2040, and a 10% probability that it may never be realized.

Optimists like Sam Altman, however, believe that AGI will be realized in the near future, and OpenAI is striving expeditiously towards that goal. In his view, the development of AGI will not be a phenomenon witnessed one fine morning but an evolutionary process: “AGI will be surprisingly a continuous thing with every year a new and improved model evolving”. Mark Zuckerberg also believes that the realization of AGI will be gradual, with no specific threshold to be met before its arrival can be declared.

Is it possible to realize AGI?

The strongest argument against the possibility of realizing a full-fledged AGI came from Hubert Dreyfus, the late American philosopher and professor at the University of California, Berkeley. He believed that “computers have no body, no childhood and no cultural practice and therefore they can’t acquire intelligence”. Ragnar Fjelland supported this argument with additional observations, including the following:

a) Human knowledge is partly tacit and can’t be articulated. It is not feasible to develop algorithms for something that is tacit.

b) Human reasoning requires prudence (the ability to make right decisions) and wisdom (the ability to see the whole). Neither attribute is algorithmic.

c) Cultural practice is critical. We can’t understand what a specific object is unless we have grown up in a culture where we have seen and handled such objects. Computers or AI machines are devoid of any cultural practice.

d) Causal knowledge is an important part of human-like intelligence, as it is a prerequisite for the acquisition of new knowledge. Computers or AI machines can’t (yet) handle causality.

At a 2009 AGI Roadmap workshop, a list of 14 broad areas of AGI capability was identified: (a) perception; (b) actuation; (c) memory; (d) learning; (e) reasoning; (f) planning; (g) attention; (h) motivation; (i) emotion; (j) modeling self and other; (k) social interaction; (l) communication; (m) quantitative judgment; and (n) building/creation. Each of these is further subdivided into smaller sub-capabilities. It is extremely unlikely that we will ever develop a full-fledged AGI with all these capabilities and sub-capabilities.

The Church-Turing thesis, propounded in 1936, implies that, given an unlimited amount of time and memory, any computable problem can be solved by an algorithm. Since we do not have unlimited time and memory, the argument goes, it is not possible to model the human brain, which would be a crucial requirement for a full-fledged AGI. A contrasting argument, however, is that while human intelligence is essentially fixed, machine intelligence keeps growing with advances in algorithms, processing power and memory; it is therefore only a matter of time before machine intelligence exceeds human intelligence. Experts see promise in quantum computing to accelerate the development of machine intelligence.

Is AGI a worthwhile goal for humankind to achieve?

Experts see many benefits to humanity from the development of AGI. These include the ability to address perennial societal problems: expanding global food production, detecting and mitigating natural disasters, improving the quality and affordability of health care, and enhancing the quality of education. According to Sam Altman, “AGI would help elevate humanity by increasing abundance, turbocharging the global economy and changing the limits of possibility… It can be a force multiplier for human ingenuity and creativity”. According to another expert, AGI may in future help address the complex challenges of climate change.

However, the risks of AGI are enormous too. According to Randy Bean, AGI would result in an explosion of intelligence, transforming into a kind of super-intelligence that will be impossible to control or contain. There is therefore an ethical dilemma in pressing ahead with AGI, which could surpass humans in intelligence but may not be a responsible intelligence. According to the late Stephen Hawking, the well-known English theoretical physicist, AGI could spell the end of the human race. Even Sam Altman once acknowledged that AGI could do grievous harm to humanity, as it carries serious risks of misuse, drastic accidents and societal disruption. Notably, OpenAI advocates short timelines and slow takeoff speeds in developing progressively advanced iterations of AGI: short timelines are more amenable to coordination, and a slow takeoff gives society more time to adapt.

Conclusion

The pursuit of AGI by leading tech firms is an irreversible process, driven as it is by the quest for competitive innovation and monopolistic commercial gains. At some point, as with Generative AI, the capabilities of AGI will surpass human capabilities in several attributes, and by a wider margin than Generative AI does today. But chances are that, at any stage of its iteration, it will never be fully human-like in all the attributes a human possesses. According to Encyclopedia Britannica, a “mirage”, in optics, is the deceptive appearance of a distant object. Maybe AGI is like a mirage: it seems close, but it is not really there, at least not in the next few decades.

Yet advanced versions of Artificial Intelligence, progressing steadily towards the ultimate potential of AGI, will continue to be developed. Their capabilities, matching or exceeding those of today's Generative AI, will be highly impressive and revolutionary. However, despite the noble humanitarian goals professed for AGI, and despite calls for guardrails and regulation, its destructive potential can't be wished away, and we may well be on the way to creating super-humans.

The author is grateful to Mr. Vijay Sethi, Board Member, Management Advisor Digital Transformation and Sustainability Evangelist, for providing constructive inputs for this article.

References

1. Alex Heath, “Mark Zuckerberg’s new goal is creating artificial general intelligence”, The Verge, Jan 18, 2024

2. Ben Goertzel, “Artificial General Intelligence: Concept, State of the Art, and Future Prospects”, Journal of Artificial General Intelligence, 5 (1), 1-46, 2014

3. Cameron Hashemi-Pour, “What is artificial general intelligence (AGI)?”, TechTarget, Nov. 2023

4. Cem Dilnegani, “When will Singularity happen?”, March 10, 2024

5. Ragnar Fjelland, “Why general artificial intelligence will not be realized”, Humanities and Social Sciences Communications, 7, 10 (2020), doi:10.1057/s41599-020-0494-4

6. LeewayHertz, “Journey to AGI: Exploring the next frontier in Artificial Intelligence”, DNM

7. McKinsey Explainer, “What is Artificial General Intelligence?”, March 2024

8. McKinsey, “An Executive Primer on Artificial General Intelligence”, April 2020

9. Randy Bean, “Artificial General Intelligence and the Coming Wave”, October 24, 2023

10. Sam Altman, “Planning for AGI and beyond”, OpenAI Blog, Feb 24, 2023

11. Neeraj Raisinghani, “A Guide to Artificial General Intelligence”, Solulab, DNM

