Overcoming AI’s limitations

Artificial general intelligence will be able to understand or learn any intellectual task that a human can. AGI will have high costs and huge risks, but it’s coming—maybe soon.


Whether we realize it or not, most of us deal with artificial intelligence (AI) every day. Each time you do a Google search or ask Siri a question, you are using AI. The catch, however, is that these tools are not truly intelligent. They don’t think or understand the way humans do. Rather, they analyze massive data sets, looking for patterns and correlations.

That’s not to take anything away from AI. As Google, Siri, and hundreds of other tools demonstrate on a daily basis, current AI is incredibly useful. But bottom line, there isn’t much intelligence going on. Today’s AI only gives the appearance of intelligence. It lacks any real understanding or consciousness.

For today’s AI to overcome its inherent limitations and evolve into its next phase, artificial general intelligence (AGI), it must be able to understand or learn any intellectual task that a human can. Doing so will enable it to grow steadily in intelligence and ability, in the same way a human three-year-old grows to possess the intelligence of a four-year-old, and eventually a 10-year-old, a 20-year-old, and so on.

The real future of AI

AGI represents the real future of AI technology, a fact that hasn’t escaped numerous companies, including Google, Microsoft, Facebook, Elon Musk’s OpenAI, and the Kurzweil-inspired Singularity.net. The research being done at all of these companies depends on intelligence models with varying degrees of specificity and varying reliance on today’s AI algorithms. Somewhat surprisingly, though, none of these companies has focused on developing a basic, underlying AGI technology that replicates the contextual understanding of humans.

What will it take to get to AGI? How will we give computers an understanding of time and space?

The basic limitation of all the research currently being conducted is that the resulting AI is unable to understand that words and images represent physical things that exist and interact in a physical universe. Today’s AI cannot comprehend the concept of time or the fact that causes have effects. These basic underlying issues have yet to be solved, perhaps because it is difficult to get major funding to solve problems that any three-year-old can solve. We humans are great at merging information from multiple senses. A three-year-old will use all of its senses to learn about stacking blocks. The child learns about time by experiencing it, by interacting with toys and the real world in which it lives.

Likewise, an AGI will need sensory pods to learn similar things, at least at the outset. The computers don’t need to reside within the pods, but can connect remotely because electronic signals are vastly faster than those in the human nervous system. But the pods provide the opportunity to learn first-hand about stacking blocks, moving objects, performing sequences of actions over time, and learning from the consequences of those actions. With vision, hearing, touch, manipulators, etc., the AGI can learn to understand in ways that are simply impossible for a purely text-based or a purely image-based system. Once the AGI has gained this understanding, the sensory pods may no longer be necessary.
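The article doesn’t prescribe any particular software design, but a minimal sketch can make the pod idea concrete. The following Python snippet is purely illustrative and hypothetical: the class and method names are inventions for this example, not part of any existing AGI system, and a real pod would stream data over a network rather than run in the same process.

```python
# Hypothetical illustration only: a sensory pod exposes multi-modal readings
# plus a timestamp, so a remote learner can tie its actions to their effects over time.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensoryFrame:
    vision: List[float] = field(default_factory=list)   # camera features
    audio: List[float] = field(default_factory=list)    # microphone features
    touch: List[float] = field(default_factory=list)    # tactile readings
    timestamp: float = 0.0                               # lets the learner experience time passing

class SimulatedPod:
    """Stand-in for a physical pod; a real pod would be reached over a network link."""
    def __init__(self) -> None:
        self.t = 0.0

    def read_sensors(self) -> SensoryFrame:
        self.t += 0.1  # pretend 100 ms passes per reading
        return SensoryFrame(vision=[0.0], audio=[0.0], touch=[0.0], timestamp=self.t)

    def actuate(self, command: str) -> None:
        print(f"t={self.t:.1f}s executing: {command}")

# A remote learner observes, acts, then observes again, linking causes to effects.
pod = SimulatedPod()
before = pod.read_sensors()
pod.actuate("stack block A on block B")
after = pod.read_sensors()
print("elapsed:", round(after.timestamp - before.timestamp, 1), "seconds")
```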

The costs and risks of AGI

At this point, we can’t quantify the amount of data it might take to represent true understanding. We can only consider the human brain and speculate that some reasonable percentage of it must pertain to understanding. We humans interpret everything in the context of everything else we have already learned. That means that as adults, we interpret everything within the context of the true understanding we acquired in the first years of life. Only when the AI community takes the unprofitable step of recognizing this fact and conquering the fundamental basis of intelligence will AGI be able to emerge.

The AI community must also consider the potential risks that could accompany the attainment of AGI. AGIs are necessarily goal-directed systems that will inevitably exceed whatever objectives we set for them. At least initially, those objectives can be set for the benefit of humanity, and AGIs will provide tremendous benefit. If AGIs are weaponized, however, they will likely be efficient in that realm too. The concern here is not so much about Terminator-style individual robots as about an AGI mind able to strategize even more destructive methods of controlling mankind.

Banning AGI outright would simply transfer development to countries and organizations that refuse to recognize the ban. Accepting an AGI free-for-all, on the other hand, would likely enable nefarious people and organizations to harness AGI for calamitous purposes.

How soon could all of this happen? While there is no consensus, AGI could be here soon. Consider that only a very small percentage of the human genome (which totals approximately 750MB of information) defines the brain’s entire structure. That means a program containing less than 75MB of information, just a tenth of the genome, could fully represent the brain of a newborn with human potential. And given that the seemingly complex Human Genome Project was completed much sooner than anyone realistically expected, emulating the brain in software in the not-too-distant future should be well within the reach of a development team.
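For what it’s worth, the arithmetic behind that estimate is easy to check. The sketch below assumes a 10 percent brain fraction only because that is what yields the 75MB figure; the article itself says only that the share is “a very small percentage.”

```python
# Back-of-the-envelope estimate; the 10 percent brain fraction is an assumption
# chosen to match the "less than 75MB" figure cited in the text.
genome_size_mb = 750      # approximate information content of the human genome
brain_fraction = 0.10     # assumed share of the genome that specifies the brain

newborn_brain_program_mb = genome_size_mb * brain_fraction
print(f"Estimated size of a newborn-brain 'program': ~{newborn_brain_program_mb:.0f} MB")
# prints: Estimated size of a newborn-brain 'program': ~75 MB
```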

Similarly, a breakthrough in neuroscience at any time could lead to a mapping of the human neurome. There is, after all, a human neurome project already in the works. If that project progresses as quickly as the Human Genome Project did, it is fair to conclude that AGI could emerge in the very near future.

While the timing is uncertain, it is fairly safe to assume that AGI will emerge gradually. That means Alexa, Siri, or Google Assistant, all of which are already better at answering questions than the average three-year-old, will eventually be better than a 10-year-old, then an average adult, then a genius. With the benefits of each progression outweighing any perceived risks, we may disagree about the point at which a system crosses the line of human equivalence, but we will continue to appreciate – and anticipate – each level of advancement.

The massive technological effort being put into AGI, combined with rapid advances in computing horsepower and continuing breakthroughs in neuroscience and brain mapping, suggests that AGI will emerge within the next decade. This means systems with unimaginable mental power are inevitable in the following decades, whether we are ready or not. Given that, we need a frank discussion about AGI and the goals we would like to achieve in order to reap its maximum benefits and avoid any possible risks.

Charles Simon, BSEE, MSCS is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of Will the Computers Revolt? Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II, an AGI research software platform. For more information, visit https://futureai.guru/Founder.aspx.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

