Artificial Intelligence, even with its present-day capabilities, is still in its infancy — what researchers call Narrow AI. But what will be the pinnacle of AI? The question has intrigued scientists, policy-makers, and alarmists alike, and Superintelligence holds a possible answer.
Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
In theory, a Superintelligence would vastly surpass any human intellect: it could process and analyze data at a scale impossible for human brains, excel at complex problem solving, and possibly help us overcome global health, climate, and food challenges.
Picture the classic Darwinian march of evolution, from ape to early hominid to modern human. Today’s AI operates at the ape stage, the hypothetical Artificial General Intelligence (AGI) is the hominid, and Superintelligence is the fully evolved form.
The timeline for Superintelligence remains hotly contested among scientists and researchers, with some estimates ranging from 2040 to 2060.
The concept of Superintelligence presents humankind with a profound paradox: it could help cure diseases, mitigate disasters, and drive space exploration, yet at the same time become powerful enough to threaten our very existence.