Author: Subhodeep Mukhopadhyay
Artificial Intelligence is a topic that evokes mixed reactions among people. Some consider AI a technological revolution that will solve all our problems and transform our planet into a veritable paradise. Others equate AI with robots, courtesy of Hollywood movies: not benevolent robots, but exceedingly intelligent, villainous ones with nefarious plans to wipe out human civilization.
Meaningful public discourse on AI is practically non-existent. There is no worthwhile communication and dialogue between AI experts on the one hand and the general public, policy-makers and leaders on the other. Hence most attempts to answer the question of whether AI is a threat to humanity or a boon are informed by the camps to which people belong. For the former camp, AI is all rainbows, roses and utopia; for the latter, the future is a dark, grim and frightening dystopia.
As a computer engineer, I have always had an insider view of these technologies and have witnessed many technical changes over the decades. From BASIC to C to C++ to Java, each iteration produced better and more sophisticated coding mechanisms. The rapid advancements in neural nets, from simple back-propagation to recurrent neural networks, were another example of technological progress.
Yes, technology was advancing by leaps and bounds. For example, given enough data, deep learning algorithms can perform translation between any language pair after a few days of training. Some language translators would lose their jobs. But so what? This is the march of progress. Somewhere, some other jobs would be created, and balance would once again prevail.
Many people – technocrats, economists, policy-makers, politicians and industrialists – believe in this naïve thesis. But after having read (and re-read) “Artificial Intelligence and the Future of Power: 5 Battlegrounds”, many of my preconceived notions have undergone drastic changes.
The first question is: how do we scope AI? Many people, especially those from a technical background, view AI as a set of technologies that "mimic" and augment human intelligence. This is the camp I used to inhabit. At the end of the day, AI is all about training a model on data and using it for predictions or output generation. That's all it is. Such models can help augment human intelligence and drive productivity. In this view, AI is merely the next step in technological progress.
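The "train on data, then predict" cycle described above can be sketched in a few lines. The example below is purely illustrative and is my own assumption, not anything from the book: it fits a one-variable linear model by ordinary least squares and then uses it to generate an output for unseen input, which is the same fit-then-predict pattern that underlies far larger models.

```python
# A minimal, illustrative sketch of the "train a model on data,
# then use it for predictions" cycle. The data and model here are
# hypothetical examples, not drawn from the book under review.

def train(xs, ys):
    """Fit y = a*x + b to the observations by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    """Use the trained model to generate an output for a new input."""
    a, b = model
    return a * x + b

# "Training" on past observations...
model = train([1, 2, 3, 4], [2, 4, 6, 8])
# ...then predicting for an input the model has never seen.
print(predict(model, 5))  # → 10.0
```

Real AI systems replace this tiny linear model with networks of millions or billions of parameters, but the basic loop — fit to data, then generate outputs — is the same.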
On the other hand, if we scope AI more widely and treat it not just as the underlying technology but as an enabler that turbo-charges other cutting-edge research, such as nanotechnology, medicine and drug discovery, and warfare, then we get a very different picture. AI suddenly becomes a competitor and a clear and present danger to humans. It is this view that is lacking today, and this book by Rajiv Malhotra allows us to view this sweeping canvas and understand the impact of AI. Only at such wide scoping does the real effect of AI across the various domains of human life become clear.
The second question is: what kind of threat are we talking about? Physical threat? Job security? Or something else?
AI is impacting diverse sectors in previously unthinkable ways: banking, military, healthcare, agriculture and anything else we can imagine. ML algorithms increasingly perform drug discovery better than trained researchers. Image recognition modules can and do recognize objects faster and far more accurately than humans, even under conditions of poor lighting. This economic impact is a serious threat, and there seem to be no clear solutions. For example, in the field of healthcare, the book explains that robotic nurses may quite possibly replace human nurses.
“Artificial Intelligence, in combination with mobile robots, will provide a much-needed support ecosystem for the care of the elderly. As the aging population increases worldwide, their care will have to be entrusted to robotic nurses, doctors and other helpers because the cost of human caregivers will become prohibitive.” (Pg. 30, Artificial Intelligence and the Future of Power: 5 Battlegrounds)
No one can deny that AI is taking away, and shall continue to take away, jobs — not only easily automatable jobs but even jobs requiring higher cognitive capabilities.
Are physical threats from AI a possibility? Drone-based warfare or unmanned AI-based combat vehicles could become a reality soon. And when they do, they will seriously challenge the defense kurukshetra.
But the biggest threat looming on the horizon is the takeover of the human mind by machines. The top digital platforms and Big Tech companies have powerful ML algorithms constantly collecting data, learning from it and profiling us. They suggest products we may like. They influence whom we may vote for. They tell us where to go on vacation and what to eat. They observe our reactions and keep learning more about us. We are outsourcing our thinking and research ability to machines, relying on them more and more. Rajiv Malhotra cautions us that “no aspect of human functionality is ultimately beyond the scope of AI-based emotional analysis and manipulation.” (Pg. 163, Artificial Intelligence and the Future of Power: 5 Battlegrounds)
“The AI industry promises to deliver designer-customized experiences, and these services are orchestrated to continue the cycle of dependence. Digital systems are fulfilling desires to such an extent that users become emotionally and psychologically dependent on them, and even addicted to the gratification they supply. People feel relieved that many tedious parts of their lives are on autopilot. However, it results in AI’s encroachment on agency and free will. This surrender of individualism can become a form of digital slavery.” (Pg. 155, Artificial Intelligence and the Future of Power: 5 Battlegrounds)
Machines today know us better than we know ourselves. And this is the biggest existential threat: humans becoming mentally dependent on, and enslaved by, powerful AI algorithms. The AI revolution is upon us, and there are two ways to deal with it. We can either accept it as it is, the good along with the bad, or we can regulate AI to maximize the good and minimize the bad. But we ignore AI at our peril.
Subhodeep Mukhopadhyay is a civilization studies researcher with a background in data science. His writings on culture, philosophy and economics have appeared in various newspapers, online platforms and academic journals. He has authored two monographs, “The Complete Hindu’s Guide to Islam” and “Ashoka the Ungreat”, and is a recipient of the Foundation for Indian Civilization Studies award for 2017.