AI ethics: a dharmic perspective #Dharma_Is_Not_Optional

Author: Divya Nagaraj.

Author Note: Rajiv Malhotra’s book AI and the Future of Power discusses many thought-provoking issues posed by AI under the overarching guiding framework of Dharma. Ethics in AI is an extremely important area of research, study, and policy, one that will shape not only the deployment of AI in myriad ways but also the future trajectory of cultures and the modern narrative of identities. Rajiv ji’s is probably the sole prominent Dharmic voice in this domain of AI and ethics. The definitional framework of Ethics, with its supposedly “fair” representation of the “disenfranchised minority” view, is skewed, already gamed, and by definition in opposition to Dharmic cosmological (jagat) views.

None of our Gurus and protectors of Dharma seem even aware of these issues, let alone addressing and solving them. We as Dharmics are walking blindfolded into an unknown, AI-driven future that has been designed on principles averse to Dharma. I hope this book serves as a wake-up call to all Dharmics, especially my generation.

As AI gains a steady foothold in all aspects of our lives and is set to conquer the world, ethical concerns are also rising. Some of the prominent concerns are algorithmic bias, privacy, safety, and disinformation. A growing number of organizations, concentrated mainly in Europe and North America, deal exclusively with AI and ethics. Some of these organizations are affiliates of tech giants such as Google, Facebook, and Microsoft, who are pioneering cutting-edge AI technologies. Several conferences are dedicated to discussing AI ethics. ‘AI Ethics’ is indeed the latest buzzword in tech circles.

To a layman with little or no acquaintance with AI, it appears to be just another technology, here to solve all the problems of the world. Technology, after all, is supposed to make our lives easier and the world a better place; isn’t that the “contract”?

Meanwhile, the technocrats, companies, and people well acquainted with AI and aware of its pitfalls rely on these numerous organizations to keep an ethical check on the technology, assuming that the objective of developing a “clean and just” technology is thereby being achieved.

‘The Future Society’, a non-profit think tank affiliated with Microsoft and other major corporations, is on a mission to create ‘fair and just’ AI for all. A quote from their website reveals that they have identified the pitfalls of the technology and want to help solve them:

“Artificial Intelligence (AI) can help us fight diseases, save lives and protect the environment, but AI can also create more inequalities, injustice and unemployment for many people in the world. AI has its opportunities, but it also has its risks.

Different people mean different things when they talk about AI. Some see a software tool, others see a big transformation in the economy. Some see a scientific field, others see a danger for humanity. These different views and the ways we talk about AI are all making it hard to decide what to do to benefit from AI.

We at The Future Society believe intelligent collective discussions and actions on this topic are needed and our mission is to advance the responsible adoption of AI for the benefit of humanity.

To do this, we build understanding, we build bridges and we build innovative solutions.” 

Therefore, tech giants like Microsoft are not only at the forefront of developing cutting-edge AI solutions but are also partnered or affiliated with organizations working on the ethical concerns of AI. It is also worth investigating to what extent these organizations are neutral or biased toward certain corporate, national, or ideological stances.

Are both camps, the optimistic layman and the concerned technocrat, right in their assumptions?

This is where Rajiv Malhotra’s latest book, “Artificial Intelligence and the Future of Power”, assumes great significance as it tries to answer some of these seemingly intractable questions. Rajiv Malhotra’s book presents a balanced view of what is at stake with the advent of AI and redefines the whole notion of ‘AI Ethics’.

Here we shall focus on ‘AI ethics’ and not delve into the disruptions brought about by AI. “Artificial Intelligence and the Future of Power” is a definite must-read to understand the larger canvas of the AI battlegrounds and the impending crash of civilization.

AI algorithms can never be neutral; they inherit the biases of their developers and of the big data used to train them.

“If a platform company champions specific values (which are invariably based on its civilizational ethos), whatever those values might be, it cannot be considered neutral. Moreover, despite what the digital giants claim about championing diversity, their core strategy depends on getting people to think and act the way they and their advertisers want. The business models are based on psychologically influencing people’s thinking rather than encouraging independent thinking.”

Pg. 306, Artificial Intelligence and the Future of Power: 5 Battlegrounds, Rajiv Malhotra, 2021
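The point that a model inherits whatever bias its training data holds can be illustrated with a minimal sketch. The data below is purely hypothetical; the “model” is the simplest possible one, a majority-label predictor, yet it already reproduces the skew in its historical data:

```python
from collections import Counter

# Toy, hypothetical training data: historical screening decisions
# skewed by group membership (illustrative only).
training_data = [
    ("group_a", "hire"), ("group_a", "hire"), ("group_a", "hire"),
    ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "reject"),
    ("group_b", "hire"),
]

def majority_label(group):
    """A naive 'model' that predicts the most common historical label
    for a group -- it faithfully reproduces the bias in the data."""
    labels = [label for g, label in training_data if g == group]
    return Counter(labels).most_common(1)[0][0]

print(majority_label("group_a"))  # hire
print(majority_label("group_b"))  # reject
```

Real-world models are far more sophisticated, but the underlying mechanism is the same: the output distribution is learned from, and therefore shaped by, the input data.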

Given that today’s game-changing technologies are developed and managed by the West, Rajiv Malhotra points out that AI algorithms are built on the premise of Western Universalism (a term from his earlier book, ‘Being Different’).

West is both the driver of history and its goal, providing the template into which all other civilizations and cultures must fit. This view is so deeply entrenched in the consciousness of Europeans and Americans as to form a core part of their identities. Yet, it is virtually invisible from within the Western perspective itself.

Pg. 308, Being Different, Rajiv Malhotra

He calls for the decolonization of AI, which unfortunately is not on the agenda of any of the numerous organizations dealing with AI ethics. These organizations evidently identify and solve problems through their own civilizational lens. Every civilization has its own set of problems and solutions unique to its culture and philosophy. Generalizing problems and solutions will have disastrous consequences, as in the case of the caste system that was imposed on India by its colonizers. Indians are yet to recover from this oppressive system imported from the West.

Understanding the “Caste system”

While dealing with the ethical concerns of AI, we ought first to discuss who will define what is ethical and what is not. For instance, what is considered “ethical” in Saudi Arabia may not be ethical in the US.

In a lecture at Maharishi University a few years ago, Rajiv Malhotra explained that while exploring what could be automated or replaced with AI, one has to make sure it passes the test of dharma: the integrity and purpose of the original should not be lost in the process of substitution. He gives the example of a Vedic Yajna that cannot be performed via virtual reality, for the physicality of the Yajna is of utmost importance and sacred.

The Evolution of Yajna

It is also important to note that the development of AI applications today is driven mostly by the needs of capitalism and is antithetical to the message of Vedanta.

The chapter, ‘The Battle for Self’ from the book “Artificial Intelligence and the Future of Power” explains this antithesis:

“The notion of self will erode as a result of greater external dependency on algorithms that disrupt the natural sense of human individuality and free will. Humans will move away from the direction espoused by Vedanta and other spiritual traditions, taking materialism to new levels.”

Pg. 203, Artificial Intelligence and the Future of Power: 5 Battlegrounds, Rajiv Malhotra, 2021

This would be a major ethical concern from the Indic perspective, yet it is not addressed or even considered an “ethical concern” by the organizations dealing with AI and ethics.

Ethics and AI are complex topics that need to be discussed and dealt with by stakeholders or representatives from different geographies and cultures. AI ethics cannot be decided and solved by the elite few of one culture or country.

Hopefully, India gears up in the AI race and not only becomes a stakeholder in the technology but also gets a seat at the table to debate and solve the problems of AI in the interest of dharma.

Divya Nagaraj from Bangalore is currently a Research Assistant at Infinity Foundation. She is a Computer Science Engineer from PES University, Bangalore, with a Post-Graduate degree in management from SP Jain school.

1 thought on “AI ethics: a dharmic perspective #Dharma_Is_Not_Optional”

  1. Sainath Shanbhag

    Should the fundamentals of ethics be discussed and made transparent?
    Satya/Truth: Should satya be part of ethics? If so, a source of wrong information should be punished, and if it comes from AI, the owner of the AI engine should be answerable and liable to punishment.
    Asteya/Non-stealing: Should the content on which the AI created the media report, and the data sources on which the AI engine is built, be published? If not, should there be any punishment for that?
    Brahmacharya/Non-pornographic content: Can an AI that relies on pornographic content be considered ethical?
    Ahimsa: If violence results from the content and it is found to be deliberate, should there be punishment?
