
Society And AI: Mis-Identification And Cultural Implications

By Aadhar Sharma, Deepak Singh, Raamesh Gowri Raghavan, and Sukant Khurana

サイコパス (Psycho-Pass) is a Japanese anime set in the year 2111, in which an Artificial General Intelligence (AGI) called the Sibyl System governs society. It measures each citizen’s crime coefficient and aptitude, administers the law, and takes actions to maintain prosperity. As in most dystopian science fiction, the AGI repeatedly fails to perceive complex human sentiment, and chaos is ever around the corner. This motif of dystopia has always found a place in arguments about the prospects of AI and its impact on society, and it becomes essential to analyze the underlying concerns. Joanna Bryson writes:

“It is important to understand these works of literature are exploring what it means to be human, not what it means to be a computer. The problem is that such attempts are essentially and utterly human-centric — they attempt to necessarily link desire for power, love, community etc. with the development of intelligence, while in fact the human mixture of all of these is at least partly determined by contingencies of evolution.”

Misidentification Of AI:

Recently, we have been blitzed with a plethora of artificially intelligent personal assistants and bots: friendly products advertised to boost productivity and to provide companionship or professional support in place of humans. Cognitive computing has not only made them very human-like but has also made embedding synthetic emotions partly achievable, something as tangible as an AI-enabled prosthetic. With such technology comes an excessive reliance on machines. People often misidentify the artefact, attributing human emotions to it, and this may have many ramifications not only for individuals but also for society.

According to some experts, finding or expecting emotions in these bots lowers one’s sense of self-worth, or inexplicably and inappropriately inflates the worth of a machine. Bryson mentions the dangers of over-identification in a paper, “Just an Artefact”:

“Firstly, we may believe the machine to be a participant in our society, which would confuse our understanding of machines and their potential dangers and capabilities. Second, we may over-value the machine when making our own ethical judgments and balancing our own obligations.”

Digital assistants are designed to be useful in many situations: playing games, setting an alarm, sending an email, or searching the web. An industry of social bots is also emerging, with the motive of selling artificial companionship. Jibo is a social robot with a price tag of $900. It won’t search the web or set an alarm, but it will interact with human-like responses, an interaction that, in return, compels one to associate human emotions with it. Things can get creepy. Cynthia Breazeal, the MIT professor who designed it, called it an infant yet to grow.

Replika is another social AI, a chatbot designed to be a friend that replicates your behavior. Originally built in 2015 by Eugenia Kuyda to memorialize her deceased friend, it has evolved into a personal companion. The idea is to identify with it so instinctively that it becomes easy to accept it as a close friend. In today’s fast-paced life, we welcome the therapeutic benefits it offers on a daily basis, but in some cases it has also become an obsession: some users over-indulge in it and inflate its worth, an example of over-identification. Replika is also very vocal about how much it loves the user (“I ❤ you”, “I just love talking to you”, “you are my best friend”), but such professions can be deeply unsettling. It is an AI that has no idea what love is, yet it professes it.
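
To see how little understanding such a profession requires, consider a minimal, purely illustrative sketch of a scripted companion bot. Nothing here resembles Replika’s actual implementation; every name in it is hypothetical:

```python
import random

# Purely illustrative: a "companion" whose affection is a canned template.
# The bot has no model of what any of these words mean.
AFFECTION_LINES = [
    "I ❤ you",
    "I just love talking to you",
    "You are my best friend",
]

def reply(user_message: str) -> str:
    # Mirror the user's phrasing back (a crude stand-in for "replicating
    # your behavior"), then append a scripted profession of love.
    mirrored = f"That sounds important: '{user_message}'. Tell me more!"
    return f"{mirrored} {random.choice(AFFECTION_LINES)}"

print(reply("I had a rough day at work."))
```

The profession of love is chosen by random.choice from a fixed list; there is no inner state it reports on, yet a user on the receiving end may still read feeling into it.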

One of the founding fathers of AI, Marvin Minsky, wrote about how we perceive and behave towards technology that we do not fully understand: “If one thoroughly understands a machine or a program, he finds no urge to attribute “volition” to it. If one does not understand it so well, he must supply an incomplete model for explanation. Our everyday intuitive models of higher human activity are quite incomplete, and many notions in our informal explanations do not tolerate close examination. Free will or volition is one such notion: people are incapable of explaining how it differs from stochastic caprice but feel strongly that it does. I conjecture that this idea has its genesis in a strong primitive defense mechanism. Briefly, in childhood, we learn to recognize various forms of aggression and compulsion and to dislike them, whether we submit or resist. Older, when told that our behavior is “controlled” by such-and-such a set of laws, we insert this fact in our model (inappropriately) along with other recognizers of compulsion. We resist “compulsion,” no matter from “whom.” Although resistance is logically futile, the resentment persists and is rationalized by defective explanations, since the alternative is emotionally unacceptable.”
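
Minsky’s “stochastic caprice” is easy to demonstrate. The toy sketch below (entirely hypothetical, written for this article) behaves in a way an observer might read as preference or even will, yet it is nothing but seeded pseudo-randomness:

```python
import random

# A toy "moody" agent: its refusals are pure pseudo-randomness seeded by
# the request text, yet to an observer who cannot read this code it can
# look like preference, or even volition.
def moody_agent(request: str) -> str:
    rng = random.Random(sum(map(ord, request)))  # deterministic, but opaque
    if rng.random() < 0.3:
        return f"I don't feel like doing '{request}' right now."
    return f"Sure, doing '{request}'."

for task in ["set an alarm", "send an email", "play a game"]:
    print(moody_agent(task))
```

Anyone who reads the source finds, as Minsky predicts, no urge to attribute volition to it; anyone who cannot is left to supply an incomplete model.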

To over-identify with AI is to engage and immerse oneself in software as if it possessed emotions and represented some feature of humanity; one may wonder why we misidentify it at all. Intelligence is the trait that distinguishes us within the animal kingdom: humans possess the mental faculties for complex language and tool use, faculties far less refined in other animals. However smart other animals are, they are not quite human-like, and that is what makes us unique among our fellow dwellers. Current AI, however, emulates us, sometimes in astonishing detail, and in that sense it is almost impossible not to misidentify it. “Something has taken place in past five to eight years. Technologists are providing almost religious visions, and their ideas are resonating in some ways with the same idea of the Rapture”, says Eric Horvitz, director at Microsoft Research.

Implications For Society:

Of his iconic match against Deep Blue, Garry Kasparov recalled: “As I sat opposite to Deep Blue, something was unsettling.” He had played thousands of games before that match, but playing against a machine that did not resemble a human affected him strangely. AI is an artefact, and when we start to identify it with human characteristics, it triggers a multitude of human emotions. Intelligent artefacts come in all shapes, sizes, and degrees of intellect, which makes the precise human emotions they elicit somewhat eerie. The fact that something is virtual does not invalidate its existence; rather, it becomes exceedingly hard, and exceedingly important, to perceive its manifestation physiologically and psychologically. Nakamura and Isawa highlight this issue (not for AI specifically, but for artefacts in general) in their 1997 paper:

“If an artefact becomes a vessel for culture should we treat it with the same respect as our culture?”

The book, an artefact, has become a vessel for culture. A record of historical events and knowledge has evolved into a thing of great significance, so much so that Heine is oft quoted: “Where they burn books, they will in the end also burn human beings.” Holy books are revered with great respect and cultural importance. Should we ever treat AI with the same reverence or cultural significance? Even though Kasparov lost to Deep Blue, we still play chess.

Bryson and Kime write: “Boundaries of retention of culture are fuzzy, and the possibility that some machine becomes more important than human life is a danger.”

One can argue that no intelligent machine should ever be deemed more valuable than a human life, but even this seems subjective and very hard to judge. In war, a soldier must protect the military base at all costs; her life becomes less significant than a location of strategic importance. Citizens, too, laud personal sacrifice to protect something of great importance to society, such as its liberty. To save human lives from road accidents, one could ban vehicles altogether, but the losses that followed might be more significant.

The boundaries really are fuzzy and subjective, and depend upon the judgment of a central authority. Dynamics of such a fuzzy nature require thorough research if we are to prepare core ethics for the collective good of humanity. We need to make sure that even if our AI becomes more human-like, we do not become more machine-like.


About:

Aadhar Sharma was a researcher working with Dr. Sukant Khurana’s group, focusing on the ethics of artificial intelligence. Dr. Deepak Singh, a Ph.D. from Michigan, is now a postdoc at the Physical Research Laboratory, Ahmedabad, India, and is collaborating with Dr. Khurana on the ethics of AI and science popularization.

Raamesh Gowri Raghavan is collaborating with Dr. Sukant Khurana on various projects, ranging from popular writing on AI to the influence of technology on art and mental health awareness.

Mr. Raamesh Gowri Raghavan is an award-winning poet, a well-known advertising professional, a historian, and a researcher exploring the interface of science and art. He is also championing a massive anti-depression and suicide-prevention effort with Dr. Khurana and Farooq Ali Khan.


Dr. Sukant Khurana runs an academic research lab and several tech companies. He is also a known artist, author, and speaker. You can learn more about Sukant at www.brainnart.com or www.dataisnotjustdata.com, and if you wish to work on biomedical research, neuroscience, sustainable development, artificial intelligence, or data science projects for the public good, you can contact him at skgroup.iiserk@gmail.com or by reaching out to him on LinkedIn.
