
Humans Are Passing Off Their Racism And Sexism To Artificial Intelligence

By Kaustubh Prabhu, Deepak Singh, Raamesh Gowri Raghavan, and Sukant Khurana:

Optimizing logistics, detecting fraud, composing art, conducting research, providing translations, and the list goes on: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.

Tech giants such as Alphabet, Amazon, Facebook, IBM, and Microsoft — as well as individuals like Stephen Hawking and Elon Musk — believe that now is the right time to talk about the nearly boundless landscape of Artificial Intelligence (AI). In many ways, this is just as much a new frontier for ethics and risk assessment as it is an emerging technology.

Machine systems usually go through a “training phase”, where they learn to detect the correct patterns and act according to the input they are provided. Once a system is fully trained, it moves to a “test phase”, where it receives further examples so that its performance can be measured. However, the training phase cannot cover every scenario a system may come across in the real world, so these systems can be fooled in ways that humans would not be. This is sometimes referred to as artificial stupidity.
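
A minimal sketch of this train/test workflow, assuming scikit-learn and purely illustrative synthetic data:

```python
# Illustrative train/test workflow; the data here is synthetic, standing in
# for whatever a real system would be trained on.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Toy dataset: 1000 examples, 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Training phase: the model learns patterns from one slice of the data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test phase: performance is measured on examples held out from training.
print("held-out accuracy:", model.score(X_test, y_test))

# Note: the test set comes from the same distribution as the training set.
# Inputs unlike anything seen in training can still fool the model -- the
# "artificial stupidity" described above.
```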

A prominent example of this is Microsoft’s experiment with the intelligent chat-bot Tay, which spewed racial slurs within a day of learning them. Tay was a chat-bot meant to tweet and sound like a young adult girl, with all of the matching slang, verbiage, and vernacular. The more it interacted with humans on Twitter, the more it would learn and the more human it would sound. In less than 24 hours, online miscreants realized how easily Tay could be influenced and started teaching it offensive and racist language. Only 16 hours into its first day on Twitter, Tay had gone from friendly greetings to spouting offensive and racist comments. Microsoft turned Tay off, and the message was clear: AI platforms are only as good as the data given to them. Fed hate speech, Tay was left to assume that this was simply how humans spoke.

A group of researchers from the University of Bath, UK, set out to determine just how biased an algorithm can be. Their study (published in April of this year in the journal Science) showed that machines can very easily acquire the same conscious and unconscious biases held by humans. All it takes is some biased data.

AI can analyse data more quickly, and more accurately, than humans can. However, it can also inherit human biases and prejudices. A learning algorithm evolves on the data it is provided, and the easiest place to find that data is the internet. The language on the internet, though, is full of biases. A study at Stanford found that an internet-trained AI associated European American names with positive words like “love” and “care”, and African American names with negative words like “failure” and “cancer”.
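
A simplified sketch of how such associations are typically measured: compare how close a name’s vector sits to pleasant versus unpleasant words in an embedding space. The three-dimensional vectors below are invented for illustration; real studies use embeddings trained on web text.

```python
# Toy association test over word embeddings; all vectors are invented.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-d embeddings; real ones are typically 300-d (GloVe, word2vec).
emb = {
    "emily":   np.array([0.9, 0.1, 0.0]),  # European American name
    "jamal":   np.array([0.1, 0.9, 0.0]),  # African American name
    "love":    np.array([0.8, 0.2, 0.1]),  # pleasant word
    "failure": np.array([0.2, 0.8, 0.1]),  # unpleasant word
}

def association(word, pleasant, unpleasant):
    # Positive score: the word sits closer to the pleasant term.
    return cosine(emb[word], emb[pleasant]) - cosine(emb[word], emb[unpleasant])

print("emily:", association("emily", "love", "failure"))  # positive
print("jamal:", association("jamal", "love", "failure"))  # negative
```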

Another important issue is that the person testing the data or creating the algorithm may imprint their own views on the machine. Data in which human nature or behaviour is studied or recorded will always reflect the psychology of the subjects, and their biases.

Luminoso’s Chief Science Officer Rob Speer oversaw the open-source dataset ConceptNet Numberbatch, which is used as a knowledge base for AI systems. When he tested one of Numberbatch’s data sources, he found obvious problems with its word associations. Fed the analogy question “Man is to woman as shopkeeper is to…”, the system filled in “housewife”. It similarly associated women with sewing and cosmetics, and men with bikes and utilities. These associations may be acceptable for certain applications, but they would cause problems in general AI tasks such as evaluating job applications.
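
A sketch of how an embedding-based system can “fill in” such an analogy, using the standard vector-arithmetic trick; the vectors below are invented for illustration and are not Numberbatch’s actual data.

```python
# Analogy completion by vector arithmetic; all vectors are invented.
import numpy as np

emb = {
    "man":        np.array([1.0, 0.0, 0.2]),
    "woman":      np.array([0.0, 1.0, 0.2]),
    "shopkeeper": np.array([1.0, 0.1, 0.9]),
    "housewife":  np.array([0.0, 1.1, 0.9]),
    "merchant":   np.array([0.9, 0.2, 1.0]),
}

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "man is to woman as shopkeeper is to ...": move shopkeeper along the
# man -> woman direction, then find the nearest remaining word.
query = emb["shopkeeper"] - emb["man"] + emb["woman"]
candidates = {w: cosine(query, v) for w, v in emb.items()
              if w not in ("man", "woman", "shopkeeper")}
print(max(candidates, key=candidates.get))  # -> "housewife" with these toy vectors
```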

An AI that cannot recognise such problematic associations would have no problem ranking a woman’s resume lower than an identical man’s resume. Similarly, when Speer tried creating a restaurant review algorithm, it rated Mexican food lower because it had learned to associate “Mexican” with negative words like “illegal”. Here, the perspective of a certain community (whose text helped to create the database) is reflected directly in the output.
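
A toy illustration of the restaurant-review problem, with an invented sentiment lexicon: if the word associations the model inherits score “mexican” negatively, two otherwise identical reviews receive different ratings.

```python
# Invented lexicon standing in for sentiment values a model might absorb
# from a biased training corpus.
inherited_sentiment = {
    "great": 1.0, "food": 0.2, "italian": 0.1,
    "mexican": -0.6,  # bias absorbed from co-occurrence with words like "illegal"
}

def review_score(text):
    # Average the per-word sentiment over the review.
    words = text.lower().split()
    return sum(inherited_sentiment.get(w, 0.0) for w in words) / len(words)

print(review_score("great italian food"))  # higher
print(review_score("great mexican food"))  # lower, for an identical review
```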

The data fed to an algorithm during learning should therefore be checked for bias. For now, such a check requires a human element to remove the biases. Researchers are also trying to create systems that understand morals and common sense, and could hence eliminate the bias themselves.
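
One concrete form such a check can take, at the level of word embeddings, is “hard debiasing” (Bolukbasi et al., 2016): estimate a bias direction and project it out of each word vector. A minimal sketch with invented vectors:

```python
# Hard debiasing sketch; all vectors are invented toy values.
import numpy as np

he  = np.array([1.0, 0.2, 0.1])
she = np.array([0.0, 1.2, 0.1])
# The bias direction, estimated here from a single word pair.
gender_dir = (he - she) / np.linalg.norm(he - she)

def debias(v):
    # Subtract the component of v that lies along the bias direction.
    return v - (v @ gender_dir) * gender_dir

shopkeeper = np.array([1.0, 0.1, 0.9])
print("before:", shopkeeper @ gender_dir)           # nonzero: gender-loaded
print("after: ", debias(shopkeeper) @ gender_dir)   # ~0: neutralised
```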

Alternate Perspective

On this view, no algorithm is biased as such, because its output is, in the end, the result of a statistical analysis of the data provided, and its answers are correct for that data. In 2016, ProPublica wrote an article stating that the COMPAS algorithm is biased, but further analysis by many others suggested that the algorithm gave its answers without consideration of race, and that those answers were correct for the data set examined.

According to research from Stanford, some bias is necessary for a proper output. The paper ‘Algorithmic Decision Making and the Cost of Fairness’ argues that such bias must exist for the output to be correct, and that forcing an algorithm to be fair carries a measurable cost. The article ‘A.I. “Bias” Doesn’t Mean What Journalists Say It Means’ by Chris Stucchio and Lisa Mahapatra redefines bias from an AI point of view: it explains the difference between the real truth and an ideal reality, and hence defends AI.
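
A small numerical illustration, with invented numbers, of the tension these papers study: a risk score can be perfectly calibrated (a given score means the same reoffence probability in every group) and still produce unequal false-positive rates whenever the groups’ base rates differ, which is the pattern at the heart of the COMPAS debate.

```python
# Invented simulation: calibrated scores, unequal false-positive rates.
import random
random.seed(0)

def false_positive_rate(base_rate, n=100_000, threshold=0.5):
    fp = negatives = 0
    for _ in range(n):
        # Toy risk score centred on the group's base rate, clipped to [0, 1].
        score = min(1.0, max(0.0, random.gauss(base_rate, 0.2)))
        # Calibrated by construction: a score s means P(reoffend) = s.
        reoffends = random.random() < score
        if not reoffends:
            negatives += 1
            if score >= threshold:  # flagged "high risk" despite not reoffending
                fp += 1
    return fp / negatives

print("FPR at base rate 0.5:", false_positive_rate(0.5))
print("FPR at base rate 0.2:", false_positive_rate(0.2))
# Same scores, same threshold, yet the higher-base-rate group is wrongly
# flagged far more often: "bias" by one definition, correctness by another.
```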

In conclusion, since bias is an important question in the ethics of AI, it is necessary to define bias precisely and to study the cost of fairness. If the cost of fairness can be reduced, then one day an AI trained on bias-free data may itself be bias-free.

This post was first published here.


About:

Kaustubh Prabhu was a researcher working in Dr Sukant Khurana’s group, focusing on Ethics of Artificial Intelligence. Dr Deepak Singh, a PhD from Michigan, is now a postdoc based at Physical Research Laboratory, Ahmedabad, India and is collaborating with Dr Khurana on Ethics of AI and science popularization.

Raamesh Gowri Raghavan is collaborating with Dr Sukant Khurana on various projects, ranging from popular writing on AI to the influence of technology on art and mental health awareness.

Mr Raamesh Gowri Raghavan is an award-winning poet, a well-known advertising professional, historian, and a researcher exploring the interface between science and art. He is also championing a massive anti-depression and suicide prevention effort with Dr Khurana and Farooq Ali Khan.

You can learn more about Raamesh here and here.

Dr Sukant Khurana runs an academic research lab and several tech companies. He is also a known artist, author, and speaker. You can learn more about Sukant at www.brainnart.com or www.dataisnotjustdata.com and if you wish to work on biomedical research, neuroscience, sustainable development, artificial intelligence or data science projects for public good, you can contact him at skgroup.iiserk@gmail.com or by reaching out to him on LinkedIn.
