“Alexa, how’s the weather today?” is part of the morning ritual for many of us. As we move forward in the age of technology, digital assistants like Alexa and Siri have come to seem like insufferable know-it-alls, incapable of the errors humans make. However, that is not quite right. True, artificial intelligence was designed to minimise error, but it was still built by humans, and humans do make errors. So, paradoxically, we have built error-minimising technology with some errors of its own.
Look a little more closely and you can spot these errors easily. For example, have you ever noticed that ‘Ok Google’ responds more reliably to a male voice than to a female one? Try another one. Have you ever heard of the COMPAS program? COMPAS is an AI tool used in US courts to estimate how likely an individual is to commit a crime. However, studies have shown that it is more likely to label an African-American defendant as high-risk than a white American. Isn’t that cruel?
Why does AI exhibit these biases? According to Tina Eliassi-Rad (associate professor of computer science at Northeastern University and a core faculty member in the university’s Network Science Institute), it is because of the data. AI is built from complex algorithms, and an algorithm is nothing but a set of instructions for carrying out a task. AI essentially mimics humans: everything it learns depends on the data its algorithms are trained on. Train on biased data and you create biased algorithms. It doesn’t end there; the algorithm then amplifies the biases already present in our society. Another example of biased AI is Google’s image-recognition software, which at times mistakenly labelled African-American people as gorillas. Prof. Eliassi-Rad says,
These embarrassing incidents could’ve been avoided if the machine learning algorithm was given more balanced training data.
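To make this concrete, here is a minimal sketch in Python (using scikit-learn) of the mechanism described above. Everything in it is a made-up illustration, not the COMPAS system or any real dataset: the two groups, the ‘behaviour’ feature, and the flag rates are all assumptions. A classifier trained on records in which one group was historically over-flagged learns to treat group membership itself as a risk signal; retrained on labels that depend only on behaviour, the gap disappears.

```python
# Hypothetical illustration of bias learned from skewed training data.
# Groups, features, and rates are invented; this is not the COMPAS model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
behaviour = rng.normal(0, 1, n)    # same distribution for both groups

# Biased historical labels: group B was flagged three times as often
# for identical behaviour.
biased_rate = np.where(group == 1, 0.45, 0.15)
biased_label = (rng.random(n) < biased_rate).astype(int)

X = np.column_stack([behaviour, group])
biased_model = LogisticRegression().fit(X, biased_label)

# Two individuals identical in every respect except group membership.
same_person = np.array([[0.0, 0], [0.0, 1]])
print("trained on biased labels:  ",
      biased_model.predict_proba(same_person)[:, 1])

# Balanced labels: flagging depends only on behaviour, never on group.
balanced_rate = 1 / (1 + np.exp(-behaviour))
balanced_label = (rng.random(n) < balanced_rate).astype(int)
balanced_model = LogisticRegression().fit(X, balanced_label)
print("trained on balanced labels:",
      balanced_model.predict_proba(same_person)[:, 1])
```

Trained on the biased labels, the model gives the group-B individual a markedly higher ‘risk’ score for exactly the same input; trained on the balanced labels, the two scores come out essentially equal. The bias was never in the algorithm’s instructions, only in the data it learned from, which is precisely Eliassi-Rad’s point.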
Needless to say, the balanced data we need for AI algorithms must come from a balanced human race. Only if society is ready to move beyond existing discrimination can we successfully build truly error-minimising technology.
Shraddha Patil