Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.
A massive study of millions of words online looked at how closely different terms were associated with each other in text, in the same way that automatic translators use "machine learning" to establish what language means.
Some of the results were stunning. The researchers found male names were more closely associated with career-related terms than female ones, which were more closely associated with words related to the family. This link was stronger than the non-controversial findings that musical instruments and flowers were pleasant and weapons and insects were unpleasant.
Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones. There were strong associations, captured in word "embeddings", between European-American names and pleasant terms, and African-American names and unpleasant terms.
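The kind of association being measured can be sketched in a few lines of code. In a word embedding, each word is a list of numbers, and words that appear in similar contexts end up pointing in similar directions; "closeness" is usually scored with cosine similarity. The vectors below are invented purely for illustration (real embeddings have hundreds of dimensions learned from billions of words), and the `association` score is a simplified, hypothetical version of the difference-of-similarities idea, not the researchers' actual test.

```python
import math

# Toy word vectors, invented for illustration only -- real embeddings such as
# those studied in the paper are learned automatically from large text corpora.
vectors = {
    "flower":     [0.9, 0.1, 0.2],
    "insect":     [0.1, 0.9, 0.3],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.2],
}

def cosine(u, v):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attr_a, attr_b):
    """Relative association of a word with two attribute words:
    positive means closer to attr_a, negative means closer to attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("flower", "pleasant", "unpleasant"))  # positive: flower leans pleasant
print(association("insect", "pleasant", "unpleasant"))  # negative: insect leans unpleasant
```

With vectors learned from real text rather than these toy values, the same calculation is what surfaces the career/family and name biases described above: the bias is read off from the geometry of the vectors, not programmed in.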
The effects of such biases on Artificial Intelligence (AI) can be profound. For example, Google Translate, which learns what words mean from the way people use them, translates the Turkish sentence "O bir doktor" into "he is a doctor" in English, even though Turkish pronouns are not gender-specific, so the sentence could equally mean "she is a doctor". Yet the same pronoun paired with "hemşire", meaning nurse, is translated as "she is a nurse".
In a paper about the new study in the journal Science, the researchers wrote: "Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable."