A new study published in Cornell University’s research archive, arXiv, has found that popular artificial intelligence tools such as ChatGPT hold racist stereotypes about African American Vernacular English, the native variety of English spoken by Black Americans, with its own accent, grammar, and vocabulary features.
The research team prompted various artificial intelligence tools about their perception of someone using African American Vernacular English versus someone using what the researchers describe as standard American English. The results showed that the programs tended to describe African American Vernacular English speakers as less desirable job applicants. When given prompts about criminality, the models were also more likely to say that African American Vernacular English speakers were criminals and deserved the death penalty than speakers of standard American English.
The study’s authors write that previous research on language models found the overt racial stereotypes they express to be much more positive, but their study is unique in that it analyzes a more subtle, covert aspect of racism in present-day America. They believe their new “findings have far-reaching implications for the fair and safe employment of language technology.”