Southern Methodist University in Dallas has established the Intelligent Systems and Bias Examination Lab (ISaBEL). The lab's mission is to understand how artificial intelligence systems, such as facial recognition algorithms, perform on diverse populations of users. The lab will also examine how existing bias in these systems can be mitigated, drawing on the latest research, standards, and other peer-reviewed scientific studies.
Algorithms provide instructions for computers to follow in performing certain tasks, and bias can be introduced through factors such as incomplete data or reliance on flawed information. As a result, automated decisions driven by algorithms, which support everything from airport security to judicial sentencing guidelines, can inadvertently create disparate impact across certain groups. ISaBEL will design and execute experiments using a variety of diverse datasets to quantify AI system performance across demographic groups.
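To make the idea of quantifying performance across demographic groups concrete, the sketch below (an illustration, not ISaBEL's actual methodology) computes two standard biometric error metrics, false match rate (FMR) and false non-match rate (FNMR), separately for each group in a labeled set of match decisions. The record format and group labels are hypothetical.

```python
# Illustrative sketch: disaggregate face-matcher error rates by group.
# Each record is (group_label, is_genuine_pair, predicted_match), where
# is_genuine_pair says whether the two images truly show the same person.
from collections import defaultdict

def disaggregated_error_rates(records):
    """Return per-group FMR and FNMR from (group, is_genuine, predicted) records."""
    counts = defaultdict(lambda: {"impostor": 0, "false_match": 0,
                                  "genuine": 0, "false_non_match": 0})
    for group, is_genuine, predicted in records:
        c = counts[group]
        if is_genuine:
            # Genuine pair: a "no match" prediction is a false non-match.
            c["genuine"] += 1
            if not predicted:
                c["false_non_match"] += 1
        else:
            # Impostor pair: a "match" prediction is a false match.
            c["impostor"] += 1
            if predicted:
                c["false_match"] += 1
    rates = {}
    for group, c in counts.items():
        fmr = c["false_match"] / c["impostor"] if c["impostor"] else 0.0
        fnmr = c["false_non_match"] / c["genuine"] if c["genuine"] else 0.0
        rates[group] = {"FMR": fmr, "FNMR": fnmr}
    return rates

if __name__ == "__main__":
    # Hypothetical toy data with two demographic groups, "A" and "B".
    toy = [
        ("A", True, True), ("A", True, False), ("A", False, False),
        ("B", True, True), ("B", False, True), ("B", False, False),
    ]
    for group, r in sorted(disaggregated_error_rates(toy).items()):
        print(group, r)
```

Comparing these per-group rates side by side is one simple way an experiment can surface disparate impact: a system with a low overall error rate may still show markedly higher FMR or FNMR for one group than another.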
"How to study and mitigate bias in AI systems is a fast-moving area, with pockets of researchers all over the world making important contributions," said John Howard, a research fellow and biometrics expert at the university. "Labs like ISaBEL will help ensure these breakthroughs make their way into the products where they can do the most good, and also educate the next generation of computer scientists about these important issues."