Reducing bias in Artificial Intelligence systems

Global research university SMU has launched ISaBEL, a new lab focused on using academic research to reduce bias in artificial intelligence systems.

Quantifying and minimising bias in artificial intelligence (AI) systems is the goal of a new lab established within global research university SMU’s AT&T Centre for Virtualisation. Pangiam, a global artificial intelligence company, is the first industry partner for the Intelligent Systems and Bias Examination Lab (ISaBEL).

ISaBEL’s mission is to understand how AI systems, such as facial recognition algorithms, perform on diverse populations of users. The lab will examine how existing bias can be mitigated in these systems using the latest research, standards and other peer-reviewed scientific studies.

Algorithms provide instructions for computers to follow when performing certain tasks, and bias can be introduced through factors such as incomplete data or reliance on flawed information. As a result, the automated decisions driven by algorithms that support everything from airport security to judicial sentencing guidelines can inadvertently create disparate impact across certain groups.
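By way of illustration only (this is not ISaBEL’s published methodology), one widely used check for disparate impact is the “four-fifths rule”, which compares each group’s rate of favourable automated decisions against the best-served group. The sketch below uses hypothetical group labels and counts.

```python
# Illustrative sketch of the "four-fifths rule" for disparate impact.
# Group labels and decision counts are hypothetical.

def selection_rate(favourable: int, total: int) -> float:
    """Fraction of a group that received the favourable outcome."""
    return favourable / total

# Hypothetical counts of favourable automated decisions per demographic group.
outcomes = {
    "group_a": {"favourable": 480, "total": 600},
    "group_b": {"favourable": 310, "total": 500},
}

rates = {g: selection_rate(v["favourable"], v["total"]) for g, v in outcomes.items()}
reference = max(rates.values())  # best-served group's selection rate

for group, rate in rates.items():
    ratio = rate / reference
    flag = "potential disparate impact" if ratio < 0.8 else "within the four-fifths guideline"
    print(f"{group}: selection rate {rate:.2f}, ratio vs best group {ratio:.2f} -> {flag}")
```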

ISaBEL will design and execute experiments using diverse datasets to quantify AI system performance across demographic groups.
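A minimal sketch of what such an experiment can look like in practice is shown below, assuming a simple labelled evaluation set with a demographic attribute; the records, group labels and accuracy metric are hypothetical stand-ins rather than ISaBEL’s actual protocol.

```python
# Sketch: scoring one AI system's predictions against ground truth,
# broken down by demographic group. All data below is hypothetical.

from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, predicted_label, true_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += int(predicted == actual)

accuracy = {g: correct[g] / total[g] for g in total}
for group, acc in sorted(accuracy.items()):
    print(f"{group}: accuracy {acc:.2f} over {total[group]} samples")

# The gap between the best- and worst-performing groups is one simple
# summary of demographic differential performance.
print(f"max accuracy gap: {max(accuracy.values()) - min(accuracy.values()):.2f}")
```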

As the lab grows, ISaBEL will seek additional industry partners to submit their algorithms for certification. “SMU’s AT&T Centre for Virtualisation is the perfect place to work on these issues, with its focus on cross-disciplinary research, education and training, and community outreach,” said Centre Director Suku Nair.

Both artificial intelligence and computer vision, which enables computers to pull information from digital images and videos, are evolving quickly and becoming increasingly accessible and widely adopted.

John Howard, an AT&T Centre research fellow and biometrics expert, adds: “How to study and mitigate bias in AI systems is a fast-moving area, with pockets of researchers all over the world making important contributions.

“Labs like ISaBEL will help ensure these breakthroughs make their way into the products where they can do the most good and also educate the next generation of computer scientists about these important issues.”

AI industry leaders know that end users must clearly understand bias measurement and the progress of bias mitigation to build trust among the general public and drive full market adoption.

Pangiam Chief AI Officer and SMU alumnus Shaun Moore says: “At Pangiam, we are fundamentally committed to driving the industry forward with impactful efforts such as this.

“Bias mitigation has been a paramount focus for our team since 2018 and we set out to demonstrate publicly our effort toward parity of performance across countries and ethnicities. SMU is the perfect institution for this research.”

ISaBEL is currently recruiting graduate and undergraduate students to participate in the lab’s AI research.  
