Building Fairer Systems - Confronting Algorithmic Bias in AI
On the 22nd of January, the AI Faith and Civil Society Commission hosted an event titled “Building Fairer Systems: Confronting Algorithmic Bias in AI” in Portcullis House. The event, sponsored by Chi Onwurah MP, brought together leading experts to explore how AI systems can perpetuate bias, particularly against marginalised communities, and to consider solutions for promoting transparency, fairness, and accountability in AI deployment.
The event began with a speech from Chi Onwurah MP, Chair of the Science, Innovation and Technology Select Committee, who highlighted the lack of attention given to algorithmic bias. She stressed that technology is often imposed on people without their input, leaving them powerless to shape it, and that those most affected by algorithmic bias are rarely involved in designing AI systems, since the creators of these technologies often do not reflect the diversity of the wider population. She also noted that decision-makers struggle to scrutinise generative AI models because tech companies are reluctant to disclose their algorithms, limiting transparency. She called for greater representation, transparency, and inclusion in AI development going forward.
The event then turned to a structured discussion, beginning with the ways in which AI systems often replicate and reinforce societal biases, disproportionately affecting individuals on the basis of race, gender, and disability. Points raised in the discussion included:
- The lack of representation in datasets, since many people do not acknowledge or record sensitive or personal data online;
- The problem of incorrect data, such as medical misidentifications or historical hiring biases, which then feeds into AI outputs, as well as inherently discriminatory data, where AI models trained on the record of deeper structural inequalities risk perpetuating that discrimination; and
- The fundamental issue that AI algorithms tend to identify patterns and disregard outliers, producing misrepresentative results for individuals who do not conform to dominant trends.
The discussion then turned to possible solutions for mitigating algorithmic bias, spanning changes to datasets, to the algorithms themselves, and to how AI outputs are analysed. The following measures were suggested:
- An ‘ingredients label’ for AI systems, with clear disclosure of training data sources;
- Mandatory auditing and impact assessments for all AI models, with oversight from inclusive assessment bodies;
- Changes to AI models so that they are designed to cope with statistical outliers;
- Supervised learning to ensure close human oversight in AI training;
- Sector-specific AI models, or smaller, more specialised models designed for particular purposes;
- Equipping those who use the models to interpret their outputs adequately; and
- Public education and upskilling to ensure equitable participation, particularly targeting underrepresented communities.
An overarching theme of the event was the call for greater collaboration between policymakers, developers, and underrepresented groups to embed fairness into AI from conception to deployment. The discussion underscored the urgent need for AI governance structures that prioritise fairness, transparency, and accountability. While AI presents immense opportunities, it must be developed and implemented responsibly to prevent further entrenchment of societal biases. Thank you to Chi Onwurah MP, to Kate Devlin for chairing, and to all attendees for contributing to such an insightful event.