On 22 January, the AI Faith and Civil Society Commission will host an event titled “Building Fairer Systems: Confronting Algorithmic Bias in AI” at Portcullis House. The event, chaired by Prof Kate Devlin, will bring together leading experts to explore how AI systems can perpetuate biases—whether based on race, disability, socio-economic status, or gender—and what can be done to mitigate these harms. It will also examine how intersectionality informs our understanding of algorithmic bias and its impact on marginalised communities.
The discussion will address questions including:
- In what ways can the design of algorithms perpetuate systemic discrimination along racial, gender, socio-economic, and disability lines?
- In what ways have marginalised and underrepresented groups been disproportionately affected by biased AI systems?
- Which technical solutions, such as bias auditing or fairness-aware algorithms, hold the most promise for mitigating bias, and what are their limitations? (A brief illustrative sketch of what a bias audit can involve follows this list.)
- What might a more comprehensive regulatory framework for addressing bias in AI look like? How can we ensure greater collaboration between AI developers, policymakers, and advocacy groups to build fairer AI systems?
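For readers unfamiliar with what “bias auditing” means in practice, the following is a minimal, hypothetical sketch, not the Commission's methodology or any panellist's tool. It computes one widely used group-fairness metric, the demographic parity difference: the gap between the highest and lowest rates at which demographic groups receive a favourable model decision. All group names and decision data below are fabricated for illustration.

```python
# A minimal, hypothetical sketch of a group-fairness audit.
# The metric shown (demographic parity difference) is one of several
# used in practice; all data below is fabricated for illustration.

from collections import defaultdict

def selection_rates(records):
    """Compute the rate of favourable model decisions per demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(rates):
    """Gap between the highest and lowest group selection rates.
    Zero means every group receives favourable decisions at the same rate."""
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical (group, model_decision) pairs: 1 = favourable outcome.
    decisions = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]
    rates = selection_rates(decisions)
    gap = demographic_parity_difference(rates)
    print(f"Selection rates: {rates}")
    print(f"Demographic parity difference: {gap:.2f}")
```

Even this simple metric has well-known limitations, which speak directly to the question above: it ignores differences in underlying base rates between groups, and satisfying it can conflict with other fairness criteria such as equalised error rates.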