
Algorithmic Bias Roundtable in Portcullis House

4.2.2025

On the 22nd of January, the AI Faith and Civil Society Commission hosted an event titled “Building Fairer Systems: Confronting Algorithmic Bias in AI” in Portcullis House. This event, sponsored by Chi Onwurah MP, brought together leading experts to examine how AI systems can perpetuate bias, particularly against marginalised communities, and to explore solutions for promoting transparency, fairness, and accountability in AI deployment.

The event began with a speech from Chi Onwurah MP, Chair of the Science, Innovation and Technology Select Committee, who highlighted the lack of attention given to algorithmic bias. She stressed that technology is often imposed on people without their input, leaving them powerless to shape it, and that those most affected by algorithmic bias are rarely involved in designing AI systems, as the creators of these technologies often do not reflect the diversity of the wider population. She also noted that decision-makers struggle to scrutinise generative AI models because tech companies are reluctant to disclose their algorithms, limiting transparency. She called for greater representation, transparency, and inclusion in AI development going forward.

The event then turned to a structured discussion, beginning with the ways in which AI systems often replicate and reinforce societal biases, disproportionately affecting individuals based on race, gender, and disability. The points raised in the discussion included:

  • The lack of representation in datasets, because many people do not disclose or record sensitive personal data online; 
  • The problem of incorrect data, such as medical misidentifications or historical hiring biases, feeding into AI outputs, as well as inherently discriminatory data, where models trained on data reflecting deeper structural inequalities risk perpetuating discrimination; and 
  • The fundamental issue that AI algorithms tend to identify patterns and disregard outliers, leading to misrepresentative results for individuals who do not conform to dominant trends. 

The discussion then turned to possible solutions for mitigating algorithmic bias, covering changes to datasets, changes to algorithms themselves, and changes to the way AI outputs are analysed. The following measures were suggested:

  • An ‘Ingredients Label’ for AI systems, including clear disclosure of training data sources; 
  • Mandatory auditing and impact assessments for all AI models, with oversight from inclusive assessment bodies; 
  • Changes to AI models so that they are designed to cope with statistical outliers; 
  • Supervised learning to ensure close human oversight in AI training; 
  • Sector-specific or smaller, more specialised AI models designed for different purposes; 
  • Equipping those who use the models to interpret their outputs adequately; and 
  • Public education and upskilling to ensure equitable participation, particularly targeting underrepresented communities. 

An overarching theme of the event was the call for greater collaboration between policymakers, developers, and underrepresented groups to embed fairness into AI from conception to deployment. The discussion underscored the urgent need for AI governance structures that prioritise fairness, transparency, and accountability. While AI presents immense opportunities, it must be developed and implemented responsibly to prevent further entrenchment of societal biases. Thank you to Chi Onwurah MP, to Kate Devlin for chairing, and to all attendees for contributing to such an insightful event. 


The Commission is proud to announce three new members of our Associates Programme, which aims to create a collaborative community of individuals and organisations interested in the intersection of AI, faith, and civil society. They will have the opportunity to participate in Commission events, contribute to discussions, and showcase their AI-related work on the Commission's platform.

Dr Chinmay Pandya is the Editor of Dev Sanskriti, an Interdisciplinary International Journal that addresses a broad range of Indian intellectual interests and religious pedagogies. He is responsible for guiding the ethos, academic rigour, and policy implementation at DSVV. Dr Pandya is also Chairperson of the International Festival of Yoga, Culture and Spirituality and has convened more than two hundred national and international colloquia at DSVV. He is the Co-founder of the First Centre for Baltic Culture and Studies of Asia, Founder of the South Asian Institute for Peace & Reconciliation, and a Member of the ICCR Governing Council.


Dr Nathan Mladin is a Senior Researcher at the think tank Theos in London. His research, speaking and writing focus on technology ethics and theology of culture. He holds a PhD in Systematic Theology from Queen’s University Belfast and is the author of several publications, including Data and Dignity: Why Privacy Matters in the Digital Age (Theos, 2023) and AI and the Afterlife: From Digital Mourning to Mind Uploading (Theos, 2024). He is also author of ‘The Question of Surveillance Capitalism’ (with Stephen N Williams), a chapter in The Robot Will See You Now: Artificial Intelligence and the Christian Faith (SPCK, 2021).


Prof Dr Beth Singler is Assistant Professor in Digital Religion(s) and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research focuses on the human, and considers the religious, cultural, social, and ethical implications of developments in AI and robotics. Her research has been recognised with awards, including the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of award-winning short documentaries on AI, writing and presenting a BBC Radio 4 documentary on the cultural impact of The Terminator forty years on, popular publications, science festival talks, press interviews, and international media appearances. Beth has spoken about her research at Greenbelt, at the Hay Festival as one of the Hay 30 to watch, as well as at New Scientist Live, Ars Electronica, the Edinburgh Science Festival, and the Cheltenham Science Festival, and has appeared several times on BBC Click and BBC Click Live, and on BBC Radio 3 for the Year of Blade Runner. She is co-editor of the Cambridge Companion to Religion and AI (2024) and author of Religion and AI: An Introduction (2024). Her publications, interviews, and talks are all available at bvlsingler.com.
