Regulating AI in the UK: Lessons Learnt from the EU AI Act
On 4th November 2024, a group of AI experts, policymakers and parliamentarians gathered for a roundtable in Portcullis House titled: ‘Regulating AI in the UK: Lessons Learnt from the EU AI Act’. Chaired by the Director of Big Brother Watch and one of our AI Faith & Civil Society Commissioners, Silkie Carlo, the group first heard from the event’s sponsor, Florence Eshalomi MP who expressed her interest in the opportunities that AI offers but her concern that some communities, especially those from disadvantaged backgrounds, may struggle to adapt to emerging technologies at the pace that they are being developed.
Silkie then handed over to two guest speakers whose expertise in the EU AI Act gave attendees the chance to hear about the successes of the Act, the challenges that may emerge in future, and how to mitigate them when considering similar legislation in the UK.
First, we heard from Daniel Leufer, Senior Policy Analyst at Access Now, who explained the importance of transparency in the AI space. He stressed that we need to know what is being procured by the public sector. Acknowledging that introducing transparent processes is still a work in progress, but work worth doing, he said: ‘the transparency that the AI Act brings is not perfect, but it does flip the status quo’. Leufer also stressed that the term ‘AI’ is so broad that legislators must be conscious of the introduction of new technologies in such a fast-paced, innovative space, and ensure, through the law, that these do not undermine existing rights.
The group then heard from Francesca Fanucci, Senior Legal Adviser at the European Centre for Not-For-Profit Law who called for a horizontal approach across sectors, noting that some sectors use AI in more problematic ways than others. She said: ‘Some tech is created for a sector and then used for another sector and it doesn’t work very well. We need to focus on the processes: transparency, accountability, meaningful consultation with civil society and an understanding of risk’.
Dawn Butler MP, who has called for a ‘Digital Bill of Rights’ to protect citizens from harmful technology, gave further insight into the implications of AI, particularly in relation to automated decision-making and surveillance tools including facial recognition. She expressed serious concern about the potential for AI to perpetuate discrimination against vulnerable groups, and the need for robust legislation to protect citizens against bias.
As the floor was opened for questions and discussion, Lord Ranger of Northwood, the Vice-Chair of the AI APPG, continued the discussion around bias, explaining that ‘we must ensure that mistakes are not built into the AI that we use. The organisations that use AI must be looked at as well as the AI itself’. Lord Tarassenko, who has worked in the AI space, especially on algorithms, asked that we also recognise where things work well. He explained that policymakers and algorithm writers often work in silos, and encouraged more dialogue between the groups to ensure a robust understanding on both sides. This was echoed by Peter Fortune MP, who highlighted the fast-paced innovation of the AI sector and how crucial it is that parliamentarians are well-educated about its opportunities and harms so they are well-equipped to make decisions about it.
As the roundtable drew to a close, Silkie Carlo posed a question to all attendees: ‘If you had 5 minutes with Peter Kyle MP, the Secretary of State for Science, Innovation and Technology, what would you want to tell him?’ Following such a vibrant discussion, the room was full of ideas. From stressing that innovation is at its best under regulation, to the importance of transparency and the global influence any UK legislation will have, the room was in agreement that there is more to be done. But there was also consensus that for change to happen, the views of and impact on civil society must be the priority.