AI Experts Debate Whether Machines Could Ever Match Human Intelligence at RSA Event
On 31 October 2024, a lively and thought-provoking panel on “Might AI be as Intelligent as Humans?” took place at RSA House in London, exploring AI’s evolving role in society, its ethical implications, and whether machines can ever truly replicate human intelligence.
The event brought together a panel of AI experts, including Kate Devlin, Jeremy Peckham, and Adam Naisbitt.
Key Discussion Points:
Trust in AI vs. AI Companies: Kate Devlin highlighted that while AI itself may be reliable in terms of processing and intelligence, the companies developing AI are driven by profit, which can compromise data privacy and user safety. Users are often unaware that the data they share with chatbots flows back to these companies, raising ethical concerns over sensitive information.
Human vs. Artificial Intelligence: Jeremy Peckham noted an erosion of ethical standards: companies now frequently use copyrighted data without permission, a departure from earlier norms that respected data ownership. He argued this "forgiveness over permission" mindset raises more ethical questions than ever before.
AI in Marketing and Manipulation: Adam Naisbitt discussed AI's potential to level the playing field in marketing for economically disadvantaged regions, as AI can offer strategic advice previously available only through costly consultants. However, he raised concerns over where marketing ends and manipulation begins, noting that AI has enabled a rise in ineffective or even deceptive marketing tactics.
Scamming and Radicalisation Risks: With AI’s capability to form relationships, Adam warned that AI can easily be used to groom or radicalise individuals, as scammers and extremists could leverage AI’s personalisation capabilities to build trust and manipulate vulnerable users.
Big Tech vs. Government Regulations: Kate argued that regulation of AI is inadequate, especially with respect to powerful technologies like autonomous weapons. She also pointed out AI’s environmental impact, with significant energy and water use for each AI transaction, adding to concerns over sustainability. Jeremy added that proper regulation could support rather than stifle innovation.
Creativity and AI: The panellists questioned AI’s potential for genuine creativity. Adam contended that creativity should serve a purpose rather than be pursued for its own sake in AI, while Jeremy emphasised that AI merely mimics human behaviour without intrinsic understanding or expression.
Possibility of AI "Coming Alive": Jeremy dismissed the idea, asserting human uniqueness, while Kate expressed scepticism, linking consciousness to phenomena that AI lacks. Adam cautioned that humans should always keep this possibility in mind, as doing so tempers the way we engage with AI.
Transparency and Trust in AI Systems: Adam compared trust in AI to trust in a calculator, which we use and rely on without question because we understand how it works and because that trust has been built over generations. With AI, by contrast, we lack understanding of the "black box" mechanics behind these systems. Jeremy questioned where moral responsibility lies, pointing out that artefacts like AI lack intrinsic agency.
AI's Role in Developing Economies: Jeremy and Kate both noted AI's potential in agriculture and education in less economically developed countries (LEDCs), but they also expressed concerns about exploitative practices by tech giants, which rarely prioritise the needs of these regions. Adam added that while AI can democratise opportunities, increased competition may challenge local markets, as local businesses could find themselves competing with markets in more economically developed countries (MEDCs).
Legal and Ethical Decision-Making: Jeremy raised concerns about judges using AI for rulings, as these systems prioritise efficiency over human ethical reasoning, underlining the importance of human judgement and appeals in critical decisions.
Bias and Limitations in AI: Kate pointed out that while large language models (LLMs) accumulate vast knowledge, they amplify human biases instead of reducing them. Jeremy agreed, noting that AI companies rarely acknowledge the data biases in their models, which the commission is assessing as a risk to humanity.
The panellists concurred on the need for transparency, ethical oversight, and equitable development practices, as big tech companies’ advancements continue to outpace public sector regulation, leaving governments and civil society with the challenge of managing AI’s societal impact.
Reflecting on the event, Jeremy Peckham remarked that it was a lively and engaging discussion, with attendees posing insightful questions and sharing a diverse range of opinions. The event was well attended, and the feedback received from attendees was overwhelmingly positive.