Do we need a Bill of Digital Rights?
Jeremy Peckham, Research Lead at the AI, Faith and Civil Society Commission & Mohammed Ahmed, Research Manager at the AI, Faith and Civil Society Commission
Introduction to the Issue
In the age of advanced technology, existing human rights frameworks, such as the UN Universal Declaration of Human Rights and the US Bill of Rights, fall short of addressing the challenges posed by modern digital tools, especially AI. A Bill of Digital Rights would define the fundamental rights that should govern human interactions in the digital world, ensuring technology serves humanity rather than diminishes it.
This case study explores key areas of concern where AI applications could harm fundamental human rights, and how these concerns could be addressed through a Bill of Digital Rights. These areas are introduced briefly below; please see our other case studies for more detail.
Key Areas of Concern:
1. Cognition and Creativity
AI-driven decision support systems are increasingly used in sectors such as finance, medicine, and law. However, these systems often rely on probabilistic outcomes and on data that is inherently biased by human input. Over time, reliance on these AI tools can erode cognitive acuity, as individuals and institutions defer decisions to algorithms rather than exercising human judgment.
Recommendations:
To mitigate this, there must always be human oversight, especially in decisions impacting individuals and communities, along with the right to appeal any decision made by AI.
2. Authentic Relationships
The proliferation of digital assistants and AI-driven simulations that mimic human behaviour presents concerns for authentic human relationships. These technologies can foster reliance on machines for emotional support, which could erode meaningful human interactions. Moreover, the drive to create AI that seems more human could contribute to gender stereotyping and unrealistic expectations of technology.
Recommendations:
Whilst there shouldn’t be an outright ban on such devices, it should be a requirement that users always know they are interacting with an artefact, not a human. More empirical research is needed into the harms these technologies may pose to humanity.
Research should be conducted on methods to ensure that such artefacts do not appear human (e.g. the use of non-human voices).
The evaluation of a user’s emotions, personality, and character by AI-based artefacts that simulate dialogue (e.g. automated interviewing systems) should be banned.
3. Freedom & Privacy
Privacy and freedom are lost through the use of private data and the surveillance of citizens, whether by the state or by private companies. The use of AI to monitor, track, and identify citizens from facial or other personal attributes is unprecedented in any civilisation and is quite different from the use of other biometrics such as fingerprints. The European Commission and many governments are well aware of these dangers, and urgent action is required to prevent mass surveillance becoming normalised. Although they do not involve AI, Covid-19 tracking apps have brought this prospect even closer.
Recommendations:
There should be an outright ban on the state’s use of AI-based surveillance technologies. An even greater level of surveillance has already been established in the private sector through Big Tech’s use of users’ browsing data, shopping activity, and a host of other data gatherers such as FitBit health monitors. Much of humanity has already lost its freedom and autonomy.
GDPR legislation needs strengthening to limit the extraction and use of personal data. The practice of companies providing free services or products in exchange for data should be banned unless users give explicit, informed consent. Subscription models for these services, free of manipulative AI algorithms, would help to preserve our privacy and freedom.
4. Moral Autonomy
The delegation of moral decision-making to AI systems, such as autonomous weapons or self-driving cars, threatens human moral agency. When machines make life-and-death decisions, the human accountability that such critical choices demand is undermined.
Recommendations:
A Bill of Digital Rights should enforce that any system involving moral decisions—particularly where lives are at risk—must require human oversight and judgment to ensure accountability.
5. Dignity of Work
AI and automation are already transforming workplaces by replacing human labour with machines, raising concerns about the dignity of work.
Recommendations:
While AI can replace jobs in some fields, it is crucial that workers displaced by automation are offered alternative employment opportunities.
6. Truth and Reality
Augmented Reality (AR) and Virtual Reality (VR) systems, though immersive, carry risks of detaching individuals from the real world. Prolonged exposure to these technologies could blur the lines between reality and virtual experiences, leading to addictive behaviours or a loss of connection to real-world relationships.
Recommendations:
Further empirical research is needed to assess the impact of these technologies on users' health.
Safeguards should be implemented to protect against overuse or addiction.
Strong human oversight is essential in the development of these technologies to avoid harm.