Rethinking Digital Regulation


Blog post by Dr Christina J. Colclough, Founder of The Why Not Lab. Published 29 August 2024


Photo by Markus Spiske, available on Unsplash

What three overarching principles could change the regulation of digital systems so fundamental rights are truly protected in digitalised societies and labour markets?

Across the world, many governments are currently tabling proposals for regulating “AI”. The EU has adopted its AI Act, a risk-based approach that stipulates the obligations of developers and deployers according to the risk level of the system in question.

Whilst the EU AI Act, like other regulatory proposals, claims to protect fundamental rights, I and many others (e.g. Amnesty International, Liberties, Access Now) argue that it will not do so sufficiently. This, somewhat sadly, also applies to the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Whilst it includes many good clauses, it falls short: it either omits one or more of the three main principles I list below, or words them weakly.

We need to turn things upside down and start afresh if we truly want to defend our fundamental rights. This blog post proposes three main principles that the regulation of digital systems should be built on to do just that.

Each of the three can be operationalised through a range of obligations and commitments. These will be described in subsequent blog posts as I work through the operational parts.

This blog post is a work in progress - I welcome any and all constructive comments and feedback.


Principle 1: The Right to Be Free From Algorithmic Manipulation

All fundamental rights declarations, charters and laws should include in their preamble the Right to Be Free From Algorithmic Manipulation.

There can be no exercise of freedom of speech or thought, nor of dignity, equality and so on, when we are algorithmically manipulated. This right thus ensures the realisation of all the other fundamental rights.

Principle 2: Algorithmic systems must be inclusively governed

Inclusive means governed in meaningful cooperation with representatives of the subjects of these systems and with other relevant stakeholders, such as consumer organisations, environmental organisations and experts.

This principle will enforce transparency and protect democracy. It will ensure that deployers of digital systems actually understand the real and potential consequences of the digital systems they choose to use, and adjust or reject them if they cause harm to the subjects.

Yes, this sounds cumbersome at first. But on second thought, can any governance procedure or impact assessment be meaningful, or even truthful, if a multitude of voices are not party to it both prior to, and after, the deployment of the digital system(s)? These same voices should be heard, have full access to remedies, system information, system changes and so on, and have the right to invoke Principle 1 above if injustices are experienced and not satisfactorily addressed.

Principle 3: Reverse the burden of proof

Borrowing from anti-discrimination laws in some countries and regions, the reverse burden of proof will mitigate the information asymmetries between those deploying digital systems and those who are subjects of the systems.

The logic is that it should never fall to the subjects of digital systems to prove that they have been harmed, when they have no full access to the algorithmic systems, their instructions, their training data etc., nor can they be expected to have the knowledge to understand the consequences of said systems.

It should, simply, be the responsibility of deployers to prove that no harms or other violations are caused.

The reverse burden of proof will put real force behind the principles of Accountability, Explainability and Fairness found in many AI proposals (see, for example, the OECD AI Principles for their definitions).

Why These Three?

I’ve given this lots of thought. My conclusion is that if every “AI” regulation - or whatever label we use for regulation of algorithmic and/or data-driven systems - had to respect these three principles, we would have a situation where:

  • Businesses and public services would have to offer a no-tracking, no-algorithmic-inferencing, no-data-trading option (Principle 1).

  • We would disincentivise what Professor Shoshana Zuboff calls “markets in human futures” - i.e. the analysis and trading of behavioural predictions based on our personal data and/or personally identifiable information.

  • We would therefore limit, if not entirely remove, fundamental rights violations that often occur unbeknownst to subjects, yet can have real-life negative effects.

  • We can hold public services, private companies, employers and organisations accountable and liable for the digital systems they use (Principle 2). Inclusive governance assumes transparency. Transparency is necessary for accountability. Subjects of these systems and their representatives must know what systems are being used, for what purposes and based on what data, and they must have an equal role in the ongoing governance of these systems.

  • We can ensure that accountability is directed not only at the social field but also at the environmental one. By including obligations to account for, and disclose, the environmental costs not only of building the necessary hardware but also of the ongoing water, land and electricity consumption required to run these systems, we put people and planet before profit.

  • We can demand that the deployers of these systems actually understand the potential and pitfalls of the digital systems they are using. Inclusive governance (Principle 2) and the reverse burden of proof (Principle 3) assume that the entities that deploy these systems can explain how they work, their purpose(s), their instructions etc. Meaningful governance is not possible if you cannot explain what it is you are governing. Nor can you prove or disprove discrimination or other harms.

  • We will limit information asymmetries and therefore enable the full respect of fundamental rights in principle and practice.

Combined, these three principles will promote workplace democracy. They will strengthen democratic control over the systems used in public services as well as in the market. They will put social and environmental impact costs, risks and possible benefits centre stage, thus providing a more nuanced, even realistic, evaluation of any productivity and/or efficiency promises.

They will prevent the opaque algorithmic manipulation that we are all currently subject to. They will, in a meaningful way, put humans at the centre of societal and labour market change. And, importantly, they offer an opt-out option. They do not assume that digital change is the only constant we all must adjust to. On the contrary, they provide us with the right to say no. This is what Dr Jonnie Penn and Ben Tarnoff insightfully call a de-computerization strategy.


Please offer your comments and feedback!
