AI and Human Rights

Panel at the 2021 "Athens Roundtable on AI, Human Rights and the Rule of Law", with Elizabeth Thomas-Raynaud from GPAI, Marielza Oliveira from UNESCO, Cornelia Kutterer from Microsoft EU, Patrick Penninckx from the Council of Europe, and the Why Not Lab's Christina J. Colclough

Here are excerpts of what the Why Not Lab's Christina Colclough said:

Should there be a convention, you ask, in the singular? No! There should be many! And this is the thing we need to do. There are lots and lots of comments in the chat here about the complexity of all of this. Well, let's peel the layers off the onion and really start looking at the core features of artificial intelligence, its deployment in the public sphere, the private sphere and in workplaces, and see where it is that we actually need conventions. These could be around transparency, around the co-governance and co-design of algorithmic systems to ensure that they do not intentionally or unintentionally harm - that's number one.

I agree with most of what my fellow speakers have said, but I really think we need to start from the ground up here. What I do in the Why Not Lab is work with workers and unions across the world, in all regions, to bridge a huge knowledge gap around data, AI and algorithms. How do we understand these new technologies that are being introduced into workplaces, and how, on that understanding, can workers and unions start building a response, with the ultimate aim of tabling an alternative digital ethos?

Now this leads into the idea of a convention. We are seeing several things in workplaces, and I'm going to limit my comments to the workplace. What we see is that management are introducing tools and systems which, in the vast majority of cases, are third-party systems. Management have not necessarily been trained in identifying harms or risks, or in understanding what the unintended consequences of using these systems could be in terms of violations, discrimination, bias and so forth. And there are lots of other harms we can see workers being subjected to: increased work speed, intensity, et cetera.

So what we see here is that management are introducing these systems and not governing them, and if they are governing them at all, it's from a risk perspective, the risk of being hacked, or safety, or something like that. It's not from a socio-technical perspective.

One of the things the Why Not Lab is helping unions with is actually starting that conversation with management around how we could co-govern these systems, not to remove responsibility and liability from management, but to ensure that management takes that responsibility seriously. So empowerment from the ground up is, I think, absolutely essential.

Can law keep up?

Now this is a question that almost fixes law in place as a constant. Law could keep up if our politicians took responsibility. We are standing, and I said this when I bowed out of the GPAI Steering Committee, on the shoulders of giants: politicians who, earlier in history, dared to take responsibility. And I think the world is now looking at today's global politicians and saying: take responsibility.

Let's face it: the current digital ethos running around the world right now is doing more harm than good, especially from a human rights perspective. The Universal Declaration of Human Rights, which has formed the basis of many human rights laws around the world, is so profound, and I really want to support what Marielza has said: we just have to enforce these absolute rights.

Thinking that through: at the moment, so many workers and citizens are being manipulated to a degree that we must ask, do they really have freedom of thought? How is this manifesting in relation to their work opportunities? For example, are we narrowing the labour market into very exclusive labour markets, where anything outside the norm is simply thrown out the window?

And then I really want to say something, because I am, if I can be so rude, really, really tired of hearing governments and high-level politicians talk about how they respect human rights while allowing the abuse of human rights within their own borders. Just in the world of work: union busting, for example, is an abuse of human rights. So I really think we should tidy up our own backyards, and then acknowledge that we don't just need one international convention, we need several.

And we need to break this down to the very core of artificial intelligence, or algorithmic systems, whatever you want to call them, so that there is a co-building and a co-governance of these systems no matter where they are deployed.

Moderator: Christina, I'm sure you have some things you want to say on the topic of how companies can step up more?

What companies could do is bring dialogue back into vogue. Stop perceiving their employees as their enemies. Really value the fact that union representatives and the workers themselves have their ear to the ground. They are the ones living the impacts, or in the majority of cases the harms, that these systems subject them to.

Management are not experts

And I am so frustrated that in almost every single governance model produced by academia, experts and think tanks, there is an assumption that management know what they're dealing with. They don't. This is a fallacy.

The majority of companies I've spoken to that are deploying third-party systems do not know how to govern these systems in a socio-technical environment. So we need to bring dialogue back into vogue.

The second thing companies can do? Respect collective agreements, respect human rights, freedom of association and the right to collective bargaining, and through collective agreements actually start discussing the implementation and purposes of these systems.

Certification

My third point is, and this is another mind-boggling thing about international law, including the EU AI Act: if you certify a system at all, you are certifying it as it is at the time of certification. I don't think you need to be much of a technical expert to know that the majority of these systems either self-learn through machine learning or get adapted because the instructions to the algorithms are changed. You cannot certify just once. Here, and I think Paul Nemitz is still on the call, one of the genius things about the GDPR, although not many are living up to it, is the periodic reassessment of data protection impact assessments. This is what we need to understand: we need to periodically reassess these systems, and nobody can justifiably do this unilaterally. You have to do it with multiple stakeholders at the table.

Co-governance

So what can companies do? That's it: co-govern these systems, take responsibility, educate themselves so they know what they're actually dealing with, commit to periodic reassessments, and, if harms are being experienced and the system cannot be adjusted, throw it out the window.
