“Protecting workers’ rights in digitised workplaces”


Op-ed. By Christina J. Colclough. Published in Equal Times on 04.05.2023. Available in English, Spanish and French.


Summary

This op-ed by the Why Not Lab argues that workers and their unions must understand the means through which digital technologies cause harm in order to protect workers’ fundamental rights, values and freedoms.

This implies acquiring new capacities to fill the holes in current AI and data protection regulation through collective bargaining. Read on to find out what must be done, and why employers and governments need to know what they need to know, too!

Conceptual illustration showing a dark silhouette overlaid with ones and zeroes in neon green.

Illustration: Equal Times / Victor De Schwanberg / Science Photo Library via AFP

Protecting workers’ rights in digitised workplaces – knowing what we need to know

Across all sectors of the global economy, automated and algorithmic management systems (AMS) of various kinds are being deployed by employers to, supposedly, increase productivity and efficiency. Whilst this quest is in no way new – management has surveilled workers and sought to improve productivity since the dawn of capitalism – the depth and breadth of these systems’ impacts are. Whilst some AMS can have a positive impact on working conditions, many don’t. Across the world, workers have reported a range of negative impacts, amongst them the intensification of work and its pace, discrimination and bias, and the loss of autonomy and dignity.

Whilst these harms are issues workers and their unions have had to deal with since long before the digitisation of work, the means to these harms are different. Preventing them from happening in digitised workplaces therefore requires that we understand those means. In the case of (semi-)automated systems and AMS, this implies understanding what data, algorithms, artificial intelligence (AI)/machine learning, inferences and much more are, and how these in turn can affect workers.

So what AMS are we already seeing? The UC Berkeley Labor Center’s classification is useful here, identifying three different types of system:

  • Human resource analytics, such as hiring, performance evaluation, and on-the-job training.

  • Algorithmic management, such as workforce scheduling, coordination, and direction of worker activities.

  • Task automation, where some or even all tasks making up a job are automated using data-driven technologies.

Common to all three types is that they: 1) delegate managerial decisions to (semi-)automated systems; 2) collect and use data, whether from workers and their activities, from customers (for example, how they rate a worker), and/or from third-party systems (such as online devices, profiling systems, public datasets, previous employers, and/or data brokers); and 3) have been programmed to fulfil a particular purpose. Some have been given explicit instructions on how to fulfil that purpose. More advanced systems – for example, those using machine learning or deep learning – are however not told how to fulfil the purpose but instead receive feedback from a human when they are on the right track.

Regardless of the individual system, humans have been involved at some point in its life cycle, from development to use. They have determined the purpose and developed the system; perhaps they have decided to reuse a system designed for one purpose and altered it to fit another; someone has set the instructions, decided which datasets the system should be trained on and later use, and so forth.

Preventing harm requires capacity building

All of the above hints at what we need to know and understand so that we can defend workers’ rights in digitised workplaces. Firstly, we need to know what digital technologies are being used in our workplaces. We then need a basic understanding of them: who developed them, how do they work, what data were used to train the system, and are these data representative of our culture, traditions and institutions? We need to know what algorithmic inferences are, and which ones are being used in the system and/or subsequently made. We must find out what instructions the systems have been given and by whom, and how all of this, together and independently, can impact workers’ rights up and down value and supply chains, today and in the future.

Admittedly, this is not a small set of tasks. To know what to ask and what to look for requires specific, and for many new, competencies. In some countries, management might be obliged by law to provide workers with some of this information. In some workplaces, management might be interested in engaging with the workers and happy to share what it knows. In others, management might be tight-lipped, saying nothing and refusing to engage meaningfully with the workers.

In all cases, workers and their representatives could begin by defining general principles for the use of digital technologies in workplaces (see, for example, the British TUC’s 2021 report When AI is the Boss). They could then map, analyse and query each system used. With this knowledge, they can negotiate guardrails and permissions around the systems’ current as well as future impacts on workers.

Managerial fuzz

Yet it is not only the workers who urgently need to build capacity – so too do the employers who are deploying digital technologies. Reports from unions across the world reveal that managers “don’t know what they should know” either. Maybe the human resources department is using an automated scheduling tool that the IT department purchased on the orders of executive management. Who is responsible for the impacts of the tool? Who has been trained to identify and remedy harms? In many cases, the division of responsibility between managers with regard to the governance of these technologies has not been made clear. Managerial fuzz abounds. Who has informed the employees about the system? Do the systems actually do what they claim? How should they be governed for (un)intended harms, and by whom? Who is evaluating the outcomes and making the final decision on whether or not to follow the system’s recommendations or results? What are the rights of those affected?

It is alarming, to say the least, that so many workers report that they have never been informed about what digital technologies their employer is using to manage them. Equally concerning is the fact that managers are deploying technologies they have not properly understood. Given that the vast majority of digital technologies deployed in workplaces are third-party systems, if employers aren’t governing the technologies that are designed by others yet deployed in their workplaces, control and power seem to be slipping further away from the workers and into the hands of the vendors and/or developers. The labour-management relationship is thus becoming a three-party relationship, yet few fully realise this. The increasing power of third-party vendors and developers comes at the expense of the autonomy of both labour and management. This, in turn, will indirectly, if not directly, have a negative influence on worker power.

Governmental omissions

Many governments across the world have already improved, or are looking into improving, their data protection regulations, and many are also moving to regulate AI. An element of these regulatory proposals concerns mandatory audits or impact assessments. Whilst this is good, there are some worrying tendencies. Firstly, no government is proposing that these audits or assessments be conducted in cooperation with the workers and/or their representatives. This includes the EU – otherwise heralded as a region in support of social dialogue.

Secondly, they all assume that the tech developers and/or management have the competencies needed to conduct these audits and assessments meaningfully. Do they, though? Is self-assessment sufficient? Is it acceptable that they alone decide (if they actively decide at all) that a system is acceptable to use if it is fair to 89 per cent of the workers? What about the remaining 11 per cent? Shouldn’t the workers concerned have a say?

Capacity building is happening

To fix all of these issues, there can be little doubt that capacity building is required. Fortunately, over the last one to two years, more and more unions have been doing exactly this.

The global union federation for public services, PSI, is this year concluding a three-year capacity-building project called Our Digital Future. It is training regional groups of digital rights organisers, trade union leaders and bargaining officers, and equipping them with tools and guides to help bridge the gap from theory to practice and strengthen their collective bargaining.

The International Transport Workers’ Federation is running a two-year union transformation project that introduces unions to a tailor-made Digital Readiness Framework, which seeks to help unions tap into the potential of digital technologies – but responsibly, and with privacy and rights at heart.

Education International has launched a three-part online course on its ALMA platform covering the challenges of EdTech and possible union responses.

The British TUC has just launched an e-learning course called Managed by Artificial Intelligence: Interactive learning for union reps, which in a practical and guided way helps unions map the digital technologies in use and from there form critical responses.

The AFL-CIO in the United States has created a Technology Institute that, according to AFL-CIO president Liz Shuler, is “a hub for skills and knowledge to help labour reach the next frontier, grow and deploy our bargaining power, and make sure the benefits of technology create prosperity and security for everyone, not just the wealthy and powerful.”

Three Norwegian unions – NITO, Finansforbundet and Negotia – have collaborated to create an online course for shop stewards that offers a general introduction to AI systems in workplaces and provides a tool to support shop stewards in asking the questions necessary to protect workers’ rights and hold management responsible.

Many other national, regional and global unions are leaping into this capacity-building work through workshops and conferences on the digitalisation of work and workers. These events are inspiring them to transform their strategies and table new demands in collective bargaining. This thrust from the unions will bring employers to the table and, in turn, push them to learn what they need to know to address the unions’ demands. Given the sluggishness of, and gaping holes in, current governmental discussions on AI regulation, collective bargaining will be essential for workers and their unions to reshape the digitisation of work in a way that respects workers’ fundamental rights, freedom and autonomy.
