“Negotiating the two faces of digitalisation”
Op-ed by Christina J. Colclough. Published in Equal Times on 12 July 2023. Available in English, Spanish and French.
Summary
This op-ed offers insights into how we can begin to map the current and future impacts of the digitalisation of work and workers. It suggests focusing on a quadrant that crosses two processes, (semi-)automation and quantification, with a timeline running from immediate to long-term effects. This disruption mapping will provide workers and unions with an overview of real and possible futures and can be a good method to use when planning for collective bargaining and/or policy advocacy.
Op-ed
Full or semi-automation/robotisation
Whilst the process of automation is nothing new, its extent and speed are. This is not least due to the launch this year of corporate-driven generative AI systems, such as OpenAI’s ChatGPT and Google’s Bard. It has been estimated that over 300 million jobs worldwide will be severely affected by these systems.
In time, this disruption will hit workers across all occupations. In education, teachers will be able to use these systems to prepare lesson plans or evaluate student exams. In the film and media world, scripts could be written, special effects designed, and actors replaced by automation. Journalism can be automated, and fiction writing too. In the health sector, patient care plans and illness diagnoses can be generated by machines, and even care workers can be substituted by them. Coders, accountants and game developers could all be out of work. Customer call centres could be fully automated, as could research jobs.
Yet the impact of this disruption will not be equally felt across the world and across skill levels. A recent report from global consulting firm McKinsey finds that: “Adoption is also likely to be faster in developed countries, where wages are higher and thus the economic feasibility of adopting automation occurs earlier. Even if the potential for technology to automate a particular work activity is high, the costs required to do so have to be compared with the cost of human wages.” It further states that “generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work”.
To map this process, start by reflecting on the immediate impacts that (semi-)automation/robotisation will have on jobs, tasks, worker autonomy and working conditions. Then reflect on the long-term consequences of these disruptions.
Quantification
The process of quantification is more opaque, yet just as disruptive. Quantification refers to how data and algorithmic systems turn our actions and non-actions into measurable events.
Put simply: “You are late six days out of 10” or “your productivity rate is higher than that of your peers”. In reality, these calculations can include many more inputs: your gender, age, ethnicity, postcode, educational level, shopping habits, BMI or other health data, and much, much more. The calculus can also be far more complex: it can compare all of your attributes against very large datasets. It can find patterns and thereby create ‘facts’ or ‘truths’ that few have insight into.
These opaque quantifications can have immediate consequences: you can get fired, hired or promoted. But importantly, they are fed into algorithmic systems that future workers will be measured against.
For example (again, simplified for the sake of explanation): a system has found that your productivity has been declining for the last three years. You are 52 years old, a woman, divorced, and you rent a small apartment on the outskirts of a small town. Your health seems to be in decline as your BMI is rising.
Future job applicants who share all or most of these characteristics will likely be flagged as ‘less productive’ or of ‘declining productivity’ by a (semi-)automated hiring system, meaning they will most likely never be considered for a job similar to yours.
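To make that mechanism concrete, here is a minimal, hypothetical sketch of how such a screening tool might work. It is not any real vendor’s system: the attribute names, the labelled profile and the 0.75 similarity threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a screening tool that flags applicants by their
# resemblance to a profile previously labelled 'declining productivity'.
# All attribute names, the labelled profile and the threshold are
# illustrative assumptions, not any real vendor's system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    age_band: str        # e.g. "50-59"
    gender: str
    housing: str         # e.g. "renter"
    bmi_trend: str       # e.g. "rising"
    label: Optional[str] = None   # label attached by the employer's system

# One historical profile the system has labelled (rightly or wrongly)
historical = [
    Profile("50-59", "female", "renter", "rising", label="declining productivity"),
]

ATTRIBUTES = ["age_band", "gender", "housing", "bmi_trend"]

def similarity(a: Profile, b: Profile) -> float:
    """Share of attributes two profiles have in common (0.0 to 1.0)."""
    matches = sum(getattr(a, f) == getattr(b, f) for f in ATTRIBUTES)
    return matches / len(ATTRIBUTES)

def screen(applicant: Profile, threshold: float = 0.75) -> str:
    """Flag the applicant if they closely resemble any negatively labelled profile."""
    for past in historical:
        if similarity(applicant, past) >= threshold:
            return f"flagged: resembles a '{past.label}' profile"
    return "passed to human review"

# An applicant is flagged purely because she matches the labelled profile
# on 3 of 4 attributes; nothing here measures her own work.
applicant = Profile("50-59", "female", "renter", "stable")
print(screen(applicant))
```

Note that nothing in this calculation says anything about the applicant’s own work; the ‘fact’ travels with the pattern, which is precisely the problem described below.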
But what if the calculations missed some crucial facts about your life? You had been in a car accident and broke your knee a year ago. You have been going to physiotherapy since, but feel progress is slow as you still can’t run as fast as you used to. You have never owned a house and have always rented because you believe this suits you better. You prefer small village life. What if the algorithms scored your behaviour negatively, when much of it isn’t negative at all?
Now take those miscalculations and replicate them by the thousands. The effects on future workers would be very real, yet all on the wrong basis.
Such quantification and labelling occur all the time. Wolfie Christl, a privacy researcher at the Vienna-based research institute Cracked Labs, recently discovered a 650,000-row spreadsheet on ad platform Xandr’s website. It revealed a massive collection of ‘audience segments’ used to target consumers based on highly specific, sometimes intimate, information and inferences.
To map the immediate and possible long-term effects of quantification, start by describing the digital systems the employer is using. These could be automated scheduling tools, hiring tools or productivity scores. Then imagine the profiles/inferences you think these systems are creating, and from which data. Finally, map the effects that these might have in the future.
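To make the mapping exercise from both sections tangible, here is a minimal sketch of the disruption-mapping quadrant expressed as a simple data structure. The layout and the entries are illustrative assumptions drawn from the examples in this op-ed, not a prescribed template; a union branch would replace them with findings from its own workplace.

```python
# Illustrative disruption-mapping quadrant: two processes crossed with two
# time horizons. The entries are placeholder examples, not a fixed template.

disruption_map = {
    "automation": {
        "immediate": [
            "lesson plans and exam marking drafted by generative AI",
            "customer call centres partially automated",
        ],
        "long_term": [
            "whole occupations reorganised or displaced",
            "downward pressure on wages and working conditions",
        ],
    },
    "quantification": {
        "immediate": [
            "productivity scores fed into performance reviews",
            "automated scheduling and hiring tools",
        ],
        "long_term": [
            "inferences reused to screen future job applicants",
            "miscalculations replicated across thousands of workers",
        ],
    },
}

# Print the quadrant as a simple checklist for discussion or bargaining prep.
for process, horizons in disruption_map.items():
    for horizon, effects in horizons.items():
        print(f"{process} / {horizon.replace('_', '-')}:")
        for effect in effects:
            print(f"  - {effect}")
```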
Union responses
Unions must mitigate these severe disruptions to ensure inclusive and diverse labour markets, now and in the future. To do so, they could unpack these two faces of the digitalisation of work as well as their immediate and future effects. This disruption mapping will provide workers and unions with an overview of real and possible futures and can be a good method to use when planning for collective bargaining and/or policy advocacy. Here are a few pointers as to what could be included in either or both:
Disruption’s obligations: In most countries across the world, companies that introduce disruptive technologies have few obligations towards their employees. Unions could demand that disruption cannot take place without obligations towards those who are disrupted. This could include an obligation to continuously upskill or reskill workers during working time, and to offer support to workers who will lose their jobs in the form of career advice, training programmes and the like. The cost of these programmes should be borne exclusively by the disruptive employer.
Data: We need to know what data management (and, importantly, any third parties who might have access to this data) collects, from which sources and for what purpose(s).
Inference transparency: We should have the right to be informed about all of the inferences/profiles created using our data. Importantly, we also need to demand the right to know what inferences we are the subjects of, measured against and manipulated by. Far from all data protection regulations in the world provide these rights, and fewer still provide the latter.
Freedom from algorithmic manipulation: Human rights/civil rights should be extended to include the right to be free from algorithmic manipulation. This is strongly linked to the demand for inference transparency but goes further by offering an all-encompassing opt-out clause. There can be no freedom of thought, expression or being if our life opportunities are algorithmically defined and limited.
Data rights: We should also negotiate for rights over the use of the data extracted from us, including the inferences. These rights must be extendable to the employer and to any third party who might have access to our data. Whilst many data protection regulations include a number of rights, far from all grant data subjects (i.e. us) the eight rights European workers have under the GDPR. These should be clearly extended to inferences – collective and individual, present and future. This includes adding a new data right currently only found in the Californian data protection regulation, the CPRA: the right to prohibit the selling of one’s personal data.
Co-determination and governance rights: Whilst it is nearly impossible to explain these two big topics in a single paragraph, workers should have the right to: (a) be consulted on the introduction of any new technology (German workers have strong rights in this respect); (b) co-determine the purpose, the data used and the instructions provided to the system; (c) know about the inferences made and edit or block them; and (d) be involved in the necessary continuous governance of digital technologies at work so that their experiences and opinions are part of the impact assessments. This latter point is very important, as obligations regarding ex-post governance seem to be lacking in many policy proposals.