Publication
“Generative AI: A Worker’s Perspective”
Report produced for Public Services International, 30 October 2024. Available at: https://publicservices.international/resources/digital-publication/generative-ai-a-workers-perspective-?lang=en&id=15435&showLogin=true.
About the report
The report consists of two parts. The first covers what Generative AI is, how it works and its use cases. It then moves on to some of the most important problems with Gen AI: it can displace workers, it lacks accountability and human oversight, it hallucinates, it is culturally insensitive, it has a huge environmental impact, it infringes on intellectual property and it is discriminatory, to name but a few.
Workplace Policy
The second part offers suggestions for what should be included in a workplace policy on Generative AI. Many public and private workplaces do not have a policy in place, yet are nudging their workers to use the tools. Beware! Given the problems identified, serious harm can be caused.
We therefore suggest that a workplace policy on Generative AI should be in place before any worker uses these systems at work. It should include language on, among other things:
Who can use AI, how it will be integrated into work processes, what kinds of tasks it is permitted to assist with, and a plan for upskilling/reskilling workers in working time.
Not uploading any sensitive information, including datasets
Gen AI training to ensure workers have a good understanding of what the AI can and cannot do, avoiding misunderstandings about its potential impact and performance.
Human-in-control principles that ensure workers have the right, and adequate time, to check the validity of the information produced by generative AI systems
How data will be protected, stored, and used if management uses generative AI to process employee data. This includes personal information and communication that may be analysed by Generative AI.
Bias mitigation strategies, including how management will ensure that AI systems do not reinforce existing biases, especially in decision-making processes affecting workers and/or the public.
Accountability structures, so it is clear who is responsible for decisions made with AI input. Workers and the public should know where to raise concerns if AI is making errors or causing harm.
Environmental accounting
Grievance mechanisms to enable problems to be addressed early, before they escalate, and to help identify patterns over time
Supply chain due diligence, such as guaranteeing that all workers involved in the development and moderation of generative AI systems, including data annotators, content moderators and contract workers, are paid a living wage based on their region.
Access the full report in English via the button below. For Arabic, French, Spanish, Portuguese and Korean versions, please go to PSI's website.
This report was written for PSI in connection with a global workshop on Generative AI for trade unions and worker advocates. The workshop is part of PSI’s 3-year project called Negotiating Our Digital Future, which is supported by the Friedrich Ebert Stiftung.