The TUC is calling for stronger rules to protect workers from decisions made by artificial intelligence systems.
AI-powered technologies are now making “high-risk, life-changing” decisions about workers’ lives, including line-managing, hiring and firing staff, said the body, which represents unions.
But a new law will “dilute” existing protections, it said.
The government said the TUC’s assessment was “wrong” and that safeguards will remain in place.
A spokesperson said it was committed to improving and upholding workers’ rights: “AI is set to drive growth and create new highly-paid jobs throughout the UK, while allowing us to carry out our existing jobs more efficiently and safely.
“That is why we are working with businesses and regulators to ensure AI is used safely and responsibly in business settings.”
Artificial intelligence is a branch of computer science that develops machines and software capable of performing tasks which normally require humans, such as decision-making or speech recognition.
A number of researchers have raised concerns about the use of AI in the workplace, particularly recruitment tools which use speech and video to determine a candidate’s suitability for a job. They argue the systems are unscientific and include biases.
The TUC says AI is being used to analyse facial expressions, tone of voice and accents to assess candidates’ suitability for roles.
Left unchecked, it argues, AI could lead to greater discrimination at work. For example, tools that analyse facial expressions may disadvantage candidates or employees with certain disabilities.
However, some of the firms promoting AI tools argue the opposite: that computer systems will make more impartial decisions than humans alone.
Mary Towers, an employment rights policy officer at the TUC, told the BBC its research had found AI tools used in a variety of industries “at the recruitment stage in things like CV sifting, but then beyond that in team allocation, allocation of work, disciplinary measures, right through to termination of employment”.
“We found evidence of AI-powered tools being used in all the different ways in which you would expect a human manager to carry out functions at work,” she said.
‘Unrealistic targets’
AI tools can also be used to track worker performance, sometimes making automated decisions to effectively fire employees.
The TUC warns AI could “set unrealistic targets that then result in workers being put in dangerous situations that impact negatively on both their physical health and mental wellbeing”.
Some employers require workers to carry devices that record data about their activity, which can then be analysed. One warehouse worker told AI policy campaigners Connected by Data that if they took too many toilet breaks it would be flagged and they would have to explain why they weren’t working.
Last month the government published its white paper on AI, which proposed spreading regulation of the technology across different existing bodies rather than creating a single new watchdog.
According to the TUC, the paper offered only “vague” and “flimsy” guidance to regulators on how to ensure AI is used ethically at work, “and no additional capacity or resource to cope with rising demand”.
It also suggested the Data Protection and Digital Information Bill, which had its second reading on Monday, will water down many protections in current data protection legislation, including the right to human review of automated decisions.
‘Human review’
The TUC wants firms to reveal how AI is being used to make decisions about staff. All decisions should be subject to a human review so that workers may challenge them, it says.
The government believes its reforms do not alter workers’ rights to seek human review of significant, solely automated decisions.
The bill will maintain protections that UK workers currently have while giving organisations greater flexibility, it says.
But other organisations have made similar criticisms of the bill. Connected by Data told the BBC: “The bill reduces the ability of workers to access data that is held about them, or to challenge how it is used, meaning they may never be able to know why their ability to earn a living has been threatened.”
To show why such rules are important, it points to a recent court case in which two ride-hailing firms were ordered to turn over information related to automated decisions to dismiss workers in the UK and Portugal.
Angela Rayner, Labour’s deputy leader and shadow secretary of state for the future of work, also supported the TUC’s call, telling the BBC that AI was “already transforming the economy”.
Workers must “have a proper say in how technologies are implemented”, she said.
“Labour will update employment rights and protections so they are fit for the modern economy.”