The TUC has published a report urging the government to introduce new legal protections for workers exposed to the use of potentially discriminatory artificial intelligence (AI) in the workplace.

The report, based on a study by the AI Law Consultancy, claims that workers are currently at risk of being "hired and fired" by potentially discriminatory algorithms. The TUC's general secretary, Frances O'Grady, has warned that algorithms, which have been used more widely since the start of the COVID-19 pandemic, could lead to "widespread discrimination and unfair treatment", particularly for gig economy workers and those in insecure work.

This issue was highlighted recently when Uber was criticised after its AI-based facial identification software reportedly failed to accurately identify dark-skinned faces, leaving many workers unable to access its app and find jobs. When drivers open the app, the software compares a newly taken photo with the photograph held on Uber's database to verify that the person logging on is the registered driver. Tests have shown that the software used by Uber has a failure rate of 20.8% for darker-skinned female faces and 6% for darker-skinned male faces.


Comment