In this article, Adrian Wakeling, Senior Policy Adviser at Acas, takes a detailed look at a recent Acas report on the ethical use of algorithms at work.

INTRODUCTION: Ethical use of algorithms

Hamlet memorably said that “There is nothing either good or bad but thinking makes it so”. But if the thinking is being done by a set of mathematical equations rather than a person, does this make us more or less able to assess what is good or bad in the world of work?

The way we use computer code at work – primarily to improve the efficiency of our decision-making and the profitability of our businesses – is the focus of a new Acas report. ‘My Boss the Algorithm: an ethical look at algorithms in the workplace’ uncovers some of the intended and unintended consequences of letting technology take the upper hand.

TOO MUCH INFORMATION

In an ideal world, technology does all the boring, repetitive work no one likes doing, and frees up time for better use of skills at work and quality human interactions. In reality, algorithmic management also presents managers with vast amounts of data they are not always trained to interrogate or question. This leaves room for what has been termed ‘strategic ignorance’: when it is useful for those in positions of power not to know something.

MIMICKING PREJUDICE AND BIAS

Algorithms hold out the promise of making fairer decisions but, as many organisations have found to their cost, they are only as fair as the people who program them. To paraphrase Stephen Bush from his article in the New Statesman: ‘algorithms are just slightly less racist than humans’. Bush gives the example of the Metropolitan Police’s use of facial recognition technology, which performed very badly but was still twice as accurate as the old powers of stop and search. So algorithms may merely hold up a mirror to our own prejudices.

NEW ERA OF DIGITAL TAYLORISM

Taylorism – the application of science to human engineering and work processes – was big at the height of the Second Industrial Revolution at the start of the twentieth century. Many commentators feel that it could be making a digital comeback with the current use of algorithms to improve efficiency by monitoring workers and controlling the allocation of tasks. But as recent work by the Carnegie Trust and the RSA has highlighted, having a say in how you do your job – ‘work autonomy’ – is a key component of job quality. Surely ‘good work’ for all is something we can all get behind?

As algorithms are often largely invisible, it is difficult for most of us to understand just what influence they have. This is why ‘augmentation technology’ is so widely touted, with humans working more closely with machines and, in the end, having the final say.

Despite the risks, algorithms do offer genuine opportunities to improve working lives for all of us. For example, they could play a part in:

  • Narrowing the gender pay gap - it has been widely reported that the gender pay gap may take decades to close. It cannot help if the data that algorithms use to recruit and promote people is historical, from a time when gender bias and stereotyping were most ingrained. Employers could commit to using this technology to take positive action to recruit and promote women.

  • Recruiting and retaining neuro-diverse employees - Acas research has shown that traditional recruitment and management techniques often do not work for neuro-diverse candidates and employees. Algorithms could play a part in accommodating a wider spectrum of skills and preferences.

  • Improving transparency and accountability - in the way decisions are made. It is a little-known fact that the General Data Protection Regulation (GDPR) safeguards individuals from solely automated decision-making, with some caveats around consent. The tricky thing for employees is that the benefits of monitoring, such as wearing wristbands to check on personal health, get bundled up with the drawbacks, in terms of how the data about you will be used. We need to be far more open about this exchange.

CONCLUSION

It’s become a cliché to say that technology is inherently neutral and only becomes good or bad in human hands. A much more pressing concern is whether we are clear about what good and bad at work means and what we are prepared to do to champion it.

Adrian Wakeling is a Senior Policy Adviser at Acas.
