Employers should be mindful of GDPR when embracing new people analytics technologies, warns Ben Favaro, a Senior Associate in the data privacy group at Lewis Silkin LLP.

Introduction

HR professionals are increasingly adopting artificial intelligence (AI) and people analytics (the use of personal data in analytical processes to make decisions about workers and candidates), and the potential for further growth is strong.

From recruitment and staff wellbeing to talent retention and performance management, HR teams globally are increasingly turning to AI and people analytics to perform traditional HR tasks. This use of people analytics technology is fundamentally changing what it means to work as an HR professional (watch the highlights from the Future of Work Hub event last year on HR in the age of big data, AI and algorithms here).

What is people analytics?

People analytics, also known as talent analytics or HR analytics, refers to the application of data analysis to help managers and executives make decisions about their employees or workforce. People analytics involves obtaining employee data (which could come from a variety of sources) and then applying statistical and technological expertise to extract intelligence and insights. This can result in better and faster decision-making and give a competitive advantage.

Examples of people analytics

Employers are deploying people analytics technology in a variety of ways.

In recruitment, major companies use AI to sort through hundreds of applications, shortlisting successful candidates and discarding the CVs of unsuccessful ones. Facial recognition technologies are being used to scan the expressions of candidates during video interviews to determine their personality traits and emotions, and potentially even whether they are answering truthfully! Employee monitoring apps and bracelets feed performance data directly to software which can then make objective decisions about the workforce, rather than those decisions being made subjectively by a manager who may be prone to biases or prejudices.

Big data, AI and people analytics can provide employers with valuable insights into their workforce. But what happens when monitoring and decision-making technology gets a little too creepy? Employers need to be mindful of the rights and expectations of their workforce and candidate pool when collecting and using data, and must navigate a complex legal framework which protects the rights of data subjects.

How people analytics technology conflicts with GDPR

There is no doubt that legislation has failed to keep pace with the speed at which big data and technology have advanced. In Europe, the General Data Protection Regulation (GDPR), which took effect in May 2018, was designed to address this and acknowledged the changing landscape in the introductory text to the Regulation:

...rapid technological developments and globalisation have brought new challenges for the protection of personal data.  The scale of collection and sharing of personal data has increased significantly.  Technology allows both private companies and public authorities to make use of personal data on an unprecedented scale in order to pursue their activities…. Those developments require a strong and more coherent data protection framework in the EU.

Employers seeking to embrace the opportunities that technology offers will face challenges in meeting the requirements of the GDPR. On the face of it, the inherent nature of big data and some elements of AI technologies come into direct conflict with the fundamental principles of the GDPR. In many cases, there are no simple solutions which achieve GDPR compliance while simultaneously taking full advantage of the benefits of new technologies. Employers need to consider and balance (i) the interests of the data subjects protected by the GDPR; and (ii) the employer’s interests in investing in new data-driven people analytics technologies within the workplace.

People analytics and the principles of GDPR

Below we look at some of the challenges raised by people analytics and, set against the principles of the GDPR, suggest some actions for employers to consider.

The purpose limitation

All processing of personal data requires employers to have a lawful basis for the processing. AI and machine learning inherently use more and more data to improve and expand the functions they are able to perform. Where these new functions mean that personal data is being used for a purpose other than that for which it was originally collected, the new purpose must be compatible with the original purpose. Employers must be able to show that there is a link between the initial purposes and the new AI purposes, and the result must not be unexpected or have an unjustified impact on the employee.

The data minimisation principle

The GDPR provides that controllers must ensure that the personal data being processed is limited to what is necessary for the controller’s purposes. However, “big data” (by definition) requires huge amounts of personal data to be collected and processed in order to be effective. The larger the amount of data being collected, the more difficult it will be for an employer to demonstrate that the data is necessary for the purposes for which it was collected.

Storage limitation

The GDPR also provides that personal data may not be kept for longer than needed. Once the purpose for which the personal data was collected has been fulfilled, the GDPR requires that the data be deleted or anonymised. Data controllers achieve this by implementing retention policies and conducting periodic reviews of stored data to determine which data can be deleted and which can lawfully be retained. However, the value in big data and AI comes from large data sets being stored for future use and analysis. Big data is valuable, and it goes against business instinct to destroy a valuable asset. Further, there is often a temptation to collect and hold data over long periods of time in the hope that one day it may become useful (and valuable), even if there is no current use for it.
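
By way of illustration, the kind of periodic retention review described above reduces to a simple rule: compare each record’s age against the retention period documented for its purpose. The sketch below is a minimal, hypothetical example in Python; the record fields and retention periods are assumptions for illustration only, not a suggestion of what any particular period should be.

```python
from datetime import date, timedelta

# Hypothetical retention periods per processing purpose (illustrative only;
# appropriate periods depend on the employer's legal and business context).
RETENTION_PERIODS = {
    "recruitment": timedelta(days=180),      # e.g. unsuccessful candidates' CVs
    "performance": timedelta(days=365 * 2),  # e.g. appraisal records
}

def due_for_deletion(record: dict, today: date) -> bool:
    """Flag a record whose retention period for its stated purpose has expired."""
    period = RETENTION_PERIODS.get(record["purpose"])
    if period is None:
        # No documented purpose or period: flag for review rather than keep by default.
        return True
    return today - record["collected_on"] > period

records = [
    {"id": 1, "purpose": "recruitment", "collected_on": date(2018, 1, 10)},
    {"id": 2, "purpose": "performance", "collected_on": date(2019, 3, 1)},
]

for r in records:
    if due_for_deletion(r, today=date(2019, 9, 1)):
        print(f"Record {r['id']}: delete or anonymise")
```

The point of the “flag for review” default is that data with no documented purpose cannot be shown to be necessary, so silence should never translate into indefinite retention.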

GDPR and the transparency principle

Transparency is a fundamental part of the GDPR. Individuals have a right to be informed about the collection and use of their personal data. Gone are the days when a data controller could comply with the law by hiding behind a long, convoluted and complex (i.e. unreadable) privacy notice. The GDPR requires controllers to provide information about the processing in a concise, transparent, intelligible and easily accessible way, using clear and plain language. The transparency principle is also no longer merely about making information available to data subjects; it requires controllers to explain the what, how, who, when and why of data processing.

As AI algorithms become more complex and the volume of personal data being collected and stored grows, the task of explaining the collection and use of that data becomes harder. Machines don’t “think” in the way humans do, and with some AI technologies it is impossible to retrace the decision-making “tree” to determine why a particular decision was made. The complexity of the methods AI uses to make decisions, and the often opaque nature of the algorithms it runs, can make it difficult for an organisation to be transparent with its employees and candidates in a meaningful way.
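
To make the contrast concrete: with a simple, interpretable model such as a shallow decision tree, the decision path can be read off directly and explained to a candidate, whereas a deep neural network offers no equivalent readable trace. The toy sketch below, using scikit-learn on synthetic data with hypothetical feature names, shows what a retraceable “tree” looks like.

```python
# A toy contrast: for a shallow decision tree, the "why" of each decision
# can be printed and explained. Data and feature names are synthetic.
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "shortlisting" data: [years_experience, test_score]
X = [[1, 55], [2, 60], [5, 80], [7, 85], [3, 70], [8, 90]]
y = [0, 0, 1, 1, 0, 1]  # 1 = shortlisted

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed rules are the full decision logic, so a meaningful
# explanation to a candidate is feasible for a model like this.
print(export_text(tree, feature_names=["years_experience", "test_score"]))
```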

GDPR and automated decision making

In most circumstances, employees and candidates have the right not to be subject to a decision made solely by automated means which produces a legal effect concerning them or similarly significantly affects them. This might include a decision not to offer a candidate employment, or a decision to discipline or dismiss an employee. Where automated decision-making is permitted, employers are required to implement suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests, and must at least provide the right to obtain human intervention, to express his or her point of view and to contest the automated decision.
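
One common safeguard is a human-in-the-loop gate: decisions the system classifies as having a significant effect are queued for human review rather than applied automatically. The sketch below is a minimal, hypothetical illustration of that routing logic; the names are invented, and in practice what counts as a “significant effect” is a legal assessment, not a line of code.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: int
    outcome: str   # e.g. "reject", "shortlist" (hypothetical labels)
    score: float   # model confidence, illustrative only

def has_significant_effect(decision: Decision) -> bool:
    # Simplified assumption: a rejection significantly affects the candidate.
    return decision.outcome == "reject"

def route(decision: Decision) -> str:
    """Route decisions with significant effects to a human reviewer
    instead of applying them automatically."""
    if has_significant_effect(decision):
        return "queue_for_human_review"
    return "apply_automatically"

print(route(Decision(candidate_id=42, outcome="reject", score=0.91)))
# -> queue_for_human_review
```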

Discrimination and AI

AI technology does not, in itself, suffer from the same biases and prejudices as humans. This means that AI technology can, in theory, be used to make decisions that are free from bias, prejudice and discrimination.

However, AI technologies are only as reliable as the data they are fed, and flawed datasets produce flawed AI decision-making. A global retail giant recently found out to its detriment how even AI technology can be affected by bias. Its AI was taught to recognise preferred candidates by identifying words in CVs that also appeared on the CVs of previously successful candidates. However, given that most previously successful candidates in the industry were men, the algorithm was (inadvertently) taught to select men over women. Overcoming this challenge is not as simple as stripping away the protected characteristic (i.e. removing gender, race or sexual orientation as a factor the AI considers), because other data points often reveal the protected characteristic and the outcome ends up being the same (e.g. even after ‘gender’ is removed from the AI decision-making process, other data points common to one gender will still exist, and the AI may end up using these to make a discriminatory decision).
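
This proxy problem can be demonstrated in a few lines of code. In the toy sketch below, using entirely synthetic data, the ‘gender’ column is assumed to have been stripped from the training data, yet a single correlated proxy feature still carries most of the same information, so any model trained on it could reproduce the discriminatory pattern.

```python
# A toy illustration, on synthetic data, of how a proxy feature can
# reintroduce a protected characteristic that was deliberately removed.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)  # 0/1, synthetic protected attribute

# Hypothetical proxy (e.g. membership of a gendered club or team) that
# matches gender 90% of the time in this synthetic population.
proxy = np.where(rng.random(n) < 0.9, gender, 1 - gender)

# Even with the 'gender' column removed from the training data, the proxy
# remains strongly correlated with it, so the model can still "see" gender.
print("correlation(gender, proxy):", np.corrcoef(gender, proxy)[0, 1])
```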

Finally, it is not enough to implement systems that identify discriminatory decisions in hindsight. The technology needs to be designed to avoid making discriminatory decisions in the first place.

Data protection impact assessments (“DPIAs”)

The GDPR mandates that DPIAs be undertaken where processing is likely to result in a high risk to individuals, particularly where the processing involves the use of new technologies. Emerging people analytics technologies, such as new AI HR systems using big data, will almost certainly require DPIAs to be conducted.

Even where they are not mandatory, conducting a DPIA before implementing new technologies supports compliance with the accountability principle, as it is effectively a ‘roadmap’ demonstrating compliance with the other data protection principles. DPIAs are not necessarily intended to remove all risk from new technology in the workplace but, rather, to assess, minimise and manage the risks to employees and candidates.

A DPIA could be used, for example, to assess the likelihood of recruitment technologies exhibiting bias against female applicants, identify safeguards to minimise those risks, and then implement a system of review to ensure that the risk is appropriately managed in future as the technology is used in different ways.

HR and GDPR: suggested actions for employers using people analytics

Advancements in AI and algorithmic technology in HR make the apparently impossible possible. But just because technology means that you could, it does not necessarily mean that (legally or ethically) you should.

That said, privacy laws should not deter businesses from exploring the use of new technologies in the workplace and reaping their potentially game-changing benefits.

So what are some solutions to the challenges employers face?

Avoid implementing new technologies that use your employees’ or potential candidates’ data in ways that they would likely not expect.

Put yourself in their shoes: if the new technology gives off a ‘creepy’ factor, that may be a sign to avoid it.

Be open with employees and candidates about the existence of the technology, its benefits, and any potential risks. 

Effort should be made to update privacy notices to clearly explain the use of data-driven technologies. Key contacts within the business (e.g. HR or the data privacy lead) should be trained to answer questions from concerned employees or candidates about how their data is being used.

An element of human control or review over automated decision-making should be maintained, especially for decisions which might have adverse effects on employees or candidates.

Where decisions are to be made automatically, employers should ensure that the decisions are made fairly.

Employers have a duty to understand how decisions using AI are made, and should therefore only use off-the-shelf technologies which are sufficiently transparent about how they operate.

Ben Favaro is a Senior Associate in the data privacy team at Lewis Silkin LLP.
