Artificial intelligence in the workplace – ComputerWeekly.com

Far from being a futuristic concept relegated to the realms of science fiction, the use of artificial intelligence (AI) in the workplace is becoming more common. The benefits of using AI are usually framed in terms of time and productivity savings. However, the challenges of implementing AI in HR practices and procedures should not be underestimated.

AI technologies are already being used across a broad range of industries, at every stage in the employment cycle. From recruitment to dismissal, their use has significant implications. In recent months, Meta, Estee Lauder and payment services company Xsolla have all hit the headlines for using AI when dismissing employees.

All three companies used algorithms as part of their selection process. At Meta and Xsolla, the algorithms analysed employee performance against key metrics to identify those deemed unengaged and unproductive. These employees were subsequently dismissed.

Similarly, Estee Lauder used an algorithm that assessed employees during a video interview when making three makeup artists redundant. The software measured the content of the women's answers and their expressions during the interview, and evaluated the results against other data about their job performance. This led to their dismissal.

Where algorithms are used in place of human decision-making, they risk replicating and reflecting existing biases and inequalities in society.

An AI system is created by a variety of participants: those writing the code, those inputting the instructions, those supplying the dataset on which the system is trained, and those managing the process. There is significant scope for bias to be introduced at each stage.

If, for example, a bias towards recruiting men is embedded in the dataset, or women are under-represented in it, this is likely to be replicated in the AI's decisions. The result is an AI system whose decisions reproduce the inherent bias. If unaddressed, those biases can become exaggerated as the AI learns, becoming ever more adept at differentiating on the basis of them.
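By way of illustration only, here is a minimal sketch in Python (not drawn from any of the systems mentioned above) of how a screening model trained on skewed historical hiring data can end up weighting gender far more heavily than ability; the data, feature names and weights are entirely hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic historical hiring data in which most past hires were men,
# so the gender flag correlates strongly with the "hired" label.
is_male = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)
hired = ((0.3 * skill + 2.0 * is_male + rng.normal(0.0, 1.0, n)) > 1.0).astype(int)

# A model trained on this history learns to lean on gender rather than
# skill, reproducing the bias present in the data it was given.
X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "is_male"], model.coef_[0].round(2))))

On this toy data, the learned weight on the gender flag dwarfs the weight on skill, which is precisely the replication of historical bias described above.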

To mitigate this risk, HR teams should test the technology by comparing AI decisions with human decisions and looking for bias. This will only be effective in combating unconscious bias if the reviewers themselves comprise a diverse group. If bias is discovered, the algorithm can and should be changed.
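As a rough sketch of what such a check might look like in practice (an assumption about one possible approach, not the methodology of any company mentioned here), the snippet below compares an algorithm's selection rates across groups and applies the widely cited "four-fifths" rule of thumb; the group labels and sample decisions are invented.

from collections import defaultdict

def selection_rates(decisions):
    # decisions is a list of (group, was_selected) pairs.
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    # Lowest group selection rate divided by the highest; a value below
    # 0.8 is a common warning sign of adverse impact worth investigating.
    return min(rates.values()) / max(rates.values())

decisions = [("men", True), ("men", True), ("men", False),
             ("women", True), ("women", False), ("women", False)]
rates = selection_rates(decisions)
print(rates, round(disparate_impact_ratio(rates), 2))

The same comparison can be run on the human reviewers' decisions, which is where a diverse review panel matters: a biased benchmark makes a biased algorithm look fair.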

AI systems are increasingly viewed by employers as an efficient way of measuring staff performance. While AI may identify top performers based on key business metrics, such systems lack personal experience, emotional intelligence and the ability to form an opinion to shape decisions. There is a danger that low-performing staff could be written off solely on an assessment of metrics. Smart employees are also likely to find ways to manipulate AI to their advantage in ways that would not be so easy without the technology.

It is tempting to trust AI with decision-making in the hope of limiting legal risk. Superficially, this may be right, but the unintended consequences of an AI system can easily produce a lack of transparency, and bias, equivalent to that of its human creators.

When AI systems are used, there is an obligation to consider how they might affect fairness, accountability and transparency in the workplace. There is also a risk of employers exposing themselves to costly discrimination claims, particularly where the policy of using AI disadvantages an employee because of a protected characteristic (such as sex or race) and discriminatory decisions are made as a result.

Until AI develops to the point where it outperforms humans in learning from mistakes or understanding the law, its use is unlikely to materially mitigate risk.

Catherine Hawkes is a senior associate in the employment law team at RWK Goodman.
