Why impartiality has to be a key part of your AI game plan

A few simple principles can help businesses avoid the potentially disastrous implications of unleashing artificial-intelligence-based agents that develop their own unconscious bias.

With 60 per cent of UK companies already using or planning to implement artificial intelligence, the debate around its ethical challenges is gaining momentum. Earlier this year, for example, the European Commission drafted a white paper on AI outlining a new approach to excellence and trust, a clear indication that Europe is seriously considering stricter measures governing its use.

According to Gartner, by 2021 a quarter of employees who already use technology in their work will have an AI-powered digital colleague that they interact with on a daily basis. While the opportunity to create new efficiencies and offer an elevated level of service to consumers is immense, so are the risks.

For organisations just getting started, there are some key considerations to apply in the process of defining a responsible AI game plan.

First, you need to plan for the unexpected. Remember what happened when Microsoft deployed its customer-facing chatbot, Tay, on Twitter? Within 24 hours, Tay had become a racist and sexist spokesperson by re-posting content learned from her online interactions. This illustrates that while you cannot predict every possible interaction, your implementation strategy must be carefully planned. Standard safety checks for any roll-out should include testing with representative users before launch, using human oversight as an extra layer of analysis and safety, and designing how the digital employee responds to unplanned and out-of-scope interactions.
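To make that last check concrete, here is a minimal sketch of one way to handle out-of-scope input: the agent only answers when an intent classifier returns an approved topic with sufficient confidence, and everything else falls back to a scripted reply flagged for human review. The intent labels, threshold and stub classifier are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75                 # assumption: tuned per deployment
APPROVED_INTENTS = {"order_status", "reset_password", "opening_hours"}

@dataclass
class Reply:
    text: str
    escalate: bool          # True => hand the transcript to a human reviewer

def classify(utterance: str) -> tuple[str, float]:
    """Stub intent classifier; a real deployment would call a trained model."""
    if "password" in utterance.lower():
        return "reset_password", 0.92
    return "unknown", 0.30

def respond(utterance: str) -> Reply:
    intent, confidence = classify(utterance)
    # Low-confidence or out-of-scope input never reaches an open-ended
    # generative path: it gets a scripted fallback plus human escalation.
    if intent not in APPROVED_INTENTS or confidence < CONFIDENCE_THRESHOLD:
        return Reply("I'm not sure I can help with that; let me connect "
                     "you with a colleague.", escalate=True)
    return Reply(f"Handling your '{intent}' request now.", escalate=False)

if __name__ == "__main__":
    for message in ("How do I reset my password?", "Say something outrageous"):
        reply = respond(message)
        print(message, "->", reply.text, "| escalate:", reply.escalate)
```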

Companies using intelligent digital employees must also think about how their brand fits in: the style and tone of the dialogue, the extent of the tasks the agent is empowered to execute, and the audiences it interacts with. Whether an agent is used internally or in customer service, the AI solution inevitably becomes a brand ambassador. This understanding will help define appropriate and consistent guidelines, which will ensure the AI is trained to represent the values, goals, and culture of the business.

The collection and provisioning of adequate training data is another big challenge for AI developers. Amazon, for example, has been scrutinised for the bias of its automated recruitment tool, which was trained with data from CVs submitted over a 10-year period; these reflected the gender imbalance within the tech industry and therefore unfairly disadvantaged female applicants. Training data and modelling are only ever as impartial as the team that designs them. It is critically important for companies to reflect on potential gaps in their data's objectivity, use test methods that explore the potential impacts, and operate transparently on their progress against these issues.
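One simple, widely used heuristic for the test methods mentioned above is the four-fifths rule: compare selection rates across demographic groups and flag any ratio below 0.8 as a possible sign of disparate impact. The sketch below applies it to toy decision records; the group labels and figures are invented purely for illustration.

```python
from collections import defaultdict

# Toy decision records: (group, selected) pairs. In practice these would be
# a model's historical outputs; the labels and numbers here are invented.
decisions = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", False),
]

def selection_rates(records):
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += was_selected
    return {group: selected[group] / totals[group] for group in totals}

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                   # {'female': 0.25, 'male': 0.5}
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8 flags possible bias
```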

To build a diverse team from a list of qualified applicants, consider using techniques that mitigate unconscious bias. Three tactics that can help are: removing names from the applicant review process, augmenting candidate evaluation with a skills test to gauge potential, and introducing structure to the interview process so that every candidate faces the same set of questions. Lastly, consider setting diversity goals for your AI team that emphasise the importance of representation and a balance of decision-making power for successful AI projects.
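If applications arrive in a structured form, name removal can even be automated. The sketch below assumes a hypothetical Application record and simply strips identity fields before reviewers see the pool; the field names and records are illustrative, not a real applicant-tracking schema.

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    email: str
    skills_test_score: float        # 0-100, from a standardised exercise
    structured_answers: list[str]   # answers to the fixed interview questions

def anonymise(app: Application, reference: str) -> dict:
    """Expose only the fields a reviewer should see, under a neutral ID."""
    return {"ref": reference,
            "score": app.skills_test_score,
            "answers": app.structured_answers}

applicants = [
    Application("Ada Example", "ada@example.com", 88.0, ["..."]),
    Application("Bob Example", "bob@example.com", 91.5, ["..."]),
]
blind_pool = [anonymise(a, f"cand-{i:03d}") for i, a in enumerate(applicants)]
print(blind_pool)   # names and e-mail addresses never reach the reviewers
```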

One way to leverage the potential of machine learning without risking a Tay-like disaster is through managed self-learning. This means that a digital employee is first trained on a defined data set and then learns through user interactions, a process overseen by a supervisory body that checks and approves newly acquired knowledge. This approach ensures that learned behaviours correspond to the organisation's functional and ethical vision for its AI implementation, and are based on accurate and unbiased data.
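Here is a minimal sketch of that managed self-learning loop, assuming a simple question-and-answer knowledge base: anything learned from live interactions is quarantined in a pending queue and only served to users once a human supervisor approves it. The class and method names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    approved: dict[str, str] = field(default_factory=dict)  # served to users
    pending: dict[str, str] = field(default_factory=dict)   # awaiting review

    def learn(self, question: str, answer: str) -> None:
        """Quarantine newly observed Q&A pairs; they are never served raw."""
        if question not in self.approved:
            self.pending[question] = answer

    def review(self, question: str, accept: bool) -> None:
        """A human supervisor promotes or discards each pending item."""
        answer = self.pending.pop(question)
        if accept:
            self.approved[question] = answer

kb = KnowledgeBase(approved={"opening hours?": "We open at 9am."})
kb.learn("refund policy?", "Refunds within 30 days of purchase.")  # from chat
kb.review("refund policy?", accept=True)            # supervisor signs it off
print(kb.approved)
```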

Before embarking on any AI implementation, you need to think hard about what your ideal digital employee would look like. Ensuring its ethical development requires comprehensive and thoughtful planning: reviewing the make-up of the team, the representativeness of datasets, and how it will be managed on an ongoing basis.

Considering the potentially disastrous repercussions of unconscious bias in this field, it's clear that organisations need to make an enormous effort to ensure their AI solutions are designed and set up to act impartially. By adhering to the principles described here, companies can not only minimise the risk of reputational damage, but also benefit from better outcomes from their AI investments in the medium and long term.

Esther Mahr is a conversational experience designer with IPsoft. Noelle Langston is the company's director of experience design.
