RDS and Trust Aware Process Mining: Keys to Trustworthy AI?

Posted: January 24, 2022 at 10:35 am

By 2024, companies are predicted to spend $500 billion annually on artificial intelligence (AI), according to the International Data Corporation (IDC).

This forecast has broad socio-economic implications because, for businesses, AI is transformative: according to a recent McKinsey study, organizations implementing AI-based applications are expected to increase cash flow by 120% by 2030.

But implementing AI comes with unique challenges. For consumers, for example, AI can amplify and perpetuate pre-existing biases, and do so at scale. Cathy O'Neil, a leading advocate for algorithmic fairness, has highlighted three adverse impacts of AI on consumers: AI systems are often opaque, they operate at scale and they can damage the lives of the people they misjudge.

In fact, a Pew Research Center survey found that 58% of Americans believe AI programs amplify some level of bias, revealing an undercurrent of skepticism about AI's trustworthiness. Concerns about AI fairness cut across facial recognition, criminal justice, hiring practices and loan approvals, where AI algorithms have produced adverse outcomes that disproportionately impact marginalized groups.

But what can be deemed fair, given that fairness is the foundation of trustworthy AI? For businesses, that is the million-dollar question.

AI's ever-increasing growth highlights the vital importance of balancing its utility with the fairness of its outcomes, thereby creating a culture of trustworthy AI.

Intuitively, fairness seems like a simple concept: It is closely related to fair play, where everybody is treated in a similar way. However, fairness embodies several dimensions, such as trade-offs between algorithmic accuracy and human values, between demographic parity and policy outcomes, and fundamental, power-focused questions such as who gets to decide what is fair.

There are five challenges associated with contextualizing and applying fairness in AI systems.

The first is that fairness is culturally contextual: What may be considered fair in one culture may be perceived as unfair in another.

A second challenge is the tension between procedural and corrective fairness. In the legal context, for instance, fairness means due process and the rule of law, by which disputes are resolved with a degree of certainty. Fairness here is not necessarily about decision outcomes, but about the process by which decision-makers reach those outcomes (and how closely that process adheres to accepted legal standards).

There are, however, other instances where corrective fairness is necessary. For example, to remedy discriminatory practices in lending, housing, education and employment, fairness is less about treating everyone equally and more about affirmative action. Thus, even recruiting a team to deploy an AI rollout can prove a challenge in terms of fairness and diversity. (Also read: 5 Crucial Skills That Are Needed For Successful AI Deployments.)

A third challenge is that neutral-looking algorithms can still discriminate. Equality is considered a fundamental human right: no one should be discriminated against on the basis of race, gender, nationality, disability or sexual orientation. While the law protects against disparate treatment (when individuals in a protected class are deliberately treated differently), AI algorithms may still produce disparate impact (when variables that appear bias-neutral on their face cause unintentional discrimination).

To illustrate how disparate impact occurs, consider Amazon's same-day delivery service. It is based on an AI algorithm that uses attributes, such as distance to the nearest fulfillment center, local demand in designated ZIP code areas and the frequency distribution of Prime members, to determine profitable locations for free same-day delivery. The service was found to be biased against people of colour, even though race was not a factor in the algorithm. How? The algorithm was less likely to deem ZIP codes predominantly occupied by people of colour as advantageous locations to offer the service. (Also read: Can AI Have Biases?)
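To make this concrete, here is a minimal sketch of the "four-fifths rule" screen commonly used to detect disparate impact: each group's selection rate is compared with the most favored group's rate. The data and column names are hypothetical, invented purely for illustration.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the most favored group's rate.

    A ratio below 0.8 (the 'four-fifths rule') is a common red flag for
    disparate impact, even when the protected attribute itself is never
    an input to the model.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical data: 1 = offered same-day delivery, 0 = not offered.
deliveries = pd.DataFrame({
    "zip_group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "offered":   [1,   1,   0,   1,   0,   0,   0,   0],
})

print(disparate_impact_ratio(deliveries, "zip_group", "offered"))
# zip_group A -> 1.0, B -> 0.3: group B receives the service far less often.
```

Note that the protected attribute never appears as a model input here; the disparity only becomes visible when outcomes are audited by group.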

A fourth challenge is the trade-off between group fairness and individual fairness. Group fairness aims to ensure that AI algorithmic outcomes do not discriminate against members of protected groups based on demographics, gender or race. For example, in the context of credit applications, everyone ought to have an equal probability of being assigned a good credit score, regardless of demographic variables, resulting in predictive parity.

On the other hand, AI algorithms focused on individual fairness strive to produce outcomes that are consistent for individuals with similar attributes. Put differently, the model ought to treat similar cases in a similar way. Both notions can be quantified, as the sketch below illustrates.
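Here is a hedged sketch of how each notion might be measured, with placeholder data standing in for real credit decisions: a group-level gap in favorable prediction rates, and an individual-level consistency score for near-identical applicants.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Group fairness: the gap between groups' rates of favorable predictions.

    A gap of 0.0 means every group is assigned a good credit score at
    the same rate.
    """
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def individual_consistency(X: np.ndarray, y_pred: np.ndarray, radius: float = 0.1) -> float:
    """Individual fairness: how often near-identical applicants get the same outcome."""
    agreement = []
    for i in range(len(X)):
        # Neighbors: applicants whose attributes lie within `radius` of applicant i.
        neighbors = np.linalg.norm(X - X[i], axis=1) <= radius
        agreement.append((y_pred[neighbors] == y_pred[i]).mean())
    return float(np.mean(agreement))

# Toy example: two applicants per group, favorable decisions unevenly spread.
y_pred = np.array([1, 0, 1, 1])
group = np.array(["m", "m", "f", "f"])
print(demographic_parity_gap(y_pred, group))  # 0.5: a large group-level gap
```

The two metrics can pull in opposite directions, which is precisely the trade-off described above: enforcing identical group rates can force different outcomes for otherwise similar individuals.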

The fifth challenge is that fairness ultimately encompasses policy and legal considerations, which leads us to ask: "What exactly is fair?"

For example, in the context of hiring practices, what ought to be a fair percentage of women in management positions? In other words, what percentage should AI algorithms incorporate as thresholds to promote gender parity? (Also read: How Technology Is Helping Companies Achieve Their DEI Goals in 2022.)

Before we can decide what is fair, we need to decide who gets to decide that. And, as it stands, the definition of fairness is simply what those already in power need it to be to maintain that power.

As there are many interpretations of fairness, data scientists need to incorporate fairness constraints in the context of specific use cases and desired outcomes. Responsible Data Science (RDS) is a discipline that has been influential in shaping best practices for trustworthy AI and that facilitates AI fairness.

RDS delivers a robust framework for the ethical design of AI systems, addressing key areas such as fairness, transparency, accountability and confidentiality.

While RDS provides the foundation for instituting ethical AI design, organizations are challenged to look into how such complex fairness considerations are implemented and, when necessary, remedied. Doing so will help them mitigate potential compliance and reputational risks, particularly as the momentum for AI regulation is accelerating.
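As an illustration of what "implementing a fairness constraint" can look like in code, here is a minimal, hedged sketch using the open-source Fairlearn library. The synthetic features, labels and sensitive attribute are placeholders, and logistic regression is an arbitrary choice of base model, not a prescription.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic stand-in data: not a real credit or hiring data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # placeholder applicant features
sex = rng.integers(0, 2, size=200)             # placeholder sensitive attribute
y = (X[:, 0] + 0.5 * sex + rng.normal(size=200) > 0).astype(int)  # biased labels

# The reductions approach wraps a base estimator and searches for a model
# whose predictions satisfy the chosen constraint (here, demographic parity).
base = LogisticRegression(solver="liblinear")
mitigator = ExponentiatedGradient(base, constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sex)

y_pred = mitigator.predict(X)  # constrained predictions, auditable by group
```

The design point is that the constraint is enforced during training rather than patched on afterwards, which is exactly the kind of decision an RDS framework asks teams to make explicitly and document.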

Conformance obligations under AI regulatory frameworks are inherently fragmented, spanning data governance, conformance testing, quality assurance of AI model behavior, transparency, accountability and confidentiality. These processes involve multiple steps across disparate systems, hand-offs, re-work and human-in-the-loop oversight among multiple stakeholders: IT, legal, compliance, security and customer service teams.

Process mining is a rapidly growing data science discipline that provides a data-driven approach for discovering how existing AI compliance processes actually work across diverse participants and disparate systems of record. It supports in-depth analysis of current processes and identifies process variances, bottlenecks and opportunities for optimization.
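As a rough illustration of what such discovery involves, here is a sketch using the open-source pm4py library. The event-log file name and its contents are hypothetical; in practice the log would be exported from the organization's systems of record.

```python
import pm4py

# "ai_compliance.xes" is a hypothetical event log; each event records
# which compliance task ran, for which case, and when.
log = pm4py.read_xes("ai_compliance.xes")

# Discover the directly-follows graph: which compliance steps actually
# follow which, how often, and where hand-offs and re-work loops occur.
dfg, start_activities, end_activities = pm4py.discover_dfg(log)
pm4py.view_dfg(dfg, start_activities, end_activities)

# Discover a Petri-net model of the process for later conformance checking.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
```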

In the AI compliance context, trust aware process mining spans three groups of stakeholders:

R&D teams, who are responsible for the development, integration, deployment and support of AI systems, including data governance and the implementation of appropriate algorithmic fairness constraints;

Legal and compliance teams, who are responsible for instituting best practices and processes to ensure adherence to AI accountability and transparency provisions; and

Customer-facing functions, who provide clarity for customers and consumers regarding the expected AI system inputs and outputs.

Trust aware process mining supports these stakeholders in several ways:

By visualizing compliance process execution tasks relating to AI training data, such as gathering, labeling, applying fairness constraints and data governance processes.

By discovering record-keeping and documentation process execution steps associated with data governance processes and identifying potential root causes for improper AI system execution.

By analyzing AI transparency processes, ensuring they accurately interpret AI system outputs and provide clear information for users to trust the results.

By examining human-in-the-loop interactions and actions taken in the event of actual anomalies in AI systems' performance.

By monitoring, in real time, to identify processes deviating from requirements and to trigger alerts in the event of non-compliant process tasks or condition changes (a sketch of this conformance-checking step follows below).
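This last, monitoring-oriented step can be partially automated: once a reference model of the compliant process exists, each new case can be replayed against it and deviations flagged. A hedged sketch, reusing the model and log from the pm4py example above:

```python
import pm4py

# Replay each observed case against the reference model discovered earlier
# (`log`, `net`, `initial_marking`, `final_marking` from the sketch above).
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking
)

for i, result in enumerate(diagnostics):
    if not result["trace_is_fit"]:
        # In a real deployment this would open a ticket or page a reviewer;
        # here we simply report the deviating case and its fitness score.
        print(f"case {i} deviates from the compliant process "
              f"(fitness {result['trace_fitness']:.2f})")
```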

Trust aware process mining can be an important tool for developing rigorous AI compliance best practices that help prevent unfair AI outcomes.

That's important, because AI adoption will largely depend on developing a culture of trustworthy AI. A Capgemini Research Institute study reinforces the importance of establishing consumer confidence in AI: Nearly 50% of survey respondents have experienced what they perceive as unfair outcomes from AI systems, 73% expect improved transparency and 76% believe in the importance of AI regulation.

At the same time, effective AI governance results in increased brand loyalty and repeat business. Instituting trustworthy AI best practices and governance is good business: It engenders confidence and sustainable competitive advantage.

Author and trust expert Rachel Botsman said it best when she described trust as "the remarkable force that pulls you over that gap between certainty and uncertainty; the bridge between the known and the unknown."
