Trustworthy AI versus ethical AI – what’s the difference, and why does it matter?

Posted: May 11, 2021 at 10:35 pm

(Image credit: Andrey_Popov - Shutterstock)

I've written before about semantic ambiguity in natural languages and how difficult, if not impossible, it would be to render natural languages into a digital object.

The reasons come down to the gestalt of a silicon processor versus that of the human brain and brain-to-brain communication:

Something is ambiguous when it can be understood in two or more possible senses or ways. If the ambiguity is in a single word, it is called lexical ambiguity. ... In everyday speech, ambiguity can sometimes be understood as something witty or deceitful.

Today, the terms trustworthy AI and ethical AI are used interchangeably. The problem is that trustworthy AI is not necessarily ethical, and ethical AI is not necessarily trustworthy. The casual commingling of the terms has unfortunate consequences.

Let's break down trust and trustworthiness. There is a difference between 'trust,' which can be described in pretty straightforward factual terms, and 'trustworthiness,' which is a very different matter and has a value component. It is about what or who should be trusted. Unfortunately, we can both trust those who are not trustworthy and fail to trust those who are. Trust and transparency go together: we can only trust when we are clear about what is being asked of us.

Ethics determines whether a strategy should be chosen because, in simple utilitarian terms, it secures the best overall aggregate balance of benefits against harms and costs, or whether it rests on a belief that there are fundamental human rights that should never be sacrificed. Values inform a judgment of what is a proper or proportionate balancing of the loss of individual liberty and privacy against the gain of certain public goods, or whether it is fair to expect some social and age groups to suffer disproportionately in any public health initiative.

Here is one attempt to define ethical AI:

Organizations ready to embrace AI and thrive in the Age of With must start by putting trust at the center. First, they must thoroughly assess whether their organization meets the criteria for trustworthy and ethical AI; it's a necessary step in increasing the returns and managing the risks that constitute the transformational promise of AI.

This is messed up. Trust is something given based on transparency, reputation, and sometimes, unfortunately, blind faith in wholly untrustworthy characters. It is not a consummate good. Putting trust at the center implies ethics are of secondary concern.

Here are a few examples of trustworthy but potentially unethical models:

Predictive Policing: City governments are locked in an endless cycle of allocating resources across all of the things they have to do, and policing is one area that gets cut. In an attempt to introduce some element of fairness (or at least science) to how police are deployed and redeployed, cities have turned to several AI solutions that predict where police need to be. Implementing these systems is an act of trust in their operation and outcomes, but experience has shown they lead to unethical results.

The models themselves are, for the most part, free from bias, but they count occurrences of crimes in segments of the city and assign more policing accordingly. The problem is that, though organized to fight Class 1 crimes (homicides, arson, and assaults), more boots on the ground begin to pick up more Class 2 crimes, such as vagrancy, trespassing, curfew violations, or possession of small amounts of drugs. As this data flows back into the system in an endless feedback loop, more police are assigned, and recorded crime rates soar. Moreover, since these phenomena occur mostly in neighborhoods of color, the result is entirely unethical.
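To make that mechanism concrete, here is a minimal sketch of the runaway feedback loop, with entirely hypothetical numbers, in the spirit of the Pólya-urn analyses of predictive policing. Two districts have identical true crime rates; patrols follow recorded crime, and crime is only recorded where patrols go:

```python
import random

random.seed(1)
TRUE_RATE = 0.3     # identical underlying crime rate in both districts
counts = [1, 1]     # historical recorded-crime counts per district

for day in range(1000):
    # Predictive model: patrol a district with probability proportional
    # to its share of historical recorded crime.
    share_0 = counts[0] / (counts[0] + counts[1])
    target = 0 if random.random() < share_0 else 1
    # Crime occurs in BOTH districts at the same true rate, but is only
    # recorded in the district that was patrolled.
    if random.random() < TRUE_RATE:
        counts[target] += 1

print("recorded crime after 1000 days:", counts)
```

Even though both districts offend at exactly the same rate, the recorded counts drift toward whatever split early luck produced, because the data is a record of where police looked, not of where crime happened. Add in the Class 2 offenses that extra boots on the ground inevitably pick up, and the skew compounds.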

Intrusive personalization: By clicking through websites, ordering online, and talking to Siri, people hand their most intimate information to ad servers and marketers. They tend to trust these applications even though subsequent use of the data can be highly unethical, enabling persuasion, digital phenotyping, and the disruption of civil society.

Life insurance: life insurance is the paradigm of trust. The commercials promise 'good hands,' 'the future is safe,' 'partners after life too,' 'for a secure life.' The assumption when purchasing life insurance is that the face value will be distributed promptly to your beneficiaries. There are circumstances, clearly elucidated in the contract, where that would not happen, such as suicide in the first two years, or acts of war. But surviving the two-year exclusion doesn't offer complete protection. Another exclusion is the matter of 'material misrepresentations on the application.' Effectively, it gives the insurance company the right to deny a death claim over misrepresentations that can be quite minor.

The cynical part is that insurers typically do not investigate these situations at the outset. But when a claim is large, or falls just beyond the two years, they will dig through thousands of sources to invalidate the claim and return the premiums plus interest, but not the death benefit. This is the perception of a trustworthy instrument, "If I die, my family will be taken care of," masking what is, in fact, an unethical process.

Ethical, but not trustworthy: one prominent example of ethical but not trustworthy AI is the use of machine learning in radiology. After some early gaffes, when Stanford Medical's radiation oncology model produced noticeably different results across ethnic groups, they went back to the drawing board. They developed a system that identified tumors that most of a panel of radiologists did not, and false positives and false negatives were evenly distributed across groups. They'd developed an ethical system, cleansed of bias, but trust was a different issue. First of all, doctors are a conservative bunch; many refused to accept the results. Then there was an unanticipated problem: as they licensed the software to other hospitals, the accuracy of the system dropped dramatically. The reason was that Stanford had state-of-the-art imaging technology that other hospitals did not. Trust in the system plummeted, and took some time to regain.
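That deployment failure is a textbook case of distribution shift: a model trained on one hospital's image quality quietly degrades on another's. Here is a minimal sketch on toy data (scikit-learn's digits standing in for medical images; nothing here is Stanford's actual model or data):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train on "state-of-the-art" images: clean 8x8 digit scans, pixel values 0-16.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("accuracy on same-quality images:", model.score(X_test, y_test))

# Simulate a hospital with older equipment: add sensor noise and coarser
# intensity resolution to the same test images.
rng = np.random.default_rng(0)
X_degraded = np.clip(X_test + rng.normal(0, 4, X_test.shape), 0, 16) // 4 * 4
print("accuracy on degraded images:", model.score(X_degraded, y_test))
```

The model is unchanged and unbiased; only the input pipeline differs, yet accuracy drops sharply on the degraded images. That is exactly the gap between an ethical model and a trustworthy deployment.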

Let's drop "trustworthy" as a criterion for ethical AI. Ethics are about knowing what to do and doing it. Trust is about what or who should be trusted, or how to create trust, whether or not it's ethical. Though they are commingled in specific ways, pursuing trust to the exclusion of ethics is dangerous.
