Ethics Codes Are Not Enough to Curb the Danger of Bias in AI – BRINK

Posted: January 27, 2020 at 12:26 am

AI technology is a mirror of society and may exacerbate existing power imbalances.


As we enter a new decade, what has been termed a global AI race is starting to dictate the agenda of policymakers across the globe.

To boost competitiveness and national security capabilities, many governments have released national AI strategies in which they pledge substantial investments in AI innovation. Supranational organizations are not lagging behind. Since 2018, the European Commission has been contributing 1.5 billion euros ($1.6 billion) to AI innovation via the Horizon 2020 framework.

While it is unclear how much corporations invest directly or indirectly in AI innovation (e.g., via R&D or M&A), global investment in AI startups has risen continually, reaching $37 billion in 2019, up from $1.3 billion in 2010.

But with big investments come big risks.

While governments and corporations are hedging their bets, they are turning a blind eye to the substantial challenges that this supersized AI hype brings for society. For example, there is mounting evidence that AI systems not only perpetuate but exacerbate inequalities.

Often, these inequalities occur along the well-known fault lines of race, gender, and social class, and their intersections. For example, algorithms used in the U.S. health care system have been shown to be racially biased. A 2019 study showed that a risk-prediction tool widely deployed in the U.S. to help identify and target patients' complex health needs privileges white patients.

At the same risk score, black patients were significantly sicker than white patients, meaning the tool effectively denied them the additional care they needed. Similarly, object-detection systems used in autonomous vehicles have been shown to have higher error rates for pedestrians with darker skin tones.

In theory, this predictive inequity puts pedestrians with darker skin tones at a higher risk of being struck by an autonomous vehicle (though it should be noted that most autonomous vehicles use more than one type of sensor).
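Returning to the health care example, the disparity described above is the kind that an audit of equally scored patients can surface. Below is a minimal sketch in Python, using entirely synthetic data and assuming pandas and NumPy are available; the group labels, column names and numbers are hypothetical and only illustrate the shape of such a check, not the original study's method or results.

```python
# Synthetic illustration (hypothetical groups, scores and counts) of an audit that
# compares how sick patients actually are, by group, among patients the algorithm
# assigned similar risk scores.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),      # hypothetical demographic groups
    "risk_score": rng.uniform(0, 1, size=n),      # hypothetical algorithmic risk score
})

# Hypothetical ground truth: at any given score, group B carries more chronic illness,
# mimicking what can happen when a score is trained on a proxy target such as past cost.
expected_conditions = 2 + 6 * df["risk_score"]
df["chronic_conditions"] = rng.poisson(
    np.where(df["group"] == "B", expected_conditions + 1.0, expected_conditions)
)

# Average number of chronic conditions per group within each risk-score decile.
df["score_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
audit = df.groupby(["score_decile", "group"])["chronic_conditions"].mean().unstack()
print(audit.round(2))
# If one group is systematically sicker at every score level, equally scored patients
# are not being equally served.
```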

We have also seen highly problematic uses of automated decision-making systems in the criminal justice system. For example, in 2016, the risk assessment algorithm COMPAS, used to inform fundamental decisions about a defendant's freedom, was found to operate with racial bias, thereby privileging white defendants.

In a famous HR example, a company's hiring algorithm taught itself that characteristics of maleness (such as playing baseball) were predictors of a successful career within the company. This meant that women's CVs were sent to the bottom of the pile, even when they did not explicitly state the applicant's gender.

Gender-based discrimination has also been shown to exist in job-advertising algorithms that withhold certain job or housing ads from women. Another example is gender bias in automated decisions about credit lines, which has put women at a disadvantage despite better credit scores.

There is also evidence of discrimination based on social class. Class-based discrimination has been shown to occur when government agencies use algorithms to automate decision-making in areas such as child welfare or unemployment payments, systematically putting poor people at a disadvantage by increasing their risk of losing custody of their children or of losing their benefits.

Since 2018, new ideas around making machine learning and AI fair, ethical and transparent have captivated technologists and researchers alike (not least evidenced in the success of the ACM FAT* conference). The main idea behind these interventions is to mitigate the social impacts of AI by technological means. But there are big obstacles to this quest.

Making AI fair requires a sound definition of fairness, related to anti-discrimination law, and a definition of bias that can be codified. But statistical bias is something entirely different from cultural and cognitive bias, and fairness in AI design points us to bigger questions about the moral and legal foundations of fairness and about how data is classified and processed.
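To make the codification problem concrete, here is a minimal sketch in Python (entirely hypothetical data, NumPy assumed available) of two common statistical fairness criteria computed on the same set of predictions. Which criterion should hold, and for which groups, is exactly the moral and legal question the code itself cannot settle.

```python
# Minimal sketch (entirely hypothetical data) of two common statistical fairness
# criteria computed on the same binary predictions for two groups.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

group = rng.integers(0, 2, size=n)                         # hypothetical group label (0 or 1)
y_true = rng.binomial(1, np.where(group == 1, 0.3, 0.5))   # hypothetical true outcomes
y_pred = rng.binomial(1, np.where(group == 1, 0.35, 0.5))  # hypothetical model decisions

def selection_rate(pred, mask):
    """Share of people in the group who receive a positive decision."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of truly positive people in the group who receive a positive decision."""
    positives = mask & (true == 1)
    return pred[positives].mean()

# "Demographic parity": positive decisions handed out at equal rates across groups.
dp_gap = abs(selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1))

# "Equal opportunity": equal true-positive rates across groups.
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
             - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity gap: {dp_gap:.3f}")
print(f"equal opportunity gap:  {eo_gap:.3f}")
# When base rates differ between groups, these two gaps generally cannot both be zero,
# so "fair" AI first requires choosing which statistical definition to encode.
```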

Similar issues arise in the context of ethical AI.

For example, ethics codes have been shown not to affect decision-making in software development, and the idea of a moral or ethical machine remains abstract and unfit for real-world contexts.

It remains questionable whether there can even be a technological fix for a social problem. AI technology is a mirror of society and may exacerbate existing power imbalances. Issues of discrimination and oppression require policy action, not just attention, from governments and companies alike.

AI systems are designed for scale. But that also means their impacts scale as well: for better and, often, for worse. Currently, regulators are not prepared for this scope. To date, the regulatory and legal environment of the (often global) development and deployment of AI technologies remains weak and patchy.

While the General Data Protection Regulation has brought some clarity with regard to data collection and processing (but notably not inferential analytics), it has created legal asymmetries between EU-based and non-EU-based users of the same service.

There are ongoing efforts to establish rules for responsible AI, notably by the European Commission. Such frameworks spark important conversations about changing business practices in AI design, but they have yet to be tested in practice and, above all, they remain unenforceable.

We see this void being filled by precedent-setting lawsuits. The systemic and potentially devastating discrimination that can occur through AI systems may eventually lead to class-action lawsuits, for example, over government use of AI systems or over corporations using AI in hiring.

And this is not all. It is likely that more challenges, harms and risks will emerge in the context of AI and the climate crisis, misinformation, automated warfare, worker and citizen surveillance, mobility, city design and governance, and more.

In the meantime, the narrative that technology can fix society and will lead us into a prosperous Fourth Industrial Revolution has helped us to ignore the actual abilities and limits of AI. This illusion poses a risk in that it hinders sustainable innovation.

AI technologies cannot easily be deployed into work processes and the organization of social life. Often, underpaid workers and individuals labor to integrate AI technologies, making up for technological shortcomings through human intervention (whether in front of or behind the screen).

We must be mindful of the fact that AI technologies continue to be grounded in statistical analysis. This means that automated predictions are based on correlation, not causation, limiting what conclusions can be drawn from an inference.
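A small illustration of that limit, sketched in Python with synthetic data and scikit-learn assumed available: a model that exploits a feature merely correlated with the outcome can look accurate, yet its conclusions collapse once the correlation no longer holds. All variable names and numbers below are hypothetical.

```python
# Synthetic illustration: a model that exploits a feature merely correlated with the
# outcome looks accurate until that correlation breaks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000

cause = rng.normal(size=n)                        # the variable that actually drives the outcome
proxy = cause + rng.normal(scale=0.1, size=n)     # merely correlated with the cause in training data
y_train = (cause + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(proxy.reshape(-1, 1), y_train)

# In deployment, context changes and the proxy decouples from the true cause.
proxy_shifted = rng.normal(size=n)                # no longer tracks the cause
y_shifted = (rng.normal(size=n) > 0).astype(int)  # outcomes now driven by something unobserved

print("accuracy while the correlation holds:", model.score(proxy.reshape(-1, 1), y_train))
print("accuracy after the correlation breaks:", model.score(proxy_shifted.reshape(-1, 1), y_shifted))
# The model never learned why the outcome occurs, only what co-occurred with it, which
# is the sense in which statistical predictions are correlational rather than causal.
```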

While many firms generously invest in AI, the return on investment, especially for machine learning projects, is turning out to be slow. New interventions, such as algorithmic auditing, are also seeing slow uptake, with new specialist firms (such as ORCAA and ArthurAI) only fairly recently entering the market.

Against that backdrop, both governments and corporations would do well to consider the social, economic and ecological opportunity costs of the AI hype: In 2020, innovation must look beyond AI to address the planet's most urgent problems, ranging from inequality to the climate crisis.
