AI Weekly: The road to ethical adoption of AI

Posted: August 14, 2021 at 1:30 am


As new principles emerge to guide the development of ethical, safe, and inclusive AI, the industry faces self-inflicted challenges. Increasingly, there are many sets of guidelines (the Organization for Economic Cooperation and Development's AI repository alone hosts more than 100 documents) that are vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them.

This is cause for alarm because, as the coauthors of a recent paper write, AI's impacts are hard to assess, especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may not come to pass and on unrealistic generalizations that make the conversations untenable. In particular, companies run the risk of engaging in "ethics shopping," "ethics washing," or "ethics shirking," in which they improve their standing with customers to build trust while minimizing accountability.

The points are salient in light of efforts by the European Commission's High-Level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building trustworthy AI. In a paper, digital ethics researcher Mark Ryan argues that AI isn't the type of thing that has the capacity to be trustworthy, because the category of trust simply doesn't apply to AI. In fact, AI can't have the capacity to be trusted as long as it can't be held responsible for its actions, he argues.

Trust is distinct from risk analysis, which rests solely on predictions drawn from past behavior, he explains. While reliability and past experience may be used to develop, confer, or reject trust placed in the trustee, they are not the sole or defining characteristics of trust. Though we may trust people we rely on, it is not presupposed that we do.

Productizing AI responsibly means different things to different companies. For some, "responsible" implies adopting AI in a manner that's ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, responsible AI promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable, at least in theory.
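One practical way teams back up the "justified and explainable" promise is to record every automated decision alongside its inputs, the model version, and a human-readable rationale, so decisions can be audited later. The sketch below is a minimal, hypothetical illustration of that pattern; the names DecisionRecord and log_decision are invented for this example, not from any specific framework.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One automated decision, captured for later review."""
    model_version: str
    inputs: dict      # the features the model saw
    output: str       # the decision it made
    rationale: str    # human-readable justification
    timestamp: float

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    # Append-only JSON Lines log: each decision stays independently auditable.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example: record a loan decision so it can be justified later.
log_decision(DecisionRecord(
    model_version="credit-model-1.4.2",
    inputs={"income": 54000, "debt_ratio": 0.31},
    output="approved",
    rationale="debt_ratio below 0.35 threshold",
    timestamp=time.time(),
))
```

An append-only log like this doesn't make a model explainable by itself, but it creates the paper trail that accountability processes depend on.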

Recognizing this, organizations must overcome misaligned incentives, disciplinary divides, uneven distributions of responsibility, and other blockers to adopting AI responsibly. Doing so requires an impact assessment framework that's not only broad, flexible, iterative, guided, and possible to operationalize, but highly participatory as well, according to the paper's coauthors. They emphasize the need to shy away from anticipating only those impacts assumed to be important and to become more deliberate about deployment choices. As a way of normalizing the practice, the coauthors advocate for including these ideas in documentation the same way topics like privacy and bias are currently covered.

Another paper, this one from researchers at the Data & Society Research Institute and Princeton, posits algorithmic impact assessments as a tool to help AI designers analyze the benefits and potential pitfalls of algorithmic systems. Impact assessments can address issues of transparency, fairness, and accountability by providing guardrails and accountability forums that can compel developers to make changes to AI systems.

This is easier said than done, of course. Algorithmic impact assessments focus on the effects of AI decision-making, which doesn't necessarily measure harms and may even obscure them; real harms can be difficult to quantify. But if the assessments are implemented with accountability measures, they can perhaps foster technology that respects, rather than erodes, dignity.
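Parts of an assessment can still be quantified, even if a single number never tells the whole story. One common (and admittedly partial) measure is the demographic parity gap: the difference in positive-decision rates across groups. The sketch below, using made-up data, shows how such a metric might feed into an assessment; it is illustrative, not the framework the paper proposes.

```python
from collections import defaultdict

def positive_rate_gap(decisions: list[tuple[str, bool]]) -> float:
    """Demographic parity gap: largest spread in approval rates across groups.

    `decisions` is a list of (group, was_approved) pairs.
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [approvals, count]
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    rates = [approvals / count for approvals, count in totals.values()]
    return max(rates) - min(rates)

# Hypothetical decision log: group "a" is approved 75% of the time,
# group "b" only 25% -- a 50-point gap an assessment should flag.
log = [("a", True), ("a", True), ("a", False), ("a", True),
       ("b", True), ("b", False), ("b", False), ("b", False)]
print(f"parity gap: {positive_rate_gap(log):.2f}")  # 0.75 - 0.25 = 0.50
```

Exactly as the paragraph above warns, a clean metric like this can obscure harms it doesn't capture, which is why the accountability forums around the number matter as much as the number itself.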

As Montreal AI ethics researcher Abhishek Gupta recently wrote in a column: "Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy; others relate to business metrics. But each requires careful consideration, as each has consequences for the final outcome from the system. To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations."
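Gupta's latency-versus-accuracy point can be made concrete. One such reformulation, sketched below with invented numbers, frames model selection as constrained optimization: pick the most accurate candidate that still fits a latency budget, rather than chasing either metric alone.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float    # validation accuracy
    latency_ms: float  # p95 serving latency

def pick_model(candidates: list[Candidate], budget_ms: float) -> Candidate:
    """Return the most accurate candidate whose latency fits the budget."""
    feasible = [c for c in candidates if c.latency_ms <= budget_ms]
    if not feasible:
        raise ValueError("no candidate satisfies the latency budget")
    return max(feasible, key=lambda c: c.accuracy)

# Hypothetical candidates: under a 100 ms budget, the "medium" model wins
# even though "large" is more accurate in isolation.
models = [
    Candidate("large", accuracy=0.94, latency_ms=220.0),
    Candidate("medium", accuracy=0.91, latency_ms=80.0),
    Candidate("small", accuracy=0.87, latency_ms=25.0),
]
print(pick_model(models, budget_ms=100.0).name)  # -> "medium"
```

The design choice here is that the budget encodes a user-facing value judgement (responsiveness) as a hard constraint, leaving accuracy as the quantity to optimize within it.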

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer
