UC convenes Artificial Intelligence Working Group to harness innovative technology, establish guardrails for equitable and ethical use

Posted: December 19, 2020 at 8:32 am

Artificial intelligence (AI), machines or computer programs capable of learning and problem-solving to perform tasks that typically require human intelligence, can make people and organizations more efficient. At the same time, these technological advances can prompt serious concerns around privacy, equity and safety.

In response to this societal challenge, the University of California formed a Presidential Working Group on Artificial Intelligence in early 2020 that brings together leading campus experts to determine how UC can harness the significant benefits offered by AI while ensuring its responsible use.

"AI can help UC operate better in many ways, such as reducing biases inherent in human decision-making, strengthening cybersecurity and improving the quality of health care," said Stuart Russell, professor of computer science at UC Berkeley, co-chair of the working group and a world-renowned expert on the development and ethical deployment of artificial intelligence. "The work of this panel places UC at the forefront of developing principles and standards for the ethical use of AI in a university setting."

Forms of AI, such as machine learning and predictive modeling, have been used for decades to help people streamline their work by automating time-consuming or complex tasks. Today, AI is used for everything from financial fraud detection to identifying terrorism suspects. When used correctly, it has shown promise in uncovering unconscious bias in the selection of job applicants, or in improving health care outcomes by more thoroughly and rapidly processing patient health metrics, data and images.

Areas where AI can most benefit UC operations include health, human resources, campus safety and student experiences, such as admissions and grading. If not thoughtfully implemented and monitored, however, AI can have unintended consequences, such as reinforcing human biases, misidentifying an individual through facial recognition, inadvertently revealing private information or failing to accurately diagnose a patient's symptoms.

"The University of California is an ideal place for the thorny undertaking of defining safe and ethical uses for AI," said UC President Michael V. Drake, M.D. "We have the intellectual horsepower in technology, law, ethics and other disciplines to realize the benefits of AI while establishing necessary, practical safeguards."

UC's AI Working Group has the potential to positively shape artificial intelligence beyond the University's own uses. Because of UC's size and stature as a preeminent public research university, and its standing as California's third-largest employer, its guidelines for the ethical development and implementation of AI could influence standards within academia, business and government worldwide.

The AI Working Group's University of California Ethical Principles, slated for publication in January 2021, will focus on transparency, fairness and accountability. A full report on AI governance and technical recommendations for the University is expected in fall 2021.

In addition to Russell, UC's AI Working Group is co-chaired by Brandie Nonnecke, director of the CITRIS Policy Lab at UC Berkeley, and Alex Bustamante, senior vice president and chief compliance and audit officer in UC's Office of the President. Members of the working group draw from faculty, staff and researchers at all 10 UC campuses. A complete list is on the website of the UC Office of the President.

The presidential group was launched by the UC Office of the President's Office of Ethics, Compliance and Audit Services (ECAS) and the CITRIS Policy Lab, which is housed within CITRIS and the Banatao Institute.
