University of California to publish database of how it uses AI – EdScoop

Posted: October 21, 2021 at 10:29 pm

The University of California announced plans Monday to launch a public database and assess how the system is using artificial intelligence-based technologies.

Administrators said they'll follow four recommendations from a new "Responsible Artificial Intelligence" report on risks and opportunities in academics, health, human resources and policing. The report sheds light on how the 10-campus, 250,000-student system currently uses AI and includes recommendations on incorporating eight guiding ethical principles for UC's use of AI in its services, such as procurement, and monitoring the impact of automated decision-making, facial recognition and chatbots. UC also plans to establish departmental AI councils, according to the report.

The report is among the first of its kind for colleges and universities, according to a statement UC emailed to EdScoop.

"Because of UC's size and stature as a preeminent public research university as well as California's third-largest employer, the principles and guidance from the report have the potential to positively inform the development and implementation of AI standards beyond university settings within the spheres of research, business, and government," the statement reads.

A working group, convened by UC President Michael Drake in 2020, gathered information to better understand the AI landscape at UC. According to the system's statement, the group focused on where AI is most likely to affect individual rights in university settings.

In academics, the report broke recommendations down by how AI could affect admissions and financial aid, student success, mental health, and grading and remote proctoring. The report recommended using AI-powered software to inform human decision-making, or in areas where the technology could improve the student experience through outreach, like AI-powered chatbots.

But administrators need to be careful about using AI to automate decisions entirely, the report states, citing concerns about historical bias being reinforced when software pulls from previous data, particularly in admissions. Some UC campuses already use formulas to help sort through admissions, and AI could help clean data on those admissions and automatically calculate scores, but there must be human oversight to ensure equity, the report states.

"This means that the computational model must be able to take into account difficult-to-quantify criteria such as valuing life experiences as part of a student's capacity for resilience and persistence needed to complete college-level work," it reads. "If the computational model does not accommodate criteria such as life experience, a human must remain in the loop on that part of the review."

Colleges and universities across the country are using AI in everyday processes, lightening workloads so human employees can focus on tasks that require human judgment or empathy.

But there are also widespread ethical concerns about AI reinforcing historical bias. Universities often lack a universal ethical approach when buying new technologies, according to Brandie Nonnecke, the founding director of UC Berkeley's lab on technology use policy and one of the working group members.

"It's good we're setting up these processes now," Nonnecke said in the press release. "Other entities have deployed AI and then realized that it's producing discriminatory or less efficient outcomes. We're at a critical point where we can establish governance mechanisms that provide necessary scrutiny and oversight."
