Why kids need special protection from AI's influence – MIT Technology Review

Posted: September 18, 2020 at 1:05 am

Vosloo led the drafting of a new set of guidelines from Unicef designed to help governments and companies develop AI policies that consider children's needs. Released on September 16, the nine new guidelines are the culmination of several consultations held with policymakers, child development researchers, AI practitioners, and kids around the world. They also take into consideration the UN Convention on the Rights of the Child, a human rights treaty adopted in 1989.

The guidelines aren't meant to be yet another set of AI principles, many of which already say the same things. In January of this year, a Harvard Berkman Klein Center review of 36 of the most prominent documents guiding national and company AI strategies found eight common themes, among them privacy, safety, fairness, and explainability.

Rather, the Unicef guidelines are meant to complement these existing themes and tailor them to children. For example, AI systems shouldn't just be explainable; they should be explainable to kids. They should also consider children's unique developmental needs. "Children have additional rights to adults," Vosloo says. They're also estimated to account for at least one-third of online users. "We're not talking about a minority group here," he points out.

In addition to mitigating AI harms, the goal of the principles is to encourage the development of AI systems that could improve children's growth and well-being. If they're designed well, for example, AI-based learning tools have been shown to improve children's critical-thinking and problem-solving skills, and they can be useful for kids with learning disabilities. Emotional AI assistants, though relatively nascent, could provide mental-health support and have been demonstrated to improve the social skills of autistic children. Face recognition, used with careful limitations, could help identify children who've been kidnapped or trafficked.

Children should also be educated about AI and encouraged to participate in its development. "It isn't just about protecting them," Vosloo says. "It's about empowering them and giving them the agency to shape their future."


Unicef isn't the only one thinking about the issue. The day before those draft guidelines came out, the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, released a set of AI principles for children too.

The announcement came a year after BAAI released the Beijing AI principles, understood to be the guiding values for China's national AI development. The new principles outlined specifically for children are meant to be a concrete implementation of the more general ones, says Yi Zeng, the director of the AI Ethics and Sustainable Development Research Center at BAAI, who led their drafting. They closely align with Unicef's guidelines, also touching on privacy, fairness, explainability, and child well-being, though some of the details are more specific to China's concerns. A guideline to improve children's physical health, for example, includes using AI to help tackle environmental pollution.

While the two efforts are not formally related, the timing is also not coincidental. After a flood of AI principles in the last few years, both lead drafters say creating more tailored guidelines for children was a logical next step. "Talking about disadvantaged groups, of course children are the most disadvantaged ones," Zeng says. "This is why we really need [to give] special care to this group of people." The teams conferred with one another as they drafted their respective documents. When Unicef held a consultation workshop in East Asia, Zeng attended as a speaker.

Unicef now plans to run a series of pilot programs with various partner countries to observe how practical and effective its guidelines are in different contexts. BAAI has formed a working group with representatives from some of the largest companies driving the country's national AI strategy, including education technology company TAL, consumer electronics company Xiaomi, computer vision company Megvii, and internet giant Baidu. The hope is to get them to start heeding the principles in their products and to influence other companies and organizations to do the same.

Both Vosloo and Zeng hope that by articulating the unique concerns AI poses for children, the guidelines will raise awareness of these issues. "We come into this with eyes wide open," Vosloo says. "We understand this is kind of new territory for many governments and companies. So if over time we see more examples of children being included in the AI or policy development cycle, more care around how their data is collected and analyzed, if we see AI made more explainable to children or to their caregivers, that would be a win for us."
