Creating a Curious, Ethical, and Diverse AI Workforce

Does the law of war apply to artificially intelligent systems? The U.S. Department of Defense is taking this question seriously: in February 2020 it adopted ethical principles for artificial intelligence (AI) based on the set of AI ethics guidelines the Defense Innovation Board proposed last year. However, just as defense organizations must abide by the law of war and other norms and values, individuals remain responsible for the systems they create and use.

Ensuring that AI systems behave as responsibly and lawfully as the people who build and use them requires changes to engineering practice. Weighing ethical, moral, and legal implications is not new to defense organizations, but it is only starting to become common on AI engineering teams. AI systems are revolutionizing many commercial products and services, and they are relevant to many military uses, from institutional processes and logistics to tools that inform warfighters in the field. As AI becomes ubiquitous, ethics must be integrated into AI development now, before it is too late.

The United States needs a curious, ethical AI workforce working collaboratively to make trustworthy AI systems. Members of AI development teams must have deep discussions about the implications of their work for the warfighters who will use these systems. This work does not come easily. To develop AI systems effectively and ethically, defense organizations should foster an ethical, inclusive work environment and hire a diverse workforce. This workforce should include curiosity experts (people who focus on human needs and behaviors), who are more likely to imagine the unwanted and unintended consequences of a system's use and misuse, and to ask tough questions about those consequences.

Create an Ethical, Inclusive Environment

People with similar concepts of the world and a similar education are more likely to miss the same issues because of their shared bias. The data used by AI systems carry similar biases, and the people collecting the data may not be aware of how their assumptions are conveyed through the data they create. An organization's bias will pervade the data it provides, and AI systems developed with that data will perpetuate the bias.
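To make the idea concrete, here is a minimal sketch of the kind of representation audit a team might run on its training data before modeling. The records, field name, and 30-percent threshold are invented for illustration, not drawn from any real defense dataset.

```python
# Hypothetical audit: how evenly are groups represented in the training data?
from collections import Counter

records = [
    {"collector_background": "pilot", "label": "threat"},
    {"collector_background": "pilot", "label": "no_threat"},
    {"collector_background": "pilot", "label": "threat"},
    {"collector_background": "analyst", "label": "no_threat"},
]

def representation_report(records, field, min_share=0.30):
    """Print each group's share of the data and flag underrepresented groups."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{group}: {n}/{total} ({share:.0%}){flag}")

representation_report(records, "collector_background")
```

A skewed report like this one (75 percent of the records from a single background) is a signal that whatever the model learns will reflect that group's perspective far more than others'.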

Bias can be mitigated by ethical workforces that value diverse human intelligence and the wide range of possible life experiences. Diversity doesn't just mean making sure there is a mix of genders on the project team, or that people look different, though those attributes are important. A project team should span a wide range of life experiences, disability statuses, social statuses, and experiences of being "the other." Diversity also means including a mix of people in uniform, civilians, academic partners, and contractors. This does not mean lowering the bar of experience or talent, but rather extending it. To be successful, all of these individuals need to be engaged as full members of the project team in an inclusive environment.

Individuals coming from different backgrounds will be more capable of imagining a broad set of uses and, more importantly, misuses of these systems. Assembling a diverse workforce that brings talented, experienced people together will reinforce technology ethics. Imbuing the workforce with curiosity, empathy, and understanding for the warfighters who use and are affected by the systems will further support the work.

Diverse and inclusive leadership is key to an organization's success. When an organization's leadership isn't diverse, it is less likely to attract and, more importantly, retain talent, largely because talented candidates may assume the organization is not inclusive or holds no future for them. If leadership is lacking in diversity, an organization can promote someone early or hire from outside if necessary.

Adopting a set of technology ethics is a first step toward helping project teams make better, more confident, ethical decisions. Technology ethics are ethics designed specifically for the development of software and emerging technologies. They align diverse project teams and help them set appropriate norms for AI systems. Much as physicians adhere to a version of the American Medical Association's Code of Medical Ethics, technology ethics guide a project team working on AI systems that have the potential for harm (most AI systems do). These project teams need to have early, difficult conversations about how they will manage a variety of situations.

A shared set of technology ethics serves as a central point to guide decision-making. We are all unique, yet shared knowledge and experiences are what naturally draw people together, which can make working with people who have completely different experiences feel like a bigger challenge. However, the experience of working with people who are significantly different builds the capacity for innovation and creative thinking. Using ethics as a bridge between differences strengthens the team by creating shared knowledge and common ground. Technology ethics must be woven into the work at a very early stage, and the AI workforce must continue to advocate for them as the AI system matures. Human involvement (a human-in-the-loop) is required throughout the life cycle of AI systems: an AI system cannot simply be turned on and left to run. Technology ethics should be considered throughout that entire life cycle.
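As a rough illustration of what a human-in-the-loop looks like in code, the sketch below routes low-confidence outputs to a person instead of letting the system act on them. The Prediction type, the 0.9 threshold, and the escalation message are hypothetical stand-ins, not part of any fielded system.

```python
# Hypothetical human-in-the-loop gate: uncertain predictions are escalated
# to a human reviewer rather than acted on automatically.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def decide(pred: Prediction, threshold: float = 0.9) -> str:
    """Act automatically only when the model is confident; otherwise escalate."""
    if pred.confidence >= threshold:
        return f"auto: {pred.label}"
    return "escalate: route to human reviewer"

print(decide(Prediction("no_threat", 0.97)))  # auto: no_threat
print(decide(Prediction("threat", 0.62)))     # escalate: route to human reviewer
```

The design point is that the escalation path exists for the whole life cycle, not just during testing, and the threshold itself is one of the norms a project team should set deliberately.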

Without technology ethics, it is harder for project teams to align, and important discussions may be inadvertently skipped. Technology ethics bring into focus the project team's obligation to take its work and its implications seriously, and they empower individuals to ask tough questions about the unwanted and unintended consequences they imagine arising from the system's use and misuse. By aligning on a set of technology ethics, the development team can define clear directives for system functionality.

Identifying a set of technology ethics is an intimidating task and one that should be approached carefully. Some project teams will initially adopt guidance from organizations, such as the Association for Computing Machinery's Code of Ethics and Professional Conduct or the Montreal Declaration for a Responsible Development of Artificial Intelligence, while companies like IBM and Microsoft are developing their own guidance. The Defense Department's five newly adopted AI ethics principles hold that AI should be responsible, equitable, traceable, reliable, and governable. The original Defense Innovation Board recommendation is described in detail in its supporting document.

In the past, ethics were only referenced in, not directly part of, software development efforts. Because AI systems can cause much broader harm, much more quickly, than earlier software technologies, they raise new ethical questions that the AI workforce must address. A skilled and diverse workforce, bursting with curiosity and engaged with the AI system, will produce AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.

Value Curiosity

AI systems will be created and used by a wide range of individuals, and misuse will come from potentially unexpected sources: individuals and organizations with completely different experiences and potentially unlimited resources. Adversaries are already using techniques that are very difficult to anticipate. Adopting technology ethics isn't enough to make AI systems safe. Making sure the teams building these systems can imagine, and then mitigate, issues is profoundly important.

The term "curiosity experts" is shorthand for people with a broad range of skills and job titles, including cognitive psychologists, digital anthropologists, human-machine and human-computer interaction professionals, and user experience researchers and designers. Curiosity experts' core responsibility is to be curious and speculative within the ethical, inclusive environment an organization has created. Curiosity experts will partner with defense experts, and they may already be part of your team, conducting research and helping to make interactions more usable.

Curiosity experts connect human needs, the initial problem to be solved, and the solution to an engineering problem. Working with defense experts (and ideally the warfighters themselves), they enable a project team to uncover potential issues before they arise by focusing on how the system will be used, the situation and constraints surrounding its use, and the abilities of the people who will use it. Curiosity experts can conduct a variety of proven qualitative and quantitative methods, and once they have a solid understanding, they share that information with the project team in easy-to-consume formats such as stories. The research they conduct is necessary to understand the needs being addressed, so that the team builds the right thing. This may sound familiar: wargaming uses very similar tactics, and storytelling is an important component.

It's important for curiosity experts to lead (and then teach others to lead) co-design activities such as abusability testing and other speculative exercises, in which the project team imagines the misuse of the AI system it is considering building. AI systems need to be interpretable and usable by warfighters, a priority recognized by the Defense Advanced Research Projects Agency in its Explainable AI program. Curiosity experts with interaction design experience can contribute materially to this effort: they keep the people using these systems in mind and call out the AI workforce when necessary. When the project team asks, "Why don't they get it?" curiosity experts can nudge the team to pivot to "What can we do better to meet the warfighters' needs?" As individuals on the team become more comfortable with this mindset, they become curiosity experts at heart, even when their primary responsibility is something else.

Hire a Diverse Workforce

Building diverse project teams helps to increase each individual's creativity and effectiveness. Diversity in this sense relates to skill sets, education (both school and program), and approach to framing problems. Bringing together different ways of looking at the world helps teams and organizations solve challenging problems faster.

Building a diverse project team to advance this ethical framework will take time and effort. Organizations that represent minority groups, such as the National Society of Black Engineers, and technical conferences that embrace diversity, such as the Grace Hopper Celebration, can be great resources. Prospective candidates should ask hard questions about the organization, including about its ethics, diversity, and inclusion; such questions are the mark of the curious individuals you want on your team. Once you recruit more diverse individuals, you can set progress goals. For example, Atlassian introduced a new approach to diversity reporting in 2016 that focused on team dynamics and shared how people from underrepresented backgrounds were spread across the company's teams, as the sketch below illustrates.
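Here is a small sketch of what team-level (rather than company-wide) reporting might compute, in the spirit of the Atlassian approach described above; the team names and headcounts are invented for illustration.

```python
# Hypothetical team-level diversity report: company-wide numbers can look
# healthy even when underrepresented people are isolated on a few teams.
teams = {
    "perception": {"total": 8, "underrepresented": 1},
    "autonomy":   {"total": 6, "underrepresented": 3},
    "evaluation": {"total": 5, "underrepresented": 0},
}

for name, t in teams.items():
    share = t["underrepresented"] / t["total"]
    note = "  <-- isolated or absent" if t["underrepresented"] <= 1 else ""
    print(f"{name}: {t['underrepresented']}/{t['total']} ({share:.0%}){note}")
```

Reporting at this granularity surfaces exactly the problem the next paragraphs describe: a lone "diverse hire" on an otherwise homogeneous team.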

It is common in technology, and in AI specifically, to value particular degrees and learning styles. Some employers have staffed their organizations with class after class of graduates from particular degree programs at particular universities. These organizations benefit from the ability of these graduates to bond easily and rely on shared knowledge. However, those same benefits can become weaknesses. The peril of creating high-risk products and services with a homogeneous team is that its members may all miss the same critical piece of information, have the same gaps in technical knowledge, make the same assumptions about the process, or be unable to think differently enough to imagine unintended consequences. They won't even realize their mistake until it is too late.

In many organizations this risk is disguised by adding one or two individuals to a group who differ significantly from the majority in an aspect such as gender, race, or culture. Unfortunately, their presence isn't enough to significantly reduce the risk of groupthink, and their perspectives are likely to be dismissed as different if the group does not include enough socially distinct individuals. Eventually, because of these factors, retention becomes a significant concern. Project teams need to be built with diversity from the start, or be quickly adjusted.

A diverse team of thoughtful and talented machine-learning experts, programmers, and curiosity experts (among others) is still not complete. The AI workforce needs direct access to experts in the military or defense industry who are familiar with the situations and organizations the AI system is being designed for, and who can spot assumptions and issues early. These individuals, whether in uniform, civilians, or consultants, may also act as liaisons to the warfighters so that the team can make more direct contact with those closest to the work.

Rethinking the Workforce

Encouraging project teams to be curious and speculative in imagining scenarios at the edges of AI will help prepare for actual system use. As the AI workforce considers how to manage a variety of use cases, framing conversations with technology ethics will provoke serious and contentious discussions. These conversations are invaluable for aligning the team before it faces a difficult situation. A clear understanding of what is expected in specific situations helps the team create mitigation plans for how it will respond, both during the creation of the AI system and once it is in production.

The AI sector needs to think about its workforce in different ways. As Prof. Hannah Fry suggests in The Guardian, diversity and inclusion in the workforce are just as important as a technology ethics pledge (if not more so) for reducing unwanted bias and unintended consequences. Creating an ethical, inclusive environment, valuing curiosity, and hiring a diverse workforce are necessary steps toward ethical AI. Clear communication and alignment on ethics are the best way to bring disparate groups of people to a shared understanding and to create AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.

Over the next several years, my organization, Carnegie Mellon University's Software Engineering Institute, will be advancing a professional discipline of AI engineering to help the defense and national security communities develop, deploy, operate, and evolve game-changing mission capabilities that leverage rapidly evolving artificial intelligence and machine-learning technologies. At the core of this effort is supporting the AI workforce in designing trustworthy AI systems by successfully integrating ethics into a diverse workforce.

Carol Smith (@carologic) is a senior research scientist in human-machine interaction at Carnegie Mellon University's Software Engineering Institute and an adjunct instructor at CMU's Human-Computer Interaction Institute. She has been conducting user experience research to improve the human experience across industries for 19 years and working to improve AI systems since 2015. Carol is recognized globally as a leader in user experience, has presented over 140 talks and workshops in more than 40 cities around the world, served two terms on the User Experience Professionals Association international board, and is currently an editor for the Journal of Usability Studies and the upcoming Association for Computing Machinery Digital Threats: Research and Practice journal's Special Issue on Human-Machine Teaming. She holds an M.S. in human-computer interaction from DePaul University.

This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center.

The views, opinions, and/or findings contained in this material are those of the author(s) and should not be construed as an official government position, policy, or decision, unless designated by other documentation.

Image: U.S. Air Force (Photo by J.M. Eddins Jr.)
