Does the law of war apply to artificially intelligent systems? The U.S. Department of Defense is taking this question seriously: in February 2020, it adopted ethical principles for artificial intelligence (AI) based on the set of AI ethical guidelines the Defense Innovation Board proposed last year. But just as defense organizations must abide by the law of war and other norms and values, individuals are responsible for the systems they create and use.
Ensuring that AI systems are as committed as we are to responsible and lawful behavior requires changes to engineering practices. Considerations of ethical, moral, and legal implications are not new to defense organizations, but they are only starting to become common in AI engineering teams. AI systems are revolutionizing many commercial products and services, and they are relevant to many military applications, from institutional processes and logistics to systems informing warfighters in the field. As AI becomes ubiquitous, ethics must be integrated into AI development now, before it is too late.
The United States needs a curious, ethical AI workforce working collaboratively to make trustworthy AI systems. Members of AI development teams must have deep discussions about the implications of their work for the warfighters who will use these systems. This work does not come easily. To develop AI systems effectively and ethically, defense organizations should foster an ethical, inclusive work environment and hire a diverse workforce. This workforce should include curiosity experts (people who focus on human needs and behaviors), who are more likely to imagine the potential unwanted and unintended consequences associated with a system's use and misuse, and to ask tough questions about those consequences.
Create an Ethical, Inclusive Environment
People with similar concepts of the world and a similar education are more likely to miss the same issues due to their shared bias. The data used by AI systems are similarly biased, and the people collecting the data may not be aware of how that bias is conveyed through the data they create. An organization's bias will be pervasive in the data provided by that organization, and the AI systems developed with that data will perpetuate the bias.
Bias can be mitigated with ethical workforces that value diverse human intelligence and the wide set of possible life experiences. Diversity doesn't just mean making sure that there is a mix of genders on the project team, or that people look different, though those attributes are important. A project team should represent a wide range of life experiences, disability statuses, social statuses, and experiences of being "the other." Diversity also means including a mix of people in uniform, civilians, academic partners, and contractors, as well as individuals with diverse life experiences. This diversity does not mean lowering the bar of experience or talent, but rather extending it. To be successful, all of these individuals need to be engaged as full members of the project team in an inclusive environment.
Individuals coming from different backgrounds will be more capable of imagining a broad set of uses and, more importantly, misuses of these systems. Assembling a diverse workforce that brings talented, experienced people together will reinforce technology ethics. Imbuing the workforce with curiosity, empathy, and understanding for the warfighters who use and are affected by the systems will further support the work.
Diverse and inclusive leadership is key to an organization's success. When leadership in the organization isn't diverse, it is less likely to attract and, more importantly, retain talent. This is primarily because talented individuals may assume that the organization is not inclusive or that there is no future in the organization for them. If leadership is lacking in diversity, an organization can promote someone early or hire from the outside if necessary.
Adopting a set of technology ethics is a first step toward supporting project teams in making better, more confident decisions that are ethical. Technology ethics are ethics designed specifically for the development of software and emerging technologies. They help align diverse project teams and assist them in setting appropriate norms for AI systems. Much as physicians adhere to a version of the American Medical Association's Code of Medical Ethics, technology ethics help guide a project team working on AI systems that have the potential for harm (most AI systems do). These project teams need to have early, difficult conversations about how they will manage a variety of situations.
A shared set of technology ethics serves as a central point to guide decision-making. We are all unique, yet many of us have shared knowledge and experiences. These are what naturally draw people together, making it feel like a bigger challenge to work with people who have completely different experiences. However, the experience of working with people who are significantly different builds the capacity for innovation and creative thinking. Using ethics as a bridge between differences strengthens the team by creating shared knowledge and common ground. Technology ethics must be woven into the work at a very early stage, and the AI workforce must continue to advocate technology ethics as the AI system matures. Human involvement (a human-in-the-loop) is required throughout the life cycle of AI systems: an AI system cannot simply be turned on and left to run. Technology ethics should be considered throughout the entire life cycle.
Without technology ethics, it is harder for project teams to align, and important discussions may be inadvertently skipped. Technology ethics bring into focus the obligation for the project team to take its work and its implications seriously, and can also empower individuals to ask tough questions about the unwanted and unintended consequences they imagine arising from a system's use and misuse. By aligning on a set of technology ethics, the development team can define clear directives with regard to system functionality.
Identifying a set of technology ethics is an intimidating task and one that should be approached carefully. Some project teams will want to adopt initial guidance from organizations, such as the Association for Computing Machinery's Code of Ethics and Professional Conduct or the Montreal Declaration for a Responsible Development of Artificial Intelligence, while others, like IBM and Microsoft, are developing their own guidance. The Defense Department's newly adopted five AI ethics principles are: responsible, equitable, traceable, reliable, and governable. The original Defense Innovation Board recommendation is described in detail in its supporting document.
In the past, ethics have only been referenced in, and not directly made part of, software development efforts. The knowledge that AI systems can cause much broader harm, more quickly, than software technologies could in the past raises new ethical questions that need to be addressed by the AI workforce. A skilled and diverse workforce, bursting with curiosity and engaged with the AI system, will result in AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.
Value Curiosity
AI systems will be created and used by a wide range of individuals, and misuse will come from potentially unexpected sources: individuals and organizations with completely different experiences and with potentially unlimited resources. Adversaries are already using techniques that are very difficult to anticipate. The adoption of technology ethics isn't enough to make AI systems safe. Making sure that the teams building these systems are able to imagine and then mitigate issues is profoundly important.
The term "curiosity experts" is shorthand for people with a broad range of skills and job titles, including cognitive psychologists, digital anthropologists, human-machine interaction and human-computer interaction professionals, and user experience researchers and designers. Curiosity experts' core responsibility is to be curious and speculative within the ethical and inclusive environment an organization has created. Curiosity experts will partner with defense experts, and they may already be part of your team, doing research and helping to make interactions more usable.
Curiosity experts help connect the human needs, the initial problem to be solved, and the solution to an engineering problem. Working with defense experts (and ideally the warfighters themselves), they enable a project team to uncover potential issues before they arise by focusing on understanding how the system will be used, the situation and constraints for using the system, and the abilities of the people who will use it. Curiosity experts can conduct a variety of proven qualitative and quantitative methods, and once they have a solid understanding, they share that information with the project team in easy-to-consume formats such as stories. The research they conduct is necessary to understand the needs being addressed, so that the team builds the right thing. This may sound familiar: wargaming uses very similar tactics, and storytelling is an important component.
It's important for curiosity experts to lead (and then teach others to lead) co-design activities such as abusability testing and other speculative exercises, in which the project team imagines the misuse of the AI system it is considering building. AI systems need to be interpretable and usable by warfighters, and this has been recognized as a priority by the Defense Advanced Research Projects Agency, which is working on the Explainable AI program. Curiosity experts with interaction design experience can contribute materially to this effort, as they help keep the people using these systems in mind and call out the AI workforce when necessary. When the project team asks, "Why don't they get it?" curiosity experts can nudge the team to pivot instead to "What can we do better to meet the warfighters' needs?" As individuals on the team become more comfortable with this mindset, they become curiosity experts at heart, even when their primary responsibility is something else.
Hire a Diverse Workforce
Building diverse project teams helps to increase each individual's creativity and effectiveness. Diversity in this sense relates to skill sets, education (with regard to school and program), and problem-framing approach. Coming together with different ways of looking at the world will help teams and organizations solve challenging problems faster.
Building a diverse project team to advance this ethical framework will take time and effort. Organizations that represent minority groups, such as the National Society of Black Engineers, and technical conferences that embrace diversity, such as the Grace Hopper Celebration, can be great resources. Prospective candidates should ask hard questions about the organization, including about its ethics, diversity, and inclusion. These questions are indicative of the curious individuals you want on your team. Once you recruit more diverse individuals, you can set progress goals. For example, Atlassian introduced a new approach to diversity reporting in 2016 that focused on team dynamics and shared how people from underrepresented backgrounds were spread across the company's teams.
It is common in technology, and in AI specifically, to value specific degrees and learning styles. Some employers have staffed their organizations with class after class of graduates from particular degree programs at particular universities. These organizations benefit from the ability of these graduates to bond easily and rely on shared knowledge. However, these same benefits can become weaknesses for the project team. The peril of creating high-risk products and services with a homogeneous team is that its members may all miss the same critical piece of information, have the same gaps in technical knowledge, make the same assumptions about the process, or be unable to think differently enough to imagine unintended consequences. They won't even realize their mistake until it is too late.
In many organizations this risk is disguised by adding one or two individuals to a group who are significantly different from the majority in an aspect such as gender, race, or culture. Unfortunately, their presence isn't enough to significantly reduce the risk of groupthink, and their experience will be dismissed as different if there are not enough socially distinct individuals in the group. Eventually, due to many of these factors, retention becomes a significant concern. Project teams need to be built with diversity from the start, or be quickly adjusted.
A diverse team of thoughtful and talented machine learning experts, programmers, and curiosity experts (among others) is not yet complete. The AI workforce needs direct access to experts in the military or defense industry who are familiar with the situations and organizations the AI system is being designed for, and who can spot assumptions and issues early. These individuals, whether in uniform, civilians, or consultants, may also be able to act as liaisons to the warfighters so that more direct contact can be made with those closest to the work.
Rethinking the Workforce
Encouraging project teams to be curious and speculative in imagining scenarios at the edges of AI will help them prepare for actual system use. As the AI workforce considers how to manage a variety of use cases, framing conversations with technology ethics will provoke serious and contentious discussions. These conversations are invaluable for aligning the team before it faces a difficult situation. A clear understanding of the expectations in specific situations helps the team create mitigation plans for how it will respond, both during the creation of the AI system and once it is in production.
The AI sector needs to think about the workforce in different ways. As Prof. Hannah Fry suggests in The Guardian, diversity and inclusion in the workforce are just as important as a technology ethics pledge (if not more so) to ensure that we are reducing unwanted bias and unintended consequences. Creating an ethical, inclusive environment, valuing curiosity, and hiring a diverse workforce are necessary steps to making ethical AI. Clear communication and alignment on ethics are the best way to bring disparate groups of people into a shared understanding and to create AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.
Over the next several years, my organization, Carnegie Mellon University's Software Engineering Institute, is advancing a professional discipline of AI engineering to help the defense and national security communities develop, deploy, operate, and evolve game-changing mission capabilities that leverage rapidly evolving artificial intelligence and machine learning technologies. At the core of this effort is supporting the AI workforce in designing trustworthy AI systems by successfully integrating ethics into a diverse workforce.
Carol Smith (@carologic) is a senior research scientist in human-machine interaction at Carnegie Mellon University's Software Engineering Institute and an adjunct instructor at CMU's Human-Computer Interaction Institute. She has been conducting user experience research to improve the human experience across industries for 19 years and working to improve AI systems since 2015. Carol is recognized globally as a leader in user experience. She has presented over 140 talks and workshops in over 40 cities around the world, served two terms on the User Experience Professionals Association international board, and is currently an editor for the Journal of Usability Studies and for the upcoming Association for Computing Machinery Digital Threats: Research and Practice journal's Special Issue on Human-Machine Teaming. She holds an M.S. in Human-Computer Interaction from DePaul University.
This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center.
The views, opinions, and/or findings contained in this material are those of the author(s) and should not be construed as an official government position, policy, or decision, unless designated by other documentation.
Image: U.S. Air Force (Photo by J.M. Eddins Jr.)
Creating a Curious, Ethical, and Diverse AI Workforce - War on the Rocks