The intelligence community is developing its own AI ethics – C4ISRNet

The Pentagon made headlines last month when it adopted its five principles for the use of artificial intelligence, marking the end of a months-long effort with significant public debate over what guidelines the department should employ as it develops new AI tools and AI-enabled technologies.

Less well known is that the intelligence community is developing its own principles governing the use of AI.

"The intelligence community has been doing its own work in this space as well. We've been doing it for quite a bit of time," said Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, at an Intelligence and National Security Alliance event March 4.

According to Huebner, ODNI is making progress in developing its own principles, although he did not give a timeline for when they would be officially adopted. They will be made public, he added, noting there likely wouldn't be any surprises.

"Fundamentally, there's a lot of consensus here," said Huebner, who noted that ODNI had worked closely with the Department of Defense's Joint Artificial Intelligence Center on the issue.

Key to the intelligence community's thinking is focusing on what is fundamentally new about AI.

"Bluntly, there's a bit of hype," said Huebner. "There's a lot of things that the intelligence community has been doing for quite a bit of time. Automation isn't new. We've been doing automation for decades. The amount of data that we're processing worldwide has grown exponentially, but having a process for handling data sets by the intelligence community is not new either."

What is new is the use of machine learning for AI analytics. Instead of being explicitly programmed to perform a task, machine learning tools are fed data to train them to identify patterns or make inferences before being unleashed on real-world problems. Because of this, the AI is constantly adapting, learning from each new bit of data it processes.
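To make the contrast concrete, here is a minimal toy sketch (not any real IC system; the task, class names, and keyword heuristic are all invented for illustration): an explicitly programmed rule is static until a human edits it, while a learned model's behavior shifts with every new example it is trained on.

```python
# Toy contrast: static rule vs. a model whose behavior comes from data.
# Everything here is hypothetical, for illustration only.

def rule_based_flag(msg: str) -> bool:
    # Explicitly programmed: this logic never changes unless a human edits it.
    return "urgent" in msg.lower()

class LearnedFlagger:
    """Naive keyword-frequency learner, adapting with each training example."""
    def __init__(self):
        self.flag_counts = {}  # word counts seen in flagged messages
        self.ok_counts = {}    # word counts seen in benign messages

    def train(self, msg: str, flagged: bool):
        counts = self.flag_counts if flagged else self.ok_counts
        for word in msg.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def predict(self, msg: str) -> bool:
        score = 0
        for word in msg.lower().split():
            score += self.flag_counts.get(word, 0) - self.ok_counts.get(word, 0)
        return score > 0

model = LearnedFlagger()
model.train("urgent wire transfer request", flagged=True)
model.train("weekly status report", flagged=False)
print(model.predict("wire transfer needed"))  # behavior learned from data, not hand-coded
```

The point of the sketch is the governance problem Huebner describes: the second analytic's answers depend on what it has been fed, so reviewing its code alone no longer tells you what it will do.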


That is fundamentally different from other IC analytics, which are static.

"Why we need to sort of think about this from an ethical approach is that the government structures, the risk management approach that we have taken for our analytics, assumes one thing that is not true anymore. It generally assumes that the analytic is static," explained Huebner.

To account for that difference, AI requires the intelligence community to think more about explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic.

"If we are providing intelligence to the president that is based on an AI analytic and he asks, as he does, 'How do we know this?' that is a question we have to be able to answer," said Huebner. "We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient."
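One common way to make a particular result answerable, sketched below with invented weights and feature names: use a model whose score decomposes into per-feature contributions, so "how do we know this?" can be answered by showing which inputs drove the output. This is an illustrative pattern, not how any IC analytic actually works.

```python
# Hypothetical interpretable scoring model: each result can be broken down
# into per-feature contributions. Weights and feature names are invented.
weights = {"signal_a": 2.0, "signal_b": -1.5, "signal_c": 0.5}

def score_with_explanation(features):
    """Return the total score plus the contribution of each input feature."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation({"signal_a": 1.0, "signal_b": 2.0, "signal_c": 4.0})
print(total)  # 2.0 - 3.0 + 2.0 = 1.0
print(parts)  # shows which signal pushed the score up or down
```

A linear decomposition like this addresses interpretability (explaining one result); explainability, understanding how the analytic works overall, would require documenting how the weights themselves were learned.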

ODNI is also building an ethical framework to help employees implement those principles in their daily work.

"The thing that we're doing that we just haven't found an analog to in either the public or the private sector is what we're referring to as our ethical framework," said Huebner. "The drive for that came from our own data science development community, who said, 'We care about these principles as much as you do. What do you actually want us to do?'"

In other words, how do computer programmers apply these principles when they're actually writing lines of code? The framework won't provide all of the answers, said Huebner, but it will make sure employees are asking the right questions about ethics and AI.

And because of the uniquely dynamic nature of AI analytics, the ethical framework needs to apply to the entire lifespan of these tools. That includes the training data being fed into them. After all, it's not hard to see how a data set with an underrepresented demographic could result in a higher error rate for that demographic than for the population as a whole.

"If you're going to use an analytic and it has a higher error rate for a particular population and you're going to be using it in a part of the world where that is the predominant population, we better know that," explained Huebner.
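Surfacing that kind of gap is, at its core, a matter of computing error rates per group rather than only in aggregate. The sketch below uses entirely invented data, with group "B" underrepresented, to show how a per-group breakdown reveals a disparity that the overall error rate hides.

```python
# Hypothetical sketch: per-group error rates on invented data.
# Group "B" is underrepresented, and its error rate is far higher.

def error_rate(predictions, labels):
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

# (group, prediction, true_label) records
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1),
]

by_group = {}
for group, pred, label in records:
    by_group.setdefault(group, ([], []))
    by_group[group][0].append(pred)
    by_group[group][1].append(label)

for group, (preds, labels) in sorted(by_group.items()):
    print(group, error_rate(preds, labels))  # A is accurate, B is not
```

The aggregate error rate here is 3 errors in 10 records, which looks tolerable; only the per-group view shows the analytic failing on exactly the population it might be deployed against.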

The IC wants to avoid those biases due to concerns over privacy, civil liberties, and, frankly, accuracy. And if biases are introduced into an analytic, intelligence briefers need to be able to explain that bias to policymakers so they can factor it into their decision making. That's part of the concepts of explainability and interpretability Huebner emphasized in his presentation.

And because they are constantly changing, these analytics will require some sort of periodic review as well as a way to catalog the various iterations of the tool. After all, an analytic that was reliable a few months ago could change significantly after being fed enough new data, and not always for the better. The intelligence community will need to continually check the analytics to understand how they're changing and compensate.
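The periodic-review-plus-catalog idea can be sketched in a few lines. Everything below is hypothetical (the version labels, the accuracy floor, the toy models): each model iteration is re-scored against a fixed validation set, the result is recorded in a version catalog, and a drop below the floor flags the iteration for human review.

```python
# Hypothetical drift check: re-score every model iteration against a fixed
# validation set and keep a catalog of versions. All values are invented.

def accuracy(model_fn, validation_set):
    correct = sum(model_fn(x) == y for x, y in validation_set)
    return correct / len(validation_set)

catalog = []  # (version, accuracy) history of each iteration

def review(version, model_fn, validation_set, floor=0.9):
    acc = accuracy(model_fn, validation_set)
    catalog.append((version, acc))
    if acc < floor:
        print(f"version {version}: accuracy {acc:.2f} below floor, flag for review")
    return acc

validation = [(1, True), (2, False), (3, True), (4, False)]
review("v1", lambda x: x % 2 == 1, validation)  # original model still accurate
review("v2", lambda x: x <= 2, validation)      # retrained model has drifted, gets flagged
```

Holding the validation set fixed is the key design choice: it gives each iteration a common yardstick, so a decline in accuracy reflects change in the model rather than change in the test.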

"Does that mean that we don't do artificial intelligence? Clearly no. But it means that we need to think a little bit differently about how we're going to sort of manage the risk and ensure that we're providing the accuracy and objectivity that we need to," said Huebner. "There's a lot of concern about trust in AI, explainability, and the related concept of interpretability."
