The U.S. military, algorithmic warfare, and big tech

We learned this week that the Department of Defense is using facial recognition at scale, and Secretary of Defense Mark Esper said he believes China is selling lethal autonomous drones. Amid all that, you may have missed Joint AI Center (JAIC) director Lieutenant General Jack Shanahan, who is charged by the Pentagon with modernizing and guiding its artificial intelligence directives, talking about a future of algorithmic warfare.

Algorithmic warfare, which could dramatically change warfare as we know it, is built on the assumption that combat actions will happen faster than humans' ability to make decisions. Shanahan says algorithmic warfare would thus require some reliance on AI systems, though he stresses the need for rigorous testing and evaluation before using AI in the field, to ensure it doesn't take on a life of its own, so to speak.

"We are going to be shocked by the speed, the chaos, the bloodiness, and the friction of a future fight in which this will be playing out, maybe in microseconds at times. How do we envision that fight happening? It has to be algorithm against algorithm," Shanahan said during a conversation with former Google CEO Eric Schmidt and Google VP of global affairs Kent Walker. "If we're trying to do this by humans against machines, and the other side has the machines and the algorithms and we don't, we're at an unacceptably high risk of losing that conflict."

The three spoke Tuesday in Washington, D.C. at the National Security Commission on AI (NSCAI) conference, which took place a day after the group delivered its first report to Congress, drawing on input from some of the biggest names in tech and AI, like Microsoft Research director Eric Horvitz, AWS CEO Andy Jassy, and Google Cloud chief scientist Andrew Moore. The final report will be released in October 2020.

The Pentagon first ventured into algorithmic warfare and a range of AI projects with Project Maven, an initiative to work with tech companies like Google and startups like Clarifai. It was created two years ago with Shanahan as director following a recommendation by Schmidt and the Defense Innovation Board.

In an age of algorithmic warfare, Shanahan says the Pentagon needs to bring AI to service members at every level of the military so people with firsthand knowledge of problems can use AI to further military goals. Shanahan acknowledged that a decentralized approach to development, experimentation, and innovation will be accompanied by higher risk but argued that it could be essential to winning wars.

Algorithmic warfare is included in the NSCAI draft report, which minces no words about the importance of AI to U.S. national security and states unequivocally that the development of AI will shape the future of power.

"The convergence of the artificial intelligence revolution and the reemergence of great power competition must focus the American mind. These two factors threaten the United States' role as the world's engine of innovation and American military superiority," the report reads. "We are in a strategic competition. AI will be at the center. The future of our national security and economy are at stake."

The report also acknowledges that the world may experience an erosion of civil liberties and acceleration of cyber attacks in the AI era. And it references China more than 50 times, noting the intertwined nature of Chinese and U.S. AI ecosystems today, and China's goal to become a global AI leader by 2030.

It's worth noting that the NSCAI report chooses to focus on narrow artificial intelligence, rather than artificial general intelligence (AGI), which doesn't exist yet.

"When we might see the advent of AGI is widely debated. Rather than focusing on AGI in the near term, the Commission supports responsibly dealing with more narrow AI-enabled systems," the report reads.

Last week, the Defense Innovation Board (DIB) released its AI ethics principles recommendations for the Department of Defense, a document created with contributions from LinkedIn cofounder Reid Hoffman; MIT CSAIL director Daniela Rus; and senior officials from Facebook, Google, and Microsoft. The DoD and JAIC will now consider which principles and recommendations to adopt going forward.

Former Google CEO Eric Schmidt acted as chair of both the NSCAI and the DIB and oversaw the creation of both reports. Schmidt was joined on the NSCAI board by Horvitz, Jassy, and Moore, along with former Deputy Secretary of Defense Robert Work.

At the conference on Tuesday, Schmidt, Shanahan, and Walker revisited the controversy at Google over Project Maven. When Google's participation in the project became public in spring 2018, thousands of employees signed an open letter protesting the company's involvement.

Following months of employee unrest, Google adopted its own set of AI principles, which includes a ban on creating autonomous weaponry.

Google also pledged to end its Project Maven contract by the end of 2019.

"It's been frustrating to hear concerns around our commitment to national security and defense," Walker said, noting the work Google is doing with JAIC on issues like cybersecurity and health care. He added that Google will continue to work with the Department of Defense, saying, "This is a shared responsibility to get this right."

An understanding that military applications of AI are a shared responsibility is critical to U.S. national security, Shanahan said, while acknowledging that mistrust between the military and industry flared up during the Maven episode.

While the Maven computer vision work Google did was for unarmed drones, Shanahan said the backlash revealed many tech workers' broader concerns about working with the military and highlighted the need to clearly communicate objectives.

But he argued that the military is in a state of perpetual catch-up, and bonds between government, industry, and academia must be strengthened for the country to maintain economic and military supremacy.

The NSCAI report also references a need for people in academia and business to "reconceive their responsibilities for the health of our democracy and the security of our nation."

"No matter where you stand with respect to the government's future use of AI-enabled technologies, I submit that we can never attain the vision outlined in the Commission's interim report without industry and academia together in an equal partnership. There's too much at stake to do otherwise," he said.

Heather Roff is a senior research analyst at Johns Hopkins University and a former research scientist at Google's DeepMind. She was the primary author of the DIB report and an ethics advisor for the creation of the NSCAI report.

She thinks media coverage of the DIB report sensationalized the use of autonomous weaponry but generally failed to consider applications of AI across the military as a whole, in areas like logistics, planning, and cybersecurity. She also cited AI's value in facilitating audits for the U.S. military, which has the largest budget of any military in the world and is one of the largest employers in the country.

The draft version of the NSCAI report says autonomous weaponry can be useful but adds that the commission intends to address ethical concerns in the coming year, Roff said.

People concerned about the use of autonomous weapons should recognize that despite ample funding, the military has much bigger structural challenges to address today, Roff said. Issues raised in the NSCAI report include service members being unprepared to use open source software or download the GitHub client.

"The only people doing serious work on AGI right now are DeepMind and OpenAI, maybe a little Google Brain, but the department doesn't have the computational infrastructure to do what OpenAI and DeepMind are doing. They don't have the compute, they don't have the expertise, they don't have the hardware, [and] they don't have the data source or the data," she said.

The NSCAI is scheduled to meet next with NGOs to discuss issues like autonomous weapons, privacy, and civil liberties.

Liz O'Sullivan is a VP at Arthur AI in New York and part of the Human Rights Watch Campaign to Stop Killer Robots. Last year, after voicing opposition to autonomous weapons systems with coworkers, she quit her job at Clarifai in protest over work being done on Project Maven. She thinks the two reports have a lot of good substance but take no explicit stance on important issues, like whether historical hiring data that favors men can be used.

O'Sullivan is concerned that a 2012 DoD directive mentioned in both reports, which calls for "appropriate levels of human judgment," is being interpreted to mean that autonomous weapons will always have human control. She would rather the military adopt the idea of "meaningful human control," as has been advocated at the United Nations.

Roff, who previously worked in autonomous weapons research, said it's a misconception that deploying AI systems requires a human in the loop. Last-minute edits to the AI ethics report clarify a need for the military to have an off switch in case AI systems begin to take actions on their own or attempt to avoid being turned off.

"Humans in the loop is not in the report for a reason, which is [that] a lot of these systems will act autonomously in the sense that it will be programmed to do a task and there won't be a human in the loop per se. It will be a decision aid or it will have an output, or if it's cybersecurity it's going to be finding bugs and patching them on [its] own, and humans can't be in the loop," Roff said.
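Roff's point about systems that act without a human in the loop dovetails with the report's off-switch language: in software terms, that usually means an autonomous worker paired with a supervisor that can halt it at any time, a human on the loop rather than in it. Here is a minimal sketch of that pattern in Python; the class name and the task are hypothetical illustrations, not anything specified in the DIB report:

```python
import threading
import time

class KillSwitch:
    """Hypothetical supervisor control: once tripped, the worker halts."""
    def __init__(self):
        self._stop = threading.Event()

    def trip(self):
        self._stop.set()

    def tripped(self):
        return self._stop.is_set()

def autonomous_task(switch):
    # Runs with no human in the loop: no approval step per iteration.
    # A human *on* the loop can still halt it between units of work.
    while not switch.tripped():
        # ... one bounded unit of work (e.g., scan for a bug, apply a patch) ...
        time.sleep(0.1)

switch = KillSwitch()
worker = threading.Thread(target=autonomous_task, args=(switch,))
worker.start()
time.sleep(1.0)  # the system operates autonomously for a while
switch.trip()    # the operator's "off switch"
worker.join()    # the worker exits at its next check of the switch
```

The design choice worth noticing is that the worker checks the switch between bounded units of work, so a trip takes effect within one iteration rather than interrupting an action midway.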

Although the DIB's AI ethics report was compiled with multiple public comment sessions, O'Sullivan believes both it and the NSCAI report lack input from people who oppose autonomous weapons.

"It's pretty clear they selected these groups to be representative of industry, all very centrist," she said. "That explains to me at least why there's not a single representative on that board who is anti-autonomy. They stacked the deck, and they had to know what they were doing when they created these groups."

O'Sullivan agrees that the military needs technologists, but believes it has to be upfront about what people are working on. Concern over computer vision-based projects like Maven springs from the fact that AI is a dual-use technology, and an object detection system designed for civilian use can also be used for weapons.

"I don't think it's smart for all of the tech industry to abandon our government. They need our help, but simultaneously, we're in a position where in some cases we can't know what we're working on because it's classified or parts of it might be classified," she said. "There are plenty of people within the tech industry who do feel comfortable working with the Department of Defense, but it has to be consensual, it has to be something where they really do understand the impact and the gravity of the tasks that they're working on. If for no other reason than understanding the use cases when you're building something, [it] is incredibly important to design [AI] in a responsible way."
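The dual-use point is concrete: an off-the-shelf detector has no notion of its deployment context, and the same handful of lines runs unchanged on a storefront camera frame or on aerial drone footage. As a rough illustration using a pretrained torchvision model (the file name and confidence threshold below are placeholder assumptions, and this is not code from Maven):

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Load a generic pretrained object detector; nothing here is
# domain-specific -- the model doesn't know what the camera is for.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("frame.jpg")  # placeholder: any source -- webcam, CCTV, drone
with torch.no_grad():
    detections = model([preprocess(img)])[0]

categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:  # arbitrary confidence threshold for the sketch
        print(categories[int(label)], float(score))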
