Machine AI Is in the Race to Overtake Human AI – Analytics Insight

Posted: May 4, 2020 at 10:55 pm

We have often faced equal parts excitement and dilemma over introducing Artificial Intelligence into the military. Even the Department of Defense (DoD) of the USA is caught up in the same predicament. However, recent findings by the Defense Intelligence Agency (DIA) may finally offer a way forward. Beyond that, the study also sheds light on who makes the better judge, human or AI, when analyzing enemy activity.

Throughout history, humans have been perceived as better than machines at comprehending and interpreting a situation. But the DIA's research experiment shows that AI and humans have different risk tolerances when data is scarce: AI can be more cautious about drawing conclusions when data is inadequate. The early results show how machine and human analysts fare at critical data-driven decision making, and how they measure up against one another in vital national security fields.

In May 2019, DIA announced the Machine-Assisted Analytic Rapid-Repository System (MARS) program. Its mission was to reframe the agency's understanding of data centers and to support the department's future development of AI. The system was therefore designed to engage users from early development onward, reduce risk as national security challenges or priorities change, and improve continuously.

Speaking at an April 27 "National Security Powered by AI" webinar, Terry Busch, Division Chief of Integrated Analysis and Methodologies within the Directorate for Analysis at DIA and technical director of MARS, said, "Earlier this year our team set up a test between a human and AI." The program asked both humans and machines to discern whether a ship was in the United States based on a certain amount of information. "Four analysts came up with four methodologies, and the machine came up with two different methodologies, and that was cool. They all agreed that this particular ship was in the United States."

The first test results were positive: both the AI and the human analysts made identical observations from the dataset supplied by the Automatic Identification System (AIS) feed. The second stage, however, produced a change of opinions. The team disconnected AIS, the worldwide ship tracker, with the objective of seeing how the loss of that feed affected the confidence levels of the analytic methods. This procedure was essential for understanding what goes into the AI's algorithms, how missing data affects them, and by what magnitude.

And the output was surprising. After the information source was removed, both the machine and the humans were left with common source material such as social media, other open-source data, and references to the ship being in the United States. While the machine's confidence level dropped, the human-devised methods came off as overconfident. Ironically, both sides deemed themselves accurate.
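The dynamic described above can be illustrated with a simple sketch. The following Python example is hypothetical and not DIA's actual MARS code; it only shows, under a naive-Bayes-style assumption of independent evidence sources, how losing a strong signal such as an AIS feed naturally lowers a model's confidence that a ship is in the United States.

```python
# Hypothetical sketch (not DIA's MARS system): combining evidence from
# independent sources and observing how confidence drops when a strong
# source (e.g., an AIS feed) is removed.

import math

def combine_evidence(prior: float, likelihood_ratios: list[float]) -> float:
    """Return posterior P(ship in US) after combining independent sources.

    Each likelihood ratio is P(evidence | in US) / P(evidence | not in US).
    """
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

prior = 0.5  # no initial preference either way

# With the AIS feed: one strong, reliable signal plus weaker open-source hints.
with_ais = combine_evidence(prior, [20.0, 2.0, 1.5])   # ~0.98
# Without the AIS feed: only the weaker open-source signals remain.
without_ais = combine_evidence(prior, [2.0, 1.5])       # ~0.75

print(f"Confidence with AIS feed:    {with_ais:.2f}")
print(f"Confidence without AIS feed: {without_ais:.2f}")
```

The point of the sketch is simply that a well-calibrated system should report lower confidence when a key feed disappears, which is the behavior the machine showed in the experiment.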

This experiment highlights how much military leaders should rely on AI in decision-driven situations. While it does not imply that defense intelligence work should be handed over to software, it does emphasize the need to build insights in deficient-data scenarios. That also means teaching analysts to become data literate, so they understand things like confidence intervals and other statistical terms. The chief concerns with machine-based AI were bias and the risk of the system retraining itself into error. Addressing these issues can help foster humans and AI into a collaborative and complementary platform.
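As a small, purely illustrative example of the kind of statistical literacy mentioned above (the numbers below are invented, not from the article), a confidence interval for a detection rate can be computed like this:

```python
# Illustrative only: a 95% confidence interval for an accuracy rate,
# using the normal approximation to the binomial. Numbers are made up.

import math

def proportion_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion."""
    p = successes / trials
    margin = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - margin), min(1.0, p + margin)

# Suppose an analytic method correctly located 42 of 50 test ships.
low, high = proportion_ci(42, 50)
print(f"Estimated accuracy: 84%  (95% CI: {low:.0%} to {high:.0%})")
```

An analyst who can read such an interval knows that "84% accurate" on a small sample is compatible with a true accuracy anywhere from roughly the mid-70s to the low 90s, which is exactly the kind of calibrated reasoning the experiment argues for.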

Busch explains, "The data is currently outpacing the tradecraft or the algorithmic work that we're doing. And we're focusing on getting the data ready… We've lifted places where the expert is the arbiter of what is accurate."
