The CIA Can’t Protect Its Own Hacking Tools. Why Should We Trust Government Privacy and Security Proposals? – Reason

We are often told that law enforcement must have a way to get around strong encryption technologies in order to catch bad guys. Such a "backdoor" into security techniques would only be used when necessary and would be closely guarded so it would not fall into the wrong hands, the story goes.

The intelligence community does not yet have a known custom-built backdoor into encryption. But intelligence agencies do hold a trove of publicly unknown vulnerabilities, called "zero days," that they use to obtain hard-to-get data. One would hope that government agencies, especially those explicitly dedicated to security, could adequately protect these potent weapons.

A recently released 2017 DOJ investigation into the breach of the CIA Center for Cyber Intelligence's (CCI) "Vault 7" hacking tools, which were stolen in 2016 and publicized in 2017, suggests that might be too big of an ask. Not only was the CCI found to be more interested in "building up cyber tools than keeping them secure," but the nation's top spy agency also routinely made rookie security mistakes that ultimately allowed personnel to leak the goods to WikiLeaks.

The released portions of the report are frankly embarrassing. The CCI cyber arsenal was not appropriately compartmentalized, users routinely shared admin-level passwords without oversight, there appeared to be few controls over what content users could access, and data was stored and available to all users indefinitely. No wonder there was a breach.

It gets worse. Because the CIA servers lacked activity monitoring and audit capabilities, the agency did not even realize it had been hacked until WikiLeaks publicly announced it in March of 2017. As the report notes, had the hack been the work of a hostile foreign government like, say, China, the CIA might still be in the dark. Might there be other unknown breaches that fit this bill?

The report recommended several measures the CIA should take to shore up its internal defenses. Among the few that were not redacted: do a better job of protecting zero days and vetting personnel. Okay, so don't make all of the same mistakes again: got it.

Well, it looks like even this goal was too ambitious for the CIA. Intelligence gadfly Sen. Ron Wyden (D-Ore.), who first publicized the report, wrote a letter to Director of National Intelligence John Ratcliffe stating that "the intelligence community is still lagging behind" three years after the report was written. He demanded public answers for outstanding security problems in the intelligence community, such as the continued absence of basic practices like multi-factor authentication and email authentication protocols.

What a snafu. It is absurd enough that the CIA of all places cannot implement basic password protections. But when an intelligence hacking unit cannot even manage to protect its own hacking tools, our troubles multiply.

The CIA is unfortunately not uniquely incompetent among the intelligence community. The National Security Agency (NSA) found itself the victim of a similar zero-day leak in the 2016 Shadow Brokers dump. These are just two incidents that the public knows about. A culture of lax security practices invites attacks from all kinds of actors. We don't know how many times such hacking tools may have been discovered by more secretive outfits.

Many policy implications follow. There is a strong case to be made that intelligence agencies should not hoard zero-day vulnerabilities at all but should report them to the appropriate body for quick patching. This limits their toolkit, but it makes everyone safer overall. Of course, foreign and other hostile entities are unlikely to unilaterally disarm in this way.

The intelligence community supposedly has a process for vetting which zero days should be reported and which are appropriate to keep secret, called the Vulnerabilities Equities Process (VEP). Agencies must describe a vulnerability to a review board that decides whether it is dangerous enough to warrant patching or useful enough to keep for spying purposes.

For example, a vulnerability in a technology used only in China would probably be kept for operations. Theoretically, a vulnerability in a technology widely used in the United States would be reported for fixing to keep Americans safe. As these incidents show, this does not always happen.

The VEP process is clearly insufficient, given these high-profile breaches. The very least the intelligence community can do is appropriately secure the bugs they've got. Efforts like Wyden's seek to impose more accountability on these practices.

There's a more general lesson about government efforts to improve security and privacy as well.

As implied earlier, we should strongly resist government efforts to compromise encryption in the name of law enforcement or anything else. Some of the most technically savvy government bodies cannot even secure the secret weapons they have not advertised. Can you imagine the attack vectors if they publicly attained some master encryption-breaking technique?

It also demonstrates the weaknesses of many top-down proposals to promote privacy or security. Government plans often attempt to sketch out master checklists that must be followed perfectly at every level to work well. They can be time-consuming and burdensome, which means that personnel often cut corners and shirk accountability. Then when disaster inevitably strikes, the conclusion is that "people didn't stick to the plan hard enough," not that the plan was unrealistic to begin with.

There isn't a lot that the public can do about seemingly out-of-control intelligence agencies failing to secure potent cyberweapons beyond making a fuss. "National security" and all that. But it does give us a powerful argument against granting more power to these insecure intelligence bodies to break strong encryption. Governments can't even protect their secret cyber weapons. They almost certainly will not be able to protect a known backdoor into encryption.

