Encryption flaw opened Android and Apple smartphones to online drive-by attacks

Ninety-five per cent of the world's smartphones in use today have been wide open to a decade-old flaw that would have enabled attackers to steal passwords and other sensitive data.

The security flaw, dubbed "Freak", would have exposed visitors to US government websites - and possibly many more - to drive-by attacks. The websites affected by the flaw included Whitehouse.gov, NSA.gov and FBI.gov.

News of the flaw was made public when internet company Akamai revealed in a corporate blog for customers that it was working to provide a fix. The flaw came to light in the wake of last year's discovery of the catastrophic Heartbleed bug in OpenSSL.

"The problem is that, until CVE-2015-0204 was raised - and fixed - an OpenSSL client using strong ciphers (anything other than export) could be tricked into accepting such a weak key. An attacker connects to the web server with an export cipher and gets a message signed with the weak RSA key," wrote Akamai's Rich Salz.

He continued: "He then cracks that key. The following day, for future connections from innocent browsers, he can act as a man in the middle. The attacker will use the cracked key to connect to clients, who will accept it. The attacker will then have access to all communication between the client and server. A server that does not support the export ciphers will never use the export RSA key and never send it to a client. A client that has the CVE fixed will never accept such a key."
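Salz's "cracks that key" step is plain integer factoring: a 512-bit export modulus is small enough to factor with modest resources. Here is a toy sketch of the idea, with a deliberately tiny modulus standing in for a 512-bit export key; the primes and the message are illustrative only:

```python
# Toy RSA key, standing in for a weak 512-bit "export-grade" key.
# Real FREAK attacks factored 512-bit moduli in hours on rented hardware;
# trial division breaks this tiny modulus instantly.
p, q = 61, 53
n, e = p * q, 17                      # public key (n, e) = (3233, 17)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # private exponent the server keeps secret

ciphertext = pow(42, e, n)            # the message 42, encrypted to the weak key

# Attacker's view: factor n, rebuild the private exponent, decrypt.
f = next(i for i in range(2, n) if n % i == 0)          # smallest prime factor
phi_cracked = (f - 1) * (n // f - 1)
d_cracked = pow(e, -1, phi_cracked)

assert d_cracked == d
assert pow(ciphertext, d_cracked, n) == 42              # plaintext recovered
```

Once the private exponent is recovered, the attacker can impersonate the server for as long as that export key is reused, which is exactly what enables the man-in-the-middle step Salz describes.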

The security flaw was found by a team of researchers from Microsoft and IT security organisations in the US, France and Spain. It was the result of a ban on US exports of "strong" encryption until the late 1990s, which saw much weaker security standards adopted in widely used software instead. The use of that software continued as a result of inertia in the IT industry, even after the US export ban was lifted.

"Researchers discovered in recent weeks that they could force browsers to use the old export-grade encryption then crack it over the course of just a few hours. Once cracked, hackers could steal passwords and other personal information and potentially launch a broader attack on the Web sites themselves by taking over elements on a page, such as a Facebook 'Like' button," reported the Washington Post.
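The downgrade the researchers describe can be sketched as a toy negotiation model. This is a hypothetical simplification, not real TLS: the suite names and the patched/unpatched flags are invented for illustration.

```python
# Much-simplified model of the FREAK downgrade: the client offers only
# strong suites, a man-in-the-middle rewrites the offer to export-only,
# and an unpatched server/client pair completes the handshake anyway.
STRONG = {"TLS_RSA_2048", "TLS_ECDHE_RSA"}
EXPORT = {"TLS_RSA_EXPORT_512"}

def server_pick(offered, supports_export):
    suites = offered & (STRONG | (EXPORT if supports_export else set()))
    return max(suites, default=None)

def handshake(client_offer, mitm, server_supports_export, client_patched):
    offer = EXPORT if mitm else client_offer      # MITM rewrites the ClientHello
    choice = server_pick(offer, server_supports_export)
    if choice in EXPORT and client_patched:
        return None                               # patched client refuses weak key
    return choice

# Unpatched client + export-capable server: downgrade succeeds.
assert handshake(STRONG, mitm=True, server_supports_export=True,
                 client_patched=False) == "TLS_RSA_EXPORT_512"
# Patched client (CVE-2015-0204 fixed) rejects the export-grade key.
assert handshake(STRONG, mitm=True, server_supports_export=True,
                 client_patched=True) is None
```

The point of the model is that both sides must cooperate in the failure: a server that never offers export suites, or a client that rejects export keys, breaks the chain.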

Johns Hopkins University cryptographer Matthew Green, one of the researchers who helped uncover the flaw, said that it demonstrated the folly of governments' attempts to mandate backdoors into secure software so that they could eavesdrop on people's online communications and activities.

Weakening security, he said, added complexity that attackers with nefarious intent could - and would - exploit. "When we say this is going to make things weaker, we're saying this for a reason."

The name "Freak" stands for "factoring related attack on RSA keys" and describes how the attack forces the use of weak, export-grade RSA encryption when one system authenticates with another.


Will HIPAA Require Encryption?

By Megan Williams, contributing writer

You and your healthcare IT clients could be facing even more legislation around healthcare data, and this time, it's about encryption.

Currently, the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act do not contain mandates around encryption, but that may soon change. The Senate Health, Education, Labor, and Pensions committee is rethinking its approach to encryption in its efforts to revisit HIPAA, according to FierceHealthIT.

The legislation is coming up on its 20-year anniversary, and many in the industry feel regulations around encryption don't properly address the new security threats that are becoming so common in the healthcare sector.

HITECH

The answer to HIPAA's lack of focus on encryption came in 2009 in the form of the HITECH Act, which, much like today's Meaningful Use initiatives, placed incentives around encryption and avoided imposing a rigid solution across the industry. Indiana University law professor Nicolas Terry told the AP that it seemed like a reasonable balance at the time, but that recent events may have proven the compromise unworkable.

Basically, the industry hasn't gone for the incentives in big enough ways. Over 40 percent of healthcare employees aren't using full-disk or file-level encryption on devices at work, according to a Forrester research report, leaving huge segments of the industry vulnerable just as attacks are increasing and security-testing technologies like the Internet of Things are taking off.

The current chair of the HIMSS Privacy and Security Policy Task Force doesn't believe much will happen, though, before the next presidential election.

On a smaller level, states like New Jersey have taken the lead, and enacted legislation requiring health insurance companies to encrypt patient information, according to NJ.com. All insurance companies using data containing personal information must either protect that data by encryption, or by any other method or technology rendering it unreadable, undecipherable, or otherwise unusable by an unauthorized person.

Where Encryption Falls Short


Google reneges on its promise of encryption by default for Lollipop devices

When it was announced that iOS 8 would encrypt data by default, Google, not wanting to be outdone by Apple, felt compelled to follow suit. Back in September, Google said that in Lollipop "encryption will be enabled by default out of the box, so you won't even have to think about turning it on". But six months is a long time, and it now seems that Google has had a change of heart.

Well, as noted by Ars Technica, many of the Lollipop handsets appearing at MWC 2015 -- including the Samsung Galaxy S6 -- do not have encryption enabled. Of course there is nothing to stop users from manually enabling it, but that's not really the point; the idea was that you "won't even have to think about turning it on". So what gives?

It's all about performance. On-the-fly encryption and decryption eats up valuable processor time, and handsets took a hit in terms of speed. Take a look at a blog post from Google about the release of Lollipop from late October and you'll immediately notice references to "new security features protecting you, like encryption by default" and the assurance that "full device encryption occurs at first boot".

But scroll to the bottom of the post and you'll see that there's an addendum:

In September, we announced that all new Android Lollipop devices would be encrypted by default. Due to performance issues on some Android partner devices, we are not yet at encryption by default on every new Lollipop device. That said, our new Nexus devices are encrypted by default and Android users (Jelly Bean and above) have the option to encrypt the data on their device in Settings ---> Security ---> Encryption. We remain firmly committed to encryption because it helps keep users safe and secure on the web.

Such is the change of heart that full device encryption by default is no longer a requirement for partners. According to Google's Android Compatibility Program document (section 9.9 on page 59), it's now optional:

If the device implementation has a lock screen, the device MUST support full-disk encryption of the application private data (/data partition) as well as the SD card partition if it is a permanent, non-removable part of the device [Resources, 107]. For devices supporting full-disk encryption, the full-disk encryption SHOULD be enabled all the time after the user has completed the out-of-box experience. While this requirement is stated as SHOULD for this version of the Android platform, it is very strongly RECOMMENDED as we expect this to change to MUST in the future versions of Android.

So there is still a requirement for Samsung et al to support device encryption, but there is no requirement for it to be enabled as initially promised. You'll notice that the Android Compatibility Program document was last updated in the middle of January -- Google didn't publicize the change, and it's only now that people are starting to notice and question it.

Problems with performance and compatibility are legitimate reasons for delaying encryption by default, but Google's lack of transparency is worrying. Following Google's promises in late 2014, anyone buying a new Lollipop device would quite reasonably expect it to be encrypted -- and the uninitiated may not even think to check. The balance between performance and security is one for users to strike for themselves, but Google needs to be open and honest about what is going on.


What you need to know about the ‘FREAK’ bug

In the 1990s, the US government deliberately weakened the encryption that could be exported. Now that policy has come back to haunt us, in the form of a nasty computer bug.

Researchers have discovered a flaw -- which they call the FREAK bug -- that can let a hacker spy on your Internet session and steal your login credentials.

It affects lots of supposedly secure websites, from Symantec.com to NSA.gov. Apple's Safari browser and some Android Web browsers are vulnerable. (Google's Chrome, Mozilla's Firefox and Microsoft's Internet Explorer are OK.)

Apple (AAPL, Tech30) told CNNMoney it plans to have a fix for iPhone and Mac users next week in the form of a software update. Google (GOOG) did not immediately respond to requests for comment.

Kickstarter, WePay, and many other websites that feature Facebook (FB, Tech30) "like" buttons are also vulnerable to this, researchers said.

The issue, explained

Buried somewhere deep inside the code of some Web browsers and websites is an old, weak version of encryption that can easily be cracked. And the only reason it exists is because of bad U.S. policies that have since been abolished.

Back in the 1990s, the federal government restricted the export of powerful data encryption. Computer companies were forced to employ two versions of encryption: weak and strong. But the weak stuff stuck around long after it was no longer needed.

The bug was found late last year by academic security researchers at the French computer science institute INRIA. They've been quietly helping Apple and others fix this behind the scenes since November. They dubbed it the FREAK bug, short for "Factoring Related Attack on RSA Keys."

Akamai (AKAM), a company that hosts websites with an extra layer of protection, made the bug public on Tuesday. The company said it's racing to fix the problem for all of its customers.



What the FREAK? Huge SSL security flaw stems from US government backdoor

Seven hours is all it takes to crack the encryption that is in place on some supposedly secure websites. Security experts blame the US government's ban on the use of strong encryption back in the 1990s for a vulnerability that has just come to light. Named FREAK (Factoring attack on RSA-EXPORT Keys), the flaw exists on high-profile websites including, ironically, NSA.gov.

Restrictions that limited exported encryption to 512-bit keys were lifted in the late 90s, but not before the weakened standard was baked into software that is still in use today. The ban on shipping software with stronger encryption apparently backfired as the weak code found its way back into the States. Security experts say the problem is serious, and the vulnerability is relatively easy to exploit.

Browsers can be hijacked and tricked into accessing websites using legacy encryption - a discovery made by researchers at Inria in France. There was disbelief that such old protection measures were still in use, but it soon became clear that hackers needed just a matter of hours to exploit the weak security to steal passwords and personal information, or even launch a full-scale attack on a website.
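One precondition for this trick is that the client's TLS library still contains export-grade cipher suites at all. As a rough sketch, a local Python/OpenSSL build can be checked with the standard `ssl` module; modern OpenSSL releases have removed these suites entirely, so a patched system should report False:

```python
import ssl

def client_offers_export_ciphers() -> bool:
    """Return True if the local OpenSSL build still knows any EXPORT-grade
    cipher suites - one precondition for a FREAK-style downgrade."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        ctx.set_ciphers("EXPORT")     # select only 40/56-bit export suites
    except ssl.SSLError:
        return False                  # no export suites compiled in
    return bool(ctx.get_ciphers())

print(client_offers_export_ciphers())
```

This only inspects the local cipher list; whether a given server would actually sign with an export RSA key is a separate question.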

Talking to the Washington Post, Matthew Green, a cryptographer at Johns Hopkins Information Security Institute, said that the US government had effectively weakened its own security with the earlier ban on exporting strong encryption. "When we say this is going to make things weaker, we're saying this for a reason."

The vulnerability could be exploited on affected sites, with encryption cracked in just seven hours. Worryingly, if test samples are correct, more than a quarter of websites that were previously thought to be secure are vulnerable to the problem. In a blog post, Green explains that the vulnerability affects OpenSSL (used by Android) and Apple TLS/SSL clients (used by Safari). He goes on to explain that "the SSL protocol itself was deliberately designed to be broken" and that a man-in-the-middle attack could be easily launched on sites:

The 512-bit export grade encryption was a compromise between dumb and dumber. In theory it was designed to ensure that the NSA would have the ability to 'access' communications, while allegedly providing crypto that was still 'good enough' for commercial use. Or if you prefer modern terms, think of it as the original "golden master key".

In effect, a backdoor put in place by the US government has left countless websites insecure. Green points out that the lengthy list of affected sites includes connect.facebook.net which is used to deliver Facebook's Like button to millions of websites. If this was hijacked, the consequences could be dire.

Patches will almost certainly be on the way, but the final word goes to Matthew Green who sums up the source of the problem quite succinctly:

Encryption backdoors will always turn around and bite you in the ass. They are never worth it.


FBI’s attack on encryption

WASHINGTON - As Google's Android smartphone operating system was coming under attack in fall 2012 from malware with the colorful names of Loozfon and FinFisher, the FBI's Internet Crime Complaint Center issued an alert against the threat.

Depending on the type of phone, the FBI said, "the operating system may have encryption available. This can be used to protect the user's personal data."

Last fall, when Apple and Google announced they were cleaning up their operating systems to ensure that their users' information was encrypted to prevent hacking and potential data loss, FBI Director James Comey attacked both companies. He claimed the encryption would allow users to place themselves above the law.

The tech community fired back. "The only actions that have undermined the rule of law," Ken Gude wrote in Wired, "are the government's deceptive and secret mass-surveillance programs."

The battle resumed in February 2015. Michael Steinbach, FBI assistant director for counterterrorism, said it is irresponsible for companies like Google and Apple to use software that denies the FBI lawful means to intercept data.

Yet the FBI does have a lawful means to intercept it: the Foreign Intelligence Surveillance Act. Its scope was expanded by Congress after the 9/11 attacks.

It's worth noting that the FBI never asked Congress to force tech companies to build backdoors into their products immediately after the 9/11 attacks. Only after Google and Apple took steps to patch existing security vulnerabilities did the bureau suddenly express concern that terrorists might be exploiting this encryption.

In fact, the bureau has a host of legal authorities and technological capabilities at its disposal to intercept and read communications, or even to penetrate facilities or homes to implant audio and video recording devices. The larger problem confronting the FBI and the entire U.S. intelligence community is their over-reliance on electronic technical collection against terrorist targets.

The best way to disrupt any organized criminal element is to get inside it physically. But the U.S. government's counterterrorism policies have made that next to impossible.

The FBI, for example, targets the very Arab-American and Muslim-American communities it needs to work with if it hopes to find and neutralize home-grown violent extremists, including by promulgating new rules on profiling that allow for the potential mapping of Arab- or Muslim-American communities.


How To Sabotage Encryption Software (And Not Get Caught)

In the field of cryptography, a secretly planted backdoor that allows eavesdropping on communications is usually a subject of paranoia and dread. But that doesn't mean cryptographers don't appreciate the art of skilled cipher sabotage. Now one group of crypto experts has published an appraisal of different methods of weakening crypto systems, and the lesson is that some backdoors are clearly better than others: in stealth, deniability, and even in protecting the victims' privacy from spies other than the backdoor's creator.

In a paper titled "Surreptitiously Weakening Cryptographic Systems", well-known cryptographer and author Bruce Schneier and researchers from the Universities of Wisconsin and Washington take the spy's view of the problem of crypto design: what kind of built-in backdoor surveillance works best?

Their paper analyzes and rates examples of both intentional and seemingly unintentional flaws built into crypto systems over the last two decades. Their results seem to imply, however grudgingly, that the NSA's most recent known method of sabotaging encryption may be the best option, both in enabling effective, stealthy surveillance and in preventing collateral damage to the Internet's security.

"This is a guide to creating better backdoors. But the reason you go through that exercise is so that you can create better backdoor protections," says Schneier, the author of the recent book Data and Goliath, on corporate and government surveillance. "This is the paper the NSA wrote two decades ago, and the Chinese and the Russians and everyone else. We're just trying to catch up and understand these priorities."

The researchers looked at a variety of methods of designing and implementing crypto systems so that they can be exploited by eavesdroppers. The methods ranged from flawed random number generation to leaked secret keys to codebreaking techniques. Then the researchers rated them on variables like undetectability, lack of conspiracy (how much secret dealing it takes to put the backdoor in place), deniability, ease of use, scale, precision and control.

Here's the full chart of those weaknesses and their potential benefits to spies. (The ratings L, M, and H stand for Low, Medium, and High.)

A bad random number generator, for instance, would be easy to place in software without many individuals' involvement, and if it were discovered, could be played off as a genuine coding error rather than a purposeful backdoor. As an example, the researchers point to an implementation of Debian SSL in 2006 in which two lines of code were commented out, removing a large source of the entropy needed to create sufficiently random numbers for the system's encryption. The researchers acknowledge that this crypto sabotage was almost certainly unintentional, the result of a programmer trying to avoid a warning message from a security tool. But the flaw nonetheless required the involvement of only one coder, went undiscovered for two years, and allowed a full break of Debian's SSL encryption for anyone aware of the bug.
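The Debian incident is easy to model: when the entropy feeding a key generator collapses to a small value such as a process ID, every "random" key comes from an enumerable space. A simplified analogy in Python (not the actual Debian code; the sub-32768 seed space mimics the bug's effective reliance on the process ID):

```python
import random

def weak_keygen(seed: int) -> int:
    # Analogy to the 2006 Debian OpenSSL bug: the only entropy left was
    # (roughly) the process ID, so keys come from a tiny, enumerable space.
    return random.Random(seed).getrandbits(128)   # looks like a 128-bit key

victim_key = weak_keygen(seed=12345)              # victim's PID-seeded "key"

# Attacker simply tries every possible seed (Linux PIDs were below 32768).
recovered = None
for guess in range(1, 32768):
    if weak_keygen(guess) == victim_key:
        recovered = guess
        break

assert recovered == 12345                         # seed, and hence key, found
```

The 128-bit output is cryptographically worthless because its real keyspace is about 2^15, not 2^128; this is exactly why the Debian flaw allowed "a full break" despite the keys looking normal.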

Another, even subtler method of subverting crypto systems that the researchers suggest is what they call "implementation fragility", which amounts to designing systems so complex and difficult that coders inevitably leave exploitable bugs in the software that uses them. "Many important standards such as IPsec, TLS and others are lamented as being bloated, overly complex, and poorly designed, with responsibility often laid at the public committee-oriented design approach," the researchers write. "Complexity may simply be a fundamental outcome of design-by-committee, but a saboteur might also attempt to steer the public process towards a fragile design." That kind of sabotage, if it were found, would be easily disguised as the foibles of a bureaucratic process.

But when it comes to a rating for control - the ability to distinguish who will be able to exploit the security weakness you've inserted - the researchers label implementation fragility and bad number generation as low. Use a bad random number generator or a fragile crypto implementation, and any sufficiently skilled cryptanalyst who spots the flaw will be able to spy on your target. "It's clear that some of these things are disastrous in terms of collateral damage," says paper co-author and University of Wisconsin computer scientist Thomas Ristenpart. "If you have a saboteur leaving vulnerabilities in a critical system that can be exploited by anyone, then this is just disastrous for the security of consumers."
