Encryption | California State University, Northridge

Bluetooth as a technology isn't secure; beyond implementation problems, there are serious flaws in the design itself. Bluetooth is not a short-range communication method: being some distance away doesn't mean you're safe. Class I Bluetooth devices have a range of up to 100 meters. Bluetooth is also not a mature communication method, security-wise. With smartphones, it has turned into something entirely different from what it was meant to be: it was created as a way to connect phones to peripherals. Please don't use Bluetooth for accessing Level 1 data.

If you do need to use Bluetooth devices, please do the following:

Here is the Windows documentation.

Here is the Macintosh documentation.

Disk encryption safely protects all the data stored on a hard drive. When the entire hard disk is encrypted, everything on that disk is protected if the computer is lost or stolen. CSUN recommends the following drive encryption programs for non-portable storage devices. Select the appropriate link for more information on how to use each program:

E-mails may be encrypted and/or authenticated to prevent the contents from being read by unintended recipients. Please ask your local tech if you believe you need to encrypt e-mail messages.

The following encryption methods are available for protecting files and folders stored on portable storage devices such as USB sticks, external hard drives and other mobile devices. Select the appropriate link below for more information on how to use each program:

There are storage devices that use hardware-based encryption.

File encryption is designed to protect stored (at rest) files or folders.

Additional information is available by clicking on each product name.

Caution: Data in encrypted files is not retrievable if the encryption key is lost.

Following are examples of file encryption software to use when encrypting your data:

The following productivity tools let you password-protect and/or encrypt individual files:

It is possible to encrypt entire networks, which may be desirable in certain situations. If you think this may be relevant to you, please contact your local tech for assistance.


How have ARM TrustZone flaws affected Android encryption? – TechTarget

Google received a lot of praise for the security improvements in Android N, but some security experts have taken Google to task over what they claim are shortcomings with Android N's encryption. What are the issues with Android N's encryption scheme?

Encryption is the cornerstone of information security, yet it is notoriously difficult to implement well, particularly on desktops and mobile devices used by non-tech-savvy users. Ease of use, speed and data recovery all need to be balanced against robust encryption.

The two main technologies for meeting these requirements are full disk encryption (FDE) and file-based encryption (FBE). FBE only encrypts selected folders or files, which remain encrypted until the user chooses to access them by providing the correct credentials. FDE encrypts the entire contents of a device's hard drive, so if the device is lost or stolen, or the drive is placed into another device, all the data remains protected. However, once a user unlocks their device, none of the data is protected, as the entire contents of the drive will have been decrypted. While desktop computers are regularly turned off, most mobile devices are left on indefinitely, leaving sensitive data decrypted and potentially accessible to unauthorized users.

Since Android version 5.0, Android devices have had FDE enabled by default. This is based on the Linux kernel subsystem dm-crypt, a widely used and robust encryption scheme. But, like every encryption scheme, it is only as strong as the key used to encrypt the data.

An independent researcher, Gal Beniamini, posted exploit code that breaks Android's FDE on devices running Qualcomm chips by leveraging weaknesses in the chips' design.

ARM TrustZone is a system-on-a-chip and CPU system-wide approach to security that supports a Trusted Execution Environment, backed by hardware-based access control, which cannot be interfered with by less trusted applications or the operating system.

Android's Keystore Keymaster module is intended to assure the protection of cryptographic keys generated by applications, and it runs in the ARM TrustZone. It contains the device encryption key (DEK) used for FDE, which is further protected through encryption with a key derived from the user's unlock credentials. This key is bound to the device's hardware through the intermediate Keymaster signature. This means all cryptographic operations have to be performed directly on the device itself by the Keymaster module, thus preventing off-device brute force attacks.
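The key hierarchy described above can be sketched with standard-library primitives. This is a toy illustration, not Android's actual implementation: XOR stands in for the real key-wrap cipher purely to keep the sketch dependency-free, and the salt, iteration count and PIN are invented.

```python
import hashlib
import os

# Toy sketch of the key hierarchy: the disk is encrypted with a random
# DEK, and the DEK itself is stored wrapped (encrypted) under a key
# derived from the user's unlock credential.

def kek_from_credential(pin: str, salt: bytes) -> bytes:
    # Derive a 32-byte key-encryption key from the unlock PIN.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

def wrap(dek: bytes, kek: bytes) -> bytes:
    # XOR stand-in for a real key-wrap cipher such as AES-KW.
    return bytes(a ^ b for a, b in zip(dek, kek))

unwrap = wrap  # XOR is its own inverse

salt = os.urandom(16)
dek = os.urandom(32)                       # random device encryption key
stored_blob = wrap(dek, kek_from_credential("1234", salt))

# Unlocking: derive the KEK again from the credential and unwrap the DEK.
assert unwrap(stored_blob, kek_from_credential("1234", salt)) == dek
```

Because only the wrapped blob is stored, recovering the DEK requires reproducing the key derivation, which is why binding that derivation to hardware matters.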

However, as the key derivation process is not truly hardware-bound, the Keymaster signature is stored in software instead of hardware, and is directly available to the TrustZone. This makes Android's FDE only as robust as the ARM TrustZone kernel or Keymaster module.

Beniamini's previous blog posts have shown that applications that run in the TrustZone in Android devices using Qualcomm chips can be reverse-engineered. By reverse-engineering the Keymaster module and leveraging two ARM TrustZone kernel vulnerabilities he discovered, Beniamini developed an off-device exploit to decrypt the DEK. No longer restricted to a limited number of password attempts, the user's credentials can be brute forced by passing them through the key derivation function until the resulting key decrypts the stored DEK. Once the DEK is decrypted, it can be used to decrypt the entire drive, breaking Android's FDE scheme. The attacker can also downgrade a patched device to a vulnerable version to extract the key.
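The off-device brute force described above can be illustrated in a few lines. This is an invented reconstruction, not Beniamini's exploit: the salt, iteration count and PIN are all placeholders, and a precomputed digest stands in for testing candidate keys against the encrypted DEK blob.

```python
import hashlib

# Once the key-derivation inputs leak off the device, an attacker can run
# the KDF on their own hardware, free of the phone's retry limits.
SALT = b"per-device-salt"     # assumed extracted from the device
ITERATIONS = 1000             # assumed KDF work factor

def derive(pin: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), SALT, ITERATIONS)

# In the real attack, each candidate key is tested by trying to decrypt
# the stored DEK; here a digest of the "unknown" PIN 1337 plays that role.
target = derive("1337")

def brute_force_pin():
    for n in range(10_000):            # every 4-digit PIN
        pin = f"{n:04d}"
        if derive(pin) == target:
            return pin
    return None

print(brute_force_pin())  # prints 1337
```

With no lockout or attempt delays, even a large iteration count only slows, rather than stops, an exhaustive search of a small PIN space.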

This flaw makes Android's FDE implementation far weaker than Apple's, whose encryption keys are properly bound to the device's hardware and are never divulged to software or firmware. This means an attacker must brute force an iOS user's password on the device itself, overcoming on-device protections like delays between decryption attempts and the wiping of user data after too many failed attempts. Android devices, on the other hand, perform encryption using keys that are directly available to the ARM TrustZone software.

Poor implementation is usually the weak point in any encryption technology. While the two ARM TrustZone vulnerabilities used by Beniamini, CVE-2015-6639 and CVE-2016-2431, have been patched, many devices remain susceptible to the attack because they have yet to receive the patches. This is a constant problem that plagues Android devices due to restrictions and delays created by manufacturers or carriers that prevent end users from receiving or installing the updates they release.


Why isn’t US military email protected by standard encryption tech? – Naked Security

One of the United States Senate's most tech-savvy members is asking why much of the US military's email still isn't protected by standard STARTTLS encryption technology.

Last month, Sen. Ron Wyden (D-Oregon) shared his concerns with DISA, the federal organization that runs mail.mil for the US Army, Navy, Marine Corps and Coast Guard:

The technology industry created STARTTLS fifteen years ago to allow email servers to communicate securely and protect email messages from surveillance as they are transmitted over the internet. STARTTLS is widely supported by email server software but, critically, it is often not enabled by default, meaning email server administrators must turn it on.

Wyden noted that major tech companies including Google, Yahoo, Microsoft, Facebook, Twitter and Apple use STARTTLS, as do the White House, Congress, NSA, CIA, FBI, Director of National Intelligence and Department of Homeland Security, but not DISA.

A 2015 Motherboard investigation originally uncovered the limited use of STARTTLS by US government security agencies. Since then, Motherboard reports, many of the aforementioned agencies have started using STARTTLS, but not DISA.

Wyden observed that until DISA enables STARTTLS, unclassified email messages sent between the military and other organizations will be needlessly exposed to surveillance and potentially compromised by third parties.

Even if all the military messages sent through DISA's servers are unclassified, if Wyden is correct, this might conceivably give adversaries additional insights into the US military's structure, decision-makers, and decision-making processes.

Early reports on Wyden's letter quoted DISA as saying that it would respond formally to him. DISA told Naked Security:

We are not at liberty to discuss specific tactics, techniques, and procedures by which DISA guards DOD email traffic. Email is one of the largest threat vectors in cyberspace. We can tell you that DISA protects all DOD entities with its Enterprise Email Security Gateway Solution (EEMSG) as a first line of defense for email security.

DISA's DOD Enterprise Email (DEE) utilizes the EEMSG for internet email traffic and currently rejects more than 85% of daily email traffic due to malicious behavior. DISA inspects the remaining 15% of email traffic to detect advanced, persistent cybersecurity threats. The Agency always makes deliberate risk-based decisions in the tools it uses for cybersecurity, to include email protocols for the DoD.

In the "news you can use" spirit, this might be a good time for a brief primer on STARTTLS. This SMTP extension aims to partially remedy a fundamental shortcoming of the original SMTP email protocol: it didn't provide a way to signal that email communication should be secured as messages hop across servers towards their destinations.

Using STARTTLS, an SMTP client can connect over a secure TLS-enabled port; the server can then advertise that a secure connection is available, and the client can request to use it.
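That client-side flow can be sketched with Python's standard smtplib; the hostname below is a placeholder, and port 587 is the usual mail-submission port.

```python
import smtplib

# Minimal sketch of the STARTTLS handshake from the client side.
def send_with_starttls(host: str = "mail.example.org") -> None:
    with smtplib.SMTP(host, 587, timeout=10) as smtp:
        smtp.ehlo()                          # server lists its extensions
        if not smtp.has_extn("starttls"):    # STARTTLS not advertised
            raise RuntimeError("server does not offer STARTTLS")
        smtp.starttls()                      # upgrade the socket to TLS
        smtp.ehlo()                          # re-greet over the secure channel
        # smtp.login(...) and smtp.sendmail(...) would follow here
```

Note that the client only upgrades if the server advertises the extension, which is exactly the step a downgrade attacker tries to suppress.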

STARTTLS isn't perfect. It can be vulnerable to downgrade attacks, where an illicit man-in-the-middle deletes a server's response advertising that STARTTLS is available. Seeing no response, the client sends its message over an insecure connection, just as it would have if STARTTLS never existed. But, as the Internet Engineering Task Force (IETF) puts it, this "opportunistic security" approach offers some protection most of the time.

IETF says protocols like STARTTLS are:

not intended as a substitute for authenticated, encrypted communication when such communication is already mandated by policy (that is, by configuration or direct request of the application) or is otherwise required to access a particular resource. In essence, [they are] employed when one might otherwise settle for cleartext.

For context, Google reports that 88% of the Gmail messages it sends to other providers are now encrypted via TLS (in other words, both Google and the other provider support TLS/STARTTLS encryption); 85% of messages inbound to Gmail are encrypted.

Would STARTTLS offer value in securing the military communications DISA manages through mail.mil? From the outside, it's easy to say yes. But it sure would be fascinating to hear the technical conversation between DISA's security experts and Senator Wyden's.

Email service providers are caught on the horns of a dilemma, it seems. Naked Security's Paul Ducklin says:

STARTTLS only deals with server-to-server encryption of the SMTP part, so it isn't a replacement for end-to-end encrypted email in environments where that's appropriate. In other words, there are situations in which you may be able to make a strong case for not needing STARTTLS. But my opinion is that it's easier just to turn on STARTTLS anyway; just think of all the time you'll save not having to keep explaining that strong case of yours.

As for you: if you aren't using STARTTLS wherever it's available to you, why not?


Keeping the enterprise secure in the age of mass encryption – Information Age


Organisations have always been told that strong encryption is their friend. When applied to internet traffic, encryption secures the connection between user and website, locking the bad guys out and foiling the hijackers attempting to spoof legitimate sites or eavesdrop on communications.

So when Mozilla recently revealed that the majority of web pages loaded by Firefox used the secure HTTPS protocol, it seemed like a good news day for information security. Naturally, the story is far more complex than that.

The truth is that hackers are getting increasingly adept at hiding in these encrypted tunnels, which disguises their attacks from even the best defences. For example, roughly 90% of CIOs have already been attacked, or expect to be attacked, by hackers hiding in encrypted traffic.


Businesses urgently need to improve their management of encrypted tunnels, or they risk compromising the effectiveness of their cyber security defences. But for that to happen, organisations must first gain visibility and control over their expansive estates of digital keys and certificates.

These keys and certificates are the cryptographic assets that form the foundation of encryption, allowing machines to identify each other in the same way usernames and passwords work for human users.

CISOs would not accept having limited visibility over identity and access management for their users; the same rigorous oversight needs to be extended to keys and certificates.

The growth of HTTPS is both a positive and negative thing. Encryption is the primary tool used to keep internet transactions out of the reach of prying eyes, and we've seen increased adoption over the past few years, partly driven by revelations of mass state surveillance exposed by NSA whistleblower Edward Snowden.

HTTPS protects the sensitive data of hundreds of millions of users around the world, offering protection against man-in-the-middle attacks and attackers looking to spoof trusted sites.

Encrypted traffic is beginning to become the norm, rather than the exception, and a survey from this year's RSA Conference showed that this trend will continue: two-thirds (66%) of attendees said that their organisation is planning to increase encryption usage.


But what happens when a hacker manages to get into encrypted traffic? This is not a hypothetical problem: a third (32%) of security professionals at RSA said that they are either not confident or have only 50% confidence in their organisation's ability to protect and secure encrypted communications.

And once a hacker does get into encrypted traffic, it will offer the same protections, but this time against the organisation's security tools. Intrusion detection and prevention systems, firewalls and similar tools are rendered useless, unable to inspect the traffic going in and out of the organisation.

A hacker could hide malware or web exploits from these tools to launch an attack and then use the encrypted tunnel to ferry stolen data out again.

The problem ultimately boils down to the digital keys and certificates that form the internet's foundation of cyber security and trust. Today, this system is used to secure everything from online banking to mobile apps and the Internet of Things (IoT). There's just one problem: this foundation is built on sand.

The volume of keys and certificates has exploded over recent years, thanks to virtualisation and the growth in mobile devices, cloud servers and now the IoT. Everything with an IP address depends on a key and certificate to create a secure connection.


But organisations simply can't keep track of this explosive growth, often leaving keys and certificates unsecured or managed manually. This has allowed cyber criminals to sneak in and use unprotected keys and certificates for their own ends.

The problem will only get worse as the number of IoT devices grows. Gartner recently claimed 8.4 billion connected devices will be in use globally by the end of 2017, up 31% from 2016, and that the total will reach a staggering 20.4 billion by 2020.

Additionally, half of the organisations Venafi polled last year said they saw key and certificate usage grow by over 25%. And one in five claimed it had increased by more than 50%.

As keys and certificates grow, so do the opportunities for hackers. But there is hope. If we're able to provide our security tools with the all-important keys, then they can open up and inspect encrypted traffic to ensure it doesn't contain anything malicious.

This is easier said than done, especially given the hundreds of thousands of keys and certificates a typical organisation must manage. New keys and certificates are created and retired every day.

What organisations need is a centralised intelligence and automation system. This will ensure that all security tools are provided with a continuously updated list of all the relevant keys and certificates they need in order to inspect encrypted traffic.
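One building block of such a discovery system can be sketched with Python's standard library: connect to a host, pull its TLS certificate, and extract the expiry date. The hostname below is a placeholder; a real inventory tool would sweep whole address ranges and push results into security tooling.

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    # getpeercert() reports expiry as e.g. 'Jun 1 12:00:00 2027 GMT'
    return datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(
        tzinfo=timezone.utc)

def cert_expiry(host: str, port: int = 443) -> datetime:
    # Fetch the server's certificate over a verified TLS connection
    # and return its expiry timestamp.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return parse_not_after(tls.getpeercert()["notAfter"])

# print(cert_expiry("www.example.org"))  # requires network access
```

Feeding timestamps like these into monitoring is how expired or rogue certificates get caught before they break, or hide, traffic.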


By automatically discovering every key and certificate generated by your organisation as they are created, and integrating this data into security tools, you can finally shine a light on encrypted tunnels.

The result? IT leaders will not only benefit from improved resilience from cyber attacks, data breaches and the like, but also finally gain full value from their technology investments.

With encrypted traffic growing all the time and 85% of CIOs expecting criminal misuse of keys and certificates to get worse, businesses can't afford to hang around.

Sourced by Kevin Bocek, chief cyber-security strategist at Venafi



Cripple encryption and you weaken global and national security – Irish Times

There are long-standing, sound reasons why encryption backdoors have failed to get the green light any time they have been proposed in the US or EU

In the midst of the hullabaloo last week over Brexit and article 50 trigger-pulling, not many noticed that EU Commissioner for Justice Věra Jourová proposed the EU-wide introduction of encryption backdoors for popular social apps such as WhatsApp.

Just in case you missed it (and most people likely did, as Jourová's speech to this effect was made on March 28th, the day before the UK's article 50 letter was delivered to EU officials), she said she will announce three or four options in June to allow law enforcement agencies to access encrypted communications.

These will include proposals for binding legislation, as well as for voluntary, yet, she suggested, nonetheless enforceable, compliance from technology companies.

Jourová noted: "At the moment, prosecutors, judges, also police and law enforcement authorities are dependent on whether or not providers will voluntarily provide the access and the evidence. This is not the way we can facilitate and ensure the security of Europeans, being dependent on some voluntary action."

She said she intended to introduce clear, simple rules into European legislation to let law enforcement demand access to communications from technology companies, and to do this with a swift, reliable response.

However, she said in her speech to the EU Justice and Home Affairs Council that nonlegislative solutions would be needed initially, because legislative solutions, such as a requirement for backdoors, could take years to bring in.

She wouldn't go into details on how that would all work, but we can all look forward now to June, when the proposals arrive in this fresh reconsideration of business, economic, security and, of course, human rights lunacy.

Perhaps we will need some EU shenanigans to exasperate us in June, now that Jourová has also just announced that the joint US-EU review of the transatlantic data transfer agreement Privacy Shield won't occur in June, as had been presumed, but has been pushed to September.

Well, proposing encryption backdoors yet again will certainly exasperate.

Backdoors are a secret method of bypassing the normal authentication needed to access the contents of an encrypted file or message. They are built into the application, so that every instance of the application ends up with this secret tunnel. In short, backdoors are deliberate security flaws to cripple a security product.

For example, when you download and install WhatsApp, your messages are automatically encrypted when sent, and can only be decrypted by the user you send them to. But a backdoor would enable law enforcement authorities to also see the message.

Which might seem a good idea given security concerns about terrorism and criminal activity, and Jourová, of course, referenced recent attacks in Europe. And that's why a consideration of backdoors is again on the EU table.

Officials in the UK, France and Germany have been pressing for months for European law enforcement to have a method of accessing encrypted communications. As recently as March 26th, UK home secretary Amber Rudd said the companies that produce encrypted apps should be forced to give police access to contents of messages when asked.

But the problem with encryption is that once you build in a deliberate vulnerability, the application is no longer secure. Even if the key to the backdoor is designed to only be in the possession of security agencies and law enforcement, every shred of evidence in the digital world to date indicates it won't remain a secret and will eventually be located and exploited. Vulnerabilities tend to get found out, one way or another.

And it won't be the good guys that do the exploiting. No, it will of course be the same dark side actors that encryption exists to protect against.

Maybe you are thinking that you don't care if security agencies can read your WhatsApp discussions with your friends if it helps prevent a suicide bomber. But it isn't just about you.

Encryption is ubiquitous, needed for the basic functioning of banks, governments, businesses large and small, utilities, the military, citizen transactions and interactions, just about everything you can think of. Weaken it, and you weaken national and international security, national grids, global transactions, the world's economies.

Meanwhile, the bad guys will of course just switch to, or themselves create, something other than WhatsApp (or Signal, or iMessage, or any other service forced to install a backdoor).

There are thus long-standing, sound reasons why encryption backdoors have failed to get the green light any time they have been proposed in the US or EU. They can be summed up simply: if you cripple encryption, then you cripple security overall.

That's not to say legislators are incapable of eventually doing something truly catastrophic. But I wouldn't wager that Europe will bring in backdoors any time soon.

The evidence is far too strong that backdoors would be extraordinarily risky, for little payback. In addition, there's a steep, perhaps impossible, challenge in figuring out even some kind of voluntary scheme, given the way encryption services work (secret is secret).

So, the June proposals will be interesting to see. Expect to be exasperated.


6 workarounds for accessing encrypted devices – GCN.com


The story of Syed Farook's iPhone is a perfect illustration of both the power of encryption on personal devices and the government's frustration with such security when it hinders an investigation.

In the wake of the 2015 San Bernardino, Calif., shootings, investigators wanted access to Farook's iPhone. The phone was encrypted, the FBI asked Apple to write software to give it access, and Apple refused to comply. What ensued was a long battle that played out in courts and in public. In the end, the government reportedly paid $1 million to a third party to have the phone unlocked.

Access to encrypted information need not always be as difficult or expensive for investigators, however. Two cybersecurity experts have published an essay that discusses the practical, technological and legal implications of six encryption workarounds.

"Encryption raises a challenge for criminal investigators," wrote Orin S. Kerr, director of the Cybersecurity Law Initiative at George Washington University Law School, and Bruce Schneier, fellow at Harvard University's Berkman Klein Center for Internet & Society and CTO at Resilient. When law enforcement attempts to access encrypted data, only ciphertext, or scrambled information, can be seen, which is useless unless it can be decrypted. "For government investigators," Kerr and Schneier wrote, "encryption adds an extra step: They must figure out a way to access the plaintext form of a suspect's encrypted data."

Investigators have used the following workarounds for as long as messages have been encrypted, going back to the time of Elizabeth I, when decoded private letters revealed an assassination plot. Today, because encryption is so widespread, investigators come across it in routine cases, making ways to bypass encryption especially timely and relevant.

1. Find the key. The most obvious of the six ways to get around encryption is finding the passwords, passcodes or passphrases required to get into a device. The key might be written down somewhere or stored on an accessible device.

2. Guess the key. Although encryption keys themselves are long and random, the passwords that protect them are usually easier to guess. Investigators have used a suspect's date of birth as a password to access personal devices. Password-cracking software can try millions of passwords per second, but investigators can be limited by a device's features that allow only a certain number of password tries before locking out the would-be user.

3. Compel the key. Merely asking "What's your password?" could get investigators the exact information they need, and authorities could legally compel device owners, or others who know the password, to provide it, the authors said. Both the Fourth and Fifth Amendments provide device owners with some protection, but considerable ambiguity remains about how much of a burden these Amendments impose on investigators.

4. Exploit a flaw in the encryption scheme. This workaround requires finding a flaw in the encryption and using that weakness to gain access to the device. This technique, commonly used by hackers, is analogous to breaking into a locked car by breaking a window instead of picking the lock, the researchers said. The FBI likely gained access to the San Bernardino shooter's phone this way, the authors said. The company helping the FBI may have found a flaw in an auto-erase function used on the phone to make it harder to guess passwords. "This approach relied on two workarounds in tandem: First, exploit the flaw; second, guess the key," they said.

5. Access plaintext when the device is in use. This workaround requires accessing a device while it is in use and its data has been decrypted, such as when a suspect using a device is arrested before the phone or computer can be shut down. Gaining remote access is much more complicated than physically seizing the machine, the two said. First, hacking will require the government to have figured out a technical means to gain remote access to the device. Second, government hacking can raise complex legal questions under the Fourth Amendment and other laws. Dozens of federal courts are currently considering the legality.

6. Locate a plaintext copy. Can't get into the device? Find the information somewhere else. The information that investigators are looking for likely exists in an unencrypted version somewhere, Kerr and Schneier suggested; cloud copies are increasingly common. In the San Bernardino case, investigators were able to get iCloud backups of the shooter's phone. The information was six weeks out of date, which is why the FBI paid for the workaround, but it still provided insight.
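Workaround #2 above, guessing the key, can be sketched in a few lines. This is a toy illustration with an invented passcode and a plain hash standing in for the real key check; actual cracking tools work against the device's key-derivation data at far higher speeds.

```python
import hashlib
from datetime import date, timedelta

# Generate date-of-birth style passcodes (DDMMYYYY) and test each one
# against a stored check hash.
def dob_candidates(first_year: int = 1950, last_year: int = 2005):
    d = date(first_year, 1, 1)
    while d.year <= last_year:
        yield d.strftime("%d%m%Y")
        d += timedelta(days=1)

stored = hashlib.sha256(b"14071983").hexdigest()   # the "unknown" passcode

def crack():
    for candidate in dob_candidates():
        if hashlib.sha256(candidate.encode()).hexdigest() == stored:
            return candidate
    return None

print(crack())  # prints 14071983
```

A structured guess space like birthdays contains only about 20,000 candidates, which is why the lockout limits mentioned above, not password entropy, are often the real obstacle.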


About the Author

Matt Leonard is a reporter/producer at GCN.



VeraCrypt – Home

Project Description

VeraCrypt is free disk encryption software developed by IDRIX (https://www.idrix.fr) and based on TrueCrypt 7.1a.

Windows / MacOSX / Linux / Raspbian / Source Downloads

Online Documentation (click here for latest User Guide PDF)

Release Notes / Changelog

Frequently Asked Questions

Android & iOS Support

Contributed Resources & Downloads (Tutorials, PPA, ARM, Raspberry Pi...)

Warrant Canary

Contact Us

VeraCrypt adds enhanced security to the algorithms used for system and partition encryption, making it immune to new developments in brute-force attacks. VeraCrypt also solves many vulnerabilities and security issues found in TrueCrypt. The following post describes some of the enhancements and corrections made: https://veracrypt.codeplex.com/discussions/569777#PostContent_1313325

As an example, when the system partition is encrypted, TrueCrypt uses PBKDF2-RIPEMD160 with 1000 iterations, whereas VeraCrypt uses 327661. For standard containers and other partitions, TrueCrypt uses at most 2000 iterations, but VeraCrypt uses 655331 for RIPEMD160 and 500000 iterations for SHA-2 and Whirlpool.

This enhanced security adds some delay only to the opening of encrypted partitions without any performance impact to the application use phase. This is acceptable to the legitimate owner but it makes it much harder for an attacker to gain access to the encrypted data.
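The effect of the higher iteration counts can be demonstrated with the standard library's PBKDF2: per-guess cost grows linearly with iterations, so each attacker guess becomes proportionally more expensive while the legitimate user pays the cost only once per mount. SHA-512 and the salt below are stand-ins, not VeraCrypt's actual parameters.

```python
import hashlib
import time

def derive(password: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC key derivation; cost scales linearly with iterations.
    return hashlib.pbkdf2_hmac("sha512", password, b"per-volume-salt",
                               iterations)

for iterations in (1000, 500000):   # TrueCrypt-era vs VeraCrypt-era counts
    start = time.perf_counter()
    derive(b"correct horse battery staple", iterations)
    print(iterations, "iterations:",
          round(time.perf_counter() - start, 3), "s per guess")
```

Multiplying the work factor by several hundred turns a brute-force search that took days into one that takes years, at the price of a barely noticeable delay when mounting a volume.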

Starting from version 1.12, it is possible to use custom iterations through the PIM feature, which can be used to increase the encryption security.

Starting from version 1.0f, VeraCrypt can load TrueCrypt volumes. It also offers the possibility to convert TrueCrypt containers and non-system partitions to the VeraCrypt format.

UPDATE October 17th 2016: VeraCrypt 1.19 has been released. It includes fixes for issues reported by the Quarkslab audit, which was funded by OSTIF. This release also brings many enhancements and fixes, such as a 2.5x speedup of the Serpent algorithm and support for EFI system encryption on 32-bit Windows. Please check the release notes for the complete list of changes. Download for Windows is here.

UPDATE August 18th 2016: The Windows installer for VeraCrypt 1.18 has been updated to include drivers signed by Microsoft that allow VeraCrypt to run on the Windows 10 Anniversary Update. The Windows installer version was incremented to 1.18a, but there is no change at the VeraCrypt level. The Linux and MacOSX installers remain unchanged.

UPDATE August 17th 2016: VeraCrypt 1.18 has been released. It brings EFI system encryption for Windows (a first in the open-source community) and solves a TrueCrypt vulnerability that allowed an attacker to detect the presence of a hidden volume. This release also brings many enhancements and fixes. Please check the release notes for the complete list of changes. Download for Windows is here.

As usual, a MacOSX version is available in the Downloads section or by clicking on the following link. It supports MacOSX 10.6 and above and requires OSXFUSE 2.3 or later (https://osxfuse.github.io/); the MacFUSE compatibility layer must be checked during OSXFUSE installation. A Linux version is also available in the Downloads section or by clicking on the following link. The package contains installation scripts for the 32-bit and 64-bit versions, in GUI and console-only variants (choose the script best adapted to your machine).

All released files are signed with a PGP key available at the following link: https://www.idrix.fr/VeraCrypt/VeraCrypt_PGP_public_key.asc. It is also available on major key servers with ID=0x54DDD393. Please check that its fingerprint is 993B7D7E8E413809828F0F29EB559C7C54DDD393.

SHA256 and SHA512 sums for all released files are available in the Downloads section.
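Checking a downloaded installer against a published digest takes only a few lines. The file path and expected digest below are placeholders for illustration, not real VeraCrypt values:

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 16) -> str:
    # Stream the file in chunks so large installers need not fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    # Compare against the digest published in the Downloads section
    return sha256_file(path) == expected_hex.lower()
```

A mismatch means the download was corrupted or tampered with and should be discarded; the PGP signature check described above provides the stronger guarantee, since a site compromise could replace both the file and its posted digest.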

Demonstration videos: VeraCrypt encrypting the system partition on the fly; VeraCrypt creating an encrypted volume; changing the GUI language of VeraCrypt.

See original here:
VeraCrypt - Home

Cable System Encryption | Federal Communications Commission

Cable operators with all-digital systems may encrypt their services. This lets cable operators activate and deactivate cable service without sending a technician to your home. If your cable operator chooses to encrypt its services, you will need a set-top box or CableCARD for every television set in your home on which you want to continue to view cable programming.

Why allow encryption?

Encryption of all-digital cable service allows cable operators to activate and deactivate cable service remotely, relieving many consumers of the need to wait at home to receive a cable technician when they sign up for -- or cancel -- cable service, or expand service to an existing cable connection in their home. In addition, encryption reduces service theft, which often degrades the quality of cable service received by paying subscribers. Encryption also reduces the number of service calls necessary for manual installations and disconnections.

What does this mean for cable subscribers?

If you are a cable subscriber, you should be aware:

If you currently rely on unencrypted cable service to receive broadcast channels from your cable operator (i.e., your digital television connects directly to the cable system without the addition of a set-top box or CableCARD), and your cable operator begins to encrypt, you will need a set-top box or CableCARD to continue to view those channels after your operator encrypts them.

If, at the time your cable operator begins to encrypt, you subscribe:

- only to broadcast basic service and do not have a set-top box or CableCARD, then you are entitled to a set-top box or CableCARD on up to two television sets, without charge or service fee, for two years from the date your cable operator begins to encrypt.

- to a level of service other than broadcast basic service, but use a digital television to receive only the basic service tier without a set-top box or CableCARD, then you are entitled to a set-top box or CableCARD on one television set, without charge or service fee, for one year from the date your cable operator begins to encrypt.

- only to the basic service tier without a set-top box or CableCARD, and you receive Medicaid, then you are entitled to a set-top box or CableCARD on up to two television sets, without charge or service fee, for five years from the date your cable operator begins to encrypt.

What if I subscribe to cable service after an all digital cable operator has commenced encrypting their service?

What does this mean for over-the-air television viewers and Direct Broadcast Satellite (DBS) subscribers?

Cable System Encryption Guide (pdf)


See more here:
Cable System Encryption | Federal Communications Commission

Schneier on Security

Here are some squid cooking tips.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Tags: squid

Posted on March 10, 2017 at 4:02 PM 67 Comments

A decade ago, I wrote about the death of ephemeral conversation. As computers were becoming ubiquitous, some unintended changes happened, too. Before computers, what we said disappeared once we'd said it. Neither face-to-face conversations nor telephone conversations were routinely recorded. A permanent communication was something different and special; we called it correspondence.

The Internet changed this. We now chat by text message and e-mail, on Facebook and on Instagram. These conversations -- with friends, lovers, colleagues, fellow employees -- all leave electronic trails. And while we know this intellectually, we haven't truly internalized it. We still think of conversation as ephemeral, forgetting that we're being recorded and what we say has the permanence of correspondence.

That our data is used by large companies for psychological manipulation -- we call this advertising -- is well-known. So is its use by governments for law enforcement and, depending on the country, social control. What made the news over the past year were demonstrations of how vulnerable all of this data is to hackers and the effects of having it hacked, copied and then published online. We call this doxing.

Doxing isn't new, but it has become more common. It's been perpetrated against corporations, law firms, individuals, the NSA and -- just this week -- the CIA. It's largely harassment and not whistleblowing, and it's not going to change anytime soon. The data in your computer and in the cloud are, and will continue to be, vulnerable to hacking and publishing online. Depending on your prominence and the details of this data, you may need some new strategies to secure your private life.

There are two basic ways hackers can get at your e-mail and private documents. One way is to guess your password. That's how hackers got their hands on personal photos of celebrities from iCloud in 2014.

How to protect yourself from this attack is pretty obvious. First, don't choose a guessable password. This is more than not using "password1" or "qwerty"; most easily memorizable passwords are guessable. My advice is to generate passwords you have to remember by using either the XKCD scheme or the Schneier scheme, and to use large random passwords stored in a password manager for everything else.
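The XKCD-style scheme mentioned here strings together several randomly chosen common words. A minimal sketch, with a tiny stand-in word list (a real generator would draw from a list of several thousand words):

```python
import secrets

# Tiny stand-in word list; a real generator uses a few thousand words
WORDS = ["correct", "horse", "battery", "staple", "orbit", "copper",
         "velvet", "magnet", "august", "pencil", "ridge", "lantern"]

def xkcd_passphrase(n_words: int = 4) -> str:
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def random_password(length: int = 20) -> str:
    # For accounts kept in a password manager: maximal-entropy strings
    return secrets.token_urlsafe(length)[:length]
```

With a 7,000-word list, four words give roughly 51 bits of entropy; the memorable passphrase guards the password manager, and `random_password` fills everything inside it.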

Second, turn on two-factor authentication where you can, like Google's 2-Step Verification. This adds another step besides just entering a password, such as having to type in a one-time code that's sent to your mobile phone. And third, don't reuse the same password on any sites you actually care about.
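The one-time codes behind schemes like Google's 2-Step Verification follow RFC 6238 (TOTP): an HMAC-SHA1 over the current 30-second interval number, truncated to a few digits. A minimal sketch using the RFC's published test key:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP applied to the current 30-second interval number
    if timestamp is None:
        timestamp = time.time()
    return hotp(key, int(timestamp // step), digits)

# RFC 6238 test key is the ASCII string "12345678901234567890"
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # RFC vector: 94287082
```

Because the code depends on a shared secret plus the current time, a stolen password alone is not enough, which is exactly the property the essay is recommending.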

You're not done, though. Hackers have accessed accounts by exploiting the "secret question" feature and resetting the password. That was how Sarah Palin's e-mail account was hacked in 2008. The problem with secret questions is that they're not very secret and not very random. My advice is to refuse to use those features. Type randomness into your keyboard, or choose a really random answer and store it in your password manager.
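A "really random answer" for a secret-question field can come straight from the OS random source and go into the password manager alongside the password; this helper is a generic sketch:

```python
import secrets
import string

def random_answer(length: int = 24) -> str:
    # Unguessable stand-in for a "mother's maiden name"-style answer;
    # store it in the password manager, never try to memorize it
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The point is that the answer no longer needs to be true or memorable, only unguessable and retrievable.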

Finally, you also have to stay alert to phishing attacks, where a hacker sends you an enticing e-mail with a link that sends you to a web page that looks almost like the expected page, but which actually isn't. This sort of thing can bypass two-factor authentication, and is almost certainly what tricked John Podesta and Colin Powell.

The other way hackers can get at your personal stuff is by breaking in to the computers the information is stored on. This is how the Russians got into the Democratic National Committee's network and how a lone hacker got into the Panamanian law firm Mossack Fonseca. Sometimes individuals are targeted, as when China hacked Google in 2010 to access the e-mail accounts of human rights activists. Sometimes the whole network is the target, and individuals are inadvertent victims, as when thousands of Sony employees had their e-mails published by North Korea in 2014.

Protecting yourself is difficult, because it often doesn't matter what you do. If your e-mail is stored with a service provider in the cloud, what matters is the security of that network and that provider. Most users have no control over that part of the system. The only way to truly protect yourself is to not keep your data in the cloud where someone could get to it. This is hard. We like the fact that all of our e-mail is stored on a server somewhere and that we can instantly search it. But that convenience comes with risk. Consider deleting old e-mail, or at least downloading it and storing it offline on a portable hard drive. In fact, storing data offline is one of the best things you can do to protect it from being hacked and exposed. If it's on your computer, what matters is the security of your operating system and network, not the security of your service provider.

Consider this for files on your own computer. The more things you can move offline, the safer you'll be.

E-mail, no matter how you store it, is vulnerable. If you're worried about your conversations becoming public, think about an encrypted chat program instead, such as Signal, WhatsApp or Off-the-Record Messaging. Consider using communications systems that don't save everything by default.

None of this is perfect, of course. Portable hard drives are vulnerable when you connect them to your computer. There are ways to jump air gaps and access data on computers not connected to the Internet. Communications and data files you delete might still exist in backup systems somewhere -- either yours or those of the various cloud providers you're using. And always remember that there's always another copy of any of your conversations stored with the person you're conversing with. Even with these caveats, though, these measures will make a big difference.

When secrecy is truly paramount, go back to communications systems that are still ephemeral. Pick up the telephone and talk. Meet face to face. We don't yet live in a world where everything is recorded and everything is saved, although that era is coming. Enjoy the last vestiges of ephemeral conversation while you still can.

This essay originally appeared in the Washington Post.

Tags: doxing, essays, Google, Google Glass, hacking, passwords, privacy, surveillance

Posted on March 10, 2017 at 6:15 AM 53 Comments

Google's Project Zero is serious about releasing the details of security vulnerabilities 90 days after they alert the vendors, even if they're unpatched. It just exposed a nasty vulnerability in Microsoft's browsers.

This is the second unpatched Microsoft vulnerability it exposed last week.

I'm a big fan of responsible disclosure. The threat to publish vulnerabilities is what puts pressure on vendors to patch their systems. But I wonder what competitive pressure is on the Google team to find embarrassing vulnerabilities in competitors' products.

Tags: browsers, Google, Microsoft, patching, vulnerabilities

Posted on March 9, 2017 at 6:28 AM 38 Comments

If I had to guess right now, I'd say the documents came from an outsider and not an insider. My reasoning: One, there is absolutely nothing illegal in the contents of any of this stuff. It's exactly what you'd expect the CIA to be doing in cyberspace. That makes the whistleblower motive less likely. And two, the documents are a few years old, making this more like the Shadow Brokers than Edward Snowden. An internal leaker would leak quickly. A foreign intelligence agency -- like the Russians -- would use the documents while they were fresh and valuable, and only expose them when the embarrassment value was greater.

James Lewis agrees:

But James Lewis, an expert on cybersecurity at the Center for Strategic and International Studies in Washington, raised another possibility: that a foreign state, most likely Russia, stole the documents by hacking or other means and delivered them to WikiLeaks, which may not know how they were obtained. Mr. Lewis noted that, according to American intelligence agencies, Russia hacked Democratic targets during the presidential campaign and gave thousands of emails to WikiLeaks for publication.

To be sure, neither of us has any idea. We're all guessing.

As to the documents themselves, I really liked these best practice coding guidelines for malware, and these crypto requirements.

I am mentioned in the latter document:

Cryptographic jargon is utilized throughout this document. This jargon has precise and subtle meaning and should not be interpreted without careful understanding of the subject matter. Suggested reading includes Practical Cryptography by Schneier and Ferguson, RFCs 4251 and 4253, RFCs 5246 and 5430, and Handbook of Applied Cryptography by Menezes, van Oorschot, and Vanstone.

EDITED TO ADD: Herbert Lin comments.

The most damning thing I've seen so far is yet more evidence that -- despite assurances to the contrary -- the US intelligence community hoards vulnerabilities in common Internet products and uses them for offensive purposes.

EDITED TO ADD (3/9): The New York Times is reporting that the CIA suspects an insider:

Investigators say that the leak was the work not of a hostile foreign power like Russia but of a disaffected insider, as WikiLeaks suggested when it released the documents Tuesday. The F.B.I. was preparing to interview anyone who had access to the information, a group likely to include at least a few hundred people, and possibly more than a thousand.

An intelligence official said the information, much of which appeared to be technical documents, may have come from a server outside the C.I.A. managed by a contractor. But neither he nor a former senior intelligence official ruled out the possibility that the leaker was a C.I.A. employee.

EDITED TO ADD (3/9): WikiLeaks said that they have published less than 1% of what they have, and that they are giving affected companies an early warning of the vulnerabilities and tools that they're publishing.

Commentary from The Intercept.

Tags: CIA, cryptography, leaks, malware, Russia, WikiLeaks

Posted on March 8, 2017 at 9:08 AM 151 Comments

The New York Times is reporting that the US has been conducting offensive cyberattacks against North Korea, in an effort to delay its nuclear weapons program.

EDITED TO ADD (3/8): Commentary.

Tags: cyberattack, cyberwar, national security policy, North Korea

Posted on March 8, 2017 at 7:03 AM 20 Comments

WikiLeaks just released a cache of 8,761 classified CIA documents from 2012 to 2016, including details of its offensive Internet operations.

I have not read through any of them yet. If you see something interesting, tell us in the comments.

EDITED TO ADD: There's a lot in here. Many of the hacking tools are redacted, with the tar files and zip archives replaced with messages like:

::: THIS ARCHIVE FILE IS STILL BEING EXAMINED BY WIKILEAKS. :::
::: IT MAY BE RELEASED IN THE NEAR FUTURE. WHAT FOLLOWS IS :::
::: AN AUTOMATICALLY GENERATED LIST OF ITS CONTENTS: :::

Hopefully we'll get them eventually. The documents say that the CIA -- and other intelligence services -- can bypass Signal, WhatsApp and Telegram. It seems to be by hacking the end-user devices and grabbing the traffic before and after encryption, not by breaking the encryption.

New York Times article.

EDITED TO ADD: Some details from The Guardian:

According to the documents:

I just noticed this from the WikiLeaks page:

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized "zero day" exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

So it sounds like this cache of documents wasn't taken from the CIA and given to WikiLeaks for publication, but has been passed around the community for a while -- and incidentally some part of the cache was passed to WikiLeaks. So there are more documents out there, and others may release them in unredacted form.

Wired article. Slashdot thread. Two articles from the Washington Post.

EDITED TO ADD: This document talks about Comodo version 5.X and version 6.X. Version 6 was released in Feb 2013. Version 7 was released in Apr 2014. This gives us a time window of that page, and the cache in general. (WikiLeaks says that the documents cover 2013 to 2016.)

If these tools are a few years out of date, it's similar to the NSA tools released by the "Shadow Brokers." Most of us thought the Shadow Brokers were the Russians, specifically releasing older NSA tools that had diminished value as secrets. Could this be the Russians as well?

EDITED TO ADD: Nicholas Weaver comments.

EDITED TO ADD (3/8): These documents are interesting:

The CIA's hand crafted hacking techniques pose a problem for the agency. Each technique it has created forms a "fingerprint" that can be used by forensic investigators to attribute multiple different attacks to the same entity.

This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible. As soon as one murder in the set is solved, the other murders also find likely attribution.

The CIA's Remote Devices Branch's UMBRAGE group collects and maintains a substantial library of attack techniques 'stolen' from malware produced in other states including the Russian Federation.

With UMBRAGE and related projects, the CIA can not only increase its total number of attack types but also misdirect attribution by leaving behind the "fingerprints" of the groups that the attack techniques were stolen from.

UMBRAGE components cover keyloggers, password collection, webcam capture, data destruction, persistence, privilege escalation, stealth, anti-virus (PSP) avoidance and survey techniques.

This is being spun in the press as the CIA pretending to be Russia. I'm not convinced that the documents support these allegations. Can someone else look at the documents? I don't like my conclusion that WikiLeaks is using this document dump as a way to push its own bias.

Tags: CIA, cyberwar, hacking, malware, redaction, WikiLeaks, zero-day

Posted on March 7, 2017 at 9:08 AM 101 Comments

Matthew Green and his students speculate on what a truly well-designed ransomware system could look like:

Most modern ransomware employs a cryptocurrency like Bitcoin to enable the payments that make the ransom possible. This is perhaps not the strongest argument for systems like Bitcoin -- and yet it seems unlikely that Bitcoin is going away anytime soon. If we can't solve the problem of Bitcoin, maybe it's possible to use Bitcoin to make "more reliable" ransomware.

[...]

Recall that in the final step of the ransom process, the ransomware operator must deliver a decryption key to the victim. This step is the most fraught for operators, since it requires them to manage keys and respond to queries on the Internet. Wouldn't it be better for operators if they could eliminate this step altogether?

[...]

At least in theory it might be possible to develop a DAO that's funded entirely by ransomware payments, and in turn mindlessly contracts real human beings to develop better ransomware, deploy it against human targets, and... rinse, repeat. It's unlikely that such a system would be stable in the long run (humans are clever and good at destroying dumb things), but it might get a good run.

One of the reasons society hasn't destroyed itself is that people with intelligence and skills tend to not be criminals for a living. If it ever became a viable career path, we're doomed.

Tags: bitcoin, crime, ransomware

Posted on March 7, 2017 at 8:15 AM 22 Comments

Longtime Internet security-policy pioneer Howard Schmidt died on Friday.

He will be missed.

Tags: cybersecurity, national security policy

Posted on March 6, 2017 at 2:15 PM 4 Comments

The New York Times reports that Uber developed apps that identified and blocked government regulators using the app to find evidence of illegal behavior:

Yet using its app to identify and sidestep authorities in places where regulators said the company was breaking the law goes further in skirting ethical lines -- and potentially legal ones, too. Inside Uber, some of those who knew about the VTOS program and how the Greyball tool was being used were troubled by it.

[...]

One method involved drawing a digital perimeter, or "geofence," around authorities' offices on a digital map of the city that Uber monitored. The company watched which people frequently opened and closed the app -- a process internally called "eyeballing" -- around that location, which signified that the user might be associated with city agencies.
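A geofence check of the kind described reduces to a distance test against a set of office coordinates. The haversine formula and the radius threshold below are a generic sketch of the idea, not Uber's actual logic, and the coordinates are placeholders:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two (lat, lon) points
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(point, center, radius_m):
    # True if `point` falls within `radius_m` meters of `center`
    return haversine_m(point[0], point[1], center[0], center[1]) <= radius_m

# Hypothetical example: is an app-open event within 200 m of an office?
office = (38.8950, -77.0365)  # placeholder coordinates
print(inside_geofence((38.8955, -77.0360), office, 200))
```

Real geofences are often polygons rather than circles, but the principle is the same: each app open carries a location, and a cheap geometric test flags the ones near a watched address.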

Other techniques included looking at the user's credit card information and whether that card was tied directly to an institution like a police credit union.

Enforcement officials involved in large-scale sting operations to catch Uber drivers also sometimes bought dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees went to that city's local electronics stores to look up device numbers of the cheapest mobile phones on sale, which were often the ones bought by city officials, whose budgets were not sizable.

In all, there were at least a dozen or so signifiers in the VTOS program that Uber employees could use to assess whether users were new riders or very likely city officials.

If those clues were not enough to confirm a user's identity, Uber employees would search social media profiles and other available information online. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read "Greyball" followed by a string of numbers.

When Edward Snowden exposed the fact that the NSA does this sort of thing, I commented that the technologies will eventually become cheap enough for corporations to do it. Now, it has.

One discussion we need to have is whether or not this behavior is legal. But another, more important, discussion is whether or not it is ethical. Do we want to live in a society where corporations wield this sort of power against government? Against individuals? Because if we don't align government against this kind of behavior, it'll become the norm.

Tags: courts, Edward Snowden, NSA, power, privacy, surveillance, terms of service, Uber

Posted on March 6, 2017 at 6:24 AM 41 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.

See more here:
Schneier on Security

Good News From CIA Leak: Encryption Works! – The New American

The media have spun the recent story about CIA-developed hacking tools by claiming either that there's nothing to worry about, or that the problem is so severe that it is no longer possible to protect our privacy through encryption. In reality, privacy is under attack, but encryption still works.

With WikiLeaks' recent disclosure of the CIA's secret hacking program, many are left wondering how deep the rabbit hole goes. How secure are the devices and software that people all over the world use and depend on every day? While the mainstream media have reported on this either as if there is nothing to it or as if it's the end of both privacy and encryption, the truth is that encryption can still be used effectively to protect privacy.

As The New American has reported in previous articles, the tools (read: cyber weapons) developed by the CIA are scarily invasive. Any hacker who is worth his weight in silicon and who also has access to these tools has the ability to remotely access and control devices such as computers, mobile devices, and SmartTVs to watch and listen to targets, as well as the theoretical (if not actual) ability to hack and control cars and trucks to disable or override steering, brakes, acceleration, and airbag controls. And thanks to the haphazard way the cyber-weapon files and documents were circulated within the CIA and its contractor companies, that could be a lot of hackers.

And despite the pooh-poohing by the intelligence community and many in the mainstream media, recent statements by the CIA and White House, coupled with the FBI's investigation into the source of the leaked CIA documents, serve as admissions that the disclosures are genuine. So regarding both the existence of the cyber weapons and the fact that the CIA lost control of them, it really is as bad as it looks.

But that is also very good news.

Buried in the CIA documents (and WikiLeaks' analysis of those documents) is the fact that there has been a shift in the way the surveillance state gathers information. In the wake of the Snowden revelations about mass surveillance almost four years ago, many (this writer included) began to implement ways to protect themselves against mass surveillance. The most effective tool for that is encryption. By encrypting data at rest (files and folders stored on a device), the owners of that data can be assured that it can only be accessed by someone with the encryption key or password. By encrypting data in motion (communications), the parties to those communications have the same assurances.
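The at-rest idea can be illustrated with a toy stream cipher built from SHA-256 in counter mode. This is strictly an illustration of how a key plus a nonce turns plaintext into unreadable ciphertext; real disk and file encryption uses vetted, authenticated ciphers (e.g. AES-XTS or AES-GCM) from audited libraries, never a homemade construction like this:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy keystream -- illustration only
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh nonce so identical files encrypt differently
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))
```

Without the key, the ciphertext is useless to whoever steals the device, which is the whole guarantee described above; the same principle, applied to a network session, protects data in motion.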

Apple introduced encryption by default for devices running newer versions of iOS; Google followed suit with encryption by default for all devices running newer versions of Android. Millions of people in the United States and worldwide began using encrypted communication applications. The surveillance hawks predicted the end of the world, claiming that terrorists were using those tools to "go dark." The hawks demanded back doors into the encrypted devices and software.

Reports of recent revelations about the CIA hacking program focus on the fact that the vulnerabilities exploited by the CIA-developed cyber weapons allow the hackers to compromise the underlying operating systems (such as iOS, Android, Windows, MacOS, Linux, Solaris, and others) to capture the data before it is encrypted. As this writer noted in an earlier article:

Because the operating systems themselves would be compromised, all software running on those devices would be subject to corruption, as well. This would mean that privacy tools such as those this writer uses on a regular basis would be rendered useless. For instance, an application such as Signal used for encrypting text messages and phone calls on mobile devices would continue to encrypt the communications, leaving the user feeling secure. But since the keyboard would record (and report) all keystrokes before Signal could encrypt and send the text message, the communication could still be harvested by the hackers. Likewise, since the microphone itself could be activated, it would make no difference that the communication leaving the device would be encrypted; the hackers would still be able to capture the unencrypted voice recordings of both parties.

So, how is that good news? Put simply: it means that encryption works!

The surveillance state has had to change its game. As the New York Times reported recently:

The documents indicate that because of encryption, the agency must target an individual phone and then can intercept only the calls and messages that pass through that phone. Instead of casting a net for a big catch, in other words, C.I.A. spies essentially cast a single fishing line at a specific target, and do not try to troll an entire population.

"The difference between wholesale surveillance and targeted surveillance is huge," said Dan Guido, a director at Hack/Secure, a cybersecurity investment firm. "Instead of sifting through a sea of information, they're forced to look at devices one at a time."

The New American reached out to several companies and organizations involved in promoting digital liberty to ask what the CIA revelations mean for the state of privacy. What we found shows that, for users who are willing to invest the time to keep their systems and programs up to date, the CIA hacking tools can be effectively blocked.

Dr. Andy Yen is the CEO and one of the founders of ProtonMail, an open-source, end-to-end encrypted, Zero-Knowledge e-mail service with its servers in Switzerland. Dr. Yen told The New American that the CIA revelations are the biggest intelligence leak since Snowden in 2013 and that the documents released so far appear to be just the tip of the iceberg. When asked about the security of ProtonMail running on devices that may have been compromised by hackers (government or otherwise) exploiting the devices' vulnerabilities, Dr. Yen said, "From what we have seen so far, it is clear that ProtonMail's cryptography is not compromised, so the email privacy of our users is still secure." He added, "We are encouraging users to work to harden their endpoint devices, by actively patching all the software that they run."

Part of that initiative to encourage users to harden their endpoint devices came in the form of a statement ProtonMail released the same day WikiLeaks dumped the CIA documents and files. Part of that statement says:

We can state unequivocally that there is nothing in the leaked CIA files which indicates any sort of crack of ProtonMail's encryption. And despite claims to the contrary, there is also no evidence that Signal/WhatsApp end-to-end encryption has been breached. Here's what we do know:

Over the past three years, the CIA has put together a formidable arsenal of cyberweapons specially designed to gain surveillance capabilities over end-user devices such as mobile phones and laptop/desktop computers. These advanced malwares enable the CIA to record actions such as keystrokes on a mobile device, allowing them to conduct surveillance without breaking encryption. Through this technique, US intelligence agencies can gain access to data before they have been encrypted. This is in fact the only way to achieve data access, because cracking the cryptography used in advanced secure communication services such as ProtonMail and Signal is still impractical with current technology.

In other words, the danger is in running old software, including operating systems that are missing the most recent updates. We asked Dr. Yen if a user running the most recent patches for their operating system and other software could be at risk using ProtonMail. He answered, "There can never be zero risk, so the way I would put it is, a user who has fully updated all his software would be at lowest risk of CIA hacking."

That is because outdated operating systems (I'm looking at all of you who are still running Windows XP), software programs, and applications do not have the most up-to-date security patches. All software has vulnerabilities. As those vulnerabilities are discovered, the software developers issue updates to plug them. Going over the list of the CIA's notes on how to attack different devices, operating systems, and software, one common denominator shines through: the attacks all depend on exploiting unpatched vulnerabilities.

In the quote above from one of this writer's previous articles, there is a reference to Signal, an application for encrypted texts and phone calls. The company behind Signal is Open Whisper Systems. Signal has a list of endorsements from people who have a real understanding of cryptography and the need for private communications, including Ed Snowden, Laura Poitras, and Bruce Schneier. In a statement to The New American, Open Whisper Systems said:

These leaks are confirmation that ubiquitous encryption provided by WhatsApp and Signal are forcing intelligence agencies to use malware, pushing them from undetectable mass surveillance to high risk targeted attacks.

There again is the evidence that encryption works for those who use it and keep their devices and software up to date.

Another open-source, end-to-end encrypted, Zero-Knowledge service is SpiderOak One, which offers an online backup service similar in function to Dropbox, with the distinction that everything built into SpiderOak One has the user's privacy in mind. Since it is built on open-source software, there is no way for anything nefarious to be hidden in the code. Since it is end-to-end encrypted, even the administrators don't have access to the users' data. Since it is Zero-Knowledge, the administrators don't know (or have any way to know) users' passphrases. In a statement published on its website, SpiderOak said:

The latest leak of the Vault 7 files includes many exploits, but unlike previous leaks, initial analysis seems to indicate that they are entirely for attacks against endpoints.

This transition from network level to endpoint-focused attack is an interesting trend that points to an interesting hypothesis: Encryption is working.

Encryption - and particularly end-to-end encryption - fundamentally changes the cost of attacks. No longer can an adversary simply sniff network traffic, either locally or globally. To eavesdrop on communications they must take the more expensive and risky approach of compromising endpoints.
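The "Zero-Knowledge" property SpiderOak describes can be illustrated with a small sketch: the encryption key is derived from the user's passphrase on the client, so the server only ever stores a random salt and ciphertext, neither of which reveals the key. Everything below (the passphrase, salt size, and iteration count) is illustrative and assumed; it is not SpiderOak's actual implementation.

```python
# Illustrative sketch of client-side (zero-knowledge) key derivation.
# The passphrase never leaves the client; only the salt would be stored
# server-side, and the salt alone is useless to an attacker.
import hashlib
import os

passphrase = b"correct horse battery staple"  # known only to the user
salt = os.urandom(16)                         # random; safe to store remotely
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

print(len(key))  # 32 bytes: a key suitable for AES-256
```

Because the service only ever sees the salt and the resulting ciphertext, even a subpoena or a breach of its servers cannot recover the key without the passphrase.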

The take-away? Encryption works. At least for those willing to take the time and effort to make sure their endpoint devices (computers, mobile devices, routers, etc.) are running up-to-date, reliable, trustworthy operating systems and software (which almost certainly excludes Microsoft Windows).

The answer to the question, "How can someone protect themselves from surveillance?" has not changed. Replace Windows with either Mac or (even better) Linux. Use open-source software and avoid proprietary software as much as you can. Encrypt everything you can, including your hard drive. Encrypt all communications, and encourage others to do the same; it's simple to do with applications such as ProtonMail and Signal. Keep your operating system and other software up to date. Don't store anything to an online backup service without first encrypting it: there is no cloud, just someone else's computer. And most importantly, think about privacy and security. Make them a guiding principle in the way you use computers. Any chain is only as strong as its weakest link, and the way you use computers (the choices you make, the programs and applications you use, and the ways you use them) is the biggest factor after following the above steps.
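The "encrypt before you upload" advice can be sketched in a few lines. This example uses the Fernet recipe from the third-party Python cryptography package, which is an assumption on my part rather than a tool the article names; the point is simply that encryption happens locally, and only ciphertext would ever reach the backup service.

```python
# Hedged sketch: encrypt data locally before handing it to any cloud-sync
# or backup client. Assumes the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key off the cloud
f = Fernet(key)

plaintext = b"quarterly report draft"
ciphertext = f.encrypt(plaintext)  # this is all the backup service sees

assert f.decrypt(ciphertext) == plaintext
print("round-trip OK")
```

The same workflow works with any local encryption tool (GnuPG, for instance); what matters is that the key stays with you and never touches the remote service.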

As for making a smart TV secure, the best bet is to get rid of it. Period. The software is proprietary, and the thing is designed as a spy tool.

Encryption has changed the game for the surveillance hawks. Now, instead of being able to conduct mass surveillance at scale, they are forced to compromise select, specific endpoint devices. If you are the specific target of a three-letter agency, there is little you can do to avoid being spied on. For the rest of us, things are actually looking better.

More:
Good News From CIA Leak: Encryption Works! - The New American