Government Report Reveals Its Favorite Way to Hack iPhones, Without Backdoors – VICE

The US government is once again reviving its campaign against strong encryption, demanding that tech companies build backdoors into smartphones and give law enforcement easy, universal access to the data inside them.

At least two companies that sell phone-cracking tools to agencies like the FBI have proven they can defeat encryption and security measures on some of the most advanced phones on the market. And a series of recent tests conducted by the National Institute of Standards and Technology (NIST) reveal that, while there remain a number of blind spots, the purveyors of these tools have become experts at reverse engineering smartphones in order to extract troves of information off the devices and the apps installed on them.

Asked whether the NIST test results have any bearing on the public debate about backdoors for police, Barbara Guttman, who oversees the Computer Forensic Tool Testing program for NIST, told Motherboard, "None at all."

"This is a completely different question. That's a policy question," she said, adding that NIST's only purpose is to ensure that "if you're acquiring the phone [data], you should acquire it correctly."

But the demonstrated ability of phone-cracking tools to break into and extract data from the latest phones is further proof that the government is perfectly capable of getting into terrorists' devices, Andres Arrieta, the director of consumer privacy engineering at the Electronic Frontier Foundation, told Motherboard.

"When it comes to the capabilities from law enforcement, I think these documents show they're quite capable," he said. "In the San Bernardino case, they claimed they didn't have the capabilities and they made a big circus out of it, and it turned out they did. They've proven consistently that they have the tools."

The never-ending public debate over smartphone security has focused on backdoors for law enforcement to bypass device encryption, and more recently on Apple features that erase all data after 10 failed password attempts or block data extraction through Lightning ports. But accessing a phone is only part of the battle; once inside, digital forensic investigators have to understand the complicated data structures they find and translate them into a format that meets the high accuracy standards for evidence, using acquisition tools from companies like Cellebrite, Grayshift, and MSAB.

Results from a NIST test of Cellebrite found that it largely works as expected.

In a series of reports published over the last year, NIST's Computer Forensic Tool Testing program documented how well the latest tools perform that task on dozens of different smartphones and apps. The tests paint a picture of an industry trying to keep pace with the constantly changing smartphone and social media landscape, with mixed results.

"Let's say you can get into the phone, you can defeat the encryption. Now you have a blob of ones and zeros," Bob Osgood, a veteran FBI agent who is now the director of digital forensics at George Mason University, told Motherboard. Smartphones contain millions of lines of code, the structures of which differ between every device and can change with every OS or app update. Cracking a phone's encryption doesn't necessarily mean an investigator can access the code on it, including deleted and hidden files, hence the need for the tools tested by NIST. "In the digital forensics world, the state of complete Nirvana is to get a complete image of the phone," Osgood said. "The amount of technical know-how it takes to actually do this stuff (reverse engineer, beat the encryption, get the data itself) is massive. There are a million moving targets."
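
To make that parsing problem concrete, here is a minimal illustrative sketch in Python (a generic example, not any vendor's actual code) of one common first step: scanning a raw extraction for the 16-byte magic header that marks SQLite databases, the storage format most iOS and Android apps use for messages, call logs, and other records. The file name is hypothetical.

    import mmap

    SQLITE_MAGIC = b"SQLite format 3\x00"  # the 16-byte header that opens every SQLite database file

    def find_sqlite_offsets(image_path):
        """Return byte offsets in a raw phone image that look like SQLite database headers."""
        offsets = []
        with open(image_path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as data:
            pos = data.find(SQLITE_MAGIC)
            while pos != -1:
                offsets.append(pos)
                pos = data.find(SQLITE_MAGIC, pos + 1)
        return offsets

    # Hypothetical usage: each offset is a candidate database to carve out and parse.
    # print(find_sqlite_offsets("raw_device_image.bin"))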

Take Cellebrite, the Israeli company whose Universal Forensic Extraction Device (UFED) is a favorite of police departments and the FBI. In June, the company announced that its new premium tool could crack the encryption on any iOS device and many top-end Androids, a major win for law enforcement agencies that had been complaining about built-in encryption.

The company's current UFED 4PC software is then capable of accurately extracting the vast majority of important device information (GPS data, messages, call logs, contacts) from an iPhone X and most previous models, according to a NIST test from April. It was able to partially extract data from Twitter, LinkedIn, Instagram, Pinterest, and Snapchat as well. NIST did not test the extraction ability for other apps, like Signal.

UFED 4PC could not extract email data from newer iPhone models, but police can gain access to cloud email services like Gmail with a warrant.

Results from Cellebrite on Android phones

Cellebrite was less successful with phones running Android and other operating systems, though. The UFED tool was unable to properly extract any social media, internet browsing, or GPS data from devices like the Google Pixel 2 and Samsung Galaxy S9, or messages and call logs from the Ellipsis 8 and Galaxy Tab S2 tablets. It got absolutely nothing from Huawei's P20 Pro phone.

"Some of the newer operating systems are harder to get data from than others. I think a lot of these [phone] companies are just trying to make it harder for law enforcement to get data from these phones ... under the guise of consumer privacy," Detective Rex Kiser, who conducts digital forensic examinations for the Fort Worth Police Department, told Motherboard. "Right now, we're getting into iPhones. A year ago we couldn't get into iPhones, but we could get into all the Androids. Now we can't get into a lot of the Androids."

Cellebrite, which did not respond to requests for comment, frequently updates its products to address the failures discovered in testing and in the field, experts said, so the weaknesses NIST identified may no longer exist. Previous NIST testing data, though, shows that many blind spots can last for years.

It is important to note that just because a cracking tool can't successfully extract data doesn't mean a forensic investigator can't eventually get to it. The process just becomes much longer and requires significant expertise.

Kiser said that Cellebrite is currently the industry leader for most devices. The exception is iPhones, where Grayshift, an Atlanta-based company that counts an ex-Apple security engineer among its top staff, has taken the lead.

Like Cellebrite, Grayshift claims that its GrayKey tool, which it sells to police for between $15,000 and $30,000, can also crack the encryption on any iPhone. And once inside, NIST test results show that GrayKey can completely extract every piece of data off an iPhone X, with the exception of Pinterest data, where the tool achieved partial extraction.

Grayshift did not respond to a request for comment.

Other products, like Virginia-based Paraben's E3:DS or Swedish MSAB's XRY, displayed weaknesses in acquiring social media, internet browsing, and GPS data for several phones. Some of those tests, though, are older than the recent results for Cellebrite and Grayshift.

In the NIST tests, both Cellebrite and Grayshift devices were able to extract nearly all the data from an iPhone 7, one of the phones used by the Pensacola naval air station shooter. That incident prompted the Department of Justice's latest call for phone manufacturers to create encryption backdoors, despite ample evidence that hacking tools can break into the latest, most privacy-conscious phones, like the iPhone 11 Pro Max.

"This whole thing with the new terrorists and [the FBI] can't get into their phones, that's complete BS," Jerry Grant, a private New York digital forensic examiner who uses Cellebrite tools, told Motherboard.

Encryption Software Market: Global Industry Analysis, Size, Share, Growth, Trends and Forecast 2020 – 2025 – Expedition 99

This report focuses on the global Encryption Software market status, future forecast, growth opportunities, key markets and key players. The study objectives are to present Encryption Software development in the United States, Europe and China.

In 2017, the global Encryption Software market size was million US$ and it is expected to reach million US$ by the end of 2025, with a CAGR of during 2018-2025.

The key players covered in this study:

IBM

Microsoft

Sophos Ltd

Gemalto

NetApp Inc

Hewlett-Packard

Vormetric

Oracle

Intel

Symantec

Market segment by Type, the product can be split into:

Encryption for Data-at-rest

Full Disk Encryption (FDE)

File Level Encryption

Others

Market segment by Application, split into:

IT & Telecom

BFSI

Government & Public Utilities

Manufacturing Enterprise

Others

Market segment by Regions/Countries, this report covers:

United States

Europe

China

Japan

Southeast Asia

India

Central & South America

The study objectives of this report are:

To analyze global Encryption Software status, future forecast, growth opportunity, key market and key players.

To present the Encryption Software development in United States, Europe and China.

To strategically profile the key players and comprehensively analyze their development plan and strategies.

To define, describe and forecast the market by product type, market and key regions.

In this study, the years considered to estimate the market size of Encryption Software are as follows:

History Year: 2013-2017

Base Year: 2017

Estimated Year: 2018

Forecast Year: 2018-2025

For data by region, company, type and application, 2017 is considered the base year. Wherever data was unavailable for the base year, the prior year has been considered.

Table of Contents

Chapter One: Report Overview

1.1 Study Scope

1.2 Key Market Segments

1.3 Players Covered

1.4 Market Analysis by Type

1.4.1 Global Encryption Software Market Size Growth Rate by Type (2013-2025)

1.4.2 Encryption for Data-at-rest

1.4.3 Full Disk Encryption (FDE)

1.4.4 File Level Encryption

1.4.5 Others

1.5 Market by Application

1.5.1 Global Encryption Software Market Share by Application (2013-2025)

1.5.2 IT & Telecom

1.5.3 BFSI

1.5.4 Government & Public Utilities

1.5.5 Manufacturing Enterprise

1.5.6 Others

1.6 Study Objectives

1.7 Years Considered

Chapter Two: Global Growth Trends

2.1 Encryption Software Market Size

2.2 Encryption Software Growth Trends by Regions

2.2.1 Encryption Software Market Size by Regions (2013-2025)

2.2.2 Encryption Software Market Share by Regions (2013-2018)

2.3 Industry Trends

2.3.1 Market Top Trends

2.3.2 Market Drivers

2.3.3 Market Opportunities

Chapter Three: Market Share by Key Players

3.1 Encryption Software Market Size by Manufacturers

3.1.1 Global Encryption Software Revenue by Manufacturers (2013-2018)

3.1.2 Global Encryption Software Revenue Market Share by Manufacturers (2013-2018)

3.1.3 Global Encryption Software Market Concentration Ratio (CR5 and HHI)

3.2 Encryption Software Key Players' Head Office and Area Served

3.3 Key Players' Encryption Software Product/Solution/Service

3.4 Date of Entry into Encryption Software Market

3.5 Mergers & Acquisitions, Expansion Plans

Chapter Four: Breakdown Data by Type and Application

4.1 Global Encryption Software Market Size by Type (2013-2018)

4.2 Global Encryption Software Market Size by Application (2013-2018)

Chapter Five: United States

5.1 United States Encryption Software Market Size (2013-2018)

5.2 Encryption Software Key Players in United States

5.3 United States Encryption Software Market Size by Type

5.4 United States Encryption Software Market Size by Application

Chapter Six: Europe

6.1 Europe Encryption Software Market Size (2013-2018)

6.2 Encryption Software Key Players in Europe

6.3 Europe Encryption Software Market Size by Type

6.4 Europe Encryption Software Market Size by Application

Chapter Seven: China

7.1 China Encryption Software Market Size (2013-2018)

7.2 Encryption Software Key Players in China

7.3 China Encryption Software Market Size by Type

7.4 China Encryption Software Market Size by Application

Chapter Eight: Japan


Apple Watch rewards, iCloud encryption, and WhatsApp hacks on the AppleInsider Podcast – AppleInsider

By Lester Victor Marks | Friday, January 24, 2020, 05:49 am PT (08:49 am ET)

AppleInsider editor Victor Marks and writer William Gallagher discuss:

We like reader email, so send us your comments and concerns!

The show is available on iTunes and your favorite podcast apps by searching for "AppleInsider." Click here to listen, subscribe, and don't forget to rate our show.

Sponsors:

Masterclass - Get unlimited access to EVERY MasterClass, and as an AppleInsider listener, you get 15% off the Annual All-Access Pass! Go to masterclass.com/appleinsider.

CLEAR is the absolute best way to get through airport security. It works great with Pre-Check too! Right now, listeners of our show can get their first two months of CLEAR for FREE. Go to clearme.com/appleinsider and use code appleinsider.

Show notes:

Follow our hosts on Twitter: @vmarks and @wgallagher

Feedback and comments are always appreciated. Please contact the AppleInsider podcast at [emailprotected] and follow us on Twitter @appleinsider, plus Facebook and Instagram.

Those interested in sponsoring the show can reach out to us at [emailprotected].


A Blizzard of Information – The Independent

As Edward Snowden, the NSA contractor, reports in his new memoir, Permanent Record, on the morning of September 11, 2001, the NSA's director, Michael Hayden, issued the order to evacuate before most of the country even knew what had happened. Twelve years later, Snowden rocketed from complete obscurity to international headlines and public fame.

Snowden used his access to the NSA's mass surveillance and bulk data collection programs to alert the press and public. Snowden's memoir was published in September of last year and is, for being authored by such a technology-inclined individual, surprisingly well written. His prose is full of witticisms, his passion for civil liberties is palpable and his explanation of the complex technological aspects of the programs he worked on is elucidating.

According to Snowden, these mass surveillance programs violate the Fourth Amendment, which holds, "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

Snowden's position is one the U.S. government clearly disagrees with, which led to Snowden being trapped in Russia on June 23, 2013, when then-Secretary of State John Kerry revoked Snowden's passport while he was in mid-flight from Hong Kong to Moscow. Snowden had planned to connect to another flight that would take him toward his final destination in Ecuador, where he planned to seek asylum.

Snowden, still stuck in Russia, now serves as the president of the board of directors of the Freedom of the Press Foundation, which is, as he writes, "a nonprofit organization dedicated to protecting and empowering public-interest journalism in the new millennium." He states that the major goal of the organization is to preserve and strengthen First and Fourth Amendment rights through the development of encryption technologies. In his memoir, Snowden gives two reasons for his stance: the civil service environment of his family and the civil liberties environment of the early internet. His father and maternal grandfather both served as engineers in the United States Coast Guard.

Snowden writes in his preface about how today's internet is unrecognizable from the internet of his youth. He rightly labels the internet of today as "surveillance capitalism," the monetization and commercialization of individuals' data. We, and by extension the data we generate through our online interactions, are the product for these platforms. This data, generated by our interactions with online platforms such as Facebook, Twitter, YouTube and Google, is collected and often used to individually target advertisements.

Until recently, most people weren't aware that their data was being collected or how the methods of collection were being implemented. Even though this knowledge has now become more available to the public, the technologies that operate on these models have become almost indispensable for many people.

For anyone interested in Edward Snowden's journey from public servant to international martyr, Permanent Record is a thoroughly enjoyable and informative read.


The Flawed Humanity of Silicon Valley – The New York Times

Every week brings a fresh hell in the tech world. As news of the latest scandals piles up over weeks, months and eventually years, narratives switch. Friendly tech companies become Big Tech. The narrative is flattened. The tech giants become monolithic, and their employees become caricatures, often of villains.

The truth is always messier, more interesting and more human. It is a central tension animating Anna Wiener's excellent memoir, Uncanny Valley. The book traces Ms. Wiener's navigation of the tech world as a start-up employee in the mid-2010s, what might be thought of as the last years before Silicon Valley's fall from darling status. Ms. Wiener said she was drawn into the tech world by its propulsive qualities. Graduating into a recession and spending her early 20s in publishing, she found that tech offered opportunities: jobs, the seductive feeling of creating something and, of course, good money.

But what makes Uncanny Valley so valuable is the way it humanizes the tech industry without letting it off the hook. The book allows us to see the way that flawed technology is made and marketed: not by villains, but through blind spots, uncritical thinking and armies of ambivalent people coming into work each day trying their best, all while, sometimes unwittingly, laying the foundation of the surveillance economy.

From a privacy standpoint, Uncanny Valley is helpful in understanding what it's like being on the other end of the torrent of information that streams from our devices each minute. Early on, Ms. Wiener recounts working for a successful data analytics company and the gold rush toward big data, noting that "not everyone knew what they needed from big data, but everyone knew that they needed it."

When confronted with the mass of information her company collected, Ms. Wiener describes feeling uncomfortable with the "God Mode" view that granted employees full access to user data. "This was a privileged vantage point from which to observe the tech industry, and we tried not to talk about it," she writes. This, she notes, becomes a pattern. When Edward Snowden blew the whistle on the National Security Agency's Prism program in 2013, employees at her own data company never discussed the news.

What she describes is a familiar dissociation for anyone who spends time interrogating tech companies on their privacy policies. Her company simply didn't consider itself part of the surveillance economy:

We weren't thinking about our role in facilitating and normalizing the creation of unregulated, privately held databases on human behavior. We were just allowing product managers to run better A/B tests. We were just helping developers make better apps. It was all so simple: people loved our product and leveraged it to improve their own products, so that people would love them, too. There was nothing nefarious about it. Besides, if we didn't do it, someone else would. We were far from the only third-party analytics tool on the market. The sole moral quandary in our space that we acknowledged outright was the question of whether or not to sell data to advertisers. This was something we did not do, and we were righteous about it. We were just a neutral platform, a conduit. If anyone raised concerns about the information our users were collecting, or the potential for abuse of our product, the solutions manager would try to bring us back to earth by reminding us that we weren't data brokers. We did not build cross-platform profiles. We didn't involve third parties. Users might not know they were being tracked, but that was between them and our customer companies.

They were, in other words, just doing their jobs.

Ms. Wiener frequently returns to this reticence to question the product, the end goals of the technology and the Silicon Valley ethos as a whole.

At her next job, working on the terms of service team for a large open source code platform, she reveals how the evolution of the internet pushed her and her co-workers into becoming reluctant content moderators. Soon it became her team's job to fashion a balance between preserving free speech on her platform and protecting it from trolls and neo-Nazis:

We wanted to tread lightly: core participants in the open-source software community were sensitive to corporate oversight, and we didn't want to undercut anyone's techno-utopianism by becoming an overreaching arm of the company-state. We wanted to be on the side of human rights, free speech and free expression, creativity and equality. At the same time, it was an international platform, and who among us could have articulated a coherent stance on international human rights?

As a journalist who has covered content moderation issues for the better part of a decade, I found the perspective somewhat clarifying. Decisions that feel ad hoc or made by one or two people in the belly of a large company often are. What looks from the outside like a conspiracy or nefarious techno-authoritarianism is often just confusion caused by poor management, poor communication and dizzying growth. "Most of the company did not seem aware of how common it was for our tools to be abused," Ms. Wiener writes of her group of de facto moderators. "They did not even seem to know that our team existed. It wasn't their fault we were easy to miss. There were four of us for the platform's nine million users."

In this instance, Uncanny Valley shows how the internet can thrust ordinary people into extraordinary positions of power, usually without qualifications or a how-to guide. This is not to say that the book excuses any of the industry's reckless behavior. Like a good travel writer, Ms. Wiener positions herself as an insider-outsider, participating in "something bigger than myself" and still feeling apart from it. And she is sufficiently critical of her and her peers' participation in the industry. At one point near the end of the memoir, she writes that she would wonder whether the N.S.A. whistle-blower "had been the first moral test for my generation of entrepreneurs and tech workers, and we had blown it."

Ms. Wiener's memoir comes at a point where the backlash against Silicon Valley is strong enough to have earned its own name. Narratives have hardened, and aggrieved tech employees are adopting a bunker mentality. As Ranjan Roy of the newsletter Margins wrote recently of Facebook, "the rank and file are seeing that they are the villains, and will increasingly become so." As so much of the reporting shows, the increased scrutiny and criticism of the techlash is important and almost all of it is warranted. Big Tech has amassed wild, unregulated power that has grown unchecked.

Still, it's easy to get conspiratorial and to fall comfortably into black and white notions of good versus evil. Uncanny Valley is a reminder that the reality is far more muddled but no less damning. Our dystopia isn't just the product of mustache-twirling billionaires drunk with power and fueled by greed, though it is that, too, sometimes. It's also the result of uncritical thinking, blind spots caused by an overwhelmingly white male work force and a pathological reluctance to ask the bigger questions: Where is this all going? What am I building?


The U.S. government’s been trying to stop encryption for 25 years. Will it win this time? – Tom’s Guide

SAN FRANCISCO - In the age of mass digital surveillance, how private should your data and communications be? That question lies at the heart of the encryption panel that kicked off the Enigma Conference here yesterday (Jan. 27).

Four cryptography experts discussed the origins of the first "Crypto Wars" in the 1990s, the state of the current Crypto Wars between the government and technology companies (two weeks ago, the U.S. attorney general called out Apple for not unlocking a terror suspect's iPhones) and what's at stake now for consumers, companies and governments.

"It is a basic human right for two people to talk confidentially no matter where they are. This is sacrosanct," said Jon Callas, senior technologist at the American Civil Liberties Union (ACLU) and a veteran of the fight between the U.S. government and tech companies over the use of cryptography to protect digital communications in the 1990s.

It may be a human right, but most countries have not enshrined confidential conversations in their legal codes. What started as a resurgent fight against government surveillance in the wake of the documents leaked by Edward Snowden in 2013 has now bloomed into a larger struggle over who gets to encrypt communications and data.

In Snowden's wake, end-to-end encrypted messaging has become far more accessible, while Apple and Google have introduced on-device encrypted data storage by default. But access to those services could soon depend on which country you are in and whose digital services you're using.

The 1990s Crypto Wars centered on the Clipper Chip, a hardware chip designed to protect phone users' calls from surveillance unless the government wanted to listen in. It was a "backdoor" that was going to be built into every cellphone.

But in 1994, cryptographer Matt Blaze, one of the panelists at yesterday's Enigma Conference talk, exposed security vulnerabilities in the Clipper Chip. Experts spent the next three years finding even more vulnerabilities in the Clipper Chip and fighting in court to prevent its inclusion in devices.

Since the commercial internet was in its infancy at the time, legal and computer security experts had to take it on faith that the World Wide Web would eventually be important, Blaze said. With the publication in 1997 of a report on the risks of key recovery that Blaze co-authored, most U.S. federal agencies stopped fighting against the cryptographers.

"The FBI became the only organization arguing that computer security was too good," Blaze said.

Today, government access to encrypted communications through a mandated backdoor is not the law of the land in any single country. But laws requiring varying degrees of government access to encrypted communications are becoming more common, said panelist Riana Pfefferkorn, associate director of surveillance and cybersecurity at the Stanford Law School Center for Internet and Society.

Following the panel discussion, Pfefferkorn said she sees a growing trend, especially in the United States and India, to tie serious liability issues, in both criminal and civil law, to the encryption debate.

"In the U.S., it's child pornography. In India, it's the threat of mob violence," Pfefferkorn said. "They seem like two separate issues, but they're a way of encouraging the regulation of encryption without regulating encryption.

"They're going to induce providers to stop deploying end-to-end encryption lest they face ruinous litigation," she added. "It feels like a bait-and-switch."

Daniel Weitzner, the founding director of the Internet Policy Research Initiative at the Massachusetts Institute of Technology, noted during the panel that India's proposed changes to its intermediary liability law would make internet communications providers ("intermediaries") legally responsible for the actions and speech of their users.

He said India's proposals are similar to changes demanded by U.S. senators, including the EARN IT Act of 2019 authored by Senators Lindsey Graham (R-South Carolina) and Richard Blumenthal (D-Connecticut). Weitzner added that there are other countries with even tougher tech-liability laws on the books.

The United Kingdom passed the Investigatory Powers Act in 2016, also known as the Snoopers' Charter. It lets the British government issue statutorily vague Technical Capability Notices that let it mandate encryption backdoors or otherwise force companies to stop using end-to-end encryption. There's no requirement that the British government ever reveal the results of the evaluation process guiding the issuance of the notices.

Australia's Assistance and Access Bill from 2018 is similar, except that it specifically bans the introduction of systemic vulnerabilities into the product in question. What's not clear is another question raised by the legal mandate: What's the difference between a technical vulnerability and a legally mandated software backdoor?

As technology itself has grown more complicated and nuanced since the 1990s, so has the burden of responsibility facing its advocates. Proposals to change encryption should be tested "multiple times" strategically and technically, argued the Carnegie Encryption Working Group in September 2019.

And Susan Landau and Denis McDonough said in a column for The Hill that it would be wiser for the tech community to find common ground with governments over data at rest, such as data stored on a locked iPhone, instead of over the more contentious data in transit embodied by end-to-end encrypted messaging apps.

Ultimately, the future of the consumer use of encryption is likely to depend heavily on the developers and companies that make it available.

They could split their products, offering different levels of encryption for different countries and regions, as Netscape did in the 1990s, said Pfefferkorn. Or they could refuse to offer encrypted products in countries or regions that demand weaker encryption or backdoor access.

"Or," Pfefferkorn said, "it could be broken for everyone."


Preventing hospital readmissions with the help of artificial intelligence – University of Virginia The Cavalier Daily

The University Health System's data science team recently advanced to the next stage of a nationwide competition to apply artificial intelligence to hospital readmissions, a persistent and costly issue. Sponsored by the Centers for Medicare and Medicaid Services, the inaugural Artificial Intelligence Health Outcomes Challenge initially received hundreds of applications. CMS chose only 25 submissions, the University's among them, to execute their proposed strategies.

A few years ago, in order to significantly reduce unplanned readmissions to the hospital, the University initiated efforts to develop a cutting-edge yet easily accessible solution to this widespread problem. Bommae Kim, senior data scientist for the University Health System, began pursuing remedies for the epidemic of readmissions in 2018.

"Usually, after a patient was discharged, they couldn't manage their disease for some reason, so we're trying to figure out what that reason is and help," Kim said.

The University Health System's data science team found that three percent of patients at the University account for 30 percent of readmissions within the first 30 days following release from the hospital, while the majority of the remaining 70 percent return within a year. After identifying the need to decrease such adverse events, data scientists in the University's Health System, such as Jason Adams, turned to artificial intelligence to target key factors that contribute to a patient returning unnecessarily to the hospital.

"The purpose is to take this amount of information and in an automated way to tell that a person is at risk and what is the course of action that can best help that patient," Adams said.

Kim acts as project leader alongside a team of data scientists and information technology personnel. Overseen by Jon Michel, director of data science for the University Health System, the researchers produce models that help predict the likelihood of readmission and subsequently provide actionable advice for physicians.

Only a year or so later in 2019, CMS announced a competition to tackle the same challenge. CMS directed participants to employ the computing power of artificial intelligence to construct a model that accurately and efficiently flags patients at risk of returning to the hospital for non-routine treatments. More than 300 applicants submitted proposals during the launch stage of the challenge.

The University was one of only 25 groups selected to advance to the next stage, vying with organizations such as IBM, Deloitte and the Mayo Clinic for the $1 million grand prize and utilization by the CMS Innovation Center to determine payment and service delivery strategies.

"We're doing this for our U.Va. patients, but it would be nice to win the competition because then we can deploy our approach at the national level," Kim said. "We believe in our approach."

For this phase of the competition, CMS distributed Medicare claims data to the remaining teams. Claims from all across the country provide the opportunity to fine-tune the University's model with data from outside the University Health System. According to Application Systems Analyst Programmer Angela Saunders, the supplemental details will prove beneficial for the University's models.

Saunders did point out challenges with the millions of rows of data, which require extensive resources to simply store in an environment suitable for manipulation. Furthermore, inconsistencies lingered in the dataset from year to year, requiring the feature engineering team to sift and sort through the tables, standardizing entries and column headers, which detail the traits associated with each claimant.

"It's not just a little data," Saunders said. "We have exhausted a lot of resources just to get the data to consistency. Each year, things change just a little bit and so just getting it into a consistent format is a lot of the battle."
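
To illustrate the kind of standardization work Saunders describes, here is a minimal Python sketch, assuming hypothetical yearly claim files and column names (none of these identifiers come from the team or from CMS):

    import pandas as pd

    # Hypothetical mapping from year-specific column names to one canonical schema.
    COLUMN_ALIASES = {
        "BENE_ID": "beneficiary_id",
        "BeneficiaryID": "beneficiary_id",
        "CLM_ADMSN_DT": "admission_date",
        "AdmissionDate": "admission_date",
    }

    def load_claims(path):
        """Load one year's claim file and normalize its headers and date types."""
        df = pd.read_csv(path)
        df = df.rename(columns=COLUMN_ALIASES)
        df["admission_date"] = pd.to_datetime(df["admission_date"])
        return df

    # Stack several years into one consistent table (hypothetical file names).
    claims = pd.concat(
        [load_claims(f"claims_{year}.csv") for year in range(2013, 2018)],
        ignore_index=True,
    )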

Based on the team's assessments, much of the feature engineering portion of the project, at least the preliminary round of it, has been completed. The next step involves transporting data to Rivanna, the University's high-performance computing system, and fitting predictive models to the data. Data scientist Rupesh Silwal, who helps design and evaluate multiple iterations of the modeling architecture, noted the importance of not only systemizing the entries, but also of ensuring sensitive medical data remains anonymous.

"The feature engineering team has cleaned the data, made sure everything makes sense from year to year and that all of the sensitive information is scrubbed so we can move the data to this other computing infrastructure," Silwal said. "Part of our effort has been focused on getting the data in there and using it to set up a modeling environment to see if we can make predictions."
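
As a rough illustration of what such scrubbing might involve (a generic sketch, not the team's actual pipeline), direct identifiers can be dropped and patient keys replaced with salted one-way hashes before the data leaves the clinical environment:

    import hashlib

    # Hypothetical secret salt kept inside the clinical environment; never shipped with the data.
    SALT = b"replace-with-a-secret-salt"

    def pseudonymize(patient_id: str) -> str:
        """Replace a patient identifier with an irreversible salted hash."""
        return hashlib.sha256(SALT + patient_id.encode()).hexdigest()

    def scrub(record: dict) -> dict:
        """Drop direct identifiers and pseudonymize the patient key (field names are invented)."""
        cleaned = {k: v for k, v in record.items() if k not in {"name", "address", "ssn"}}
        cleaned["patient_key"] = pseudonymize(cleaned.pop("patient_id"))
        return cleaned

    print(scrub({"patient_id": "12345", "name": "Jane Doe", "ssn": "000-00-0000",
                 "address": "1 Main St", "readmitted_30d": 1}))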

Specifics regarding the modeling techniques and factors employed in creating the University's unique solution could not be revealed at this time, due to the proprietary nature of the ongoing competition. In broad terms, factors such as past utilization of certain hospital services, like the Emergency Department, or chronic conditions contribute to the initial formulation of the model, as they are indicators of high potential for readmission, data scientist Adis Ljubovic said.

"Those are fairly well-known and we're using that as the baseline, but we also have the secret sauce ones that are preventable," Ljubovic said.

Other variables intended to capture financial, transportation and lifestyle information for patients also augment the standard determinants of readmission, while electronic medical records from the University provide documentation of trends in the University's own health system.
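
For readers unfamiliar with how such a baseline is typically built, here is a generic scikit-learn sketch of a readmission classifier trained on utilization and chronic-condition features. The feature names and toy data are invented for illustration; the team's actual model and its "secret sauce" features remain undisclosed.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical toy feature table: one row per hospital discharge.
    df = pd.DataFrame({
        "ed_visits_past_year":       [0, 3, 1, 5, 0, 2, 4, 0],
        "inpatient_stays_past_year": [0, 2, 0, 3, 1, 1, 2, 0],
        "chronic_condition_count":   [1, 4, 2, 6, 0, 3, 5, 1],
        "readmitted_30d":            [0, 1, 0, 1, 0, 0, 1, 0],  # label: readmitted within 30 days
    })

    X = df.drop(columns="readmitted_30d")
    y = df["readmitted_30d"]

    # Fit a simple baseline classifier on the well-known risk factors.
    model = LogisticRegression().fit(X, y)

    # Rank discharges by predicted risk so clinicians can review the highest-risk patients first.
    df["predicted_risk"] = model.predict_proba(X)[:, 1]
    print(df.sort_values("predicted_risk", ascending=False))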

Another distinctive aspect of the University's proposal is its commitment to a solution that clinicians accept. Senior data scientist John Ainsworth and Ljubovic, along with other members of the University's project, assert that the healthcare industry generally adopts a conservative mindset with regard to artificial intelligence modeling in hospitals. However, the University Health System's data scientists have consulted with doctors at the University hospital about introducing tools physicians trust and can easily adopt.

"Data science techniques bring with them the potential for accuracy, for bringing in and ingesting larger datasets," Ainsworth said. "The richness of the data gets recorded and putting up the information in front of clinicians that can help them take meaningful action is what we're going for. If we can ... give them some sense of where preventative strategies might lie, that can support them in their goal of caring for patients."

Several members of the team agreed a complex issue like hospital readmission calls for a collective approach. In the University Health System's data science department, that can be a rare occurrence, several data scientists remarked, as their separate assignments often occupy most of their time. Senior Business Intelligence Developer Manikesh Iruku expressed appreciation for the chance to learn more about data science techniques, and others shared similar experiences when it came to exploring different subfields of data science.

Saunders and data scientist Valentina Baljak emphasized the confidence this collaboration has given the group to tackle new tasks.

"Frequently for us, we have our own projects and it's a one-person project," Baljak said. "Occasionally you collaborate with someone, but I don't really think we had a project that involved all of us at the same time. That has been a great experience."

Currently, competitors are finalizing their project packages to submit to CMS in February. CMS plans to winnow the field down to the seven best proposals by April. Regardless of the outcome, the University's team plans to put their results and newly developed models into practice within the University's Health System.

"In particular for healthcare, in some ways the best is yet to come in the data science world," Ainsworth said. "The future is bright for data science in healthcare."


You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place – Chemistry World

Janelle Shane | Wildfire | 2019 | 272pp | £20 | ISBN 9781472268990

"How many giraffes can you see?" "Too many to count." This exchange seems unlikely, unless you are an artificial intelligence algorithm trained on a dataset in which giraffes were somewhat overrepresented.

As Janelle Shane explains in her book on how artificial intelligence works and why it's making the world a weirder place, accidental over-inclusion of giraffes in image collections used to train AIs is exceedingly likely. The presence of a giraffe is so rare and exciting that you are almost certain to take a photo.

An optical physicist, Shane documents such eccentricities of AI on her blog and Twitter feed, which she has now expanded into this delightful book. The title comes from an experiment she ran to see if an AI could generate human-acceptable pick-up lines. The results were, as you can see, mixed.

This is the crux of Shane's highly compelling argument about AI: its danger doesn't come from exceeding intelligence (human-like AI remains firmly within the realm of science fiction) but from the very weird things that narrowly focused algorithms do. Like the self-driving car algorithm that identified a sideways-on lorry as a road sign, causing a fatal accident. As artificial intelligence becomes ever more deeply embedded in our modern digital lives, it behoves us all to understand it better and know its limitations and failings.

I really loved this book, and, if you like your serious science accompanied by very cute cartoon illustrations, you will too. Shane's explanations are not only laugh-out-loud hilarious but also highly accessible. Reading the book moved me to go do my own experiments in AI weirdness, like playing with predictive text on my smartphone and chatting with virtual chat bots about giraffes. I feel much better informed as a result.

This book features in our book club podcast, which you can listen to here.


Cambridge Science Festival examines the effects and ethics of artificial intelligence – Cambridge Network

Artificial intelligence features heavily as part of a series of events that cover new technologies at the 2020 Cambridge Science Festival (9-22 March), which is run by the University of Cambridge.

Hype around artificial intelligence, big data and machine learning has reached fever pitch. Drones, driverless cars, films portraying robots that look and think like humans. Today, intelligent machines are present in almost all walks of life.

During the first week of the Festival, several events look at how these new technologies are changing us and our world. In AI and society: the thinking machines (9 March), Dr Mateja Jamnik, Department of Computer Science and Technology, considers our future and asks: What exactly is AI? What are the main drivers for the amazing recent progress in AI? How is AI going to affect our lives? And could AI machines become smarter than us? She answers these questions from a scientific perspective and talks about building AI systems that capture some of our informal and intuitive human thinking. Dr Jamnik demonstrates a few applications of this work, presents some opportunities that it opens, and considers the ethical implications of building intelligent technology.

Artificial intelligence has also created a lot of buzz about the future of work. In From policing to fashion: how the use of artificial intelligence is shaping our work (10 March), Alentina Vardanyan, Cambridge Judge Business School, and Lauren Waardenburg, KIN Center for Digital Innovation, Amsterdam, discuss the social and psychological implications of AI, from reshaping the fashion design process to predictive policing.

Speaking ahead of the event, Lauren Waardenburg said: "Predictive policing is quite a new phenomenon and gives one of the first examples of real-world data translators, which is quite a new and upcoming type of work that many organisations are interested in. However, there are unintended consequences for work and the use of AI if an organisation doesn't consider the large influence such data translators can have.

"Similarly, AI in fashion is also a new phenomenon. The feedback of an AI system changes the way designers and stylists create and how they interpret their creative role in that process. The suggestions from the AI system put constraints on what designers can create. For example, the recommendations may be very specific in suggesting the colour palette, textile, and style of the garment. This level of nuanced guidelines not only limits what they can create, but it also puts pressure on their self-identification as a creative person."

The technology we encounter and use daily changes at a pace that is hard for us to truly take stock of, with every new device release, software update and new social media platform creating ripple effects. In How is tech changing how we work, think and feel? (14 March), a panel of technologists look at current and near-present mainstream technology to better understand how we think and feel about data and communication. With Dr David Stillwell, Lecturer in Big Data Analytics and Quantitative Social Science at Cambridge Judge Business School; Tyler Shores, PhD researcher at the Faculty of Education; Anu Hautalampi, Head of social media for the University of Cambridge; and Dex Torricke-Barton, Director of the Brunswick Group and former speechwriter and communications for Mark Zuckerberg, Elon Musk, Eric Schmidt and United Nations. They discuss some of the data and trends that illustrate the impact tech has upon our personal, social, and emotional lives as well as discussing ways forward and what the near future holds.

Tyler Shores commented: "One thing is clear: the challenges that we face that come as a result of technology do not necessarily have solutions via other forms of technology, and there can be tremendous value for all of us in reframing how we think about how and why we use digital technology in the ways that we do."

The second week of the Festival considers the ethical concerns of AI. In Can we regulate the internet? (16 March), Dr Jennifer Cobbe, The Trust & Technology Initiative, Professor John Naughton, Centre for Research in the Arts, Social Sciences and Humanities, and Dr David Erdos, Faculty of Law, ask: How can we combat disinformation online? Should internet platforms be responsible for what happens on their services? Are platforms beyond the reach of the law? Is it too late to regulate the internet? They review current research on internet regulation, as well as ongoing government proposals and EU policy discussions for regulating internet platforms. One argument put forward is that regulating internet platforms is both possible and necessary.

When you think of artificial intelligence, do you get excited about its potential and all the new possibilities? Or rather, do you have concerns about AI and how it will change the world as we know it? In Artificial intelligence, the human brain and neuroethics (18 March), Tom Feilden, BBC Radio 4 and Professor Barbara Sahakian, Department of Psychiatry, discuss the ethical concerns.

In Imaging and vision in the age of artificial intelligence (19 March), Dr Anders Hansen, Department of Applied Mathematics and Theoretical Physics, also examines the ethical concerns surrounding AI. He discusses new developments in AI and demonstrates how systems designed to replace human vision and decision processes can behave in very non-human ways.

Dr Hansen said: "AI and humans behave very differently given visual inputs. A human doctor presented with two medical images that, to the human eye, are identical will provide the same diagnosis for both cases. The AI, however, may on the same images give 99.9% confidence that the patient is ill based on one image, whereas on the other image (that looks identical) give 99.9% confidence that the patient is well.

"Such examples demonstrate that the reasoning the AI is doing is completely different to the human. The paradox is that when tested on big data sets, the AI is as good as a human doctor when it comes to predicting the correct diagnosis.

"Given the non-human behaviour that cannot be explained, is it safe to use AI in automated diagnosis in medicine, and should it be implemented in the healthcare sector? If so, should patients be informed about the non-human behaviour and be able to choose between AI and doctors?"

A related event explores the possibilities of creating AI that acts in more human ways. In Developing artificial minds: joint attention and robotics (21 March), Dr Mike Wilby, lecturer in Philosophy at Anglia Ruskin University, describes how we might develop our distinctive suite of social skills in artificial systems to create benign AI.

"One of the biggest challenges we face is to ensure that AI is integrated into our lives, such that, in addition to being intelligent and partially autonomous, AI is also transparent, trustworthy, responsive and beneficial," Dr Wilby said.

He believes that the best way to achieve this would be to integrate it into human worlds in a way that mirrors the structure of human development. Humans possess a distinctive suite of social skills that partly explains the uniquely complex and cumulative nature of the societies and cultures we live within. These skills include the capacity for collaborative plans, joint attention, joint action, as well as the learning of norms of behaviour.

Based on recent ideas and developments within Philosophy, AI and Developmental Psychology, Dr Wilby examines how these skills develop in human infants and children and suggests that this gives us an insight into how we might be able to develop benign AI that would be intelligent, collaborative, integrated and benevolent.

Further related events include:

Bookings open on Monday 10 February at 11am.

Image: Artificial intelligence, by Gerd Altmann
