Artificial intelligence is helping us talk to animals (yes, really) – Wired.co.uk

Each time any of us uses a tool such as Gmail, where there's a powerful agent to correct our spelling and suggest sentence endings, there's an AI machine in the background, steadily getting better at understanding language. Sentence structures are parsed, word choices understood, idioms recognised.

That exact capability could, in 2020, start to grant us the ability to speak with other large animals. Really. Maybe even sooner than brain-computer interfaces take the stage.

Our AI-enhanced abilities to decode languages have reached a point where they can start to parse languages not spoken by anyone alive. Recently, researchers from MIT and Google applied these abilities to the ancient scripts Linear B and Ugaritic (a precursor of Hebrew) with reasonable success (no luck so far with the older, as-yet undeciphered Linear A).

First, word-to-word relations for a specific language are mapped using vast databases of text. The system searches texts to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Researchers estimate that all languages can be described with roughly 600 independent dimensions of relationships, where each word-to-word relationship can be seen as a vector in this space. This vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.
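
The counting step can be sketched in a few lines of Python. The corpus, window size and counts below are invented purely for illustration; real systems compress these raw counts into dense vectors with far more text.

```python
from collections import defaultdict

def cooccurrence_counts(sentences, window=2):
    """Count how often each word appears near every other word.

    Each word's row of counts is the raw 'signature' that embedding
    methods later compress into a few hundred dimensions.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[word][words[j]] += 1
    return counts

# A toy two-sentence corpus; any real mapping needs vast databases of text.
corpus = [
    "the whale sings in the deep ocean",
    "the whale dives in the deep sea",
]
counts = cooccurrence_counts(corpus)
print(counts["whale"]["the"])  # "whale" appears within two words of "the" twice
```

The bigger the corpus, the more stable each word's signature becomes, which is why these methods demand vast databases of text.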

These vectors obey some simple rules. For example: king - man + woman = queen. Any sentence can be described as a set of vectors that in turn form a trajectory through the word space.
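
The famous analogy can be sketched with toy vectors. The three-dimensional values below are invented for illustration only (real embeddings use hundreds of learned dimensions); the point is that vector arithmetic plus a nearest-neighbour search recovers the analogous word.

```python
import numpy as np

# Toy 3-dimensional "word vectors" (illustrative values, not learned).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def nearest(vec, vocab, exclude=()):
    """Return the vocabulary word whose vector is closest to vec."""
    return min(
        (w for w in vocab if w not in exclude),
        key=lambda w: np.linalg.norm(vocab[w] - vec),
    )

# king - man + woman lands near queen in this toy space.
analogy = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(analogy, vectors, exclude={"king", "man", "woman"}))  # queen
```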

These relationships persist even when a language has multiple words for related concepts: the famed near-100 words Inuits have for snow will all occupy similar regions of the dimensional space, because each time someone talks about snow, it will be in a similar linguistic context.

Take a leap. Imagine that whale songs communicate in a word-like structure. Then, what if the relationships whales have among their ideas have dimensional structures similar to those we see in human languages?

That means we should be able to map key elements of whale songs to dimensional spaces, and thus to comprehend what whales are talking about, and perhaps to talk to and hear back from them. Remember: some whales have brain volumes three times larger than adult humans', larger cortical areas, and lower but comparable neuron counts. African elephants have three times as many neurons as humans, but in very different distributions than those seen in our own brains. It seems reasonable to assume that the other large mammals on Earth, at the very least, have thinking, communicating and learning attributes we can connect with.

What are the key elements of whale songs and of elephant sounds? Phonemes? Blocks of repeated sounds? Tones? Nobody knows, yet, but at least the journey has begun. Projects such as the Earth Species Project aim to put the tools of our time, particularly artificial intelligence and all that we have learned in using computers to understand our own languages, to the awesome task of hearing what animals have to say to each other, and to us.

There is something deeply comforting in the thought that AI language tools could do something so beautiful, going beyond completing our emails and putting ads in front of us to knitting together all thinking species. That, we perhaps can all agree, is a better and perhaps nearer-term ideal to reach than brain-computer communications. The beauty of communicating with them will then be joined to the market appeal of talking to our pet dogs. (Cats may remain beyond reach.)

Mary Lou Jepsen is the founder and CEO of Openwater. John Ryan, her husband, is a former partner at Monitor Group


Quantum leap: Why we first need to focus on the ethical challenges of artificial intelligence – Economic Times

By Vivek Wadhwa

AI has the potential to be as transformative to the world as electricity, by helping us understand the patterns of information around us. But it is not close to living up to the hype. The super-intelligent machines and runaway AI that we fear are far from reality; what we have today is a rudimentary technology that requires lots of training. What's more, the phrase 'artificial intelligence' might be a misnomer, because human intelligence and spirit amount to much more than what bits and bytes can encapsulate.

I encourage readers to go back to the ancient wisdom of their faith to understand the role of the soul and the deeper self. This is what shapes our consciousness and makes us human, what we are always striving to evolve and perfect. Can this be uploaded to the cloud or duplicated with computer algorithms? I don't think so.

What about the predictions that AI will enable machines to have human-like feelings and emotions? This, too, is hype. Love, hate and compassion aren't things that can be codified. That is not to say that a machine interaction can't seem human; we humans are gullible, after all. According to Amazon, more than 1 million people had asked their Alexa-powered devices to marry them in 2017 alone. I doubt those marriages, should Alexa agree, would last very long!

Today's AI systems do their best to replicate the functioning of the human brain's neural networks, but their emulations are very limited. They use a technique called deep learning: after you tell a machine exactly what you want it to learn and provide it with clearly labelled examples, it analyses the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.
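
The labelled-examples training loop can be sketched with a single artificial neuron, the basic unit that deep learning stacks into many layers. This is a minimal illustrative sketch in plain Python; the data, learning rate and epoch count are invented for the example, not any production system.

```python
def train(examples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs by nudging the
    weights toward each labelled example the neuron gets wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Apply the learned weights to a new, unseen example."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Clearly labelled examples: points with x1 + x2 > 1 belong to class 1.
labelled = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1), ((2, 1), 1)]
w, b = train(labelled)
print(predict(w, b, 2, 2))  # a new point clearly in class 1 -> 1
```

As the article notes, the system is only as good as its examples: feed it more (and more representative) labelled data and the learned boundary improves, but it never "understands" what the two classes mean.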

Herein lies a problem, though: an AI system is only as good as the data it receives. It is able to interpret the data only within the narrow confines of the supplied context. It doesn't understand what it has analysed, so it is unable to apply its analysis to other scenarios. And it can't distinguish causation from correlation.

AI shines in performing tasks that match patterns in order to obtain objective outcomes. Examples of what it does well include playing chess, driving a car on a street and identifying a cancer lesion in a mammogram. These systems can be incredibly helpful extensions of how humans work, and with more data, the systems will keep improving. Although an AI machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists. And it won't be able to empathise with a patient in the way that a doctor does. This is where AI presents its greatest risk, and what we really need to worry about: the use of AI in tasks that may have objective outcomes but incorporate what we would normally call judgement. Some such tasks exercise much influence over people's lives. Granting a loan, admitting a student to a university, or deciding whether children should be separated from their birth parents due to suspicions of abuse all fall into this category. Such judgements are highly susceptible to human biases, but they are biases that only humans themselves have the ability to detect.

And AI throws up many ethical dilemmas around how we use technology. It is being used to create killing machines for the battlefield: drones which can recognise faces and attack people. China is using AI for mass surveillance, wielding its analytical capabilities to assign each citizen a 'social credit' score based on their behaviour. In America, AI is mostly being built by white people and Asians, so it amplifies their built-in biases: it misreads African Americans, and it can lead to outcomes that prefer males over females for jobs and give men higher loan amounts than women. One of the biggest problems we are facing with Facebook and YouTube is that you are shown more and more of the same thing based on your past views, which creates filter bubbles and a hotbed of misinformation. That's all thanks to AI.

Rather than worrying about super-intelligence, we need to focus on the ethical issues around how we should be using this technology. Should it be used to recognise the faces of students who are protesting against the Citizenship (Amendment) Act? Should India install cameras and systems like China has? These are the types of questions the country needs to be asking.

The writer is a distinguished fellow and professor at Carnegie Mellon University's College of Engineering, Silicon Valley.

This story is part of the 'Tech that can change your life in the next decade' package.


Science in the 2010s: Artificial Intelligence – Labmate Online

Artificial intelligence (AI) has transformed the face of computing, making its mark on everything from cybersecurity to modern medicine. There's no sign of a slowdown, with analysts predicting that by 2022 worldwide spending within the AI industry will soar to US$79.2 billion. There have been some incredible breakthroughs over the past decade, with some of the most significant highlighted below.

2010: Deep learning advances

While the foundations for deep learning date to the 1980s, researchers George Dahl and Abdel-rahman Mohamed broke new ground in 2010 when they developed advanced deep learning speech recognition tools. This paved the way for more deep learning advances, focusing on anything from facial recognition to machine translation.

In 2011 a question-answering computer system developed by IBM's DeepQA project made headlines when it outplayed Brad Rutter and Ken Jennings, two of the most successful contestants to take part in the popular American game show Jeopardy!

Artificial intelligence took another stride forward in October 2011 when Apple launched Siri, its signature personal assistant. From reciting the weather forecast to plotting a route on Google Maps, Siri is now used by hundreds of millions of people around the world.

In 2015 Google successfully pulled off "the world's first fully driverless ride on public roads" using its Waymo model. The passenger was a blind American man called Steve Mahan, a close friend of principal engineer Nathaniel Fairfield.

Perceptions of artificial intelligence were challenged in 2018 when an original portrait created by a machine using Generative Adversarial Network technology sold for more than US$400,000 at a Christie's auction. The portrait was created using a two-part algorithm that analysed image data from 15,000 portraits dating from the 14th to 20th centuries.

Artificial intelligence won more headlines in 2019 when Google launched an AI system that can detect lung cancer with more accuracy than human radiologists. The system is powered by deep learning and uses an algorithm to analyse computed tomography (CT) scans and predict the risk of developing the disease.

Want to know more about the most significant scientific breakthroughs of 2019? 'Addressing the challenges of the reproducibility crisis with improved automation and protocol sharing', introducing the latest technology from robotics company Andrew Alliance, spotlights the advanced laboratory automation and software now entering labs around the world.


In 2020, let's stop AI ethics-washing and actually do something – MIT Technology Review

Last year, just as I was beginning to cover artificial intelligence, the AI world was getting a major wake-up call. There were some incredible advancements in AI research in 2018, from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile illustrations of the harm these systems can cause when they are deployed too hastily.

A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits on dark-skinned people, but tech giants continued to peddle them anyway, to customers including law enforcement. At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.

In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It's hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people's privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?


But talk is just that; it's not enough. For all the lip service paid to these issues, many organizations' AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We're falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people's belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers (content moderators, data labelers, transcribers) who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI, from community groups, policymakers, and tech employees themselves. Several cities, including San Francisco and Oakland, California, and Somerville, Massachusetts, banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies' use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field's runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both touched and surprised by how many of the keynotes, workshops, and posters focused on real-world problems, both those created by AI and those it could help solve.

So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn't lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so they could one day help us solve some of our toughest challenges.

AI, in other words, is meant to help humanity prosper. Let's not forget.


designboom TECH predictions 2020: AI and the third era of computing – Designboom

tech predictions 2020: scientists have already used it to explore our ancient origins, beer lovers have rigged taps with it to pour the perfect pint, and now humankind wants to use it to find everything out about everyone. artificial intelligence is making rapid strides, and there's talk of a new evolution that could fundamentally change life on our planet.

this month, LA-based studio ouchhh created a 3 billion-pixel digital monolith combining AI with data learnt from the pre-pottery neolithic period

in 2020, artificial intelligence will reach new heights. robotic scanners that serve the perfect pizza seem pretty schoolboy in comparison to its future potential. the AI of tomorrow uses its political prowess instead of its culinary skills: it will decide who should be hired and who should be fired, who is guilty and who is innocent, deciding the fate of entire nations.

earlier this year, domino's announced the launch of a new pizza-checking robot which uses a mix of AI, advanced machine learning and sensor technology to identify pizza type, even topping distribution and correct toppings

deepfakes refer to manipulated videos, or other digital representations produced by sophisticated artificial intelligence, that generate fabricated images and sounds that appear to be real. these falsified videos are becoming increasingly sophisticated and accessible, with the danger of making people believe something is real when it is not. it's just in time for the 2020 US election, where some fear it could be used to undermine the reputation of political candidates by making a candidate appear to say or do things that never actually occurred.

in june, a doctored video of mark zuckerberg was uploaded to instagram, raising concerns over falsified content

gartner, an IT research and advisory company, reports that by 2024, the world health organization will identify online shopping as an addictive disorder. that might be in part because by then, as the same report suggests, AI which is able to identify emotions will influence more than half of the online advertisements you see. by 2020, it is predicted that 85% of customer interactions in retail will be managed by artificial intelligence. new technology could monitor customers' reactions to brands, pricing and store layouts, helping retailers make decisions based on consumer responses. it's kind of like market research but 24/7: if emotions read negative, it might be time to lower prices, and if shoppers appear confused, it might be time for a redesign.

just a couple of months ago, researchers at openAI developed a robotic arm that uses artificial intelligence to solve a rubik's cube one-handed

there's no hiding your emotions in the future. newly developed artificial emotional intelligence puts power in the hands of big businesses with an incentive to know exactly what's on your mind and when. it might not change the way we shop entirely, but the use of AI to detect consumer emotions will surely change the way we are sold to. imagine a hyperpersonalized shopping experience curated by humanoid sales assistants whose ability to understand what you want or need happens before you've even had time to articulate it.

in september, designboom reported on a new PSA in america that used artificial intelligence to create a composite portrait of hunger by scanning the faces of americans

the biggest concern of the future is whether brands will be transparent and, if so, how. consumers will demand an education on how their data is being collected and used. AI that can scan human beings for their emotional state is already being used to vet job seekers, test criminal suspects for signs of deception, and set insurance prices. but just because AI can read our emotions, should it? research center AI now institute has called for new laws to restrict the use of emotion-detecting technology, for fear that it is built on markedly shaky foundations. we just can't rely on AI doing its job properly when people's lives are at stake. with AI around there's no room for human error, but there's still plenty of space for machine-made mistakes.

israel-based startup seedo is developing fully automated, commercial-scale cannabis farms, for example

but it's not all bad: AI is set to drive sustainability in 2020 and beyond. companies will use it to measure environmental and social effects within their businesses, automatically optimizing operations for sustainability. that includes operating responsibly, reducing waste and making smarter transportation strategies.

kieron marchese | designboom

dec 27, 2019


Edward Snowden Sets the Record Straight – Truthdig

I generally care relatively little for the personal lives of people of note, but something that always nagged me just slightly about Edward Snowden's 2013 revelations that the NSA was spying on pretty much everyone was: how angry was his girlfriend?

After all, we all knew Snowden had a girlfriend, since it didn't take long for the media to uncover that her name was Lindsay Mills, that (much to their infinite delight) she had photos of herself in lingerie, and that her significant other had suddenly turned up in Hong Kong halfway through a business trip and started to fill the world in on U.S. mass surveillance without running it by her first.

It must have been quite the shock.

I therefore found it uncharacteristically satisfying that Permanent Record included a chapter composed of extracts from Lindsay Mills' diary. It was genuinely interesting to get an insight into how someone might cope with this very unusual situation being thrust upon them, in a more candid tone than we generally get from the guarded Snowden throughout the rest of the book. These excerpts were all the more necessary, as this really is a book about the personal: no further details of public significance are released in this title, which is a work primarily of analysis and reflection.


The general schema of the book is precisely what one might expect: Snowden's childhood in North Carolina and the D.C. Beltway; his decision to enlist in the U.S. Army following 9/11; his roles as a defense contractor in the United States, Switzerland, and Japan; his ultimate decision to blow the whistle on mass surveillance and subsequent temporary asylum in Russia.

Prior reviews have been accompanied by a few snarky remarks: The New Yorker, for example, claimed that Snowden saw the early internet as a techno-utopia where boys and men could roam free, although I cannot recall Snowden making such exclusionary gendered distinctions. Presumably it complements Malcolm Gladwell's earlier piece on why Snowden is not comparable to Pentagon Papers leaker Daniel Ellsberg (since he is a "hacker", not a "leaker"), in flat contradiction to Ellsberg's own defense of Snowden published in The Washington Post:

Many people compare Edward Snowden to me unfavorably for leaving the country and seeking asylum, rather than facing trial as I did. I don't agree. The country I stayed in was a different America, a long time ago. [] Snowden believes that he has done nothing wrong. I agree wholeheartedly.

So eager has everyone been to snipe and show their moral fiber as good little citizens that they have rarely found the time to dig into the main themes of Permanent Record. Rather than spilling more facts, Snowden's aim seems to have been to contextualize his previous disclosures and explain their significance. Thus, while many parts of the book are truly gripping (a goodly portion of it details how Snowden removed information detailing surveillance from his workplace under a pineapple field in Hawaii and arranged to share it with documentary filmmaker Laura Poitras and journalist Glenn Greenwald in Hong Kong), it is the author's underlying themes and motivations that truly deserve our attention.

It is apparent early on that Snowden pursued two main purposes in releasing Permanent Record: 1) to convince skeptics that he acted for the good of the country and to defend the U.S. Constitution (indeed the book's release was timed to coincide with Constitution Day on September 17), and 2) to educate readers about technology, or at least that part of it related to mass surveillance.

Early on, while still describing his '80s childhood and initial fascination with what he then termed "Big Masheens", Snowden recalls imbibing lessons from his Coast Guard father Lonnie about the potential for technology to bring its own form of tyranny with it. According to Snowden:

To refuse to inform yourself about the basic operation and maintenance of the equipment you depended on was to passively accept that tyranny and agree to its terms: when your equipment works, you'll work, but when your equipment breaks down you'll break down, too. Your possessions would possess you.

Technological tyranny is a theme Snowden comes back to later in the book, reflecting on Mary Shelley's Frankenstein (he was, after all, posted to Geneva, where part of the novel's action is set).

That may sound a bit cliché, until you learn that Snowden's sales partner during his time at Dell literally nicknamed the cloud system they developed for the CIA "Frankie", because "it's a real monster". That wasn't just a private office joke, but how he tried to convince the agency to greenlight the project during a sales pitch. It's these little pieces of not-exactly-earth-shattering, but still pleasantly informative, detail that help the book keep ticking over and compensate for the often distant tone of its author. Snowden frequently describes his feelings, but rarely does he make the reader feel them.

Snowden also lavishes attention on explaining how he interacted with the internet as a child and teen. While many have interpreted these lengthy passages as either naïve utopianism or pathetic addiction, his point is much more important than that. I'm much of an age with Snowden and therefore remember many of the things he recalls: phreaking, personal homepages, chat rooms, and the days when you could just ask perfect strangers for advice and they'd give it to you. What I think I hadn't fully considered before reading this book is that at least some people in this rather narrow cohort absorbed some knowledge of modern technology. Despite being nowhere near as interested in computers as Snowden (and having a positive antipathy to Big Masheens), I learned how to build circuits and program, from Basic to Java, as part of my general education. That gave me the ability to learn more later in life and to form a better (if still far from expert) understanding of the nuts and bolts of computing infrastructure.

By contrast, many people today know how to use tech, but they don't understand it, just as few people who use money understand economics. And just as an ability to grasp finance creates an enormous power differential, so does the ability to understand tech.

Snowden is at pains to redress this balance, methodically explaining everything from SD cards, to TOR, to smart appliances, to the difference between http and https, to the fact that when you delete a file from your computer, it doesn't actually get deleted. He bestows the same attention to detail on these subjects as he does describing the labyrinthine relationships of his various employers and the intelligence agencies, and this clarity helps turn the book into a relatable story about issues rather than a jargon-stuffed, acronym-filled nightmare.

Only by understanding how technology works on a basic level, so argues Snowden, can journalists ask the right questions of power, and regulators regulate effectively. He strengthens this case by noting examples of times when major announcements (the construction of enormous data storage facilities; a CIA presentation in which the speaker literally admonished the journalists present to think about their rights) were simply ignored.

They did not make waves, Snowden thinks, because journalists and regulators simply didn't realize the significance. There is, as he says repeatedly in the book, a lag between technology and regulation.

It is an issue that others in a position to know, like Elon Musk and Stephen Hawking, have pointed out. Everything from advances in robotic warfare to artificial intelligence to total surveillance aided by facial recognition is dismissed as alarmist until well after it is happening, when it's then dismissed in true Nineteen Eighty-Four style with a shoulder shrug as inevitable.

And when that doesnt happen, tech tends to be treated as an entirely new phenomenon requiring heavy-handed, and often counterproductive, regulation.

While it is entirely true that people are bullied on social media, for example, we shouldnt forget that people were bullied in real life in the past, too. And threatened. And the victims of fraud. And defamation. And child abuse. As a result, we shouldnt lose sight of the fact that we often do already have a well-developed arsenal of remedies that can be adjusted for the internet era without the need to jettison constitutional values in the name of protection and safety.

There are ways to apprehend criminals effectively without the total take of information that intelligence agencies so lazily demand. Vigilante pedophile-hunting groups have been quite successful in luring would-be predators to justice by posing as minors on social media sites. While it is beyond question that such activities should be left to properly trained and authorized police forces not righteous citizens who can do as much harm as good it does show that the individualized pursuit of crime can still be very effective in the social media age. Indeed, in regards to some crimes, like forms of child abuse, detection may well be easier than in earlier times with many culprits unable to resist the temptation to groom potential victims online.

Rather than veering between complacency and panic, we should be thinking about the various ways in which to update our legal framework for the modern digital age, something Snowden's revelations about the warrantless mass surveillance programs he uncovered have made particularly urgent.

The part of the law most significant to Snowden, and which he quotes in the book, is the U.S. Constitution's Fourth Amendment, which reads:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

According to Snowden, the NSA sought to circumvent the Fourth Amendment by creating a huge database of all online activity (the "permanent record" of the book's title), ideally stored in perpetuity, which it would only "search" when "[the organization's] analysts, not its algorithms, actively queried what had already been automatically collected". Intelligence agencies also argued that because individuals have already given permission to third parties, particularly telecommunications companies, to host their data, that data no longer resided in the private sphere and thus constitutional privacy had been forfeited.

After all, the magic of what feels private (sitting in front of your computer or scrolling through your phone at home) can only happen by connecting to distant servers. Those who support a "living document" interpretation of the Constitution may see this as an eventual opportunity to expand the scope of the terms "papers, and effects" for the modern era, something Snowden himself suggests; originalists might argue that only a constitutional amendment can suffice to fully address privacy rights in a digital age.

Some of the actions that Snowden describes (monitoring people through their webcams in their homes via XKEYSCORE) would certainly seem like clear violations if committed against US citizens or persons on US soil under present wording and interpretations. Others, like hunting through the vast reams of information we sign over to private companies, may prove more difficult. Justice Scalia, the nation's best-known originalist prior to his death in 2016, reportedly refused to be drawn on whether or not computer data was an "effect" in the sense of the Fourth Amendment at a public lecture in 2014.

In more practical terms, the Court of Appeals for the Second Circuit decided in 2015 (ACLU v. Clapper) that bulk collection was not covered by Section 215 of the Patriot Act, stating in part, "Congress cannot reasonably be said to have ratified a program of which many members of Congress and all members of the public were not aware." That decision was followed shortly by the passage of the USA Freedom Act, under which telecoms companies keep records that law enforcement may then request.

However, it is somewhat doubtful whether legal remedies alone will effectively stop the political-intelligence agency complex that Snowden describes so adroitly in his book. He recalls the panic he witnessed at Fort Meade and outside the Pentagon during 9/11, and later the blame game as politicians emphasized the prevention of terror attacks as the standard for measuring their own competence. Intelligence agencies felt both the horror of having to develop some way to guarantee safety and the power of being able to extort huge budgets from Congress in the interests of doing so. Once an agency has the capability to engage in mass surveillance and is under significant pressure to maintain security, it's difficult to imagine it failing to indulge, regardless of legalities.

Snowden mentions encryption, SecureDrop, and the European Union's General Data Protection Regulation (GDPR) as potential ways for citizens to uphold their own privacy, but I'm less than convinced. Encryption is not readily usable by the average person on an average budget; few people will ever have any reason to use SecureDrop; and I doubt many of the alleged positive effects of the GDPR, which has mainly led to Europeans agreeing to any and every pop-up in order to get to their content ASAP, while introducing barriers to sharing and advertisement for small businesses (precisely not the threat).

In this context, perhaps the "right to be forgotten" (in fairness, now enshrined in Article 17 of the GDPR, although the principle derives from an earlier 2014 court case) is more relevant. After all, Snowden's main fear is the creation of the unforgiving permanent record, where every mistake, minor trespass, and ill-considered comment remains preserved for all time, just waiting to be used against one. Indeed, he contrasts this with the early days of the web, where one could develop opinions freely and cast aside identities that one had outgrown. Snowden regards this freedom as pivotal to development and maturation, as we all tend to curate our lives over the years, forming the identity we want to have at the expense of conflicting past actions.

Despite the fact that he never made it to his intended destination, Ecuador, Snowden remains, much like Ellsberg, a powerful example of a person who blew the whistle on state abuses and not only lived to tell about it, but is living an apparently well-adjusted life. As he lets us know at the end of the book, Lindsay eventually joined him in Moscow, refrained from slapping him silly (as Snowden admits he deserved), and agreed to marry him. It's a fittingly low-key end for a book, and a story, that is more about substance than style.

This article originally appeared on the Los Angeles Review of Books.


If you bought Edward Snowden’s new book for Christmas, your money goes to the U.S. government – National Post

Profits earned from Edward Snowden's new book, Permanent Record, will be taken by the United States government, a judge has ruled, meaning that for anyone purchasing a copy as a Christmas gift, the money won't go to the author.

On Dec. 17, a federal judge concluded that Snowden, the whistleblower who worked for the Central Intelligence Agency and the National Security Agency, had signed contracts requiring that, were he to write or speak about his activities, he submit the text for pre-publication review. Since he didn't, he forfeits all profits from book sales.

The ruling stems from a lawsuit filed in September by the Department of Justice, in which the government announced it would try to recover all proceeds from the book sales.

"The government seeks to recover all proceeds earned by Snowden because of his failure to submit his publication for pre-publication review in violation of his alleged contractual and fiduciary obligations," says a department press release.

In a tweet on Dec. 19, Snowden said, "The government may steal a dollar, but it cannot erase the idea that earned it."

He went on to suggest people gift the book to someone else when they're done reading it.

Snowden had attempted to argue in court that he wouldn't get a fair review of his book from the government; that the government is selectively enforcing the contract agreements; and that the security agreements don't provide the basis for the government's claims against him.

But the court sided against him, saying the contracts were unambiguous and clear, and he broke the rules.

Snowden was the man who, back in 2013, swiped classified documents from a government facility in Hawaii and transported them to Hong Kong, where he then handed them over to journalists from the Guardian, a British newspaper. It became an international scandal as journalists revealed the extent to which U.S. security agencies had been spying on cellphones.


Snowden then relocated to Russia and remains in Moscow. He faces charges in the United States for alleged breaches of the Espionage Act.

The book, published by Macmillan Publishing Group, details the decisions Snowden made along the way and how he got the documents out. (The Post reached out to Macmillan Monday morning, but did not hear back by press time.)

Brett Max Kaufman, Snowden's lawyer, said in a statement to the Washington Post: "It's farfetched to believe that the government would have reviewed Mr. Snowden's book or anything else he submitted in good faith. For that reason, Mr. Snowden preferred to risk his future royalties than to subject his experiences to improper government censorship."

When it announced the lawsuit, G. Zachary Terwilliger, U.S. attorney for the Eastern District of Virginia, said intelligence information should "protect our nation, not provide personal profit."

"This lawsuit will ensure that Edward Snowden receives no monetary benefits from breaching the trust placed in him."



The best of FRANCE 24’s Reporters in 2019 – Reporters – FRANCE 24

Issued on: 27/12/2019 - 11:18 | Modified: 27/12/2019 - 11:18

Over the past twelve months, FRANCE 24's journalists have brought you exclusive stories from the four corners of the world. In this year-end edition, we present seven of our top reports.

First in our year-end special, we bring you an exclusive report on the jihadi brides held in Syria's notorious Al-Hol refugee camp. FRANCE 24 met the women fleeing the final assault on the Islamic State (IS) group. While some of them consider it a relief to get out of the so-called "caliphate", others perceive it as a betrayal of what they believe in.

>> Exclusive: Rare testimony from jihadi brides in Syria as IS group 'caliphate' crumbles

Next, we head to China's northwestern Xinjiang region, where more than one million ethnic Uighurs are believed to be held in internment camps. While authorities call them "re-education through labour camps", victims say the reality is forced indoctrination for Uighurs, who are being held in alarming conditions.

>> Surviving China's Uighur camps

Then, we trace the footsteps of Edward Snowden, who became one of the world's most wanted men after leaking explosive confidential documents on US mass surveillance in 2013. While still on the run in Hong Kong, and before heading to Russia, the whistleblower was sheltered by a group of refugees. Our reporters met Snowden's "guardian angels", who today find themselves in danger.

>> Exclusive: Edward Snowden's guardian angels

When Europe and the US this summer commemorated the D-Day landings of June 6, 1944, FRANCE 24 met some of the surviving American veterans of World War II. They were barely 20 years old when they came to fight on European soil. Seventy-five years later, and as they approach their 100th year, their first-hand accounts are as important as ever.

>> Meeting the last of the US D-Day heroes

Chile this year experienced unprecedented mass protests, as people rose up against the ultra free-market model established during the Pinochet dictatorship, which remains in force today. The model has turned the country into one of the most unequal in the world. In response to the rallies, President Sebastián Piñera and his government have resorted to a violent crackdown reminiscent of the country's former dictatorship.

>> Inside Chile's unprecedented protest movement

We then head to the Kenyan city of Mombasa, the largest port in East Africa, which has become the capital of a new drug trafficking route. Heroin from Asia and cocaine from Latin America now transit through Kenya, before heading to Europe.

>> Kenya's second-largest city becomes world's new drug trafficking hub

Finally, FRANCE 24 brings you an exclusive documentary from war-torn Libya, where we take you to the front lines of the bloody conflict and to the heart of the huge migration crisis currently unfolding there.

>> Libya: The infernal trap


The smartphone tracking industry has been rumbled. Now we must act – The Guardian

When the history of our time comes to be written, one of the things that will puzzle historians (assuming any have survived the climate cataclysm) is why we allowed ourselves to sleepwalk into dystopia. Ever since 9/11, it's been clear that western democracies had embarked on a programme of comprehensive monitoring of their citizenry, usually with erratic and inadequate democratic oversight. But we only began to get a fuller picture of the extent of this surveillance when Edward Snowden broke cover in the summer of 2013.

For a time, the dramatic nature of the Snowden revelations focused public attention on the surveillance activities of the state. In consequence, we stopped thinking about what was going on in the private sector. The various scandals of 2016, and the role that network technology played in the political upheavals of that year, constituted a faint alarm call about what was happening, but in general our peaceful slumbers resumed: we went back to our smartphones and the tech giants continued their appropriation, exploitation and abuse of our personal data without hindrance. And this continued even though a host of academic studies and a powerful book by Shoshana Zuboff showed that, as the cybersecurity guru Bruce Schneier put it, "the business model of the internet is surveillance".

The mystery is why so many of us are still apparently relaxed about what's going on. There was a time when most people had no idea what was happening to their privacy. But those days are gone. We now have abundant evidence of public concern about privacy. A recent Pew survey, for example, found that roughly six in 10 Americans believe it's not possible to go through daily life without having their data collected by both the tech industry and the government, and say that they have no idea what is done with that data by either party. About 80% believe they have little or no control over the data collected by tech companies and that the potential risks of that data collection outweigh the benefits. Yet they continue to use the services provided by corporations of which they are apparently so suspicious.

This is the so-called privacy paradox, and the question is what would be needed to trigger an appropriate shift in regulation and public behaviour. What would it take for governments to take coherent, effective measures to stop the ruthless exploitation of personal data by surveillance capitalists? What would it take for ordinary users to decide to use services with less unscrupulous and opaque business models? What would transform this from a scandal to a crisis that would lead to systemic change?

Earlier this month, in an extraordinary feat of reporting and analysis, the New York Times published an investigation into the smartphone tracking industry that should make it harder for anyone to close their eyes to what's going on. Every minute of every day, everywhere on the planet, dozens of largely unregulated and unknown companies log, via mobile phones, the movements of tens of millions of people and store the information in gigantic data files.

The New York Times obtained one of these megafiles, which it says is "by far the largest and most sensitive ever to be reviewed by journalists". It holds more than 50bn location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of data in this gargantuan file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. It originated from a location tracking company, one of dozens covertly collecting precise movements using software slipped on to mobile phone apps. "You've probably never heard of most of the companies," write the reporters, "and yet to anyone who has access to this data, your life is an open book. They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist's office or a massage parlour."


Weve had stories like this before. In 2011, for example, the German magazine Die Zeit published the findings of an experiment in which the smartphone location data of a Green politician was collected and mapped, effectively yielding a detailed insight into his daily life. But that was just a single example of an individual who wanted to make a political point. The digital cognoscenti were alarmed by this, but the general public just yawned.

The scope of the New York Times study is incomparably wider: 12 million people are tracked in the not-so-distant past. It throws an interesting light on western concerns about China. The main difference between there and the US, it seems, is that in China it's the state that does the surveillance, whereas in the US it's the corporate sector that conducts it, with the tacit connivance of a state that declines to control it. So maybe those of us in glass houses ought not to throw so many stones.

Plus ça change? "Last House on the Left: Following Jeremy Corbyn's Campaign Trail". The extraordinary thing about this essay on the Quietus blog is that it was written four years ago.

Eureka! Nature's summary of a decade of breakthroughs, from gene editing to gravitational waves.

Zuckerfaked. "I created my own deepfake. It took two weeks and cost $552." An Ars Technica report on how Timothy Lee created a deepfake of Mark Zuckerberg giving testimony to Congress (saying what he should have said). It's not perfect, but it was made with off-the-shelf tools.

Read this article:
The smartphone tracking industry has been rumbled. Now we must act - The Guardian

The Rise And Rise Of Mass Surveillance – BuzzFeed News

Giulia Marchi/Bloomberg via Getty Images

A police officer walks past surveillance cameras mounted on posts at Tiananmen Square in Beijing.

We live in a world where school cameras monitor children's emotions, countries collect people's DNA en masse, and no digital communication seems truly private.

In response, we use encrypted chat apps on our phones, wear masks during protests to combat facial recognition technology, and try vainly to hide our most personal information from advertisers.

Welcome to the new reality of mass surveillance. How did we get here?

Wael Eskandar, an Egyptian journalist and technologist, remembers documenting his country's revolution at Cairo's Tahrir Square in 2011. It was known then, he recalls, that people's phone calls were being monitored, and that workers like parking lot attendants and security guards were feeding information back to the police. But few suspected emails or posts on Twitter and Facebook would ever be monitored in the same way, at least not at scale.

The revolution toppled the brutal regime of longtime dictator Hosni Mubarak, but by 2014 the country was under the sway of the equally repressive President Abdel Fattah el-Sisi. Now, Egyptians are being arrested for political posts they made on Facebook, and some have reported having their texts read back to them by police during detention. Demonstrations all but stopped.

In 2019, rare protests did take place in Egypt over government corruption. Demonstrators avoided posting about them on social media, wary of ending up in detention, but ultimately it didn't matter: dozens of people were rounded up anyway.

"It's like there's no space left for us to speak anymore," one woman who had participated in the demonstrations told me earlier this year.

Egypt and dozens of other authoritarian states have increasingly employed mass surveillance technology over the past decade. Where human monitors once had to listen in to phone calls, now increasingly sophisticated voice recognition software can do that at scale, and algorithms scour social media messages for signs of dissent. Biometric surveillance systems like facial and behavioral recognition also make it easier for security services to target large swathes of their population.

Egyptian security forces block the road leading to Cairo's Tahrir Square, Sept. 27.

But mass surveillance is not just the domain of repressive regimes. Companies are using their own forms of surveillance, collecting data to target consumers with ads and conducting biometric screenings to watch their moods and behaviors. In 2012, the New York Times reported Target had figured out a teenager's pregnancy before her father did; now it's using Bluetooth to track your movements as you wander its store aisles. Five years ago, the US Federal Trade Commission called on Congress to regulate data brokers, saying consumers had a right to know what information those brokers held on them. In 2019, these companies remain largely unregulated and hold reams of information about individuals, almost none of which is known to the public.

Powering these surveillance systems is an increasingly complex web of personal data. In 2009, that data might have included your neighborhood and purchasing history. Now it's likely that your most personal qualities, from your facial features to your search results, will be slurped up too. Cross-referencing seemingly inconsequential data from different sources helps companies build detailed and powerful profiles of individuals.

Surveillance systems are being built by some of the world's biggest technology companies, including US tech giants Amazon, Palantir, and Microsoft. In China, companies like SenseTime, Alibaba, and Hikvision (the world's largest maker of surveillance cameras) are moving quickly to corner foreign markets from the Middle East to Latin America. And other players, like Israel's NSO Group, are making it easy for governments all over the world to break into the devices of journalists and dissidents.

This all-seeing surveillance seems straight out of the dystopian fiction of George Orwell's 1984 or Aldous Huxley's Brave New World. But centuries earlier, novelists had imagined surveillance as a cornerstone of utopian societies. As far back as 1771, the French novelist Louis-Sébastien Mercier depicted a futuristic society exemplifying the rational values of the Enlightenment in a hit novel called L'An 2440. This imagined social order was enforced by a cadre of secret police.

For most of modern history, mass surveillance, when it has been implemented, has been laborious and expensive. The Stasi, infamous for spying on the most mundane aspects of East Germans' lives, relied on massive networks of informers and on bureaucrats picking through letters and listening in on phone calls. A friend who grew up in Dresden before the fall of the Berlin Wall once told me she remembered being asked by her kindergarten teacher whether her parents were watching West German TV.

Without this level of human participation, these systems would simply not work. They might function well enough for governments who wanted to monitor individual troublemakers, but when it came to quashing dissent altogether, it was a lot tougher.

In less developed parts of the world, such as Nicaragua and North Korea, state surveillance still works this way. But in richer countries, ranging from democratic societies like the US and the UK to authoritarian ones like China, the burden of conducting surveillance has shifted from humans to algorithms.

This has made surveillance in these places far more efficient for both governments and companies, and as the technology improves and becomes more widespread, it's only a matter of time before the rest of the world adopts similar techniques.

Anti-government protesters demonstrate at the Metropolitan Cathedral during a protest in Managua, Nicaragua, May 26.

In 2012, I wrote an op-ed with the author and journalist Peter Maass arguing that we should think of cellphones as trackers rather than devices for making calls. That idea now seems quaint: of course cellphones, and the apps we download to them, are monitoring our activities. We published the article not knowing that less than a year later, a 29-year-old former NSA contractor named Edward Snowden would leak an unprecedented cache of documents showing some of the true scope of the mass surveillance programs in the US.

Snowden's leaked documents revealed, among many other things, that the NSA was collecting phone records from millions of Verizon customers, and that it had accessed data from Google and Facebook through back doors. In Germany, the intelligence service was also listening in on millions of phone calls and reading emails and text messages, in a surveillance program often compared to that of the Stasi.

By the time Snowden vaulted to fame, hiding out in a hotel in Hong Kong, I had left the US too. I arrived in Beijing to begin work as a journalist for Reuters in late 2012, and fully expected to be the object of some government snooping. After all, there are only a few hundred foreign journalists based in China, a country of more than a billion people, and the things they write are closely scrutinized because of their ability to shape the world's view of China.

At the time, a constant subject of debate among junior reporters over kebabs and beer was whether the government was really keeping an eye on our communications, or if we were too small potatoes to matter. I often joked with an old boyfriend, an American who worked in foreign policy, that somewhere an unlucky state security intern was monitoring our cutesy volley of GIFs and emojis. We imagined our eavesdroppers as disheveled bureaucrats, not as lines of code.

One year, a Chinese police official pointedly commented that my apartment looked cheap and untidy; it was a way to let me know he'd seen the inside of it. On other occasions, police arrived at my door supposedly to check whether my water heater was up to standard, but spent more time eyeing the contents of my bookshelf and asking about my work. My colleagues, like the Financial Times' Yuan Yang, have had private messages on WeChat (the ubiquitous Chinese social app made by tech giant Tencent) quoted back to them by government officials.

But by and large, none of us ever found out definitively whether our flats were bugged, our emails read, our phones monitored. We just acted as if they were.

Snowden was all over the state-run news in China; the story of an American dissident outing the US surveillance system was far too juicy to pass up. To this day, Chinese officials sometimes bring up Snowden and what he revealed about America's surveillance program in response to questions about the Chinese nanny state.

At that time, surveillance seemed like an invisible web: something everyone knew was a problem, but was tough to actually see.

What I never predicted was the expansion of surveillance technology into a form so visible and widespread that it became as much a part of the atmosphere of China as Beijing's infamous smog. Facial recognition cameras, for instance, are now ubiquitous in the country after first appearing in the western region of Xinjiang, where more than a million Uighurs, Kazakhs, and other Muslim ethnic minorities are now in internment camps. The region has become the global epicenter of high-tech surveillance, which the Chinese government has combined with heavy-handed human policing, including officers asking individuals dozens of highly personal questions and plugging the responses into a database. There, police collect data at people's homes, at police stations, and at roadside interrogations to feed into a centralized system called the Integrated Joint Operations Platform, which spits out determinations of whether Muslim citizens should be interned or not.

It is the first example of a government using 21st-century surveillance technology to target people based on race and religion in order to send them to internment camps, where they face torture and other horrific abuses. According to some estimates, it is the largest internment of ethnic minorities since World War II.

The collection of such data for security purposes is often called predictive policing, a technique used in many countries, including the US, to spot the potential for individual criminal behavior in data.

When I visited Kashgar, a city in southern Xinjiang, in the fall of 2017, it felt like catching an uncanny glimpse of a suffocating future, one where DNA collection was mandatory and even filling your car with gas required a scan of your iris.

A demonstrator wears a face mask featuring Chinese President Xi Jinping while shining a light from a smartphone during a protest on Queensway in the Admiralty district of Hong Kong in December.

Since then, much of the technology being used in Xinjiang has been sold to other parts of the world. Companies, and the governments that contract with them, point to the many benign uses of some surveillance tech: security, public health, and more. But there are few places in the world where people have been asked to consent to surveillance tech being used on them. In the US, facial recognition technology is already widely used, and only a handful of cities have moved to ban it, and then only its use by government authorities. Campaigners against mass surveillance systems say it's tough to convince people these technologies are genuinely harmful, especially in places where public security or terrorism are serious problems. After all, digital monitoring is usually invisible, and security cameras seem harmless.

"I don't think people are happy about tech or positive about tech for the sake of it, but they don't know the extent to which that can go wrong," said Leandro Ucciferri, a lawyer specializing in technology and human rights at the Association for Civil Rights in Argentina. "People don't usually have the whole picture."

When, in the course of reporting, I peered at the back ends of surveillance systems that claimed to track individuals by their clothing, their faces, their walks, and their behavior, I wondered how I could continue to do my work in the same way. Could I go out to meet a source for coffee without immediately outing her in front of a camera whose video streams were being parsed by an algorithm?

"The tech developments themselves have enabled the Chinese government to implement its vision," said Maya Wang, senior China researcher at Human Rights Watch and one of the leading authorities on mass surveillance in Xinjiang. "That's why we see the rise of the total surveillance state: because it's now possible to automate much of the surveillance and be able to spot irregularities in streams of data about human life like never before."

What happens to the myriad facets of our private lives (going to a therapy appointment, buying birth control, meeting a date) when it's so easy to monitor us?

What happens when it's our faces, not our phones, that are our trackers?

Surveillance cameras are seen above tourists as they visit Tiananmen Square in Beijing.

Eritrea, a small nation in the Horn of Africa, is one place where the government's approach to monitoring people remains decidedly 20th century. Only 2% of people have access to the internet, largely the urban elite. There's little evidence the government is investing in sophisticated surveillance systems of the kind China uses.

My friend Vanessa Tsehaye, an Eritrean-Swedish journalist and activist, believes deeply in grassroots campaigns for human rights in the country. A recent college grad, she spent her teenage years campaigning for the Eritrean government to free her uncle, the journalist Seyoum Tsehaye, from prison.

Tsehaye is the most relentlessly positive campaigner I know, but even she feels bleak thinking about the rise of the surveillance systems of the future.

"Their main methods of censorship are limiting access to the internet," Tsehaye said. "Eritrea is the most censored country in the world, and despite that, people are slowly but surely mobilizing."

But if you add sophisticated surveillance tech, she said, the government "could do whatever they wanted. It would destroy everything."

Early this year, I met a Nicaraguan scholar at a conference and asked him about protests critical of President Daniel Ortega that had gripped the country. I was curious whether protesters there were concerned about facial recognition.

He told me to search "Nicaragua protests" on Google Images. Sure enough, every photo showed demonstrators covering their faces with handkerchiefs and sunglasses.

A protester destroys a surveillance camera at Wan Chai MTR Station during a pro-democracy march in Hong Kong.

There are many reasons besides facial recognition that protesters might want to cover their faces (tear gas being one of them), but regardless, masks have begun to show up in demonstrations all over the world. In Hong Kong this year, the government has even banned their use. It's one way that people are coping with surveillance in the modern world.

Most demonstrators I've met in my time as a reporter are not activists willing to risk imprisonment for the causes they fight for. Rather, they are ordinary people with jobs, families, and responsibilities. I have wondered how the protest movements of the future will be possible in the presence of newly sophisticated surveillance tech. Would anyone be willing to complain about their leaders online, swap political texts with a friend, or go out and join a street protest if they knew they'd be immediately outed by an algorithm?

"I worry tremendously over whether human beings will have freedom in the future anymore," said Human Rights Watch's Wang. "We used to worry about the age of AI as robots annihilating humans, like in science fiction. I think what's happening instead is that humans are being turned into robots, with the sensory systems placed around cities enabling governments and corporations to monitor us continuously and shape our behavior."

In some parts of the world, anti-surveillance campaigns have picked up steam as the technology has become more ubiquitous. Facial recognition bans are being discussed by politicians across the US, for instance, and the EU passed the GDPR in 2016, a sweeping set of rules aimed at the protection of personal data.

Citizens of authoritarian states, however, have fewer options. What many pro-privacy groups fear is a bifurcated world where citizens of democratic systems have privacy rights that far outpace those of people who live in authoritarian countries.

Eskandar, the Egyptian technologist, believes there is still room for optimism.

"Nonconformity was the fuel of the revolution," he told me by phone. "I've seen it happen. A few people with very few resources have outmaneuvered a state apparatus; it's happened time and time again. I really believe that people who are proponents of freedom rather than fascism can think freely. So there is hope."
