Meet the weaponized propaganda AI that knows you better than you know yourself – ExtremeTech

Posted: March 1, 2017 at 9:15 pm

Is it worse to be distracted by irrelevant ads, or to be monitored closely enough that the ads are accurate but creepy? Why choose? (Why not Zoidberg?) One company called Cambridge Analytica has managed to apply what some are calling a weaponized AI propaganda machine in order to visit both fates upon us at once. And it's all made possible by Facebook.

Cambridge Analytica specializes in the mass manipulation of thought. One way they accomplish this is through social media, particularly by deploying native advertising. Otherwise known as sponsored content, these are ads designed to fool you into assimilating the ad unchallenged. The company also uses Facebook as a platform to push microtargeted posts to specific audiences, looking for the tipping point where someone's political inclination can be changed, just a little bit, for the right price. Much like Facebook games designed specifically for their addictive potential, rather than for any entertainment value, these intellectual salesmen exist solely to hit every sub-perceptual lever in order to bypass our conscious barriers.

Cambridge Analytica is one subsidiary of a UK-based firm called SCL, short for Strategic Communication Laboratories, that does business in psychometrics, an emerging field concerned with applying the big data approach to psychology and the social sciences. SCL also claims secretive but highly paid disinformation and psy-ops contract work on at least four continents. Their CV includes work done on the public dime here in America, training our military for counterterrorism. Also among their services is the euphemistically named practice of "election management." They are riding to fame, or at least better funding, on the coattails of Donald Trump's ascension to the White House, for which they claim no small degree of responsibility.

"If you want certainty, you need scale," their website asserts, and they say they're just the outfit to provide it. Like any business proposition, this is best taken with some skepticism. But turning political tides in favor of the highest bidder's ideology is their whole business model. Their parent company claims to have exerted material influence over elections and other geopolitical outcomes in 22 countries. They, and Cambridge Analytica as their agent, claim to be mindshare brokers of the highest order.

Image source: Cambridge Analytica

Nobody is willing to go on the record and put their name to assertions that the emperor has no clothes, for fear of incurring the wrath of newly powerful Cambridge Analytica board member Steve Bannon, or yanking too hard on the Koch brothers' monetary speech apparatus. It's not clear whether Cambridge Analytica is pulling the strings they say they're pulling, or just really good at knowing what side is going to win. But they definitely have something under their hats.

There are a few fundamental tech applications that underlie what Cambridge Analytica claims it can do. But they all depend on the idea that artificial intelligence isn't some dissimilar alien entity, sprung fully self-actualized from the forebrain of humanity like HAL. AI is an extension of human intelligence, which we accomplish by applying the organization and data-handling power of computers to our own tasks and problems. On a reductionist level, all they're doing at Cambridge Analytica is using more RAM and a rigorous, written-down set of rules to organize and manipulate data that social scientists handled with clipboards and calculators and pencils back in the day. The AI that enables the entire business model is likely an intellectual descendant of Dr. Michal Kosinski's work in the Cambridge University social sciences department (and an illegitimate one, if you ask Kosinski himself). The story reads like a film noir.

It starts with the marriage of Facebook, psychology, and AI. Facebook activity has an uncanny amount of predictive power. Back in the '80s, scientists developed the questionnaire-based OCEAN model of five major psychological traits, still in use today. Michal Kosinski's 2014 PhD project rested on a psychometric Facebook survey called MyPersonality, which added AI to the mix. MyPersonality catalogued participants' Facebook profile information, including social connections and Likes, and also asked the participants to take a Facebook quiz to find out their OCEAN scores. Then it used machine learning to predict their OCEAN scores based on their Facebook activity. With only a person's Facebook likes plugged into a MyPersonality dossier, Kosinski's AI could reliably predict their sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender.

Success rates for Kosinski's prediction algorithm. (Image source)

More data meant a better guess, of course. Seventy likes were enough to make the AI's prediction of a person's OCEAN score better than their friends could manage, 150 made it more accurate than their parents' guess, and 300 likes could do better at predicting a person's OCEAN score than the best human judge of a person: their spouse. More likes could even surpass what a person thought they knew about themselves, by predicting their OCEAN score closer than the person's own best estimate of what their score would be.
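At its core, the prediction step is ordinary supervised learning: fit a model mapping a user-by-Like matrix to a trait score, then apply it to new profiles. Here is a minimal sketch of that idea on purely synthetic data (every number and weight below is invented; Kosinski's actual models and training data are not public in this form):

```python
# Sketch: predicting one OCEAN trait from a binary user-by-Like matrix.
# All data here is synthetic; the real studies used real Facebook profiles.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_likes = 1000, 200
likes = rng.integers(0, 2, size=(n_users, n_likes)).astype(float)  # 1 = user liked page j

# Synthetic ground truth: a trait score that is a noisy linear function of the likes.
true_weights = rng.normal(0, 1, n_likes)
openness = likes @ true_weights + rng.normal(0, 5, n_users)

# Ridge regression via the closed-form normal equations (no ML library needed).
lam = 10.0
A = likes.T @ likes + lam * np.eye(n_likes)
w = np.linalg.solve(A, likes.T @ openness)

# With enough users and likes, predicted scores track the true scores closely.
pred = likes @ w
r = np.corrcoef(pred, openness)[0, 1]
print(f"correlation between predicted and actual trait score: {r:.2f}")
```

The "more likes means a better guess" result falls out naturally: each additional informative column in the matrix shrinks the model's error, which is why 300 likes can out-predict a spouse while 70 merely out-predict a friend.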

It goes the other way, too. To a database, a person's name and entries from their profile are just nodes in an n-dimensional space, and the connections between nodes aren't necessarily directional. You can class individuals by similarities in the data, or you can search the data for individuals who fit into a class. Conceptually, it's no harder than doing an alphabetical sort in an Excel sheet.
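That bidirectional lookup can be sketched directly: the same matrix supports both grouping a person with their nearest neighbors and retrieving everyone who matches a class definition. A toy illustration on synthetic data (the matrix, page indices, and thresholds are all invented for the example):

```python
# Both directions of the lookup described in the text, on one synthetic matrix.
import numpy as np

rng = np.random.default_rng(1)
likes = rng.integers(0, 2, size=(500, 50)).astype(float)  # 500 users x 50 pages

def cosine(a, b):
    """Cosine similarity between two like-vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Direction 1: class an individual by finding their most similar neighbors.
target_user = likes[0]
sims = np.array([cosine(target_user, u) for u in likes[1:]])
nearest = np.argsort(sims)[-5:]  # indices of the 5 most similar other users

# Direction 2: search the data for everyone who fits a class, here defined as
# "liked at least 4 of these 5 (hypothetical) pages".
class_pages = [3, 7, 11, 19, 23]
members = np.where(likes[:, class_pages].sum(axis=1) >= 4)[0]

print(f"{len(nearest)} neighbors found, {len(members)} class members found")
```

Swap the synthetic matrix for real profile data and "find me persuadable voters in Iowa" becomes the same query as "find users like this one."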

Working with the predictive power of Facebook likes and quizzes became Kosinski's stock in trade. Kosinski even used Amazon's Mechanical Turk in some of his research, crowdsourcing his quizzes to probe what made people respond to them. (Spoiler: getting paid helps.) His work earned him a deputy directorship at Cambridge's Psychometrics Centre. It also earned him the attention of SCL. Kosinski told Motherboard that in 2014, a junior professor in his department named Aleksandr Kogan cold-approached him asking for access to the MyPersonality database. Kogan, it turns out, was affiliated with SCL. Kosinski Googled Kogan, discovered this affiliation, and declined to collaborate. But his research and methods were already in the wild, which meant in Kogan's hands.

Kogan founded his own company that contracted with SCL to do psychometrics and predictive analysis, using aggregated Facebook data and a governing AI. At least some of this data came from jobs posted to Mechanical Turk, where participants were paid about $1 in exchange for access to Facebook profile data. Kogan changed his name and moved to Singapore. Kosinski remained deputy director of the Psychometrics Centre until he moved to the States in 2014.

Facebook has been in the news again and again because of the sheer extent of its data collection. One way it gets the information it has is by using a thing called a conversion pixel. You know that stupid social network widget that's on every web page these days, including this one? It's designed to let you like and share a page without having to navigate back to Facebook. It also affords incredible mass surveillance opportunities. Every time you visit a web page with a Facebook share widget, your browser queries one of Facebook's servers for a conversion pixel. That request promptly reports home which link you visited, how long you lingered on the page, whether you scrolled down or signed up or bought anything, and whether you chose to Like or share the page, plus the text of whatever comment you might draft at the bottom using your Facebook profile, even if you delete the text and don't publish the post. Likes already have enough predictive power; between likes and activity, that widget can produce a comprehensive set of metadata on a person's personality.
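The mechanism itself is mundane: a page embeds a tiny image whose URL carries metadata, and the server logs every request for it. This minimal sketch shows the pattern with hypothetical parameter names (`url`, `uid`); it is not Facebook's implementation, just the generic shape of any tracking pixel:

```python
# Generic tracking-pixel server: the page embeds
#   <img src="http://tracker.example/px.gif?url=PAGE&uid=USER">
# and the server records a hit for every request it receives.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

HITS = []  # a real tracker would write to a database keyed on a browser identifier

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The query string tells the tracker which page was visited and by whom.
        q = parse_qs(urlparse(self.path).query)
        HITS.append({"page": q.get("url", [""])[0], "uid": q.get("uid", [""])[0]})
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        # Placeholder bytes; a real tracker returns a full 1x1 transparent GIF.
        self.wfile.write(b"GIF89a")

    def log_message(self, *args):
        pass  # keep the demo quiet
```

Run it with `HTTPServer(("", 8080), PixelHandler).serve_forever()` and every page that embeds the image reports its visitors, invisibly, on load. Real trackers add cookies, referrer headers, and scroll/click beacons on top of this skeleton.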

When logged-in users take Facebook quizzes like Kosinski's, the quiz can ask for permission to scrape any or all of this data out of their Facebook profile and into the hands of any marketer, data analyst, or election management specialist willing to pay for it. Between that and purchasing life history data and credit reports from brokers like Experian, this is how Cambridge Analytica profiles its marks in the first place. In return, at maximum, you get to post a little quizlet thing to your wall, so that you, and all of your friends' data, can learn which Walking Dead character each of you would be.

This is not an exchange of equivalent value.

CORAL!

And then there's microtargeting: the idea that Alice the Advertiser can accurately change the mind of Bob the Buyer based on information Alice can buy.

The notion of microtargeting is not itself new, but what Cambridge Analytica is doing with it is novel. They're using the Facebook ecosystem because it perfectly enables the goal of targeting individuals and using their longer-lasting personality characteristics like a psychological GPS. It all hinges on a Facebook advertising tool called unpublished posts. Among advertisers, these are simply called dark posts.

Normally, when you make a Facebook post, it appears on your Timeline within your current privacy settings; this is true for people and Pages alike. When an advertiser makes a dark post, though, they can choose to serve that post to only a certain subset of users. Nobody sees it but the people the advertiser was targeting. And they're canny about choosing their targets, looking for persuadable voters.

"For example," explained Cambridge Analytica's CEO Alexander Nix in an op-ed last year about the company's work on the Ted Cruz presidential campaign, "our issues model identified that there was a small pocket of voters in Iowa who felt strongly that citizens should be required by law to show photo ID at polling stations."

Almost certainly informed by Kosinski's work on Facebook profiling, Cambridge Analytica used the OCEAN model to advise the Cruz campaign on how to capture the vote on the issue of voter ID. The approach: use machine learning to classify, target, and serve dark posts to specific individuals based on their unique profiles, in order to use this relatively niche issue as a political pressure point to motivate them to go out and vote for Cruz. Later, Cambridge Analytica would use the same approach for the Trump campaign. It's not possible to make a complete count, but various places around the web have claimed that Cambridge Analytica tested between 45,000 and 175,000 different dark posts on the days of the Clinton-Trump debates.

Where do they get all the content to serve? It's difficult to say, because Cambridge Analytica doesn't respond to journalists who ask them about their methods. But the $6 million or so Trump has paid Cambridge Analytica can only pay just so many people for just so long. One journalist has been digging into this issue, and his research strongly suggests that much of the political propaganda surrounding the 2016 election was procedurally generated using machine learning, and then packaged and served to target audiences. As that Facebook widget follows a user around the web, the AI gets better and better at serving the user politically polarizing content she'll click on. Mindshare acquired.

Nix went on: "For people in the Temperamental personality group, who tend to dislike commitment, messaging on the issue should take the line that showing your ID to vote is as easy as buying a case of beer. Whereas the right message for people in the Stoic Traditionalist group, who have strongly held conventional views, is that showing your ID in order to vote is simply part of the privilege of living in a democracy."

"We call this behavioral microtargeting," Nix later told Bloomberg, "and this is really our secret sauce, if you like. This is what we're bringing to America."
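Strip away the branding and the targeting logic Nix describes reduces to a lookup from a predicted personality segment to tailored ad copy. The segment names and messages below paraphrase his own op-ed; the selection code itself is a hypothetical sketch, not Cambridge Analytica's system:

```python
# Dark-post selection as a segment-to-message lookup, per Nix's description.
MESSAGES = {
    "temperamental": "Showing your ID to vote is as easy as buying a case of beer.",
    "stoic_traditionalist": ("Showing your ID in order to vote is simply part "
                             "of the privilege of living in a democracy."),
}

def pick_dark_post(profile):
    """Choose the ad copy for one user from their predicted OCEAN segment."""
    return MESSAGES.get(profile["segment"])

# Two users in the same audience receive different posts on the same issue,
# and neither ever sees what the other was shown.
audience = [
    {"uid": "u1", "segment": "temperamental"},
    {"uid": "u2", "segment": "stoic_traditionalist"},
]
served = {u["uid"]: pick_dark_post(u) for u in audience}
```

The hard part, of course, is the `segment` field: that prediction is exactly what the likes-to-OCEAN models described earlier supply.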

But don't take my word for it; listen to Nix explain his own methods.

If you don't want to opt in to the secret sauce, what can you do?

On the individual level, bluntly, get good at knowing when you're being sold something. Don't reward intellectual salesmanship that you wouldn't tolerate elsewhere. After all, build a better mousetrap, and Nature will build a better mouse.

From the top-down direction, one way is to work to pass strong privacy regulations. They would need to entail meaningful oversight, and consequences with teeth when an organization is found in breach of the law. But they also have to be nuanced, because if the government tries to ban something and that ban is then struck down in court, the loss sets legal precedent just as a win would.

Also, here's a thought experiment: Watching Deadpool from your desk chair is not the same as taking in a late-night show in a theater, with the popcorn and the bass and all that. If pirating the data that can reconstruct a movie is the moral and legal equivalent of stealing the movie from a store, then pirating a model that can be used to reconstruct someone's personality with enough fidelity to predict and alter their behavior without their consent might also be worth legal attention. Can you consent to be misled, and then vote based on that? Our legislature can be sold ideas, and they enact policy by voting. Who's serving dark posts to Congress, and what's in those posts?

"If data feels cold and impersonal," a Cambridge Analytica press release muses, "then consider this: the data revolution is in the end making politics (or shopping) more intimate by restoring the human scale."

That's exactly the problem. It is personal. So much is built on the fact that data can be personal, even when dealing en masse. The salient thing here is that there is an outfit which means to leverage the enormous body of intimately personal data they can gather, in order to conduct large-scale and yet individualized psy-ops for the highest bidder. The stakes they're after are no less than the medium-term fate of nations. Whether or not Cambridge Analytica has done what they claim to have done, Pandora's box is open.
