In this blog post, we present the issue of data targeting in a video, focusing on the Facebook, GSR and Cambridge Analytica allegations.
This is based on Section 3 of the Disinformation and ‘fake news’ report and is the third in a series of seven posts.
About the report
The scope of this report:
To study ‘the spread of false, misleading, and persuasive content, and the ways in which malign players, whether automated or human, or both together, distort what is true in order to create influence, to intimidate, to make money, or to influence political elections.’
A specific focus of the report is to address concerns related to the “political use of social media”.
The report includes seven sections:
1. Definition of fake news and how to spot it
2. Role of tech companies – definition, legal liabilities
3. The issue of data targeting, based on the Facebook, GSR and Cambridge Analytica allegations
4. Political campaigning
5. Russian influence in political campaigns
6. Co-ordination of Departments within Government
7. Digital literacy
What is the issue of data targeting?
In this section, the report highlights the extent of data misuse and the issue of data targeting, involving various organisations, such as Facebook, Global Science Research (GSR), Cambridge Analytica (CA), and Aggregate IQ (AIQ), as well as the alleged sharing of data in the EU Referendum.
It can be difficult to wrap your head around this very complex issue. There are many actors involved and the way they collaborated can be confusing. The report goes into a significant amount of detail on how events unfolded. In places where the information is incomplete, or yet to be discovered, the report signposts that.
We found that a video was the easiest way to explain the issue of data targeting. The script can be found below if you prefer to read it. Otherwise, enjoy the 7-minute video!
Video on the issue of data targeting
The video script on the issue of data targeting
“Hello, this is Mihaela Gruia from Research Retold. Over the last two weeks, we created a visual summary of the first two sections of the report titled Disinformation and ‘fake news’ published on 29 July 2018. The first visual summary captured the definition of fake news. The second visual summary captured the role of social media companies in perpetuating disinformation.
Section 3 focuses on The issue of data targeting, involving Facebook, Global Science Research, Cambridge Analytica, and Aggregate IQ, as well as the alleged sharing of data in the EU Referendum.
This section of the report is really complicated to explain in a static format, which is why I chose to make a video.
Arguably more invasive than false information is the relentless targeting of hyper-partisan views.
This plays to the fears and prejudices of people, in order to alter their voting plans.
Here is how the report explains the data targeting unfolded:
CA was founded in 2012 with backing from Robert Mercer, the US hedge fund billionaire and Donald Trump donor, who also became its majority shareholder.
CA was born out of the already established SCL consultancy, which had engaged in political campaigns around the world.
CA’s primary purpose would be to focus on data targeting and communications campaigns for carefully selected Republican Party candidates in the USA.
Steve Bannon is linked to CA as its former Vice President. He also served as White House chief strategist at the start of President Donald Trump’s term, having previously been chief executive of Trump’s general election campaign. He introduced Cambridge Analytica to Leave.EU.
In the words of Alexander Nix, the CEO, CA tried to match “voters with issues and policies that they care most about”.
Cambridge Analytica used ‘OCEAN psychological analysis’ to identify issues people might support and how to position arguments to them. OCEAN categorises people based on their ‘Openness’, ‘Conscientiousness’, ‘Extraversion’, ‘Agreeableness’ and ‘Neuroticism’.
Facebook’s involvement in the data targeting
The Facebook data breach in 2014, and the role of Cambridge Analytica in acquiring this data, has been the subject of intense scrutiny. Ultimately the data breach originated at the source of the data, at Facebook.
‘Friends permissions’ were a set of permissions on Facebook between 2010 and 2014 that allowed developers to access data about users’ friends, without the knowledge or express consent of those friends.
One such developer, Aleksandr Kogan, was a Research Associate and University Lecturer at the University of Cambridge in the Psychology Department.
Kogan began collaborating “directly” with Facebook in 2013. He states that Facebook “provided him with several macro-level datasets on friendship connections and emoticon usage.” The data was to be used for academic purposes.
Dr Kogan then set up his own business, Global Science Research (GSR), in the spring of 2014 and developed an App, called the GSR App, which collected data from users at an individual level.
GSR & Cambridge Analytica’s involvement in the data targeting
It was at around this time as well that Dr Kogan was in discussions about working on some projects with SCL and Cambridge Analytica, to see whether his data collection and analysis methods could help target the audiences of digital campaigns.
On 4 June 2014, they signed a contract.
So what happened under this contract?
They collected data from 200,000 participants through paid surveys. Recruits had to download the App before they could collect payment, and the App would download information about the user and their friends.
After data was collected, models were built using psychometric techniques which used Facebook likes to predict people’s personality scores.
The models were refined and validated by testing them on new users.
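To make that modelling step more concrete, here is a minimal, purely illustrative sketch of how page likes could be turned into a personality-score prediction with a simple linear model. This is not the actual GSR/CA code, whose details are not public; all pages, users and scores below are invented, and a real system would use far more data and heavy regularisation.

```python
import numpy as np

# Toy data: rows are users, columns indicate whether the user
# "liked" each of four hypothetical pages (1 = liked, 0 = not).
likes = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

# Survey-derived "Openness" scores for the same users (0-100 scale).
openness = np.array([72.0, 41.0, 80.0, 35.0, 60.0])

# Fit one linear model per trait via ordinary least squares:
# likes @ weights ~ trait score. A real pipeline would repeat
# this for all five OCEAN traits.
weights, *_ = np.linalg.lstsq(likes, openness, rcond=None)

# Predict the trait score for a new user from their likes alone.
new_user = np.array([1.0, 0.0, 1.0, 0.0])
predicted_openness = float(new_user @ weights)
print(round(predicted_openness, 1))
```

The key point the sketch illustrates is the report’s claim: once the model is fitted on survey participants, likes alone are enough to score anyone, including the friends who never took a survey.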
New work, old agreement
Dr Kogan and SCL knew that ‘scraping’ Facebook user data in this way breached the company’s then recently revised terms of service. The work was also carried out under the terms of the 2013 agreement with Facebook, even though the purpose of using the data had changed from academic research to supporting political campaigns.
Dr Kogan was then required under the contract to provide SCL with data sets that matched predictive personality scores to named individuals on the electoral register in 11 states. These were Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North and South Carolina, Oregon and West Virginia.
In August 2014 Dr Kogan worked with SCL to provide data on individual voters to support US candidates being promoted by the John Bolton Super Pac in the mid-term elections in November 2014. Psychographic profiling was used to micro-target adverts at voters across five personality groups.
According to an SCL presentation, there was a 39% increase in awareness of the issues.
SCL also claimed that there was a 30% uplift in voter turnout, against the predicted turnout, for the targeted groups.
Facebook data breach
Back at Facebook, there was no proper audit trail of where the data went.
Facebook claimed that Kogan violated his agreement to use the data solely for academic purposes. In March 2018, Facebook suspended Kogan from the platform, characterising his activities as “a scam” and “a fraud”.
The ICO decided to publish a Notice of Intent to issue a monetary penalty to Facebook of £500,000, “for lack of transparency and security issues relating to the harvesting of data”, under the Data Protection Act 1998.
It should be noted that, if the new Data Protection Act 2018 had been in place when the ICO started its investigation into Facebook, the ICO’s Notice of Intent would have totalled £315 million (4% of global turnover). We explain this in a little more detail in the second visual summary.
Where does AIQ come into play regarding data targeting?
Aggregate IQ is a Canadian digital advertising web and software development company. Jeff Silvester is one of the owners.
Mr Silvester explained that their first work for SCL was in 2014, in Trinidad and Tobago. From that work, AIQ started developing the Ripon tool (owned by SCL), which let political groups target users with ads tailored to their particular personality traits.
Now what is the connection to the EU referendum?
Well, the report explains that Alexander Nix and SCL pitched for work in the Referendum to Leave.EU, but they were rejected.
However, in July 2018, the ICO confirmed that AIQ had access to the personal data of UK voters, given by the Vote Leave campaign.
There have been data privacy concerns raised about another campaign tool used by AIQ, called uCampaign. The mobile App applies a gamification strategy to political campaigns: users win points for campaign activity, such as sending text messages and emails to their contacts and friends.
The main investor in uCampaign is the American hedge fund magnate Sean Fieler, who is a close associate of Robert Mercer.
The App was used in Donald Trump’s presidential campaign, and by Vote Leave during the Brexit Referendum.
How did the app work?
Users downloaded the App and agreed to share their address books, including phone numbers and emails. The App then sent the data to a third-party vendor, which looked for matches to existing voter file information. These matches gave clues as to what might motivate a specific voter. This matters because users and their contacts were then targeted with ads based on their personality.
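As an illustration of that matching step, here is a hypothetical reconstruction of how an uploaded address book could be joined against a voter file using normalised phone numbers as the key. The vendor’s actual pipeline is not public; every record and field name below is invented, and real systems also match on emails and names.

```python
# Illustrative only: matching uploaded contacts to a toy "voter file".

def normalise_phone(raw: str) -> str:
    """Keep digits only so formatting differences don't block a match."""
    return "".join(ch for ch in raw if ch.isdigit())

# Hypothetical voter file keyed by normalised phone number.
voter_file = {
    "5551230001": {"name": "A. Voter", "district": "7"},
    "5551230002": {"name": "B. Voter", "district": "3"},
}

# Contacts shared from a user's address book, in varied formats.
address_book = ["(555) 123-0001", "555.123.0002", "555-999-0000"]

matches = [
    voter_file[key]
    for key in (normalise_phone(number) for number in address_book)
    if key in voter_file
]
print(len(matches))  # prints 2: two of the three contacts match
```

The sketch shows why sharing an address book exposes more than the user’s own data: each matched contact becomes a named, targetable voter record.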
This is the end of Section 3 of the report. Feel free to have a read, as it is available for free online.
By looking at this map in its entirety, it is hopefully now clearer how these organisations were connected.
Next week I will be back with Section 4 on ‘Political campaigning’ which goes into more depth about issues around the EU referendum.”