Editorial

Interview, Damian Collins MP

As the Western world was still reeling from the shock outcomes of key elections held in 2016, the United Kingdom was one of the first countries to launch a parliamentary inquiry into 'fake news'. Beginning in January 2017, the House of Commons Digital, Culture, Media and Sport (DCMS) Committee was tasked with defining the concept of 'fake news', identifying its impact on public understanding of the world, and investigating the incentives created by online advertising platforms for the spread of disinformation. The DCMS Committee quickly found itself at the forefront of investigating the role of Cambridge Analytica and alleged Russian interference in the Brexit referendum.

Damian Collins MP, chair of the DCMS Committee, gave an interview to Emily Taylor, Editor of Chatham House's Journal of Cyber Policy, on 25 October 2018.

Thank you for agreeing to be interviewed for the Journal. For readers who might not be familiar with the Digital, Culture, Media and Sport (DCMS) Committee's interim report on disinformation and ‘fake news’, what were the key findings and recommendations?

For me, the inquiry fell into two distinct areas. The first is disinformation and fake news as a content problem online: it's a form of bad content that people share. The content is misleading and can be used to affect the way people think about issues and vote. Here, the question is: what responsibilities do the tech companies have to act against disinformation, to try and block people who are spreading it, and to take down this content when it has been discovered? Should tech companies have a liability in law to act against harmful and misleading content, and if they fail to act, should some sanction be imposed on them? In Germany, content that breaches the German hate speech laws is taken down, and there are penalties for platforms which fail to act. So this is the first area, although in some ways content that is 100% false is a relatively small part of the problem; it is still a problem, but a small one.

The second area is the way data-targeting works to help hyper-partisan and misleading content spread online. This doesn't just happen: people are targeted with this information. And people often don't understand where the information is coming from or why they are being targeted by it. They don't necessarily have the tools to question the source of information and whether they should believe it. So the second part is about transparency to empower the user so they understand more about where information is coming from and what weight of consideration they should give to it.

The DCMS Committee's work coincided with the Cambridge Analytica scandal. Can you reflect on the learning curve that you and other members of the committee went through regarding how the social media platforms and their data-targeting infrastructures work?

The whole committee certainly went on a journey with the issue. When we started off, we were much more focused on the content side of the problem – that disinformation was a form of bad content and what rules we might put in place to deal with it. What became clear was that data-targeting – making sure specific audiences of people see what you want them to see – was a really important part of the way in which companies like Cambridge Analytica work and a really important part of the way disinformation spreads. For example, when it was discovered that the Russians had been running adverts in the American presidential election, we saw they weren't just doing it in a scattered kind of way; they were using Facebook advertising tools to make their advertising spend more efficient. So we started to see this was a really important part of the problem, and the question became: how do people build up the datasets to target and identify people?

The work of Cambridge Analytica was particularly interesting in the way it had acquired data about Facebook users from a team at Cambridge University led by Dr Aleksandr Kogan. Their academic research suggested that, by analysing the different pages people liked on Facebook, you could develop a more accurate psychological profile of a person than if you asked their closest friend about them. Now, this sort of approach was important to Cambridge Analytica because it wasn't just using datasets on consumers that are publicly available through companies like Experian, but also using Facebook data for psychological profiling. Instead of surveying people to create these psychological profiles, Cambridge Analytica could analyse Facebook data much more quickly. So, it's a really good way to understand what the most effective message in an election campaign will be.
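To make that profiling step concrete, here is a minimal illustrative sketch in Python. It is not the Cambridge team's code or Cambridge Analytica's system: the synthetic data, the ridge-regression model and the trait being predicted are all assumptions, chosen only to show how a matrix of page likes can be mapped to a personality score.

```python
# Minimal sketch: predicting a personality trait from page likes.
# Entirely synthetic data; not the actual study's method or dataset.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_users, n_pages = 500, 100
likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)  # 1 = user liked page

# Pretend some hidden weighting of pages drives a trait such as openness.
hidden_weights = rng.normal(size=n_pages)
trait = likes @ hidden_weights + rng.normal(scale=0.5, size=n_users)

# Fit a simple linear model: trait score as a function of like patterns.
model = Ridge(alpha=1.0).fit(likes, trait)
print(f"In-sample R^2 on the toy data: {model.score(likes, trait):.2f}")
```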

But what Cambridge Analytica could do was not just use this data to build up psychological profiles, but create working sets; these are groups of people who have a certain common personality type. They could then go to Facebook and say: here is a group of Facebook accounts that we want to target in our advertising, what we also want to do is buy a ‘lookalike’ audience to find another 100,000 people who live in this area and who are like the people in this group. So, by analysing Facebook data, you can build up psychological profiles, create groups for targeting specific messages, and then go to Facebook and ask them to find many more people just like them. This is an incredibly powerful tool: you don't need profiles on every voter, you just need enough in your working set to be able to go out and target everyone.
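The 'lookalike' step can be sketched in the same spirit. Facebook's actual lookalike-audience system is proprietary, so the cosine-similarity measure and the toy data below are assumptions; the sketch only shows the core idea of averaging a seed group's profile and ranking the wider population by similarity to it.

```python
# Illustrative lookalike expansion: rank a population by similarity
# to a seed audience's average profile. Not Facebook's actual method.
import numpy as np

def lookalike(seed, population, k):
    """Return indices of the k population rows most similar (cosine)
    to the centroid of the seed group's like-vectors."""
    centroid = seed.mean(axis=0)
    norms = np.linalg.norm(population, axis=1) * np.linalg.norm(centroid)
    sims = population @ centroid / np.where(norms == 0, 1.0, norms)
    return np.argsort(sims)[::-1][:k]

# Toy data: rows are users, columns are pages (1 = liked).
seed = np.array([[1, 1, 0, 1], [1, 0, 1, 1]], dtype=float)
population = np.array([[1, 1, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1]], dtype=float)
print(lookalike(seed, population, k=2))  # the two closest 'lookalikes'
```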

Did the DCMS Committee have concerns around the ethics of Cambridge Analytica's approach?

I think the ethics question is really interesting. In terms of Cambridge Analytica buying data from developers, that was clearly wrong and in breach of Facebook's rules, so there are lots of questions around how and why Facebook let that happen. I think there are also ethical questions about Facebook's political advertising tools. If a user has chosen not to identify their political preference on Facebook, I question whether it's right that Facebook can help advertisers guess what that user's political views are and then let them target that user. Users can't stop Facebook from advertising to them; they can't say 'I am not interested in political advertising', and they can't stop receiving targeted political advertisements. So the whole way this works throws up a lot of really important questions about Facebook's advertising model and users' data being exploited in unexpected ways.

What you've described – what Cambridge Analytica had in terms of user data – was just a fraction of what Facebook themselves have on their two billion monthly users. How far do you think the fundamental business models of the free-to-use platforms can be sustained in a democratic society? Or are they a threat to it?

I think one of the reasons people took so much interest in the Cambridge Analytica scandal was that, until then, I don't think people really understood just how much data was gathered – and it didn't just relate to what you did on Facebook. I think people reasonably understand that Facebook is a free service and it makes money by selling advertising space against you as a user, based on the things you do on Facebook. But the idea that Facebook gathers information about what you do when you're not on Facebook – on other websites you visit – that it gathers data about the online activities of non-Facebook users, or that it kept a record of Android users' text messages and phone calls, might come as a surprise to many users.

Facebook gathered a huge amount of data people didn't expect, and then allowed developers to use that data without, really, knowing how it was being used. Although Facebook will always say they have clear policies on developer use of data, I think the story really questions the robustness of those policies, how they are enforced and whether Facebook had any effective knowledge or control. Indeed, with the Cambridge Analytica story, which was first reported in the Guardian at the end of 2015, it wasn't until March 2018 that Facebook started to take enforcement action. Facebook's business model is about driving advertising revenue by increasing the amount of data they gather from users and the amount of time those users spend on the site.

I think both those things have been demonstrated to be part of the problem the company faces, now that we know that user data is easily accessible by people outside the platform who can use it for things that users might not have given their consent to.

Given what you’ve described of Facebook's response to your Committee's inquiries, will the company be able to address the wide range of challenges currently facing democracy? Are technical ‘Artificial Intelligence’ solutions to disinformation feasible? Or do they give more executive authority to private companies?

I think the solution has to be within a regulated environment. Legislation could set out obligations on the platforms to act against harmful or misleading content and to demonstrate that policies designed to protect against such content are in place. Examples could include an effective policy of taking down content that falls within a clear category they are expected to act against, once they are notified of it by a user, or the tech companies demonstrating that they have the technological capability to identify this content even faster.

But there should be a role for a regulator, and on the content side I think that regulator should be Ofcom, which could evaluate whether the companies are meeting their obligations and, if not, impose sanctions on companies that fall short. This is similar to the rules that already exist in the broadcasting sector in this country; what we need to do is bring the tech sector into the same sort of regulated environment that other modern industries find it normal to work in.

We have talked a lot about advertising and selecting audiences to target particular messages, but what about the work by foreign states to game the algorithms for organic distribution? How much change should we expect in that fundamental business model – curating content individually for the user's known preferences – outside of advertising? How far do you think the terms of service of the tech platforms comply with relevant consumer protection legislation?

I think this is an interesting area, and I think there has been a lot of focus on advertising content largely because that was what was discovered in relation to Russia. But it's quite possible that paid-for advertising is a very small part of the problem: the problem is much more the development and creation of networks of pages and accounts, and the use of Facebook groups – some of which have hundreds of thousands of members – as a very effective tool for spreading information to a wider network. We certainly need to look more at how those networks are set up and developed. Sheryl Sandberg said at a recent hearing in the US Congress that Facebook had deleted more than a billion fake accounts within a six-month period. This would suggest that people are doing this all of the time, and on a much bigger scale than Facebook can identify.

There needs to be some analysis of this because fake accounts underlie many of these problems, and yet according to Facebook's terms of service, fake accounts shouldn't be there. Officially, Facebook has said that about 3% of the accounts on Facebook at any one time shouldn't be there. But if the number was much higher than that, then there could be an investigation from the Competition and Markets Authority to look at the mis-selling of advertising: if Facebook is selling these audiences as real people, but they haven't properly interrogated whether they are real people or not, is that a form of mis-selling? From a competition and consumer protection perspective, are they doing enough to prevent harm being caused on the site?
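A rough back-of-the-envelope comparison shows why the two figures sit uneasily together; the two-billion monthly-user figure quoted earlier in the interview is used purely for illustration, not as an audited estimate.

```python
# Back-of-the-envelope comparison of the two figures quoted above.
monthly_users = 2_000_000_000          # approximate figure quoted in the interview
snapshot_fakes = monthly_users * 0.03  # Facebook's stated ~3% at any one time
deleted = 1_000_000_000                # "more than a billion" in six months (Sandberg)

print(f"3% snapshot: about {snapshot_fakes:,.0f} fake accounts")
print(f"Deleted in six months: {deleted:,}")
print(f"Deletions are roughly {deleted / snapshot_fakes:.0f}x the snapshot figure")
```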

With most of the tech companies, and most of the problems you could look at, those things are already in breach of their own terms of service. They don't enforce those terms effectively enough because there are only disincentives to doing so: enforcement could make the company worth less, and it costs a lot of money. Because no one has told them they've got to do it, there is no incentive, so they don't. And that's part of the problem.

When you look towards your next election, to what extent will you be using social media to try and communicate and target your message to your voters?

Social media is a good way for local communities to organise around issues that affect them. In general elections, increasingly, it is a tool used by people to find out what issues the candidates are standing on. I think its real power is in a localised form. From my experience, different social media work in different ways. Twitter tends to be more of a tool used by journalists and politicians talking about the national issues they’re involved with. Facebook is a tool that is often very community-focused, because local areas will organise themselves on Facebook. I use social media as a means for making it easier for my constituents to get in touch with me about issues they’re concerned about, and to find out what I’m doing on a constituency level.

The DCMS Committee's interim report on fake news has raised a lot of concerns about the way this works, but actually, advertising by the mainstream political parties has not really been the problem. The parties identify who they are, they tend to advertise mainly during election periods, and it's really transparent. The changes that Facebook have announced on political advertising are right, because users should have a right to check who the advertiser is, what other ads they're running and, in general terms, why they are receiving a particular message.

Looking ahead to the next election, I am sure that social media will continue to be important to me as an individual candidate, and to other candidates. There is no doubt that the national parties put an increasing amount of their own campaign investment into social media, and I don't see this trend changing. What needs to happen is greater transparency over electoral advertising, and proactive steps from the platforms themselves to reduce abuse of their technologies to manipulate public opinion.
