Searching for new ways to stop the spread of false information is an understandable response to recent political developments, but proponents of the strategy are picking the wrong battle if they want to win this vital war
Recent revelations in the UK’s traditional media about Russian attempts to spread mis- and disinformation have been met with aghast intakes of breath. But this shocked response is really British exceptionalism, allied to a pretty incompetent domestic press. Misinformation, disinformation and propaganda spread via social media have been a live concern in global civil society for a long time.

Breathless headlines for a very old story: The Times/The Guardian (via BusinessInsider)
‘Fake news’ has become a co-opted cliché. It’s now more likely to be used by those who create and spread lies – such as the president of the United States – than those who unpick them. For the parts of civil society that have picked up the daunting gauntlet of defending the truth, the phrase is already a bad joke at best.
That doesn’t mean that they’ve given up on trying to solve the problem, though. Attempts to differentiate between misinformation, disinformation, and propaganda have become more sophisticated, and definitions are becoming more precise. According to experts on the topic at the most recent MisinfoCon, held in London in October, there are now more than forty projects engaged in tackling different aspects of the problem, mostly funded by the same few large philanthropic foundations. There is tremendous energy and enthusiasm among the motley community of academics, techies, policy wonks and activists attacking the problem. I consider myself to be part of that broad community.
However, the headlong rush to ‘solve’ this problem is misconceived, for several reasons. In this post I aim to set out what those reasons are and how we, as a community, could redirect our efforts towards more effective and valuable outcomes.
I want to make clear at this point that I’m mostly writing about the situation from an English language perspective and I’m inevitably heavily biased towards the UK and US.
Jump to:
- Excessive focus on misinformation is a mistake
- Of firehoses and water pistols
- Wrangling the tech titans
- What should we do?
1: Excessive focus on one part of the information system is a mistake
At Mozilla Festival, also held in London in October, I attended a highly enjoyable session where the organizers (Melissa Ryan and Sam Jeffers) led several groups in imagining and designing ‘fake news’ articles. What this session proved beyond doubt is how easy it is to create such stories out of thin air. Although no one there had previously tried to write something viral and false, we all came up with compelling ideas that would probably have gained at least some traction.
The rise of misinformation in the current age is not the product of newly creative bad actors. People – especially powerful people – have been lying since the dawn of time. It hasn’t become easier to create false information; it’s become far easier to disseminate it, and to profit from it. That’s an infrastructure problem; to correct it, you have to address the whole system and not merely one part of it.
It’s no surprise therefore that when we came towards the end of that session and began to discuss solutions, we drew a collective blank. Someone tentatively proposed better media education as a solution: that’s an admirable aim, but even if policy-makers acted now (and they aren’t), it will take at least a decade to make a noticeable difference. (I also happen to believe that the breakdown of trust in the media is arguably a consequence of improved media literacy.)
The best outcome of media literacy in a complex, tech-driven information ecosystem is perhaps not restored trust, but more sophisticated and granular distrust. To put it another way, we want distrust (i.e. limits on how far we are willing to accept information based on our experience and available data), not mistrust (i.e. a general cynical unease based on the complexity of modern life).
So attempts to change readers’ habits are long-term and therefore, I’d argue, insufficient as a solution for the immediate problems we face (by which I mean that we are already being governed by people who were elected in part on the basis of false information, at least in the USA and the UK). And attempts to reduce the amount of false information being produced are obviously doomed to fail.
In that context, it’s no wonder that we’ve turned to the kinds of projects that feel the most likely to succeed. But it’s short-sighted to focus all our efforts on the bad information in the system. We need to think much bigger, which brings me to the next point.
2: We can’t fight the falsehood wildfire with the tiny water pistol of truth
I have heard this point made occasionally at large disinfo- and misinformation conferences on both sides of the Atlantic (including MisinfoCon London and the Digital Disinformation Forum held at Stanford back in June). But it doesn’t appear to have sunk in yet.
It seems facile to point this out, but even if we removed all the false information from the 21st century information ecosystem overnight, it still wouldn’t be healthy. That’s because the news industry’s incentives are set up wrong. Far more intelligent people than me have written about the corrosive impact of advertising on journalism, going back many decades.
More recently, the rise of the monopolistic tech titans (Google and Facebook), who now control the vast majority of the advertising market, has accelerated the tendency towards what we sometimes call ‘infotainment’: clickbait headlines leading us into factoid-laden stories of limited value and relevance. This is only likely to continue as newsfeed and search engine algorithms increasingly surface content based on a blend of general popularity, low quality titillation and individual preference.
While there are very clear differences between the large tech firms in terms of their motives, I am not sure that any of them can honestly claim to hold the high ground on this. Perhaps Google’s search team remains committed to giving humans access to high quality information, although Yelp and others may disagree. But YouTube (a natural monopoly and, if you’d forgotten, a Google company) is, by contrast, a nest of vipers, while Google News is still promoting false stories from long-time fakers and trolls like 4Chan.
In short, the business model that sustained for-profit journalism is broken beyond repair. The reporting on Donald Trump during the US election bears this out. Misinformation became a problem towards the end of that campaign, but way before that had occurred, the mainstream news media in the US gave Trump far more publicity than other candidates. This is because they have to compete for attention in an ever-expanding sea of attention seekers. Trump was outrageous, controversial, brash; in short, a ready-made attention-grabbing brand. (This is hardly surprising given his own business dealings have been based on nothing more than his brand – his own name – for decades.)
The question then often arises: what should we do about the broken business model? There is a lingering belief that journalism can become self-sustaining. There are some interesting new attempts to prove that subscription models can provide lasting funding: two that spring to mind are The Correspondent and Jimmy Wales’ latest endeavour, WikiTribune. People will often also point to the increase in subscriptions at established for-profit newspapers like the New York Times, Washington Post and Guardian as evidence that readers are waking up to the threat posed by misinformation.
I think people need to get real about the likely success of such initiatives. Even if something like WikiTribune does get traction, it’s going to take years for such a small organisation to start to produce the kind of investigative reporting that will make a dent, whether by raising standards in the news industry, or by improving transparency in the political landscape.
3: There is no consensus on regulation of the tech giants
While EU regulators have been on the case for a while, a happy recent development is that US legislators have finally begun to grapple with the monopolistic power of the social media platforms that have accelerated the spread of false information. Sadly, their focus is currently taken up mostly with the influence of foreign entities (especially Russia) on elections, rather than the general degradation of information quality that these platforms and their super-dominant technologies are creating.
These are extremely knotty political questions that have no easy answers. In that sense it’s not surprising that the open internet advocacy community has developed little to no consensus on how to approach the issue. This is partly because many in that community are as worried about government intervention (sometimes more worried) as they are about private corporations. These are people who are used to fighting against excessive surveillance by intelligence agencies; their heroes are Snowden and Manning. As such, it doesn’t come naturally to them to petition the government to take action against excessive corporate surveillance.
There’s also the risk that by taking action, governments could actually give more power to the platforms. I’ve been back and forth on the issue of CDA 230, the provision of the US Communications Decency Act that protects the likes of Facebook, Google and Twitter from liability for content posted on their platforms. This is a law that is staunchly defended by freedom of expression activists in the USA.
However, it has arguably backfired in its intention to allow speech to flourish online. It has allowed all sorts of misinformation and hate speech to be posted without placing enough responsibility on these media companies – for that is partly what they are – and it failed to anticipate what happens when, as in other markets, a few dominant players emerge. A law created in 1996, when the internet was an anarchic Wild West of small, intensely competitive players, now looks like an anachronism, protecting the few titans of Silicon Valley.
On the other hand, placing that responsibility on these tech/media companies now, when they are the de facto managers of online speech, both gives them even more power and adds to the risk of censorship. Once you’ve gone down that path, what is there to stop governments imposing similar conditions on other publishers or editor-like websites?
So there’s no easy win here either. The reality is that the threat of government intervention to regulate the market on the basis of competition/antitrust law – perhaps by breaking apart Alphabet and Facebook, which are both collections of monopolies – may be the only thing that goads the companies into taking real action.
So what do possible solutions look like?
1: Good quality information doesn’t come for free. So fund it.
One partial solution – crucially, one where we (global civil society) can shape most or all of the elements – is to pour as many resources as possible into existing success stories, in an attempt to scale them. At the moment, philanthropic attempts to support media are too meagre to do anything but create additional competition, even between non-profit outlets that are openly committed to extraordinary collaboration. The same few foundations funding the same few organisations is all quite cosy, but doesn’t do enough to expand the amount of good quality information flowing through the system.
If we want projects like the Panama Papers or the Paradise Papers to be more than annual events, then reducing the need to compete (or, to put it another way, reducing the barriers to collaboration) and improving the technology used by investigative journalists and civil society accountability organisations should be the two foremost priorities. What characterises the network of organisations that carried out those projects is that it is sufficiently removed from the profit motive and sufficiently focused on social impact that its leaders are willing to forego the benefits that derive from exclusivity.
It follows that the main way we can expand the amount of good quality reporting in the system is to crowd in more funding at a sufficient scale to make a difference. Omidyar Network’s announcement earlier this year of an extra $100 million to address the global ‘trust deficit’ was impressive, but it’s important to realise that the amount allocated to content production is probably only a third of that (it’s not entirely clear where it’s all going).
We need a global fund of sufficient scale that it can accept money from all-comers, including corporate and individual donors who, on their own, would pose too much of a reputational risk to organisations receiving money. This fund, if large enough, could support global public interest media in perpetuity. There are already strong proposals on how to fund this. Governments recovering assets and money on the back of investigative journalism, for example, could tithe some part of the return into a trust. Or, indeed, there could be a levy placed on the tech titans (or voluntarily paid by them), as proposed variously by expert commentators like Emily Bell, Ben Eltham and Steven Waldman.
2: Turn surveillance tech to the people’s advantage.
What really drives the tech titans’ insane profitability is their success in turning surveillance into something people don’t just passively accept, but actively contribute to. On Facebook, Google or Twitter, you are the product. Attention has been tied in an unprecedented way to personalisation, through massive-scale analytics conducted by beautiful, sophisticated mathematical contraptions barely understood even by the people who built them.
By contrast, the world of journalism is barely even living in the 20th century, let alone the 21st, when it comes to technology. It’s telling that the most popular film about investigative journalism since the mid 1970s – Spotlight – depicts an investigation conducted in the early 2000s in which the reporters rely pretty much entirely on shoe leather, phone calls, door-knocking and interviews. It’s extremely trad journalism. It has very little to do with the state of the art of investigative journalism today, which is heavily reliant on mapping relationships between entities found in disparate, large and often incredibly unwieldy datasets.
Because of the dire funding situation for accountability journalism, no single organisation – even in the for-profit media space – has the money to invest in the quality data scientists and software engineers you would need to get the most out of a leak like the Panama Papers or the Paradise Papers. Due to increasing collaboration, those projects have yielded tremendous results, but they could have yielded far more if there were a common set of tools, used by everyone, into which data could be shared.
‘Surveillance tech’ is a scary name for something that is, itself, agnostic. What makes the tech titans scary is their incredible network effects, the impossibility of knowing their motives, and the lack of accountability around what they are and do. If we were to build an open source, publicly available data commons to store datasets relevant to transparency, accountability and democracy – such as all public registers of ownership, all public land registries, and structured data from major leaks – it would vastly accelerate and improve the investigative process. In time, you can also envisage this commons including ‘open source intelligence’ – scraped YouTube videos, tweets, Instagram photos and all the rest.
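To make the data commons idea slightly more concrete, here is a deliberately tiny sketch of the kind of cross-register entity linking it would make routine. Every register name, field and record below is invented for illustration; real entity resolution across leaks and registries is far messier, but the basic pattern of normalising, linking and looking for overlaps is the one such a commons would need to support at scale.

```python
# A minimal, illustrative sketch (not a real system): linking ownership records
# from two hypothetical public registers held in a shared data commons, so that
# a person appearing in both can be connected to all of their companies.
# All dataset names, fields and records are invented for illustration.

from collections import defaultdict

# Hypothetical records as they might arrive from two different registers.
register_a = [
    {"company": "Alpha Holdings Ltd", "officer": "Jane DOE"},
    {"company": "Beta Trading SA", "officer": "John Smith"},
]
register_b = [
    {"entity": "Gamma Invest LLC", "beneficial_owner": "jane doe"},
]

def normalise(name: str) -> str:
    """Crude normalisation so 'Jane DOE' and 'jane doe' match; real entity
    resolution needs fuzzier matching, transliteration and deduplication."""
    return " ".join(name.lower().split())

# Build a simple person -> companies mapping across both sources.
links = defaultdict(set)
for row in register_a:
    links[normalise(row["officer"])].add(row["company"])
for row in register_b:
    links[normalise(row["beneficial_owner"])].add(row["entity"])

# People connected to entities in more than one register are leads worth a look.
for person, companies in links.items():
    if len(companies) > 1:
        print(person, "->", sorted(companies))
```

Even this toy example shows why a shared, structured commons matters: the hard work is not the matching logic but getting the underlying registers and leak data into one place, in one format, where everyone can run queries like this against them.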
This would be a major technological endeavour, but it is not at all beyond the bounds of current software engineers. What it requires is money and a team willing to work for the good of humanity, rather than for their own financial interests, and those of some anonymous VC investors. If philanthropic foundations or enlightened rich people were to pool resources to fund such a project, it could change the world.
3: Stop being squeamish about working together.
One of the things that most characterises different groups within global civil society is a strong sense of identity. That’s not surprising.
There are the investigative journalists, often independent to the point of paranoia; there are the activists, fired up but sometimes without a compelling, evidence-based story to tell; there are the academics and policy people, who have some data and some answers but can’t get a hearing; there are the civic tech people, trying to work with all of the above and explain why things aren’t as simple as they seem; and there are the donors, who must somehow tolerate all of these groups explaining why they deserve their money.
I’m always struck whenever I go to a conference or an event organised by any one of these groups how insular each one is. As someone who has no qualification to be in any one of them, and therefore feels like a perennial outsider, I find myself wondering at what point this might change.
And then, looking outside, there are other gaps: between civil society and government, between government and Silicon Valley; between civil society and Silicon Valley. And this is to say nothing of the deeper social divides that brought us here in the first place: between the powerful and the weak, the ignorant and the informed, the rich and the poor, the young and the old.
The state of the world today – certainly the world of ‘liberal democracy’, the ‘open society’ or whatever you want to call it – seems urgently to require new thinking, new places to convene, new ways to work together. There are a few places where this is happening, or where it might happen. I was proud to work with OCCRP on its formal partnership with Transparency International because it was a serious attempt to bridge one of these many divides. The Open Government Partnership is another important and valuable nexus for such attempts. Encouragingly there also seems to be a serious move towards building and funding interdisciplinary institutes engaged on ethical issues raised by artificial intelligence technology, such as AINow. And I am excited to see what comes of the new Intellectual Forum at the University of Cambridge, led by the brilliant Julian Huppert (for whom I used to work).
But there’s room for far more. This is an area where a small amount of systems thinking allied to a small amount of funding could have huge benefits and concrete outputs.
Get in touch if you want to help.
Cross-posted from Medium