(The opinions expressed here are those of the author, a columnist for Reuters.)
LONDON, Sept 26 (Reuters) - During the 2016 U.S. election, Facebook says it did not do enough to enforce its standards as rumour and untruth spread. In the three years since, it says it has hired up to 30,000 people to take down inappropriate or extremist content, clamp down on fake accounts and reduce interactions with so-called “fake news” by almost two thirds.
This week, Facebook's chief of government relations and communications announced an important change to that policy. As a new U.S. election looms, he said, the platform would continue to monitor and sometimes remove untruthful, extremist or otherwise harmful content. But he added a twist: those rules would not apply to politicians. If someone holding or running for political office says something that would normally be banned, Facebook's updated "newsworthiness" clause means users will still be able to see and share it.
The announcement by Nick Clegg, a former British deputy prime minister and party leader who now heads Facebook's government relations, showed just what a complex position the firm and other Internet giants now find themselves in. With Facebook and other platforms such as Twitter now central to politics and community relations around the world, the company is increasingly held responsible for the consequences of content posted on it. But it is deeply unsure what to do about that – and, perhaps unsurprisingly, reluctant to find itself in conflict with politicians who may one day be responsible for regulating it.
Much of Clegg’s speech, to the Atlantic Festival in Washington DC, appeared aimed at countering suggestions that Facebook should be broken up entirely. That, he said, would undermine a major U.S. brand that supported well-paid tech jobs and the broader economy. The firm could not and should not become, in effect, a censor of political dialogue and debate, he argued, although it might still sometimes block content it believed risked sparking violence or undermining human rights.
Critics, however, say the platform has been failing on that front for years, particularly in India, where it is widely cited as a factor in rising ethnic violence, notably against Muslims. With some 300 million users, many posting in languages Facebook struggles to monitor, the Indian market has long been a challenge for the firm. According to Internet monitoring organization Equality Labs, 93 percent of content in India reported as Islamophobic, extremist or inciting violence remained on the platform afterwards.
The rise in Facebook-based incitement has come at the same time as – and quite possibly fuelled – a sectarian trend in Indian politics. The risk is that Facebook’s new “newsworthiness” measure will further encourage such comments from politicians. In parts of India and in nearby Myanmar, such incitement has fuelled violence against the Muslim Rohingya minority – and it remains unclear whether Myanmar’s rulers and military will count as “politicians” under the updated rules.
Other countries have, on occasion, taken a much tougher line with Facebook. After ethnic riots against Muslims last year, Sri Lanka blocked the service and several others including Twitter. That, however, appears to have done little to blunt the unpleasant aspects of such platforms.
That is particularly true in the United States. Clearly, social media can give an insight into objectionable views held by individuals who might previously have kept them hidden. The fact that such views can now be seen more widely, however, can make them appear more mainstream. A survey earlier this year looked at almost 3,000 serving U.S. police officers as well as some retired colleagues. A fifth of serving officers, it revealed, had shared content judged as troubling, including racial epithets. That rose to two fifths among retired officers.
The risk of Facebook’s new newsworthiness terms, of course, is that they provide an incentive for right-wing and ethnically divisive politicians to push more aggressive rhetoric. Those who monitor the Internet closely say extreme and divisive positions are more likely to be shared. And, of course, those who wish to do so now have an additional incentive to embrace the political mainstream to protect their ability to publish such views.
Even amongst those who fall short of outright extremism or hate speech, this may encourage a growing tendency among politicians to disregard the truth. Much of this, of course, is about the current incumbent of the White House. President Donald Trump’s Twitter feed in particular continues to be an egregious offender. The truth may be that Facebook and others have simply given up any thought of moderating the current POTUS, and the updated rules for politicians mean they will not risk his ire by interfering with his output.
Ultimately, this all points to a bigger and in many respects much more disturbing picture. Facebook and other social media are the battleground, and what they will and will not allow is crucial. But what is even more alarming is the growing number of politicians who feel neither constrained by the truth nor worried about the impact of their divisive words, even if those words result in violence, catastrophe or death.
*** Peter Apps is a writer on international affairs, globalisation, conflict and other issues. He is the founder and executive director of the Project for Study of the 21st Century (PS21), a non-national, non-partisan, non-ideological think tank. Paralysed by a war-zone car crash in 2006, he also blogs about his disability and other topics. He was previously a reporter for Reuters and continues to be paid by Thomson Reuters. Since 2016, he has been a member of the British Army Reserve and the UK Labour Party, and is an active fundraiser for the party. (Editing by Giles Elgood)
Our Standards: The Thomson Reuters Trust Principles.