The EU Terrorist Content Regulation shows that we still don’t know how to regulate online hate

By Emma Olsson

Sep 20, 2019

Lately, it has become difficult to understand who’s regulating our online behaviour. Social media users may be aware of the dos and don’ts of posting—no live-streaming acts of terror, no reckless posting of female nipples—but do we know where these restrictions are coming from? The platforms themselves? The law? Both?

On 17 April 2019, the European Parliament passed a measure to tackle terrorist content on internet hosting services. The proposed law (referred to as the EU Terrorist Content Regulation, TCR) will give platforms one hour to remove terrorist content. Those who persistently refuse to comply risk fines of up to 4 percent of their global annual turnover. The law mainly targets the largest, most ubiquitous platforms (Facebook and Twitter), which have the financial resources required to hire content moderators or implement filters.

The passing of this rule appears to be a great victory for anyone worried about terrorists using social media to communicate or spread violent images. But what does it mean when platforms and governments join forces in regulation? Though platforms would not be required to build filters, as was originally feared by those claiming the regulation would violate EU digital law, they would be subject to government regulation in a different way.

There are a couple of issues with the regulation itself. First, what constitutes terrorist content? The legislation makes vague references to content that incites terrorist acts, but public concern today seems more geared towards the mass spread of terrorist footage via social media. The live-streaming of the shooting in Christchurch, New Zealand no doubt contributed to the regulation being revisited in April (it was originally proposed in September 2018). Live-streaming would broaden the definition, though any definition of terrorist content is troubled by the term itself. In 2019, we still struggle to craft a cultural consensus around terrorism: what it is, who perpetrates it, and how we should handle it.

Second, neither filters nor human moderation will be mandated. What, then, would a feasible alternative look like? How will platforms be expected to uphold these rules without some combination of filtering and human moderation? In other words, if we want to keep certain types of content off our feeds, we need to decide how this can be done in a way that neither outright censors users nor pushes away smaller platforms that cannot afford robust moderation services. The proposal is admittedly vague, and not just in its content but in everything surrounding it.

The desire to regulate terrorist content contributes to a large-scale conflation between platform guidelines and supranational law. Moreover, we see user guidelines merging with government mandates. The same phenomenon could be seen in the Tumblr case from December 2018, in which Tumblr’s guideline change to censor nudity fit snugly into the narrative provided by the U.S. government’s Fight Online Sex Trafficking Act (FOSTA). Fearing a potential legal breach, Tumblr found it preferable to simply ban nudity altogether. On the surface, Tumblr is the enforcer. But there are more factors at play.

The EU Terrorist Content Regulation represents a similar tug-of-war between platforms and governments, each side attempting to exert its power over the other. One side pulls, the other may fight back but eventually acquiesce, until we are no longer sure which side to blame. The loser is the average platform user, the person on the receiving end of this new brand of content moderation. Much like Tumblr’s nudity ban, the Terrorist Content Regulation is content moderation for our modern digital period: vacillating, often biased, and evading blame.

The Terrorist Content Regulation is another example of how our heightened expectations of platforms manifest in practice. We feel that platforms are failing us: their fingers itch to smash the censor button on nudity, yet they are unwilling to remove content that spreads hate or incites violence. What it really proves is that platforms and governments aren’t so much failing to do their jobs as scrambling to figure out what those jobs are in the first place.

So what comes next? Trilateral meetings between the Commission, the Council, and the Parliament are expected to commence in October 2019. Apart from that, not much can be gathered. What can be gleaned from the proposal thus far is that the line between platform guidelines and government regulation is blurred. The identities of those in charge of the internet are blurring, too. Content moderation today occurs in the back-and-forth between platforms and governments, a frenetic movement that renders platform users passive. Sometimes, to end a tug-of-war, you need to cut the rope.

Terrorist groups are moving to niche chat platforms for communication

By Yair Oded

Jan 11, 2019

It’s been clear for several years now that terrorist groups such as ISIS have mastered the realm of technology, utilising various online platforms and social media hubs to boost their sinister cause and recruit members. While the social media giants, such as Facebook, WhatsApp and Twitter, are effectively cracking down on terrorist activity on their networks, less popular chat apps are having a harder time immunising their platforms against terrorists.

In a Wired op-ed, Rita Katz, founder and executive director of SITE Intelligence Group, expounds on ISIS’ most recent attempts to establish an online presence in order to boost recruitment and facilitate communication following its significant territorial losses in Syria and Iraq last year. According to Katz, the terrorist group has resorted to using encrypted messenger apps primarily intended for businesses and gamers after numerous failed attempts to launch web pages on sites like Tumblr and WordPress. These fairly new apps, Katz argues, have proved an efficient alternative for terrorist groups, particularly due to their social media-like modelling and rudimentary security systems.

While Telegram appears to be ISIS’ primary media hub, other, similar apps are being penetrated and utilised by the group. RocketChat, an open-source messenger service, has become an increasingly popular arena for ISIS-linked media groups both to coordinate terror attacks and to further disseminate information originally posted on Telegram. Katz claims that as of January 2019, 700 registered users on RocketChat’s server were linked to ISIS’ channels.

Furthermore, in the past two months alone, ISIS has successfully expanded its virtual media networks into messenger apps such as Yahoo Together (a recent replacement for Yahoo Messenger), Viber, and Discord (a messaging app for gamers). Content found on such apps revealed, among other things, conversations between ISIS members who were planning attacks around Christmas in major Western cities.

In her article, Katz contends that ISIS is currently “testing the water” on such apps, seeing how long it can maintain its activities there before being flagged or blocked. She further notes that terrorist groups are taking advantage of the relatively boundless discussion environments such apps foster, and of the great difficulty these platforms face in sifting through and identifying adverse content. The messenger apps’ response, Katz argues, will be crucial in determining “where terrorist groups migrate next.”

Spotting and removing terrorist activity may prove ever more challenging for messenger apps such as Telegram and RocketChat; while some ISIS-linked channels do little to hide their identity, flaunting usernames such as ‘Just Terror’, others camouflage themselves better. Furthermore, it will undoubtedly be trickier to spot groups of ‘sympathisers’ of terrorists (be they Islamic extremists or white nationalists), whose conversations may or may not escalate into discourse with potentially dangerous ramifications.

The greatest challenge regarding terrorist groups’ online presence is that their activity will not be extinguished by censorship; it will simply migrate elsewhere. The internet (at this moment in time, at least) constitutes a free space with virtually limitless opportunities to spread information. Thus, once one platform or channel is blocked, numerous others sprout up to replace it. This is as true of terrorists and hate groups as it is of our beloved streaming websites, porn hubs, or anything else, for that matter.

The only solution that comes to mind is a global, federation-like body that will be tasked with maintaining order online and removing content deemed perilous.

No doubt Putin is working hard to make this far-fetched dream a reality for us all.

 
