YouTube’s new algorithm wants to keep us ethically addicted

By Laura Box

Oct 2, 2019

YouTube’s newest algorithm is set to make the platform even more addictive (an exciting prediction for investors, who expect the changes to increase profits by tens of millions) while simultaneously promising to reduce the echo chambers the platform has created. But can YouTube balance increasing addiction with its self-imposed ethical responsibilities? Probably not.

YouTube’s magnitude is almost inconceivable: it would take a month to watch the content uploaded to the platform every two minutes. Since it grew into the second largest search engine on the planet, behind only its parent company Google, public demand has increased for the service to address the negative implications of its algorithm.

YouTube has come under fire for its tendency to create echo chambers: once a user clicks on one type of video, the algorithm is more likely to recommend similar videos, reinforcing biases and making it likely to “push users into an immersive ideological bubble,” as academic Derek O’Callaghan suggests. This phenomenon has led some to credit the platform with a significant role in the rise of populism.

In 2014, O’Callaghan pointed out that these echo chambers can quickly lead users into bubbles of far-right extremism. As extremist content often contravenes hate laws, videos expressing these views are technically illegal in some of the countries in which YouTube has recommended them. Despite these warnings, which emerged five years ago, research from Swansea University earlier this year found that YouTube is still significantly more likely to prioritise extremist recommendations for users who have previously interacted with such content, showing that little has changed.

Prolific author and YouTuber Hank Green argues that the echo chamber isn’t necessarily bad for content creators, writing that “our channels are our homes on the internet, and we need to figure out how to make them safe.” He explains that this environment allows YouTube communities to feel welcomed and engaged. But the unnerving flip side is that the communities of those expressing highly controversial views also feel welcomed and engaged. Alt-right content was once believed to be reserved for the shadowy corners of the internet, yet alt-right YouTubers have been given a safe space by the platform and have amassed huge followings. Carl Benjamin (Sargon of Akkad), for example, has consistently made homophobic, sexist and racist comments, and has sympathetically interviewed people who’ve had rape allegations made against them, calling them “victims”.

Now, YouTube is claiming that the latest algorithm will reduce this echo chamber.

While this appears positive on the surface, there’s evidence that YouTube’s attempts to rectify past public frustrations have merely been band-aid solutions. After studies indicated that the majority of climate content on YouTube was made by climate deniers, public outrage pushed YouTube to prioritise videos from legitimate climate sources. Now when users search “Climate change is a hoax”, legitimate, factual information and embarrassed climate deniers are among the first results to appear. Despite this, climate change denial videos still quickly rack up not only views but overall positive perceptions. Within a week, one denial video (avoid the comment section if you value your mental health) received over 100,000 views and thousands of likes, showing that the echo chamber around these videos is alive and well.

This is the suspicious and irksome nature of YouTube. On the surface, it appears ethically conscious by hiding videos that spread false information, yet it’s apparent that alt-right and scientifically misinformed content makers are still somehow rewarded by its algorithm.

YouTube’s business model has been built around incentivising creators of shocking and controversial content. The platform has not only allowed misogynist, racist and homophobic content to stay online, but it has monetised this content. The platform rewards YouTubers, regardless of the harm their belief systems create, as long as their content is shocking and controversial enough to garner significant views.

YouTube’s CEO Susan Wojcicki recently declared that the company is taking steps to improve its platform in an acknowledgement of social responsibilities. She outlined four “Rs” it will use to do so: Remove (harmful content), Raise (authoritative voices), Reduce (recommendations of harmful content), and Reward (those with videos at a certain standard).

But she forgot YouTube’s fifth and most important R: Revenue.

For a platform that has built its success on the addictive nature of its current algorithm, it seems pretty unlikely that YouTube will jeopardise this model. It’s more likely that the changes will simply provide surface-level solutions to placate the concerned public, while any genuine self-imposed social responsibility is outweighed by the capital the algorithm craves.

New investigation reveals TikTok’s complicity in spreading hate and violence in India

By Yair Oded

Sep 13, 2019


Over the past year and a half, TikTok has been rapidly taking over Southeast Asia, and has made impressive strides in the U.S. and Europe, situating itself as the next ‘it’ app in the social media landscape. Alas, the 15-second video app has been used as a vehicle of egregious hate speech, racist vitriol, and violent attacks, particularly in India. 

An investigation by WIRED revealed that thousands across India have taken to TikTok to spread racist and violent messages against members of groups who are perceived to be lower than them on the caste system’s social ladder. 

In one case, Venkataraman, a 28-year-old man in the state of Tamil Nadu, posted a video in which he drunkenly yelled slurs against the Dalits—the group ranked lowest in India’s Hindu caste system. “Fight us if you are a real man, you Dalit dogs. You bastards are worthless in front of us. We’ll butcher you lowlifes,” Venkataraman was seen saying in the video, which he claimed he shot at the encouragement of his 18-year-old friend. As the video went viral, a wave of protests broke out in the area, and Dalits demanded that action be taken against Venkataraman. He then placed the blame for the video and the backlash on his friend, whom he strangled to death.

Overall, tens of thousands of TikTok videos have reportedly promoted hate speech and carried casteist hashtags. Over a two-month period this summer, WIRED came across 500 TikTok videos that included caste-based hate, incitement to violence, and threats. In a growing number of cases, the rapid proliferation and ubiquity of such hate speech encourages people to take the fight off the screen and commit acts of violence in real life. Thus far, 18 incidents of violence (ten of which resulted in deaths) have been linked either directly or indirectly to TikTok in India.

Responding to the investigation, TikTok stated: “The team had identified the videos cited before WIRED contacted us and were in the removal process, but we continuously work to improve our capabilities to do even better.” The company also appointed a special grievance officer for India in August.

Yet court documents procured by WIRED reveal that the company is currently failing to curb the volume of hate speech spreading on its platform in India. Over a five-month period, between November 2018 and April 2019, TikTok removed 36,365 videos that breached its codes on hate speech and religion, and 12,309 videos that included dangerous behaviour and violence. Still, the court documents reveal that only one in ten of the videos reported overall (677,544) was eventually removed, and that the videos reported account for only 0.00006 percent of the total uploaded. While this data makes it difficult to measure TikTok’s true impact on the proliferation of hate speech in India, it indicates that the company has simply failed to establish an effective screening mechanism to moderate content on its app.

“The problem with Tiktok is that they are not very open to advocacy or engaging with civil society. Not even to the standards of its American counterparts,” said Thenmozhi Soundararajan, executive director of Equality Labs, a South Asian human rights group, adding that, “I think they’d rather pay the fines and don’t care.” 

TikTok has also drawn the wrath of numerous lawmakers and judges in India, who have been vocal in their opposition to the app and its influence over the Indian population. At the request of the Indian court system, which ruled that the app was disseminating “pornographic” and “inappropriate” content, Google and Apple removed TikTok from their app stores last April, and didn’t reinstate it until millions of additional videos were taken off the platform.

The power of social media platforms to exacerbate tensions, and their role as potential vehicles of hate, should not be taken lightly. It is true that TikTok is not the only company struggling to formulate a proper system to curb hate speech and halt the spread of misinformation, yet with its position as the most popular kid on the block, at least in Southeast Asia, comes an even greater responsibility to lead such efforts.

TikTok—get your act together.  
