The Meming of Life: Making YouTube rabbit holes safer

Moves to tackle the video-sharing website’s predatory algorithm are to be welcomed


If you’ve spent any time on YouTube – and since 1 billion hours are watched daily worldwide, you almost certainly have – you might have noticed the platform has something of a knack for directing you to far-flung and esoteric corners of its commentariat. For all the talk of farmed content and echo chambers, some of these weird diversions are eminently recommendable, like the time a search for bland, boring unboxing videos led me to Alex Frost’s repulsive and hypnotic “wet unboxing” series, featuring his disembodied hands wordlessly opening sandwiches and soft drinks underwater.

Unfortunately, some YouTube detours are off-putting in more serious ways, with videos on Holocaust denial, eugenics or other such conspiratorial nonsense proving unaccountably popular in people’s recommendation tabs. Try searching for Peppa Pig, for example, and see how long it takes before you’re recommended a video alleging the cartoon’s blatant Satanic symbolism. Wading through tinfoil-hat garbage while trying to watch even the most vanilla content has become something of a meme among regular users, but in 2019 YouTube have finally begun treating the issue as a priority.

“We’ll begin reducing recommendations of content that could misinform users in harmful ways,” their official blog announced, referring to content such as “videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11”.

Recommendations are not accidental

Cheering YouTube’s announcement was Guillaume Chaslot, formerly of YouTube’s AI lab, who went further, explaining that these recommendations are not accidental but a direct result of YouTube’s predatory algorithm, one he himself helped design.

He illustrated the point with the example of “Brian”, a friend who’d become depressed after a personal tragedy. “Brian fell down the rabbit hole of YouTube conspiracy theories ... for his parents, family and friends, his story is heartbreaking. But from the point of view of YouTube’s AI, he’s a jackpot”.

This jackpot, as Chaslot explained in a gripping Twitter thread this week, begets a cycle of use, recommendation and reward: the AI tracks the videos people are watching obsessively and amplifies them to others. That effectively incentivises exactly the kind of material beloved by obsessive or paranoid viewers, so more of it is made, then watched, then shared, ad infinitum.
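For the technically curious, the dynamic Chaslot describes can be sketched in a few lines of Python. What follows is a toy illustration, not YouTube’s actual system: the video names, the “obsessive appeal” scores and the exploration rate are all invented for the example. It simulates a recommender that ranks videos purely by average watch time per recommendation.

```python
import random

random.seed(42)

# Two hypothetical videos (names and numbers are illustrative assumptions):
# "obsessive_appeal" stands in for how compulsively a clip tends to be watched.
videos = {
    "gardening_tips":   {"impressions": 0, "watch_time": 0.0, "obsessive_appeal": 0.1},
    "flat_earth_proof": {"impressions": 0, "watch_time": 0.0, "obsessive_appeal": 0.9},
}

EXPLORE_RATE = 0.05  # small chance the system tries a video cold


def engagement(name):
    """Average minutes watched per recommendation: the 'jackpot' signal."""
    stats = videos[name]
    return stats["watch_time"] / stats["impressions"] if stats["impressions"] else 0.0


def recommend():
    # Mostly exploit the most engaging video so far; occasionally explore.
    if random.random() < EXPLORE_RATE:
        return random.choice(list(videos))
    return max(videos, key=engagement)


def session():
    # One viewing session: obsessive appeal translates into longer watch
    # time, which feeds straight back into the ranking for the next viewer.
    choice = recommend()
    stats = videos[choice]
    stats["impressions"] += 1
    stats["watch_time"] += random.uniform(0, 10) * stats["obsessive_appeal"]


for _ in range(10_000):
    session()

for name, stats in videos.items():
    print(f"{name}: {stats['impressions']} recommendations, "
          f"{stats['watch_time']:,.0f} minutes watched")
```

Run it and the conspiratorial clip typically ends up with the lion’s share of the 10,000 recommendations, despite nobody ever asking for it: the system simply amplifies whatever is watched longest.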

Ironically, Chaslot’s account of pitiless machine code purposefully preying on the obsessions of vulnerable people is enough to make one quite paranoid in itself. Good, then, that Chaslot’s conclusion is a heartening one. “YouTube’s announcement is a great victory,” he wrote. “It’s only the beginning of a more humane technology. Technology that empowers all of us, instead of deceiving the most vulnerable”. So long as we can still get our wet unboxing videos, this can only be a good thing.