Big Tech Is Abusing Content Moderators

Content moderation is crucial to Big Tech’s business, yet the people who do it remain neglected.

In an alternate world where antitrust law had been invoked more liberally, the early promise of the internet might have been realized: multiple platforms interoperating through open standards, with a clear, shared understanding of what gets removed when content fails a predetermined set of criteria. That is not the world we live in, however. The consolidation of power within a poorly regulated cluster of companies under the umbrella of ‘Big Tech’ has brought many issues to the fore. Chief among them of late is the mentally taxing work of content moderation, and Big Tech’s reluctance to innovate on what is an integral part of its business.

Casey Newton’s reporting on this is best-in-class. In February of last year, the Verge reporter uncovered a set of horrid facts about the working conditions of content moderators at companies contracted by Facebook, such as Cognizant. Workers were not only lured in with the enticing prospect of making six figures as a software engineer in Silicon Valley (a prospect they were ultimately denied), they also had to stomach a measly salary that comes nowhere close to justifying regular exposure to the worst of what humans produce.

Not long after, Facebook was forced to sweeten the pot, but the underlying fact remained the same: the nature of the content moderators are constantly exposed to puts them on dangerous ground. Between the trauma, the depression, and the general cynicism that staring at a curated collection of obscenities will engender, Big Tech had to have known that the only path to making this line of work truly sustainable is to automate a great deal of it. The problem is that this promise was made long ago, and given how few inroads have been made, it was clearly more PR speak than anything even remotely achievable.

There was a point at which Big Tech would not stop touting the benefits of AI in allowing it to screen content far more effectively. We were given the illusion that the bulk of the work was done by computers, as if humans were a negligible part of the equation. The complete opposite turned out to be true: AI appears to be a long way from matching humans at screening content, and the claim was yet another way for Big Tech companies to sell us an image of self-sufficiency that has little basis in reality.

One of the solutions floated was better pay; subsequent reporting from Casey Newton found, however, that it did not help much. “Content moderation makes the internet safe for the rest of us to use. But after talking to more than 100 moderators over the past year, I believe that the bargain tech companies are offering many of these workers is morally indefensible,” Newton says. “Companies know that these jobs lead to mental health crises. […] And yet tech giants continue to hire thousands of people into relatively low-paid jobs that, for some subset of employees, lead to PTSD.”

Content moderation isn’t a uniquely dangerous job. It could be argued with little resistance that it is most akin to enlisting in the military: the entire justification for the commitment rests on the premise that humans are capable of great evil, and while we would rather neither were necessary, recognizing their importance is a unifying factor. Where the army departs starkly from Big Tech, however, is that it matches the threat with adequate spending, sometimes to a fault. As crucial as content moderation is to a usable social media platform, Big Tech isn’t keen on treating it with the gravity it deserves. It’s all the more brazen when you realize that these companies knew about the harmful effects of the job well before they built a content moderation ecosystem around it. Because the work sits at the bottom of the priority list when it should be much higher up, even discussing a solution becomes harder, since no one treats it as urgent.

Whatever shape that solution takes, it has to be based on human empathy. This is an issue where the harm done takes more abstract forms than usual, and we shouldn’t let Big Tech’s dogmatic insistence on empirical evidence keep it from assuming a less exploitative role. Even when the material conditions of a good life are nominally granted, the knowledge that Mark Zuckerberg, Sundar Pichai, Susan Wojcicki, and Jack Dorsey would never elect to do this work themselves is proof enough that they know it is work no one should have to do.

Professor of marketing at NYU Stern and Pivot co-host Scott Galloway makes this point repeatedly: Big Tech has put itself in a position where governments around the world have to legislate around its failures, but that won’t amount to much unless American legislators follow suit.

Still, for any platform to operate efficiently, scale is almost a prerequisite, so it is far from certain that antitrust will be the long-awaited silver bullet. The news cycle in the tech sector moves so fast that we forget promises of better content-screening AI have been made for ages, with little coming to fruition outside of what is easily monetizable. The algorithms amplifying conflict and promoting extremist content on social media keep getting better, while the technology tasked with removing a great portion of that content remains largely underdeveloped. It is no coincidence that these two facts are simultaneously true, and it further raises the question of whether capitalist incentives are the fundamental problem here.

Because content moderation at the scale it requires is hard to turn into a lucrative business, the impetus for improving it is virtually nonexistent. The great fear is that a simple change of personnel won’t be enough, and that systemic change will have to lead the way forward. Big Tech may view that as an existential threat, but that’s the point.