Social media is a defective product


Meta CEO Mark Zuckerberg leaves Los Angeles Superior Court in California

Kyle Grillot/Bloomberg via Getty Images

I sat down to write, but before I typed a word into my document, I pulled out my phone to check my calendar. Then a chat notification arrived from a friend, who had sent me a link to a meme on Instagram. Might as well check it out. Below the post, a queue of short videos waited, algorithmically chosen to enchant me: one about ravens in the Tower of London, another about Indonesian street food. I tapped the raven. Then another video. I could have scrolled through these reels endlessly, and I did, as they grew steadily more disturbing and political. You know what comes next. When I finally looked up at my computer, almost 45 minutes had passed.

My day wasn't ruined, but I felt depressed and tired. Where did the time go? How did Instagram lure me into watching hundreds of videos (not to mention dozens of ads) when all I wanted to do was check my calendar? And why did it make me feel so miserable?

The answers to these questions are being debated right now and will come to trial in two California lawsuits brought by thousands of individuals and groups against social media giants Meta (owner of Facebook and Instagram), Google (owner of YouTube), Snap (owner of Snapchat), ByteDance (owner of TikTok) and Discord. The plaintiffs in these cases — ranging from school districts to concerned parents — allege that social media platforms pose a danger to children, cause serious psychological harm and even lead to death. Exposed to videos full of violence, impossible beauty standards and “competitions” that encourage dangerous stunts, children are being led down dark rabbit holes from which they may never return. At stake in both cases is one fundamental question: are these companies to blame for making people feel terrible?

For over a decade now, many US lawmakers have suggested the answer is no. Instead of trying to regulate the companies themselves, several US states have passed laws targeting how children use social apps. Some limit access by requiring parental consent for minors to create accounts, for example; others try to curb youth bullying by banning “like” counts on posts. Many of these laws have focused on the dangers of social media content. The companies, meanwhile, are largely off the hook, thanks to a notorious provision of the Communications Decency Act, known as Section 230, which prevents them from being held liable for content posted by users.

You can understand why Section 230 seemed like a good idea when it was written in the 1990s. Back then, no one worried about doomscrolling, algorithmic manipulation or toxic “lookmaxxer” influencers who encourage their followers to beat their faces with hammers to create a more defined jawline. Section 230 also serves a practical purpose: YouTube reports that 20 million videos are uploaded to the service every day. The company, and others like it, could not function if it were responsible for every illegal thing posted on its platform.

Lurking in the background of all this legislation is the fact that the United States is a free-speech-absolutist nation. That makes it very easy for companies like Meta or Google to challenge laws that might limit people’s access to speech online, even if that speech is a video about how to lose weight by starving yourself. Indeed, many of the laws restricting minors’ access to social media have been struck down by judges who see them as antithetical to free speech. As a result, social media companies in the US have been able to wield free speech protections as a shield against almost any form of regulation.

Until now. What is fascinating about the two current cases in California is that they deftly sidestep questions of content and free speech. Instead, they argue that the design of social media platforms is itself “defective” and therefore harmful: the endless scrolling, the constant notifications, the auto-playing videos and the algorithmic lures that feed our fixations are all features deliberately created by the companies themselves. The lawsuits allege that these “flaws” turn social media apps into “addictive” products, akin to “slot machines”, that “exploit young people” by serving them an “artificial intelligence-powered endless feed” to keep them scrolling. Ultimately, the goal of these lawsuits is to force social media companies to take responsibility for the negative effects their products have on the most vulnerable consumers.

In many ways, this argument is similar to those brought by the US government against tobacco companies in the 1990s. The government successfully argued that the companies knew their products were harmful but covered it up. As a result, the companies paid out massive settlements to victims, put warning labels on tobacco products and changed their marketing so that it no longer appealed to children.

Leaked documents from Meta already suggest the company knew its product was addictive. A federal judge unsealed court documents in a case in which a teenage girl became suicidal after becoming addicted to social media. Those documents contained internal communications at Instagram, in which a user experience specialist allegedly wrote: “oh my god, yall (Instagram) is a drug… We’re basically pushers.” This is one of many documents from Instagram and YouTube that, the lawyers say, paint a picture of companies knowingly and negligently producing defective products.

The two lawsuits are now underway and have the potential to dramatically transform social media. Maybe American law will finally recognise what many of us have known for years: the problem isn’t the content, it’s the behaviour of the companies that deliver it to us.

Do you need a listening ear? Samaritans of Great Britain: 116123 (samaritans.org); US Suicide & Crisis Lifeline: 988 (988lifeline.org). Visit bit.ly/SuicideHelplines for services in other countries.
