
Picture a bridge inspector who files a report confirming the structure is failing. The stress fractures are documented. The load-bearing columns are compromised. And yet, instead of closing the bridge, the city just keeps waving traffic through — occasionally muttering to colleagues that the situation looks dicey — while collecting tolls the whole time.

That is, with uncomfortable precision, what unfolded inside one of the most consequential technology companies ever built. According to Telset, recently unsealed court documents from a sprawling legal battle have exposed a deeply inconvenient reality: Meta executives knew about severe, specific dangers facing teenagers on Instagram as far back as 2018. They discussed it in writing. They acknowledged it explicitly.

And then they waited.

Nearly six years passed before a basic, foundational safeguard arrived. The timeline alone — read cold, without spin — forces a rather dark question about how modern social media actually operates. When growth metrics collide with child safety in a boardroom, who walks out winning?

The 2018 Emails They Couldn’t Unwrite

Unsealed documents from the U.S. District Court for the Northern District of California pulled back the curtain on internal conversations that most observers suspected were happening but rarely got to see rendered in plain text. Plaintiffs' attorneys surfaced an email chain from August 2018 — one end held by Adam Mosseri, Instagram's chief, and the other by Guy Rosen, Meta's Vice President and Chief Information Security Officer.

In that exchange, Mosseri bluntly acknowledged that “terrible” things could happen through Instagram’s Direct Messages. Pressed by plaintiffs’ lawyers during testimony about whether those terrible things included unsolicited explicit imagery — what the internet has long called “dick pics” — Mosseri agreed that yes, that’s what he meant.

Sit with that for a moment.

In 2018, the uppermost tier of Instagram’s leadership had fully internalized that their platform’s core messaging feature was being weaponized to deliver explicit, unwanted sexual content to users — including minors. They understood the platform was periodically functioning as a digital pipeline for adults attempting to groom teenagers. That high-level awareness, however, did not trigger anything resembling a high-level emergency response.

Meta finally rolled out a feature to automatically blur explicit images in Instagram DMs in April 2024. Six years on. In the hyper-compressed timeline of Silicon Valley — where entirely new product categories are conceived, engineered, and shipped within months — a six-year delay on a rudimentary safety filter is practically geological.

Nearly One in Five Young Teens — Meta’s Own Numbers

To grasp why this delay carries real weight, you have to stop looking at the executives and look squarely at the users. The internal statistics surfaced during Mosseri’s testimony sketch a bleak portrait of what teenagers were actually absorbing during those intervening years.


Meta’s own internal surveys — conducted by the company, not outside critics — showed that 19.2% of respondents between the ages of 13 and 15 reported encountering unwanted nudity or sexual imagery on Instagram. Nearly one in five young teens. That’s not a rounding error.

Even more disturbing: 8.4% of that same cohort reported seeing content depicting self-harm or threats of self-harm within just the prior seven days of app use. These aren’t abstract data points floating in a spreadsheet. They represent millions of developing minds repeatedly exposed to psychological distress — the kind of cumulative exposure that compounds quietly and surfaces years later in therapy offices.

This tracks with what medical professionals have been sounding alarms about for the better part of a decade. World Health Organization data on adolescent mental health makes clear that early exposure to trauma, cyberbullying, and inappropriate content measurably elevates the risk of depression and anxiety disorders in teenagers. The psychological toll of a device buzzing in a teenager’s pocket with unsolicited explicit imagery is not trivial — it accumulates. And for six years, that accumulation was quietly accepted as an operating condition.

“Everyone Else Does It Too” — A Defense That Doesn’t Hold

Under questioning about the company’s priorities and these protracted delays, Mosseri reached for a defense that feels threadbare given the documented scale of harm. He argued that Meta was perpetually navigating the tension between user privacy and safety — a genuinely real tension, to be fair, but one that other platforms have managed to address far more swiftly.

“I think it’s pretty clear that you can send problematic content on any messaging app, whether that is Instagram or anything else,” Mosseri testified.

This is a revealing pivot. It’s the corporate equivalent of a teenager explaining to their parents that plenty of other kids are failing math too. Technically accurate. Completely beside the point. Yes, you can transmit problematic content over SMS or email — but Instagram isn’t a neutral utility functioning like a digital postal service. It is a meticulously engineered, algorithmically supercharged ecosystem built to maximize user engagement, actively connecting strangers through suggested contacts, Explore pages, and frictionless DM portals that require almost no effort to misuse.


Mosseri also pushed back against the notion that Meta should have explicitly cautioned parents that Instagram’s messaging system operated largely unmonitored for these categories of abuse — outside of the automated scanning for Child Sexual Abuse Material (CSAM), which is a legal requirement, not a voluntary safety measure. The implicit assumption underneath his position seems to be that parents and teenagers should inherently know the internet harbors predators. But offloading the entire burden of safety onto families, while simultaneously engineering an app specifically designed to route around parental oversight, is a fairly elegant way to dodge culpability without ever quite admitting it.

Who, exactly, was supposed to protect those kids?

The Legal Theory That Could Rewrite the Rules for Big Tech

This specific revelation about Instagram DMs is one thread in a much larger, messier legal tapestry. The ongoing litigation in California — alongside parallel suits in Los Angeles and New York — doesn’t target Meta alone. Snap, TikTok, and Google’s YouTube are all named. The ambition of the litigation is hard to overstate.

The central legal theory is genuinely explosive. Plaintiffs aren’t simply arguing that bad things happen on social media platforms — that argument would be unwinnable. They are asserting that these platforms are legally “defective.” Specifically, that the products were deliberately architected to maximize screen time and engagement, with full corporate awareness that this design drives addictive behavioral patterns and degrades adolescent mental health. Defective, in other words, not by accident but by intention.

The behavioral mechanics embedded in these platforms aren’t subtle. Pew Research Center’s longitudinal tracking of teen social media habits has documented near-constant platform usage among teenagers — a pattern cultivated through infinite scrolling, autoplay video queues, and algorithms optimized to surface content that triggers strong emotional reactions. When you engineer a slot machine, you forfeit the right to express surprise when users can’t stop pulling the lever.

The suits suggest — and the internal documents appear to corroborate — that the tech giants treated user safety not as a foundational design requirement, but as a reputational liability to be managed on a quarterly basis. An automatic nudity filter introduces “friction.” It demands engineering resources. And — this part matters — it risks a measurable dip in messaging engagement. For a long stretch of Silicon Valley history, anything threatening engagement numbers got quietly buried in committee, regardless of what it was protecting against.


Regulation Arrived. The Question Is Whether It Arrived in Time.

As of early 2026, the regulatory environment has shifted dramatically. The era of unchecked, largely self-regulated social media expansion — where platforms wrote their own rulebooks and governments mostly watched — is, for practical purposes, finished. A mounting wave of state and international legislation has begun constructing guardrails around adolescent social media use that companies can no longer simply lobby away. The U.S. Surgeon General’s formal advisory on social media and youth mental health reframed the entire conversation, treating the issue not as a quirky tech-sector externality but as an urgent public health emergency demanding a systemic response.

Meta’s current public posture remains defensive, though the company has grown more practiced at wrapping that defensiveness in the language of progress. Liza Crenshaw, a company spokesperson, recently pointed to the various protective measures Meta has introduced over the years. “For more than a decade, we have listened to parents, worked with experts and law enforcement, and conducted in-depth research to understand the issues that matter most,” she stated — offering that long collaboration as evidence of genuine commitment.

And credit where it’s due: the 2024 DM nudity filter exists. Parental controls in 2026 are meaningfully more robust than they were in 2018. The work is happening. Slowly, grudgingly, often only after litigation or legislation made inaction more expensive than action — but it is happening.

The question that won’t go away, though — the one hovering over every courtroom filing and every congressional hearing — isn’t whether these companies are doing the work now. It’s why litigation had to become the primary mechanism for getting them to do it. They had the emails in 2018. They had the internal survey data showing one in five young teenagers encountering explicit content. They had the engineering capacity to build a blur filter — a tool, by any reasonable measure, far less technically demanding than the recommendation algorithms they were already running at scale.

They knew the bridge was crumbling. The reports were on the desk.

We are only now — finally, belatedly — demanding they fix it before waving the next generation across.

Based on reporting from various media outlets. Any editorial opinion is that of the author.
