A jury in Santa Fe, New Mexico, ruled against Meta on March 24, 2026, finding the company violated the state’s Unfair Practices Act through content moderation failures that harmed children. The verdict imposed a $375 million penalty on Meta, the parent company of Facebook, Instagram, and WhatsApp.
The nearly seven-week trial centered on accusations that Meta knowingly allowed child sexual exploitation on its platforms while misleading the public about safety measures. Prosecutors from the New Mexico Department of Justice presented evidence that Meta’s algorithms recommended harmful content to teens and that the company failed to remove predators effectively.
Jurors determined Meta made false or misleading statements about platform safety and engaged in unconscionable trade practices that exploited children’s vulnerabilities and inexperience.
State investigators showed internal Meta documents revealing the company knew about rising risks of child sexual exploitation material on Instagram and Facebook. Despite this knowledge, Meta prioritized profits over stronger moderation tools. Witnesses, including teachers, psychiatric experts, and former employees, described how the platforms created environments where predators contacted minors and harmful content spread rapidly to young users.
The jury found thousands of separate violations of the Unfair Practices Act. Each violation stemmed from Meta’s deceptive practices and failure to protect children from mental health damage and exploitation. The penalty reflects the scale of these failures across Meta’s user base in New Mexico and beyond.
Meta CEO Mark Zuckerberg testified during the trial. He downplayed the company’s own internal research on harms to teens and argued that enforcing age limits and removing all harmful content is difficult on platforms with billions of users.
Prosecutors countered that Meta had the resources and data to do more but chose not to invest sufficiently in content moderation teams and detection systems.
The case began years earlier, when the New Mexico attorney general filed suit under the state’s consumer protection law. Prosecutors built their arguments on whistleblower accounts, leaked Meta studies, and data on child exploitation reports that the company received but did not act on aggressively enough.
Evidence included cases where predators used direct messaging features on Instagram to groom minors, with moderation responses delayed or ineffective.
This verdict marks the first major jury decision holding Meta liable for systemic content moderation breakdowns affecting children. It sets a precedent for other ongoing lawsuits across the country that accuse social media companies of similar failures. States and families have filed thousands of related claims alleging platforms fuel addiction, anxiety, depression, and exploitation among minors.
Meta issued a statement immediately after the ruling. Spokesperson Andy Stone said the company respectfully disagrees with the verdict and plans to appeal. Stone claimed Meta has invested heavily in safety features, content moderation, and tools to protect teens, adding that the company remains confident in its record despite the jury’s findings.
The decision comes as Meta faces mounting pressure from regulators and lawmakers over platform harms. Internal company data presented at trial showed sharp increases in child sexual abuse material reports in recent years. Prosecutors argued Meta’s profit-driven algorithm designs amplified harmful content to keep users engaged longer, directly contributing to mental health declines in children.
Jurors deliberated for one day before reaching the unanimous decision against Meta on every count. They rejected the company’s defense that it could not realistically police all content on global platforms. The $375 million penalty falls short of the nearly $2 billion the state had sought but still represents a significant financial hit and public condemnation of Meta’s practices.
This outcome highlights ongoing failures in Big Tech’s approach to content moderation. Companies like Meta collect vast amounts of user data yet repeatedly fall short in shielding the most vulnerable from exploitation and psychological damage. The New Mexico case exposed how delayed moderation, weak age verification, and algorithm choices created breeding grounds for predators.
Other trials involving Meta and Google on similar child safety and addiction claims are underway or pending in states like California. The New Mexico verdict strengthens the position of plaintiffs in those cases and could influence future legislation aimed at forcing stronger protections.
Meta built its empire on user engagement metrics that often reward sensational and harmful material. When internal warnings surfaced about risks to children, executives minimized them to protect growth and advertising revenue. The jury’s ruling sends a direct message that such priorities violate basic consumer protection standards when they endanger minors.
The $375 million penalty will go to the state of New Mexico. It underscores accountability for corporations that treat child safety as an afterthought rather than a core responsibility.
Big Tech must overhaul its content moderation systems or face repeated legal consequences for the damage caused to the next generation.