⚡ BREAKING NEWS
Meta Fined $375M by Jury for Child Safety Violations

A Santa Fe jury ordered Meta Platforms to pay $375 million for violating New Mexico consumer protection laws. The landmark verdict found Meta failed to safeguard children on Facebook and Instagram from predators, concluding a six-week trial.

A Santa Fe jury delivered a landmark ruling on Tuesday against Meta Platforms, finding the technology giant willfully violated New Mexico’s consumer protection laws by failing to safeguard children on its social media platforms from sexual predators. The jury ordered Meta to pay $375 million in civil penalties, concluding a six-week trial that marked the first jury verdict of its kind against the company on these specific claims.

The verdict follows a 2023 lawsuit filed by New Mexico Attorney General Raúl Torrez, which alleged that Meta, the parent company of Facebook, Instagram, and WhatsApp, violated state consumer protection laws. The lawsuit further claimed that Meta misled the public regarding the risks posed to teen users' mental health and the extent of sexual exploitation on its platforms. New Mexico prosecutors contended that Meta concealed the scope of safety dangers facing children on Facebook and Instagram and failed to enforce its own minimum age requirement of 13. They also argued that Meta’s recommendation algorithms made it easier for malicious actors to target minors for harassment and sex trafficking.

In its decision, the jury found hundreds of thousands of instances in which Meta engaged in unfair, deceptive, or unconscionable trade practices in violation of state law. Jurors determined that thousands of these violations each counted separately toward the total penalty of $375 million. During closing arguments, state attorney Linda Singer addressed the jury, stating, "The safety issues that you’ve heard about in this case weren’t mistakes. They were a product of a corporate philosophy that chose growth and engagement over children’s safety. And young people in this state and around the country have borne the cost." Prosecutors had initially urged the jury to impose a civil penalty against Meta that could exceed $2 billion, according to reports from CNBC.

Meta's defense attorney, Kevin Huff, countered these arguments in his closing statement, emphasizing the company's efforts to ensure safety. "Meta has built innovative, automated tools to protect people. Meta has 40,000 people working to make its apps as safe as possible," Huff stated. He also described the prosecutors' $2 billion penalty request as "a shocking number."

The jury's deliberations included an examination of whether social media users were misled by specific statements about platform safety made by Meta CEO Mark Zuckerberg, Instagram head Adam Mosseri, and Meta global head of safety Antigone Davis. As part of New Mexico’s investigation into Meta’s conduct, state officials conducted an undercover sting operation. This involved creating test accounts to probe safety on the platforms. These accounts were allegedly bombarded with adult sexual content and outreach from purported child predators, including graphic images and an offer of a six-figure payment to appear in pornographic content. Local police subsequently made at least three arrests tied to this operation.

Further testimony that influenced the jury came from Arturo Béjar, a former Meta safety researcher and whistleblower. Béjar recounted how his then-14-year-old daughter received disturbing messages, including "unsolicited penis pictures," shortly after creating her first Instagram account. Béjar further alleged that Meta’s recommendation algorithm actively helped predators identify potential child victims. He testified, "The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls."

Court documents unsealed during the trial revealed an internal Meta email in which a company researcher reportedly warned executives that there could be as many as 500,000 cases of online sexual exploitation per day across Facebook and Instagram. This verdict marks a significant legal challenge for Meta, as it continues to face broader scrutiny and numerous lawsuits concerning the impact of its platforms on young people’s mental health and overall well-being.

The Flipside: Different Perspectives

Progressive View

The $375 million verdict against Meta Platforms serves as a stark reminder of the systemic failures of large technology companies to prioritize the well-being of their most vulnerable users, particularly children. This ruling highlights how profit motives, driven by engagement metrics and growth, can overshadow fundamental safety responsibilities, leading to environments where children are exposed to exploitation. From a progressive perspective, this verdict is a necessary step toward holding powerful corporations accountable and underscores the urgent need for stronger regulatory frameworks. It is not enough for companies to claim they are working on safety; their platforms must be designed with child protection as a core principle, not an afterthought. The alleged role of recommendation algorithms in connecting predators with minors points to a design flaw that requires collective action and government oversight. While this civil penalty is significant, it should prompt broader legislative efforts to ensure that all children, regardless of socioeconomic status or parental digital literacy, are equitably protected from the harms of online exploitation, demanding that tech giants accept their social responsibility to foster a safe digital ecosystem.

Conservative View

The verdict against Meta Platforms in New Mexico underscores the critical importance of corporate accountability within a free market system, particularly when it comes to consumer protection. While recognizing the paramount need to safeguard children online, conservatives emphasize that this ruling is a civil penalty for deceptive practices, not a call for broad, stifling government overreach into technology innovation. Companies like Meta have a responsibility to be truthful about their safety measures, and if they fail, existing consumer protection laws should be robustly enforced. However, the primary responsibility for a child's online safety ultimately rests with parents, who must be empowered with tools and knowledge to monitor their children's digital activities. Excessive regulation risks stifling innovation and burdening companies with compliance costs that could hinder the development of new technologies, including those designed to enhance safety. A balanced approach would focus on targeted enforcement against proven deception and fraud, while allowing the free market to develop solutions for parental control and digital literacy, rather than imposing broad mandates that could impede economic growth and individual liberty.

Common Ground

There is broad, bipartisan agreement on the fundamental principle that children must be safe online. Both conservatives and progressives can agree that corporations have a responsibility to operate ethically and transparently, especially concerning the safety features and risks of their products for minors. The New Mexico jury's finding that Meta violated consumer protection laws by engaging in "unfair, deceptive, or unconscionable trade practices" highlights a shared concern across the political spectrum: that companies should not mislead the public about the safety of their platforms. Practical bipartisan approaches could include supporting educational initiatives for parents and children on digital literacy and online safety practices. There is also common ground in fostering the development and widespread availability of effective parental control technologies. Furthermore, both sides can unite around the importance of robust enforcement of existing laws against corporate deception and exploitation, ensuring that companies are held accountable when they knowingly endanger users, particularly the most vulnerable members of society.