More than 70 million warnings sent to people seeking child abuse material


The Lucy Faithfull Foundation has reported that more than 70 million warning messages have been issued to users attempting to access child sexual abuse material (CSAM) online during the past two years. The initiative, known as Project Intercept, is a collaboration between the child protection charity and major technology companies including Google, TikTok, and Meta. Unlike traditional content filtering, the program alerts users to the legal consequences of viewing CSAM and directs them toward support services designed to help them change their behavior.

Project Intercept’s Approach to Intervention

Project Intercept operates across diverse digital environments, from end-to-end encrypted messaging platforms to AI-driven chatbots. By delivering warnings in real time as users engage with harmful content, the initiative aims to create awareness and encourage proactive support. The foundation emphasizes that this method does not merely restrict access but educates users, offering them resources to address their behavior before it escalates.

The Lucy Faithfull Foundation has noted that nearly 700,000 individuals have used the Stop It Now resources following these warnings. These tools include confidential advice and self-guided interventions to help users understand and change their actions. While the number of people accessing these materials is significant, some experts argue it is relatively low compared to the scale of the issue. “The fact that only 700,000 people click through to support services after receiving 70 million alerts seems disappointing,” remarked Professor Sonia Livingstone, director of the Digital Futures for Children center at the London School of Economics. “This suggests there is still room for improvement, especially as the prevalence of child sexual abuse imagery online continues to rise.”


Global Reach and Engagement Metrics

Project Intercept is operational in 131 countries, spanning a wide array of online platforms. This includes services with end-to-end encryption, where only the sender and recipient can see messages, as well as AI-powered chatbot systems. The foundation has not disclosed the exact number of unique users who triggered these warnings, but it has highlighted the steady engagement with its support content. In 2024 and 2025, an average of 28,000 users per month were redirected to the resources, with more than 80% continuing to interact with the material. However, the organization has not provided long-term data on how these users progress beyond initial engagement.

Deborah Denis, chief executive of the Lucy Faithfull Foundation, stated that the project’s real-time warnings are effective in “disrupting harmful behavior and steering individuals toward help.” She suggested that the initiative could be expanded further to cover more scenarios. “By intercepting harmful actions at the point of occurrence, we can prevent the cycle of exploitation from continuing,” Denis added, underscoring the importance of early intervention.

Industry Response and Broader Context

Children’s charity NSPCC has acknowledged the potential of such interventions but stressed that they must be integrated into a broader strategy. “These kinds of actions can disrupt harmful behavior, but they should be part of a wider effort to stop illegal material from being created and shared initially,” the organization noted. This aligns with calls from experts for tech companies to take more responsibility in combating the spread of CSAM.

Emma Hardy, Communications Director at the Internet Watch Foundation, highlighted the need for innovative solutions to tackle encrypted platforms. “Currently, it’s too simple to share and distribute child sexual abuse imagery online, leaving children vulnerable to repeated exploitation,” she said. Hardy emphasized that safety by design should be a priority, urging the development of new tools that eliminate hiding spots for harmful content. “Platforms must be built with mechanisms that identify and block such material proactively,” she added.


Regulatory and Technological Collaboration

Ofcom, the UK communications regulator, has included warning messages as part of its guidelines under the Online Safety Act. Almudena Lara, the child protection policy director at Ofcom, noted that the data illustrates both progress and the “magnitude of the problem that remains to be addressed.” She emphasized the necessity of continued efforts to refine and scale these interventions.

Google’s involvement in the project has seen tangible results, according to Griffin Hunt, a product manager at Google Search. “The adjustments made early in 2025 have led to increased interaction with therapeutic help services and a reduction in subsequent searches for illegal content,” Hunt explained. This indicates a positive shift in user behavior, reinforcing the program’s effectiveness in certain contexts.

Challenges in Encrypted Spaces

Mega, a company specializing in encrypted cloud storage, has also joined Project Intercept. Its participation challenges the perception that encrypted services are immune to intervention. “The project demonstrates that even platforms with strong encryption can take steps to address harmful behavior early,” Mega stated. This highlights the growing recognition that no platform is entirely beyond reach when it comes to combating child sexual abuse material.

Despite the program’s successes, experts caution that the current approach may not fully address the complexity of the issue. “While these warnings are a valuable tool, they are just one piece of the puzzle,” Livingstone said. “We need more comprehensive strategies to ensure that the creation and sharing of CSAM is not only interrupted but also prevented.”

The foundation’s data underscores a critical balance: while 70 million alerts have been issued, the engagement with support resources remains a point of concern. Livingstone’s analysis suggests that the system works for motivated individuals but could be improved to reach those less inclined to seek help. “The challenge lies in ensuring that warnings translate into meaningful behavior change across all user groups,” she said.


As the digital landscape evolves, Project Intercept’s model offers a template for other initiatives. However, the success of such efforts hinges on sustained collaboration between charities, tech firms, and regulators. “The key is to adapt and innovate continuously,” Denis concluded. “Only then can we hope to address the growing threat of child sexual abuse material in the digital age.”

The program’s impact extends well beyond the UK, with the potential to reshape how harmful content is managed across platforms worldwide. “By working together, we can create a safer online environment for children,” the foundation asserts, as it continues to refine its approach in the face of an ever-changing digital ecosystem.