Recent research from the Molly Rose Foundation (UK) underscores just how pervasive harmful content remains on key social platforms where young people are present in high numbers. According to the study, material related to suicide, self-harm, and intense depression was easily accessible on TikTok and Instagram, and was often pushed directly to under-18 users through algorithmic recommendation. Brand safety today reaches far beyond avoiding adjacency to harmful content; it now directly overlaps with child protection, online safety controls, and the accountability of tech platforms to safeguard vulnerable users. The study found that not only have these risk profiles failed to improve since 2023, but in some cases they have actively deteriorated. The foundation further alleges that both platforms appear to be “gaming” their requirements under the UK’s Online Safety Act, raising concerns about whether Ofcom’s current approach is fit for purpose. While the methodology used to identify this content can be debated, its sheer prevalence is concerning. High-risk design choices and algorithms that supercharge harmful content call into question the platforms’ declared “safety-by-design.”
In another vein, Roblox has recently (and not for the first time) illustrated the tension between building a thriving child-focused gaming ecosystem and policing open social features that have been repeatedly exploited by predators. Measures like AI-powered age verification and “Trusted Connections” are positive steps, but experts argue they are reactive rather than preventive, highlighting how difficult it is for platforms to keep pace with bad actors.
This problem extends well beyond gaming and social media into the broader ad tech ecosystem. Law enforcement collaborations continually expose predators within online communities, while major tech firms like Google and Amazon have faced scrutiny for inadvertently monetizing child sexual abuse material (CSAM) through their ad networks. Even as groups like the Internet Watch Foundation partner with vendors to strengthen safeguards, these incidents underline a sobering reality: brand dollars can, and do, end up funding harmful content when digital supply chains lack robust oversight. Advertisers, in turn, face not only reputational damage but also potential regulatory exposure when safeguards fail.
At the same time, the regulatory debate is itself polarizing. Critics argue that the UK’s Online Safety Act overreaches into invasive surveillance and that existing parental tools, moderation systems, and responsible software already achieve safer outcomes when properly deployed. Others contend that AI is neither capable nor sufficiently trained to enforce digital safety at scale, to say nothing of consumers’ mounting disdain for AI. Yet for brands, the stakes are clear: while industry interests debate, there is an opportunity to build customer relationships. Advertisers must demand transparency, adopt verification tools and protocols, and leverage their ad spend to incentivize ecosystem change. In doing so, they can help transform brand safety from a defensive posture into an opportunity for leadership, aligning online child protection with consumer trust.