Brand safety has changed dramatically over the past year, yet it remains one of the most powerful and misunderstood levers in digital marketing. Having worked on both the agency and platform sides, I have seen how quickly threats, platforms, and measurement evolve, and how often practices and policies lag behind reality.
Early 2025 was a clear inflection point. Long-standing industry working groups were dismantled, platforms began loosening enforcement, and traditional oversight was increasingly replaced with community-led mechanisms like Community Notes. At the same time, familiar tools such as keyword blocklists were quietly deprecated, leaving many marketers without the safeguards they had relied on for years.
However, the consensus wasn't that brand safety was no longer a priority. Instead, the message was that the old playbook was simply outdated and needed to evolve. In 2026, marketers will need to navigate new complexities, make tougher trade-offs, and rethink old assumptions.
Here are some of the trends I’m keeping an eye on in 2026:
Viewability has long been a default metric for ad measurement, but it tells us little about whether real people actually notice or engage with an ad. An ad can count as ‘viewable’ while occupying a sliver of the screen, sandwiched among 20 other ads, and drawing zero engagement. If (like some clients I worked with this year) you treat high viewability as your primary measure of whether an ad works, I have a few hundred thousand made-for-advertising (MFA) sites to sell you. Others are starting to see viewability for what it is: a legacy metric that advertisers over-invest in. Zefr offered free viewability across all platforms it operates on, arguing marketers should not pay to verify something walled gardens already deliver, and Google’s DV360 attention signals are being called a viewability killer, marking the shift from exposure-based to attention-based metrics.
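To make the gap concrete, here is a rough Python sketch contrasting the two ways of scoring the same exposure. The MRC viewability thresholds (50% of pixels in view for one continuous second, two for video) are the real standard; the `Exposure` fields and the attention weighting are purely illustrative assumptions, not any vendor’s actual model.

```python
# A minimal sketch contrasting MRC viewability with a toy attention score.
# MRC thresholds are the real standard; the attention weights are assumptions.
from dataclasses import dataclass

@dataclass
class Exposure:
    pct_pixels_in_view: float  # 0.0-1.0, share of the ad's pixels on screen
    seconds_in_view: float     # continuous time in the viewport
    concurrent_ads: int        # other ads rendered in the same viewport
    is_video: bool = False

def is_mrc_viewable(e: Exposure) -> bool:
    """MRC standard: >=50% of pixels in view for >=1s (>=2s for video)."""
    min_seconds = 2.0 if e.is_video else 1.0
    return e.pct_pixels_in_view >= 0.5 and e.seconds_in_view >= min_seconds

def attention_score(e: Exposure) -> float:
    """Hypothetical attention proxy: rewards coverage and dwell time,
    penalizes cluttered placements. Weights are illustrative, not a spec."""
    clutter_penalty = 1.0 / (1 + e.concurrent_ads)
    return round(e.pct_pixels_in_view * min(e.seconds_in_view, 10) * clutter_penalty, 3)

# The failure mode described above: technically 'viewable', buried among
# 20 other ads, and earning near-zero attention.
mfa_slot = Exposure(pct_pixels_in_view=0.5, seconds_in_view=1.0, concurrent_ads=20)
print(is_mrc_viewable(mfa_slot))  # True
print(attention_score(mfa_slot))  # 0.024
```

The point of the toy example is that both columns measure the same impression; only one of them tells you it was worthless.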
The line between human traffic and bots is increasingly blurred, especially with AI agents. Bot traffic now exceeds 50 percent of online interactions, yet context matters: some automation drives real business outcomes. I recently booked flights using an AI agent; under legacy definitions, that interaction might be flagged as fraud, but because it delivered real value, advertisers should arguably welcome it, and perhaps even optimize for more agentic activity. While Sophisticated Invalid Traffic (SIVT) still plagues mobile, desktop, and CTV, the challenge for 2026 is clear: brands must move toward risk-based, context-aware verification that distinguishes harmful fraud from automation that actually works.
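For illustration, here is a minimal sketch of what that risk-based triage could look like. Every signal name and threshold below is a hypothetical placeholder, not a vendor schema; the point is the shape of the logic: block classic SIVT tells, judge declared agents by outcomes, and route ambiguity to deeper review.

```python
# A minimal sketch of risk-based, context-aware traffic triage. Field names
# and thresholds are hypothetical assumptions, not any platform's real schema.
from enum import Enum

class Verdict(Enum):
    BLOCK = "harmful invalid traffic"
    ALLOW_AGENT = "declared agent driving outcomes"
    ALLOW_HUMAN = "likely human"
    REVIEW = "ambiguous, route to deeper verification"

def triage(session: dict) -> Verdict:
    # Classic SIVT tells: datacenter IPs paired with spoofed user agents.
    if session.get("datacenter_ip") and session.get("ua_spoofed"):
        return Verdict.BLOCK
    # Context over identity: a self-declared agent that converts is not fraud.
    if session.get("declared_agent"):
        return Verdict.ALLOW_AGENT if session.get("completed_purchase") else Verdict.REVIEW
    # Behavioral plausibility check for undeclared traffic.
    if session.get("events_per_minute", 0) > 300:
        return Verdict.REVIEW
    return Verdict.ALLOW_HUMAN

# The flight-booking example from above: automated, declared, and valuable.
print(triage({"declared_agent": True, "completed_purchase": True}))
# Verdict.ALLOW_AGENT
```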
AI content is everywhere, some of it useful, much of it junk, and "AI slop" remains subjective: what one person dismisses as low-quality, another may see as creative. The volume and variability now make manual filtering impossible, especially when anyone can spin up low-value websites, video, and images from a simple prompt just to maximize ad revenue. For advertisers, this slop muddies the waters, wasting budgets and misleading consumers while legitimate publishers lose revenue. Many creators are still experimenting responsibly with human oversight, so the real challenge is telling which AI content adds value and which is just wasted ad spend. In 2026, brands will need their own playbooks to define what counts as acceptable as creators, influencers, and AI tools continue to converge.
Since 2024, online scams have grown exponentially as generative AI lets fraudsters scale and personalize attacks with ease. Social media platforms are struggling to keep up with the volume: internal Meta documents suggest about 10 percent of its 2024 ad revenue, roughly 16 billion dollars, came from scam-linked ads, with users shown around 15 billion high-risk scam ads daily (reuters.com). Throughout 2025, we have also seen a surge in brand misappropriation, with AI used to impersonate public figures and corporate identities, which led brands like Emirates to publicly call out social media platforms for how slowly they responded to these scams.
This trend shows no signs of slowing down in 2026. For brands, protecting digital identity is now a core safety priority, requiring a shift toward more aggressive social listening and a deeper investment in platform reporting tools to stay ahead of the threat.
Youth safety has shifted from an aspirational goal to a legal and strategic mandate, driven by global initiatives like Australia’s under-16 social media ban, the UK Online Safety Act, and the US House’s recent advancement of 18 bills, including the Kids Online Safety Act (KOSA). Platforms now face intense pressure to enforce age-gating and content standards, and the likes of Pinterest are even backing state-level restrictions on school cellphone use.
As brands expand into new media channels like gaming, they face heightened scrutiny because these immersive spaces are often perceived as unmoderated "Wild Wests" that still lack transparent age-verification and standardized safety frameworks. Younger users are also increasingly frustrated with their online experiences, turning to tools like the BePresent app to reclaim their mental health from environments they find toxic or addictive.
By 2026, these dynamics will dictate media planning, creative, and platform partnerships. Since many advertising frameworks still lack specific youth safety guidance, brands (especially in highly regulated industries) must proactively audit their targeting and messaging to ensure they remain safe for younger audiences.
As mentioned earlier, blocklists and legacy tools are being sunset. Historically, brand safety relied on static lists and rigid keyword filtering, a blunt approach in situations that demanded nuance. This often produced false positives where high-quality content was inadvertently demonetized, such as when a Times feature on Taylor Swift was blocked. As investment pours into more sophisticated contextual and agentic AI systems, however, the industry is moving toward adaptive, real-time decision-making that understands intent rather than just matching words. Moving forward, these systems will embed brand values into every impression, turning brand safety from a restrictive cost of doing business into a high-reach investment channel that unlocks significant new revenue.
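A toy example shows why the old approach misfires and what the adaptive one changes. The blocklist behavior below mirrors how real keyword filtering works; the contextual function is a stand-in heuristic for what would be a model call in production, so its logic is a hypothetical placeholder.

```python
# A minimal sketch of why static keyword blocklists misfire, and the shape of
# the contextual alternative. classify-by-intent here is a toy heuristic
# standing in for a real contextual/agentic model.
BLOCKLIST = {"shooting", "attack", "dead"}

def keyword_safe(text: str) -> bool:
    """Legacy approach: any blocklisted token demonetizes the page."""
    return not (BLOCKLIST & set(text.lower().split()))

def contextual_safe(text: str) -> bool:
    """Adaptive approach: score the intent of the passage, not its tokens.
    In production this would be a model call; here it is a toy heuristic."""
    violent_context = any(
        phrase in text.lower() for phrase in ("was shot", "opened fire", "casualties")
    )
    return not violent_context

headline = "Taylor Swift spends the week shooting her new music video"
print(keyword_safe(headline))     # False -- false positive, revenue blocked
print(contextual_safe(headline))  # True  -- monetizable under intent scoring
```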
As we’ve just seen, the block-everything model is collapsing. It’s no longer just inefficient; it’s practically impossible to maintain in a landscape where over 50% of traffic is non-human and AI-generated content could account for 90% of all content by the end of 2026. Reactive exclusion is a losing game of whack-a-mole: you simply cannot block your way to safety when low-quality content scales faster than any blocklist can grow.
Leading brands are flipping the script by moving away from these defensive exclusions and toward opt-in strategies. Instead of trying to list everything they fear, they are defining exactly what they value: specific content pillars, verified creators, and trusted environments, whether that’s a premium publisher on the open web or a vetted real-world creator. When paired with agentic systems, this shift does more than protect a brand; it recalibrates market incentives, moving media spend from a defensive cost toward a high-conviction investment in quality.
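Expressed as code, the opt-in model is striking in its simplicity: you declare what you buy rather than enumerate what you block. The pillars, creators, and placement fields below are illustrative assumptions, not an industry schema.

```python
# A minimal sketch of the opt-in model: declare what the brand values and buy
# only that; everything else is simply never purchased. All names here are
# illustrative placeholders.
from dataclasses import dataclass

CONTENT_PILLARS = {"travel", "food", "sustainability"}    # what the brand stands for
VERIFIED_CREATORS = {"@trusted_chef", "@eco_travels"}     # vetted partners
TRUSTED_ENVIRONMENTS = {"premium_publisher", "vetted_creator_channel"}

@dataclass
class Placement:
    topic: str
    creator: str | None
    environment: str

def opt_in(p: Placement) -> bool:
    """Buy only what matches declared values; no blocklist needed."""
    return (
        p.topic in CONTENT_PILLARS
        and p.environment in TRUSTED_ENVIRONMENTS
        and (p.creator is None or p.creator in VERIFIED_CREATORS)
    )

print(opt_in(Placement("travel", "@eco_travels", "vetted_creator_channel")))  # True
print(opt_in(Placement("celebrity_gossip", None, "mfa_site")))                # False
```

Notice that the allowlist never has to keep pace with the slop; it only has to keep pace with the brand’s own strategy.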
2026 will separate brands that treat safety as a checkbox from those that make it a core strategic advantage. Complexity is only increasing, and playing it safe with legacy approaches will not protect reputation or drive real impact. Marketers who define their own playbooks, lean into context-aware verification, and proactively opt into quality environments will gain more than just protection. They will gain trust, influence, and a competitive edge in an increasingly automated landscape. In this environment, leadership means refusing to let blunt, outdated tools dictate your outcomes and instead owning the narratives and spaces where your brand actually lives.