Brand Safety Institute Blog

AI Acceleration: Navigating Technology and Trust

Written by Victor Mills | Nov 6, 2025 2:00:04 PM

The conversation around brand safety is no longer confined to adjacency or keyword blocking. As 2026 approaches, it is becoming more focused on trust, technology, and the responsible use of artificial intelligence to earn that trust. The media landscape is a frenetic hybrid ecosystem of machine learning models, AI processes, livestream environments, and addressable pathways - one where the quest for speed and precision frequently collides with unpredictability. In this dynamic environment, the notion of “safety” is being redefined.

The Media Rating Council (MRC), long the industry’s arbiter of measurement and accountability, is taking the lead in establishing more comprehensive standards for AI and machine learning. Its upcoming framework will address critical areas of modern media, including Generative AI, Agentic AI, Ad Verification & Brand Safety, AI-Powered Search Engines, and Invalid Traffic (IVT). Some of these areas already have standards that have been in force for several years, but with the advent of AI and its capabilities and pitfalls, those existing standards rightfully require a re-imagining.

AI is a double-edged sword. It creates risks and mitigates them. Fraudulent actors use AI to generate fake traffic and mimic human engagement, but the same technology is also being deployed to detect IVT earlier and more accurately than ever before. The challenge for 2026 will be governance: ensuring that the AI models designed to protect brands are transparent, ethical, and verifiable.
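
To make the detection side of that equation concrete, here is a minimal, hypothetical sketch of the kind of heuristic screen an IVT system might apply to session data. The signals (datacenter IPs, automation markers in user agents, implausibly regular click timing) and the thresholds are illustrative assumptions, not any vendor’s or the MRC’s actual method.

```python
# Illustrative sketch only: a simplified invalid-traffic (IVT) screen.
# Signals and thresholds are hypothetical, not an MRC standard or a
# vendor implementation.
from dataclasses import dataclass
from statistics import pstdev


@dataclass
class Session:
    session_id: str
    click_intervals_ms: list[float]  # time between ad interactions
    is_datacenter_ip: bool           # e.g. resolved via an IP-reputation list
    declared_user_agent: str


KNOWN_AUTOMATION_MARKERS = ("HeadlessChrome", "PhantomJS", "python-requests")


def looks_invalid(session: Session) -> bool:
    """Flag sessions whose behavior is implausibly machine-like."""
    # 1. Traffic from datacenter IP ranges is a classic invalid-traffic signal.
    if session.is_datacenter_ip:
        return True
    # 2. Automation frameworks often leak through the user agent string.
    if any(m in session.declared_user_agent for m in KNOWN_AUTOMATION_MARKERS):
        return True
    # 3. Human click timing is noisy; near-zero variance suggests a script.
    if len(session.click_intervals_ms) >= 5 and pstdev(session.click_intervals_ms) < 10.0:
        return True
    return False


if __name__ == "__main__":
    bot = Session("s1", [200.0, 201.0, 199.5, 200.2, 200.1], False, "HeadlessChrome/119")
    human = Session("s2", [850.0, 2400.0, 1300.0, 640.0, 3100.0], False, "Mozilla/5.0")
    print(looks_invalid(bot), looks_invalid(human))  # True False
```

In practice, production systems combine many more behavioral and supply-chain signals, and the governance question is whether those models can be audited and explained, not just whether they catch fraud.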

Platforms are beginning to meet this challenge head-on. TikTok’s global ad network, Pangle, which reaches nearly 2.9 billion daily active users across 380,000 apps, has integrated Integral Ad Science’s AI-driven “Total Media Quality” technology. The integration allows advertisers to measure viewability, detect invalid traffic, and evaluate brand safety - capabilities that are critical for adjusting campaigns and allocations and for maintaining confidence at scale.
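
As a rough illustration of how media-quality signals like these could feed allocation decisions, the sketch below shifts budget toward placements with better viewability, lower IVT rates, and higher suitability scores. The field names, floors, and weighting are assumptions for illustration, not IAS Total Media Quality outputs or TikTok’s actual mechanics.

```python
# Illustrative sketch only: reallocating spend using media-quality signals.
# Field names and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class PlacementReport:
    placement: str
    viewability_rate: float    # share of impressions measured viewable
    ivt_rate: float            # share of impressions flagged invalid
    brand_safety_score: float  # 0.0 (unsafe) to 1.0 (safe)


def quality_weight(r: PlacementReport) -> float:
    """Collapse the three signals into a single allocation weight."""
    if r.ivt_rate > 0.05 or r.brand_safety_score < 0.7:
        return 0.0  # pause placements that fail hard floors
    return r.viewability_rate * r.brand_safety_score * (1.0 - r.ivt_rate)


def reallocate(budget: float, reports: list[PlacementReport]) -> dict[str, float]:
    """Split the budget across placements in proportion to their quality weights."""
    weights = {r.placement: quality_weight(r) for r in reports}
    total = sum(weights.values())
    if total == 0:
        return {p: 0.0 for p in weights}
    return {p: budget * w / total for p, w in weights.items()}


if __name__ == "__main__":
    reports = [
        PlacementReport("app_feed", 0.72, 0.01, 0.95),
        PlacementReport("rewarded_video", 0.88, 0.02, 0.90),
        PlacementReport("unknown_exchange", 0.40, 0.12, 0.55),  # paused by the floors
    ]
    print(reallocate(10_000.0, reports))
```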

Meanwhile, the Brand Safety Summit New York (November 2025), in partnership with Go Addressable, is marking a milestone moment for addressable advertising. The event highlights how deterministic identity - the ability to connect verified user data to ad delivery - is accelerating streaming and connected TV from experimental formats into foundational strategies.

One of the critical frontiers for brand safety has been livestreaming, where real-time content feeds leave little room for error. LiveGuard, a new entrant, promises what the industry has long lacked: continuous, customizable protection. Its three-layer model aims to let advertisers define their own safety parameters, filtering out categories such as profanity, politics, or NSFW topics. This “total control” approach could transform livestreaming from a risk-prone medium into a manageable, brand-safe channel - and finally open live digital content to larger advertisers and larger ad investments.
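
The sketch below illustrates the general idea of advertiser-defined category filtering for livestream segments. It is a hypothetical toy, not LiveGuard’s actual API or its three-layer model; the profile fields, category scores, and thresholds are assumed for illustration.

```python
# Illustrative sketch only: an advertiser-defined category filter for
# livestream segments. Not LiveGuard's API; names and scores are hypothetical.
from dataclasses import dataclass, field


@dataclass
class SafetyProfile:
    blocked_categories: set[str]
    max_risk_score: float = 0.3  # advertiser's tolerance for borderline content


@dataclass
class StreamSegment:
    segment_id: str
    # category -> confidence, e.g. from a real-time content classifier
    category_scores: dict[str, float] = field(default_factory=dict)


def is_suitable(segment: StreamSegment, profile: SafetyProfile) -> bool:
    """Keep ad delivery on only while no blocked category exceeds the tolerance."""
    return all(
        score <= profile.max_risk_score
        for category, score in segment.category_scores.items()
        if category in profile.blocked_categories
    )


if __name__ == "__main__":
    profile = SafetyProfile(blocked_categories={"profanity", "politics", "nsfw"})
    clean = StreamSegment("t+00:30", {"profanity": 0.05, "sports": 0.90})
    risky = StreamSegment("t+00:45", {"politics": 0.80})
    print(is_suitable(clean, profile), is_suitable(risky, profile))  # True False
```

The design point is that the advertiser, not the platform, sets the blocked categories and the tolerance, which is what “total control” implies in practice.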

A future built on accountability in 2026 means establishing trust while protecting advertisers and consumers alike. AI has made media faster, smarter, and more personalized - but also more opaque and volatile. To navigate that paradox, the industry must anchor innovation in transparency, governance, and collaboration. As the MRC codifies AI standards, as IAS and TikTok advance measurement integrity, and as new technologies like LiveGuard give brands real-time control, one truth becomes clear: the future of brand safety isn’t reactive, it’s architectural. It must be built into the DNA of every platform, campaign, and algorithm that defines the next generation of media.