AI: Friend or Foe? It’s All About Context
Posted by Victor Mills • Jan 16, 2026 8:45:00 AM
Public trust in generative AI remains fragile. Consumers are increasingly wary of synthetic content, deepfakes, and opaque algorithms. Recent research shows mixed feelings about AI-generated ads online: many people are skeptical or uneasy rather than enthusiastic. In surveys, a significant share of consumers describe AI-generated ads as less engaging than traditional ads, or as annoying or confusing, and often associate them with inauthenticity and lower trust in the brand, especially among younger audiences such as Gen Z and Millennials.
At the same time, brands and agencies are accelerating their adoption of AI-powered tools to monitor, measure, and mitigate risk at scale. This AI ‘duality’ defines the current confusion and concern around brand messaging and content: AI is simultaneously viewed as a threat to authenticity and as a necessary defense against the volume, speed, and complexity of modern media environments. As digital video, social platforms, and AI-generated content proliferate, brands are leaning harder on pre-bid optimization, contextual controls, and automated safeguards, often knowing full well that public sentiment toward AI itself is, at best, unsettled.
From a brand safety standpoint, AI poses both opportunity and risk when matching ads to content. The same duality is becoming more prevalent with respect to public figures and business executives, whose public comments can create immediate reputational exposure. The recent example involving Epic Games CEO Tim Sweeney underscores this dynamic: his public criticism of legislative action tied to generative AI tools unfolded amid concern over AI’s growing capacity to generate exploitative and harmful content. Context is key on the public podium as well. The underlying issue centered on platform responsibility and AI misuse, but taken out of context, the executive commentary amplified the conversation and skewed brand perception around one individual’s public stance.
In a more traditional context, the Campbell’s controversy, involving an executive’s alleged disparaging remarks about the company’s products and the customers who buy them, demonstrates how quickly reputations shift under the power of social platforms. In this case, the reputational damage wasn’t driven by AI at all, but platform algorithms ensured the comments spread broadly, supercharged by angry consumers reacting to how a brand’s leadership was perceived to speak about its own customers. In both cases, the fallout demonstrates how quickly news travels and trust erodes when executive narratives clash with brand legacies, values, or public opinion.
Public distrust of AI heightens scrutiny across the board: brands are no longer evaluated solely on where their ads appear, but also on how their leaders navigate the public landscape. In this environment, executive comments function much like media placements. Context matters, proximity matters, and intent can be overpowered by perception.
For brands and their partners, the mandate is balance. Brand safety in 2026 is no longer just a media problem or a technology problem; it is an organizational one. Successfully defending and delivering brand trust requires aligning AI safeguards with human accountability, ensuring that AI innovation, suitability, and careful communication reinforce, rather than undermine, one another.
Topics: Brand Safety, Brand Reputation, Knowing Your Partners, Brand Suitability, Marketers, Content, Marketing, Corporate Social Responsibility, Tools, Education, AI
Want To Stay Ahead In Brand Safety?
Sign up for the BSI Newsletter to get our latest blogs and insights delivered straight to your inbox once a month—so you never miss an update. And if you’re ready to deepen your expertise, check out our education programs and certifications to lead with confidence in today’s evolving digital landscape.
