This is part of an ongoing series by Dave Byrne, TrustRaise founder and BSI Advisory Board member, about where we find the practice of brand safety & suitability in 2026. You can find all the articles here.
Thanks to generative AI, we've seen a rapid shift in how information is created, distributed, and consumed online. Recent reports suggest that roughly forty percent of all web traffic in 2026 comes from AI-generated content, and that number is only growing. Meanwhile, platforms are scrambling to label, limit, or demonetize what's being called "AI slop," though definitions remain frustratingly vague, typically gesturing toward "low-quality, mass-produced content" flooding feeds and search results.
The challenge we face is that AI slop defies standardization in ways traditional low-quality content never did. Without clear, actionable definitions, advertisers believe they face the same challenges they encountered with Made for Advertising (MFA) sites: billions in wasted spend and damage to brand reputation. But as elusive as MFA proved to define, AI content is harder still, with even fewer quantifiable metrics and a pace of evolution that outstrips our ability to measure it. Brands can't afford to wait for industry-wide consensus that may never come, so in this article I'll attempt to provide an initial decision framework to help.
Before we jump in, it's important to acknowledge that not all AI-assisted content is created equal. There's a massive difference between 100% AI-generated content (where a machine produces everything from concept to final output with zero human oversight) and AI-empowered content, where humans use AI as a tool in their creative process. We've always used technology as a creative tool, and its presence doesn't make work inauthentic; spell check, for example, doesn't make a writer less trustworthy. What matters is how much human judgment, creativity, and intent shaped the final product.
Platforms recognize this and are taking steps to address the problems posed by 100% AI-generated content, which is encouraging. YouTube's updated monetization policy, effective July 15, 2025, targets content that is "mass-produced," "repetitious," or "inauthentic." The platform is demonetizing videos using AI voiceovers without human elements, templated slideshows, and channels churning out near-identical content. Pinterest has introduced controls allowing users to limit AI-generated imagery in their feeds, responding to user complaints that the platform was being overrun.
These are steps in the right direction, but they don't solve the core problem. And more importantly, new research is beginning to challenge the assumption that all AI content is a risk to be managed.
A joint study by Zefr and OM Media Trials — the first of its kind to directly measure how advertising performs when placed next to different types of AI-generated content — found that AI content is not inherently harmful to brands. Across several AI sub-categories, including satire, humorous content, and creative expression, ad adjacency actually drove increases in ad recall and perceptions of innovation. AI-generated content is not going away, and the research confirms it doesn't have to be a problem.
If this feels familiar, it should. The conversation around AI slop is following the same trajectory as the MFA debate, and we're making some of the same mistakes.
MFA sites were designed primarily for ad arbitrage, buying traffic cheaply and monetizing it aggressively with high ad density and low-quality content. The Association of National Advertisers found that ten billion dollars of the eighty-eight billion dollar open web programmatic market was spent on MFA sites. But it's worth pausing on why MFA exploded in the first place, and why AI-generated content is following the same path: people engaged with it. Whether or not the industry liked it, the attention was real, and real attention is monetizable. The same human behavioral dynamics that fueled MFA are now fueling the AI content boom. That's a harder problem than any platform policy or industry definition can fully solve.
Even with industry alignment and measurable characteristics like ad density, bounce rates, and session duration, it took years to define what qualified as MFA. A consortium of trade organizations (the ANA, 4A's, WFA, and ISBA) created a loose definition with shared characteristics, but it was intentionally left open to interpretation to give buyers flexibility in determining their level of tolerance.
To its credit, the industry eventually mobilized, though notably not by arriving at a single agreed-upon standard. There was never a universal definition that gave companies a clear, consistent set of criteria to distinguish an MFA site from a non-MFA site. Instead, vendors like Jounce Media had to operationalize the vague industry guidance themselves, making their own determinations about what qualified. BSI's Publisher Quality Utility was developed to give publishers visibility into how different providers were rating them, a direct result of multiple definitions being in play simultaneously.
The progress we've seen (MFA presence in web RTB auctions peaked at nearly thirty percent in July 2023 and has since contracted to less than ten percent by some estimates) didn't come from collective agreement. It came from a critical mass of individual brands and providers each deciding "this is what MFA means to us" and cutting out inventory accordingly. When enough players did that independently, the tide turned.
AI content is exponentially harder to categorize. There are fewer quantifiable metrics, the technology evolves faster than our ability to measure it, and the line between human-created and AI-assisted content becomes blurrier every day. But here's where the parallel breaks down: with MFAs, the goal was largely to block an entire category. With AI content, blanket avoidance is both impractical and, as the new research shows, potentially counterproductive.
That means the burden on individual brands is even higher than it was with MFA. Then, you needed your own definition of what to block. Now, you need a more nuanced framework, one that distinguishes not just AI from non-AI, but the specific types of AI content that create risk from those that don't. The Zefr/OM Media Trials study found that negative brand outcomes were most closely associated with specific types of AI content (spam-like, misleading, or content that created viewer uncertainty), not AI content as a whole. Treating all AI as a single risk category is both inaccurate and limiting.
The Zefr/OM Media Trials research offers a more optimistic point of view than the industry conversation might suggest. Forty-one percent of consumers feel more positive about a brand when AI content is clearly labeled (surprise, surprise: transparency builds trust). And as noted above, in AI environments like satire, humor, and creative expression, ad adjacency drove gains in ad recall and perceptions of innovation.
That said, ignoring the risks entirely isn't an option. Fifty-five percent of consumers feel uncomfortable with content on websites that rely heavily on AI, and 48% say they do not trust brands advertising on such sites. The perception risk is real; it's just not evenly distributed across all AI content. The difference lies in the type of AI environment and how well it aligns with your brand values.
There's also a deeper challenge the research surfaces: 32% of people mistakenly believe human-created content is AI-generated. The line between real and synthetic is blurring in both directions, which makes waiting for a neat definition even less viable.
Just as the industry didn't wait for a universal MFA definition before acting, brands need to define and implement their own principles for acceptable AI content rather than waiting for standards that may never arrive. You already make similar calls on behalf of your brand (e.g., whether your ads should appear next to coverage of natural disasters or celebrity gossip). Ask yourself these four questions to establish your brand's threshold; a rough sketch of how the answers might be operationalized follows the questions:
How much human involvement shaped the content? Not just editing for typos, but actual creative direction, fact-checking, and editorial judgment. This is your first filter. A human tweaking an AI-generated script is different from a human providing creative vision that AI helps execute, which is different from pure automation with no human involvement.
When does AI disclosure matter? For product reviews, health information, financial advice, or emotionally manipulative content, disclosure becomes critical. For generic how-to content or weather reports, maybe less so. Context determines when transparency is non-negotiable.
This standard cuts both ways. If disclosure matters when your ad appears next to AI-generated content, it matters equally when your own ad is AI-generated. Brands holding publishers to a transparency standard they're not applying to their own creative undermine the whole framework.
Would this content exist if it weren't cheap to produce? If it's just arbitraging attention (churning out keyword-stuffed articles or templated videos to capture search traffic), that's your red flag. If it's solving a real problem or providing genuine insight, that's different.
Does AI content fit your brand and audience? A tech-forward brand might embrace AI transparency and even celebrate it. A heritage luxury brand built on craftsmanship might not. Know your audience's tolerance and their expectations of authenticity.
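To make that threshold concrete, here's a minimal sketch of how a team might turn the four questions into a working rubric. Everything in it is an assumption to be tuned: the dimensions mirror the questions above, but the 0-2 scale, the hard rule, and the pass mark are illustrative choices, not an industry standard or anyone's published methodology.

```python
from dataclasses import dataclass

# Hypothetical rubric: one 0-2 score per question from the framework above.
@dataclass
class AIContentSignal:
    human_involvement: int  # 0 = pure automation, 1 = light editing, 2 = creative vision
    disclosure: int         # 0 = undisclosed in a high-stakes context, 2 = clearly labeled
    purpose: int            # 0 = pure attention arbitrage, 2 = solves a real problem
    brand_fit: int          # 0 = clashes with audience expectations, 2 = aligned

def within_threshold(signal: AIContentSignal, minimum: int = 5) -> bool:
    """Return True if content clears this brand's (assumed) suitability bar.

    Hard rule first: zero human involvement combined with zero disclosure
    is treated as slop regardless of the total score.
    """
    if signal.human_involvement == 0 and signal.disclosure == 0:
        return False
    score = (signal.human_involvement + signal.disclosure
             + signal.purpose + signal.brand_fit)
    return score >= minimum

# A clearly labeled, human-edited AI-assisted explainer passes.
explainer = AIContentSignal(human_involvement=2, disclosure=2, purpose=2, brand_fit=1)
print(within_threshold(explainer))   # True

# An undisclosed, templated video-farm channel fails on the hard rule.
video_farm = AIContentSignal(human_involvement=0, disclosure=0, purpose=0, brand_fit=0)
print(within_threshold(video_farm))  # False
```

The specific numbers aren't the point. The point is that once your answers to the four questions are explicit, they can be applied consistently across inventory decisions instead of living in someone's head.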
Work through these questions, and you'll start to see your threshold emerge. There are no right or wrong answers; these are brand-specific decisions that should align with your values and audience expectations. My personal belief is that 100% AI-generated content, where a machine produces everything from concept to final output with zero human creative involvement, is slop, but I know others will disagree. They'll point to AI-generated weather reports, sports summaries, or data visualizations that are perfectly functional and useful. And they're not wrong. 100% AI doesn't necessarily mean slop.
But for me, human-in-the-loop is the initial indicator of content worth trusting. Not because AI can't create coherent content, but because human judgment provides accountability, context, and intent that pure automation lacks. When something goes wrong (e.g., when bias creeps in, when content causes harm), who's responsible if no human was involved in its creation?
Partners like Zefr are building solutions for this moment, and what sets them apart is their transparency about how they do it. Rather than offering black-box certainty, they've been open about their methodology and willing to acknowledge the gray areas.
The joint Zefr/OM Media Trials research referenced earlier is worth revisiting here, not just as data but as an example of the kind of work the industry needs more of: direct measurement of how ads actually perform next to different types of AI content, rather than assumptions. That willingness to test and publish findings openly is itself a signal of how they operate as a partner.
For a complementary perspective, Vanessa from Ad Fontes Media has a great demo of how they think about AI use in news articles. There are valuable lessons in that approach.
When evaluating any partner in this space, methodology transparency should be your bar. Anyone offering clean, definitive answers to what is still a genuinely murky problem probably isn't being straight with you.
In the long term, avoiding AI-driven content entirely won't be possible, and as the research now shows, it isn't the right goal. The goal is alignment: knowing which AI environments enhance your brand and which introduce risk, and having the tools and principles to tell the difference.