This is part of an ongoing series by Dave Byrne, TrustRaise founder and BSI Advisory Board Member, about where we find the practice of brand safety & suitability in 2026. You can find all the articles here.
In my last piece, I argued that blanket avoidance of AI content is both impractical and, based on emerging research, potentially counterproductive. The brands that get this right won't be the ones waiting for an industry-wide definition of acceptable — they'll be the ones who've built their own.
But defining acceptable content only gets you so far. As AI becomes embedded in nearly every content workflow, the presence or absence of AI in the production process is becoming a less useful signal on its own. What matters more is the human behind the content: their intent, their values, their relationship with their audience, and whether any of that is consistent with your brand. Increasingly, understanding the creator matters more than evaluating any single piece of content.
There are two distinct contexts where this applies, and they each require a different approach: advertising adjacent to creator content, and working with creators directly.
The numbers on creator trust are hard to ignore. 69% of consumers trust what creators say and recommend, and 77% favor content created by influencers over professionally scripted ads. Among Gen Z, that figure climbs to 94%. This isn't a niche preference. It's a fundamental shift in where people locate credibility.
The reason is obvious when you think about it. Audiences spend time with creators in a way that builds something closer to friendship than spectatorship. They know how the creator thinks, what makes them laugh, what they'd actually buy. That sustained exposure creates parasocial relationships, and those relationships are the real asset brands are trying to borrow. It's earned attention, not just because a creator is a person, but because the audience has chosen to spend time with them.
But that trust premium is contingent. It doesn't transfer automatically to whoever is advertising adjacent to the content or partnering with the creator. It has to be warranted. And when it breaks (when 70% of consumers report feeling deceived by undisclosed partnerships, or when a creator does something that conflicts with a brand's values) it doesn't just break for the creator. It can break for the brand too.
80% of consumers cite influencers who are not genuine or transparent as the biggest trust killer. The implications for brands are twofold: the trust a creator has built has to be vetted before you borrow it, and when it breaks, it can break for you too.

Both were true before AI. The picture becomes even more complicated when you layer in the question of how creators are producing their content, and what they're disclosing (or not) about it.
When brands think about creator environments, they're typically making one of two decisions, and they require very different thinking.
The risks and the opportunities are different in each case. But in both, the creator's identity and values matter more than any single piece of content they've produced.
The challenge with creator-adjacent advertising is that brands tend to evaluate the content they're appearing next to (the specific video, post, or episode) rather than the creator who made it and the broader context they operate in. That's a mistake that's becoming more expensive.
Consider what happened at Spotify during the Rogan controversy. I was working at Spotify at the time and saw this play out firsthand. The brands that paused spend weren't only the ones advertising on The Joe Rogan Experience. Many advertisers pulled back across the Spotify Podcast Audience Network entirely, from shows that had nothing to do with Rogan, hosted by people who had never said anything remotely controversial. They weren't worried about the content their ads were appearing next to — they were worried about being seen as part of the same ecosystem. The reputational risk had spread beyond the source.
This is the new brand suitability challenge in creator environments: you're not just evaluating a piece of content; you're evaluating a person's ongoing relationship with your brand. A podcaster who also happens to discuss health and medicine isn't just an entertainer. Their worldview, their audience's expectations of them, and their behavior outside any given episode all matter, and all reflect on the brands that appear alongside them.
72% of consumers believe that brands should be held accountable for the influencer and creator partnerships they establish. That accountability extends beyond individual posts to the creator's broader public identity. And with 47% of influencers sharing content that could get brands into trouble, the question isn't whether this will happen to someone you're advertising with — it's when, and whether you've done the work to know your exposure before it does.
This isn’t typically a function of bad intent on a creator’s part. Research from BSI and TikTok found that 83% of advertisers believe creators’ brand safety & suitability knowledge is moderate at best — highlighting a knowledge gap, not a values gap. Creators are navigating a fundamental tension: the same authenticity that builds audience trust and drives performance can, at times, clash with the guardrails that come with monetizing content and becoming a business. The takeaway for marketers is to invest earlier in understanding who creators are, how they operate, and where those tensions are likely to surface before they become issues.
This challenge is getting more complex, not less, as the creator landscape expands beyond traditional influencers. Journalists, commentators, and subject matter experts who built their audiences inside legacy media are increasingly building their own independent channels, newsletters, podcasts, and video platforms. They bring credibility and expertise, but also strong points of view and established reputations that brands need to evaluate carefully. I'll dig into what this means for brands investing in news environments in a future article, but for now the principle holds: know who you're appearing next to, not just what they're talking about today.
The practical implication: Because most programmatic infrastructure doesn't surface creator-level identity at the point of buying, an opt-in approach is more useful than an opt-out one. Rather than trying to block your way to safety, build toward inclusion lists of creators and channels whose full output you've evaluated and are comfortable with. Platforms like YouTube Select offer curated, brand-suitable inventory with channel-level visibility. Partners like Pixability and Zefr have creator- and channel-specific offerings that go further, providing suitability signals at the creator level rather than just the content level. The framework from the last article applies here too: you've defined your content thresholds; now apply the same discipline to the people producing the content.
The creator partnership side of this conversation is different, and in some ways more interesting. Because here, the trend lines are pointing somewhere that many brands haven't fully caught up with.
61% of brands now report higher ROI from micro-influencers than from macro-influencers. Micro-influencers (roughly 10K-50K followers) generate engagement rates of around 5.7%, compared to 1.8% for accounts with 500K or more followers. They generate 22% more comments per post, at a fraction of the cost: an average of $320 per sponsored post versus $4,800 for macro-influencer content.
A creator with 15,000 followers in a specific niche has a fundamentally different relationship with their audience than someone with millions of passive followers accumulated over years of viral moments. It's closer, more conversational, and built on shared interest rather than a shared moment. And 78% of consumers say they trust recommendations from micro-influencers over larger accounts, which is the inverse of where most brand investment has historically gone.
There's a corollary to this that matters even more: the most valuable creator relationship isn't always a paid one.
Organic brand advocates (people who genuinely use your product and happen to have an audience, even a small one) represent something money genuinely can't buy. Their recommendations carry a weight that paid partnerships can approximate but never fully replicate. Part of the due diligence brands should be doing is identifying who is already speaking positively about them, understanding why, and figuring out how to support that without turning it into something that feels transactional.
This is exactly the problem that platforms like Marker Video are built to solve. Rather than brands commissioning content from creators upfront, Marker Video surfaces authentic video reviews from real product users, and brands can license the content after the fact. There's no brief, no brand influence over the creative, and no incentive for the reviewer to say anything other than what they actually think. The result is content that carries the credibility of genuine experience, because it is genuine experience. It's an early but interesting example of what infrastructure for organic advocacy could look like at scale.
The AI angle matters here too. A creator who is open about how and why they use generative AI in their process (who treats it as a tool, not a ghost) maintains their authenticity precisely because of that transparency. The relationship between a creator and their audience is built on trust, and audiences are increasingly sophisticated at detecting when that trust is being undermined. It's worth noting that 43% of people believe they can identify AI-generated content, but research shows their actual accuracy is worse than a coin flip. The line between authentic and synthetic is blurring in ways audiences can sense but can't always articulate, which makes a creator's track record of transparency even more important. When something feels off, audiences don't always know why, but they act on it. Transparency about AI use, when it's genuine and consistent, can be part of building trust rather than a contradiction of it.
This echoes what I wrote in the last piece: it's possible to be authentic and use AI. The same is true for creators. What matters is the presence of genuine human judgment, creative intent, and accountability. A creator who uses AI to help draft scripts, generate visual concepts, or repurpose content, but who is honest about it and whose voice and perspective genuinely shape the final product, is not being inauthentic. A creator who uses AI to mass-produce content under a persona that doesn't reflect any real person's views, at a volume designed to capture algorithmic attention rather than serve an audience, is a different matter entirely.
What connects the AI content conversation to the creator conversation is a single underlying principle: the source of content matters as much as the content itself, and increasingly, brands need to be investing in understanding both.
The industry has reasonably good infrastructure for evaluating content. Keyword blocklists, contextual classifiers, brand safety categories: these are imperfect but they exist, they're improving, and the market knows how to use them. What we don't have is equivalent infrastructure for evaluating people. No standardised way to assess a creator's track record, no shared signal for the values they represent beyond any given post, no early warning system for the kind of reputational drift that turns a safe buy into a liability.
That gap is widening. The creator landscape keeps expanding, and more credentialed voices (journalists, scientists, commentators, former TV anchors) are moving into independent channels where there's no editorial layer sitting between them and their audience. They bring credibility. They also bring history, ideology, and the capacity to surprise you in ways a purely algorithmic content environment can't.
Closing that gap is the actual work. Not avoiding creators, not treating every independent voice as a risk, but building the capability to make informed, defensible decisions about who your brand appears alongside and who you choose to work with. That's harder than running a keyword filter. It's also where the competitive advantage is, because most brands aren't doing it yet.
Just as I offered four questions for brands to assess AI content in the last piece, here are five questions for evaluating creator partnerships and environments, whether you're placing ads or seeking collaborators.
1. Is their voice genuinely their own? Not whether they use AI, but whether a real person's perspective, values, and judgment are evident in what they produce. Would this content exist without them? Can you identify a point of view that's theirs? In practice, this means reading beyond the posts you're being pitched: look at how they respond to comments, how their content has evolved, whether they have a consistent perspective or shift with the algorithmic wind.
2. Is their engagement real and relevant? Follower counts are vanity metrics. Engagement rates, comment quality, and audience relevance to your category are what matter. A smaller audience that genuinely cares is worth more than a large audience that doesn't. A useful test: read the comments on five recent posts. Are people responding to what the creator actually said, or are they just tagging friends and posting emoji? The former indicates a real community; the latter, an audience that isn't really there.
3. Does their audience overlap with yours? Strong engagement means little if the people engaging aren't the people you're trying to reach. Understanding whether a creator's followers actually match your target demographic should be a baseline screen before any partnership conversation.
4. Does their full body of work align with your brand's position? Not just the video your ad appeared in or the post you're sponsoring: what else are they saying, and to whom? The Rogan situation is a useful reminder that you're evaluating a person, not a content asset. In practice this means going three clicks deep before committing: their older content, their other platforms, anything that surfaces when you search their name alongside words your brand wouldn't want to be associated with.
5. Are they transparent about their relationships and their process? Do they disclose paid partnerships clearly and consistently, or bury them in a caption? Do they acknowledge when AI plays a role in their work? A creator who discloses well is lower risk as a partner, because they've already demonstrated they're comfortable with transparency when it costs them something. Check their last five sponsored posts and see how the disclosure is handled.
There are no universal right answers here, any more than there were for the AI content questions in the last piece. A brand that explicitly wants to reach audiences engaged in political commentary might make a very different call on the Rogan-type scenario than one that doesn't. What matters is that the decision is deliberate, not passive.