YouTube Details Efforts to Combat Misinformation

Posted by Brand Safety Institute • Feb 22, 2022 5:00:00 AM

Video is already a big part of the digital media landscape, and its role is only growing. The proliferation of misinformation, however, is a problem across all digital content, as the recent uproar over Covid-19 misinformation on Joe Rogan's Spotify podcast made clear, and video is no exception. Now one of the biggest platforms in video says it is taking steps to address misinformation in the medium.

YouTube’s Chief Product Officer, Neal Mohan, took to the company’s blog to outline three key challenges YouTube is addressing in its efforts to halt misinformation. According to Mohan, the company is focusing on catching new misinformation before it goes viral, preventing the sharing of misinformation across platforms, and addressing misinformation in various parts of the world.

Mohan’s post details how YouTube’s machine-learning systems were previously trained on misinformation campaigns such as 9/11 and moon-landing conspiracy theories. That prior training has helped YouTube stay on top of new misinformation as it emerges. In the case of Covid-19, YouTube moved quickly to update its guidelines based on health-authority guidance and to minimize the spread of false claims. And when expert guidance hasn’t been available, Mohan’s post says, YouTube has trained its systems on new data, utilizing “an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts to identify narratives our main classifier doesn’t catch.” That effort, Mohan writes, “will make us faster and more accurate at catching these viral misinfo narratives.”
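To make that layered approach a bit more concrete, here is a minimal Python sketch of how a main classifier might be backed up by keyword lists in additional languages and analyst-supplied narrative terms. Everything here, including the function names, keyword lists, and threshold, is an illustrative assumption; Mohan's post does not describe YouTube's systems at this level of detail.

```python
# Hypothetical sketch of a layered misinformation-screening pipeline.
# None of these names or terms come from YouTube; the main classifier is a stub.

from dataclasses import dataclass


@dataclass
class Verdict:
    flagged: bool
    source: str  # which layer raised the flag


def main_classifier_score(text: str) -> float:
    """Stand-in for a trained model; returns a toy confidence score."""
    suspicious_terms = {"hoax", "staged", "cover-up"}  # illustrative only
    hits = sum(term in text.lower() for term in suspicious_terms)
    return min(1.0, hits / 3)


# Targeted keyword lists in additional languages (illustrative examples).
REGIONAL_KEYWORDS = {
    "es": ["engaño", "montaje"],
    "de": ["inszeniert"],
}

# Narrative phrases supplied by hypothetical regional analysts.
ANALYST_NARRATIVES = ["miracle cure", "secret ban"]


def screen(text: str, language: str, threshold: float = 0.7) -> Verdict:
    """Run the main classifier first, then fall back to targeted layers."""
    if main_classifier_score(text) >= threshold:
        return Verdict(True, "main_classifier")
    lowered = text.lower()
    for term in REGIONAL_KEYWORDS.get(language, []):
        if term in lowered:
            return Verdict(True, f"keyword:{language}")
    for phrase in ANALYST_NARRATIVES:
        if phrase in lowered:
            return Verdict(True, "analyst_narrative")
    return Verdict(False, "none")


if __name__ == "__main__":
    print(screen("This miracle cure was hidden from you", "en"))
    print(screen("Todo es un montaje", "es"))
```

The point of the fallback layers is that narratives the main classifier misses can still be caught by cheaper, more targeted signals, which mirrors the "targeted mix" Mohan describes in general terms.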

To prevent the sharing of misinformation, Mohan writes that YouTube has “overhauled our recommendation systems to lower consumption of borderline content.” And in some cases when that hasn’t worked, YouTube has disabled the share button and broken the link on videos it is already limiting in recommendations. YouTube has also utilized interstitials to alert viewers that the content may contain misinformation.
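As an illustration of how such interventions might be layered, the sketch below escalates from demoting a video in recommendations to showing an interstitial and finally blocking sharing as a hypothetical "borderline" score rises. The score, thresholds, and action names are assumptions made for the example, not YouTube's actual policies.

```python
# Hypothetical sketch of layered interventions on "borderline" videos.
# Thresholds and action names are assumptions, not YouTube's implementation.

from dataclasses import dataclass, field
from typing import List


@dataclass
class VideoPolicy:
    video_id: str
    borderline_score: float  # 0.0 (clean) .. 1.0 (clearly borderline)
    actions: List[str] = field(default_factory=list)


def apply_interventions(policy: VideoPolicy) -> VideoPolicy:
    """Escalate from soft to hard interventions as the score rises."""
    if policy.borderline_score >= 0.4:
        # Demote in recommendations so the video is surfaced less often.
        policy.actions.append("limit_recommendations")
    if policy.borderline_score >= 0.6:
        # Warn viewers before playback with an interstitial notice.
        policy.actions.append("show_interstitial")
    if policy.borderline_score >= 0.8:
        # Block off-platform spread: no share button, embedded links broken.
        policy.actions.extend(["disable_share_button", "break_embed_link"])
    return policy


if __name__ == "__main__":
    for score in (0.3, 0.65, 0.9):
        print(apply_interventions(VideoPolicy("vid123", score)).actions)
```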

To address misinformation in other parts of the world, YouTube is looking at “growing our teams with even more people who understand the regional nuances” as well as “exploring…partnerships with experts and non-governmental organizations around the world.”

Mohan writes that “There has never been a more urgent time to advance our work for the safety and well-being of our community.” The Brand Safety Institute agrees. To help other platforms, as well as advertisers, agencies, and everyone else in the digital advertising ecosystem, keep the community safe, BSI offers a wide range of educational resources, tools, and expert insights so that every player in the supply chain can stay up to date on brand safety issues, including misinformation. You can also find the policies and guidelines for several platforms here.

Individuals can become certified Brand Safety Officers, while organizations can explore certification as Brand Safety Business Partners and Brand Safe Workforces. When every member of the digital advertising world has the tools needed to address challenges to brand safety, the ecosystem stays safe for all.

Topics: Brand Safety, YouTube, misinformation, video