Measurement Beyond the Frame: Assessing the Safety of Influencer Communities

Published On: April 12, 2021 | Categories: Announcements

Brand-Content Creator Partnerships Are Increasingly Risky

Business opportunities between brands and content creators are more lucrative than ever. According to Stream Hatchet’s Live Game Streaming Trends report, audiences spent an average of 97 million hours watching live streams every day in early 2021. Twitch streams alone garnered 6.3 billion hours watched over that period, and similar growth is evident across other streaming platforms, with daily hours watched up 80% year over year. As audiences spend more time watching streamers, it’s understandable that advertisers want to reach that massive, growing viewership.

However, some advertisers are hesitant to invest in these environments due to the risk of partnering with content creators who aren’t brand safe. Similarly, brands are quick to drop content creators who become associated with unsafe topics. Recent cases include a content creator group that filmed a video with crimes being committed in the background, a creator found to have business ties to racism, and creators grooming their underage audiences. Instances like these perpetuate advertiser hesitancy and leave creators struggling to prove both their value and their safety to prospective sponsors.

The volatility of these streaming environments and the lack of third-party safety verification make it difficult for brands to get an accurate picture of a content creator’s brand safety status. This raises a critical question: what can brands do to avoid these pitfalls? The problem starts with viewing only what is inside the frame of a video. How much can surface-level metrics like view counts and likes really tell a brand about a content creator’s actual safety?

The Problems With Traditional Brand Safety Evaluation

Traditionally, vetting content creators is an arduous process that tends to rely on simplistic, self-reported data. View counts, follower numbers, or a content creator’s verified status on a platform offer a semblance of assurance, but they paint an incomplete picture. They simply give no indication of what’s happening beyond the frame, in that content creator’s community.

Instead, it is important to look beyond the frame of the video player and measure the safety of the surrounding content. This includes the conversations audiences are having about the content creator, whether in chat, a comments section, or on other platforms altogether. Getting a full, clear picture of the content creator’s community health is key.

Beyond monitoring the safety of these environments, it’s important to have a contextual understanding of what’s being said. Keyword detection alone gives a distorted view of safety: positive exclamations can be incorrectly flagged, while nuanced unsafe content is often missed. For example, a mention of “Sex and the City” would get pinged by a sexual content filter, but AI with entity recognition would identify it as a reference to the show. Missing such language nuance often leads to an inaccurate representation of a content creator’s community safety, as the sketch below illustrates.
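
To make this concrete, here is a minimal, hypothetical sketch contrasting bare keyword matching with entity-aware filtering. It assumes spaCy with its small English model; the blocklist and sample message are purely illustrative, and whether a given model actually tags this particular title as an entity can vary.

```python
# A minimal sketch contrasting bare keyword matching with entity-aware
# filtering. Assumes spaCy and its small English model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

BLOCKLIST = {"sex"}  # illustrative sexual-content keyword list, not a real one

def naive_flag(text: str) -> bool:
    """Flag any message containing a blocklisted keyword, regardless of context."""
    return any(word in BLOCKLIST for word in text.lower().split())

nlp = spacy.load("en_core_web_sm")

def entity_aware_flag(text: str) -> bool:
    """Flag a blocklisted keyword only when it is NOT inside a recognized
    named-entity span (e.g. a show title tagged WORK_OF_ART)."""
    doc = nlp(text)
    entity_token_ids = {token.i for ent in doc.ents for token in ent}
    return any(
        token.lower_ in BLOCKLIST and token.i not in entity_token_ids
        for token in doc
    )

msg = "Anyone else rewatching Sex and the City this weekend?"
print(naive_flag(msg))         # True: a false positive on the show title
print(entity_aware_flag(msg))  # False, provided the model tags the title
```

The point is not the specific library: any classifier that resolves entities and context before flagging will avoid the false positives that sink pure keyword lists.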

Real-time and historical reporting are also essential components of brand safety. Since creators prove their suitability over time, a view into that historical data is key for partnership decisions. Meanwhile, real-time reporting is critical for identifying burgeoning toxic incidents before they become a larger issue for a creator or their partners. A simple sketch of how the two views fit together follows.
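
As a hypothetical illustration (not any particular vendor’s implementation), the monitor below pairs a lifetime toxicity rate for historical vetting with a rolling window over recent messages for real-time alerting. The class name, window size, and threshold are all assumptions chosen for the example.

```python
# A hypothetical monitor (names and thresholds are assumptions, not a real
# product API) pairing a lifetime toxicity rate for historical vetting with
# a rolling window for real-time alerting.
from collections import deque
from dataclasses import dataclass

@dataclass
class ToxicityMonitor:
    window_size: int = 500         # how many recent messages count as "now"
    alert_threshold: float = 0.05  # alert if >5% of recent messages are toxic

    def __post_init__(self):
        self.recent = deque(maxlen=self.window_size)
        self.total = 0
        self.toxic_total = 0

    def observe(self, is_toxic: bool) -> bool:
        """Record one classified message; return True when an alert should fire."""
        self.recent.append(is_toxic)
        self.total += 1
        self.toxic_total += int(is_toxic)
        rate = sum(self.recent) / len(self.recent)
        return len(self.recent) == self.window_size and rate > self.alert_threshold

    @property
    def historical_rate(self) -> float:
        """Lifetime toxicity rate, the kind of figure used in partnership vetting."""
        return self.toxic_total / self.total if self.total else 0.0

# Simulated classifier output: a quiet chat that suddenly turns toxic.
monitor = ToxicityMonitor()
for is_toxic in [False] * 480 + [True] * 30:
    if monitor.observe(is_toxic):
        print("alert: toxicity spike in recent messages")
        break
print(f"historical rate so far: {monitor.historical_rate:.3f}")
```

The same stream of classified messages thus serves both needs: the rolling window catches an incident as it happens, while the accumulated totals answer the slower question of whether a creator has been a safe bet over time.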

Clearly, a number of factors can go awry when brands and agencies evaluate content creator brand safety, but there is a way to avoid these costly and time-consuming pitfalls.

Going Beyond the Frame to Understand Safety in Context

Understanding the safety of what is happening around a piece of content or its creator is the first step towards validating brand safety in these environments.

Major livestreaming services are already moving to consider streamer behavior outside their platforms. In response to recent incidents of toxicity, Twitch announced that it would begin enforcing its conduct policy against users for what they do off-platform. Looking beyond the frame is an essential aspect of brand safety, and one way to do that is by understanding the community health of the content creator.

This is achievable through natural language processing (NLP) AI that can automatically capture and distinguish content in context, ensuring accurate classification. Having these insights in real time also means brands will know immediately if a content creator or their community has turned toxic, so they can make informed business decisions quickly and avoid PR issues.

Third-party verification of a creator’s brand safety data is essential as well: it eliminates the inherent bias of self-reported data and gives a more accurate picture of what’s happening beneath the surface.

Getting a holistic view of conversations surrounding a content creator across multiple platforms and sources is also key. This ensures that if toxicity occurs on a platform the creator’s agency isn’t watching, it still gets captured and reported in real time, providing instant insight and enabling rapid response. The sketch below shows one way messages from different sources can be normalized into a single stream.
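
As a hypothetical sketch of that cross-platform plumbing, the shared schema below lets one classification pipeline consume chat from any source. The platform names and raw payload fields are assumptions for illustration, not any real API contract.

```python
# A hypothetical shared schema for cross-platform monitoring. Platform and
# payload field names here are assumptions, not a real API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class CommunityMessage:
    platform: str     # e.g. "twitch", "youtube", "discord"
    creator_id: str   # the creator whose community the message belongs to
    author: str
    text: str
    timestamp: datetime

def normalize_twitch(raw: dict) -> CommunityMessage:
    """Map one platform's raw chat payload into the shared schema so every
    source feeds the same classification pipeline."""
    return CommunityMessage(
        platform="twitch",
        creator_id=raw["channel"],
        author=raw["user"],
        text=raw["message"],
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
    )

# One normalizer per source; downstream toxicity classification and alerting
# only ever see CommunityMessage objects, regardless of where chat originated.
```

With one normalizer per source, adding coverage for a new platform means writing one small adapter rather than rebuilding the monitoring pipeline.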

A Safety Net for Brand Investments and Content Creators Alike

Having all of these pieces in place is critical for everyone involved in the creator economy. With third-party validated brand safety reporting, advertisers can reliably and confidently evaluate a content creator’s fitness for sponsorship, and talent agencies can ensure that their talent pool is a safe bet. Mid- and long-tail content creators benefit too: the playing field is leveled, and they can earn sponsorships by proving their brand safety.

The Modern Solution for an Increasingly Complex Brand Safety Environment

The industry deserves a smarter brand safety solution to keep up with the growing complexity of these environments; traditional approaches are no longer enough. We believe that an AI-based, third-party validated approach with contextual understanding and instant insights is the key to staying ahead of rapidly evolving brand safety needs.

Stay tuned to find out more.
