The ultimate glossary for brand safety


Research shows that many marketers are confused by the technology commonly used to maintain online brand safety. Peter Wallace, UK Commercial Director at GumGum - the computer vision platform for marketers - explains its main elements.

When many household brands discovered in 2017 that their ads had been appearing next to content created by paedophiles and terrorists, the reaction was swift and strong. Marketers issued clear direction to their media agencies to avoid any such problems in the future, with the result that many sites were blacklisted. 

Three years on, and a poll published by IAB Europe last month has demonstrated the extent of the response. 77% of marketers say that keeping their brand safe from dangerous or controversial associations is a high priority; 94% use blacklists and 91% employ keyword targeting. Yet the problems remain. Only a few weeks ago, activist group Avaaz discovered that ads for a variety of major brands had been appearing next to YouTube footage from climate change deniers.

The fact is that the tools that most brands have been employing to combat this problem are not only ineffective but also costly. Blacklisting entire sites is the digital marketing equivalent of using a sledgehammer to crack a nut - marketers remove large quantities of inventory from their media plan on the basis of some fairly spurious keyword matches.

For example, many brands have to avoid placing ads on recipe sites because of the commonly used phrase 'chicken breast'. The keyword matching programmes associate the word 'breast' with pornography and automatically blacklist the entire site. 
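To make the failure mode concrete, here is a minimal sketch (in Python) of the kind of naive keyword matching described above. The flagged-term list and sample sentence are purely illustrative, not any vendor's actual rules.

```python
# Minimal sketch of naive keyword blacklisting: any match on a flagged
# term blocks the whole page, regardless of surrounding context.
FLAGGED_TERMS = {"breast", "kill", "shoot"}  # illustrative list only

def naive_is_unsafe(page_text: str) -> bool:
    words = {w.strip(".,!?'\"").lower() for w in page_text.split()}
    return bool(words & FLAGGED_TERMS)

recipe = "Season the chicken breast and roast for twenty minutes."
print(naive_is_unsafe(recipe))  # True - the recipe page gets blacklisted
```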

Contextualisation coupled with machine learning is the key. We have been pioneering this technology for the past ten years, so we have a pretty good grasp of its opportunities and scope, but it can be complex for a non-technical audience. One of the findings of the IAB research was that marketers are keen for more education in this area. So, with this in mind, here are some explanations of the key technologies that will increasingly form the backbone of a smarter approach to brand safety. It's divided into two main categories - tech that analyses text, and tech that analyses visuals.

Seeing the whole, not just the parts

Keyword targeting is a technique that assumes that single words can indicate whether a page or site is 'bad' or 'good', but as the 'chicken breast' example shows, this is far too simplistic. What is needed is to take account of the context of the word - in this case, the juxtaposition of the word 'chicken', which instantly renders 'breast' safe.
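One way such a contextual check might look in practice is sketched below; the window size and the list of 'safe' neighbouring words are illustrative assumptions, not how any particular platform works.

```python
# Sketch of a context-aware check: a flagged term only counts as unsafe
# if no mitigating word appears within a few words of it.
SAFE_NEIGHBOURS = {"breast": {"chicken", "turkey", "duck", "cancer"}}
WINDOW = 3  # words either side to inspect (illustrative)

def contextual_is_unsafe(page_text: str, term: str) -> bool:
    words = [w.strip(".,!?'\"").lower() for w in page_text.split()]
    for i, word in enumerate(words):
        if word != term:
            continue
        neighbours = set(words[max(0, i - WINDOW): i + WINDOW + 1])
        if not SAFE_NEIGHBOURS.get(term, set()) & neighbours:
            return True  # flagged term with no mitigating context
    return False

recipe = "Season the chicken breast and roast for twenty minutes."
print(contextual_is_unsafe(recipe, "breast"))  # False - 'chicken' is nearby
```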

Picking up the context of a word or phrase is something that most adult humans can do instantly, and natural language processing (NLP) is the way in which we use machine learning to effectively train a computer to do the same thing.

NLP is the technology, for example, that would help a computer programme recognise the difference between Apple the brand and apple the fruit, by scanning the content surrounding the word. 
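As a rough illustration, an off-the-shelf NLP library such as spaCy can already surface this distinction through its pretrained named entity recogniser. The exact output depends on the model used, so treat this as a sketch rather than a guarantee.

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

sentences = [
    "Apple unveiled a new iPhone at its Cupertino headquarters.",
    "She sliced an apple and baked it into a pie.",
]

for sentence in sentences:
    doc = nlp(sentence)
    orgs = [ent.text for ent in doc.ents if ent.label_ == "ORG"]
    print(sentence, "->", orgs or "no organisation detected")
```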

We have created some more targeted processes that incorporate NLP to add more colour to simple keyword targeting. These include hate speech detection: analysing the context of a piece of content that includes the word 'kill', for example, might reveal that it is actually a horticulture piece about slugs.
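A simplified sketch of that idea: escalate the flagged word only if the surrounding vocabulary leans towards violence rather than a benign topic such as gardening. The cue lists are invented for illustration; a real system would learn them from labelled data.

```python
# Sketch: "kill" is only escalated when the surrounding vocabulary looks
# violent rather than, say, horticultural.
VIOLENCE_CUES = {"weapon", "attack", "threat", "murder"}
GARDENING_CUES = {"slugs", "garden", "plants", "lettuce", "pellets"}

def classify_kill_context(text: str) -> str:
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    if "kill" not in words:
        return "no flagged term"
    if len(words & GARDENING_CUES) > len(words & VIOLENCE_CUES):
        return "benign - horticulture context"
    return "review - possibly violent content"

print(classify_kill_context("How to kill slugs before they reach your lettuce plants."))
```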

Named entity recognition tracks and analyses content that includes certain names - people, organisations or places - that might be particularly relevant for a given brand, in a similar way to event categories. Sentiment analysis is also useful for gauging the overall tone of a piece of text.
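A short sketch of both ideas using widely available open-source tools - spaCy for entities and NLTK's VADER for sentiment. The sample text is invented, and both libraries need their models or lexicons downloaded once before use.

```python
import spacy
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

nlp = spacy.load("en_core_web_sm")   # pretrained English model
sia = SentimentIntensityAnalyzer()

text = ("Acme Airlines cancelled hundreds of flights, leaving passengers "
        "stranded and furious at the chaos.")

entities = [(ent.text, ent.label_) for ent in nlp(text).ents]
sentiment = sia.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)

print(entities)   # names of people, organisations and places mentioned
print(sentiment)  # a strongly negative score would flag an unsuitable adjacency
```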

Understanding the visual web

The other side of the contextualisation coin is the analysis of the visual content of a web page, using computer vision technology, the visual arm of artificial intelligence. The same kind of technology that can recognise your face at airport scanners is used to recognise the shapes in a picture on a web page, and then work out what they are by comparing them to a pre-programmed list. This, at its simplest level, is object recognition. 
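At its simplest, that pre-programmed list can be the label set of an off-the-shelf classifier. The sketch below uses torchvision's pretrained ResNet-50 purely as an illustration - the image path is a placeholder, and production systems rely on purpose-built detection models rather than a generic ImageNet classifier.

```python
import torch
from PIL import Image
from torchvision import models  # torchvision >= 0.13 for the weights API

# Load a classifier pretrained on ImageNet and its matching preprocessing.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("page_image.jpg").convert("RGB")  # placeholder path
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.argmax().item()
print(weights.meta["categories"][top], float(probs[top]))  # e.g. "basketball", 0.87
```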

A more complex programme can understand a whole scene by recognising the individual elements and using contextualisation technology to spot what the particular grouping of objects means. For example, faced with a picture of a court, a programme might detect a ball, players, the crowd and some advertising hoardings - then deduce that this scene is a basketball match.
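A toy version of that deduction step might map the set of detected objects onto whichever scene definition they overlap with most. The scene definitions below are invented; real systems learn these groupings from data.

```python
# Toy scene inference: pick the scene whose definition shares the most
# objects with what the detector found on the page.
SCENES = {
    "basketball match": {"ball", "player", "crowd", "hoop", "advertising hoarding"},
    "beach holiday": {"sand", "sea", "umbrella", "swimsuit"},
}

def infer_scene(detected_objects: set) -> str:
    return max(SCENES, key=lambda scene: len(SCENES[scene] & detected_objects))

detections = {"ball", "player", "crowd", "advertising hoarding"}
print(infer_scene(detections))  # basketball match
```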

Hate logo detection is a customised version of object recognition. A programme can be 'trained' to spot, for example, the shape of a swastika.

Where contextualisation gets really powerful is when image recognition technology is integrated with NLP capabilities. Not only does this approach offer a watertight system of brand safety, it also avoids the wastage of media inventory that over-reliance on keyword targeting alone creates. You can, for example, analyse for threat detection across multiple categories of event by looking at the contexts of both the words and the pictures on a page or site.
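Schematically, that integration can be thought of as blending a text-level threat score with an image-level threat score into one decision per page. The weights and thresholds below are purely illustrative, not how any real platform scores inventory.

```python
# Schematic blend of text and image signals into a single suitability call.
def page_suitability(text_threat: float, image_threat: float,
                     text_weight: float = 0.5) -> str:
    score = text_weight * text_threat + (1 - text_weight) * image_threat
    if score > 0.7:
        return "block"
    if score > 0.4:
        return "review"
    return "serve"

# A page whose text mentions a protest but whose imagery is a peaceful crowd.
print(page_suitability(text_threat=0.6, image_threat=0.2))  # serve
```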

This more sophisticated and nuanced approach to contextualisation is taking the industry into a new era of analysing for brand suitability. While this has its roots in brand safety, it places more emphasis on tailoring media placements, and their relative safety, to a particular brand - because what might prove controversial for one brand may be fine for another.

So, this kind of customisation of brand safety is where the industry is moving - in essence, a refinement of the process that preserves the efficiency of media plans while keeping the brand away from trouble.