How RollWorks optimizes your display advertising

Marketers spend a lot of their waking hours writing ads, creating ads, and testing ads - all in the hope those ads will engage viewers enough to (at best) click through or (at worst) remember the name so they can search for it later.

Creating display advertising can be stressful, and the opportunity costs of not showing the most effective option are huge - from wasting precious ad budget to missing out on converting the right prospects for your business. It’s no wonder marketers are constantly wondering if they are using their best-performing ads!

While some customers try to use A/B testing to figure out which ad is most effective, we take a different approach to save you time and money, and (most importantly) to lower your stress levels!

Did you know we have several Machine Learning algorithms hard at work in the background to optimize your display advertising campaigns?

Our approach to optimizing your display advertising

While some vendors in the ABM space offer the ability to run A/B tests on your creative, the RollWorks platform goes one step further to intelligently select the best creative by using a multi-armed bandit algorithm.


The multi-armed bandit problem

In simple terms, the multi-armed bandit problem refers to a scenario where you must repeatedly choose, from a set of options, the one that gives the most valuable return. In our case, within our BidIQ bidding algorithm, when an ad slot comes up for auction, we consider all the customer creatives that are eligible for that slot and pick the one we think has the highest probability of getting a conversion.

We balance exploration and exploitation: we try out different ads to gather information about how they perform (exploration), but we also take advantage of the ad that has performed best so far (exploitation). Currently, our split is 10% exploration of any creative vs. 90% exploitation of the creative our data shows to be the best fit.
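As a rough illustration of the idea (a minimal sketch, not RollWorks' actual implementation - the creative names and numbers below are made up), a 10%/90% split can be expressed as an epsilon-greedy selection:

```python
import random

def pick_creative(stats, epsilon=0.10):
    """Epsilon-greedy creative selection.

    stats maps creative id -> (conversions, impressions).
    With probability `epsilon` we explore (pick a random creative);
    otherwise we exploit the creative with the best observed
    conversion rate so far.
    """
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Exploit: highest observed conversion rate, lightly smoothed
    # so creatives with few impressions don't divide by zero.
    return max(stats, key=lambda c: (stats[c][0] + 1) / (stats[c][1] + 2))

stats = {"ad_a": (30, 1000), "ad_b": (55, 1000), "ad_c": (2, 100)}
choice = pick_creative(stats, epsilon=0.0)  # pure exploitation
print(choice)  # "ad_b" has the best observed rate
```

With `epsilon=0.10`, roughly one impression in ten is served a random creative, which keeps fresh data flowing in about the options that are currently losing.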


Benefits of using a bandit algorithm

  • Creative is chosen specifically for each user: The BidIQ model evaluates whether an ad is the best choice for that unique opportunity, at that time, for that user, in that ad slot. The model assigns a probability of ‘success’ (clicks/conversions) to each ad in order to determine which ad to choose for the slot. For example, if we know the person who will eventually see the potential ad spot is a sales leader who has visited our pricing page three times and is interested in our intent solution (based on our website engagement stats), we would choose a different ad than if the visitor were a marketing leader who had visited our case study page and was interested in our measurement features. At RollWorks, we have an enormous trove of ad engagement data to pull from, built over years as a leader in the ABM space, which helps our BidIQ algorithm distinguish between different personas and understand what content would be most interesting to each of them.
  • The model learns continuously: The BidIQ algorithm is constantly learning while tests are being performed, with no need to wait until the end of a test to implement the learnings. Other solutions in the market don’t conduct this constant exploration and evaluation of different ad creatives, effectively ignoring a user’s context clues and failing to gather data about how other options might be a better fit for a specific opportunity.
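The per-user selection described above can be pictured as scoring each candidate creative against the viewer's context. The feature names, weights, and logistic form below are purely hypothetical, chosen to illustrate the idea - BidIQ's real model and inputs are not public:

```python
import math

# Hypothetical per-creative weights over viewer/slot features.
WEIGHTS = {
    "pricing_ad":    {"is_sales_leader": 1.2, "pricing_page_visits": 0.8, "bias": -3.0},
    "case_study_ad": {"is_marketing_leader": 1.1, "case_study_visits": 0.9, "bias": -3.0},
}

def success_probability(creative, context):
    """Estimated probability of a click/conversion for this creative and viewer."""
    w = WEIGHTS[creative]
    score = w["bias"] + sum(w.get(f, 0.0) * v for f, v in context.items())
    return 1.0 / (1.0 + math.exp(-score))  # logistic squash into (0, 1)

# A sales leader who has visited the pricing page three times:
viewer = {"is_sales_leader": 1.0, "pricing_page_visits": 3.0}
best = max(WEIGHTS, key=lambda c: success_probability(c, viewer))
print(best)  # the pricing-focused creative scores higher for this persona
```

The same `viewer` dictionary with marketing-leader features would flip the choice, which is the per-user behavior the bullet above describes.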

We use a customer's context to deliver the most relevant ad

With our BidIQ algorithm, we might learn (by seeing which pages a prospect has visited on our website) that they value case studies and testimonials more than simply the cheapest price on the market. In that case, the ads we show will focus on our customer case studies and testimonials rather than our pricing page, so the prospect can see how strongly our customers recommend our products.

We run multiple bandit algorithms at the campaign level to choose the best creative from our internal pool of creatives across campaigns, both from the same account and from other accounts. Eventually, one creative is picked to battle it out in the external ad exchange bid.


Bandit algorithm vs A/B testing

The two most important differentiators are context considerations and the fact that bandit algorithms learn while tests are underway, so you do not need to wait for a test to finish to draw conclusions, as you would with an A/B test:

Context Considerations

Our bandit algorithm selects the ad it thinks has the highest probability of success after considering all the context of the given scenario, including but not limited to: ad slot size, the ad exchange used, the type of page the ad is shown on, who the user is, how many times they’ve interacted with your brand before, how many times they’ve been shown the ad before, and how many pages they’ve viewed on your site. It then reviews all possible creative options before picking the one it thinks has the best chance of producing a positive outcome (clicks/conversions). A/B tests, on the other hand, do not take this context into account and instead split the audience into randomized groups. This leaves plenty of room for results to misrepresent the preferences of the individuals within those two groups.

For example, imagine you were a sales enablement company trying to decide between two creatives that highlighted different solutions: a call recorder (more relatable to a sales leader) or a content management system (more relatable to a marketing manager). Using our bandit algorithm, we could pull in additional context clues - who the viewer of the ad is, whether they have been to your site before, and the type of page the ad is being shown on - to figure out which of these two ads would be a better fit. Using only A/B testing, you might mistakenly show the ad that speaks to the sales leader to the marketing manager, just because it had a higher overall click-through rate. That would be a missed opportunity to highlight what matters to that individual viewer.

Immediate and continuous learning

The multi-armed bandit approach tests all options concurrently and takes advantage of what it has learned immediately, factoring in the context of each particular, specific opportunity. This differs from A/B tests, where you need to wait until the test concludes to draw conclusions and implement any findings.
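A minimal sketch of that difference, using made-up numbers: an A/B test holds its traffic split fixed until the experiment ends, while a bandit folds each new observation into its estimates immediately, so the running estimate is usable after every single impression:

```python
def update(estimate, count, outcome):
    """Fold one observed outcome (1 = conversion, 0 = no conversion)
    into a running conversion-rate estimate - an incremental mean,
    with no end-of-test analysis step required."""
    count += 1
    estimate += (outcome - estimate) / count
    return estimate, count

rate, n = 0.0, 0
for outcome in [0, 1, 0, 0, 1]:  # five impressions, two conversions
    rate, n = update(rate, n, outcome)
    # `rate` is already valid here and could steer the very next bid
print(round(rate, 2))  # 0.4
```

In an A/B test, those five observations would simply accumulate until a pre-set sample size was reached; here, each one can influence the very next creative selection.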


Conclusion

While A/B testing can be useful and has its applications, customers can rest assured knowing that there is a whole lot of data science going into our selection process to ensure we are choosing the best and most appropriate ad for each bidding opportunity. Our Machine Learning and Data Science team is constantly working on new ways to evolve and optimize our ad selection and ad bidding process to bring more value to our customers.
