
Take Time to Test Bad VAST Responses

You wake up in the morning and brush your teeth for the full two recommended minutes. You go to work and smile at your colleagues, even though some of them clearly don’t brush for the full two recommended minutes. You do everything right! But that doesn’t mean things will go your way. 

Today’s issue surfaced during a general investigation of low use-rate. One of our demand-side partners noted that certain ads were never getting impressions. This is something that can normally be seen in an ad server report that shows how well demand is monetized. 

In order to investigate this, we set up a test channel with frequent ad breaks and our demand partner set up a campaign that delivered these ads. With a single session viewing the test channel, we captured the VAST requests and responses. 

This issue had a simple root cause, and yet it wouldn’t have been discovered without the special setup. That’s because the ad (and the demand-side platform serving it) constituted a relatively small percentage of the overall fill. Still, the error was equivalent to throwing money away, because the bad response took up a slot that could have been filled with a good response. In this case, the malformed response looked like this:

<VAST version="2.0">
  <Ad id="489018">
    <AdSystem version="0.5">REDACTED</AdSystem>
    <Description />
    <VAST version="2.0">
      <Ad id="489018">
        <AdSystem version="0.5">REDACTED</AdSystem>
        <Description />
There were two opening <VAST> tags in the response. The response was returned from a major DSP through an intermediary SSP, and there was no straightforward way to contact the offender. (We’ve emailed them a link to this blog entry with a kind and gentle “that’s you, cough.”) Interestingly, because the VAST response was large, and because we took it for granted that it was correct, we didn’t discover the issue immediately. The takeaway: test your VAST responses. The IAB Tech Lab provides a good place to validate them.
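A few structural sanity checks are enough to catch a doubled <VAST> tag like the one above. The sketch below uses only Python’s standard library; the function name and the specific checks are our own illustration, not a substitute for a full validator like the IAB Tech Lab’s.

```python
import xml.etree.ElementTree as ET

def validate_vast(xml_text):
    """Run minimal structural checks on a VAST response.

    Returns a list of problems found; an empty list means the
    response passed these (basic) checks.
    """
    problems = []
    try:
        # A response with two opening <VAST> tags and no matching
        # closes is not well-formed XML, so parsing fails here.
        root = ET.fromstring(xml_text)
    except ET.ParseError as e:
        problems.append(f"not well-formed XML: {e}")
        return problems
    if root.tag != "VAST":
        problems.append(f"root element is <{root.tag}>, expected <VAST>")
    # A well-formed response can still wrongly nest a second <VAST>.
    nested = [el for el in root.iter("VAST") if el is not root]
    if nested:
        problems.append("nested <VAST> element inside the response")
    if root.find("Ad") is None:
        problems.append("no <Ad> element (empty or malformed response)")
    return problems
```

Running this on every captured response in a test session makes offenders like the one above visible immediately, rather than surfacing weeks later as an unexplained use-rate dip.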

Only one way to be good, but so many ways to be bad.

This example is extreme – plainly malformed, but relatively rare. More subtle, and more common, variations exist. In another case, we’ve found VAST responses in which the declared ad duration, shown in the <Duration> element below, doesn’t match the actual duration of the media file.

<Duration>00:00:30</Duration>  <!-- declared duration (value illustrative) -->
<MediaFile height="720" type="video/mp4" width="1280">
In this case, the ad splicer doesn’t know whether there’s an error – for example, the file might have been truncated. There isn’t a standard approach to this problem – we know of splicers that simply use the actual duration of the creative, and we know of splicers that discard the creative if its duration differs from the declared duration by more than one second. In the latter case, of course, the result is a missed placement and a lower use-rate.
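The one-second tolerance check described above is easy to replicate on your own responses. The sketch below (our own helper names; measuring the creative’s actual duration, e.g. by probing the file, is assumed to happen elsewhere) parses the VAST <Duration> format and flags mismatches.

```python
def parse_vast_duration(text):
    """Parse a VAST <Duration> value (HH:MM:SS or HH:MM:SS.mmm) into seconds."""
    hh, mm, ss = text.strip().split(":")
    return int(hh) * 3600 + int(mm) * 60 + float(ss)

def duration_mismatch(declared, actual_seconds, tolerance=1.0):
    """Return True if the declared duration differs from the measured
    creative duration by more than `tolerance` seconds – the threshold
    at which some splicers will discard the creative entirely."""
    return abs(parse_vast_duration(declared) - actual_seconds) > tolerance
```

Checking this at trafficking time, before a campaign goes live, avoids discovering the discarded creatives later as unexplained ad gaps.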

There are other ways that VAST can go wrong, even when the response itself is technically legal. People make assumptions when implementing specifications, and different people make different assumptions. These mismatches can be difficult to uncover, since a legal VAST response may fail to parse in an implementation that is incomplete. We’ve found a number of issues caused by parsers that make exactly these kinds of (erroneous) assumptions.

Bottom line: though it takes effort, it is always advisable to validate returned VAST whenever possible. As shown above, you can think you are doing everything right – but issues may still be out there, and our eagle-eyed experts at Wurl, on their quest for no more ad gaps, are here to help resolve them.

If you missed the earlier posts in this series about unfilled ad slots – or ad gaps – check out the posts describing what ad gaps are and their impact on CTV viewing.
