About a month ago, Google rolled out Smart Display campaigns, which are essentially fully automated Display campaigns. As with anything that gives me less campaign control, I was skeptical about the results this campaign type would yield. However, knowing that I could set the geo targeting, budget and Target CPA myself helped push me toward testing it out. Google also boasted that other advertisers who had tested Smart Display campaigns saw “an average 20% increase in conversions at the same CPA, compared to their other display campaigns,” so I was intrigued.
The Setup
For starters, I had to find an account that met the Smart campaign conversion-based eligibility criteria of at least 50 conversions on the Display Network, or at least 100 conversions on the Search Network, in the last 30 days. Once I chose a good account for testing, the setup was simple. Choose ‘Display network only’ as your campaign type, then select the radio button next to ‘Marketing objectives’; from there, you can choose any option in the ‘Drive Action’ section.
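If it helps to see that eligibility rule spelled out, here’s a quick illustrative sketch. The helper function and its inputs are hypothetical placeholders, not anything from Google’s interface or API; only the thresholds come from Google’s stated rule:

```python
# Illustrative only: the thresholds come from Google's stated eligibility rule;
# the function and its inputs are hypothetical, not part of any Google tool or API.
def is_smart_display_eligible(display_conversions_30d, search_conversions_30d):
    """True if the account meets the Smart Display conversion thresholds."""
    return display_conversions_30d >= 50 or search_conversions_30d >= 100

# e.g., an account with 12 Display and 140 Search conversions in the last 30 days qualifies
print(is_smart_display_eligible(12, 140))  # True
```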
Just below that section, you have the option to create a Smart campaign.
After that, you can establish additional campaign settings like the budget, geotargeting and Target CPA. Then you submit various ad headlines, descriptions and other ad elements that Google will mix and match for serving. There’s a great setup article over on the WordStream blog if you’d like to read more about that, but for the sake of this post I’m going to dive into my campaign test results.
Campaign Specs & Optimizations
Google recommends setting a daily budget of at least 10-15 times your Target CPA bid. However, I didn’t want spend to get away from me before the campaign was driving conversions at a good CPA, so I started the campaign with a $500 daily budget even though I set the Target CPA bid at $100. For the Headlines and Descriptions, I pulled from existing top-performing ad copy, although I couldn’t use all of the best copy: because Google can pair any headline with any description, I had to watch out for odd repetition between the headlines and descriptions. For the images, I pulled two top-performing Facebook images (since the sizes are the same) with generic messaging to, again, allow for good overall ad messaging without odd repetition.
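To make that budget math concrete, here’s a quick sketch of Google’s guideline versus what I actually launched with. The dollar figures are from this test; the variable names are just for illustration:

```python
# Google's guideline: daily budget of at least 10-15x the Target CPA bid.
target_cpa = 100                    # my Target CPA bid, in dollars
recommended_low = 10 * target_cpa   # $1,000/day
recommended_high = 15 * target_cpa  # $1,500/day
starting_budget = 500               # what I actually launched with (only 5x the target)

print(f"Guideline: ${recommended_low}-${recommended_high}/day; my starting budget: ${starting_budget}/day")
```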
Three Days Later: First Check-in
Performance was good enough ($147 CPA) that I went ahead and doubled the daily budget to $1k, which put it at the low end of Google’s recommended “10-15x the Target CPA bid” range (my target was $100). Additionally, the campaign was blowing through its daily budget early in the day, so I wanted to give it more room to see how performance might change when it could reach later-in-the-day users. I also checked the Asset Details report, but not enough data had accrued yet and all elements still had the “Learning” performance status. At that point, the only other change I made in addition to doubling the daily budget was launching three logos the client provided (none of which had been included up to that point).
Another Three Days Later: Second Check-in
I checked again and CPA had decreased to $123 despite the increase in budget. I checked the Asset Details report again and enough data had accrued for the system to evaluate Headline performance. One headline had been given ‘Best’ status and the other was given ‘Low’ status. I decided to add two more headlines to the mix to see how the ‘Low’ performer would shake out when more options were running. This was the only optimization I made.
Four Days Later: Third Check-in
I checked back and CPA had decreased to $94. Pretty impressive. The Asset Details report still said ‘Learning’ for the images and logos, but winners had been declared for the headlines and body copy. Interestingly, the previously declared ‘Low’-performing headline was still at the bottom of the group, so I went ahead and removed it. I also removed the ‘Low’-performing description and replaced it with a variation of the ‘Best’-performing existing description.
The Results
Here is the week-over-week breakdown in performance (keeping in mind that the campaign started on 5/12, so that first week is incomplete, and the current week is also incomplete):
As you can see, there has been a gradual decrease in CPA and increase in conversion rate as optimizations have been made and the system has had additional learning time. And, as noted in the Optimizations section above, I saw improvement in the campaign’s performance after each round of optimizations. Looking at the past 7 days’ data for the whole account, the Smart campaign is beating 70% of the other converting campaigns, both Search and Display, on CPA. Of the three campaigns with a lower CPA than the Smart campaign over the past 7 days, one is the Branded campaign and another is consistently the top-performing campaign in this account.
Closing Thoughts
Overall, I’m very surprised and happy with the performance of this campaign so far, and I’m eager to continue reviewing the Asset Details report and testing out new ad elements. Even though there are very few manual optimizations that can be made to these campaigns (I haven’t even found a way to exclude under-performing placements, which I’m definitely not thrilled about), overall campaign performance has been impressive. We are still working with the client to determine lead quality for this campaign, but as long as that piece looks good, we will likely keep this campaign going.
Have you tested out a Smart Display campaign yet? What were your results and thoughts? Let us know in the comments below!