Create 2014 PPC Strategy Using 2013 Aggregate Data | Pt 2: Ad Copy

A couple weeks ago, I shared my process for using aggregate data to help set strategy moving forward. That post focused on keyword match types and how they should interact. Today, I’d like to talk about analyzing ad copy with the same approach. With keyword match types, there are four classifications for you to analyze, but ad copy allows you to test just about whatever you want. The goal here is to use aggregate data to identify performance patterns you may not have noticed during the year.

Determining Your Test

Curious if your call to action performed better in Description Line 1 or Description Line 2? What about features vs. benefits? Maybe you don’t know whether symbols are a good or bad thing in your ad copy. Any of these might have been the primary factor during a test this year, or they could have been a secondary change not accountable for winning or losing the test. For the most part, I find this method helpful for identifying factors that weren’t my test’s main focus, but on occasion it’s advantageous to go back and double-check my original findings at the aggregate level. (Note: this has also been super helpful when taking over an account without a clear ad testing strategy. You can find what did and didn’t work for them and determine your next steps from there.)

Getting the Data

Similar to the match types review, the data I’ll be using is best found by using labels. Granted, it’s a longer process than identifying match types, but I’ve found it to be worth the time. You’ll have to use filters and your knowledge of the ads in the account to come up with A/B tests of your own. Once you’ve got everything labeled, you can hop on over to the Dimensions tab and segment by Labels – Ad and get your data. Here’s what I came up with as an example from one of our accounts.
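If you’d rather sum up the labeled data outside the interface, you can export the report and roll it up per label with a few lines of Python. This is a minimal sketch; the column names and the numbers are illustrative, not from the account above, and your actual export’s headers may differ.

```python
import csv
import io
from collections import defaultdict

# Illustrative stand-in for an exported "Labels - Ad" report CSV.
# Replace with open("labels_report.csv") for a real export.
report = io.StringIO("""Label,Impressions,Clicks,Conversions
CTA Line 1,60000,1200,48
CTA Line 2,20000,500,24
CTA Line 1,60000,1200,48
CTA Line 2,25000,625,30
""")

# Sum impressions, clicks, and conversions for each label.
totals = defaultdict(lambda: {"Impressions": 0, "Clicks": 0, "Conversions": 0})
for row in csv.DictReader(report):
    for col in ("Impressions", "Clicks", "Conversions"):
        totals[row["Label"]][col] += int(row[col])

for label, t in totals.items():
    print(label, t)
```

From there, each label’s totals are ready for whatever efficiency comparison you prefer.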

The first analyzes whether our ads perform better with the CTA in Line 1 or Line 2.

[Screenshot: CTA in Line 1 vs. Line 2 performance data]

The second compares capitalizing all words vs. lowercasing everything except the beginning of the ad.

[Screenshot: all words capitalized vs. lowercase performance data]

The last differentiates between ads that asked a question and those that didn’t.

[Screenshot: question vs. non-question ad performance data]

Now that you’ve got your data, we can begin to take a stab at figuring out what it means.

Analyzing the Data

One of the big things to note about using this type of aggregate data: it’s not a true A/B test. None of these tests have equal, or even semi-close, amounts of traffic for each variable. The second test is the worst offender. The only way to get any usable data from this is to use efficiency measurements rather than volume. Choose your favorite method and give it a whirl. Some only look at CTR, others at Conversion Rate. Personally, I use a mix: I multiply CTR x CVR to get a number for each variable. The variable with the highest number is the most efficient from impression to conversion. (Note: it’s pretty much the same as Conv/Impressions.)
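The arithmetic behind that metric is simple enough to sketch. The numbers here are made up for illustration, but the code shows why CTR x CVR works out to the same thing as conversions divided by impressions: the clicks cancel.

```python
# Illustrative variant totals (not the account data from the screenshots).
variants = {
    "CTA Line 1": {"impressions": 120_000, "clicks": 2_400, "conversions": 96},
    "CTA Line 2": {"impressions": 45_000, "clicks": 1_125, "conversions": 54},
}

scores = {}
for name, v in variants.items():
    ctr = v["clicks"] / v["impressions"]        # click-through rate
    cvr = v["conversions"] / v["clicks"]        # conversion rate
    scores[name] = ctr * cvr                    # equals conversions / impressions
    print(f"{name}: CTR={ctr:.2%}  CVR={cvr:.2%}  efficiency={scores[name]:.4%}")

# The variant with the highest score is the most efficient
# from impression to conversion.
winner = max(scores, key=scores.get)
print("Winner:", winner)
```

With these made-up numbers, CTA Line 2 wins on efficiency despite having far fewer impressions, which is exactly the kind of signal raw volume would hide.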

So for my tests above, this is my efficiency breakdown:

[Screenshot: efficiency breakdown for all three tests]

CTA Line 2, Capitalized All Words, and Asks a Question are our winners. It’s probably a safe bet that writing an ad that asks a question, capitalizes all words, and puts the CTA in Line 2 would generate good performance. And you can do that if you want. It’s also worth noting that two of these tests are reasonably close. Both the CTA and Question tests are within 7 points of each other and have very similar CPAs. To me, this says they warrant a head-to-head test sometime this year to determine a true winner. The lowercase vs. capitalized test, on the other hand, is pretty far apart and would most likely only hurt performance if set up as a standalone test. I’ll most likely be writing all of my ads with Capitalized First Letters This Year.

The biggest, and most important, thing to keep in mind when doing these tests is that they’re designed to help you generate new tests, not to scrap everything currently in your account and change it to these variables. Take a look in your account and see which tests you can come up with. Get creative. If you can put a label on it, you can compare it at the aggregate level. (And if you can put a label on it in Google, you can roll out those changes to Bing as well!)