More on Google’s Expanded Broad Match – Guest Blogger Matt Van Wagner

My friend and fellow PPC expert Matt Van Wagner of FindMeFaster posted this excellent observation on the Churchills’ blog SEM Funhouse – I’ve repeated it here, with my comments below it:

Rather than assigning evil intent or profiteering as motivations to Google, perhaps we could also consider that they are running a test of behavioral ad targeting on direct search results?

The premise behind behavioral ad targeting is that you get to present ads that reflect what’s on the person’s mind even when they are doing something else. This is how the behavioral ad networks operate. Look at a car site one day, and all of a sudden every website you go to seems to have an ad about cars.

While it may seem unlikely that this type of behavioral targeting would translate well into regular paid search results on a search engine, how would you know unless you tested it? Seems like a good thing to test, if you believe in behavioral targeting in its current iterations.

My opinion is that behavioral ads mixed in with keyword-relevant ads would be a long shot to perform well, but how can you know unless it gets tested? As outrageous as an idea may seem, it is quite possible it could turn out to be an effective generator of broad-matched traffic.

Just a theory, at any rate.

I agree with Matt that Google probably views the “1-2 Punch” as a test of a behavioral targeting algorithm. And by some measures it’s working – run a Search Query report on one of your campaigns, and you’ll probably notice that the click-through rates (CTRs) on even the worst matches are very high.

But I don’t think that’s necessarily a good thing, since we’ve found that even though CTRs are high for those words, conversion rates – the real enchilada – are very low. Which may mean that searchers clicking on ads displayed as a result of Expanded Broad Match are impulse-clicking but not converting.
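To see why high CTRs alone can be misleading, here’s a toy calculation with entirely hypothetical numbers (not figures from any real campaign): a loosely matched query can double your CTR and still multiply your cost per conversion tenfold.

```python
def cost_per_conversion(impressions, ctr, cpc, conversion_rate):
    """Cost per conversion given click-through and conversion rates."""
    clicks = impressions * ctr
    cost = clicks * cpc
    conversions = clicks * conversion_rate
    return cost / conversions if conversions else float("inf")

# Tightly matched keyword: modest CTR, healthy conversion rate.
exact = cost_per_conversion(impressions=10_000, ctr=0.03,
                            cpc=0.50, conversion_rate=0.05)

# Expanded-broad-match query: higher CTR (impulse clicks), few conversions.
expanded = cost_per_conversion(impressions=10_000, ctr=0.06,
                               cpc=0.50, conversion_rate=0.005)

print(f"exact match:    ${exact:.2f} per conversion")     # $10.00
print(f"expanded match: ${expanded:.2f} per conversion")  # $100.00
```

The point of the sketch: the advertiser pays per click, so impulse clicks that never convert show up directly in the cost-per-conversion line, not in the CTR column.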

More to Matt’s point: conducting tests that cause advertisers to lose money is not the right way to do software quality assurance. Well-understood methods exist to test software using automated means that simulate the actions of real people.
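As a minimal illustration of what such automated testing might look like – this is a sketch with a hypothetical `match_ads` stand-in, not Google’s actual matching logic or QA process – you can replay a corpus of simulated searcher queries against the matching code and gate the release on how many matches a reviewer has flagged as unacceptable:

```python
def match_ads(query, keywords):
    """Naive broad match: a keyword matches if any of its terms
    appears as a whole word in the query. Hypothetical stand-in
    for the matching logic under test."""
    tokens = query.split()
    return [kw for kw in keywords if any(t in tokens for t in kw.split())]

def run_release_gate(test_queries, keywords, max_bad_matches=0):
    """Replay simulated queries; count matches outside the set a
    human reviewer marked acceptable. Pass only if within budget."""
    bad = 0
    for query, acceptable in test_queries:
        for kw in match_ads(query, keywords):
            if kw not in acceptable:
                bad += 1
    return bad <= max_bad_matches

keywords = ["used cars", "car insurance"]
test_queries = [
    ("buy used cars boston", {"used cars"}),
    ("carpet cleaning", set()),  # should match nothing
]

print("release ok" if run_release_gate(test_queries, keywords) else "blocked")
```

A gate like this runs before real advertisers ever see the change; the EBM rollout, by contrast, effectively used live campaigns as the test corpus.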

Matt and I have discussed this and have come to the conclusion that the Expanded Broad Match (EBM) debacle might be one of many signs that Google’s software development and release methodology may be causing them (and us) problems.

I’ll blog more extensively on this later, but in a nutshell: in the “old” days (say, pre-Web 2.0), software developers issued upgrades and updates infrequently, and only after thoroughly wringing out the code with extensive testing. Google seems to have pioneered a new methodology: develop a relatively small feature set (like the broken graphs in the AdWords reporting module) and release it quickly with little fanfare. Software development and deployment are thus greatly accelerated compared to the old model. The problem is that the users (advertisers) end up doing the QA work, which costs us valuable time and, in the case of especially impactful algorithm changes like EBM, money.