7 Deadly Sins of Ad Optimization
DataPop, 2013

The 7 Deadly Sins of Ad Optimization by DataPop
1. Lack of or Poorly Developed Hypotheses
Every test must begin with a clear, specific, and informed hypothesis. Often there is no hypothesis at all, because it is easy enough to write an ad, set it live, and see how it does, so a forward-thinking hypothesis is overlooked. Other times a hypothesis is skipped because it seems difficult or pointless to prove one right. However, the purpose of a test is not to prove a hypothesis right, but rather to use it as a framework for executing the test and a reference point for understanding the results.
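One lightweight way to enforce this discipline is to record each hypothesis as a structured object before the test goes live. A minimal sketch follows; the field names and the example values are illustrative assumptions, not a format prescribed by the article.

```python
from dataclasses import dataclass

@dataclass
class TestHypothesis:
    """A structured hypothesis: what changes, what we expect, and why."""
    variable: str            # the single ad element being changed
    change: str              # the specific change made to it
    metric: str              # the metric the change should move
    expected_direction: str  # "up" or "down"
    rationale: str           # why we believe the change will move the metric

# Hypothetical example for a test on a headline change.
h = TestHypothesis(
    variable="headline",
    change="add the product's price",
    metric="conversion_rate",
    expected_direction="up",
    rationale="price-qualified clicks should convert more often",
)
```

When the test ends, the recorded metric and expected direction become the reference point for reading the results, whether the hypothesis was proven right or not.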
2. Speaking to Multiple Intents in Test Groups
Many tests are conducted on ad groups that represent multiple user intents, making it incredibly difficult to interpret the results in a way that could drive long-term value. There are two ways this happens. First, an advertiser includes keywords like “red shoes” and “red sneakers” in the same group, so the winning ad tends to be the least common denominator, not the best ad for each of those intents. Second, an advertiser runs the test on a broad match term with no match modifiers or negative keywords. The winning ad is typically non-specific, and the results of such a test are largely unrepeatable.
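The fix is to split keywords into one test group per intent before any ad test runs. A small sketch of that grouping step, with an invented keyword-to-intent mapping:

```python
# Keywords mapped to the user intent they represent (mapping is illustrative).
keywords = {
    "red shoes": "shoes",
    "buy red shoes": "shoes",
    "red sneakers": "sneakers",
    "red sneakers sale": "sneakers",
}

def group_by_intent(keyword_intents):
    """Split keywords into one test group per intent, so each winning ad
    speaks to a single intent instead of the least common denominator."""
    groups = {}
    for keyword, intent in keyword_intents.items():
        groups.setdefault(intent, []).append(keyword)
    return groups

groups = group_by_intent(keywords)
# Each group now gets its own test, so results are specific and repeatable.
```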
3. Overlooking Variations in Traffic & Devices
When it comes to traffic considerations, many advertisers worry about having enough traffic but pay little attention to traffic fluctuations and variations. Factors such as seasonality, with its traffic dips and spikes, can significantly affect the results of a test and the insights drawn from it. Device considerations are equally crucial in ad testing. Because searchers respond differently to ads on desktop than on mobile, neither ad copy nor test results can be indiscriminately duplicated and applied across devices.
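A small numeric sketch (the click and impression counts are invented) shows why pooled cross-device numbers can mislead: an ad can win on every device yet lose on the blended totals simply because its traffic mix skews toward the device where everyone performs better.

```python
# (clicks, impressions) per device; all counts are illustrative.
ad_a = {"desktop": (100, 1000), "mobile": (2, 100)}
ad_b = {"desktop": (11, 100),   "mobile": (30, 1000)}

def ctr(clicks, impressions):
    """Click-through rate for one device segment."""
    return clicks / impressions

def blended_ctr(ad):
    """Click-through rate with all devices pooled together."""
    clicks = sum(c for c, _ in ad.values())
    impressions = sum(i for _, i in ad.values())
    return clicks / impressions

# Ad B wins on each device taken separately...
assert ctr(*ad_b["desktop"]) > ctr(*ad_a["desktop"])  # 11% > 10%
assert ctr(*ad_b["mobile"]) > ctr(*ad_a["mobile"])    # 3% > 2%

# ...but Ad A wins on the pooled numbers, only because most of its
# traffic came from desktop, where CTRs are higher for both ads.
assert blended_ctr(ad_a) > blended_ctr(ad_b)
```

Segmenting results by device before comparing ads avoids this trap.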
4. Excessive Variation Between Test and Control Ads
One of the caveats of creating a test ad that differs excessively from the control is that, whether the test ad wins or loses, it is nearly impossible to determine which factor was responsible for the performance swings. This is why it is crucial to be as specific as possible when creating a hypothesis and structuring a test around it. Experimental ads should be created so that they isolate the variables you intend to test, allowing the results to be attributed to predefined changes.
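One way to guard against over-varied test ads is to generate each variant from the control programmatically, changing exactly one element at a time. A sketch, with an invented ad structure:

```python
def make_variant(control, **changes):
    """Build a test ad from a control ad, changing exactly one element
    so any performance swing can be attributed to that change."""
    if len(changes) != 1:
        raise ValueError("change exactly one element per test ad")
    variant = dict(control)
    variant.update(changes)
    return variant

control = {
    "headline": "Red Sneakers On Sale",
    "description": "Free shipping on all orders.",
    "cta": "Shop Now",
}

variant = make_variant(control, cta="Buy Today")
# Exactly one field differs from the control.
assert sum(variant[k] != control[k] for k in control) == 1
```

Attempting to change two elements at once raises an error, which is the point: the structure of the test enforces the specificity of the hypothesis.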
5. Declaring Tests Over Too Soon
Testing can either be exciting or terrifying, depending on what the initial results look like. As the test begins to accumulate data, many make premature assumptions about which ad is the winner and which is the loser. A common example is calling a test before it has run for at least one typical sales cycle, which does not allow enough time for an accurate conversion rate to emerge. Test results take time to normalize: they gain accuracy as they accumulate more data. Declaring a test over before results normalize and before they reach statistical significance means decisions are made on erroneous, incomplete data.
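The effect of sample size on significance can be sketched with a standard pooled two-proportion z-test, using only the standard library. The conversion counts are invented: the same 5% vs 8% split that looks like a winner early on is not statistically significant until far more data has accumulated.

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided tail probability via the error function: 2 * P(Z > |z|).
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Same 5% vs 8% conversion rates, read early vs after more data accumulates.
early = two_proportion_p_value(5, 100, 8, 100)         # ~0.39, not significant
late = two_proportion_p_value(500, 10000, 800, 10000)  # far below 0.05

assert early > 0.05 and late < 0.05
```

Calling the early test a win would be a decision made on noise; the identical rates only become trustworthy once the sample is large enough.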
6. Applying Results Blindly Across Campaigns
The success of a specific element in one ad group, a ‘Free download’ CTA, for instance, is by no means an indication that it would yield the same results if applied across all other ad groups and campaigns, particularly those representing different intents. The only way to determine if leveraging results across other ad groups and campaigns would yield the same success is by testing before applying.
7. Too Much Focus on Brand Terms
Every marketer must own their brand in search, but it is a problem if the majority of ad testing is done on brand terms and the results are then assumed to apply to non-brand terms. A consumer who seeks out your brand specifically should be messaged much differently than someone who is less familiar with your brand. Combining test results from brand and non-brand searches sets you up to miss the mark with a large portion of your ad creative.
