“I think I need to see a testing plan. What can we do with the button?” he said.
What? When did C-level executives start caring about the details of landing pages? I wondered.
“Sure thing,” I replied.
That was an exchange I had recently at a Fortune 50 client meeting. Agencies are always required to prove value, so the request itself was no surprise; a C-level executive wanting to get into the weeds of A/B testing, however, was a new one. That’s where digital is at the moment. The digital mystique has started to fade, and parts of the mystery have become normal business lexicon. Testing used to be relegated purely to the realm of a small handful of developers, then to web analytics folks, then to business analysts; it is now often planned and executed by general marketers or, gasp, finance people. As Google Analytics has made complex web analysis seem easy, planning tests and analysing results has become commonplace and, really, expected. Thinking that testing is easy may be a mistake, but what is harder to manage is how commonly the nuances of A/B testing are dismissed. Like so much of life, though, “easy” is subjective.
I don’t mean to dissuade anyone from running A/B tests by implying that they are difficult. In fact, I strongly encourage testing. Having made the transition in my career from web analytics guy to business analyst to general marketer to strategist, I have had time along the way to develop four guidelines for running an A/B test. If you are already following these guidelines, you are ahead of the competition. If you aren’t using these guidelines, I am amazed at your…creativity.
I love that Google Analytics has made web analytics accessible and put it at the forefront of marketers’ minds. Its deceptive simplicity and excellent price point have brought web analytics into casual marketing meetings. The issue with casual usage is that interpreting analytics is still complicated, even when those analytics are packaged well. Web analytics offer a trend, not an absolute; Webtrends even has “trend” in its name. To plan a great test, and to choose relevant page elements to test, you need a fundamental understanding of your analytics package. Knowing which pages are and aren’t converting well, in the context of traffic sources and mediums, time on page, and bounce rates, and, more importantly, understanding the context of those numbers, is integral to a good test. Having a junior analyst or general marketer log in to your analytics and try to interpret your data is a recipe for A/B tests that repeatedly end in no significant difference. Yes, sometimes a test genuinely ends in a push, with both pieces of creative performing in a dead heat, but far more often a push is the result of poor planning based on faulty data. The data you currently have is a roadmap; it should give you some ideas about which tests make sense. Is your traffic coming from social media? Images and large calls to action are the norm. Is your visitor coming in from organic search on a precise query? Perhaps detailed product information, a whitepaper, or technical docs will get the conversion. Is an e-commerce value shopper coming in from product feeds? They need to see a price comparison. Of course you don’t want to alienate audience segments, but you certainly do want to serve your biggest audience base to increase conversions. If you don’t know what your data means, you simply can’t start the journey.
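If it helps to make that concrete, here is a minimal sketch of the kind of segmentation worth doing before planning a test. It assumes a hypothetical CSV export of landing page traffic (the file name and the "source", "medium", "sessions", and "conversions" columns are illustrative; adjust them to whatever your analytics package actually exports).

```python
# A minimal sketch: conversion rate by traffic source and medium from a
# hypothetical analytics export. Column and file names are assumptions.
import pandas as pd

df = pd.read_csv("landing_page_traffic.csv")

summary = (
    df.groupby(["source", "medium"], as_index=False)[["sessions", "conversions"]]
      .sum()
)
summary["conversion_rate"] = summary["conversions"] / summary["sessions"]

# Look at your biggest audiences first: they are the segments a test can actually move.
print(summary.sort_values("sessions", ascending=False).head(10))
```

Even a table this simple makes it obvious which audience you would be optimising for, and which pages deserve a test at all.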
When you have gone through planning and are setting up your test, you shouldn’t start with an A/B test; you should start with an A/A test. If you have never used your testing software before, you really should start with an A/A test. That is, you first load the same creative into both variants rather than two different pieces of creative, essentially testing your software itself. There are three great reasons to start with an A/A test:
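One thing an A/A test makes visible is how much two identical variants can differ by chance alone. The sketch below simulates that; the conversion rate and traffic figures are made up for illustration, and the naive z-test threshold stands in for whatever your testing tool reports.

```python
# A minimal simulation sketch: two identical variants will still "differ" by chance.
# All figures are illustrative assumptions, not real traffic.
import numpy as np

rng = np.random.default_rng(42)
true_rate = 0.05              # assumed underlying conversion rate (same for both variants)
visitors_per_variant = 2000
trials = 1000

false_winners = 0
for _ in range(trials):
    conv_a = rng.binomial(visitors_per_variant, true_rate)
    conv_b = rng.binomial(visitors_per_variant, true_rate)
    rate_a = conv_a / visitors_per_variant
    rate_b = conv_b / visitors_per_variant
    # Naive two-proportion z-test; |z| > 1.96 is roughly p < 0.05
    pooled = (conv_a + conv_b) / (2 * visitors_per_variant)
    se = np.sqrt(2 * pooled * (1 - pooled) / visitors_per_variant)
    if se > 0 and abs(rate_a - rate_b) / se > 1.96:
        false_winners += 1

print(f"'Winners' declared in {false_winners} of {trials} identical A/A runs (~5% expected)")
```

If your tool declares winners on identical creative far more often than that, something in the setup, the tagging, or the traffic split needs fixing before any A/B result can be trusted.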
Your tests should achieve statistical significance.
Pure and simple, the test should run until you understand the meaning of the data and have a clear winner. That is simple to say but surprisingly hard to do in practice, because most people are risk-averse. People are especially risk-averse about something they perceive as a bottom-line driver of both their company’s revenue and their own compensation: lead volume. Risk aversion manifests as someone feeling the need to claim a winner, to stop the bleeding of a bad test (even if the loser is the original element that has been in play for years), or to rush into the next test…you get the picture. Resist people demanding the easy answer, and get the correct answer. Achieve statistical significance, and help yourself by framing your test limits in conversions rather than time. If you are “losing money” by running a test, then something more is at work than a simple landing page optimisation test. That something might be reality, but it is most likely perception. Be sure to advocate, educate, and carry on.
What counts as statistically significant will depend largely on your traffic, medium, and test scenario. That said, there are a number of A/B split-test calculators that will take the statistical onus off your shoulders. Use them, and let the test run until you have a significant result.
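Under the hood, most of those calculators are doing something like the two-proportion z-test sketched below. The conversion and visitor counts in the example are purely illustrative.

```python
# A minimal sketch of what an A/B split-test calculator computes:
# a two-proportion z-test on conversion counts. Figures are illustrative.
from math import erfc, sqrt

def ab_significance(conv_a, visitors_a, conv_b, visitors_b):
    """Return the rates and a two-sided p-value for the difference between them."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p_value = erfc(abs(z) / sqrt(2))   # two-sided p-value via the normal approximation
    return rate_a, rate_b, p_value

rate_a, rate_b, p = ab_significance(conv_a=120, visitors_a=2400, conv_b=156, visitors_b=2400)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p-value: {p:.3f}")
# Only call a winner when the p-value clears your threshold (commonly 0.05),
# and note that the test is framed in conversions, not calendar days.
```

The exact method matters less than the discipline: decide the threshold before the test starts, and stop on conversions, not on impatience.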
Human nature dictates that you let up on the gas when you cross the finish line. How much longer does it take to clean the last room of the house than every room before it? How many times have you remembered to bring home the stamps but taken a week to put them on an envelope? How often have you run a test, reached a conclusion, planned the next test…and then sat on making the changes? For your digital marketing to have teeth, you need to make changes. Oftentimes those changes are hard to make because they reach across more areas than you originally expected: compliance or legal, promotions, brand…it can be frustrating to act on the data you have gathered and institute permanent changes. Yet that is why you test in the first place. There is no shame in a piece of creative, an offer, or a form failing. There is only shame in not making the necessary changes and iterating on your marketing. To streamline the process, I would suggest organising your changes into four categories:
There you have it: four guidelines for running an A/B test. Each has been crafted under real-world, practical conditions, and each will improve your testing experience. The guidelines are simple but executable: understand your current data, run an A/A test, know when to end a test, and act on what your test tells you. By following these four guidelines, your tests will run smoother and you will have better results to show off to the envy of your friends and colleagues. Until, that is, you decide to open that Organic Taco Stand.
Interested in finding out more? Contact DAC today!