Four Tips for Running an A/B Test

September 04, 2013
Guest Contributor
[Image: a red apple and an orange balanced on a seesaw]

“I think I need to see a testing plan.  What can we do with the button?” he said.

What? When did C-level executives start caring about the details of landing pages, I wondered.

“Sure thing,” I replied.

That was an interaction I had recently at a Fortune 50 client meeting. Agencies are always required to prove value, so the point of the request was no surprise. A C-level executive wanting to get into the weeds of A/B testing, however, was a new one. That is where digital is at the moment, though. The digital mystique has started to fade, and parts of the mystery are becoming normal business lexicon. Testing used to be relegated purely to the realm of a small handful of developers; it then passed to web analytics folks, then to business analysts, and is now often planned and executed by general marketers or, gasp, finance people. As Google Analytics has made complex web analysis seem easy, planning tests and analyzing their results have become commonplace and, really, expected. Thinking that testing is easy may be a mistake, but what is harder to manage is how commonly the nuances of A/B testing are dismissed. Like so much of life, though, "easy" is subjective.

I don’t mean to dissuade anyone from running A/B tests by implying that it is difficult. In fact, I strongly encourage testing. Having made the transition in my career from web analytics guy to business analyst to general marketer to strategist, I have had time along the way to develop four guidelines for running an A/B test. If you are already following these guidelines, you are ahead of the competition. If you aren’t, I am amazed at your…creativity.

1. Understand your current data.

I love that Google Analytics has made web analytics accessible and put it at the forefront of marketers’ minds. Its deceptive simplicity and excellent price point have brought web analytics into casual marketing meetings. The issue with casual usage is that interpreting analytics is still complicated, even when the data is packaged well. Web analytics offer a trend, not an absolute; Webtrends even has “trend” in its name. Planning a great test, and choosing relevant page elements to test, requires a fundamental understanding of your analytics package. Knowing which pages are and are not converting well, in context with traffic sources and mediums, time on page, and bounce rates, and more importantly understanding the context of those numbers, is integral to performing a good test. Having a junior analyst or general marketer log in to your analytics and try to interpret your data is a recipe for A/B tests that repeatedly end in no significant difference. Yes, sometimes a test genuinely ends in a push, with both pieces of creative performing in a dead heat, but far more often a push is the result of poor planning based on faulty data.

The data you currently have is a roadmap; it should give you ideas about which tests make sense. Is your traffic coming from social media? Images and large calls to action are the norm. Is your visitor arriving from a precise query via SEO? Perhaps detailed product information, or the spiff of a whitepaper or technical docs, might get the conversion. An ecommerce value shopper coming in from product feeds needs to see a price comparison, and so on. You don’t want to alienate audience segments, but you certainly do want to serve your biggest audience base to increase conversions. If you don’t know what your data means, you simply can’t start the journey.
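
As a minimal sketch of that segment-level view, here is what tallying conversion rate by traffic source and medium might look like in Python. The file name, column names, and figures are all hypothetical stand-ins for whatever export your analytics package actually produces, and the sketch assumes every segment has at least one session.

    import csv
    from collections import defaultdict

    # Hypothetical export: rows of (source, medium, sessions, conversions),
    # pulled from your analytics package as a CSV.
    totals = defaultdict(lambda: [0, 0])

    with open("landing_page_traffic.csv") as f:  # hypothetical file name
        for row in csv.DictReader(f):
            key = (row["source"], row["medium"])
            totals[key][0] += int(row["sessions"])
            totals[key][1] += int(row["conversions"])

    # Rank segments by conversion rate so test ideas target your real
    # audiences, not the page-wide average.
    for (source, medium), (sessions, conversions) in sorted(
            totals.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True):
        print(f"{source}/{medium}: {conversions / sessions:.2%} "
              f"over {sessions} sessions")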

2. Multivariate test? A/B test? No, A/A test.

When you have gone through planning and are setting up your test, you shouldn’t start with an A/B test. You should start with an A/A test, especially if you have never used your testing software before. That is, load the same creative into both slots rather than two different pieces of creative, and essentially test your software (see the sketch after this list). There are three great reasons to start with an A/A test:

  • To understand whether the software works. As a user, do you understand how to start and stop the test? Are there options or features that could go wrong (a scheduler, etc.) that you need to be mindful of? If you are running paid traffic to a landing page that loads properly on only every other test creative, you are wasting a ton of money and collecting bad data.
  • To see sample data. All those cautions around interpretation don’t stop at web analytics; they apply to optimization data too. It isn’t enough to produce a quick answer; you need enough mastery of the data to handle any follow-up questions or…wait for it…to explain the data simply to someone who logs in randomly and misinterprets it once in a while. When was the last time you corrected a senior marketing person’s misreading of the data, only to cause more confusion because you forgot to explain something “basic”? If you know the data, that won’t happen.
  • To know the tool. Is the tool scalable? For example, when you are asked to set up a multivariate test, do you know whether there are limits on the number of variants that can run simultaneously? Are there limits on the number of pages that can run concurrently? Knowing the tool also helps you troubleshoot, making sure the tool is coded properly and that all your hard work isn’t wasted.
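
To make the point about sample data concrete, here is a minimal simulation of repeated A/A tests, assuming a 5% baseline conversion rate and a naive 95%-confidence check; all figures are hypothetical. Even with identical creative, a fraction of tests will declare a “winner,” which is exactly the kind of behavior an A/A run teaches you to recognize in your tool’s reporting.

    import random

    # Hypothetical parameters for illustration only.
    TRUE_RATE = 0.05   # both arms share the same real conversion rate
    VISITORS = 2000    # visitors per arm
    TRIALS = 1000      # number of simulated A/A tests

    def z_stat(conv_a, conv_b, n):
        """Two-proportion z-statistic for equal-sized arms."""
        p_pool = (conv_a + conv_b) / (2 * n)
        se = (p_pool * (1 - p_pool) * (2 / n)) ** 0.5
        return abs(conv_a - conv_b) / (n * se)

    false_positives = 0
    for _ in range(TRIALS):
        a = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
        b = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
        if z_stat(a, b, VISITORS) > 1.96:  # ~95% confidence threshold
            false_positives += 1

    print(f"A/A tests declaring a 'winner': {false_positives / TRIALS:.1%}")
    # Expect roughly 5% -- identical creative still "wins" sometimes,
    # which is exactly why you sanity-check the tool before paying for traffic.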

3. What triggers the end of the test?

Your tests should achieve statistical significance.

Pure and simple, the test should run until you understand the meaning of the data and have a clear winner. While that is simple to say, it is surprisingly hard to do in practice, because most people are risk averse. People are especially risk averse about anything they perceive as a bottom-line driver of both their company’s revenue and their own compensation: lead volume. Risk aversion might show up as someone feeling the need to claim a winner, stopping the bleeding of a bad test (even if the loser is the original element that has been in play for years), or rushing to start a new test…you get the picture. Resist people demanding the easy answer, and get the correct answer. Achieve statistical significance, and help yourself by setting your test limits in conversions rather than time. If you are “losing money” by running a test, then something more is at work than a simple landing page optimization test. That something might be reality, but it is most likely perception. Advocate, educate, and carry on.

What constitutes statistical significance will depend largely on your traffic, medium, and test scenario. That said, there are a number of A/B split test calculators that will take the statistical heavy lifting off your shoulders. Use them, and let the test run until you have a significant result.
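
As a minimal sketch of what those calculators compute, assuming the standard two-sided, two-proportion z-test at 95% confidence and 80% power (common calculator defaults), here is the sample-size arithmetic; the baseline rate and lift below are hypothetical.

    from math import sqrt, ceil

    def sample_size_per_arm(p_base, rel_lift):
        """Visitors needed per variant to detect a relative lift in
        conversion rate (two-sided z-test, alpha=0.05, power=0.80)."""
        z_alpha, z_beta = 1.96, 0.84          # approximate critical values
        p1, p2 = p_base, p_base * (1 + rel_lift)
        p_bar = (p1 + p2) / 2
        n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
              + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p1 - p2) ** 2)
        return ceil(n)

    # Hypothetical: 3% baseline conversion, detecting a 20% relative lift.
    print(sample_size_per_arm(0.03, 0.20))    # roughly 14,000 visitors per arm

Note how fast the required sample grows as the lift you hope to detect shrinks; that, not the calendar, is what should set your test’s end point.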

4. What are you going to do about it?

Human nature dictates that you let up on the gas when you cross the finish line. How much longer does it take to clean the last room of the house than every room before it? How many times have you remembered to bring home the stamps but taken a week to put them on an envelope? How often have you run a test, reached a conclusion, planned the next test…and sat on making the changes? For your digital marketing to have teeth, you need to make changes. Oftentimes those changes are hard to make because they reach across more areas than you originally expected. Compliance or legal, promotions, brand…it can be frustrating to act on the data you have gathered and institute permanent changes. Yet that is why you test in the first place. There is no shame in creative, an offer, or a form failing. There is only shame in not making the necessary changes and iterating on your marketing. To streamline the process, I suggest organizing your changes into four categories:

  • Requirement Changes – If you have a requirement change, you are likely dealing with an issue that will become a development project. This might come up when working with form fields, dynamic content, or functional additions. To keep these changes actionable, capture the before-and-after that led to your decision, write a quick synopsis of the change, and create screen grabs with annotations. With those three things in hand, your developers will have the ammo they need to get something done.
  • Analytics Changes – Occasionally you realize that you don’t have all the data you need to make decisions that will impact your web pages. You might find that you need to separate out actions, better follow a visitor path, or gain more control over segmenting data in general. In those cases, you will need to go back to your SDR (solution design reference) or spreadsheet and get your requirement updates documented. What’s that you say? You don’t have an SDR or a master spreadsheet of your analytics implementation? What happens when all your great work gets you promoted? Or you move on? Or you leave all this marketing behind to open an Organic Taco Stand? Have an implementation document.
  • Creative Changes – Very often, high-impact changes can be made via creative updates. Images, button shapes, button verbiage, phone number callouts, video incorporation…all those creative changes can make an incredible impact. These are often your clearest path toward improving your marketing efforts. The reasoning behind these changes is usually straightforward after a test, and getting decision makers on board isn’t as difficult as it is for a development or incentive change. Execute these quickly and plan to keep iterating.
  • Incentive Changes – Incentive changes can have a huge impact on your landing pages. The problem is that if you could have had an incentive in the first place, it would already be on the page. Usually, if an incentive isn’t called out on your landing page, a decision maker stopped it, the business requirements didn’t support it, legal crushed it, or compliance argued against it. In those cases, your only chance to get an incentive put back on your pages is during the testing and iteration phase. While you can’t technically show that the missing incentive is the problem, you can certainly demonstrate with cold, hard data that your page has problems, and you should also be able to show what hasn’t been a problem during testing in order to make the case for an incentive. Remember, incentives don’t always have to be dollars or a percentage off. A guarantee, a warranty, or no wait time is a fantastic incentive to put on a page to generate a call or form completion.

There you have it: four guidelines for running an A/B test, each crafted under real-world, practical conditions, and each one an improvement to your testing experience. The guidelines are simple but executable: understand your current data, run an A/A test first, know when to end a test, and know what to do with the results. By following these four guidelines, your tests will run more smoothly and you will have better results to show off, to the envy of your friends and colleagues, until you decide to open that Organic Taco Stand.

Interested in finding out more? Contact us today!

Gregg Holtsclaw, Digital Strategist
