
Why Failing Is Good


Our Analytics and Conversion Executive Rob Berry gives us an insight into the workings of conversion rate optimisation (CRO), with particular regard to testing and failure. Find out why failing is good.

In the day-to-day work of conversion rate optimisation, research drawing on visual intelligence and analytics data is vital. By harvesting data and making detailed observations, we can form thorough hypotheses for the tests we run on different websites.

Rob says:

An intelligently constructed test hypothesis, alongside smart goal planning and tracking, can provide important insights whether your variation wins or loses.

This process requires in-depth research and a deep understanding of the business, the user personas and the website being optimised. That groundwork increases the frequency of conclusive test results, which can be fed into hypotheses for future tests in a cyclical A/B testing process. Test – learn – repeat.
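To make "conclusive" concrete, here is a minimal sketch in plain Python of the kind of check an analyst might run: a standard two-proportion z-test comparing control and variation conversion rates. The visitor and conversion counts are invented purely for illustration, and the 0.05 threshold is just the conventional default, not a ClickThrough-specific rule.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

# Invented example numbers: control vs. variation.
p_a, p_b, p = two_proportion_z_test(conv_a=120, n_a=4800, conv_b=156, n_b=4750)
print(f"control {p_a:.2%}, variation {p_b:.2%}, p-value {p:.3f}")
# A p-value below ~0.05 is the conventional threshold for calling a result conclusive.
```

The point of a check like this is that either outcome teaches you something: a significant loss is just as informative for the next hypothesis as a significant win.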

What about “failed” tests?

In a RichPage article on A/B testing results, Justin Rondeau of DigitalMarketer said:

Look at your segments (if you have the traffic) to see if the losing variation had a positive impact on any segment of visitors.

Claire Vo, founder of Experiment Engine, also commented:

It depends how you’re defining a “failed” test. If it is a conversion rate loss, then you’ve identified that the elements changed in the test are ‘sensitive areas’ that contribute quite a bit to your conversions. This is a great clue into what can help conversions – see what in this sensitive area changed in the test. Did you deemphasize something? Change a value proposition? These also offer great hints at what is important to users, and you can use these hints to create future tests that maximize what the original is doing well.

In short, the "failure" of these tests is really a process of confident deduction. Without failures, you cannot rule out the optimisation directions that don't work; each loss narrows the field, improving your chances of increasing conversion rates on the next test and informing how the service offering expands.

To wrap up, we’ve detailed some ways to avoid inconclusive test results:

  • Segment The Data – analyse the test’s performance across segments such as traffic source, user device and anything else that matters to your business. Make sure each segment contains enough data to be conclusive in the first place (see the sketch after this list).
  • Revisit Your Strategy – if you regularly see inconclusive test results, chances are you need to make some strategic changes. Ask yourself questions like “is there a WHY behind my hypothesis?” and “will users even notice the changes we are making?” One tip for low-traffic testing is to make the changes as big as possible, so any effect is large enough to detect.
  • Revisit Your Hypothesis – just like the point above, your hypothesis may need work too. Does it make sense? Should you be testing different variations around it?
  • Don’t Test Pointless Elements – consider the size of the business. If you’re dealing with a big brand that attracts millions of visitors per day, small changes can have big impacts. For smaller brands, bigger changes are likely to be needed to show an impact.
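As a rough sketch of the segmentation idea above, you could break a test's raw visitor data down by segment and re-run the same significance check within each one, skipping segments that are too small to call. Everything specific here is an assumption for illustration: the file name `ab_test_visitors.csv`, the column names `variant`, `device` and `converted`, and the 1,000-visitor floor are all invented, not part of any real dataset or tool.

```python
import math
import pandas as pd

MIN_VISITORS = 1000  # illustrative floor; too few visitors per arm means no verdict

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test, as in the earlier sketch."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Assumed raw-data layout: one row per visitor.
df = pd.read_csv("ab_test_visitors.csv")  # columns: variant, device, converted

for device, seg in df.groupby("device"):
    a = seg[seg["variant"] == "control"]
    b = seg[seg["variant"] == "variation"]
    if len(a) < MIN_VISITORS or len(b) < MIN_VISITORS:
        print(f"{device}: not enough data to call")
        continue
    p = p_value(a["converted"].sum(), len(a), b["converted"].sum(), len(b))
    verdict = "conclusive" if p < 0.05 else "inconclusive"
    print(f"{device}: control {a['converted'].mean():.2%}, "
          f"variation {b['converted'].mean():.2%}, p={p:.3f} ({verdict})")
```

This is exactly the situation Justin Rondeau describes above: a variation that loses overall can still win within a single segment, and a per-segment breakdown like this is how you would spot it.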

If you want to discover more about A/B testing, or the ways in which our research and implementation process could make a big difference to your business, get in touch with our Conversion Rate Optimisation experts today.


