A/B Testing: Your Secret Weapon to Optimising Your Website for Students

Claudia Reiners
Head of Strategy
January 29, 2020

A/B testing is the dark horse of Product Management and can be a brilliant investment for any website.

A/B testing is often misunderstood and can take some time to set up, but the results are almost always worthwhile. A well-planned testing framework can make a world of difference to your site and encourage more prospective students down the conversion funnel.

We’ve put together a quick guide to help you determine what tests are best for you to start optimising your site.

What is A/B Testing?


A/B testing is the process of comparing the performance of two variations of a website, web page or page element.

Typical A/B tests focus on changing styling elements, layout, new features and variations of calls-to-action, with the goal of improving the conversion rate against the original design. A testing platform allocates website traffic evenly between the variations, and their conversions are then compared to determine a winner.
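To make this concrete, here's a minimal sketch of how a platform might split traffic evenly between variations. The hash-based bucketing is an illustrative assumption; platforms like Optimizely, VWO and Google Optimize handle assignment for you.

```python
import hashlib

def assign_variant(user_id: str, variants=("control", "variation")) -> str:
    """Deterministically bucket a user so they see the same variant on every visit."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # roughly even split across variants
    return variants[bucket]

print(assign_variant("student-12345"))  # same user ID -> same variant, every time
```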

The A/B Testing Process:


  • Develop Hypothesis

    What is it you want to test? What change do you expect to see in the variation?

  • Set Test Goals

    Your goal could be to improve course enquiries by 10%.

  • Identify Metrics

    The main metric is what will be used to see if you achieved your goal. For this test, it could be an increase in conversion rate on your course form.

  • Set Up Test

Use a testing platform to make the required changes to your form, and set your page targeting and traffic allocation.

  • Run Test

Start your test and monitor performance. Most tests will take 4-6 weeks, depending on traffic numbers (see the sample-size sketch after this list).

  • Analyse Results

Once your test reaches significance, compare the variation with your control against your goal target and main metric.

  • Apply Learnings

    Is the variation a winning experiment? If so, apply changes to your site. If it was a losing test, what further changes could you test?
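Why 4-6 weeks? Test duration is largely a function of how many visitors you need before a result can be trusted. Here's a rough sketch using the standard two-proportion sample-size approximation; the function and example figures are illustrative, not output from any particular platform.

```python
from math import ceil

def sample_size_per_variant(baseline: float, relative_lift: float,
                            z_alpha: float = 1.96,  # 95% confidence (two-sided)
                            z_beta: float = 0.84) -> int:  # 80% power
    """Approximate visitors needed per variant for a two-proportion test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# e.g. a 2.3% baseline form conversion rate, hoping for a 10% relative lift
print(sample_size_per_variant(0.023, 0.10))  # ~70,000 visitors per variant
```

The lower your traffic or baseline conversion rate, the longer the test needs to run to reach a sample of that size.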

The First Step: The Hypothesis


All tests begin with the same question: what is the expected outcome of this experiment?

The hypothesis you create forms the basis of the A/B test and helps to set out the primary goals and the metrics used to test it. For example: ‘if I shorten the number of fields on a form on my website, then the conversion rate for form submissions will increase, as users will be more motivated to fill it out when it isn’t as time consuming’.

The best way of developing a sound hypothesis is to use this guide:

“If [VARIABLE], then [RESULT] due to [RATIONALE].”

So, it should look something like:

“If I shorten the number of fields on a form on my website, then the conversion rate for form submissions will increase due to users being more motivated to fill it out when it isn’t as time consuming.”

With a sound hypothesis ready to go, determining the goals and identifying the metrics used to measure the test’s performance becomes much easier.

If we were going to test the hypothesis statement above, our goal and metrics could be as follows:

Goal: Improve form submissions by 10%
Metric used to measure the goal: Form conversion rate (%)


Choosing a Testing Platform


A/B testing can seem like an expensive practice, which is why it’s important to consider your business needs first to determine what suits you best.

Anyone who has looked into A/B testing has most likely come across the well-known platforms, such as Optimizely, VWO and AB Tasty, to name a few. At Candlefox, we’ve used all three products, and each has its own range of unique benefits. Make sure to check out this useful comparison between the three if your business is beginning to look into testing.

For those looking to try testing for the first time but not ready to invest quite yet, Google Optimize is one of the best products to begin with. This lesser-known Google product should be a staple in any Product Manager’s toolbox. The free tool is easy to use and manage, and integrates perfectly with Google Analytics, which most already use to track website performance. Most tests in Google Optimize can be created directly in the Visual Editor, so little development work is required.

Deciding What to Test


One of the most confusing aspects of testing is choosing what exactly to look at first, especially considering all tests are based, to an extent, on a hunch.

Many Conversion Rate Optimisation (CRO) consultants would advise starting as ‘far down the funnel’ as you can, since these are typically your highest value pages.

When we talk about the ‘funnel’, we’re essentially talking about the set of steps users or visitors go through before they reach a conversion.

The top of the funnel is where everyone starts when they visit a website, and the most interested buyers (or, for education providers, prospective students) move down the funnel to convert. On an ecommerce website, a conversion could be a product purchase; for education providers, it could be an enquiry or application form submission for a course. Other examples of moving visitors down the funnel include website signups or newsletter signups.

So, does that mean that all tests should be at the bottom of the funnel?

The answer is no.

The further down the funnel you test, the more uncertain the results can be. Transactional pages tend to have the lowest conversion rates, so tests run solely on these pages take longer to reach significance and can produce varying results.

Running a test that changes the messaging on your homepage, on the other hand, could have a huge impact on overall conversion rate, depending on how you measure your website performance. Not all tests need to focus on conversion rates either; it’s all about determining the best strategy for your company.

Some of the best techniques to follow include:

1. Incremental improvements at the top of the funnel
2. Focusing on high-value pages (such as enquiry form pages or product pages)
3. Micro-conversion tests


Types of Micro-Conversion Tests

  • Email/newsletter signups
  • Viewing a course page
  • Adding something to a cart
  • Downloading a PDF or resource
  • Click-through rates (CTRs) from ad copy
  • Increasing time on page
  • Social media follows
  • Increasing chatbot discussions
  • Reducing bounce rate

Types of High-Value Page Tests

  • Hiding form fields with high drop-offs
  • Changing the messaging on CTAs
  • Reordering information sections of course pages
  • Adding social proof to course pages
  • Using behavioural nudges (countdown timers, urgency messaging etc.)

Types of Top-of-the-Funnel Tests

  • Updating styling
  • Testing homepage image
  • Adding testimonials
  • Changing messaging on landing pages
  • Reordering sections of the homepage

Another option is to focus on your worst performing pages and go from there.

Jump on Google Analytics and see what pages have either 1) the lowest conversion rates or 2) the highest bounce rates. It’s always a good idea to focus on areas of your website that need the most improvement.

Testing is a Goldmine of New Learnings


Testing is often pushed to the side because of its fickle nature. Not all tests will win, and not all will show an increase in conversion rate or whichever goal you set. Most of the time, you’ll find that the current version of your website performs better. But one thing worth noting is that every test is a winner, even when it’s not: the insights you get in return from every test are just as important as a monetary gain.

If you run a test on an enquiry form on a course page, where the variation has fewer fields than the current version, it may not always win. But what it will do is open a treasure trove of important questions that can help you better understand your users.

Do students prefer longer forms when requesting information for a course because a longer form looks more trustworthy and credible?

Does a longer enquiry form prevent users who aren’t motivated to do the course from filling it out, therefore increasing the quality of leads you receive?

These are the questions that can open the door to dozens of new testing ideas and help to rule out changes that may otherwise harm your goals. That’s why it’s important to keep track of all tests performed, so you can go back and reflect on past tests and use those insights to help you refine your testing roadmap going forward.

Tracking Your Tests


Whether you use Trello, Airtable or a Google Sheet, a good test tracker should include the following (sketched in code after this list):

1. The name of the test
2. Screenshots and a description of the variation and any changes made
3. A brief commentary on results
4. Uplift in conversion rate (see formula)
5. A link to a report on a testing platform or any spreadsheet you may have exported from the test
6. Start and finish dates of the test
7. What pages were part of the test
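
As one way of structuring this, here’s a minimal sketch of a tracker record mirroring the fields above; the field names and example values are illustrative, so adapt them to whatever tool you use.

```python
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    name: str
    description: str          # variation details, plus links to screenshots
    results_commentary: str
    uplift_pct: float         # calculated with the formula below
    report_link: str          # testing-platform report or exported sheet
    start_date: str
    end_date: str
    pages: list = field(default_factory=list)  # pages included in the test

record = TestRecord(
    name="Shortened enquiry form",
    description="Variation hides two optional fields on the course enquiry form",
    results_commentary="Variation won; fewer drop-offs on mobile",
    uplift_pct=17.39,
    report_link="https://example.com/test-report",  # hypothetical link
    start_date="2020-01-06",
    end_date="2020-02-17",
    pages=["/courses/enquiry"],
)
```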


How to calculate uplift in conversion rate:

(New Conversion Rate – Old Conversion Rate) / Old Conversion Rate x 100

i.e. (2.7% – 2.3%) / 2.3% x 100
= 17.39% uplift in conversion rate
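
The same formula as a small helper, if you’d rather compute uplift in a script; rates are decimals (2.3% becomes 0.023).

```python
def uplift(new_rate: float, old_rate: float) -> float:
    """Relative uplift in conversion rate, as a percentage."""
    return (new_rate - old_rate) / old_rate * 100

print(round(uplift(0.027, 0.023), 2))  # 17.39
```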



Closing Your Test and Applying Changes


Before interpreting your results, you need to ensure that your test has reached statistical significance. This means that, with 90-95% confidence, an increase in conversion rate on a variation is not due to chance, and you can expect to see the same results if you push the variation to your website.

There are many tools out there to help you determine if your test is statistically significant – all you need is the number of website sessions for the original and the variation(s), and the conversion rate for each. Neil Patel has a very useful tool on his site that can help with this.
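If you’d like to sanity-check the numbers yourself, here’s a minimal sketch of the standard two-proportion z-test these calculators run. The function name and example figures are illustrative assumptions.

```python
from math import sqrt, erf

def p_value(sessions_a: int, conversions_a: int,
            sessions_b: int, conversions_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p1 = conversions_a / sessions_a
    p2 = conversions_b / sessions_b
    pooled = (conversions_a + conversions_b) / (sessions_a + sessions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b))
    z = (p2 - p1) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-tail probability

# e.g. control: 10,000 sessions, 230 conversions; variation: 10,000 sessions, 270
print(p_value(10_000, 230, 10_000, 270))  # ~0.07: significant at 90%, not at 95%
```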

Once a test has reached significance, it’s time to start thinking about applying it to your site. Whether it’s a small uplift in conversion or a huge one, any incremental improvement is worth pushing forward with.

If a test doesn’t win, it’s a matter of considering phase two and asking yourself a question:

Since that didn’t work, what else can I change?


A/B testing isn’t a one-off. It’s an ongoing commitment to optimising your web performance and reaching your goals by improving not only your site metrics, but also your knowledge of your website and its users as a whole.

