Experiments add a scientific touch to how we engage with our audiences online. They involve testing different versions or elements of a webpage or app to understand what works best and why. Whether you're running A/B/n-tests (comparing multiple versions of a page), diving into the world of conversion rate optimisation (CRO) to boost those all-important clicks, or getting fancy with multivariate testing (MVT) to see how different combinations of elements interact, experiments are your secret weapon.
So, how do we go about it? What should and shouldn’t we do? Let’s dive in.
A good analysis relies on good data, and experiments are no different. Imagine going through the whole process and then realising the test key performance indicators (KPIs) weren’t tracked properly. Not what we want! To avoid this, make sure important KPIs, such as clicks on a call-to-action button, can be tracked and set as the test goal in your testing tool. Which KPIs you decide to track depends on the goal of your campaign.
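To make that concrete, here's a minimal sketch of what goal tracking can look like on the page itself. The `trackEvent` helper, the button selector, and the experiment name are hypothetical placeholders for whatever your analytics or testing tool provides:

```typescript
// Minimal sketch: record clicks on a call-to-action button as the test goal.
// `trackEvent` is a hypothetical helper; in practice you would forward the
// event to your analytics or testing tool's own API.
function trackEvent(name: string, payload: Record<string, string>): void {
  console.log("event:", name, payload); // placeholder for the real call
}

const cta = document.querySelector<HTMLButtonElement>("#signup-cta");
cta?.addEventListener("click", () => {
  trackEvent("cta_click", {
    experiment: "homepage_hero_test", // hypothetical experiment name
    variant: "B",                     // usually set by your testing tool
  });
});
```

However you wire it up, the point is that the click becomes a distinct, countable event your testing tool can use as the goal metric.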
While quantitative results tell us which variant performed the best, they don’t always explain why. Heatmaps to the rescue! A heatmap shows where most activity happens on a webpage, like how far people scroll. This is crucial if your conversion goal is at the bottom of the page. Heatmaps add context to why certain actions happen or don’t happen, giving you a colourful snapshot of user behaviour.
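If you want raw scroll-depth numbers alongside your heatmap tool, a rough sketch like this can record how far down the page visitors actually get (it reuses the hypothetical `trackEvent` helper from the previous example):

```typescript
// Rough sketch: report the deepest scroll position reached, as a percentage
// of total page height. Reuses the hypothetical trackEvent helper above.
let maxDepth = 0;

window.addEventListener("scroll", () => {
  const scrolled = window.scrollY + window.innerHeight;
  const total = document.documentElement.scrollHeight;
  maxDepth = Math.max(maxDepth, Math.round((scrolled / total) * 100));
});

// Send the final depth once the visitor leaves the page.
window.addEventListener("pagehide", () => {
  trackEvent("scroll_depth", { percent: String(maxDepth) });
});
```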
If a high-traffic page isn’t doing well for unknown reasons, a multivariate test can help. Unlike A/B/n-tests, which compare entire page versions, multivariate tests compare changes to individual page sections and how those changes combine. This helps identify which part of the page needs a makeover.
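One thing to keep in mind is how quickly the number of combinations grows. The sketch below enumerates every combination of a few hypothetical page sections; two headlines, two hero images, and two button colours already give 2 × 2 × 2 = 8 variants to split your traffic across:

```typescript
// Sketch: enumerate all combinations of element variants for a multivariate
// test. The section names and options are hypothetical examples.
const sections: Record<string, string[]> = {
  headline: ["Save time today", "Work smarter"],
  heroImage: ["team-photo", "product-shot"],
  buttonColour: ["green", "orange"],
};

function combinations(spec: Record<string, string[]>): Record<string, string>[] {
  return Object.entries(spec).reduce<Record<string, string>[]>(
    (combos, [section, options]) =>
      combos.flatMap((combo) => options.map((option) => ({ ...combo, [section]: option }))),
    [{}]
  );
}

console.log(`${combinations(sections).length} variants to test`); // 8
```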
We often set up experiments on a top-notch computer with a fast internet connection. However, visitors might use different devices with varying internet speeds. Preview test variants on different devices and simulate slower connections, for example with your browser’s device emulation and network throttling tools. This ensures your changes load well for all users, leading to better test results. After all, not everyone has super-fast internet; some are still living in the buffering era!
Experiments often include all visitors who accept cookies. But what if you want to improve performance for specific groups? For example, if mobile visitors or desktop visitors from paid search campaigns have low conversion rates, set up tests that target those segments specifically. This helps you improve the mobile version of your site and gather data for future personalisation. Tailor-made experiences, anyone?
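Most testing tools let you define these audiences in their interface, but as a rough sketch of the underlying logic (the utm_medium value and the crude user-agent check are simplifying assumptions):

```typescript
// Sketch: check whether a visitor belongs to the "desktop visitors from paid
// search" audience before enrolling them in an experiment. The utm_medium
// value and the user-agent test are simplifying assumptions.
function isPaidSearchVisit(url: string): boolean {
  return new URL(url).searchParams.get("utm_medium") === "cpc";
}

function isDesktop(userAgent: string): boolean {
  return !/Mobi|Android|iPhone|iPad/i.test(userAgent);
}

if (isPaidSearchVisit(window.location.href) && isDesktop(navigator.userAgent)) {
  console.log("visitor qualifies for the desktop paid-search test");
  // enroll the visitor in that experiment here
}
```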
In experiments, test only one change or a few closely related changes at a time. Otherwise, it can be hard to tell which change is driving a variant’s performance. If you want to test more than one hypothesis, create a multivariate test with a separate hypothesis for each change. Keep it simple, Sherlock!
Don’t be afraid to challenge general best practices when creating your test hypotheses for specific groups. Just like people, target groups are unique. General best practices might not work for all groups and could even harm performance. Websites focused on selling products need different strategies than those aimed at building brand awareness. One size doesn’t fit all!
A common mistake in experiments is ending a test too soon. This can skew results because a small number of users might have too much influence. Make sure you have a large enough sample of visitors and a clear difference between variants to achieve statistical significance. Many testing tools have built-in calculators, but there are also many available online. Patience, my friend!
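If you’d rather sanity-check the numbers yourself, the calculation behind most of those calculators is a two-proportion z-test. Here’s a small sketch; the visitor and conversion counts are made up:

```typescript
// Sketch: two-proportion z-test for comparing conversion rates.
// |z| >= 1.96 roughly corresponds to 95% confidence (two-sided).
// The visitor and conversion counts below are illustrative only.
function zScore(convA: number, visitorsA: number, convB: number, visitorsB: number): number {
  const rateA = convA / visitorsA;
  const rateB = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const stdErr = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (rateB - rateA) / stdErr;
}

const z = zScore(200, 5000, 245, 5000); // 4.0% vs 4.9% conversion
console.log(z.toFixed(2), Math.abs(z) >= 1.96 ? "significant" : "keep collecting data");
```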
Avoid comparing page results over different time periods and attributing changes to specific factors. Pages don’t exist in a vacuum, and other factors can affect performance. Test all major changes with an even traffic split and compare data within the test period to ensure fair comparisons. Apples to apples, not apples to light bulbs!
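A common way to get an even split within the same time period, and to make sure returning visitors always see the same variant, is deterministic bucketing on a visitor ID. A rough sketch (the FNV-style hash is chosen purely for illustration):

```typescript
// Sketch: assign each visitor to a variant with a deterministic 50/50 split,
// so the same visitor always sees the same variant for the whole test period.
// FNV-1a-style string hash, used here purely for illustration.
function hashId(id: string): number {
  let h = 2166136261;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

function assignVariant(visitorId: string): "A" | "B" {
  return hashId(visitorId) % 2 === 0 ? "A" : "B";
}

console.log(assignVariant("visitor-123")); // always the same answer for this ID
```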
Another common mistake is fixing the test’s end date before launch. Ending a test without statistically significant results is problematic, and a pre-set end date makes that more likely. Keep an open mind about when the test will finish and prioritise reliable results over strict timelines. Flexibility is key!
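Instead of committing to an end date, you can estimate up front how long the test is likely to need, based on the sample size required to detect the lift you care about, and then let significance decide when to stop. A rough sketch, where the baseline rate, expected lift, confidence, power, and daily traffic are all illustrative assumptions:

```typescript
// Sketch: estimate visitors needed per variant to detect a given lift, then
// turn that into an expected test duration. Assumes 95% confidence and 80%
// power; the baseline rate, expected rate, and traffic figures are made up.
function sampleSizePerVariant(baseline: number, expected: number): number {
  const zAlpha = 1.96; // 95% confidence, two-sided
  const zBeta = 0.84;  // 80% power
  const variance = baseline * (1 - baseline) + expected * (1 - expected);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (expected - baseline) ** 2);
}

const perVariant = sampleSizePerVariant(0.04, 0.05); // hoping to lift 4% to 5%
const dailyVisitorsPerVariant = 500;                 // illustrative traffic
console.log(`${perVariant} visitors per variant, roughly ${Math.ceil(perVariant / dailyVisitorsPerVariant)} days`);
```

Treat the result as a planning estimate, not a deadline: if the data isn’t conclusive when that day arrives, keep the test running.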
Experiments are crucial, and a marketing team’s best friend. Understanding what your audience responds to and whether you’re meeting your goals is essential for business success. Plus, experiments go hand in hand with traffic-driving activities like paid search, UX design, and editorial content. By integrating these strategies, you can create a holistic approach to optimising your website and campaigns effectively. So go ahead, test, tweak, and triumph! Learn more about experiments and reach out to us for further insights.
Do you need help optimising your marketing? Contact us and we'll be more than happy to discuss how we can help.