Let’s say you’re taking an average of two buying customers.
One is a 17-year-old Finnish girl and the other a 77-year-old Spanish man. Averaging them out, you would end up with a middle-aged transgender Austrian, which would be a pointless exercise in understanding your customers.
What’s our point?
Well, you can’t understand your customers if you average them out and heap them together like this. And if you’re basing your Conversion Rate Optimisation (CRO) on assumptions like these, you don’t know who you’re testing or optimising for.
You’re essentially flying blind.
Even the wealth of free CRO tools at your disposal won’t help you get the most out of your optimisation.
Real CRO, with hypotheses based on hard data and proper segmentation, will do wonders for your bottom line. Conservative estimates suggest a lasting 10-15% uplift from a simple switch of button colour, layout or call to action (CTA).
This article and the accompanying video, taken from our Wednesdays @ Aeona event presented by Alasdair Humberston, Head of UX & Conversion at FIRST, will hopefully be a thorough examination of CRO that sets you up to begin testing and optimising your own website.
What is CRO?
Conversion rate optimisation is a system for increasing the percentage of visitors to your website that convert into users. It involves understanding their user journey and how they interact with touch points as they move along your conversion funnel towards a desired action.
Let’s be clear from the outset: CRO only works when you know what you’re testing for!
Effective CRO comes from having an in-depth, analytical understanding of the way your users and visitors interact with your product. If your theories are just backed by your own assumptions and not by hard data then you’re not practising effective CRO.
Real CRO starts at the point of tracking visitor data and observing their behaviour.
If you haven’t started tracking any user data yet, you should probably stop reading right here and go look at our 3 Free Analytics Tools article. This will take you through the process of installing Google Analytics, Heap & Hotjar onto your website as well as where they fit into the analytics mix.
For those of you who already have data, you’ll need to begin measuring it against a set of metrics that you deem to be ‘good’.
What makes a good metric?
A good metric is:
- Comparative – Compare to other time periods, groups of users or competitors (eg. we increased conversion rate by 8% from the last month)
- Understandable – If people cannot talk about it, it is much harder to use it as an actionable insight
- Ratio or rate – Rates are easier to act on (speed tells you more than distance travelled) and are inherently comparative
A good metric is anything that causes you to change your behaviour.
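To make the “comparative, rate-based” idea concrete, here’s a minimal Python sketch. The visitor and conversion figures are invented for illustration; the point is that a rate plus a month-over-month comparison is immediately actionable in a way raw counts are not.

```python
# Hypothetical monthly figures -- illustrative only.
visitors = {"March": 12_400, "April": 13_100}
conversions = {"March": 372, "April": 458}

# A rate (conversions / visitors) is easier to act on than a raw count.
rates = {month: conversions[month] / visitors[month] for month in visitors}

# Comparative: relative month-over-month change, as a percentage.
change = (rates["April"] - rates["March"]) / rates["March"] * 100

print(f"March: {rates['March']:.1%}, April: {rates['April']:.1%}")
print(f"Relative change: {change:+.0f}%")
```

Saying “conversion went up 17% month over month” is understandable, comparative and a rate all at once, which is exactly what the checklist above asks for.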
If you want to learn a little bit more about ‘good’ and ‘bad’ metrics take a look through our article Vanity vs Actionable metrics.
Finding Areas for Improvement
Ok, so now you have your data and you know what makes a ‘good’ metric.
How can you improve on this?
How do you go about the process of optimising?
This starts with identifying your areas of interest. These are areas where your visitors aren’t converting as you would like them to. A great framework for identifying your areas of interest is the heuristic evaluation.
A heuristic evaluation is a user experience (UX) inspection method developed for computer software that helps to identify usability problems in a website’s design.
The areas included in a heuristic evaluation are:
- Relevancy – Does it match the user’s content and design expectations? E.g. CTAs (ticks)
- Clarity – Is the message as clear as it could be? E.g. understood without prior knowledge (slogan)
- Value – Can we increase the user’s motivation to take the next step? E.g. sell the end benefit
- Friction – What’s causing doubts, hesitations, uncertainties or usability issues? E.g. form fields
- Distraction – What is not contributing and can be removed? E.g. links to non-essential pages
The practical application of a heuristic evaluation in combination with your analytic data would work something like this:
You collect your user data using analytics (Refer back to our 3 Analytics Tools article for more info, but Google Analytics & Heap should do just fine).
You define your conversion funnel and track your users along the funnel to an eventual goal.
The changes in conversion rate at each stage will give you a good sense of where your problem areas lie within your conversion funnel.
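As a sketch of how that stage-by-stage analysis works, here is a short Python example over a hypothetical four-step funnel (the page names and visitor counts are made up): the weakest step-to-step rate marks your problem area.

```python
# Hypothetical funnel counts -- illustrative only.
funnel = [
    ("Landing page", 10_000),
    ("Product page", 4_200),
    ("Form started", 1_100),
    ("Form submitted", 240),
]

# Conversion rate between each consecutive pair of stages.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.0%}")

# The stage pair with the lowest rate is where to focus first.
worst = min(zip(funnel, funnel[1:]), key=lambda pair: pair[1][1] / pair[0][1])
print("Biggest drop-off:", worst[0][0], "->", worst[1][0])
```

In this invented example the form-started-to-submitted step converts worst, so that form is where you’d point your heuristic evaluation next.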
Next, you implement Hotjar tracking on the problem pages of your site and begin to track visitor behaviour on the page: mouse movements and the overall heatmaps, with the heuristic areas of interest in mind.
Let’s say that your users aren’t completing an expression of interest (EOI) on your page, and you believe that a friction area within your form may be how you ask to contact your users. Your theory is that because you only offer to contact them via telephone, rather than offering email as an alternative, your visitors feel pressured and abandon their form.
Of course, this is a fairly simple example but it gives you your basis to begin testing.
Optimising with A/B Testing
The actual process of optimising is fairly intuitive; the hard part was collecting your data and coming up with theories.
Optimising has become as simple as installing the Google Optimize code on your website and then deciding on the areas you want to A/B test. Google lets you show each version to a chosen percentage split of your audience, so of the visitors that come to your site, 40% might see version A while 60% see version B.
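Under the hood, a percentage split is just a weighted random assignment made once per visitor. Here’s a rough Python sketch of the idea; the 40/60 weights are simply the example split from above, and in a real tool the assignment would be stored per visitor so they always see the same version.

```python
import random

def assign_variant(weights, rng=random.random):
    """Assign one visitor to a variant with the given probabilities."""
    r = rng()
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return variant
    return variant  # guard against floating-point rounding

random.seed(42)  # fixed seed so the demo is repeatable
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[assign_variant({"A": 0.4, "B": 0.6})] += 1

print(counts)  # roughly a 4,000 / 6,000 split
```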
You will most likely experience the following scenario: one of your tests increases conversion rates by 10%, while another actually decreases conversion by 10% or more.
This isn’t necessarily bad news; you just need to stop running the poor-performing test and keep the strong performer!
The only catch with CRO is that you need a significant sample size to make an A/B test worth your time. Google may show you a statistical improvement, but you might not feel the results right away because your experiment is underpowered and hasn’t had enough visitors.
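To get a feel for why sample size matters, here is a rough two-proportion z-test in plain Python (standard library only; the visitor counts are invented). The same 10% relative lift is clearly significant with a large sample but indistinguishable from noise with a small one.

```python
from math import erf, sqrt

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal approx.
    return z, p_value

# Identical 10% relative lift (3.0% -> 3.3%), small sample vs large sample:
_, p_small = z_test(30, 1_000, 33, 1_000)
_, p_large = z_test(3_000, 100_000, 3_300, 100_000)
print(f"small sample p = {p_small:.2f}, large sample p = {p_large:.4f}")
```

The small experiment is underpowered (p far above 0.05), so even a genuine improvement looks like chance until enough visitors have passed through the test.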
Follow through, maximise your returns
On average, A/B testing should give you a 7-15% increase in conversions; with a significant change this can be anything in the range of a 20-50% increase.
How much this varies depends on your due diligence in collecting data and how well you apply your theories.
In CRO, all the magic happens before you even begin running your A/B tests. When your theories are designed correctly and backed by good metrics and data, you will be rewarded with a lifelong increase in conversion rates that will continue to maximise your returns.