Growth Marketing 2
--
Growth experiments are at the center of the Growth Marketing Process, and Conversion Rate Optimization is a powerful tool in a Growth Marketer’s arsenal. But without the proper information on what to optimize and how to optimize it, you might end up with a mess that has no actual impact on the bottom line.
Week 2 of CXL Institute’s Growth Marketing Minidegree focuses on how to conduct the appropriate research, design A/B tests, and analyze the results.
Peep Laja, CEO of CXL, warns against falling for so-called CRO best-practice lists written by SEO people who copied other SEO people to drive traffic to their own websites.
CRO is a systematic process to find growth opportunities based on data. So it differs from company to company. What works for Amazon is unlikely to work for you, as you’re not Amazon.
But it makes a lot of sense to pay attention to what Jeff Bezos said a few years back: “Our success at Amazon relies on how many experiments we do per year, per week and per day.” So understand their growth mindset and apply it from your own perspective and for your own business; don’t copy exactly what they do.
Asking the right questions is key to having sustainable growth year after year.
1) Where are the biggest leaks in my funnel?
2) What are these problems?
3) What are the reasons for these problems?
Then you turn the known issues into hypotheses and prioritize them for testing.
The process starts with thorough Conversion Research.
ResearchXL Framework
This is a six-step analysis.
Step 1: Technical Analysis
Peep advises starting with the low-hanging fruit and finding technical problems that your developers might not have noticed. Basically, you go to your website (or your client’s website), analyze the key paths across different devices, browsers, and pages, compare conversion rates between them, and run speed tests. Fixing a technical issue that increases your conversion rate by 1% might end up adding thousands of dollars in revenue over a year.
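For the conversion-comparison part, here is a minimal sketch of what that could look like in Python, assuming you’ve exported sessions and transactions per device and browser (all names and numbers below are made up):

```python
import pandas as pd

# Hypothetical analytics export: sessions and transactions per segment
data = pd.DataFrame({
    "device":       ["desktop", "desktop", "mobile", "mobile", "tablet"],
    "browser":      ["Chrome",  "Safari",  "Chrome", "Safari", "Safari"],
    "sessions":     [52_000,    18_000,    61_000,   34_000,   4_000],
    "transactions": [1_560,     520,       1_340,    410,      95],
})

data["conversion_rate"] = data["transactions"] / data["sessions"]

# Segments converting well below the site-wide average are candidates
# for a technical bug (broken checkout, rendering issue, etc.)
site_avg = data["transactions"].sum() / data["sessions"].sum()
suspects = data[data["conversion_rate"] < 0.7 * site_avg]
print(suspects.sort_values("conversion_rate"))
```

A segment converting far below the site-wide average (mobile Safari in this made-up data) is a strong hint that something is technically broken and worth investigating.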
Step 2: Heuristic Analysis
This one is not scientific but it sure is fast. Again, you go to your website and critique every possible element.
You have to ask yourself:
Is this relevant?
Is this clear?
Is the motivation there to make my users take the desired action?
What are the possible points of friction keeping users from taking the next step?
Your goal is to get the user to take the next step, to make the next micro-commitment.
If you can create sufficient motivation and provide the necessary ability to take an action, all you need to do is trigger the user. That said, intrinsic triggers are the real habit creators; external triggers (e-mails, push notifications, pop-ups, SMS) are only a means to an initial behavior.
One important thing to note here: the right motivation can make a user push through a lot of friction to reach the end goal, but all the ability in the world won’t have a similar effect when there is no motivation.
Step 3: Digital Analytics
It is critical to go to Google Analytics and find the answers to the following questions:
- What are the leaks?
- How do different segments behave?
- What are users doing?
- What actions correlate with higher conversions?
Once you define KPIs around the likely problem areas, you can monitor the effects of your experiments and fix the issues.
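As a sketch of what “finding the leaks” can look like once the data is exported, assuming a simple five-step funnel (the step names and numbers are invented):

```python
import pandas as pd

# Hypothetical funnel: users reaching each step, in order
funnel = pd.DataFrame({
    "step":  ["product page", "add to cart", "checkout", "payment", "purchase"],
    "users": [100_000,        12_000,        7_500,      5_200,     4_100],
})

# Step-to-step conversion; the biggest drop-off is the biggest leak
funnel["step_conversion"] = funnel["users"] / funnel["users"].shift(1)
funnel["drop_off"] = 1 - funnel["step_conversion"]

print(funnel)
print("Biggest leak:", funnel.loc[funnel["drop_off"].idxmax(), "step"])
```

The step with the largest drop-off is your first candidate for deeper research and experimentation.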
Step 4: Mouse Tracking and Form Analytics
Analytics can tell you what people do, but mouse tracking and form analytics shed light on how people behave and where the friction is. Heat maps, click maps, and scroll maps tell a story.
Session replays in particular can provide insights that you can’t gather from numbers alone.
Step 5: Qualitative Surveys
A short survey with open-ended questions can be invaluable to understand user behavior.
The idea is to understand which problems people are solving with your product or service, how they are deciding, and what’s holding them back.
Once you know more about user goals, desires, fears, uncertainties, doubts, and questions, you can address them in your copy and design to make your journeys flow better.
Some questions to ask in surveys:
What was your biggest challenge, frustration, or problem in finding the right product?
What doubts and hesitations did you have before completing the purchase?
What is the one thing that nearly stopped you from buying from us?
Step 6: User Testing
This step is to observe how people interact with your website or application. You basically have them perform key tasks, try to extract insights, and find issues in the customer journey.
What’s difficult to understand?
What’s difficult to do?
What goes wrong?
You should focus on how people behave when they experience your product rather than what they say. What people say can be affected by various biases.
By this point, you should have detected tons of areas to improve. With around a hundred issues that could be optimized, you can move on to turning them into hypotheses and prioritizing your experiments.
Running A/B Tests
Ton Wesseling, a CRO and digital experimentation consultant, starts by giving his view on conversion research and builds on it with how to actually run tests.
He goes over the 6V research model, which covers value, versus, view, validated, verified, and voice. I won’t go into detail on this as it overlaps with the previous section.
…
Hypotheses align all stakeholders on what the problem is, what the possible solution is, and what results are expected.
The ROAR Model
If you don’t have 1,000 conversions (not necessarily sales conversions) a month, it doesn’t make sense to run A/B tests, as the collected data will be insufficient to find a winner. At that volume, you should aim for a minimum 15% uplift before considering the challenger the winner over the control.
Once you reach 10,000 conversions a month, you can consider a 5% uplift enough, and it’s time to re-think the whole setup.
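To see why these thresholds roughly make sense, here is an illustrative sample-size sketch using statsmodels, assuming a two-sided test at 95% confidence, 80% power, and a 2% baseline conversion rate (all of these numbers are my assumptions, not part of the ROAR model itself):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

BASELINE = 0.02  # assumed baseline conversion rate, purely illustrative

def visitors_needed(relative_uplift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a given relative uplift."""
    treated = BASELINE * (1 + relative_uplift)
    effect = proportion_effectsize(treated, BASELINE)  # Cohen's h
    return NormalIndPower().solve_power(
        effect_size=effect, alpha=alpha, power=power,
        ratio=1.0, alternative="two-sided",
    )

for uplift in (0.15, 0.05):
    n = visitors_needed(uplift)
    print(f"{uplift:.0%} uplift -> ~{n:,.0f} visitors per variant "
          f"(~{n * BASELINE:,.0f} conversions in the control)")
```

The smaller the uplift you want to detect, the more traffic and conversions you need, which is why low-traffic sites can only realistically chase big wins.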
Ton goes on to explain statistical power and significance, and how they help you avoid false negatives and false positives.
Power: “Statistical power is the likelihood that an experiment will detect an effect when there is an effect to be detected.”
It depends on sample size, effect size and significance level.
Most people use a 90% or 95% confidence level (a significance level of 10% or 5%). The significance level is the probability of the study rejecting the null hypothesis when the null hypothesis is actually true.
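As a sketch of how significance and power come together when reading a test result, here is a hypothetical challenger-vs-control evaluation with statsmodels (the counts are invented):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

# Hypothetical test result: conversions and visitors per variant
conversions = [820, 910]          # control, challenger
visitors    = [40_000, 40_000]

# Two-sided z-test for the difference between the two conversion rates
z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.3f} (significant at alpha=0.05: {p_value < 0.05})")

# Power: how likely this setup was to detect an effect of the observed size
effect = proportion_effectsize(conversions[1] / visitors[1],
                               conversions[0] / visitors[0])
power = NormalIndPower().power(effect_size=effect, nobs1=visitors[0],
                               alpha=0.05, alternative="two-sided")
print(f"Power to detect an effect of this size: {power:.0%}")
```

Low power means that even a “significant” result deserves caution before you declare a winner, which ties back to Ton’s warning about false positives and negatives.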
It is crucial to know what works and what doesn’t, and data can be misleading if you don’t know what you’re doing.