When you’re first starting A/B testing, there are a few quick wins to go after: changing button color, CTA text, headlines, and the placement of elements. I’m going to cover my process for developing and implementing a successful test. In this test I changed the copy of our primary CTA (call to action) button from “Sign up for free” to “Try it out free.” Swapping just three words resulted in a 68% increase in clicks on our primary CTA, a 49% increase in registration forms shown, and a 37% increase in signups.
Recently at Techstars we had Chris Rodriguez come in and give all of our startups a growth hacking workshop. If you’re familiar with Growthhacker.tv, he’s the third interview. He’s in charge of growth over at Knewton and is extremely knowledgeable. You can find his individual services at yougrow.co. He gave some great advice, and maybe in another post I can do a deep dive on it.
I found two slides especially relevant since I was starting to run some of my first A/B tests with Tutum. We recently signed up for the trial of Visual Website Optimizer, and now we’re using it as our dedicated testing platform. Chris went over his Scientific Method for testing, and potential “low-hanging fruit” tests that target color, copy, and placement.
My first question was, “why aren’t people signing up more often?” We have a pretty specific service, and our visitors are probably people who could benefit from it. For visitors to come to our website and not sign up for a free trial was odd, so I thought there must be a reason for them to be hesitant. I suspected our CTA text could be made more informative to remove some of that initial hesitation. In other words, I was now on a mission to reduce friction.
As mentioned by Sean Ellis (who coined the term “Growth Hacker” and runs growthhackers.com):
Do Background Research
Our background research took the form of user feedback. We asked users what they thought of Tutum’s website and our service. Some users noticed that we didn’t make it explicitly clear that we were offering free credit, and that you could actually start using our service for free right after signing up.
My hypothesis was that our control was too generic and told people they could sign up for free, but didn’t explicitly tell them they could actually start using our service for free. With the variation, I made it clear that by signing up they would be able to immediately start using Tutum for free.
Test with an Experiment
The next step was to conduct a test. Here’s another slide from Chris’ presentation that provides a nice quick visual if you’re new to testing. I decided to test our CTA’s copy for the reasons stated above.
Control: “Sign up for free”
Variation: “Try it out free”
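A tool like Visual Website Optimizer handles the traffic split for you, but as a rough sketch of what’s happening under the hood, variants are typically assigned by deterministically bucketing each visitor (the function name here is my own, purely illustrative):

```python
import hashlib

# Control and variation copy from the test above
VARIANTS = ["Sign up for free", "Try it out free"]

def assign_variant(visitor_id: str) -> str:
    """Bucket a visitor deterministically, so they see the same copy on every visit."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]
```

Hashing a stable visitor ID (e.g., a cookie) instead of picking randomly on each page view keeps the experience consistent for a given visitor while splitting traffic roughly 50/50.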
Goal #1 tracked how many times the primary CTA was clicked. This went up 68% and had a 98% chance to beat the original.
Goal #2 tracked how many visitors viewed our registration page. This went up 49% and had a 98% chance to beat the original.
The test occurred over 8 days, with about 1,000 visitors. Here’s the summary table of the results:
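The “98% chance to beat the original” figure is Visual Website Optimizer’s own statistic, but you can approximate that kind of number yourself with a one-sided two-proportion z-test. The counts below are made up to roughly match the traffic described (about 1,000 visitors split between the two versions), not the real data:

```python
import math

def chance_to_beat(conv_a, n_a, conv_b, n_b):
    """Approximate P(variation beats control) with a one-sided two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Hypothetical counts, ~500 visitors per arm
control_clicks, variation_clicks = 50, 84
prob = chance_to_beat(control_clicks, 500, variation_clicks, 500)
lift = variation_clicks / control_clicks - 1
print(f"relative lift: {lift:.0%}, chance to beat original: {prob:.1%}")
```

Keep in mind VWO does its own math under the hood, so its reported confidence won’t match this simple approximation exactly.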
I used Mixpanel to keep track of the final part of the funnel.
There was a noticeable difference in the percentage of people viewing our registration form and signing up. Our previous month’s data had visitors seeing the registration form 8.56% of the time, with an overall sign up rate of 4.42%.
With the new CTA, we had 11.67% of users view the registration form, with an overall sign up rate of 6.07%, which equates to a 36% and 37% increase respectively.
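Those relative increases fall straight out of the before/after rates; here’s a quick check of the arithmetic:

```python
# Funnel rates quoted above: previous month vs. the new CTA period
before = {"registration form viewed": 0.0856, "signed up": 0.0442}
after  = {"registration form viewed": 0.1167, "signed up": 0.0607}

for step in before:
    lift = after[step] / before[step] - 1
    print(f"{step}: {lift:.0%} relative increase")
# → registration form viewed: 36% relative increase
# → signed up: 37% relative increase
```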
The experiment validated my hypothesis, but I think it opens up a lot more questions. There’s a seemingly infinite number of tests that could be run, so at what point are my efforts better spent elsewhere? How do I know that optimizing this landing page is my best choice, and not designing an entirely new landing page?
I think using my judgment to prioritize tests, and using what I learn to guide future tests, will keep me on the right path. It also doesn’t hurt that there are amazing sources of information out there, such as growthhackers.com, Optimizely’s blog, and Visual Website Optimizer’s library of case studies. And a quick Google search will introduce you to a number of in-depth guides on how to A/B test.
I think one of the most important lessons Chris taught us at Techstars was that having a scientific method is absolutely necessary to keep your tests moving forward and to prevent hindsight bias. If you don’t write down exactly what you think will happen before running a test, backwards rationalizations can muddy your perception. You may end up convincing yourself that you were right all along, or that you’re “not really surprised the test failed.” If you’re writing a real hypothesis, a failed test should surprise you; otherwise, why did you run the test? This will make sure you’re learning from your experiments like any good scientist, and not just spinning your wheels.
Thanks for reading!
Would love to hear about any quick wins or A/B tests you may have done and what kind of results you saw.