Marketers have been using A/B testing for many years to optimize the combination of elements meant to attract and engage customers. A/B experiments are usually set up to measure the performance of two alternatives, such as two website designs (colors, placement, text layout), communication strategies (message, context, invocation), subscription models (monthly, yearly, one-time fee), forms, testimonials, and so on. Many times the outcomes are surprising and contradict the initial expectation.
And for that very reason, we test.
Your customers' tastes and behavior are difficult to predict, so if you have an assumption, it is always better to test it. Any test, however, should ideally be based on prior research or an observation that helped identify the problem and formulate the hypothesis.
For example, "Our number of daily signups is very low because most clients are abandoning the webpage halfway through the registration form. There are probably too many fields to fill out."
And the most noteworthy word here is 'probably'. We want proof. So, as a next step, we create a shorter, simpler form to test against the existing control version, which will serve as the baseline for the experiment. Performance can be measured through various KPIs (Key Performance Indicators); however, it is vital to have predefined goals, to stay objective, and to accept the outcome no matter which version you like better. This way, drawing an unequivocal conclusion will be simple, quick, and actionable.
Once the control and test versions are finalized and a KPI to measure performance (clicks, conversions, registrations, sales) is picked, the experiment is ready to launch. In the next phase, your website visitors must be randomly assigned to either version A or version B until a minimum sample size is met for a statistically significant result. You then record the visitors' actions and summarize the results based on the preset indicator (e.g., the number of forms filled out).
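To make the random-assignment step concrete, here is a minimal sketch in Python, assuming each visitor carries a stable visitor ID (the IDs and experiment name below are illustrative, not tied to any particular platform). Hashing the ID gives a roughly even 50/50 split while guaranteeing that a returning visitor always sees the same version:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "signup-form-test") -> str:
    """Deterministically bucket a visitor into version A or version B.

    Hashing the experiment name together with the visitor ID yields a
    stable, roughly 50/50 split across visitors.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Illustrative visitor IDs; each one maps to the same version every time.
for vid in ["u-1001", "u-1002", "u-1003"]:
    print(vid, "->", assign_variant(vid))
```

Deterministic bucketing like this matters because re-randomizing a returning visitor would expose them to both versions and contaminate the measurement.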
At the end of the experiment, there are two questions to answer before any decisions are made:

1. Is the difference between the two versions large enough to justify making the change?
2. Is the result statistically significant?
The answer to the second question is somewhat straightforward: statistical convention suggests a 95% confidence level. Anything below that is a risky path to take, though in some business cases taking this risk can be justifiable.
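As an illustration of how that significance check might be run, here is a sketch of a standard two-proportion z-test in Python; the visitor and conversion counts are hypothetical, chosen only to show the mechanics:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Test whether version B's conversion rate differs from version A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical results: 5,000 visitors saw each version of the form.
p_a, p_b, z, p = two_proportion_z_test(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
# At a 95% confidence level, declare significance only if p < 0.05.
```

In this made-up example, the test version's 9.2% conversion rate beats the control's 8.0% with p ≈ 0.032, so the result would clear the 95% bar.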
The first question, by contrast, is almost always a business decision. If the result of the test version is not clearly much better, someone on your team will need to make the executive call: are you confident that the benefits of making the change will surpass its costs?
A/B testing should be common practice for any business that generates sales leads. Marketing is the name of the game in lead gen, and A/B testing is vital to successful marketing. However, A/B testing can go beyond the marketing side of lead generation.
At boberdoo, we have been working on a new feature that will help A/B test a lead generation company's lead sale and distribution process in an effort to optimize lead revenue. Keep an eye out for this new feature, and if you're interested in learning more, sign up for our newsletter to stay on top of the latest news!