A/B testing, at its simplest, means randomly showing each visitor one of two versions of a page – version A or version B – and tracking how behavior differs depending on which version they saw. Version A is normally your existing design (the “control” in statistics lingo), and version B is the “challenger,” with one copy or design element changed. In a 50/50 A/B split test, you’re effectively flipping a coin to decide which version to show each visitor. A classic example is comparing the conversions produced by two versions that display different headlines. A/B tests are commonly applied to clicks – on ad copy, landing page copy, or designs – to determine which version drives the desired result more often.
A/B testing is hence also used to decide which of two strategies for selling a brand or service works better, based on analytics that relate landing-page results to the calls to action on those pages.
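The coin-flip assignment and conversion tracking described above can be sketched in a few lines of Python. This is a minimal simulation, not a production testing framework: the variant names, visitor count, and per-variant conversion probabilities (`TRUE_RATES`) are hypothetical values chosen purely to illustrate how a 50/50 split accumulates per-version conversion rates.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the demo is reproducible

# Hypothetical "true" conversion probabilities, used only to simulate
# visitor behavior; in a real test these are the unknowns being measured.
TRUE_RATES = {"A": 0.10, "B": 0.12}

def assign_variant():
    """50/50 split: a coin flip decides which version the visitor sees."""
    return random.choice(["A", "B"])

def run_test(n_visitors):
    """Simulate a test and return the observed conversion rate per version."""
    shown = Counter()      # how many visitors saw each version
    converted = Counter()  # how many of them converted
    for _ in range(n_visitors):
        variant = assign_variant()
        shown[variant] += 1
        if random.random() < TRUE_RATES[variant]:
            converted[variant] += 1
    return {v: converted[v] / shown[v] for v in shown}

rates = run_test(100_000)
for variant in sorted(rates):
    print(f"Version {variant}: {rates[variant]:.3%} conversion rate")
```

With enough visitors, the observed rates converge toward the underlying probabilities, which is why sample size matters before declaring a winner between control and challenger.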