The other day I tweeted this about A/B testing.
I got a few responses saying A/B testing is better than nothing, which I completely agree with. It shows that the team cares about doing the best thing for users. Good, keep going.
It’s just that in the early stages of a project we are trying to answer questions like
- who are our users?
- what problems are we trying to solve?
- how do we measure success?
- what will people get out of this service?
- how can we constrain our reckons and turn them into evidence-based decisions?
These are the hard things that help us understand why people prefer option A or option B, not just which one they prefer. Having that deeper understanding makes things much easier later on. Doing the hard work to make it simple, etc. Saying “don’t worry about that, we’ll A/B test it” feels like a cop out to me. The best thing a team can do is combine contextual research, so we have that deeper understanding, with quantitative tools like A/B testing when we want data to test the hypotheses we have formed.
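For what it’s worth, here’s a minimal sketch of what “using the data to test a hypothesis” can look like once you do run the test. The numbers are made up, and the chi-squared check is just one common way to ask whether a difference is more than noise; it’s an illustration, not a prescription.

```python
# Minimal sketch: did variant B really outperform variant A?
# The counts below are invented for illustration.
from scipy.stats import chi2_contingency

visitors_a, conversions_a = 4_000, 320   # variant A
visitors_b, conversions_b = 4_000, 368   # variant B

# 2x2 table: converted vs not converted, for each variant
table = [
    [conversions_a, visitors_a - conversions_a],
    [conversions_b, visitors_b - conversions_b],
]

chi2, p_value, dof, expected = chi2_contingency(table)

print(f"A conversion rate: {conversions_a / visitors_a:.1%}")
print(f"B conversion rate: {conversions_b / visitors_b:.1%}")
print(f"p-value: {p_value:.3f}")

# A small p-value suggests the difference isn't just noise.
# It still doesn't tell you WHY people preferred one option.
# That's what the contextual research is for.
```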
As a cynical side note™, very often the things we were going to A/B test never actually get tested. They are thrown onto the great pile of ‘oh yeh, didn’t we talk about that?’ actions.