The problem with posts like these is selection bias: this is one guy who was surprised multiple times, but statistically, out of everyone running split tests, there should be some guy who was surprised multiple times purely by chance. This court case comes to mind: http://en.wikipedia.org/wiki/Sally_Clark
As someone who has run probably 50 or more split tests, I've been really surprised twice. Most of the time you don't know what is going to convert better, and most of the time it's 10% here and another 10% there at multiple points in your funnel that, compounded, total a 3x in your conversion rate. A 3x might not sound like a lot, but it works out to something like a 10+ fold improvement in your ROI once you compare an advertising spend of $10 that nets you $12 in NPV against one that nets you $36.
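To make the compounding concrete, here's a quick sketch. The individual stage lifts are made-up illustrative numbers, not data from the comment; only the $10/$12/$36 figures come from it:

```python
# A handful of ~10-15% lifts at different funnel stages compound
# multiplicatively. These lift values are invented for illustration.
lifts = [1.10, 1.15, 1.10, 1.12, 1.10, 1.15, 1.10, 1.12, 1.10, 1.15]
combined = 1.0
for lift in lifts:
    combined *= lift
print(f"combined conversion lift: {combined:.2f}x")  # roughly 3x

# ROI comparison from the comment: a $10 ad spend netting $12 in NPV
# vs. the same spend netting 3x the revenue.
spend = 10.0
baseline_npv = 12.0
improved_npv = baseline_npv * 3               # 3x conversion ~= 3x revenue per dollar
roi_before = (baseline_npv - spend) / spend   # 20% return
roi_after = (improved_npv - spend) / spend    # 260% return
print(f"ROI improvement: {roi_after / roi_before:.0f}x")  # 13x, i.e. "10+ fold"
```

The point of the sketch is that a 3x revenue lift on the same spend multiplies the *profit margin* far more than 3x, because the fixed spend is subtracted before computing the return.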
Wow, that Sally Clark story is tragic. They fell hook, line, and sinker into the old Feynman numberplate trap [1]. The insanity of it would almost be funny if someone's life hadn't been destroyed as a result.
Had they restated the odds as "every ten years in the UK, on average, one woman is going to be unlucky enough to have this happen to her", instead of essentially "there's a 1 in 73 million chance this was a coincidence", I can't imagine a conviction on that alone.
And that's assuming that, once you've had one baby die of SIDS, the odds stay constant, which is a crazy assumption, especially as it's such a rare event that we can't possibly have enough data to verify it.
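A tiny sketch of why that independence assumption matters. The 1-in-8,543 figure is the per-family rate actually cited at the trial; the conditional probability below is a purely hypothetical number for illustration:

```python
# The prosecution squared the per-family SIDS rate, which assumes the
# two deaths are independent events.
p_first = 1 / 8543                 # rate cited at trial for families like the Clarks
p_independent = p_first ** 2       # the infamous "1 in 73 million"

# If a shared genetic or environmental factor makes a second death more
# likely after a first, the joint probability is p_first * P(second | first).
# This conditional value is made up for illustration, not real epidemiology.
p_second_given_first = 1 / 100
p_dependent = p_first * p_second_given_first

print(f"independence assumed: 1 in {1 / p_independent:,.0f}")  # 1 in 72,982,849
print(f"with dependence:      1 in {1 / p_dependent:,.0f}")    # 1 in 854,300
```

Even a modest correlation between the two deaths moves the joint probability by orders of magnitude, which is exactly why squaring the single-death rate was so misleading.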
You're right. I found it via that page, and I didn't read it that carefully, so I didn't realize it came from Quora.
Now I feel dumb, because I hate it when people link to a source that isn't the original and doesn't really add anything. Though at least it has the value-add of not being down ...
> I learned that I can eke out an extra 5-15% from improving the subject line…or 500% from creating a better offer.
This is something I see people overlook a lot in the startup world. Many small startups spend too much time and money worrying about A/B testing when no amount of testing will bring them the improvement they need. They would have more success by increasing their distribution or by presenting better offerings.
But is that really true? Is there some reason why A/B testing necessarily means "testing small, incremental changes"? It seems to me that once you've got your buggy, you may want to test what kind of things should power it (horses, rabbits, internal combustion engines, windmills, etc.) and see what performs best.
Coming up with large changes to test depends a lot more on inspiration and gaining new knowledge than the kind of stuff I generally see people running in A/B tests. Sure, you could test large changes too (and I have), but A/B testing works best when comparing apples with better apples. The biggest problem in testing diverse possible solutions to a problem is that you will probably have to restate the metrics by which you evaluate your total offering (the 'fitness').
You could compare cars and horse-drawn buggies on a hundred fronts or more, and which is 'best' depends more on circumstance than on the defining elements of either solution; they are too different to be compared meaningfully except on their general utility. In that case the market seems to be best at deciding what is better. But how would you, as a buggy manufacturer, test your newly minted car against your buggies without an infrastructure to support the car?
"The critics see the small experiments used to market A/B testing to internet businesses and think it is the totality of the method. They are right that companies usually don’t A/B test large changes. ... That doesn’t mean these experiments aren’t done ... Google, for example, is currently experimenting with both Android and Chrome OS in more or less the same space."