Knowing what you don’t know is knowledge

May 17, 2016
Sam Harris

I love reading Tomasz Tunguz’s blog posts over at Redpoint Ventures. He brings genuine data analysis into a mostly conjecture- and anecdote-dominated blogosphere!


His latest is entitled Is A/B Testing A Good Idea for SaaS Startups? It’s a great read, and you should check it out.

Don’t worry, I’ll still be here when you get back.

I appreciate that Tomasz recognizes the limitations of analysis. There are a lot of mentors and bloggers out there who tell us to run as many experiments as possible, or who lament the politics-laden decision-making process of their old company. But as a former data scientist, I am concerned that sample sizes at a pre-seed startup are too small to draw conclusions from most experiments. Knowing what you don’t know is knowledge. Plus, the time and effort required to run a well-designed experiment is greater than what it takes to interview a few key customers and fill in the rest with intuition.

When I was an officer in the Air Force, I used to fly around the world designing tests for various agencies.

I used to ask pilots, generals, and politicians…

“If you were to drop a bomb three times and it hit the target all three times, how confident would you be that it would hit its target 95 out of 100 times in combat?”

“Three out of three? That’s pretty good! I’d be very confident.”

“Actually, we only have 36% confidence the bomb will hit its target 95 out of 100 times.”

The problem, I would explain, was the coarseness of the result (hit vs. miss): binary outputs are a terrible basis for predicting effectiveness from a handful of trials.
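To make that concrete, here is one plausible reading of the 3-for-3 numbers, sketched with an exact (Clopper-Pearson) one-sided lower confidence bound. This is my illustration of why three binary trials say so little, not necessarily the original test-design math:

```python
# Hedged sketch: a Clopper-Pearson lower confidence bound on hit
# probability when every trial succeeded. After n hits in n drops,
# the exact one-sided bound simplifies to p_L = (1 - confidence)**(1/n).

def lower_bound_all_successes(n, confidence):
    """Exact one-sided lower bound on the true hit probability
    after observing n successes in n binary trials."""
    return (1 - confidence) ** (1.0 / n)

# After 3 hits in 3 drops, at 95% confidence we can only claim the
# true hit probability exceeds about 0.37 -- far short of the 0.95
# ("95 out of 100 times") the generals wanted to hear.
print(round(lower_bound_all_successes(3, 0.95), 3))  # 0.368
```

The point of the sketch: three perfect binary trials still leave enormous uncertainty about whether the true hit rate is anywhere near 95%.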

Instead of hit/miss, what if we measured the miss distance and angle of incidence from the intended target? If the bomb needs to land within 10ft of the target, and the three drops were 9.5ft, 9ft, and 9.8ft away, all at different angles, how confident are we? Not very. If instead the three drops were 0.1ft, 0.2ft, and 0.1ft away, then we can be very confident it will be effective. That was one of my many hacks to squeeze tighter statistical conclusions out of a test without actually spending more money (like on more drops).
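Here is a minimal sketch of why the continuous measurement is so much more informative, using the miss distances above. It assumes roughly normal miss distances and uses a naive point estimate that ignores parameter-estimation uncertainty, so treat it as an illustration, not a real test design:

```python
import math
import statistics

def hit_probability(misses_ft, radius_ft=10.0):
    """Naive point estimate of P(miss distance < radius), assuming
    miss distances are roughly normal. Ignores the uncertainty in the
    estimated mean and spread, so it overstates certainty for tiny
    samples -- illustration only."""
    mu = statistics.fmean(misses_ft)
    sd = statistics.stdev(misses_ft)
    z = (radius_ft - mu) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Three drops hugging the 10ft edge vs. three drops dead center.
print(hit_probability([9.5, 9.0, 9.8]))  # ~0.92, below the 0.95 goal
print(hit_probability([0.1, 0.2, 0.1]))  # ~1.0, very confident
```

Even this crude estimate separates the two scenarios immediately, while hit/miss scores both of them as a perfect 3-for-3.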

I wonder if there’s a way to introduce more specificity into the results of a startup website A/B test. Instead of click/no-click, what if we measured milliseconds before bouncing, or some other continuous metric? I haven’t put any deep thought into that, honestly. We get so much better data (at our baby pre-seed scale) from giving a sales pitch or training session to a customer’s team and watching people’s faces for non-verbal feedback on features, keywords, objection-handling, etc. What’s more, when we have a face-to-face interaction with someone and they don’t use our product, we can write an email asking why, and they will actually write back. The hardest part of that process is being honest with ourselves and not seeing only what we want to see.
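For what it’s worth, the milliseconds-before-bouncing idea could be tested with an ordinary two-sample comparison. The sketch below uses Welch’s t statistic on hypothetical dwell times I made up for illustration; the variant names and numbers are assumptions, not real data:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances (larger magnitude = stronger
    evidence the group means differ)."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical milliseconds-before-bounce for two page variants.
variant_a = [1200, 1450, 990, 1600, 1320]
variant_b = [2100, 2400, 1980, 2250, 2600]
print(welch_t(variant_a, variant_b))  # strongly negative: B holds longer
```

With a binary click/no-click metric, the same ten visitors would give almost no signal; the continuous measurement lets even a tiny sample start to speak.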

I have worked in small, medium, and large organizations as an employee, consultant, manager, executive, and entrepreneur. Very few techniques are effective in all situations. There has to be context. We know that face-to-face meetings with small clients are not a go-to-market strategy that scales, but scale isn’t the biggest challenge in a pre-seed startup. Product-market fit is.

As a wise man (Greg Reda) once said, “Although experiments and analytics can often tell you what, they can almost never tell you why.”
