ADAM MCGOWAN: Welcome to episode seven of Ventures in Tech, brought to you by Firefield. This is Adam McGowan, and on today’s show we’re going to discuss the concept of the minimum viable product, or MVP, and the risks of mistaking assumption testing for product development. I’m again being joined by my colleague from Firefield, Henry Reohr, who will be guiding today’s chat. And with that, I’ll let Henry take it away.
HENRY REOHR: [00:34]: Adam, in the startup environment, the term minimum viable product gets thrown around a lot. It also seems to have a number of definitions. Can multiple meanings of MVP all be correct?
ADAM MCGOWAN: [00:47]: Well, if you like to be a bit of a purist, which I think of myself as when it comes to this, I would argue that you go back to the source. In his book, The Lean Startup, Eric Ries created what’s probably, or at least what was initially, the most popularized version of the definition of the MVP. He said that a minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least amount of effort. Now, that left a ton of room for interpretation, and I would argue that the industry has really taken a bunch of liberties with it, and I think I do as well.
HENRY REOHR: [01:30]: Like what? What kind of liberties?
ADAM MCGOWAN: [01:33]: Well, my first one is the question of whether or not the MVP is really a product. If you dig into it, it seems more like a learning loop: this idea that you build, you measure, you learn something, and then you iterate on that. I actually do some mentorship and teaching on The Lean Startup approach and on the idea of MVP creation, and this iterative loop is what I talk about when I do that. And, you know, I think it’s something that sometimes gets a little bit confused, given that product is actually right in the name of the MVP.
HENRY REOHR: [02:10]: I also hear a handful of other names thrown around when people talk about early-stage products. There’s clickable prototypes, proofs of concept, alphas, betas, and several others. Some even seem to be used interchangeably with MVP. How do they all relate to each other?
ADAM MCGOWAN: [02:29]: This is a place where I’d say I get a little bit frustrated, and I feel that way because there’s really a necessity to separate the concept of creating a product from actually testing your assumptions, and I think the names you just shared tend to commingle those two concepts, in some cases in pretty material ways. So if I think about the stages of product development, I’d argue that the proof of concept happens at the earliest of stages, and it tries to help you determine if something can even be done. You then move into what would be more of a prototyping phase, which describes and helps to illustrate how it would actually be done. What you end up with is various versions of completeness, and you can evolve those over time. And I think that if you ask ten different people about the definition of one particular term that you shared, they might give you ten different answers.
When I think about this idea of product testing, one example is what I would consider the alpha test, or the alpha version of a product. This typically means the product is not fully functional and there are likely bugs in it, but really what’s happening is that the owners of that product are looking for some feedback. That evolves into another phase, which is much more commonly used, called the beta, the beta test or the beta release. This is typically a highly functional version of a product, but it is not yet fully tested and it requires more feedback. Something to think about here is that both alpha and beta are still beyond proof of concept and beyond prototyping. You are just dealing with the level of completeness of your actual product, and you’re not testing your assumptions here. You’re testing whether the product works. You’re testing whether or not there are bugs. It’s a different thing, but sometimes these get lumped together with the MVP, and I think they’re very different things.
Separate and apart from that is this notion of assumption testing. So again, as I said, those stages above are about building, and about testing the actual product once it’s past this proof stage, and at that point there’s no requirement to test the product’s underlying assumptions. There are two things we can attempt to do: go about building products, and go about testing assumptions. And I think that’s what the MVP tries to accomplish. But most early versions of a product can, at best, serve one of these two purposes, either assumption testing or product development. There are even a few cases where I think an early version attempts to serve neither of those purposes.
HENRY REOHR: [05:09]: Neither? How can something that a founder’s going to share with a user be neither a product nor an assumption test?
ADAM MCGOWAN: [05:21]: I think it usually happens when something attempts to be a test, attempting to test an assumption. This means it was never intended to be a product yet: it’s not fully functional, it’s not yet ready for market. But the problem is that the actual mechanism used to test isn’t very good. It’s not very scientific, it doesn’t control for a bunch of variables. Or, even worse, it does amount to a pretty scientific test, it just goes about testing assumptions that are the wrong things to test.
HENRY REOHR: [05:53]: What do you mean by that? How do you know whether or not you are testing the right thing? How could there be a wrong test?
ADAM MCGOWAN: [06:01]: Well, for a startup, money oftentimes seems to be, or is assumed to be, their most scarce resource. But the reality is, in the vast majority of cases, it’s not money that’s the most scarce resource, it’s time. And so, if you go back to Eric Ries, you’re trying to “maximize learning over the shortest period of time.” So let’s say you had a hypothesis you were trying to test: “Can I get ten customers to commit to purchase over the next two-week period?” And let’s say that you actually achieve that; you do get those customers over that period of time. What if, in reality, finding small numbers of paying customers is extremely easy, but those customers represent a market that’s way too small to actually sustain your venture? What you’ve done is you’ve created a test, you’ve controlled for it, you’ve got a result, you’ve proved it to be true; the problem is it’s not meaningful at all. It was too easy to achieve. It doesn’t address the bigger issues you’re trying to solve. The right test would have been trying to determine how well this product could attract customers outside of this initial market, outside of these easy pickings, if you will. Those subsequent markets might be necessary to make the product a success, and yet you’ve just achieved a positive result on a test that could lead you down an inconclusive or, in some cases, problematic path.
HENRY REOHR: [07:29]: So how do you first identify and then conduct the right test?
ADAM MCGOWAN: [07:36]: Well, I have thought about this quite a bit, and I think it’s a pretty complicated, lengthy answer. But I actually found a piece of content recently that I think articulates the concept really, really well. I saw it earlier last year in a post on Hacker Noon written by someone named Rik Higham, and he proposed something he calls a RAT.
HENRY REOHR: [07:59]: RAT? R-A-T like the rodent?
ADAM MCGOWAN: [08:03]: Yes, R-A-T, but it stands for Riskiest Assumption Test, and these are the assumptions that, if they’re not proven true, have the most sizeable negative impact on a venture. So let’s take that past example we had, getting customers to commit. The entrepreneur should ask herself, “Is the idea that I can find ten customers the riskiest of all my assumptions?” That’s a modification of the prior question, which was, “Can I get ten customers in two weeks?” Now it’s: how risky is that assumption? Will getting those customers be easy or hard? If you assess whether or not getting ten customers is the riskiest possible assumption you could have, I think it’s pretty clear the answer is no. That is not the riskiest assumption you should be testing. And so what the RAT does is force the founder to iterate and to break these problems, questions, and concerns down into smaller and more explicit components, often addressing issues they haven’t looked at in depth yet.
HENRY REOHR: [09:12]: So now that we’ve added another acronym to the mix, what’s the biggest takeaway you want to share with founders who have products that are at a very early stage?
ADAM MCGOWAN: [09:24]: Well, the first thing I want to do is suggest that people try to forget about buzzwords. One, they are by their very nature a fad; they come and go. Two, they’re open to interpretation: you ask ten people, you get ten different answers. And I just don’t think they get to the core of what you’re trying to deal with.
So I think that what’s most important is for founders to really focus on moving their ventures forward as efficiently as possible. Almost in sort of a fanatical way, be consistently asking yourself, “Is this the most efficient use of my unbelievably limited time and potentially extremely limited money and capital?” And if the answer is no, iterate. Figure out what it is that you’re supposed to be doing that’s more efficient.
I think another very key point is to really think hard about distinguishing this idea of assumption testing from product launching. Again, the MVP kind of tries to do both, but they’re pretty distinct things with pretty distinct objectives. For no two companies will they be the same, but they are pretty distinct for almost every company that I’ve seen.
I’d argue that the priority here should be mostly on the assumptions first and the product second. And so if the most important tests that you need to address first don’t translate into an MVP, then don’t build the MVP yet. This is another major misstep I see all the time: ideas that are not fully baked, that are not fully validated, that aren’t necessarily even a product yet, they’re still just a concept, and people will come to us and say, “I need you to build my MVP.” First of all, you don’t build an MVP, right? You iterate an MVP; you use it as an assumption test. But I think the term is thrown around in the wrong way, the product gets put in front of the assumptions, and I think that’s backwards.
So, with all I’ve said, I think there’s really a lot to this idea of the RAT, even if the acronym isn’t particularly attractive. So what does it do? It helps you uncover these riskiest assumptions and validate them with the smallest, most iterative tests you possibly can. And if performing such tests requires the creation of an early-stage product, then think about creating one. But if it does not, don’t. This assumption that some form of product creation is always the early answer is very common in the startup environment, and I come back to this concept: test first, validate assumptions, arrive at the notion that there’s a product that should be built, and then go about doing it. And so I think that, oftentimes, the notion of the MVP, while in its purest form, when Ries talked about it, made great sense, has played a game of telephone over the years, and now it seems like an M, a V, and a capital P, in that the product gets brought to the front. And I think that if founders are more thoughtful about it, they’ll find themselves being able to validate earlier and probably find a little bit more success with their products.
HENRY REOHR: [12:41]: Thanks Adam. That’s all we’ve got for today. I’m Henry Reohr from Firefield and you’ve been listening to another episode of Ventures in Tech.