The Maximum Minimum Viable Product

Estafet consultants occasionally produce short, practical tech notes designed to help the broader software development community and colleagues. 

If you would like to have a more detailed discussion about any of these areas and/or how Estafet can help your organisation and teams with best practices in improving your SDLC, you are very welcome to contact us at enquiries@estafet.com 

Evolving Beyond the Initial MVP

In software development, it’s become generally accepted that it’s always best to strive for and deliver the Minimum Viable Product (MVP), right? Delivering just enough features for your initial users reduces risk, yields quicker returns, and allows you to validate your product faster. I’ve certainly found this to be the case, and it’s always something I try to keep in mind. But what about once your initial MVP has been released and is successful?

In almost all cases, new feature requests come in to capitalise on and continue that success. Generally, the MVP approach is applied to the first release, so what do you do when you add the next feature?

“Well, this is still a new-ish project… there is a massive push from the business to deliver as soon as possible… still the need to justify every story point… let’s add the MVP of the new feature onto our initial project MVP!” 

Okay, this may no longer strictly fit the definition of an MVP, but the term often persists in discussions! I suppose it could be justified, as this is the initial release of the new feature. All projects vary, but taking the quickest ‘easy option’ here may often be the best decision; after all, despite the successful first release, your fledgling project is still vulnerable and still needs to prove itself.

Then the next feature request arrives, and that pressure to release hasn’t relented, so we add the MVP of this feature, and then the next feature, and the next, and so on.  After a while, you may find that new features are taking longer and longer due to an increasingly complex, bloated code base. So is there a point where you need to slow down, re-evaluate, refactor, and stop aiming for the MVP? The maximum minimum viable product?

The Problem of Applying MVP Upon MVP

We all try to write maintainable code that is easily extendable, but sometimes, due to time constraints, we take the simpler or ‘MVP’ quick option. We may also be unaware of new requirements coming down the line which, had we known about them, might have changed our design. For example, if you don’t know that a piece of code will later need to be conditional, you’re unlikely to apply a design pattern to deal with it. When you then make it conditional, you may not know that further conditions will follow, so you’ll likely just add an if/elif/else structure. Over time, however, the number of elif branches can keep growing and growing.
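To make this concrete, here is a minimal sketch of how such a chain tends to grow. The platform names and the `upload` function are purely illustrative, not from any real codebase: each new platform adds another elif, and the same chain often ends up duplicated in several places across a service.

```python
def upload(platform: str, payload: bytes) -> str:
    """Hypothetical service code: every new platform grows this chain."""
    if platform == "aws":
        return f"uploaded {len(payload)} bytes to S3"
    elif platform == "azure":  # added in a later requirement
        return f"uploaded {len(payload)} bytes to Blob Storage"
    elif platform == "gcp":  # added later still, in every place the chain appears
        return f"uploaded {len(payload)} bytes to Cloud Storage"
    else:
        raise ValueError(f"unknown platform: {platform}")
```

Each branch is trivial on its own; the trouble starts when the same conditions are repeated in many functions and a new branch must be added to all of them.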

A Personal Example: Time to Refactor

It’s exactly the example above that I encountered on a recent project. At a few different points in a large service, there were if/elif/elif/elif…/else blocks scattered around, all with the same conditions, based on which service or platform it would be connecting to. In the initial release it was AWS; a further requirement added Azure, then GCP, then SFTP.

I first came across it when asked to add another connector type and, I admit, I added another elif in all the places I thought it was needed, in an attempt to get the task done quickly. However, once in test, it came back that an edge case was failing. I’d missed a spot! I fixed the issue and forgot about it until, a few weeks later, I picked up a new ticket to add another connector type.

At this point, it was clear that it was unlikely to be the last connector type that would need to be added (arguably it should have been clear the previous time too), so I agreed with the team that it was worth refactoring, and got a few extra story points for my task. I didn’t do anything revolutionary – I implemented an object factory that would return an object for the required connector type so that I could call methods on the returned object throughout the service without caring about the connector specifics. 

This means that adding a new connector type now just requires a new implementation of the interface, with all the logic in one place. It should be far quicker and cleaner, with far less chance of missing anything! Refactoring this way is in no way groundbreaking, but it does show that a small change can go a long way towards keeping your codebase nimble and future-proof. Sure, this pattern could have been applied during the initial MVP, but it would have required more effort and added complexity that wasn’t obviously necessary at the time.
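As a rough sketch of that refactor (the class, method, and registry names here are illustrative, not the project’s real ones), a registry-backed object factory in Python might look like this:

```python
from abc import ABC, abstractmethod


class Connector(ABC):
    """Common interface: the service calls these methods without
    knowing which platform sits behind them."""

    @abstractmethod
    def upload(self, payload: bytes) -> str: ...


class AwsConnector(Connector):
    def upload(self, payload: bytes) -> str:
        return f"AWS: stored {len(payload)} bytes"


class AzureConnector(Connector):
    def upload(self, payload: bytes) -> str:
        return f"Azure: stored {len(payload)} bytes"


# Adding a connector type now means one new class plus one registry entry,
# instead of a new elif in every chain scattered across the service.
_CONNECTORS: dict[str, type[Connector]] = {
    "aws": AwsConnector,
    "azure": AzureConnector,
}


def make_connector(kind: str) -> Connector:
    try:
        return _CONNECTORS[kind]()
    except KeyError:
        raise ValueError(f"unknown connector type: {kind}") from None
```

The rest of the service can then call `make_connector(kind).upload(data)` without caring about connector specifics, which is exactly why a missed spot becomes much harder: there is only one place where the choice of connector is made.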

Recognising the Tipping Point

So should we insist we drop both the term and the idea of an MVP after the initial release? I don’t think so. Just be careful that it doesn’t become an excuse to always pick the quickest option, especially once your project is more mature. As you progress through the project lifecycle you will probably notice the ‘tipping point’ where adding new features takes longer due to the existing project’s complexity. At this point it makes sense to evaluate your current code against the new requirements coming in and, instead of jumping to a ‘quick fix’ MVP for those requirements, refactor first, as it may well be quicker in the long term.

It’s easy to say but harder to put into practice. In my case, I was lucky it was a simple change that I could just run past the rest of the development team. For larger changes, however, you may need to make your case to stakeholders, who may not be happy if their deadlines slip because you jumped headfirst into refactoring rather than delivering the quick MVP they need for their next feature. Instead, highlight the issues you’re currently facing and the long-term benefits the work would provide. This way you can get their buy-in too. Often you won’t be able to work on it immediately, but time can be allocated further along the schedule. A big enough refactor may even get its own release!

Keeping the MVP Chain Going

Is it at that tipping point before a refactor that the MVP approach should be discarded? Is it the Maximum Minimum Viable Product, beyond which no more MVPs should be added? Well, if we take the term ‘Minimum Viable Product’ at face value, one could argue (mostly for narrative purposes) that it should cover not only requirements but also code simplicity. In that case, your initial MVP from your first release, plus your MVP of new feature 1, plus your MVP of new feature 2, is not equal to the MVP of the final state! It’s now full of bloat and unrequired complexity that can be simplified. So keep aiming for that MVP even past your initial project stages; just keep in mind that a longer-term approach might be needed to achieve it.

Pushing the Max

Of course, one way of keeping that MVP chain going is to not get to the previously mentioned tipping point where a large refactor becomes essential to maintain velocity. Small refactors here and there will always be needed, but perhaps the large, blockbuster, multi-sprint ones can be avoided. How might this be achieved?

Depending on your situation this could be more or less difficult. Imagine a scenario where you need to develop a simple MVP as a proof of concept to demonstrate a product to investors in a short time period. That sounds like a recipe for getting locked in if we then want to extend it in the future. How would we avoid being locked into a simplistic solution without long-term viability? Consider applying the following strategies:

  1. Architect and Estimate for Extensibility: In a perfect world we would architect and implement every part of our solution for extensibility, with the correct design pattern applied to every part of our codebase to match what our requirements will be several years down the road. Except our scenario isn’t set in a perfect world, and, to be honest, neither is your project. So what options are available?
    1. Even in the most time-constrained project, you aren’t going to have all your code in a single big block, so you should push as much as possible to implement extensible code from the get-go – particularly focus on any ‘big wins’ where you know there is going to likely be an extension in future.
    2. Where you see opportunities to apply a design pattern but time constraints don’t immediately allow it, estimate it instead and add it to your backlog. Having it in the backlog means it won’t be forgotten once the solution is proved or the milestone is reached, and you’ll likely be able to come back to it much sooner, while the code is still relatively simple and it won’t be as big a job.
  2. Anticipate Future Needs: Be proactive in identifying potential future requirements. Being thoughtful of where the project may go will allow you to apply either of the above steps (implementing or estimating extensibility) earlier in the process.
    1. If you have a roadmap, great; if not, define one. It doesn’t need to be a formal one that you define and plan with the business (although of course get their input too). A half-hour discussion with the dev team and stakeholders about the likely course, with a few notes taken, will do. Think about the obvious directions the project might take after the initial release so that you and your team can keep them in mind.
    2. Try and spot places where there will likely be future work. It sounds obvious but by consciously keeping it in mind it’s less likely to be overlooked. For example, in my case study above, where there was a connection to AWS it would have been fairly evident there would likely be future connections to other platforms.
    3. Include a step to review extensibility vs future requirements in your PR checklist. Another pair of eyes may identify something the first person has missed. Depending on cost vs benefit, decide whether to fix it now or add it as a backlog item.

Striking the Balance: Final Thoughts

In the dynamic world of software development, embracing the MVP philosophy is a solid foundation. However, as success unfolds and new features emerge, the challenge lies in preserving the essence of the MVP while steering clear of accumulating unnecessary complexity. Recognising the tipping point where adding new features becomes cumbersome is key. Instead of discarding the MVP concept, it’s about embracing a nuanced approach – constantly evaluating the balance between immediate needs and long-term sustainability. By fostering discussions, making informed decisions, and proactively anticipating future requirements, we not only keep the MVP chain alive but also ensure our codebase remains agile, extensible, and ready to weather the evolving landscape of software development.

By Michael Ruse, Consultant at Estafet
