Recently, I got an e-mail asking me why we should care about Agile if the overall product development process, including mechanics and electronics, is measured in years and is completely waterfall. The question took me by surprise: I’ve been working with Agile practices for the better part of two decades now, and for me it’s a given that fast feedback loops are better than slow ones.

However, after more careful reflection, I realized that the question rests on a few assumptions that, in turn, are founded on our beliefs about our ability to predict. The first assumption concerns our ability to optimally predict requirements for our products months, quarters or years down the line. In many industries where products contain mechanical and electronic components, the production pipeline requires long lead times. Consequently, the product requirements are formulated long before the start of production. The fallacy is, of course, that requirements change all the time due to new technologies becoming available, changing customer preferences, actions taken by competitors and so on. One rule of thumb in software says that requirements change at a rate of 1 percent per month – a very conservative estimate if you ask me.
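To put that 1-percent rule of thumb in perspective, here’s a quick back-of-the-envelope calculation. It’s a sketch under one assumption – that the 1 percent compounds monthly over the requirements that haven’t changed yet – but it shows how much a specification drifts over a typical multi-year lead time:

```python
def fraction_changed(months: int, monthly_rate: float = 0.01) -> float:
    """Fraction of the original requirements that have changed after
    the given number of months, compounding the monthly change rate."""
    return 1 - (1 - monthly_rate) ** months

# A three-year lead time from requirements freeze to start of production:
print(f"{fraction_changed(36):.0%}")  # about 30% of the spec has drifted
```

Even at this conservative rate, roughly a third of the requirements are stale by the time a three-year pipeline reaches production.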

So, how should we respond to constantly changing requirements? There are fundamentally two approaches. Either you adopt agility and continuously respond to changes, or you resist requirement changes, reject all that you can and grudgingly accept those that you really can’t ignore. The result of the latter approach is, of course, an inferior product, as it’s based on the best insights from years ago.

The second assumption is that we can predict the effect of our requirements. We define requirements in the hope that realizing them will achieve a specific outcome. We see this most often with usability requirements, but it basically extends to any quality attribute of the system. Online companies use A/B testing of solutions to determine the effects of different realizations of functions and features on users. These companies don’t do that because they’re so poor at requirements engineering, but because the effect of features and functions is fundamentally unknown when it comes to the way humans respond to software functions.
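As an illustration of how such an A/B test is typically evaluated, here’s a minimal two-proportion z-test sketch. The conversion numbers, sample sizes and the 5 percent significance level are invented for illustration, not taken from any real experiment:

```python
from math import sqrt

def ab_test_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: how many standard errors apart are the
    conversion rates of variant A and variant B?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical rollout: each variant shown to 2,400 users
z = ab_test_z_score(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(abs(z) > 1.96)  # True: reject "no effect" at the 5% level
```

The point isn’t the statistics: it’s that only a measurement on real users, not a requirements document, can tell you whether variant B actually outperforms variant A.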

Traditional engineering companies pride themselves on their ability to predict the capabilities of systems before they build them, as engineering offers a set of mathematical tools for modeling, simulating and predicting. These models are typically then confirmed by lab tests and in some cases small-scale tests in real-world contexts before fully committing to a specific design. Although this works quite well in many circumstances, it remains the case that measuring in real-world deployments provides much higher validity than mathematical models and lab tests. As I’ve shared in earlier posts, research by us and others shows that at least half of all the functions in a typical system are never used or used so seldom that the R&D investment is a waste. So, wherever we can use techniques to deploy slices of functionality or features and measure the effect before building more, we should, as it allows for a major improvement in the effectiveness of our R&D.
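One common technique for deploying a slice of functionality to only part of the user base is a percentage-based feature flag. The sketch below (the feature name and rollout percentage are made up for illustration) hashes each user into a stable bucket, so the same user consistently sees the same variant while we measure the effect:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percentage: float) -> bool:
    """Deterministically map (feature, user) to a bucket in [0, 100)
    and enable the feature for users below the rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = (int(digest, 16) % 10000) / 100  # 0.00 .. 99.99
    return bucket < percentage

# Expose a hypothetical new feature to 5% of users; the assignment is
# stable, so each user keeps the same variant across sessions.
users = [f"user-{i}" for i in range(1000)]
enabled = [u for u in users if in_rollout(u, "new-dashboard", 5.0)]
print(f"{len(enabled)} of {len(users)} users see the new feature")
```

Because the hash is deterministic, widening the rollout from 5 to 20 percent keeps the original 5 percent enabled and only adds new users, which keeps the measured effect clean.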

'We need real-world experiments to continuously improve'

Although many understand that real-world experimentation is a necessity where usability and user behavior are concerned, the same holds for all quality attributes. Think of all the security fixes that we need to roll out. Often, these concern vulnerabilities to threats that were known before the design of the system was finished. It just turned out that the mitigation strategies the engineers designed into the system didn’t suffice. Similarly, do we know for a fact that the current system design gives us the highest performance, the best robustness, the highest energy efficiency? Of course not! Rather than relying on models and lab tests, we need real-world experiments with our products deployed at customers in the field to continuously improve. The models and lab tests are still needed, but mostly to protect us from the downside of less successful experiments before deployment.

Concluding, if you’re able to perfectly predict the optimal set of requirements for a system or product years ahead of the start of production or deployment and if you’re able to accurately predict the effect of each requirement on the user, the customer and the quality attributes of the system, then you don’t need Agile. In all other cases, Agile (both pre-deployment and post-deployment – DevOps) offers the opportunity for a massive improvement in the effectiveness of your R&D (as measured in value created for each unit of R&D). It’s not that we can’t build products using traditional waterfall processes – of course we can as we’ve done so for decades. The challenge is that we’re much less efficient doing so, which increases the risk of disruption for our company.

Jan Bosch is trainer of the Speed, Data and Ecosystems training. Get a holistic framework that offers strategic guidance into how you can successfully identify and address the key challenges to excel in a software-driven world.