Finding your AI business case

Having worked with companies on the use of AI, I’ve noticed an interesting pattern: although most of the attention goes to algorithms, data storage infrastructure, and the training and evaluation of applications, the hardest part very often seems to be coming up with a promising concept in the first place. When exploring promising concepts, many start to realize that taking the resulting ML model from the prototyping phase to real deployment is a major challenge that requires changes to existing customer engagement models, product architectures, ways of working and the data collected, and often even runs into legal constraints.

'Exploring promising concepts requires exploring both the potential business benefits and the expected cost'

Exploring promising concepts, of course, requires exploring both the potential business benefits and the expected cost of introducing a machine or deep-learning model in a product, solution or service. However, my observation is that many struggle quite a bit with coming up with potential concepts that exploit the benefits that ML/DL models provide.

Earlier, we introduced the HoliDev model, which distinguishes between requirements-driven, outcome-driven and AI-driven development and claims that each type of development has its own characteristics. AI-driven development thrives where, on the one hand, there’s sufficient data available for training and operations and, on the other hand, we’re looking to solve an inference problem that’s particularly hard to solve without ML/DL techniques as there’s no clear algorithmic approach. In general, we focus on three situations that provide the key preconditions for a successful AI concept, ie removing hardcoded responses, using ignored data and revisiting negative-RoI use cases.

First, in situations where the system response is hardcoded, there can be a significant benefit in providing a response to each request based on the available information. The obvious example is the online advertising space, where companies like Google and Facebook are constantly looking to create more accurate profiles of users in order to serve more relevant ads, rather than showing people a random ad. Especially when a good algorithmic approach is lacking, AI models can provide better responses by training on the available data.

Second, there are numerous situations where available data is simply ignored because humans haven’t been able to detect patterns in it and consequently fall back on a mathematical approach to solve a particular problem. An interesting example can be found in control systems, where several companies are working to complement or replace traditional P, PI, PD and PID controllers with AI models. The reason is that traditional controllers operate based on a theoretical model of how a system is supposed to behave in response to control signals. In practice, no real-world system responds completely in accordance with the theory and AI models can improve the quality of control by taking all data into account.
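For reference, here’s a minimal sketch of the textbook discrete-time PID update that such AI models would complement or replace; the gains and values are illustrative only, not from any real system.

```python
# Textbook discrete-time PID controller: the control signal is computed
# from a theoretical model of the plant, which real systems never follow
# exactly. Gains and values below are illustrative only.
class PIDController:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PIDController(kp=1.2, ki=0.5, kd=0.05)
signal = controller.update(setpoint=20.0, measurement=18.4, dt=0.1)
```

An ML-based controller would instead learn the mapping from observed system state to control signal from operational data, rather than relying on the theoretical plant model baked into the three gains.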

Third, the most difficult case is where the cost of collecting data for human interpretation has had a negative return on investment as the effort required to benefit from the data was too high. With the decreasing cost of sensors, computing resources and communication, however, more and more cases exist where collecting the data for use by an AI model is actually becoming profitable.

It’s in this category that the most rewarding AI business cases can be found. One well-known example is sentiment analysis in social media. The amount of data in social media vastly outweighs the ability of even large teams of people to keep track of the sentiment around, for instance, a product or a company, so people didn’t even try. With the emergence of ML approaches, however, it has become entirely feasible to maintain real-time dashboards of the sentiment, and companies use these insights for decision-making.
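To make this concrete, here’s a toy sketch of the kind of ML pipeline that makes large-scale sentiment monitoring feasible. It assumes scikit-learn, and the four “posts” are obviously made-up stand-ins for a real labeled corpus.

```python
# Toy sentiment classifier: TF-IDF features plus logistic regression.
# The training data is a made-up stand-in for a real labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["love the new release", "worst update ever",
         "great support team", "product keeps crashing"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["the new release is great"]))  # expect a positive label
```

Trained on millions of real posts instead of four toy ones, the same pattern scales to exactly the real-time sentiment dashboards described above.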

Concluding, for all the focus on AI algorithms, data and training, one of the most challenging activities remains the identification of interesting business cases and evaluating the feasibility and desirability of each case. I’ve discussed three categories of cases that can provide inspiration for identifying your AI business case.

Why your strategy fails

Over the last weeks, I’ve encountered multiple situations where an organization (industrial or academic) simply doesn’t have a business strategy, or a strategy concerning a key area of its business. When probed and questioned on the strategy, I’ve observed at least three types of responses.

First, leaders in the company say that there **is** a strategy and that I’m wrong in claiming otherwise. Although I’ve been wrong many times in my life, to me a strategy should provide clear guidance on what tasks and opportunities should be prioritized over others and, above all, what we shouldn’t spend time, energy and resources on. A strategy that fails to specify what we shouldn’t do, to paraphrase Michael Porter, is no strategy.

Second, the company admits that the strategy is high-level and not operational but defends itself by claiming that its key success in the market is to be customer focused and, consequently, it needs to respond to the requests from customers rather than set its own course. Obviously, this is a fallacy as it causes companies to fall into the “make customers happy” trap. It’s impossible to satisfy everyone. Rather, you need to choose what kind of customers you want and then focus on making them happy. This, of course, is a strategic choice.

Third, especially in new areas where the company has no established business, leaders claim that it’s impossible to formulate a strategy as nobody knows how the market will unfold. This, however, makes them the plaything of more proactive, strategic competitors who will dictate how the market establishes itself. It’s important to avoid an Alice in Wonderland situation where, because you don’t know where you want to go, any direction is equally good.

Although these responses are understandable and human, they lead to a number of serious problems for the company. There are at least three that I’ve witnessed over the years.

First, the company acts tactically and opportunistically. Due to the lack of a clear strategy, individuals at all levels in the organization take tactical decisions that provide them with the most benefit in the short term, without considering the long-term consequences. This results in an accumulation of architectural, technical and process debt, both inside the organization and in the relationships with customers and other ecosystem participants, which over time causes enormous disadvantages: reduced business agility, unreasonable expectations by others and numerous other problems.

Second, there’s a significant risk that different teams in the company pursue opposing local strategies and consequently nullify each other’s efforts, causing the company to expend, sometimes significant, resources without any business benefit. Burning resources without generating business benefits obviously is the fastest road to bankruptcy.

'The “strategy in use” will become whatever everyone feels like'

Third, even if none of the above effects occur at your organization, all employees will, at any point in time, still have way more work than they could possibly hope to accomplish during work hours. In the absence of a clear strategy, individuals randomly prioritize tasks based on personal preferences, expediency and other factors. So, the “strategy in use” will become whatever everyone feels like. In practice, this tends to lead to people doing what they did yesterday, meaning the company gets stuck in the past and fails to evolve and respond to changes in the market.

Concluding, developing and communicating a clear and actionable strategy that represents tangible choices is a critical tool in aligning large groups of people. The alternative is to micromanage everyone, which will cause you to lose your best people as nobody likes being told how to do their job. A successful strategy defines a clear what and why and leaves it to individuals and teams to figure out how.

How to generate data for machine learning

In recent columns, I’ve been sharing my view on the quality of the data that many companies have in their data warehouses, lakes or swamps. In my experience, most of the data that companies have stored so carefully is useless and will never generate any value for the company. The data that actually is potentially useful tends to require vast amounts of preprocessing before it can be used for machine learning, for example. As a consequence, in most data science teams, more than 90 percent of all time is spent on preprocessing the data before it can even be used for analytics or machine learning.

In a paper that we recently submitted, we studied this problem for system logs. Virtually any software-intensive system generates logs that capture the system state and significant events at important points in time. The challenge is that, on the one hand, the data captured in logs is intended for human consumption and, consequently, shows high variability in the structure, content and type of information for each log entry. On the other hand, the amount of data stored in logs is often phenomenally large. It’s not atypical for systems to generate gigabytes of data for even a single day of operations.

The obvious answer to this conundrum is to use machine learning to derive the relevant information from the system logs. This approach, however, faces a number of significant challenges due to the way logs are generated. Based on our review of the literature and company cases, we identified several of them.

First, due to the way the data is generated, the logs require extensive preprocessing, which reduces their value. Second, it’s quite common that multiple system processes write into the same log file, complicating time series analysis and other machine learning techniques that assume sequential data. Conversely, many systems generate multiple types of log files, and establishing a reliable ground truth requires combining data from several of them. As these log files tend to contain data at fundamentally different levels of abstraction, this complicates the training of machine learning models. Third, once we’re able to apply machine learning models to the preprocessed data, interpreting the results often requires extensive domain knowledge. Fourth, developers are free to add new code that generates log entries in ad-hoc formats. This changing format complicates the use of multiple logs for training machine learning models, as the logs aren’t necessarily comparable. Finally, any tools built to process log files, such as automated parsers, are very brittle and fail unpredictably, requiring constant maintenance.
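The brittleness of hand-written parsers, in particular, is easy to illustrate. A minimal sketch, with a made-up log format: the parser below handles exactly one entry layout and silently drops everything else, so a single developer adding an ad-hoc format quietly corrupts the training data.

```python
import re

# Hand-written log parser: it assumes one fixed entry format (made up for
# this example) and returns None for anything else.
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) (?P<message>.*)"
)

def parse_line(line):
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None  # silently drops new formats

print(parse_line("2019-12-01 10:15:30 ERROR disk full"))    # parsed
print(parse_line("[worker-3] ERROR disk full (errno=28)"))  # None: ad-hoc format
```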

We studied the problem specifically for system logs, but my experience is that our findings are quite typical for virtually any type of automated data generation. This is a huge problem for almost all companies I work with: enormous amounts of resources are spent on preprocessing data to get value out of it, yet it’s a losing battle. The amount of data generated in any product, by customers, across the company, and so on, will only continue to go up. If we don’t address this problem, every data scientist, engineer and mathematician will soon be doing little else than preprocessing data.

'Data should be generated in such a way that preprocessing isn’t required at all'

The solution, as we propose in the paper, is quite simple: rather than first generating the data and then preprocessing it, we need to build software that generates data in a format that requires no preprocessing at all. Any data should be generated in such a way that it can immediately and automatically be used for machine learning, preferably without any human intervention.

Accomplishing this goal is a bit more involved than what I can outline in this post, but there are a number of key elements that I believe are common to any approach aiming to achieve it. First, all data should be numerical. Second, all data of the nominal type (the elements have no order or relationship to each other) should be one-hot encoded, meaning that the elements are mapped to a binary string as long as the number of element types. Third, data of the ordinal type can use the same approach or, in the case of non-dichotomous data, a variety of encodings. Fourth, interval and ratio data needs to be normalized (mapped to a value between 0 and 1) for optimal use by machine and deep-learning algorithms. Fifth, where necessary, the statistical distribution of the data needs to be mapped to a standard Gaussian distribution for better training results.
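As a minimal sketch of what these steps look like in practice, assuming scikit-learn and pandas; the column names and values are hypothetical stand-ins for whatever a real system would emit:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, QuantileTransformer

df = pd.DataFrame({
    "event_type": ["start", "stop", "error"],  # nominal
    "severity": [1, 3, 2],                     # ordinal, already numeric
    "latency_ms": [120.0, 480.0, 9500.0],      # ratio
})

# One-hot encode nominal data: one binary column per element type.
onehot = pd.get_dummies(df["event_type"], dtype=int)

# Normalize interval/ratio data to [0, 1].
latency_norm = MinMaxScaler().fit_transform(df[["latency_ms"]])

# Where necessary, map the distribution to a standard Gaussian.
latency_gauss = QuantileTransformer(
    output_distribution="normal", n_quantiles=3
).fit_transform(df[["latency_ms"]])
```

The point is that if the system emitted its data in this shape to begin with, none of this would run as a separate preprocessing stage.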

Accomplishing this at the point of data generation may require engineers and developers to interact with data scientists. In addition, it calls for alignment across the organization, which hasn’t been necessary up to now. However, doing so allows companies to build systems that can fully autonomously collect, train and retrain machine learning models and deploy these without any human involvement (see the figure).

Figure: System logging for machine learning

Concluding, most data in most companies is useless because it was generated in the wrong way and without proper structure, encoding and standardization. Especially for the use of this data in training machine learning models, this is problematic as it requires extensive amounts of data preprocessing. Rather than improving our data preprocessing activities, we need to generate data in a way that removes the need for any preprocessing at all. Data scientists and engineers would benefit from focusing on how data should be generated. Rather than trying to clean up the mess afterwards, let’s try to not create any mess to begin with.

AI is NOT big data analytics

During the big data era, one of the key tenets of successfully realizing your big data strategy was to create a central data warehouse or data lake where all data was stored. The data analysts could then run their analyses to their hearts’ content and find relevant correlations, outliers, predictive patterns and the like. In this scenario, everyone contributes their data to the data lake, after which a central data science department uses it to provide, typically executive, decision support (Figure 1).

Figure 1: Everyone contributes their data to the data lake, after which a central data science department uses it to provide, typically executive, decision support.


Although this looks great in theory, the reality in many companies is, of course, quite a bit different. We see at least four challenges. First, analyzing data from products and customers in the field often requires significant domain knowledge that data scientists in a central department typically lack. This easily results in incorrect interpretations of data and, consequently, inaccurate results.

Second, different departments and groups that collect data often do so in different ways, resulting in similar-looking data with different semantics. These can be minor differences, such as the frequency of data generation, eg seconds, minutes, hours or days, but also much larger ones, such as data concerning individual products in the field vs similar data concerning an entire product family in a specific category. As data scientists in a central department often seek to relate data from different sources, this easily causes incorrect conclusions to be drawn.

Third, especially with the increased adoption of DevOps, even the same source will, over time, generate different data. As the software evolves, the way data is generated typically changes with it, leading to similar challenges as outlined above. The result is that the promise of the big data era doesn’t always pan out in companies and almost never to the full extent that was expected at the start of the project.

Finally, gaining value from big data analytics requires a strong data science skillset and there simply aren’t that many people around who have it. Training your existing staff to become proficient in data science is quite challenging and most certainly harder than providing machine learning education to engineers and developers.

'Every team, business unit or product organization can start with AI'

Many in the industry believe that artificial intelligence applications, and especially machine and deep-learning models, suffer from the same challenges. However, even though both data analytics and ML/DL models are heavily based on data, the main difference is that for ML/DL, there’s no need to create a centralized data warehouse. Instead, every team, business unit or product organization can start with AI without any elaborate coordination with the rest of the company.

Each business unit can build its own ML/DL models and deploy these in the system or solution for which it’s responsible (Figure 2). The data can come from the data lake or from local data storage solutions, so you don’t even need to have adopted the centralized data storage approach before starting to use ML/DL.

Figure 2: Each business unit can build its own ML/DL models and deploy these in the system or solution for which they’re responsible.

Concluding, AI is **not** data analytics and doesn’t require the same preconditions. Instead, you can start today, just using the data that you have available, even if you and your team are just working on a single function or subsystem. Artificial intelligence and especially deep learning offer amazing potential for reducing cost, as well as for creating new business opportunities. It’s the most exciting technology that has reached maturity in perhaps decades. Rather than waiting for the rest of the world to overtake you, start using AI and DL today.

Why your data is useless

Virtually all organizations I work with have terabytes or even petabytes of data stored in different databases and file systems. However, there’s a very interesting pattern I’ve started to recognize during recent months. On the one hand, the data that gets generated is almost always intended for human interpretation. Consequently, there are lots of alphanumeric data, comments and other unstructured data in these files and databases. On the other hand, the size of the stored data is so phenomenally large that it’s impossible for any human to make heads or tails of it.

The consequence is that enormous amounts of time are required to preprocess the data in order to make it usable for training machine learning models or for inference using already trained models. Data scientists at a number of companies have told me that they and their colleagues spend well over 90 percent of their time and energy on this.

'Most of the data is mud pretending to be oil'

For most organizations, therefore, the only way to generate any value from the vast amounts of data that are stored on their servers is to throw lots and lots of human resources at it. Since, oftentimes, the business case for doing so is unclear or insufficient, the only logical conclusion is that the vast majority of data that’s stored at companies is simply useless. It’s dead weight and will never generate any relevant business value. Although the saying is that “data is the new oil”, the reality is that most of it is mud pretending to be oil.

Even if the data is relevant, there are several challenges associated with using it in analytics or machine learning. The first is timeliness: if you have a data set of, say, customer behavior that’s 24, 12 or even only 6 months old, it’s highly likely that your customer base has evolved and that preferences and behaviors have changed, invalidating your data set.

Second, particularly in companies that release new software frequently, such as when using DevOps, the problem is that with every software version, the way data is generated may have changed. Especially when the data is generated for human consumption, eg engineers debugging systems in operation, it’s time-consuming to merge data sets that were produced by different versions of the software.

Third, in many organizations, multiple data sets are generated continuously, even by the same system. Deriving the information that’s actually relevant for the company frequently requires combining data from different sets. The challenge is that different data sets may not use the same way of timestamping entries, may store data at very different levels of abstraction and frequency and may evolve in very unpredictable ways. This makes combining the data labor-intensive and any automation developed for the purpose very brittle and likely to fail unpredictably.
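As an illustration of the timestamp problem alone, here’s a sketch of the alignment work involved, assuming pandas; the data, column names and conventions are hypothetical:

```python
import pandas as pd

# Two data sets covering the same moments, timestamped differently:
# human-readable strings vs epoch milliseconds.
metrics = pd.DataFrame({
    "ts": pd.to_datetime(["2019-12-01 10:00:00", "2019-12-01 10:00:05"]),
    "cpu": [0.71, 0.93],
})
events = pd.DataFrame({
    "epoch_ms": [1575194402000, 1575194406500],
    "event": ["scale_up", "alarm"],
})
events["ts"] = pd.to_datetime(events["epoch_ms"], unit="ms")

# Nearest-timestamp join within a tolerance; anything outside the window
# is silently lost, which is one reason such pipelines are brittle.
combined = pd.merge_asof(
    events.sort_values("ts"), metrics.sort_values("ts"),
    on="ts", direction="nearest", tolerance=pd.Timedelta("2s"),
)
```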

My main message is that, rather than focusing on preprocessing data, we need to spend much more time and focus on how the data is produced in the first place. The goal should be to generate data such that it doesn’t require any preprocessing at all. This opens up a host of use cases and opportunities that I’ll discuss in future articles.

Concluding, for all the focus on data, the fact of the matter is that in most companies, most data is useless or requires prohibitive amounts of human effort to unlock the value that it contains. Instead, we should focus on how we generate data in the first place. The goal should be to do that in such a way that the data can be used for analytics and machine learning without any preprocessing. So, clean up the mess, get rid of the useless data and generate data in ways that actually make sense.


The game plan for 2020

In reinforcement learning (a field within AI), algorithms need to learn about an unexplored space. These algorithms need to balance exploration (learning about new options and possibilities) with exploitation (using the acquired knowledge to generate a good outcome). The general rule of thumb is that the less is known about the problem domain, the more the algorithm should focus on exploration. Similarly, the better the problem domain is understood, the more the algorithm should focus on exploitation.
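A minimal epsilon-greedy bandit, the textbook illustration of this trade-off; the reward probabilities below are made up:

```python
import random

def choose(estimates, epsilon):
    # With probability epsilon, explore a random option; else exploit.
    if random.random() < epsilon:
        return random.randrange(len(estimates))                     # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])   # exploit

true_means = [0.3, 0.5, 0.7]  # unknown to the algorithm
estimates = [0.0, 0.0, 0.0]   # learned value estimate per option
counts = [0, 0, 0]

for _ in range(1000):
    arm = choose(estimates, epsilon=0.1)  # 10 percent exploration
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
```

The less reliable the estimates, the higher epsilon should be, which is exactly the rule of thumb above: the less you know about the domain, the more you should explore.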

The exploration/exploitation balance applies to companies too. Most companies have, for a long time, been operating in a business ecosystem that was stable and well understood. There were competitors, of course, but everyone basically behaved the same way, got access to new technologies at about the same time, responded to customers the same way, and so on. In such a context, a company naturally focuses more and more on exploitation as the reward for exploration is low. This is exactly what I see in many of the organizations I work with: for all the talk about innovation and business development, the result is almost always sustaining innovations that make the existing product or solution portfolio a bit better.

With digitalization and its constituent technologies – software, data and AI – taking a stronger and stronger hold of industry after industry, the stable business ecosystem is being disrupted in novel and unpredictable ways. Many companies find out the hard way that their customers never cared about their product. Instead, the customer has a need and your product happened to be the best way to meet that need. When a new entrant provides a new solution that meets the need better, your product is replaced with this new solution.

'Companies need to significantly increase the amount of exploration'

The only way to address this challenge is to significantly increase the amount of exploration your company conducts – we’re talking real exploration, where the outcome of efforts is unknown and where everyone understands that the majority of initiatives will fail. To achieve this, though, you need a game plan. This game plan needs to contain at least four elements: strategic resource allocation, reduced effort in commodity functionality, exploration of novel business ecosystems and/or new positions in the existing ecosystem, and exploration of disruptive innovation efforts enabled by data and AI.

Many companies allocate the vast majority of their resources to their largest businesses. This makes intuitive sense, but fails to put a longitudinal perspective on the challenge of resource allocation. A model that can be very helpful in this context is the three horizons model, which structures the businesses the company is in into three buckets. Horizon one consists of the large, established businesses that pay the bills today. Horizon two consists of the new, rapidly growing businesses that are still much smaller than the horizon one businesses; these are intended to become the future horizon one businesses. Horizon three contains all the new, unproven innovation initiatives and businesses where it’s uncertain whether things will work out, but that are the breeding ground for future horizon two businesses. Resource allocation should restrict horizon one to at most 70 percent of the total. Horizon two should get up to 20 percent and at least 10 percent of the total company resources should be allocated to horizon three.

Within horizon one, each business should grow its resource usage more slowly than its revenue. That might even mean that a horizon one business growing at 5 percent per year should cut its resource usage by 5 percent per year, as this business is supposed to act as a cash cow funding the development of future horizon one businesses.

In most companies, revenue and resource allocation are closely aligned with each other, but this is a mistake from a longitudinal perspective. A new business will require years of investment before it can achieve horizon one status and this new business can’t fund itself. Of course, you can have it bootstrap itself, but the result will typically be that competitors with a more strategic resource allocation will become the market leaders in these new businesses.

'Once you’ve defined the commodity, **stop** virtually all investment in it'

Second, reduce investment in commodity functionality. Our research shows that companies spend 80-90 percent of their resources on functionality and capabilities that customers consider to be commodity. I’ve discussed this in earlier blog posts and columns, but I keep getting surprised at the lack of willingness of companies to look into novel ways of reducing investment in places where it doesn’t pay off. Don’t be stupid and, instead, do a strategic review of your entire product portfolio and the functionality in your products and, together with customers and others, define what’s commodity and what’s differentiating. Once you’ve defined the commodity, **stop** virtually all investment in it. You need those resources for sustaining innovations that drive differentiation for your products.

Third, many companies consider their existing business ecosystem as the one and only way to serve customers. In practice, however, ecosystems get disrupted and it’s far better to be the disruptor than the disruptee. This requires a constant exploration of opportunities to reposition yourself in your existing ecosystem, as well as an exploration of novel ecosystems where your capabilities might also be relevant.

Finally, digital technologies – especially data and AI – offer new ways of meeting customer needs that you must explore in order to avoid being disrupted by, especially, new entrants. Accept that the value in almost every industry is shifting from atoms to bits, that data can be used to subsidize product sales in multi-sided markets, that AI allows for automation of tasks that were impossible to automate even some years ago and, in general, proactively explore the value that digital technologies can provide for you and your customers. This is where the majority of the resources that you freed up through horizon planning and reducing investment in commodity functionality should go.

Concluding, at the beginning of 2020, you need a game plan to significantly increase exploration at the expense of exploitation in order to identify new opportunities and detect disruption risks and to invest sufficiently in areas that provide an opportunity for growth. This requires strategic resource allocation, identifying and removing commodity, a careful review of your position in existing and new business ecosystems and major exploration initiatives in the data and AI space. It’s risky, it’s scary, most initiatives won’t pan out and customers, your shareholders and your own people will scream bloody murder. And yet, the biggest risk is to do nothing at all as that will surely lead to your company’s demise. Will you allow that to happen on your watch?

Why care about purpose in business?

Peter Drucker famously said that the purpose of a business is to create a customer and a customer is defined as someone who pays for the products and services the company offers. This perspective seems to be shared by many in business: as long as revenue and profits are generated, there’s no reason to bother about anything else. It’s all about the money!

Whenever there’s a discussion about morals and ethics, lip service is paid to those questions, but only if there’s a monetary reason for it. For instance, trading with certain types of industries might be frowned upon by other customers and thus lead to reduced sales. In that case, the revenue loss with existing customers outweighs the additional revenue and, as a result, the company may decide not to serve those industries. Although the outcome may be the desirable one, the rationale for the decision is pecuniary only.

At the same time, there are many companies out there that are purpose driven and explicitly seek to make the world a better place and improve the state of humanity. In the US, Whole Foods and Patagonia are good examples of this. To paraphrase the former co-CEO of Whole Foods, John Mackey: companies need to make money in the same way as our bodies need to make red blood cells if we want to live. But the purpose of our bodies is not to make red blood cells. Similarly, companies need to go beyond the sole focus of making money.

'Interestingly, focusing on purpose proves to be good for making money'

Interestingly, counter to what one might expect, focusing on purpose proves to be good for making money. Research shows that purpose-driven companies have higher profit margins than their competitors. In “Corporate culture and performance”, John Kotter and James Heskett show that over a decade-long period, purpose-driven companies outperform their counterparts in stock price by a factor of twelve.

The typical reasons why a purpose-driven company might do better have to do with more engaged employees and more passionate customers. With Gallup showing that the percentage of employees engaged in their work is in the low teens across the world, it’s clear that significantly increasing that percentage will do miracles for a company’s productivity and output. Similarly, we know that word of mouth is one of the most powerful and cost-effective ways to reach new customers.

So, why are so few companies explicit in expressing their purpose? One of the key challenges, I think, is that there’s an instinctive fear that expressing a purpose will be viewed as negative by at least some groups in society, resulting in alienating some parts of the customer base. As Simon Sinek so eloquently expressed this: “People don’t buy what you do; they buy why you do it!” The flip side of this statement is that the people that disagree with your why won’t buy from you.

Another reason, I believe, is that expressing a purpose may easily alienate employees. Putting such a stake in the ground may cause some of them to shy away from your business, even though they could have added value from a technical perspective. The corollary, of course, is that working with people who aren’t aligned with your implicit mission is demotivating, as you and others may easily end up pulling in different directions.

The primary reason, however, is that, in my experience, many leaders don’t have clarity on their own purpose or on the purpose of the company they lead. And when you yourself are unclear on your professional purpose, it’s difficult to express it clearly to others. The key challenge often isn’t whether an aspect of one’s purpose is positive or not, but rather the relative priority of different aspects. When having to choose between revenue and environmental impact, how much cost savings justify what level of impact? Would your company run an ad like Patagonia’s, showing a jacket with the text “Don’t buy this jacket”? Or, like Tesla, make your patent portfolio publicly available as long as competitors use it to positively affect climate change?

Doctors have the goal of healing patients. Firefighters aim to protect people and property from damage. Teachers seek to educate the next generation. Business can’t just be about making money. We have the obligation to hold ourselves to a higher standard. What’s the purpose of your company? And how does your mission align with it? And what hard decisions do you take to live up to that purpose and mission?

With Christmas and New Year upon us, I encourage all of us to reflect on why we do what we do. What are we doing to contribute to a world that gets better all the time? Because the world **is** getting better and technology is at the heart of that. But it doesn’t happen automatically. It requires us, as technologists, to explicitly focus on the purpose and meaning of what we do.

More process doesn’t help

Over the last weeks, I’ve been to three different conferences where I heard presentations that were variations on a common theme: if we would just add more structure and more process to the topic at hand, if we would only introduce more steps, more checkpoints and involve more people, then all the problems we’re experiencing with this product roadmapping, these innovation initiatives or these business development activities would magically disappear.

Although most would agree that this is obviously wrong, the fact is that in many companies, universities and government institutions, this is exactly what happens. The organization experiences some kind of problem, perhaps even one that gets exposed in the media and makes management look bad, resulting in a top-down order to “fix it”. The subsequent process is familiar to anyone who has been part of it. First, there’s an activity to describe the process that led up to the issue surfacing. This is followed by a review of all the actions and other factors, with the intent of identifying what went wrong. Finally, a new process is introduced or an existing process is updated to address the perceived limitations, holes or weaknesses in the current way of working.

Once the new or updated process is introduced, the next step is to ensure it’s enforced. Obviously, this adds overhead and makes it more difficult to perform tasks efficiently. So, before you jump the gun and start to further complicate the existing processes in your organization, there are five factors I’d like you to consider.

First, one of the concerns that many ignore, but that’s obvious when you think about it, is that the future is fundamentally unknowable. Looking back, we have full knowledge of what has happened and, consequently, it’s obvious what the optimal way to address an issue would have been. However, at the point where a decision needs to be taken, we’re acting under significant uncertainty about its implications.

'Incompetence cannot be cured by more process'

Second, depending on the organizational culture, it may be very difficult to point out that individuals have acted out of a fundamental lack of competence. It’s important to realize that incompetence cannot be cured by more process. Incompetence requires educating people or, if that proves unfeasible, replacing individuals with new people.

Third, the more process is introduced and the more enforcement of process takes place, the more people focus their attention on correctly following the process, rather than focusing on accomplishing the desired outcome. This leads to a fundamental lack of accountability in the organization, with everyone hiding behind having followed the process and failing to take responsibility for the desired results.

Fourth, too much process can cause more problems than it solves. As processes are created to be repeatable and to apply to a large variety of different situations, an overly detailed process definition is, by definition, ineffective in the majority of situations. Especially in organizations that place high value on following due process, the inefficiencies and harm done by blindly following process can become staggering, potentially even to the point of companies being disrupted.

Finally, in most organizations that I work with, processes and methods are developed by people outside the arena, meaning that they won’t be affected by the implications of the process and method definitions. Although they don’t actually perform the job themselves, they have a strong tendency to act as “Monday morning quarterbacks”, a reference to the watercooler discussions on Monday, especially in US companies, where the flaws of a football team’s quarterback in Sunday’s game are dissected. The interesting thing is that the criticism tends to come from people who would never qualify as quarterbacks themselves.

Concluding, before you fall into the ‘more process’ trap, please ask yourself whether it would help to predict the future better, whether your people perhaps lack competence, whether you promote accountability, whether the root cause is perhaps too much process and whether you’re listening to so-called experts that don’t actually have a sufficient understanding of the situation.