Digital business: automated at heart

Digitalization is fundamentally enabled by three core technologies: software, data and artificial intelligence. The common denominator of a digitalized business is that automation is at its heart. Digital technologies allow for automation to a much more significant extent than traditional technologies. We see this reflected in companies: whereas in traditional companies, humans are supported with automation, in digital businesses, automation of the core business processes has removed humans from the equation (almost) entirely.

One of the key reasons for the high degree of automation is that digital businesses typically employ continuous, rather than transactional, business models. This means that there’s a continuous relationship with the customer’s business, continuous delivery of new value-adding software, data-driven insights and AI models, and continuous monitoring and logging. Activities that we might accept doing manually once or twice per year rapidly become subjects for full automation if they need to be conducted monthly, weekly, daily or even more frequently.

In a digital business, all core business processes are automated to the highest extent possible and controlled in an automated fashion using quantitative performance data. In fact, we can conceptualize a digital business as consisting of three circles of activities. The core circle consists of the company’s core value delivery business processes. For instance, for an e-commerce website, this includes the presentation of items, recommendations, managing orders and taking payments. These activities have no human involvement and are completely automated. When core value delivery processes can’t be automated fully, such as warehouse tasks, the humans involved tend to be instrumented with data collection and subjected to the same quantitative performance management as the automated parts. The first circle is concerned with operations and activities that support operations.

The second circle of activity involves human actors who use quantitative data for analytics and experimentation. The main focus here is to measure the core business processes and to tune and optimize them. For instance, analytics may show that items that are recommended to customers by the recommendation engine are selected and bought in 0.15 percent of the cases. As the industry average is higher than that, one of the activities in this circle might then be to experiment with different recommendation algorithms using A/B testing to evaluate whether the engine’s success rate can be improved to match the average. The second circle is concerned with tactics that improve the performance of the operational core. It’s important to note that activities in this circle don’t have to be performed by humans. It’s entirely feasible to have a system run autonomous improvement activities that focus on optimizing the core business processes.
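To make the tactical circle concrete, here’s a minimal sketch of how such an A/B test might be evaluated. The numbers and function names are illustrative, not taken from a real system; the statistical workhorse is a standard two-proportion z-test.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert better than variant A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic split: A = current engine, B = candidate engine.
z = two_proportion_z(conv_a=1500, n_a=1_000_000,   # 0.15 percent baseline
                     conv_b=1800, n_b=1_000_000)   # 0.18 percent candidate
print(f"z = {z:.2f}; promote B if z > 1.96 (95 percent confidence)")
```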

Finally, the third circle is concerned with those business activities that are strategic in nature. As strategic activities tend to be about interpreting trends and predicting the future, it can be challenging to quantify them. Typical for activities in this circle is that the focus is on the purpose of the business, the role it plays in its ecosystem and the way it seeks to differentiate itself from, and complement, others.

The three circles of activities differ from each other not just in automation and use of data, but also in cycle time and operating speed. The operations circle runs, by its very nature, in seconds, minutes and hours. The tactical circle operates in days and weeks, whereas the strategic circle tends to operate in months and years.

Of course, one can find huge amounts of automation in traditional companies as well. The main difference with digital businesses is the underlying mindset and approach. To exaggerate a bit: in traditional companies, tasks are performed by humans unless it’s too expensive to do so. In digital companies, tasks are automated and performed by systems unless it’s unfeasible or prohibitively expensive to do so.

It’s easy to forget how far automation and digitalization can take a company. In many SaaS companies, the vast majority of business value creation (as in 99+ percent) is conducted fully automatically by systems rather than humans. The funny thing, however, is that in my experience, even in SaaS companies, the majority of management attention is directed towards humans and human processes, even though these represent a very small slice of the business.

Concluding, I find it helpful to think about companies in three distinct circles of activity, ie delivery and operations, optimization and experimentation, and strategy and innovation, each with completely different characteristics, cycle times and success metrics. In my experience, many tend to mix up the activities in the different circles, which leads to confusion and sub-par performance. As a leader, take a step back and reflect on your organization, map the processes and activities to the three circles and identify where there are mismatches that you can address. Going digital is challenging, but the alternative is to remain a traditional company and risk being disrupted.

Why you’re not deploying AI

Imagine the following scenario. A (sizable) team at a large company writes customer documents in response to customer requests. They request help from the automation team to reduce their repetitive tasks. The automation team brings in an AI company, which develops an ML model that generates the customer documents automatically and virtually eliminates the need for human involvement. The prototype works amazingly well and both the AI company and the automation team are eager to move it into production, as it promises significant cost savings as well as improvements in the speed and quality of response to customers.

Sounds like a success story, right? Well, in this case, as well as in other cases that I’ve seen, the company managed to snatch defeat from the jaws of victory. The solution wasn’t deployed. It’s probably not the end of the story and hopefully, the solution will be rolled out in the future, but the company is experiencing a significant delay in reaping the benefits from what should have been a straightforward and obvious deployment.

The pattern as I’ve seen it is that if AI is used to improve some product capability and it affects neither existing organizational units nor existing processes, the deployment of the ML/DL model is quick and fairly seamless. The moment, however, that existing organizations or teams see their existence threatened, are asked to shrink significantly or have to adjust existing work processes to achieve the benefits, things rapidly grind to a halt and many in the organization start to backpedal.


Evolution stages of adopting ML/DL

In an earlier post, I presented the stages that companies go through when adding ML/DL to products. As shown in the figure, the first stage is experimenting & prototyping. Every company I work with has a host of those initiatives ongoing. However, when looking to transition successful prototypes and proofs of concept to actual deployment, we run into roadblocks.

The first and obvious roadblock is that you now need AI engineering to ensure an industry-strength, production-quality deployment of AI. As I discussed in an earlier post, that requires a set of solutions, architecture, infrastructure and processes that are often not recognized by data scientists and people without an engineering background.

The second and more important roadblock is that the potential of AI is to significantly reduce cost while improving speed and quality. The fact is that for most companies, the primary cost driver is salaries. So, reaping the benefits of AI means reallocating or releasing the people that are currently doing the job that will be taken over by ML/DL models.

'It’s almost painful to write it down and not feel like an idiot'

This is so obvious that it’s almost painful to write it down and not feel like an idiot, but I keep running into situations like the scenario that we started with. Everyone loves AI and it’s at the top of the hype cycle. Everyone talks about all the great opportunities and benefits that AI will bring to their organization and society at large. But when it hits close to home, the willingness to change and reap the benefits is suddenly severely lacking.

This is a problem as the competition isn’t sitting still. You need to go through the painful process of reaping the benefits by reducing cost, redesigning processes, reallocating people and aligning your organization with the benefits that AI can offer. As I wrote earlier, it’s not about what AI can do for you; the question is how you redesign your entire organization, business models, products, customer engagement models and ways of working to align with digitalization, meaning software, data and AI. This is the only way to capture the potential of AI to the full extent and the only way to stay competitive in the long run.

In the startup community, large companies are often referred to as dinosaurs, ie slow, set in their ways and consequently ripe for disruption. Don’t be a dinosaur!

Don’t be like everyone else

This week, I had a wonderful conversation with the CEO of a midsized company (around 1,000 employees) to discuss business strategy and the implications for technology strategy in the overall context of digitalization. As the company supports its customers with digital solutions, it’s an example of the part of the economy that’s doing really well under the current circumstances. It’s a good reminder that it’s not so much that the economy is cratering, but rather that quite fundamental and accelerated shifts towards digitalization are taking place in it. It’s just that news outlets prefer to talk about bad news (companies going out of business) instead of good news (the business of some companies is booming) because bad news sells more ads (if it bleeds, it leads).

The discussion with this CEO focused on the positioning of the company. It has much smaller competitors, as well as those that are (much) bigger, and the question becomes how to differentiate your organization from these competitors. The simple answer is to do what they do but better or cheaper. However, as H.L. Mencken quipped (the line is often misattributed to Einstein), for every problem, there’s an answer that’s simple, elegant and wrong.

The slightly less simplistic answer is to focus on one of the corners of the competitive triangle (customer intimacy, technology leadership or operational excellence) and organize your company based on that. Again, this perspective isn’t necessarily wrong, but it fails to give guidance as the question then becomes when to use what strategy.

'Commodity, differentiating and innovative functionality each require a different strategy'

In an earlier post, I introduced the three-layer product model where the functionality in a product, a platform or a product portfolio is organized into a layer of commodity functionality, a layer of differentiating functionality and a layer of innovative and experimental functionality. In our discussion, I realized that each of these layers requires a different strategy.

For commodity functionality, the focus should be on operational excellence, as you’re looking to reduce the total cost of ownership for that layer to the minimum possible. This demands that you limit the number of alternative systems delivering this functionality to the lowest possible, preferably one. I still meet companies that have multiple solutions for the same commodity functionality, can’t find the prioritization to reduce the number of alternatives and consequently continue to carry outsized costs. In general, the goal should be to centralize, standardize and prepare for outsourcing the delivery of commodity functionality.

The differentiating functionality needs an alternative strategy: customer intimacy. This functionality is the key reason customers pick us over competitors and consequently, we need to work closely with customers to maximize the value we deliver to them. Here, the introduction of variants may well be justified as long as we’re able to monetize our efforts. At some point, what’s differentiating now will start to commoditize and then the rules of the game change to what we described above.

Finally, for the innovation and experimentation layer, the key strategy should be technology and product leadership. This is where we explore new innovations, which often are technology driven and which hopefully form our future differentiation. The success metric here is the number of things we can try out against our, often limited, budget. By “try out,” I of course mean evaluating ideas with customers. It’s too easy to get hung up on our own set of beliefs. Instead, work with customers and observe. Customers will never ask you for an innovation (and if they do, you’re in bigger trouble than you think) but will use what’s valuable to them.

Back to the discussion with the CEO: we concluded that it’s easy to look at our competitors, typically the larger ones, and consider copying what they’re doing, which typically focuses on standardizing and preparing for scaling. Or, to look at smaller competitors and focus on agility and customer intimacy. Although it’s perfectly alright to be inspired by what others are doing and to “steal with pride,” as leaders it remains your key responsibility to define a business strategy that’s uniquely different from the others in the industry. Being like everyone else lands you in a red ocean where cost and slim margins are the only things you can think about. Instead, be different in a way that matters to customers, find your blue ocean and build a great business. And, to quote Steve Jobs, if you haven’t found it yet, keep looking!

Don’t let your habits define you

This week, I had a meeting with the leadership team of a company that has asked me for help to accelerate their growth. We’ve been reconvening regularly, going through the process of defining who we are and what our purpose is as a business, identifying the key avenues to accelerate growth, creating a plan to execute on and putting operating mechanisms in place to follow up.

The weird thing is that we’ve been consistently running behind the plan in terms of execution and when I pointed this out to them, I got the usual excuses: internal dependencies, external factors outside the control of the team and so on. However, at the core, something else was going on. The team has been working together for more than a decade, during which the company went through some difficult times, and as a result its members have become extremely careful and risk-averse. Over the years, they’ve developed a set of habits that ensure wide safety margins. For instance, any new hiring only takes place after the revenue from customers for the new hire has been guaranteed for a long time to come.

These habits might have been useful at some point in the past, but at this stage, where the company has raised a good chunk of funding, there’s no reason to avoid financial and business risk. Instead, with the whole COVID-19 situation, now is the time to invest and expand the team with the great talent that’s available because many companies are scaling back.

Not only is the current set of habits counterproductive for what we’re looking to achieve, the team even fully recognizes and admits that this is the case. And yet, as individuals and as a team, they struggle to let go of their habits and old ways of working.

This example is an instance of normal human behavior. Even though we tend to think of ourselves as rational beings that are occasionally bothered by these pesky emotions, the reality is that we’re irrational beings that, according to some research, are driven by habits more than 95 percent of the time and that have a tendency to post-rationalize our entirely irrational behavior. The brain is a fantastic story-generating machine and much of the time, it’s generating stories explaining to ourselves why we did what we did.

In many of the companies and teams I work with, I’ve observed the same situation and it’s the leadership team that tends to be at the heart of it. For all the explanations and excuses of why we’re in the situation we find ourselves in, it almost always is the leadership team that’s hampering the company’s development and growth. And in the few cases where there really are external factors at play, it still behooves you as a leader to take responsibility anyway, as it causes you to shift your mindset from being a victim to being the protagonist of your own story.

'Is what you’re doing actually the best course of action under the circumstances?'


I don’t mean to say that leadership teams of companies that aren’t doing so well need to be universally kicked out and replaced. Instead, I’m asking you, dear reader, to spend more time reflecting on what you do, how you behave, why you believe you do these things and to what extent you might be post-rationalizing non-constructive behavior. The only way to break out of these situations is by continuously holding up a mirror to yourself and carefully analyzing whether what you’re doing is actually the best course of action under the circumstances. To me, that’s the most effective, or even the only, way to continuously learn, improve and reinvent yourself and your organization.

As Lao Tzu famously said: “Watch your thoughts, they become your words; watch your words, they become your actions; watch your actions, they become your habits; watch your habits, they become your character; watch your character, it becomes your destiny.” And I believe that we should all aim for the highest destiny we can accomplish in our lifetimes.

What’s with all the Ops?

DevOps, DataOps, MLOps – the number of different “Ops” combinations seems to have exploded over the last year or so. There are manifestos, meetups, lots of blog posts and research articles about these various approaches.

To get the terminology clear, I think it’s good to define what we’re talking about. First, DevOps is a set of practices that combines software development (Dev) and information technology operations (Ops) with the aim to shorten the system development life cycle and provide continuous delivery with high software quality (Wikipedia). The intent is to combine agile software development practices with continuous deployment in order to have a constant flow of new functionality and resultant value delivery to customers. With continuous deployment, new functionality can be rolled out whenever it’s ready, the effects measured and the feedback used to inform the next (rapid) cycle of development.

DataOps is an automated, process-oriented methodology, used by analytic and data teams, to improve the quality and reduce the cycle time of data analytics (Wikipedia). Although this sounds very different from DevOps, in most product companies, it’s tightly interconnected with the products deployed in the field. Consequently, the data analytics primarily serves R&D teams that need to know whether the intended outcomes of their development efforts are indeed accomplished as part of the continuous deployment pipeline.

Finally, MLOps is a practice for collaboration and communication between data scientists and operations professionals to help manage the production machine learning (or deep learning) life cycle (Wikipedia). Whereas traditionally, data scientists would develop a model based on a data set and then move on with their lives, in many current systems, ML/DL models are constantly evolving due to changes to the data or new algorithmic insights and need to be deployed frequently as well. Once deployed, they need to be monitored to ensure that models that perform better in training also perform better during operations.
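To illustrate the monitoring part, here’s a minimal sketch of what such a check could look like. It isn’t a real MLOps library; the function names, thresholds and accuracy figures are illustrative assumptions.

```python
import statistics

def check_model_health(production_scores, training_baseline, tolerance=0.05):
    """Compare a deployed model's rolling accuracy against its training
    baseline and flag degradation, for instance due to data drift."""
    rolling = statistics.mean(production_scores)
    if rolling < training_baseline - tolerance:
        return f"ALERT: production accuracy {rolling:.3f} below baseline {training_baseline:.3f}"
    return f"OK: production accuracy {rolling:.3f}"

# Hypothetical daily accuracy samples collected by the serving infrastructure.
print(check_model_health([0.91, 0.89, 0.84, 0.82], training_baseline=0.92))
```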

'Dev, Data and ML all have to integrate with the same Ops'

In an earlier column, I presented the HoliDev model (see figure). Each of the “Ops” matches one of the three types of development that’s ongoing. The surprising thing, of course, is that the “Ops” in all of these stands for “operations” and the key is to remember that for any system, product or solution, there’s only one operations function taking care of it. So, Dev, Data and ML all have to integrate with the same Ops.

The HoliDev model

Concluding, whatever “Ops” you’re working on, it all has to come together in the same operations and consequently, you’ll need to work in [cross-functional teams](https://bits-chips.nl/artikel/focus-on-outcomes-for-cross-functional-teams/) to ensure that you’re reaching the desired outcomes. The important takeaway, though, is that if an activity delivers value to customers, it deserves to be done often. Only unimportant tasks are done yearly or even less frequently. So, reflect on this: for all the value-adding activities and processes in your organization, how can you shorten the cycle time and create your own “Ops” setup where it matters the most?

AI engineering part 2: data versioning and dependency management

In my last column, I presented our research agenda for AI engineering. This time, we’re going to focus on one of the topics on that agenda, ie data versioning and dependency management. Even though the big data era has been with us for over a decade now, many of the companies that we work with are still struggling with their data pipelines, data lakes and data warehouses.

As we mostly work with the embedded systems industry in the B2B space, one of the first challenges many companies struggle with is access to data and ownership issues. As I discussed in an [earlier column](https://bits-chips.nl/artikel/get-your-data-out-of-the-gray-zone/), rather than allowing your data to exist in some kind of grey zone where it’s unclear who owns what, it’s critical to address questions around access, usage and ownership of data between your customers and your company. And of course, we need to be clear and transparent on the use of the data, as well as how the data is anonymized and aggregated before being shared with others.

The second challenge in this space is associated with the increasing use of DevOps. As data generation is much less mature as a technology than, for instance, API management in software, teams tend to make rather ad-hoc changes to the way log data is generated, as they believe they’re the only consumers of the data and that it’s only used to evaluate the behavior of the functionality that the team is working on. Consequently, other consumers tend to experience frequent disruptions of both the data stream and its content.

The frequent changes to data formats and ways of generation are especially challenging for machine learning (ML) applications, as the performance of ML models is highly dependent on the quality of the data. So, changes to the data can cause unexpected degradations of performance. Also, as ML models tend to be very data hungry, we typically want to use large data sets for training and, consequently, combine the data from multiple sprints and DevOps deployments into a single training and validation data set. However, if the data generated by each deployment is subtly (or not so subtly) different, that can become challenging.
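One mitigation is to validate every batch against an explicit schema before it’s allowed into the combined training set. A minimal sketch of the idea; the schema, column names and file names are hypothetical:

```python
import pandas as pd

# Illustrative log format; in practice this contract would be versioned.
EXPECTED_SCHEMA = {"timestamp": "datetime64[ns]", "sensor_id": "int64",
                   "temperature": "float64"}

def validate_batch(df: pd.DataFrame, source: str) -> pd.DataFrame:
    """Reject log batches whose schema silently drifted between deployments,
    so they can't corrupt the combined training set."""
    missing = set(EXPECTED_SCHEMA) - set(df.columns)
    if missing:
        raise ValueError(f"{source}: missing columns {missing}")
    for col, dtype in EXPECTED_SCHEMA.items():
        if str(df[col].dtype) != dtype:
            raise ValueError(f"{source}: {col} is {df[col].dtype}, expected {dtype}")
    return df[list(EXPECTED_SCHEMA)]

# Combine data from several sprints/deployments into one training set.
batches = [validate_batch(pd.read_parquet(p), p)
           for p in ["sprint_41.parquet", "sprint_42.parquet"]]
training_set = pd.concat(batches, ignore_index=True)
```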

The third challenge is that data pipelines tend to have implicit dependencies that can unexpectedly surface when implementing changes or improvements. Consumers of data streams can suddenly be cut off and as there typically is significant business criticality associated with the functionality implemented by the consumer, this easily leads to firefighting actions to get the consumer of the data back online. However, even if this may be a nice endorphin kick for the cowboys in the organization, the fact of the matter is that we shouldn’t have experienced these kinds of problems to begin with. Instead, the parties generating, processing and consuming data need to be properly governed and the evolution of the pipeline and its contents should be coordinated among the affected players.

'We’re working on a domain-specific language to model data pipelines'

These are just some of the challenges associated with data management. In earlier research, we’ve provided a comprehensive overview of the data management challenges. In our current research, we’re working on a domain-specific language to model data pipelines, including the processing and storage nodes, as well as their mutual connectors. The long-term goal is to be able to generate operational pipelines that include monitoring solutions that can detect the absence of data streams, even in the case of batch delivery of data, as well as a host of other deviations.
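Our DSL is still work in progress, so purely to give a flavor of the idea, here’s a hypothetical sketch of modeling a pipeline as nodes and connectors, embedded in Python rather than a stand-alone language:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                  # "source", "processing" or "storage"
    expect_data_every_s: int   # monitoring: max silence before an alarm

@dataclass
class Pipeline:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def connect(self, upstream: Node, downstream: Node):
        self.nodes[upstream.name] = upstream
        self.nodes[downstream.name] = downstream
        self.edges.append((upstream.name, downstream.name))

    def consumers_of(self, name: str):
        """Make implicit dependencies explicit before changing a node."""
        return [dst for src, dst in self.edges if src == name]

p = Pipeline()
logs = Node("device-logs", "source", expect_data_every_s=3_600)
clean = Node("cleaning", "processing", expect_data_every_s=3_600)
lake = Node("data-lake", "storage", expect_data_every_s=86_400)
p.connect(logs, clean)
p.connect(clean, lake)
print(p.consumers_of("device-logs"))  # -> ['cleaning']
```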

In addition, we’ve worked on a “data linter” solution that can warn when the content of the data changes, ranging from simple changes such as missing or out-of-range data to more complicated ones such as statistical distributions shifting over time. The solution can warn, reject data and trigger mitigation strategies that address the problems with the data without interrupting operations. Please contact me if you’d like to learn more.
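The actual linter is research software, but the core idea can be sketched in a few lines. The thresholds and values below are illustrative assumptions:

```python
import statistics

def lint_batch(values, lo, hi, ref_mean, ref_stdev, drift_sigmas=3.0):
    """Flag simple problems (missing or out-of-range values) as well as a
    more complicated one: the batch mean drifting away from the reference
    distribution observed during training."""
    warnings = []
    present = [v for v in values if v is not None]
    if len(present) < len(values):
        warnings.append(f"{len(values) - len(present)} missing values")
    out_of_range = [v for v in present if not lo <= v <= hi]
    if out_of_range:
        warnings.append(f"{len(out_of_range)} out-of-range values")
    if present and abs(statistics.mean(present) - ref_mean) > drift_sigmas * ref_stdev:
        warnings.append("distribution shift: batch mean outside reference band")
    return warnings

# Hypothetical sensor batch with one missing and one out-of-range reading.
print(lint_batch([21.3, None, 19.8, 250.0], lo=-40, hi=85,
                 ref_mean=20.0, ref_stdev=2.5))
```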

Concluding, data management, including versioning and dependencies, is a surprisingly complicated topic that many companies haven’t yet wrestled to the ground. The difference in maturity between the way we deal with software and with data is simply staggering, especially in embedded systems companies where data traditionally was only used for defect management and quality assurance. In our research, we work with companies to make a step function change to the way data is collected, processed, stored, managed and exploited. As data is the new oil, according to some, it’s critical to take it as seriously as any other asset that you have available in your business.

AI engineering: making AI real

Few technologies create the level of hype, excitement and fear these days that artificial intelligence (AI) does. The uninitiated believe that general AI is around the corner and worry that Skynet will take over soon. Even among those that understand the technology, there’s amazement and excitement about the things we’re able to do now and lots of prediction about what might happen next.

'Rolling out an ML/DL model remains a significant engineering challenge'

The reality is, of course, much less pretty than the beliefs we all walk around with. Not because the technology doesn’t work, as it does in several or even many cases, but because rolling out a machine learning (ML) or deep learning (DL) model in production-quality, industry-strength deployments remains a significant engineering challenge. Companies such as Peltarion help address some of these challenges and do a great job at it.

Taking an end-to-end perspective, in our research we’ve developed an agenda that aims to provide a comprehensive overview of the topics that need to be addressed when transitioning from the experimentation and prototyping stage to deployment. This agenda is based on more than 15 case studies we’ve been involved with and over 40 problems and challenges we’ve identified.

The AI Engineering research agenda developed in Software Center

The research agenda follows the typical four-stage data science process of getting the data, creating and evolving the model, training and evaluating, and deployment. For generic AI engineering, we identify, for each of the stages, the primary research challenges related to architecture, development and process. These challenges are mostly concerned with properly managing data, federated solutions, ensuring the various quality attributes, integrating ML/DL models in the rest of the system, monitoring during operations and infrastructure.

In addition to the generic AI engineering challenges, we recognize that different domains have their own unique challenges. We identify the key challenges for cyber-physical, safety-critical and autonomously improving systems. For cyber-physical systems, as one would expect, the challenges are concerned with managing the many instances of a system deployed out in the field at customers. For safety-critical systems, explainability, reproducibility and validation are key concerns. Finally, autonomously improving systems require the ability to monitor and observe their own behavior, generate alternative solutions for experimentation and balance exploration versus exploitation.
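That last balance is the classic multi-armed bandit problem. A minimal epsilon-greedy sketch, with made-up configurations and reward estimates, purely to illustrate the trade-off (this isn’t our research prototype):

```python
import random

def epsilon_greedy(estimated_value, epsilon=0.1):
    """Mostly exploit the best-known variant, occasionally explore another."""
    if random.random() < epsilon:
        return random.choice(list(estimated_value))       # explore
    return max(estimated_value, key=estimated_value.get)  # exploit

# Hypothetical reward estimates for three controller configurations.
values = {"config_a": 0.72, "config_b": 0.68, "config_c": 0.75}
choices = [epsilon_greedy(values) for _ in range(1_000)]
print({c: choices.count(c) for c in values})  # config_c dominates (~93%)
```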

Concluding, building and deploying production-quality, industry-strength ML/DL systems require AI engineering as a discipline. I’ve outlined what we, in our research group, believe are the key research challenges that need to be addressed to allow more companies to transition from experimentation and prototyping to real-world deployment. This post is just a high-level summary of the work we’re engaged in in Software Center, but you can watch and read or contact me if you want to learn more.

Why Agile matters

Recently, I got an e-mail asking me why we should care about Agile if the overall product development process, including mechanics and electronics, is measured in years and is completely waterfall. The question took me by surprise. I’ve been working with Agile practices for the better part of two decades now and for me it’s a given that fast feedback loops are better than slow ones.

However, after more careful reflection, I realized that the question is based on a few assumptions that, in turn, are founded on our beliefs around our ability to predict. The first assumption concerns our ability to optimally predict requirements for our products months, quarters or years down the line. In many industries where products contain mechanical and electronic components, the production pipeline requires long lead times. Consequently, the product requirements are formulated long before the start of production. The fallacy is, of course, that requirements change all the time due to new technologies becoming available, changing customer preferences, actions taken by competitors and so on. One rule of thumb in software says that requirements change by 1 percent per month – a very conservative estimate if you ask me.
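Even that conservative 1 percent compounds quickly. A back-of-the-envelope calculation (assuming independent monthly churn) shows how much of a frozen specification is stale by the start of production:

```python
# Fraction of requirements changed after a given lead time,
# assuming 1% of requirements change per month (compounding).
for months in (12, 24, 36):
    unchanged = 0.99 ** months
    print(f"{months} months: ~{1 - unchanged:.0%} of requirements changed")
# 12 months: ~11%, 24 months: ~21%, 36 months: ~30%
```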

So, how to respond to constantly changing requirements? There are fundamentally two approaches. Either you adopt agility and continuously respond to changes or you resist requirement changes, reject all that you can and grudgingly accept those that you really can’t ignore. The result of the latter approach is, of course, an inferior product as it’s based on the best insights from years ago.

The second assumption is that we can predict the effect of our requirements. A requirement is defined because we hope to achieve a specific outcome as a consequence of realizing it. We see this most often with usability requirements, but it basically extends to any quality attribute of the system. Online companies use A/B testing of solutions to determine the effects of different realizations of functions and features on users. These companies don’t do that because they’re so poor at requirements engineering, but because the effect of features and functions is fundamentally unknown when it comes to the way humans respond to software functions.

Traditional engineering companies pride themselves on their ability to predict the capabilities of systems before they build them, as engineering offers a set of mathematical tools for modeling, simulating and predicting. These models are typically then confirmed by lab tests and in some cases small-scale tests in real-world contexts before fully committing to a specific design. Although this works quite well in many circumstances, it remains the case that measuring in real-world deployments provides much higher validity than mathematical models and lab tests. As I’ve shared in earlier posts, research by us and others shows that at least half of all the functions in a typical system are never used, or used so seldom that the R&D investment is a waste. So, wherever we can use techniques to deploy slices of functionality or features and measure the effect before building more, we should, as it allows for a major improvement in the effectiveness of our R&D.

'We need real-world experiments to continuously improve'

Although many understand that real-world experimentation concerning usability and user behavior is a necessity, the same is true for all quality attributes. Think of all the security fixes that we need to roll out. Often these concern vulnerabilities to threats that were known before the design of the system was finished. It just turned out that the mitigation strategies that engineers designed into the system didn’t suffice. Similarly, do we know for a fact that the current system design gives us the highest performance, the best robustness, the highest energy efficiency? Of course not! Rather than relying on models and lab tests, we need real-world experiments with our products at customers in the field to continuously improve. The models and lab tests are still needed, but mostly to protect us from the downside of less successful experiments before deployment.

Concluding, if you’re able to perfectly predict the optimal set of requirements for a system or product years ahead of the start of production or deployment and if you’re able to accurately predict the effect of each requirement on the user, the customer and the quality attributes of the system, then you don’t need Agile. In all other cases, Agile (both pre-deployment and post-deployment – DevOps) offers the opportunity for a massive improvement in the effectiveness of your R&D (as measured in value created for each unit of R&D). It’s not that we can’t build products using traditional waterfall processes – of course we can as we’ve done so for decades. The challenge is that we’re much less efficient doing so, which increases the risk of disruption for our company.

Digitalization accelerated

For all the human suffering and economic impact caused by corona, there’s one thing that has just surprised me over and over again these last weeks: companies and professionals just adjust and adjust quickly. Teams and departments that were stuck in old ways of working suddenly have found that it’s entirely possible to work in a remote setup.

During this week’s Software Center steering committee meeting, all the companies present shared how they kept the business going despite everything. Those developing software, meeting customers or doing administrative work were working from home, but things were progressing. Those that required access to complex machinery or worked in manufacturing were still at the company but had taken measures to protect against infection to the best extent possible.

All these new work setups required everyone to spend time adjusting, called for additional infrastructure in some cases and gave IT departments a busy time. But after the first week or so, most people got into the groove and things seem to be moving forward at largely the same rate of progress.

Now, I’m not at all implying that the current situation is ideal. Some companies have shut down or are working at 40-60 percent of capacity. Many experience loneliness due to the lack of human contact. And for all the video conferencing in the world, nothing beats standing together in front of a whiteboard during a brainstorm session. My point is that we’re able to push forward, to conduct R&D, to drive sales and to keep things going to a much larger extent than what I’d initially feared.

'Necessity is the mother of invention'

And, of course, there’s the notion of digitalization. Changes in working behavior, interactions with customers, activities that were viewed as simply requiring physical presence have now digitalized at a phenomenal pace. Necessity is the mother of invention and it’s clear that things that were considered impossible or at least sub-par are suddenly entirely possible and will soon be the norm.

As a leader, you now have a choice to make. Either you change as little as possible with the intent of changing back to the old ways of doing things as soon as possible. Or you use this opportunity to drive as much change as possible and use this as a springboard for accelerating all kinds of changes in your organization, ranging from the business models, interactions with customers and the way sales is conducted to the way you conduct R&D, what and how you automate processes and where you use humans. As the saying goes: Never waste a good crisis!

Focus on outcomes for cross-functional teams

With the vast majority of white-collar staff in companies currently working from home, the normal ways of managing people are disrupted quite fundamentally. Working closely with people in such a way that you can tell them what to do is much more difficult when you’re not physically in the same place.

Similarly, many organizations rely on meetings to align and coordinate work that crosses team and function boundaries. Because virtually all meetings take place online, their effectiveness is even lower than usual. I’ve already talked to people who are literally stuck behind their computers for back-to-back online meetings for ten hours in a row.

Rather than complaining about the situation and the inefficiency of working in this way, I’d like to outline an alternative approach: move to cross-functional teams that get tangible, quantitative outcomes as their target and that are otherwise left to their own devices on how to accomplish these outcomes.

Although it’s obvious that in almost all contexts, this is a better approach, few companies, especially traditional ones, are adopting it. The reasons are many, but some of the primary ones include:

- a lack of clear quantitative goals for most individuals and teams – in [last week’s post](https://bits-chips.nl/artikel/why-you-dont-define-desired-outcome/), I wrote about the reasons for this;
- a need for control by management – in my experience, especially in more traditional companies, the culture tends to lean towards hierarchy;
- the Taylorian mindset of dividing every end-to-end task into multiple slices, giving each slice to a department, function or team and assuming a waterfall-style handover process;
- the belief that most work is repetitive and can be optimized by asking individuals and teams to specialize on their narrow slice of the work.

These reasons may have been justifiable at some point in the past, but this is most certainly no longer the case. All work that’s repetitive these days is automated and if it isn’t, you’d better automate it soon. This means that the only work left for humans is the work that we’re best at: complex, unique tasks that require creativity and a variety of skills to resolve.

'As coordination cost is orders of magnitude lower, the efficiency of work is much higher'

Cross-functional teams are uniquely suited for taking on this type of work. By establishing the team around the skills expected to be required for the task, people with different skills can together work on addressing the challenge. As coordination cost within teams is orders of magnitude lower than coordination across teams, functions and departments, the efficiency of work is much higher.

The second aspect of the approach is that rather than telling teams what to do and how to work, they receive tangible outcome targets. It’s then up to them to figure out how to achieve these targets. This may require experimentation with customers and products, prototyping, and so on.

As we describe in our work on value modeling, the outcome targets should be part of a hierarchical value model that links the top-level KPIs for the business with the mid-level and team-level targets. So, all teams have targets on which to focus but also guardrail metrics that they’re not allowed to affect negatively.
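To make the notion of targets plus guardrails concrete, here’s a minimal sketch of a team-level slice of such a value model. All metric names and numbers are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    current: float
    target: Optional[float] = None  # outcome target the team optimizes for
    floor: Optional[float] = None   # guardrail: must not drop below this

# A team-level slice of the value model: one outcome target, two guardrails.
team_metrics = [
    Metric("trial-to-paid conversion", current=0.041, target=0.050),
    Metric("30-day retention", current=0.97, floor=0.96),
    Metric("customer satisfaction", current=4.5, floor=4.3),
]

for m in team_metrics:
    if m.floor is not None and m.current < m.floor:
        print(f"guardrail breached: {m.name}")
    elif m.target is not None:
        print(f"{m.name}: {m.current} -> target {m.target}")
```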

With most of the people in industry working from home, perhaps the time has come to reinvent your organization along these principles; instead of suffering through the disruption, use it to lift your organization to the next level. Focus on setting outcome targets, not on telling people how to get there.