The end of scarcity

As it was Thanksgiving in the US last week, I wanted to follow up with a reflection on the notion of scarcity and abundance. Many in industry operate with a scarcity mindset, believing that basically everything we’re concerned with is available only in limited amounts. Whether it’s ‘winning’ a customer, being promoted or getting a project that you asked for, the basic assumption is that either someone else gets it or you get it. It’s a win-lose situation.

The reason for this can, of course, be found in the evolution of humankind. For the hundreds of thousands of years that the predecessors of modern-day humans roamed the earth, everything was driven by scarcity. Food was scarce, safe places to live were scarce, people to partner with safely were scarce, and so on. And this translates into many of our current behaviors in society. In football games, one team wins and the other loses. In television shows, there’s one winner. In computer gaming, the battle royale games go through successive rounds of fights until only one player is left.

For virtually everyone in the western world, scarcity is largely an illusion. We all have access to food, a safe place to live, healthcare, protection from threats, and so on. Using Maslow’s theory as a basis, our physiological and safety needs are largely taken care of and these basic needs were historically where scarcity existed. Our psychological and self-fulfillment needs are typically not where scarcity is an issue. When someone else enters a relationship, it doesn’t reduce your opportunities to enter a relationship. When a colleague of mine publishes a paper, it doesn’t affect my opportunities to publish a paper.

In our economy and business, we have the same misconception of scarcity. Most people seem to miss that the economy is growing all the time. GDP growth means that there’s more value, with money as its proxy, created by an economy. Creating that value and money doesn’t mean that one country had to lose for another to win. In some way, we’re creating value and money ‘out of thin air’ (see Wikipedia for a longer exploration of this topic). Thanks to the digitalization of society, more and more of that value is digital, as can be witnessed by the valuations of technology behemoths such as Microsoft, Facebook and Google. And because of that, our planet’s physical resources are less and less the source of value creation, which is a good thing from an environmental perspective.

The point I’m trying to make is that we live in an age of abundance. Most of us can put virtually all our life energy into self-actualization and creating a positive impact on the world we live in. We can do this through work, volunteering, relationships, community efforts, politics or any other means. But we should view the world around us through the lens of abundance, rather than the lens of scarcity.

In business, this means that a competitor or partner doing well doesn’t mean that you lose opportunities to do well. In fact, several of the startups I’m involved in celebrate when companies in the same space do well as it means that the size of the cake is growing. Especially for new companies, it’s not the competition that’s the concern, but overcoming non-consumption, eg companies that continue to use pen and paper instead of the nifty tool you created. So, instead of competing at the sharp end of the knife with a scarcity mindset, look for ways to create win-win situations and adopt an abundance mindset. There’s more than enough for all of us. Your growth is only helping my growth!

Many who will read this will undoubtedly point to all the pain and suffering that still exist in the world and I don’t want to ignore that or sweep it under the rug. But the fact is that virtually every metric concerning the quality of life in the world, ranging from war deaths and people living in poverty to child mortality and life expectancy, is improving and humanity has never lived in a better age. We live in the age of abundance and, for all the troubles in the world today, I believe it’s good to remember that and be grateful for it.

Towards autonomously improving systems

This week, I attended the International Conference on Software Business (ICSOB 2020) and gave a presentation on autonomously improving systems. The core idea is that software-intensive systems can measure their performance, know what to optimize for and can autonomously experiment with their own behavior.

The history of software-intensive systems can be divided into three main phases. In the first phase, we built systems according to a specification. This specification could either come from the customer or from product managers, but the goal of the R&D team was to build a system that realized the requirements in the specification.

Evolution of software-intensive systems

Many online companies, but also the first embedded-systems companies, are operating in the next stage. Here, teams don’t get a specification but rather a set of one or more prioritized KPIs to improve. In e-commerce, the predominant KPI often is conversion; in embedded systems, often a weighted mix of performance, reliability and other quality attributes is used. Teams are given the target of improving one or more of these KPIs without (significantly) deteriorating the others. They have to develop hypotheses and test them in the field using, for instance, A/B experiments.
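
To make this concrete, here’s a minimal sketch (my own illustration, not taken from any particular company) of how a team might evaluate such an A/B experiment against a conversion KPI using a standard two-proportion z-test. All names and numbers are invented.

```python
# Minimal sketch: evaluating an A/B experiment against a conversion KPI.
# All figures are illustrative, not from any specific system.
from statistics import NormalDist

def conversion_ab_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: did variant B change conversion vs. baseline A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return {"uplift": p_b - p_a, "z": z, "p_value": p_value,
            "significant": p_value < alpha}

# Example: 4.0% vs. 4.6% conversion over 10,000 sessions per variant
print(conversion_ab_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000))
```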

Although the second stage is a major step forward for many companies, the problem is that it’s still the team doing all the heavy lifting. Running many A/B experiments, in particular, can be quite effort-consuming. The next step that some companies are starting to explore is to allow the system to generate its own experiments with the intent to learn about ways to improve its own performance. Theoretically, this falls in the category of reinforcement learning, but it proves quite challenging to realize this in an empirical, industrial context.

The evolution companies go through to reach this third stage can be put in a model, showing the activities and technologies that can be used at each level. From level 2, we see some autonomously improving system behavior such as adding intelligent, online selection of experiments, as well as automatically generating experiments. This results in all kinds of challenges, including predicting the worst-case performance of the generated alternatives. If a system autonomously generates and deploys experiments, some of these experiments can exhibit very poor performance, meaning the system requires models to predict the worst-case outcome for each experiment, as well as solutions to cancel ongoing experiments if performance is negatively affected.
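
As an illustration only, and not a description of how any specific company has implemented this, the sketch below shows the kind of core loop such a system might use: epsilon-greedy selection over self-generated variants, combined with a simple guardrail that cancels an experiment once there’s enough evidence it performs below an acceptable worst case.

```python
# Hypothetical sketch of autonomous experiment selection with a safety guardrail.
import random

class GuardedExperimentRunner:
    def __init__(self, variants, guardrail, epsilon=0.1):
        self.means = {v: 0.0 for v in variants}   # running mean KPI per variant
        self.counts = {v: 0 for v in variants}
        self.guardrail = guardrail                # worst acceptable KPI value
        self.epsilon = epsilon

    def select(self):
        """Epsilon-greedy: mostly exploit the best-known variant, sometimes explore."""
        untried = [v for v, n in self.counts.items() if n == 0]
        if untried or random.random() < self.epsilon:
            return random.choice(untried or list(self.means))
        return max(self.means, key=self.means.get)

    def record(self, variant, kpi_value):
        """Update the running mean; abort the variant if it breaches the guardrail."""
        self.counts[variant] += 1
        n = self.counts[variant]
        self.means[variant] += (kpi_value - self.means[variant]) / n
        if n >= 20 and self.means[variant] < self.guardrail:
            self.means.pop(variant)               # cancel the ongoing experiment
            self.counts.pop(variant)
```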

Evolution model

'We need to start looking into online reinforcement learning'

With the increasing prevalence of AI, we need to start looking into online reinforcement learning in software-intensive systems as this would facilitate autonomously improving systems. This ambition comes with major challenges that we’re now researching. However, I encourage you to start exploring where the systems that you build could autonomously improve their own behavior. Even starting in a small, risk-free corner of the system can be very helpful to learn about this paradigm. The overall goal is that every day I use your product, I want it to be better!

So much data, so little value

Recently, in a discussion with a company about becoming data-driven, I ran into the same challenge I’ve encountered many times before: the company claims to gather so much data, but the amount of value generated from that data is very small. It makes one wonder what underlies these patterns of, apparently, enormous amounts of data being collected but very little of that data being used to create something of value. In my experience, there are at least three factors at play: sense of ownership, local optimization and cost of ‘productizing’ data.

A typical pattern in many organizations is that teams generating and collecting data for their purposes feel strong ownership of that data and don’t want others to prod “their data” with their big, fat fingers. It’s theirs and if anyone else needs similar data, they can go and collect it themselves rather than get it for free from the team.

This leads to many small islands of data that are entirely disconnected and don’t aggregate into something more valuable than the sum of the parts. Teams may brag about all their data, but nobody else can use it.

Any team that decides that they need data to improve the quality of their decisions will focus on their own challenge and only collect what they need at the level of detail, frequency and aggregation that they need. In addition, they can decide on a moment’s notice to fundamentally change the way data is collected, as well as what data is collected.

The consequence is that the data typically is hard to use outside of the immediate context for which it was generated. This leads to different teams collecting very similar data, due to the lack of coordination. Also, as few think about the broader use, teams that realize that they need data are unable to reuse any of the existing data as it’s so specific to the use case for which it was collected.

If a team decided to make their data available to others, they would need to provide documentation on the semantics of the data and set up a system for finding and downloading data sets. They would have to ensure that changes to the way data is collected, its semantics, and so on, are carefully communicated to stakeholders and, of course, respond to requests from these stakeholders and make changes to the data collection processes not to benefit themselves, but to help others in the organization. And, last but not least, the team may easily be held accountable for privacy, GDPR, security and other concerns that companies have around the stored data.
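
To make the cost of ‘productizing’ data more tangible, here’s a hypothetical sketch of the bare minimum a team would have to publish and maintain: a data contract documenting semantics, versioning and ownership. All names and fields are invented for illustration.

```python
# Hypothetical minimal "data contract" a team would publish to make its data
# usable outside its own context. Field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataContract:
    name: str
    owner: str                      # team accountable (incl. privacy/GDPR concerns)
    version: str                    # bumped on any change to schema or semantics
    update_frequency: str
    fields: dict = field(default_factory=dict)   # column -> meaning, unit, aggregation

usage_events = DataContract(
    name="device_usage_events",
    owner="team-telemetry@example.com",
    version="2.1.0",
    update_frequency="streamed, roughly one event per minute per device",
    fields={
        "device_id": "pseudonymized device identifier",
        "feature": "feature invoked by the user",
        "duration_s": "session length in seconds (not aggregated)",
    },
)
```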

'Teams will actively try to not share data'

The consequence is that, unless a counterforce is present, teams will actively try to not share data because of the effort and cost of sharing with others in the organization. This again leads to lots of data recorded, stored and used for specific, narrow use cases, but no synergies, no end-to-end understanding of systems in the field and the way customers are using it, and so on.

The solution to these challenges is to adopt a hierarchical value modeling approach where you connect top-down business KPIs to lower-level metrics that can be collected directly from the field. By building this hierarchical, acyclic, directed graph and quantitatively establishing the relationship between higher and lower-level factors, we can finally start to generate business value from all the data we collect.
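
As a sketch of what such a value model could look like in practice, consider the toy example below. The structure and weights are invented; in a real setting, the weights would be established quantitatively from data.

```python
# Illustrative sketch of a hierarchical value model: a directed acyclic graph
# linking a business KPI to field metrics, with made-up weights.
value_model = {
    "customer_value":    {"uptime": 0.5, "task_throughput": 0.3, "energy_efficiency": 0.2},
    "uptime":            {"mean_time_between_failures": 0.7, "mean_time_to_repair": -0.3},
    "task_throughput":   {},   # leaf: measured directly in the field
    "energy_efficiency": {},
    "mean_time_between_failures": {},
    "mean_time_to_repair": {},
}

def score(node, measurements, model=value_model):
    """Roll normalized leaf measurements up the DAG to the business KPI."""
    children = model[node]
    if not children:
        return measurements[node]          # leaf metric, normalized to [0, 1]
    return sum(w * score(child, measurements, model) for child, w in children.items())

measurements = {"task_throughput": 0.8, "energy_efficiency": 0.6,
                "mean_time_between_failures": 0.9, "mean_time_to_repair": 0.2}
print(score("customer_value", measurements))
```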

Getting from the current state to this hierarchical value model isn’t easy, if only because most people in the companies I work with find it extremely hard to determine what quantitative factors we’re optimizing for, and if we do know, the relative priority of these factors is a source of significant debate. However, it provides enormous benefits as you can focus data collection on the things that matter, use the data to make higher-quality decisions and build data-driven offerings to customers that you couldn’t have created otherwise. As the adage goes, it’s not about what you have, but about how you use it!

What is the basis of good communication?


An engineer asks:

I have been working as a chief design engineer for many years. However, I am regularly told that I need to communicate better. By now, I’ve gotten to the point where I want to make some improvements, but what exactly do they mean by “communicate better” and how do I do that?

The communication trainer answers:

Good communication skills are necessary to work well together in complex projects. Basically, it’s about knowing how to give a message and knowing how to properly receive the information someone else gives you. The necessary condition for this to succeed is contact between sender and receiver.

Contact is established by paying attention to the person you are talking to. You will therefore have to show interest in the other person. When the other person also pays attention to you, the contact is established. Compare it with calling a colleague. The moment the connection is there and the line is noiseless, you can start discussing things.

By actively listening, you ensure that you understand the other person’s message. You do this by listening, summarizing and asking follow-up questions. Listening means paying attention to the other person. You summarize by saying, for example, “Okay, I understand that …” or “Okay, I hear you say this and that, is that correct?” The other person hears what you have learned and receives confirmation that you have understood correctly, or he has the opportunity to make some corrections or adjust the level of his explanation to the level of your understanding. In both cases, this is pleasant for the person telling the story and it creates clarity.

When sending your message, it is important to be as concrete as possible. Quantify where you can. This always makes your story better. Tune your story to the level of understanding and focus of the other person. What does the other person want to know? Probably your project manager is primarily interested in the schedule, risks or costs and less in technical details. The sales manager is probably more interested in the consequences for his customer than in the problem itself. You can estimate this beforehand and take it into account in your story.

'The game of sending and receiving rarely runs smoothly'

But now the most important thing. The game of sending and receiving rarely runs smoothly. Simply because we all have our own frame of reference and so we don’t understand each other right away. It is therefore extremely important that you pay attention at all times to whether your message is getting through and that you react if it is not.

You notice that your message is getting through when the other person is paying attention to your story and maybe nods in agreement from time to time. However, if the other person suddenly changes position, frowns or starts to say something, this could be a sign that your message is not going down well. It could be that the other person doesn’t understand something, has an interesting association or disagrees. You don’t know until you check.

And checking is exactly what you should do. Continuing with your own story while the other person wanders off in their own thoughts accomplishes little. The moment you notice that something is happening with the other person’s attention, the communication turns one hundred and eighty degrees and you switch from sending to receiving. You ask: “I see you frowning, tell me …” or “You want to say something, tell me …”. In this way, the communication oscillates back and forth and you quickly come to clarity.

Should you immediately stop your story at every twitch of the other person’s muscles? No, but it’s recommended to always remain aware of the other person’s reaction to your story while you’re telling it. Check it at least every so often by asking, “How does it sound so far?” That way, you invite the recipient of your message to respond and you get feedback on how your story is coming across.


Don’t fall for symptoms

Over the last few weeks, I’ve been in discussions with several companies and the same problem occurred: my contacts raised a change they were looking to make in their organizations and asked for my help in realizing it. When I asked how they had ended up in the situation that required the change, most were stumped; it was clear that this hadn’t crossed their minds.

Of course, as engineers, we’re trained to think in terms of solutions and many of us follow that training to the letter. However, developing a solution for a problem that actually is a symptom and the consequence of something else is entirely useless, as the likelihood of the solution solving anything is about as high as the survival chances of a snowflake in hell.

For example, several R&D departments want to introduce continuous deployment or DevOps in their company but run into strong resistance from sales and customer support. Many lament that the people on the other side of the fence just don’t get it. However, when analyzing the situation, it’s obvious that the introduction brings along significant cost and a fundamentally different relationship with customers. And without a business model to monetize the continuous value delivery, there’s no point in adopting DevOps. So, rather than stressing that the folks on the business side are idiots, work with them to figure out how to create business models and customer engagement models that make sense and then work with lead customers to experiment with this new model.

Many know the “rule of 5 whys”: the notion of, after observing a problem, asking “why” five times to go from the observed symptoms to the actual root causes. The challenge is that, in practice, this rule isn’t followed nearly as often as it deserves to be.

A related and subsequent challenge is that even if we’ve identified the root cause and have an idea to solve it, there’s huge resistance in the organization to actually implement everything that’s needed to make progress. Instead, there’s a tendency to support the implementation of something small enough to make it palatable for all in the company. The result frequently is a watered-down, scaled-back proposal that gets broad support but offers little more than a token effort with little real impact on the company. As a general observation, my experience is that the more politicized an organization is, the more it tends to focus on symptoms instead of root causes and on watered-down change initiatives that create the illusion of action but don’t result in any genuine change.

'Build a common platform of a root cause focused understanding'

My advice is obvious. First, use your intelligence and experience to develop a solid understanding of the root causes underlying observed phenomena. Don’t fall into the trap of believing what everyone else believes. Second, use your social and interaction skills to confirm your understanding with others and to build a common platform of a root cause focused understanding. Third, once you’ve established a common understanding, explore multiple (rather than only one) avenues to address the identified root cause and build a platform for the one that has the required impact while minimizing collateral damage. Fourth, when sufficient agreement is in place, move forward with execution in an iterative, experimental manner where you take one end-to-end slice of the organization through the change, observe and measure the impact, adjust accordingly and proceed with the next slice. Throughout all this, find the right balance between driving and being committed to realizing the change and an objective, reflective attitude where you’re able to identify the downsides of the change and the need for adjustment where necessary.

As the Buddhists say, the difference lies in the small window between trigger and response. Rather than instinctively reacting to what life throws your way, pause, reflect and decide on a course of action that actually results in what you’re looking to accomplish. In other words, think rather than react.

Making data-driven real

Recently, I expert-facilitated a workshop at a company that wants to become data-driven. Different from the product companies I normally work with, this company is a service provider with a large staff offering services to customers. The workshop participants included the CEO and head of business development, as well as several others in or close to the company’s leadership team.

In many ways, this looks to be the ideal setup, as one would assume that we have all the management support we need and some of the smartest people in the company with us. This was even reinforced by several in the company sharing that they’ve been working with data for quite a long time. Nevertheless, we ran into a significant set of challenges and we didn’t get nearly as far as we’d hoped.

The first challenge was becoming concrete on specific hypotheses to test. Even though we shared concrete examples of hypotheses and associated experiments when we kicked off the brainstorming and teamwork, everyone had an incredibly hard time going from a high-level goal of increasing a specific business KPI, eg customer satisfaction, to a specific hypothesis and an associated concrete experiment. There are many reasons for this. An obvious one is that many people feel that ‘someone’ should ‘do something’ about the thing they worry about but never spend many brain cycles thinking about what that would look like.

The second challenge was that, for all the data the company had at its disposal, the data relevant to the situation at hand was frequently unavailable. Many companies I work with claim to have lots of data and many in the organization are really surprised that precisely the data they need hasn’t been recorded. When you reflect on it, it’s obvious that this would be the case: the number of hypotheses one can formulate is virtually infinite and, consequently, the likelihood of the data not being available is quite significant.

The third challenge we ran into was that even in the cases where the data was available, it turned out to be aggregated and/or recorded at too low a frequency to be relevant for the purpose at hand. So, we have the data, but it’s in a form that doesn’t allow for the analysis we want to do.

The response to these challenges is, as one would expect, to go out and collect what we need to pursue the experiment to get to a confirmation or rejection of the hypothesis. The funny realization that I had is that the more relevant and important the hypothesis is from a business perspective, the more likely it relates to regulatory constraints that limit what can be collected without going through a host of disclaimers and permissions. So, we ran into the situation that several of the more promising hypotheses were not testable due to legal constraints.

Finally, even if we had a specific hypothesis and associated experiment and were able to collect the data we needed, it proved incredibly hard to scale to the point of statistical significance. Running a large-scale experiment that has a decent chance of failure, but that’s very expensive and risky to run, kind of defeats the purpose of experimentation.
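
To illustrate why scaling to significance is so hard, here’s a standard back-of-the-envelope power calculation (the numbers are invented) showing how many observations per experiment group are needed to detect a modest uplift in a KPI.

```python
# Back-of-the-envelope two-proportion power calculation; figures are illustrative.
from statistics import NormalDist

def samples_per_group(p_baseline, uplift, alpha=0.05, power=0.8):
    """Approximate sample size per group to detect an absolute uplift in a proportion."""
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    p1, p2 = p_baseline, p_baseline + uplift
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * var) / uplift ** 2

# Detecting a 1-point improvement on a 70% satisfaction score already requires
# tens of thousands of responses per group (roughly 33,000 here).
print(round(samples_per_group(p_baseline=0.70, uplift=0.01)))
```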

Becoming a data-driven organization is one of the highest-priority goals that any company should have. It allows for much higher-quality decision-making and operations while preparing for use of AI as a key differentiator and enabler. However, going from word to action is a challenging journey where, ideally, you learn from other people’s mistakes before making new ones yourself. We need the data, but we need to be smart in execution.

Towards business agility 2.0

Soon after the introduction of agility in software development, the notion of business agility was introduced as well. The basic idea was to scale the concepts behind agile software development to larger scopes, with the ambition to reach the entire organization, including R&D and IT. In practice, however, for many organizations, it proved difficult to go beyond the software part of the organization and things often got stuck at DevOps. Also, the basic mindset often was to treat changes as disruptions in a steady-state system, focused on returning to a steady state as soon as possible. Agile was concerned with minimizing the impact of changes by rapidly responding to them. The notion of business agility was very popular around 2010 and then started to fade as it didn’t provide the benefit that companies were looking for. To quote a manager in one of the Software Center partners: “We use SAFe and say we’re all agile but we didn’t change a thing…”

More recently, we can see a development that’s not entirely dissimilar to the first incarnation of business agility (1.0) but that has a number of unique characteristics and is leading up to a 2.0 version of business agility. This version has, at least, three unique aspects: business models, technology scope and fast feedback loops.

First, many companies have started to realize that agility at the business level starts with the business model you employ. It has to start with a transition from a transactional to a continuous model. If you build the capability to continuously deliver value to customers but don’t have a way to monetize that continuous value delivery, there’s no business incentive at all. If you improve the product, system or offering along some dimension, you need to be able to capture some of that value. For instance, if you run a truck company and you conduct A/B testing on the engines of your customers in the field to improve fuel efficiency, you want to capture some of the savings that your customers are enjoying. Why else would you bother with experimentation in the first place? So, whereas business agility 1.0 started bottom-up with the software development teams, the 2.0 incarnation starts top-down from the business model.

Second, in the embedded-systems industry, there’s a growing awareness that continuous deployment or DevOps doesn’t need to be limited to software. Under the right incentives and business models, it’s entirely feasible to periodically update electronic and mechanical parts of systems in the field to improve system performance. Among others, Tesla offers chip upgrades and hardware retrofits providing significantly improved capabilities, which the software can then use to improve the functionality in the car. So, business agility 2.0 doesn’t just focus on software but extends to electronics and mechanics on the one end and includes data and AI on the other.

Third, the focus in business agility 2.0 is on fast feedback loops across the company and all technologies. This has two aspects. First, each technology has an optimal feedback cycle length, where the customer and business benefit of new releases is balanced with the cost of manufacturing, distributing and installing them. This of course means that software (including AI models) can afford very fast cycles, as the cost of distribution and installation is very low and there’s no manufacturing cost. For electronics, especially when keeping the mechanical interface constant (pin configuration, power usage, EMC, and so on), the cost is higher and perhaps a yearly or biannual cycle makes the most sense. Finally, for mechanical parts, the update frequency should be even lower as they’re even more costly to manufacture, distribute and install. Still, when the continuous business model has liberated you from the “let’s save all improvements for the next product” attitude, improved mechanical parts can also be distributed, say, every three to five years.
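
As a toy illustration of this balancing act (my own simplification, not part of the original argument), the sketch below minimizes the sum of yearly release cost and the cost of delayed value. The cost figures are invented, but the resulting cycle lengths line up with the intuition above.

```python
# Toy model: a longer release cycle lowers yearly release cost but delays value.
# Minimizing (release_cost / T) + (value_rate * T / 2) gives an EOQ-like optimum.
def optimal_cycle_years(release_cost, value_rate_per_year):
    """Cycle length (years) minimizing release cost plus cost of delayed value."""
    return (2 * release_cost / value_rate_per_year) ** 0.5

for tech, (cost, value_rate) in {
    "software":    (1e3, 2e6),    # cheap to deploy, high value of fast feedback
    "electronics": (5e5, 2e6),
    "mechanics":   (5e6, 1e6),    # costly to manufacture, distribute and install
}.items():
    print(f"{tech}: ~{optimal_cycle_years(cost, value_rate):.2f} years per cycle")
```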

Business agility 1.0, digitalization and business agility 2.0

The second aspect is that no slower cycle can slow down the faster cycle. Traditionally, the software release frequency was bound to the product release cycle. In business agility 2.0, no faster cycle (software or electronics) can be slowed down by a slower cycle (eg electronics or mechanics).

We’re entering the era of business agility 2.0, which starts from the adoption of a continuous business model and then optimizes the entire company to capitalize on fast feedback loops that allow for all technologies in products to improve at their own pace. Even if your customers aren’t asking for it yet, your suppliers are complaining and your partners aren’t yet ready to play ball, you better get going on this as the second incarnation of business agility provides major benefits, as well as improvements in efficiency and effectiveness that you can’t do without. Go agile, but go 2.0!

It’s up to you!

Last week, we had a strategy workshop at Software Center, the public-private digital transformation acceleration partnership that I lead. During one of the breakout sessions, we had a fun discussion that illustrated a very recognizable pattern: when discussing how to realize business agility, the focus was on who could be considered responsible for it. And then more examples of various people and roles abdicating responsibility were shared than you can shake a stick at.

In many ways, it has been the journey of Software Center. We started to work with software engineers around Agile practices, but soon the engineers mentioned that the architects should be involved as agility affects architecture as well. Once we had the architects involved, soon the requests came to involve development managers in the discussion as those are line managers to both engineers and architects. The development managers of course soon asked to involve product managers because they were just telling their teams to build what product management requests. It didn’t take long after we got product managers involved until they started to complain that we needed to involve the salespeople as whatever we were doing on the product development side had to be sold by them. The salespeople immediately remarked that if we wanted to change what we were selling, we had to get the C-suite involved as it would have a material impact on the bottom line. And the C-suite, obviously, responded with the argument that our customers weren’t asking for it and that our suppliers and partners weren’t willing to work with us on realizing these changes.

What’s going on here? Well, it relates to a column that I shared some months ago: to change anything, you have to change everything! And it aligns perfectly with our instinctive desire to keep things as they were and to control our environment to the maximum extent possible.

There’s an additional perspective though: the R&D organization in most companies that I work with considers itself to have the duty to build what the business side of the company asks for. The problem is of course that the business side doesn’t know what it wants until it’s blatantly obvious what’s needed and then they want it immediately. The new requirements from business often come up late and demand an immediate response from R&D.

The reality is that in practice, it’s the R&D organization that sets the business strategy for any organization. The design decisions taken by key people in R&D make certain business opportunities impossibly expensive to pursue, in terms of cost and time, while making others easy and fast to realize. For all the talk around agility, realizing any significant architectural change in a large, established system takes a long time, often measured in quarters and years. The consequence is that it’s the responsibility of the R&D organization to predict the most likely business strategy options that the company will pursue a year or more down the line and to prepare the system architecture for this.

This means that if you’re in R&D, you need to take responsibility. It’s your job to have a clear idea of what the future may look like and ensure that you’re creating a future for your company while delivering on today’s challenges. It’s critical to be ambidextrous and to balance the short and longer term. Most organizations rapidly build patterns for this, with a tendency to focus on the short term predominantly. It’s your responsibility to not blindly follow the established patterns but to continuously question the status quo. As Andy Grove used to say, only the paranoid survive.

In most organizations, there’s a tendency to use excuses to explain why certain changes aren’t realized. One of the most effective excuses is to abdicate responsibility and to point to others in the organization as being responsible for holding you back. As the saying goes, your comfort zone is a beautiful place but nothing ever grows there. It’s up to each of us to shoulder the heaviest responsibility we can carry and to step into an uncertain, unpredictable future, taking calculated risks and delivering for today and tomorrow. It’s up to you!

Make money from data

There’s an interesting development going on in the embedded-systems industry. Initially, data was only used for internal purposes and quality assurance. Customers would send log files to product companies who would analyze them to figure out why the product wasn’t operating as it should and what to do about it. Over time, the periodic data sets have turned into more or less continuous data streams and the data collected has evolved from being concerned with QA to focusing on product performance and measuring value delivery to customers.

As the volume and expenses associated with collecting and storing data have increased, companies have been investigating ways to create novel value from this data through direct or indirect monetization. We can identify at least four phases that companies go through.

The first step is where the company gives the data away as part of the overall product offering. Typically, the data is processed and presented in nice dashboards for customers to gain an understanding of the product’s performance. However, as the customer gets this for free, there’s limited focus on the data part of the total offering. This is similar to how, in many industries, software was given away for free as part of the mechanical or electronic product. We’re now getting paid for software, but many are giving data away for free.

The second step is where the company has developed some form of data-driven service to customers using the data from each specific customer. Here, the first monetization of the data starts and even if it often is a minor revenue stream, both customers and the company itself are now, in fact, benefiting from the collected data.

Once the second step is in place, often customers ask the company how they perform when compared to others. This is where the third step is initiated as it allows the company to provide data-driven services to customers using data from all customers. Now, customers can benchmark themselves and understand where to improve and where to extend their lead over competitors.

The fourth step is where the company moves to find alternative markets/customers for the data from its primary customer base. Here we see the start of a two-sided market where the primary customer base generates the data that is then monetized with a secondary customer base. If played right, this can allow the company to transition from a product to a platform company and to ignite a thriving business ecosystem where the company can ‘tax’ transactions between ecosystem partners and thus create highly profitable revenue streams that, in time, may outweigh the revenues from products.

'There are three main challenges: pricing, disruption risk from suppliers and partnering'

In our discussions with companies in Software Center, there are three main challenges that companies struggle with, ie pricing, disruption risk from suppliers and partnering. The first challenge, pricing, is simply concerned with putting an actual value on data sets or data streams. The preferred model, though difficult to execute on, is value-based pricing, meaning that you estimate the value that the receiver of the data gets from it and then negotiate a fair share of that value.
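
As a minimal sketch of this pricing logic, with all figures invented for illustration:

```python
# Hedged sketch of value-based pricing for a data stream: estimate the yearly
# value the buyer captures and negotiate a share of it. All numbers are made up.
def value_based_price(buyer_savings_per_year, confidence, negotiated_share):
    """Annual price for a data stream based on the value it creates for the buyer."""
    expected_value = buyer_savings_per_year * confidence   # discount for uncertainty
    return expected_value * negotiated_share

# E.g. the buyer expects 2M euro/year in savings, we're 60% confident in that
# estimate and agree on a 25% share of the value created.
print(value_based_price(2_000_000, confidence=0.6, negotiated_share=0.25))  # 300000.0
```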

The second challenge is that product companies are constantly asked by suppliers for data. Initially, this concerns data from the subsystem provided by the supplier, but over time, it tends to broaden and cover a larger and larger scope. The risk is that, with enough data, suppliers can become powerful competitors in data-driven services. They often serve multiple companies in the same industry and if they manage to negotiate data from all of them, they’re much better placed to generate a competitive advantage. Of course, many companies have little interest in this, but the balance between sharing data and avoiding creating a new competitor is a difficult one to strike. The best practice seems to be the insertion of a control point, meaning that you can cut off a supplier at any point in time when it becomes clear that they’re starting to compete with you.

Finally, even when potential partners from other industries are interested in gaining access to the data collected by the company, it’s often very difficult to decide which of them are worthwhile to partner with and which should be ignored. There are few generic guidelines here, but in general, a potential partner that can help you build a two-sided market and, in due time, become a platform company is much more valuable than the alternatives.

The embedded-systems (or cyber-physical-systems) industry is becoming increasingly aware of the importance of data but is struggling with operationalizing this awareness into a solid business. I’ve outlined the typical pattern that I see companies follow, as well as the key challenges experienced. Engaging in data is very difficult for companies that still think of themselves as metal-bending experts, but it’s critical to get going. Not using your data, or just giving it to someone else to build a business around, is the worst thing you can do. For all the risks and challenges, in a digitalizing world, you need to be world class at software, data and AI and the only way to achieve that is to experiment and learn. Go digital!

How do I tell another?

A senior engineer asks:

I lead a team of engineers on the technical content of their work and in doing so I get stuck quite often. For example, I think one of my engineers should do things differently, but I don’t get through to him with my criticism. I worry that I’ll have to redo the work later. How do I get the engineer to listen to my criticism?

I also have a colleague from whom I need information. I’ve already been on his case a couple of times, but to no avail. I’m fed up with it. Because of this irritation, I’m afraid that if I say something about it, it won’t come out ‘nicely’. How do I deal with this?

The communication trainer replies:

In both situations, it’s all about giving feedback. In other words, telling someone what you think, with the goal of improving behavior. This is a tricky soft skill, especially when it comes to negative criticism. Two pitfalls come into play here: you avoid the issue, which means the message doesn’t get through to him/her, or you’re too blunt, which damages the relationship. We often choose to avoid these pitfalls by just not saying anything. Of course, this doesn’t make the problem go away. Even worse: it grows. And if you do open your mouth after a long delay, your pent-up criticism will indeed come out too unsubtly. So, what you wanted to avoid is exactly what you create: a discussion that leads nowhere.

If you’re not careful, you draw the conclusion that ‘saying something about it’ next time is not a good idea either. The threshold for giving feedback thus becomes higher. This is bad news, because, as human beings, we can only learn by receiving feedback. If we are not aware of what we are doing and what the effect is, we cannot adjust our behavior to what is needed. In short: if you want to develop, you need feedback from your environment. Whether you like it or not. So, this also applies to your colleague. With this intention, it already becomes easier to start saying something. After all, you say it to improve the situation, to help the other person improve.

To effectively influence the behavior of the colleague and make your feedback land, four steps are necessary. The steps are all necessary and you take them one by one.

'Look for a solution together'

Step 1: Announce that you want to say something about the work or the collaboration (do not dive straight into the content). The other person then knows that they have to pay attention. Say, for example, “I want to talk to you about what has struck me (or what bothers me)’’.

Step 2: State in concrete and factual terms the other person’s behavior and what effect it has on you and/or the work (this can also be an emotion). So don’t say: ‘You’re handling it wrong’. This is not clear. Say: ‘I see that, for the third time, you are delivering your work later than we agreed (behavior of the other person). I am falling behind with my work as a result and so have too little time to do it properly (effect on the work). I am afraid that I am getting stuck and I worry about that. Moreover, it irritates me that you do not keep our agreement (effect on me).’ Note that we often forget to mention the effect on ourselves, and leaving that out is exactly what makes the message fail to get through to him/her.

Step 3: Take a step back and let the other person respond. You do this by asking a question. Say, for example, “Do you recognize this?” or “What do you think about this?” and then wait for an answer (a silence of at least four seconds stimulates the other person to react). This may be uncomfortable for the other person for a moment. If so, it means your message is getting through. It is important to let it be for a while and not to rush to the solution.

Step 4: Has everything been said? Then look for a solution together. Being able to give good feedback is no guarantee of success, but it does give you the tools with which you can positively influence the majority of situations.