The end of scarcity

As it was Thanksgiving in the US last week, I wanted to follow up with a reflection on the notion of scarcity and abundance. Many in industry operate with a scarcity mindset, believing that basically everything we’re concerned with is available only in limited amounts. Whether it’s ‘winning’ a customer, being promoted or getting a project that you asked for, the basic assumption is that either someone else gets it or you get it. It’s a win-lose situation.

The reason for this can, of course, be found in the evolution of humankind. For the hundreds of thousands of years that the predecessors of modern-day humans roamed the earth, everything was driven by scarcity. Food was scarce, safe places to live were scarce, people to partner with safely were scarce, and so on. And this translates into many of our current behaviors in society. In football games, one team wins and the other loses. In television shows, there’s one winner. In computer gaming, the battle royale games go through successive rounds of fights until only one player is left.

For virtually everyone in the western world, scarcity is largely an illusion. We all have access to food, a safe place to live, healthcare, protection from threats, and so on. Using Maslow’s hierarchy of needs as a basis, our physiological and safety needs are largely taken care of, and these basic needs were historically where scarcity existed. Our psychological and self-fulfillment needs are typically not where scarcity is an issue. When someone else enters a relationship, it doesn’t reduce your opportunities to enter a relationship. When a colleague of mine publishes a paper, it doesn’t affect my opportunities to publish a paper.

In our economy and business, we have the same misconception of scarcity. Most people seem to miss that the economy is growing all the time. GDP growth means that there’s more value, with money as its proxy, created by an economy. Creating that value and money doesn’t require one country to lose for another to win. In some way, we’re creating value and money ‘out of thin air’ (see Wikipedia for a longer exploration of this topic). Thanks to the digitalization of society, more and more of that value is digital, as can be witnessed by the valuations of technology behemoths such as Microsoft, Facebook and Google. And because of that, our planet’s physical resources are less and less the source of value creation, which is a good thing from an environmental perspective.

The point I’m trying to make is that we live in an age of abundance. Most of us can put virtually all our life energy into self-actualization and creating a positive impact on the world we live in. We can do this through work, volunteering, relationships, community efforts, politics or any other means. But we should view the world around us through the lens of abundance, rather than the lens of scarcity.

In business, this means that a competitor or partner doing well doesn’t mean that you lose opportunities to do well. In fact, several of the startups I’m involved in celebrate when companies in the same space do well as it means that the size of the cake is growing. Especially for new companies, it’s not the competition that’s the concern, but overcoming non-consumption, eg companies that continue to use pen and paper instead of the nifty tool you created. So, instead of competing at the sharp end of the knife with a scarcity mindset, look for ways to create win-win situations and adopt an abundance mindset. There’s more than enough for all of us. Your growth is only helping my growth!

Many who will read this will undoubtedly point to all the pain and suffering that still exist in the world and I don’t want to ignore that or sweep it under the rug. But the fact is that virtually every metric concerning the quality of life in the world, ranging from war deaths and people living in poverty to child mortality and life expectancy, is improving and humanity has never lived in a better age. We live in the age of abundance and, for all the troubles in the world today, I believe it’s good to remember that and be grateful for it.

Towards autonomously improving systems

This week, I attended the International Conference on Software Business (ICSOB 2020) and gave a presentation on autonomously improving systems. The core idea is that software-intensive systems can measure their performance, know what to optimize for and can autonomously experiment with their own behavior.

The history of software-intensive systems can be divided into three main phases. In the first phase, we built systems according to a specification. This specification could either come from the customer or from product managers, but the goal of the R&D team was to build a system that realized the requirements in the specification.

Evolution of software-intensive systems

Many online companies, but also the first embedded-systems companies, are operating in the next stage. Here, teams don’t get a specification but rather a set of one or more prioritized KPIs to improve. In e-commerce, the predominant KPI is often conversion; in embedded systems, it’s often a weighted mix of performance, reliability and other quality attributes. Teams are given the target of improving one or more of these KPIs without (significantly) deteriorating the others. They have to develop hypotheses and test them in the field using, for instance, A/B experiments.
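As a rough illustration of what evaluating such a KPI experiment can look like, here’s a minimal Python sketch of a two-proportion z-test on conversion; the visitor and conversion counts are invented for the example and aren’t taken from any real system.

```python
# Minimal sketch of evaluating an A/B experiment on a conversion KPI.
# The visitor and conversion counts below are made up for illustration.
from math import sqrt
from scipy.stats import norm

def ab_conversion_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))              # two-sided test
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ab_conversion_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z={z:.2f}  p={p:.3f}")
```

With these made-up numbers, the uplift from 4.8 to 5.4 percent hovers right around the conventional 5 percent significance threshold, which is exactly the kind of judgment call teams face in practice.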

Although the second stage is a major step forward for many companies, the problem is that the team still does all the heavy lifting. Running many A/B experiments, in particular, can consume considerable effort. The next step that some companies are starting to explore is to allow the system to generate its own experiments with the intent to learn about ways to improve its own performance. Theoretically, this falls in the category of reinforcement learning, but it proves quite challenging to realize in an empirical, industrial context.

The evolution companies go through to reach this third stage can be put in a model showing the activities and technologies that can be used at each level. From level 2 onward, we see some autonomously improving system behavior, such as intelligent, online selection of experiments as well as automatic generation of experiments. This results in all kinds of challenges, including predicting the worst-case performance of the generated alternatives. If a system autonomously generates and deploys experiments, some of these experiments can exhibit very poor performance, meaning the system requires models to predict the worst-case outcome for each experiment, as well as mechanisms to cancel ongoing experiments if performance is negatively affected.

Evolution model
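To make the idea of online experiment selection with a safety guardrail concrete, here’s a simplified, hypothetical Python sketch: an epsilon-greedy loop that mostly exploits the best-performing variant and cancels any variant whose observed KPI stays below a guardrail threshold. The variant names, reward function and thresholds are invented stand-ins, not part of the model described above.

```python
# Simplified sketch of online experiment selection with a worst-case guardrail.
# The reward function and thresholds are hypothetical stand-ins for a real KPI.
import random

def select_and_run(variants, reward_fn, rounds=1000, epsilon=0.1, guardrail=0.2):
    stats = {v: {"n": 0, "mean": 0.0} for v in variants}
    active = set(variants)
    for _ in range(rounds):
        if not active:
            break
        if random.random() < epsilon:
            v = random.choice(sorted(active))                 # explore
        else:
            v = max(active, key=lambda x: stats[x]["mean"])   # exploit the best
        r = reward_fn(v)                        # observed KPI for this round
        s = stats[v]
        s["n"] += 1
        s["mean"] += (r - s["mean"]) / s["n"]   # incremental mean update
        # guardrail: cancel an experiment whose performance is clearly poor
        if s["n"] >= 20 and s["mean"] < guardrail:
            active.discard(v)
    return stats, active

# Hypothetical usage: one baseline and two candidate variants with made-up rewards.
stats, still_active = select_and_run(
    ["baseline", "variant_a", "variant_b"],
    reward_fn=lambda v: random.gauss(
        {"baseline": 0.5, "variant_a": 0.55, "variant_b": 0.1}[v], 0.1),
)
print(still_active)   # 'variant_b' will typically have been cancelled
```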

'We need to start looking into online reinforcement learning'

With the increasing prevalence of AI, we need to start looking into online reinforcement learning in software-intensive systems as this would facilitate autonomously improving systems. This ambition comes with major challenges that we’re now researching. However, I encourage you to start exploring where the systems that you build could autonomously improve their own behavior. Even starting in a small, risk-free corner of the system can be very helpful to learn about this paradigm. The overall goal is that every day I use your product, I want it to be better!

TUE PDEng answers the call to drive the future of industry

The link between industry and academia is crucial for preparing the workforce of tomorrow. As industrial leaders look to TUs for advanced engineers to fill leadership roles like that of a system architect, TUE’s PDEng program answers the call by combining training with personal and professional development.

The Professional Doctorate in Engineering (PDEng) isn’t your typical advanced degree. In fact, the program is relatively unique to the Netherlands, with only a few other countries offering similar programs. PDEng’s Dutch roots go back several decades, but in 2003 the professional doctorate got its new name and was recognized under the Bologna Declaration as a third-cycle (doctorate-level) program. Unlike a PhD, the curriculum doesn’t require years of research and a lengthy dissertation. Instead, it’s a two-year post-master’s program aimed at elevating systems knowledge and preparing the next generation of developers for the role of system architect through valuable hands-on experience and first-hand access to industry.

Each year, Eindhoven University of Technology (TUE) accepts 100-120 PDEng trainees across its various programs, spanning the fields of chemical, mechanical, electrical, software and medical engineering. “We have a very stringent selection process to ensure that our programs maintain an incredibly high level,” describes Peter Heuberger, the recently retired program manager for the Mechatronics and Automotive PDEng groups at TUE. “Just to give you an idea, each of my groups has only eight people. Those 16 spots were filled out of a pool of more than 200 applications that we received from all over the world.”

Peter Heuberger: “We’re looking to build advanced engineers that will take a few steps back and adopt a helicopter view of the problem.”

'Not only are they located in the neighborhood, but their extensive pool of industry-experienced engineers and experts greatly complemented our goal of getting our trainees as close to industry as possible'

Helicopter

As technology becomes exponentially more complex, success in technical development relies heavily on multidisciplinary teams of engineers working together, each doing their part to contribute. A challenge, however, is that by nature, engineers tend to focus on one area and fail to see the big picture of the whole system. “Typically, if you give an engineer a problem, they’ll jump right in and start to unscrew bolts and take things apart, focused on finding their own solution to the problem,” illustrates Heuberger. “But we’re looking to build advanced engineers that will take a few steps back and adopt a helicopter view of the problem. Not just where the problem lies, but for whom is it a problem? Will it still be a problem next year? What are the costs involved? What’s the lifetime of the product?”

So, how do TUE’s Mechatronics and Automotive PDEng programs encourage their engineers to adopt this big-picture systems approach? They turn to training – especially in the first year. “A few years ago, while we were organizing system engineering courses at the university, it became clear that we didn’t have the resources or manpower to do all the necessary training in house,” explains Heuberger. “That’s when we reached out to High Tech Institute for help in providing training courses. Not only are they located in the neighborhood, but their extensive pool of industry-experienced engineers and experts greatly complemented our goal of getting our trainees as close to industry as possible.”

“After the first week of introductions, we have the trainees jump right into the Systems Thinking course. This is where many of the trainees get their first introduction and exposure to industry, its demands and specific methodologies with which to approach system engineering,” says Heuberger. After the initial training, trainees spend the next several periods honing the methods and skills they’ve learned as they develop their own system-engineering approach. “For this, we take on several sample projects, given to us by industrial partners like ASML, DAF, Philips and Punch Powertrain, where trainees take on different roles, ranging from project manager and team leader to communications, configurations or test managers. These exercises add more practical tools to the training and give trainees a better grasp of the bigger picture as they gain new perspective on the essence of their work.”

'This is precisely one of the most important aspects of training, the gained awareness and perspective'

Awareness

As the Mechatronics and Automotive PDEng trainees shift into the final module of the first year, TUE again reaches out to High Tech Institute to give a training on Mechatronics System Design. “This is a really high point for our trainees nearing the end of their first year, especially those interested in mechatronics. At this stage, they learn about advanced control theory from Mechatronics Academy experts like Adrian Rankers,” depicts Heuberger. “Something that really seems to stick with them is that you don’t always need very sophisticated control theory. You need to get the job done. When looking at a problem from a smart perspective, sometimes the most basic control theory is the best fit. But of course, the best fit may also depend on the control application or the hardware setup, for example. This is the point where it all seems to click, and they really see the big picture.”

“This is precisely one of the most important aspects of training, the gained awareness and perspective,” adds Riske Meijer, incoming director of the Mechatronics and Automotive PDEng programs. “The awareness that when you’re starting any job, you’ve got to look beyond one task and one solution, at the job as a whole. That’s what it takes to be a successful system architect in industry.”

Riske Meijer: “You’ve got to look beyond one task and one solution, at the job as a whole. That’s what it takes to be a successful system architect in industry.”

Answering the call

Heuberger and Meijer will be the first to tell you that the TUE PDEng program doesn’t produce system architects so much as system engineers. After all, there’s a big difference between leading groups of 3-5 people at university and leading groups of 30-50 in today’s workplace. To get to the level of a real system architect takes somewhere around 20 years of experience and development in industry. However, by giving young engineers enhanced tools and real, hands-on industrial experience, TUE provides them with a head start. Of course, not all trainees go on to become system architects, as not everyone is built the same. Many of them find their place in other leadership roles like project management, people management or technical leads.

“Industrial partners have called on us to help produce advanced engineers beyond the master’s-degree level. They’re looking for young talent that will be able to step up as team leaders and in other leadership roles to advance the industry,” suggests Heuberger. “So that’s what we aim to do, we’re answering the call of industry and preparing future engineers, team leaders, project managers and system architects to fill those needs.”

This article is written by Collin Arocho, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?', they responded with an 8.4 out of 10.

“Testing is tattooed on my forehead”

When he was a student, he didn’t have the slightest interest in chips, let alone in testing them. Now, Erik Jan Marinissen is an authority in the IC test and design-for-test arena and even teaches on the subject.

Last year, when Erik Jan Marinissen heard that his papers on Design for Test at the IEEE International Test Conference (ITC) had made him the most-cited ITC author over the last 25 years, he didn’t believe it. “I had skipped a plenary lunch session to set up a presentation that I would give later that day when passers-by started congratulating me. For what, I asked them. They explained that it had just been announced that I’m the most-cited ITC author over the past 25 years. Well, I thought, that can’t be right. Of course, I had presented a couple of successful papers over the years, but surely the demigods of the test discipline – the people I look up to – would be miles ahead of me,” tells Marinissen.

Back at home, Marinissen got to work. He wrote a piece of software that sifted through the conference data to produce a ‘hit parade’ of authors and papers. The outcome was clear: not only was he the most-cited author, but his lead over his idols was actually quite substantial. ITC being the most prominent scientific forum in his field, there was no question about it: Marinissen is an authority in the test and design-for-test (DfT) disciplines (see inset “What’s design-for-test?”).

Credit: Imec

Once he was certain there had been no mistake, Marinissen felt “extremely proud. I’ve won some best-paper awards over the years, but they typically reflect the fashion of the moment. What’s popular one year, may not be anymore the next. My analysis confirms this, actually: not all awarded papers end up with a high citation score. Being the most-cited author shows that my work has survived the test of time; it’s like a lifetime achievement award.”

What’s design-for-test?
A modern chip consists of millions or even billions of components, and even a single malfunctioning one can ruin the entire chip. This is why every component needs to be tested before the chip can be sold: each is switched on and off, and it must be verified that it actually changed state.

The hard part is: you can’t exactly multimeter every transistor as you’d do with, say, a PCB. In fact, the only way to ‘reach’ them is through the I/O, and a chip has far fewer I/O pins than internal components. Indeed, the main challenge of testing is to find a path to every component, using that limited number of pins.
This task is impossible without adding features to the chip that facilitate testing. Typically, 5-10 percent of a chip’s silicon area is there just to make testing possible: adding shift-register access to all functional flip-flops, decompression of test stimuli and compression of test responses, on-chip generation of test stimuli and corresponding expected test responses for embedded memories. Design-for-test (DfT), in its narrow definition, refers to the on-chip design features that are integrated into the IC design to facilitate test access.
Colloquially, however, the term DfT is also used to indicate all test development activities. This includes generating the test stimulus vectors that are applied in consecutive clock cycles on the chip’s input pins and the expected test response vectors against which the test equipment compares the actual test responses coming out of the chip’s output pins. Chip manufacturers run these programs on automatic test equipment in or near their fabs.
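As a toy illustration of the flow sketched in this inset – apply stimulus vectors, capture responses and compare them against expected responses – consider the following Python snippet. The two-input ‘circuit’ and its vectors are invented; real tests run on automatic test equipment against scan chains with millions of flip-flops.

```python
# Toy illustration of the test flow: apply stimulus vectors to a circuit
# and compare the captured responses against the expected ("golden") ones.
# The circuit and vectors below are invented for illustration only.

def circuit_under_test(a: int, b: int) -> int:
    """Stand-in for the chip's logic: here simply an AND gate."""
    return a & b

stimuli = [(0, 0), (0, 1), (1, 0), (1, 1)]   # test stimulus vectors
expected = [0, 0, 0, 1]                      # expected test responses

failures = []
for vector, exp in zip(stimuli, expected):
    got = circuit_under_test(*vector)        # capture the actual response
    if got != exp:
        failures.append((vector, got, exp))

print("PASS" if not failures else f"FAIL: {failures}")
```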

'I soon realized how wrong I was about testing. It’s actually a diverse and interesting field!'

Diverse and interesting

Verifying the calculations that entitled him to a prestigious award might be considered an instinct for someone who has dedicated his life to checking whether things work correctly, but Marinissen and testing weren’t exactly love at first sight. “As a computer science student at Eindhoven University of Technology, I didn’t have much affinity with chips or electrical engineering. We CS students used to look down on electrical engineers, actually. Electrical engineers are only useful for fixing bike lights, we used to joke. I’m sure they felt similarly about us, though,” Marinissen laughs.

Testing seemed even less appealing to Marinissen, for reasons he thinks are still true today. “If you don’t know much about the field, it may seem like testers are the ones cleaning up other people’s messes. That’s just not very sexy. For IC design or process technology development, it’s much easier to grasp the creative and innovative aspects involved. Even today, I very rarely encounter students who have the ambition to make a career in testing from the moment they set foot in the university.”

It took a particular turn of events for Marinissen to end up in testing. “I wanted to do my graduation work with professor Martin Rem because I liked him in general and because he worked part-time at the Philips Natuurkundig Laboratorium, which allowed him to arrange graduation projects there. Like most scientists in those days, I wanted to work at Philips’s famous research lab. But, to my disappointment, professor Rem only had a project in testing available. I reluctantly accepted, but only because I wanted to work with the professor at the Natlab.”

“I soon realized how wrong I was about testing. It’s actually a diverse and interesting field! You need to know about design aspects to be able to implement DfT hardware, about manufacturing to know what kind of defects you’ll be encountering and about algorithms to generate effective test patterns. It’s funny, really. Initially, I couldn’t be any less enthusiastic about testing, but by now, it has been tattooed on my forehead.”

Stacking dies

After finishing his internship at the Natlab in 1990, Marinissen briefly considered working at Shell Research but decided that it made more sense to work for a company whose core business is electronics. He applied at the Natlab, got hired but took a two-year post-academic design course first. Having completed this, Marinissen’s career started in earnest in 1992.

“At Philips, my most prominent work was in testing systems-on-chip containing embedded cores. A SoC combines multiple cores, such as Arm and DSP microprocessor cores, and this increases testing complexity. I helped develop the DfT for that, which is now incorporated in the IEEE 1500 standard for embedded core test. When the standard was approved in 2005, many people said it was too late. They thought that companies would already be set in their ways. That wasn’t the case. Slowly but surely, IEEE 1500 has become the industry default.”

Marinissen is confident the same will eventually happen with another standard he’s helped set up. He worked on this after transferring from Philips, whose semiconductor division by then had been divested as NXP, to Imec in Leuven in 2008. He actually took the initiative for the IEEE 1838 standard for test access architecture for three-dimensional stacked integrated circuits himself. He chaired the working group that developed the standard for years until he reached his maximum term and someone else took the helm. The standard was approved last year.

“Stacking dies was a hot topic when I was hired at Imec. Conceptually, 3D chips aren’t dissimilar from SoCs: multiple components are combined and need to work together. By 2010, I’d figured out what the standard should look like, I’d published a paper about it and I thought: let’s quickly put that standard together. These things always take much longer than you want,” Marinissen sighs.

His hard work paid off, though. Even before the standard got its final approval, Marinissen, scientific director at Imec, received the IEEE Standards Association Emerging Technology Award 2017 “for his passion and initiative supporting the creation of a 3D test standard.”

Credit: Imec

Flipping through the slides

As many researchers do, Marinissen also enjoys teaching. He accepted a position as a visiting researcher at TUE to mentor students who – unlike himself when he was their age – take an interest in DfT. At an early stage, he also got involved with the test and DfT course at Philips’ internal training center, the Centre for Technical Training (CTT). “Initially, most of the course was taught by Ben Bennetts, an external teacher, but I took over when he retired in 2006. I remember having taught one course while still at NXP, but not a single one for years after that – even though Imec allowed me to. There just wasn’t a demand for it.”

“Then, in 2015, all of a sudden, I was asked to teach it twice in one year. Since then, there has been a course about once a year.” By then, the training “Test and design-for-test for digital integrated circuits” had become part of the offerings of the independent High Tech Institute, although, unsurprisingly, many of the course participants work at companies that originate from Philips Semiconductors. “Many participants have a background in analog design or test and increasingly have to deal with digital components. I suppose that’s understandable, given the extensive mixed-signal expertise in the Brainport region.”

“I might be the teacher, but it’s great to be in a room with so much cumulative semiconductor experience. Interesting and intelligent questions pop up all the time – often ones I need to sleep on a bit before I have a good answer. It’s quite challenging, but I enjoy it a lot. As, I imagine, do the students. I’m sure they prefer challenging interactions over me flipping through my PowerPoint slides.”

From begrudgingly accepting a graduation assignment to sharing his authoritative DfT expertise in class – the young Erik Jan Marinissen would never have believed it.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?', they responded with an 8.4 out of 10.

So much data, so little value

Recently, in a discussion with a company about becoming data-driven, I ran into the same challenge as many times before: the company claims to gather so much data, but the amount of value generated from that data is very small. It makes one wonder what underlies these patterns of, apparently, enormous amounts of data being collected but very little of that data being used to create something of value. In my experience, there are at least three factors at play: sense of ownership, local optimization and cost of ‘productizing’ data.

A typical pattern in many organizations is that teams generating and collecting data for their purposes feel strong ownership of that data and don’t want others to prod “their data” with their big, fat fingers. It’s theirs and if anyone else needs similar data, they can go and collect it themselves rather than get it for free from the team.

This leads to many small islands of data that are entirely disconnected and don’t aggregate into something more valuable than the sum of the parts. Teams may brag about all their data, but nobody else can use it.

Any team that decides that they need data to improve the quality of their decisions will focus on their own challenge and only collect what they need, at the level of detail, frequency and aggregation that suits their purpose. In addition, they can decide on a moment’s notice to fundamentally change the way data is collected, as well as what data is collected.

The consequence is that the data typically is hard to use outside of the immediate context for which it was generated. This leads to different teams collecting very similar data, due to the lack of coordination. Also, as few think about the broader use, teams that realize that they need data are unable to reuse any of the existing data as it’s so specific to the use case for which it was collected.

If a team were to make its data available to others, it would need to provide documentation on the semantics of the data, set up a system for finding and downloading data sets, ensure that changes to the way data is collected, its semantics, and so on are carefully communicated to stakeholders and, of course, respond to requests from these stakeholders and change the data collection processes not to benefit itself, but to help others in the organization. And, last but not least, the team may easily be held accountable for privacy, GDPR, security and other concerns that companies have around the stored data.

'Teams will actively try to not share data'

The consequence is that, unless a counterforce is present, teams will actively try to not share data because of the effort and cost of sharing with others in the organization. This again leads to lots of data recorded, stored and used for specific, narrow use cases, but no synergies, no end-to-end understanding of systems in the field and the way customers are using it, and so on.

The solution to these challenges is to adopt a hierarchical value modeling approach in which you connect top-down business KPIs to lower-level metrics that can be collected directly from the field. By building this hierarchical, directed acyclic graph and quantitatively establishing the relationships between higher and lower-level factors, we can finally start to generate business value from all the data we collect.
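A minimal sketch of what such a value model could look like in code is shown below. The metric names, weights and field data are hypothetical; in practice, the weights would be established quantitatively, for instance through regression on historical data.

```python
# Minimal sketch of a hierarchical value model: a directed acyclic graph that
# links low-level metrics collected in the field to a top-level business KPI.
# Metric names, weights and field data are hypothetical.

value_model = {
    "customer_value": {"retention": 0.6, "feature_usage": 0.4},     # business KPI
    "retention":      {"crash_rate": -0.7, "response_time_ms": -0.3},
    "feature_usage":  {"daily_active_users": 1.0},
}

# leaf metrics, assumed to be normalized to comparable scales before use
field_data = {"crash_rate": 0.02, "response_time_ms": 0.4, "daily_active_users": 0.8}

def evaluate(node: str) -> float:
    """Recursively roll leaf metrics up into higher-level factors."""
    if node in field_data:                        # leaf: measured directly
        return field_data[node]
    return sum(w * evaluate(child) for child, w in value_model[node].items())

print(f"customer_value score: {evaluate('customer_value'):.3f}")
```

The point of the structure is that every piece of field data is only collected because it demonstrably feeds a business KPI.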

Getting from the current state to this hierarchical value model isn’t easy, if only because most people in the companies I work with find it extremely hard to determine what quantitative factors we’re optimizing for, and if we do know, the relative priority of these factors is a source of significant debate. However, it provides enormous benefits as you can focus data collection on the things that matter, use the data to make higher-quality decisions and build data-driven offerings to customers that you couldn’t have created otherwise. As the adage goes, it’s not about what you have, but about how you use it!

What is the basis of good communication?

Trainer Communication and leadership

An engineer asks:

I have been working as a chief design engineer for many years. However, I am regularly told that I need to communicate better. By now, I’ve gotten to the point where I want to make some improvements, but what exactly do they mean by “communicate better” and how do I do that?

The communication trainer answers:

Good communication skills are necessary to work well together in complex projects. Basically, it’s about knowing how to give a message and knowing how to properly receive the information someone else gives you. The necessary condition for this to succeed is contact between sender and receiver.

Contact is established by paying attention to the person you are talking to. You will therefore have to show interest in the other person. When the other person also pays attention to you, the contact is established. Compare it with calling a colleague. The moment the connection is there and the line is noiseless, you can start discussing things.

By actively listening, you ensure that you understand the other person’s message. You do this by listening, summarizing and asking follow-up questions. Listening means paying attention to the other person. You summarize by saying, for example, “Okay, I understand that …” or “Okay, I hear you say this and that, is that correct?” The other person hears what you have taken away and receives confirmation that you have understood correctly, or he has the opportunity to make corrections or adjust the level of his explanation to the level of your understanding. In both cases, this is pleasant for the person telling the story and creates clarity.

When sending your message, it is important to be as concrete as possible. Quantify where you can. This always makes your story better. Tune your story to the level of understanding and focus of the other person. What does the other person want to know? Probably your project manager is primarily interested in the schedule, risks or costs and less in technical details. The sales manager is probably more interested in the consequences for his customer than in the problem itself. You can estimate this beforehand and take it into account in your story.

'The game of sending and receiving rarely runs smoothly'

But now the most important thing. The game of sending and receiving rarely runs smoothly. Simply because we all have our own frame of reference and so we don’t understand each other right away. It is therefore extremely important that you pay attention at all times to whether your message is getting through and that you react if it is not.

You notice that your message is getting through when the other person is paying attention to your story and maybe nods in agreement from time to time. However, if the other person suddenly changes position, frowns or starts to say something, this could be a sign that your message is not going down well. It could be that the other person doesn’t understand something, has an interesting association or disagrees. You don’t know until you check.

And so the latter is what you should do. Continuing with your own story while the other person wanders off in their own thoughts accomplishes little. The moment you notice that something is happening with the other person’s attention, the communication turns one hundred and eighty degrees and you switch from sending to receiving. You ask: “I see you frowning, tell me …” or “You want to say something, tell me …”. In this way, the communication oscillates back and forth and you quickly come to clarity.

Should you immediately stop your story at every twitch from the other person? No, but it’s recommended to always remain aware of the other person’s reaction to your story while you’re telling it. Check at least every so often by asking, “How does it sound so far?” That way, you invite the recipient of your message to respond and you get feedback on how your story has come across.

 

Don’t fall for symptoms

Over the last few weeks, I’ve been in discussions with several companies and the same problem occurred: my contacts raised a change that they were looking to realize in their organizations and asked me for help in making it happen. When I asked how they had ended up in the situation that required the change, most were stumped – it was clear that this hadn’t crossed their minds.

Of course, as engineers, we’re trained to think in terms of solutions and many of us follow that training to the letter. However, developing a solution for a problem that actually is a symptom and the consequence of something else is entirely useless, as the likelihood of the solution solving anything is about as high as the survival chances of a snowflake in hell.

For example, several R&D departments want to introduce continuous deployment or DevOps in their company but run into strong resistance from sales and customer support. Many lament that the people on the other side of the fence just don’t get it. However, when analyzing the situation, it’s obvious that the introduction brings along significant cost and a fundamentally different relationship with customers. And without a business model to monetize the continuous value delivery, there’s no point in adopting DevOps. So, rather than stressing that the folks on the business side are idiots, work with them to figure out how to create business models and customer engagement models that make sense and then work with lead customers to experiment with this new model.

Many know the “5 whys” rule: the notion of, after observing a problem, asking “why” five times to go from the observed symptoms to the actual root causes. The challenge is that, in practice, this rule is applied far less often than it deserves to be.

A related and subsequent challenge is that even if we’ve identified the root cause and we have an idea to solve it, there’s huge resistance in the organization to actually implementing everything that’s needed to make progress. Instead, there’s a tendency to support the implementation of something small enough to make it palatable for all in the company. The result frequently is a watered-down and scaled-back proposal that gets broad support but offers little more than a token effort with little real impact on the company. As a general observation, my experience is that the more politicized an organization is, the more it tends to focus on symptoms instead of root causes and on watered-down change initiatives that create the illusion of action but don’t result in any genuine change.

'Build a common platform of root-cause-focused understanding'

My advice is obvious. First, use your intelligence and experience to develop a solid understanding of the root causes underlying observed phenomena. Don’t fall into the trap of believing what everyone else believes. Second, use your social and interaction skills to confirm your understanding with others and to build a common platform of root-cause-focused understanding. Third, once you’ve established a common understanding, explore multiple (rather than only one) avenues to address the identified root cause and build a platform for the one that has the required impact while minimizing collateral damage. Fourth, when sufficient agreement is in place, move forward with execution in an iterative, experimental manner where you take one end-to-end slice of the organization through the change, observe and measure the impact, adjust accordingly and proceed with the next slice. Throughout all this, find the right balance between driving and being committed to realizing the change on the one hand and, on the other, an objective, reflective attitude where you’re able to identify the downsides of the change and the need for adjustment where necessary.

As the Buddhists say, the difference lies in the small window between trigger and response. Rather than instinctively reacting to what life throws your way, pause, reflect and decide on a course of action that actually results in what you’re looking to accomplish. In other words, think rather than react.

Making data-driven real

Recently, I expert-facilitated a workshop at a company that wants to become data-driven. Different from the product companies I normally work with, this company is a service provider with a large staff offering services to customers. The workshop participants included the CEO and the head of business development, as well as several others in or close to the company’s leadership team.

In many ways, this looks to be the ideal setup, as one would assume that we have all the management support we need and some of the smartest people in the company with us. This was reinforced by several in the company sharing that they’ve been working with data for quite a long time. Nevertheless, we ran into a significant set of challenges and we didn’t get nearly as far as we’d hoped.

The first challenge was becoming concrete on specific hypotheses to test. Even though we shared concrete examples of hypotheses and associated experiments when we kicked off the brainstorming and teamwork, everyone had an incredibly hard time going from a high-level goal of increasing a specific business KPI, eg customer satisfaction, to a specific hypothesis and an associated concrete experiment. There are many reasons for this. An obvious one is that many people feel that ‘someone’ should ‘do something’ about the thing they worry about but never spend many brain cycles thinking about what that would look like.

The second challenge was that, for all the data the company had at its disposal, the data relevant to the situation at hand was frequently unavailable. Many companies I work with claim to have lots of data, and many in the organization are genuinely surprised that precisely the data they need hasn’t been recorded. When you reflect on it, it’s obvious that this would be the case, as the number of hypotheses one can formulate is virtually infinite and, consequently, the likelihood of data not being available is quite significant.

The third challenge we ran into was that even in the cases where the data was available, it turned out to be aggregated and/or recorded at too low a frequency to be relevant for the purpose at hand. So, we have the data, but it’s in a form that doesn’t allow for the analysis we want to do.

The response to these challenges is, as one would expect, to go out and collect what we need to pursue the experiment to get to a confirmation or rejection of the hypothesis. The funny realization that I had is that the more relevant and important the hypothesis is from a business perspective, the more likely it relates to regulatory constraints that limit what can be collected without going through a host of disclaimers and permissions. So, we ran into the situation that several of the more promising hypotheses were not testable due to legal constraints.

Finally, even if we had a specific hypothesis and associated experiment and we were able to collect the data we needed, it proved incredibly hard to scale to the point of statistical significance. Running a large-scale experiment that has a decent chance of failure, but that’s very expensive and risky to run, rather defeats the purpose of experimentation.
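To give a feel for why reaching significance is so hard, here’s a back-of-the-envelope Python sketch estimating the required sample size per group for a two-proportion experiment; the baseline rate and hoped-for uplift are invented numbers.

```python
# Back-of-the-envelope sample-size estimate for a two-proportion experiment,
# illustrating why reaching statistical significance can be so expensive.
# Baseline rate and hoped-for uplift are invented numbers.
from math import ceil
from scipy.stats import norm

def sample_size_per_group(p_base, p_new, alpha=0.05, power=0.8):
    z_a = norm.ppf(1 - alpha / 2)      # two-sided significance level
    z_b = norm.ppf(power)              # desired statistical power
    var = p_base * (1 - p_base) + p_new * (1 - p_new)
    return ceil((z_a + z_b) ** 2 * var / (p_base - p_new) ** 2)

print(sample_size_per_group(0.10, 0.11))
```

With these numbers, detecting a one-point uplift on a 10 percent baseline already requires roughly fifteen thousand cases per group – exactly the kind of scale that’s hard to reach in a service business.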

Becoming a data-driven organization is one of the highest-priority goals that any company should have. It allows for much higher-quality decision-making and operations while preparing for use of AI as a key differentiator and enabler. However, going from word to action is a challenging journey where, ideally, you learn from other people’s mistakes before making new ones yourself. We need the data, but we need to be smart in execution.

Training is key to superior chip knowledge at NXP

As the electronics and semiconductor domain continues to explode with complexity, engineers are having to step outside of their comfort zones and take on new roles to keep up with the increasing demands of chip performance. For semiconductor giant NXP’s failure analysis department, training employees and broadening its knowledge base is instrumental in holding on to its leading position.

For nearly 25 years, Johan Knol has known exactly where he wanted to be. In 1996, fresh off finishing his master’s degree in electronics with a focus on analog design and semiconductor processing at the University of Twente, he had his eyes set on joining the semiconductor arm of Philips – which was later spun out as NXP. “I saw what Philips was achieving in the semiconductor industry at that time and it was quite impressive. But even then, it was extremely evident to me that the industry needed a major catchup, particularly in the analog-chip world,” recalls Knol, Manager of Failure Analysis for Security and Connectivity at NXP. “I came to Nijmegen to tour their cutting-edge MOS-4 fab, and it really piqued my interest. I knew this was a place where real innovation could be realized, and I wanted to be part of it.”

In his 25 years with the company, Knol has held several positions. First as a device physics engineer, then a process integration engineer – working to improve the overall process from development to manufacturing – before opting for a move to NXP’s failure analysis (FA) department. “I chose failure analysis because it combines all corners of NXP. Essentially, we work in a state-of-the-art silicon debug lab, where my group is responsible for identifying electrical failures within all the new products NXP launches and ensuring all of our products meet the highest quality standards,” describes Knol. “We help the design teams identify issues in the design and manufacturing chains. To do that, NXP provides us with top-of-the-line equipment to handle all the analysis requests, from mixed-signal processing technologies down to 16nm, and using techniques like laser voltage probing, laser frequency mapping and nanoprobing – we do it all.”

Evolving

One thing Knol has noticed in his two and a half decades of service is just how quickly the industry is evolving. According to him, engineers, at least in his department, are having to go well beyond their areas of focus and broaden their understanding of NXP’s entire production chain, especially as chip complexity continues to explode. One essential tool he relies on to keep his team sharp: training and personal development.

“Almost no one comes out of university, or even from another department, having a solid grasp of the entire field at NXP. When someone joins our team, they’ve got to learn at least 4-5 different areas of the production chain,” depicts Knol. “It’s only with that knowledge that you can solve the kinds of problems that get sent to us – ie a chip isn’t working, but with no clue as to why. Typically, new hires have a background in physics or chemistry or electronics, and maybe they’ll even have experience in analog or digital design, but hiring someone with expert knowledge of mixed-signal design and all these other disciplines doesn’t really happen.”

For Knol, however, it’s precisely this understanding of multiple aspects and disciplines that’s so crucial to the success of NXP’s FA lab, and why he’s a big believer in tech training. Knol: “Our competence program is primarily focused on broadening the knowledge of our engineers. They need to have a broad view of everything involved in creating a chip.”

'At NXP, we’ve had a shift from truly analog design to embedding digital more and more – so mixed-signal designs – and it’s happening ridiculously fast'

Digital transition

One driving force that Knol and NXP have experienced in the semiconductor sphere is the transition from analog to digital chips, or at the very least a combination of the two. “At NXP, we’ve had a shift from truly analog design to embedding digital more and more – so mixed-signal designs – and it’s happening ridiculously fast,” says Knol. “But even products that were 100 percent analog in the past, for good reasons, are now embedding more digital cores.”

Knol uses the example of NXP’s smart antenna solutions product line for 5G applications, where they used to deliver single RF transistors or RF low-noise amplifiers but now have started embedding digital content in that line of chips. “These chips are now much more complex, and the engineers that have spent years perfecting the analog design are now suddenly facing products with digital content. At first, they didn’t know how to deal with that, how to interpret that, or even how to test.”

That’s when NXP’s FA department reached out to High Tech Institute and arranged for an in-company session of the tech training “Test and design-for-test for digital integrated circuits”. “This shift to digital isn’t going to go away, it’s only going to become more prevalent. As a unit, we decided we needed to establish new competencies in this domain and this training was a perfect opportunity,” highlights Knol. “We chose High Tech Institute because of its undeniable link to the high-tech industry. They have a strong understanding of the domain because the trainers are actually from the industry. More importantly, we were able to work directly with them to tune the content of the tech training to our specific needs. That was the real strength that we saw in High Tech Institute.”

 

Time management

Of course, the success of any technology company depends on highly skilled and highly technical people. Sometimes, however, success can also stem from the soft skills of employees – such as good communication, stakeholder management and using time in the most efficient way. But as complexity continues to increase and engineers take on more responsibility, soft skills can sometimes be a challenge. “We have some really outstanding minds at NXP. Our engineers are some of the best in the world. But one thing we’ve found is that the most specialized technical people can often be lacking when it comes to soft skills,” Knol describes. “Efficiency is key in an environment like this, which means every day you’re being challenged to do more in your daily efforts.”

This can be a little tricky when trying to balance work, meetings, planning and the many personalities you encounter in the workplace. That’s why NXP adopted another tech training from High Tech Institute: “Time management in innovation.” “We saw that people were struggling with time management. To be honest, I was one of them myself. So, we took this training and made it a default course for our people – meaning at some point in time, everyone should take it. And it’s from personal experience that I can say this tech training is extremely helpful,” states Knol. “People came back from this course having learned new tools to embed better planning in their work, learning how best to establish boundaries and how to address the issues they face in communicating with others. So yeah, that has become another default module that we offer to our people. Time management, education, self-reflection, taking leadership and working in project teams on a global scale. These are the kinds of courses that have become quite important to us. We believe that by investing in these trainings to help our workers enhance their personal development, it makes us a stronger department within NXP.”

This article is written by Collin Arocho, tech editor of Bits&Chips.

Klaus Werner cooks up a new solid-state RF training

RF energy systems have undergone a huge transformation since the early days of the tube-based magnetrons. But according to High Tech Institute trainer Klaus Werner, while the crude power of the tube is tough to match, the new generation of solid-state RF integrated circuits offers unprecedented control, efficiency and reproducibility.

Klaus Werner didn’t get the usual start in the field of RF energy solutions. After studying physics at the University of Aachen, he came to Delft University of Technology to further develop CVD systems for semiconductor technology. “At the time, I was just meant to be there for six months,” remembers Werner. But eight years and a PhD in silicon germanium growth in CVD-type systems later, Werner found himself still in Delft. “It was definitely time for a new challenge,” he recalls. Then, in 1995, Werner joined the MOS-3 fab in Nijmegen for 10 years before moving to Eindhoven to join the Philips team responsible for laser displacement sensors – the ones still used in computer mice today.

The fit wasn’t quite right for Werner, and the 3+ hours of commuting every day for work simply wasn’t working. So back to Nijmegen he went, becoming part of the RF power group at NXP. “The group was mostly concerned with the development of semiconductor technology and devices for high-power, high-frequency applications of RF. Most notably, in the areas of base stations for the cellular network, telephone, radar systems, and to a large extent, radio-TV transmission,” Werner describes. But it was while he was there at NXP that he saw people were applying the electromagnetic waves not for communications and data but using their sheer energy to power plasmas for lasers, lights and even medical applications, for example in hyperthermia.

White goods

Suddenly, activity in the solid-state RF energy realm really started to heat up, specifically driven by white-goods companies, which get their name from the standard white-coated exteriors of home appliances. “Whirlpool and several others saw a business opportunity to improve microwave ovens in the way they heat food,” explains Werner. “That’s when we started the RF Energy Alliance, an industry consortium that set out to establish standards, create roadmaps and develop new generations of the technology to build consensus and bring down cost.” But a few years in, the white-goods companies pulled out, as it was simply taking too long to bring down costs to a level competitive with magnetron-powered ovens.

“NXP, as a semiconductor company, wanted to focus on components and the technology behind the components. At the same time, I was focused on pushing forward with openly spreading the knowledge and interest of the technology and its applications, and in the end, we decided to split,” says Werner. “That’s when I decided to jump into the gap that I saw in the RF-energy field, and created Pink RF – taking on the name ‘pink’ as a nod to the breast cancer support organization Pink Ribbon – with an overall desire to develop the technology for wide use in areas that could really help people’s lives, for example in medicine.”

'One of the major hurdles in getting this technology known and used by broader audiences is sharing the knowledge about it'

Sharing knowledge

Despite the RF Energy Alliance folding, Werner was a firm believer in the promise of the technology and knew there was real value in the efforts of the failed consortium. “One of the major hurdles in getting this technology known and used by broader audiences is sharing the knowledge about it,” asserts Werner. “I was writing articles, preparing workshops and trainings, anything to increase the knowledge. I found that many people just didn’t have a solid idea of how to approach this unusual heat source.” Refusing to give up, Werner came across the International Microwave Power Institute (IMPI), which was doing much of the same outreach and promotional work on microwave power that he had been looking for in the old RF Energy Alliance. Today, he serves as the chairman of IMPI’s RF energy section and is responsible for disseminating information about the technology and creating training opportunities to share his knowledge.

“That’s one of the reasons I wanted to join High Tech Institute. It’s a real institution that goes beyond simply giving workshops. It allows us to better reach technical people and connect with a specific audience and cater to its specific needs,” Werner says enthusiastically. “One of the best parts is that many participants already have a good understanding of what the technology entails. Everything they’ve already learned in school, about the behavior of waves and diffraction and refraction, still absolutely holds true. That idea alone has major implications, from a foundational aspect. It helps open minds and starts to build perspective on this technology.”

New training

Werner’s first edition of the new “Solid-state generated RF and applications” training aims to do just that. The three-day course will give participants an inside view of the development of the technology, from the previous generation of high-frequency tube-based magnetrons to the modern-day solid-state electronics-based energy source. “In terms of crude power, the magnetrons are tough to beat. The problem, however, stems from the lack of optimization and control of the tube and the degradation of the signal over time,” illustrates Werner. “The new generation of solid-state RF is really being driven by cellular communications, where there’s a need for high power linearity that’s created by transistors and semiconductors. This method creates a stable, efficient and, more importantly, controllable and reproducible signal that could never be realized by the magnetron.”

“There are many factors that come into play when determining how best to utilize RF energy and we’ll cover a lot of them in the new training. We’ll use a mixture of theory and practice to dig deeper into the technology. From safety aspects like radiation exposure – which is not a thing – to frequencies, behavior and interaction with matter,” describes Werner. “The reality is that this technology is extremely useful and completely scalable. From heating minute amounts of liquids under very well-controlled circumstances for Covid testing, up to cooking 1,000 liters of soup every hour. This modular technology is applicable from microjoules up to megajoules, with nearly unending possibilities.”

This article is written by Collin Arocho, tech editor of Bits&Chips.