Why are there so many stupid products?

During the last year, I’ve been in several discussions that, to a large extent, boiled down to “why is this product so stupid?”. The stupidity was defined by the system’s inability to anticipate user actions, its inability to learn to function better in a specific context or its total reliance on the user to initiate activities, even when it was completely obvious what needed to be done. A few examples.

A representative of a company building radars relayed the story of being asked by a customer why the radar, once placed in a specific location, functioned the same after 2 minutes, 2 hours, 2 days and 2 months. Why wasn’t the radar learning from its context and improving its ability to detect objects by knowing what the static elements in the environment are and using that to better distinguish new objects?

A user of a route planning system complained about frequently being late for meetings because he wasn’t proactively warned about traffic jams that didn’t exist when he looked up the expected travel time the day before. Why doesn’t the system warn me, he lamented, about an unexpected traffic jam due to an accident or something so that I can leave earlier?

A company using expensive, high-tech equipment complained about the system being unable to adapt and get the hang of their very predictable schedule of operations. The equipment required adjustment time between different types of usage and even though the company ran virtually the same schedule day after day, the system didn’t learn to initiate the reconfiguration and subsequent adjustment by itself.

All these systems were built to the specifications that were drawn up before the start of development. All of them passed the validation and verification tests with flying colors. And yet, they fail to delight customers and users and provide significantly less efficiency and effectiveness than they could.

'We have a set of tools that can help address the stupidity of products'

Being in the age of AI, we have a set of tools in our toolbox that can help address the stupidity of products. Using different forms of learning and experimentation, we’re able to build systems that detect patterns, develop hypotheses about these patterns being consistent, run experiments with proactive system behavior, measure the effect of each experiment and then learn from it.
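The loop described above — detect a pattern, hypothesize it is stable, experiment with proactive behavior, measure the outcome — can be sketched in a few lines. Everything here (function names, thresholds, the hour-of-day example) is a hypothetical illustration, not code from any of the products mentioned:

```python
from collections import Counter

# Hypothetical sketch of a detect/hypothesize/experiment/measure loop.
def detect_pattern(observed_hours, min_support=0.6):
    """Return the most frequent hour if it covers at least min_support
    of the observations, otherwise None (no stable pattern detected)."""
    if not observed_hours:
        return None
    hour, count = Counter(observed_hours).most_common(1)[0]
    return hour if count / len(observed_hours) >= min_support else None

def evaluate_experiment(accepted, total, threshold=0.7):
    """Keep the proactive behavior only if users accepted the
    system-initiated action often enough."""
    return total > 0 and accepted / total >= threshold

# The equipment was reconfigured around 07:00 on most observed days,
# so the system hypothesizes a schedule and tries acting proactively.
hypothesis = detect_pattern([7, 7, 7, 8, 7, 7])            # -> 7
keep_behavior = evaluate_experiment(accepted=8, total=10)  # -> True
```

In a real system, the measurement step would of course feed back into the hypothesis, closing the reinforcement-style loop the next paragraph alludes to.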

A theoretical AI researcher may claim that this is reinforcement learning and at its core, that’s a correct conclusion. However, it would also violate the Einstein principle of making everything as simple as possible, but not simpler. The key challenge in making systems smarter isn’t the basic reinforcement learning, but rather our ability to realize the aforementioned activities and behavior in systems without causing safety or security risks, without annoying the user (remember Clippy?), while managing the stochastic nature of feedback and focusing on those things that actually add value for the user.

Still, customers are increasingly expecting their products to get better every day they use them. I want my car, my phone, my computer, my apps, my wearables to get better every day. I want my devices to learn from me and my behavior to deliver more value to me by adjusting accordingly. To achieve this, it’s not enough to adopt DevOps and run A/B tests, but it also requires fully autonomous experimentation by systems at speeds that R&D organizations simply cannot match.

Our systems shouldn’t nudge us into different behaviors, as many social media apps tend to do, but rather act proactively on our behalf and to our benefit. I want the systems that I use and interact with to take the lead and remove the burden of always remembering and initiating activities from my shoulders and free me to focus on the things that I’m uniquely good at. Please stop building stupid systems and focus on adding smart, proactive behavior instead of yet another feature.

FREE WEBINAR – a hundred ways your machine learning systems are vulnerable

High Tech Institute and Cydrill organized a 45-minute session on October 6, 2020, that gives you a thorough overview of how ML applications can be hacked and what you can do about it.

This recorded webinar is an excerpt from the brand-new face-to-face or online course on machine learning security that High Tech Institute and its software security partner Cydrill are launching.

In this webinar, security expert Balázs Kiss will teach you:

  • About the cat and mouse game of software security;
  • Why machine learning security is important, and why it is difficult;
  • About the many ways the bad guys can compromise your ML systems;
  • Some real-world attacks on machine learning systems and how to defend against them;
  • How Cydrill courses can raise your paranoia to a healthy level and make your machine learning systems more robust and secure.

Outline

Introduction

  • What makes machine learning a valuable target?
  • Threats from the real world:
    – Some real-world abuse examples
    – Dealing with AI/ML threats in software security

Machine Learning Security

  • Adversarial ML examples
    – Poisoning and evasion attacks
    – Demo – ML evasion attack
    – Case studies
  • The ML supply chain
    – TensorFlow security issues and vulnerabilities

Learning how not to code

Conclusion, Q&A

Presenter: Balázs Kiss

Balázs has been working in the software security field for more than 13 years as a security evaluator, researcher, and mentor. Recently, he has focused on helping developers learn how typical vulnerabilities are introduced during software development and how to stop these problems at the source. To date, he has taught more than 60 training courses worldwide.

Does data-driven decision-making make you boring?

With all the focus on data and AI, it was simply a matter of time before the countermovement started. Reflecting on several discussions around this topic that I’ve had over the last year, the key theme seems to be that data and AI are predicting the future based on the past and as long as the future is like the past, this works fine. However, the world is in constant flux and these technologies cause stagnation as we can’t predict fundamental shifts and disruptive innovations. Even worse, we don’t even look for them as we look at data in a short-sighted fashion.

'Not exploiting the advantages of data and AI is tying one arm behind your back'

Although I most certainly believe that there’s a very important place for human creativity and insight, I also think that not exploiting the advantages data and AI offer is simply akin to shooting yourself in the foot or tying one arm behind your back. There are several reasons for this.

First, for all the criticism of machine learning for predicting the future, the fact is that in most cases, humans are even worse at it. Even for highly variable data, ML algorithms often manage to exploit patterns that humans fail to detect. For large retailers, predicting the amount of product to order and then allocating it to each individual shop used to be a human task, but it’s clear that ML algorithms, given sufficient data, do a better job. A counterargument heard frequently of late is that these algorithms didn’t predict the Covid-19 disruption, but of course, humans didn’t predict it either, leaving many stores with a significant surplus of goods.

Second, I still meet people that continue to express beliefs about the world, their industry, their customers or their own performance that simply aren’t true. Although some, like Steve Jobs, were known for their “reality distortion field,” for virtually all of us, just wishing for something to be true doesn’t make it so. As William Edwards Deming famously said: in God we trust; all others must bring data.

Third, data-driven practices don’t remove human creativity but instead focus it on the formulation of hypotheses. In traditional organizations, one can build a career on making strong statements that are hard to verify and being vocal about them. Often, these are based on individual instances and storytelling, to which we as humans are very sensitive. When adopting data-driven practices, the focus should be on formulating testable hypotheses and being less concerned with being proven wrong. Even hypotheses that are creative and novel but don’t pan out provide ample opportunity for learning.

Fourth, when using data-driven practices, you need to know what you’re optimizing for. In virtually all companies that I work with, features are prioritized and developed based on the beliefs of some product manager. The effect of the prioritized feature on the customer or system behavior and the way it generates value is often described in qualitative and vague terms. The worst argument here is that it’s a “strategic investment.” Rather than prioritizing a feature to be developed based on the beliefs of a product manager, it’s much better to treat the feature as a hypothesis, define its expected, quantitative effect and then measure its impact as you iteratively develop the feature slice by slice.
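Treating a feature as a hypothesis with an expected, quantitative effect can be made concrete with a simple significance check. The sketch below uses a permutation test; all names, the metric and the thresholds are illustrative, not drawn from any particular product analytics stack:

```python
import random
import statistics

# Hypothetical sketch: accept a feature only if its measured lift on
# some metric meets the expected lift and is statistically significant.
def permutation_p_value(control, treatment, iterations=2000, seed=1):
    """Two-sided p-value for the observed difference in means."""
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(control) + list(treatment)
    n = len(treatment)
    hits = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / iterations

def feature_validated(control, treatment, expected_lift, alpha=0.05):
    """The hypothesis holds if the measured lift meets expectation
    and the difference is unlikely to be chance."""
    lift = statistics.mean(treatment) - statistics.mean(control)
    return lift >= expected_lift and permutation_p_value(control, treatment) < alpha
```

Developing the feature slice by slice then simply means re-running this check after each slice, instead of arguing from belief.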

Working in a data-driven fashion doesn’t make you boring. Instead, it instills a higher level of discipline in the organization, uses technology where it fits best and focuses creative energy on the areas where humans provide the most value. It helps organizations to shed so-called “shadow beliefs” (beliefs that everyone in the organization considers to be true but that are not) and, through that, remove hypotheses that don’t hold from the pool of ideas. Neither humans nor machines can predict the future. However, although history never repeats itself, it often rhymes. And machine learning is better at detecting the rhymes than you.

Machine learning adds another layer to your software security challenge

Although machine learning security research is still in its early stages, it’s clear that input possibilities without barriers increase threats. You don’t need to touch a keyboard anymore to fool a machine learning system. Software security expert Balázs Kiss touches upon a few points in this new field and gives advice on the basic protection measures.

Just like software in general, machine learning systems are vulnerable. “On the one hand, they’re pretty much like newborn babies that rely entirely on their parents to learn how the world works – including ‘backdoors’ such as fairy tales, or Santa Claus,” says security expert Balázs Kiss from Cydrill, a company specialized in software security. “On the other hand, machine learning systems are like old cats with poor eyesight – when a mouse learns how the cat hunts, it can easily avoid being seen and caught.”

Things don’t look good, according to Kiss. “Machine learning security is becoming a critical topic.” He points out that most software developers and experts in machine learning are unaware of the attack techniques. “Not even those that have been known to the software security community for a long time. Neither do they know about the corresponding best practices. This should change.”


Security expert and experienced software trainer Balázs Kiss recently developed a new course on machine learning security to be rolled out shortly by High Tech Institute in the Netherlands.

Machine learning (ML) solutions – like software systems in general – are vulnerable in various ways, and they increase the security needs. Last year, this was pointed out in a quite embarrassing and simple way by two students from Leuven. They easily managed to mislead Yolo (You Only Look Once), one of the most popular algorithms for detecting objects and people. By carrying a cardboard sign with a colorful print of 40 by 40 cm in front of their bodies, Simen Thys and Wiebe Van Ranst made themselves undetectable as persons. Another example comes from McAfee researchers, who managed to fool the Tesla autopilot into misclassifying speed limit signs, making the car accelerate past 35 mph.

Know your enemy

“An essential cybersecurity prerequisite is: know your enemy,” states Kiss, who is also an experienced software trainer and recently developed a brand new course on ML security to be rolled out shortly by High Tech Institute in the Netherlands. “Most importantly, you have to think with the head of an attacker,” he says.

Let’s take a look at what the attackers are going to target in machine learning. It all starts with exploring what security experts call “the attack surface”: the combination of all the different points in a software environment where an unauthorized user can try to enter or extract data. Keeping the attack surface as small as possible is a basic security measure. As the students from Leuven proved: to fool an ML system, you don’t even have to touch a keyboard.

'Garbage in, garbage out.'

A common saying in the machine learning world is “garbage in, garbage out.” All algorithms use training data to establish and refine their behavior. Bad data results in unexpected behavior. This is possible due to the model performing well on the training data but unable to generalize the results to other examples (overfitting), the model being unable to capture the underlying trends of the data (underfitting) or due to problems with the dataset. Biased, faulty or ambiguous training data are of course accidental problems, and there are ways to deal with them. For instance, by using appropriate testing and validation datasets. However, an adversary feeding in such bad input intentionally is a completely different scenario for which we also need special protection approaches.
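The overfitting and underfitting failure modes mentioned above can be captured in a coarse diagnostic on training versus validation scores. This is an illustrative sketch with made-up thresholds, not a canonical test:

```python
# Hypothetical sketch: flag overfitting (strong on training data, much
# weaker on validation data) and underfitting (weak on both). The
# thresholds are illustrative, not canonical.
def diagnose_fit(train_score, validation_score, good=0.9, max_gap=0.1):
    if train_score >= good and train_score - validation_score > max_gap:
        return "overfitting"
    if train_score < good and validation_score < good:
        return "underfitting"
    return "acceptable"

diagnose_fit(0.99, 0.70)  # -> "overfitting": memorized, doesn't generalize
diagnose_fit(0.60, 0.55)  # -> "underfitting": misses the underlying trends
```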

Attackers are smart

Kiss: “We simply must assume that there will be malicious users. These attackers don’t even need to have any particular privileges within the system, but they can provide raw input as training data and see the system’s output, typically the classification value. This already means that they can send purposefully bad or malicious data to trigger inadvertent ML errors.”

'Attackers can learn how the model works and refine their inputs to adapt the attack.'

“But that’s just the tip of the iceberg,” finds Kiss. “Keep in mind that attackers are always working towards a goal. They will target specific aspects of the ML solution. By choosing the right input, they can actually do a lot of potential damage to the model, the generated prediction and even the various bits of code that process this input. Attackers are smart. They aren’t restricted to sending static inputs – they can learn how the model works and refine their inputs to adapt the attack.”

In the case of supervised learning, this covers all three major steps of the ML workflow. For training, an attacker may be able to provide input data. For classification, an attacker can provide input data and read the classification result. If the ML system has feedback functionality, an attacker may also be able to give false feedback (“wrong” for a good classification and “correct” for a bad one) to confuse the system.

Crafted inputs

Many attacks make use of so-called adversarial examples. These crafted inputs either exploit the implicit trust an ML system puts in the training data received from the user to damage its security (poisoning) or trick the system into mis-categorizing its input (evasion). No foolproof method exists currently that can automatically detect and filter these examples; even the best solution, where a system is taught to recognize adversarial examples, is limited in scope.
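The evasion idea can be illustrated on a toy linear classifier: shifting each feature against the sign of its weight lowers the score until the decision flips. This sketch only mirrors the principle behind gradient-style evasion attacks; it is not taken from any real attack toolkit:

```python
# Hypothetical sketch of an evasion attack on a toy linear classifier.
def classify(weights, bias, x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score >= 0 else 0

def evade(weights, x, epsilon=0.5):
    """Perturb each feature by +/- epsilon in the direction that
    pushes the classification score down."""
    return [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights, bias = [1.0, -2.0], 0.0
x = [1.0, 0.2]             # score 0.6 -> classified as 1
x_adv = evade(weights, x)  # score -0.9 -> now classified as 0
```

A small perturbation per feature is enough to cross the decision boundary, which is exactly why adversarial examples are so hard to filter automatically.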


By carrying a cardboard sign with a colorful print of 40 by 40 cm in front of their body, Simen Thys and Wiebe Van Ranst made themselves undetectable as human persons. Credit: KU Leuven/Eavise

There are defenses for detecting or mitigating adversarial examples, of course. However, an intelligent attacker can defeat solutions like obfuscation by producing a set of adversarial examples in an adaptive way. Kiss points to some excellent papers that highlight these attacks, like those from Nicholas Carlini and his colleagues at Google Brain.

All in all, ML security research is still in its early stages. The current studies mostly focus on image recognition. However, some defense techniques that work well for images may not be effective for text or audio. “That said, there are plenty of things you can still do to protect yourself in practice,” divulges Kiss. “Unfortunately, none will protect you completely from malicious activities. All of them will however add layers of protection, making the attacks harder to carry out.”

Most important, maintains the Cydrill expert, is that you think with the head of an attacker. “You have to train neural networks with adversarial samples to make them explicitly recognize this information as incorrect.” According to Kiss, it’s a good idea to create and use adversarial samples from all currently known attack techniques. A test framework can generate such samples to make the process easier. There are existing security testing tools that can help with this – like the ML fuzz testers TensorFuzz and DeepTest, which automatically generate invalid or unexpected input.

Sanity checks

Limiting the attacker’s capabilities to send adversarial samples is always a good mitigation technique. One way to achieve this is simply to limit the rate of inputs accepted from one user. Of course, detecting that the same user is behind a set of inputs might not be easy. “This is the same challenge as in the case of distributed denial-of-service attacks, but the same solutions might work as well.”
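A per-user rate limit like the one described can be sketched as a sliding window. The names here are illustrative, and, as noted above, reliably tying inputs to a single user is the hard part in practice:

```python
import time
from collections import defaultdict, deque

# Hypothetical sketch: cap how many inputs one user may submit
# within a time window.
class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # user_id -> submission times

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[user_id]
        while q and now - q[0] > self.window:
            q.popleft()                    # drop entries outside the window
        if len(q) >= self.max_requests:
            return False                   # over the limit: reject the input
        q.append(now)
        return True
```

For example, with `RateLimiter(max_requests=2, window_seconds=10)`, a third submission from the same user within ten seconds is rejected, slowing down an attacker who needs many probes to refine adversarial samples.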

As always in software security, input validation can help. It may not be trivial to automatically tell good inputs from bad ones, but it’s definitely worth trying. We can also use machine learning itself to identify anomalous patterns in the input. “In the simplest case, if data received from an untrusted user is consistently closer to the classification boundary than to the average, we can flag the data for manual review, or just omit it.”
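The boundary-proximity heuristic Kiss describes might look like this in outline. The comparison factor is an illustrative threshold, not a recommended value:

```python
import statistics

# Hypothetical sketch: flag one user's inputs for manual review when
# their classifier scores sit consistently closer to the decision
# boundary (score 0) than the population average.
def flag_for_review(user_scores, population_scores, factor=0.5):
    """Flag if the user's mean distance to the boundary is well
    below the population's mean distance."""
    user_distance = statistics.mean(abs(s) for s in user_scores)
    population_distance = statistics.mean(abs(s) for s in population_scores)
    return user_distance < factor * population_distance
```

A user whose inputs all land near the boundary is behaving unlike the population, which is exactly the anomalous pattern worth reviewing, or simply omitting, as Kiss suggests.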

Applying regular sanity checks with test data can also help. Running the same test dataset against the model upon each retraining cycle can uncover poisoning attack attempts. Kiss: “Reject On Negative Impact, RONI, is a typical defense here, detecting whether the system’s capability to classify the test dataset degrades after the retraining.”
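RONI’s core check can be sketched in a few lines. The `train` and `evaluate` functions below are toy stand-ins (a majority-label “model”) for a real training pipeline, and the tolerance is illustrative:

```python
# Hypothetical sketch of Reject On Negative Impact (RONI): retrain with
# the candidate batch included and reject it when accuracy on a trusted
# test set degrades beyond a tolerance.
def roni_accepts(train, evaluate, base_data, candidate_batch,
                 test_set, tolerance=0.02):
    baseline = evaluate(train(base_data), test_set)
    retrained = evaluate(train(base_data + candidate_batch), test_set)
    return retrained >= baseline - tolerance

def train(labels):
    return max(set(labels), key=labels.count)   # toy majority-label "model"

def evaluate(model, test_labels):
    return sum(1 for y in test_labels if y == model) / len(test_labels)

base, test = [1, 1, 1, 0], [1, 1, 1, 1, 0]
roni_accepts(train, evaluate, base, [1, 1], test)        # clean batch: accepted
roni_accepts(train, evaluate, base, [0, 0, 0, 0], test)  # poisoned: rejected
```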

The most obvious fact about ML security is often overlooked, notes Kiss. “Machine learning solutions are software systems. We program them in Python – or possibly C++ – and thus they potentially carry all common security weaknesses that apply to those languages.” The Cydrill trainer especially advises us to be aware of point 9 of the OWASP Top Ten, a document from the Open Web Application Security Project that summarizes the ten most critical security issues in web applications to raise awareness and help minimize the risk of attacks. Point 9 warns developers about using components with known vulnerabilities. “Any vulnerability in a widespread ML framework such as TensorFlow or one of its many dependencies can have far-reaching consequences for all of the applications that use it.”

Potential attack targets

The attackers interact with the ML system by feeding in data through the attack surface. Start to think with the head of the attacker and ask questions. How does the application digest the information? What kind of data? Does the system accept images, as well as audio and video files? Or are there restrictions? If so, how does it check the types? Does the program do any parsing or does it delegate it entirely to an open-source or commercially available media library? And after preprocessing the data, does the program have any assumptions (empty field, requirements on values)? Is data stored in a relational database or in XML or JSON? If so, what operations does the code perform on this data when it gets processed? Where are the hyperparameters stored, and are they modifiable at runtime? Does the application use third-party libraries, frameworks, middleware or web service APIs as part of the workflow that handles user input? If so, which ones?

Kiss: “Each of these questions can indicate potential attack targets. Each of them can hide vulnerabilities that attackers can exploit to achieve their original goals.”

These vulnerability types are not related to machine learning as much as to the underlying technologies: the programming language itself (probably Python), the deployment environment (mobile, desktop, cloud) and the operating system. But the dangers they pose are just as critical as the adversarial examples – successful exploitation can lead to a full compromise of the ML system. This isn’t restricted to the code of the application itself. Researcher Rock Stevens from the University of Maryland explored vulnerabilities in commonly used platforms such as TensorFlow and PyTorch.

Real threats

Kiss’ main message is that ML security covers many real threats. It isn’t just a subset of cybersecurity; it shares many traits of software security in general. We should be concerned about malicious samples and adversarial learning, but also about all the common software security weaknesses. Machine learning is software, after all.

ML security is a new discipline. Research has just begun; we’re only starting to understand the threats, the possible weaknesses and the vulnerabilities. Nevertheless, ML experts can learn a lot from software security. The last couple of decades have taught us plenty of lessons there.

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Understanding how to generate value – within time and budget

Luud Engels, trainer of the System architect(ing) training at High Tech Institute

As a project manager, system architect and crisis manager in the high-tech industry, Luud Engels has a reputation for not mincing words. In addition to his consultancy work, he recently started as a system architect(ing) trainer at High Tech Institute. “Clear communication is key in complex development environments.”

You don’t want to get Luud Engels started on how open-minded and communicative we are as system architects in the Dutch high-tech industry. He’ll be forceful in his response, underlining just how hypocritical it is to believe that. “Here in the Brabant region, we’re not that open at all. Just stand at a coffee machine and listen. We’re not talking with you, we’re talking about you.”

When it comes to direct communication – or rather, confrontation – Engels has a reputation. A few months ago, he was sent packing after strongly expressing – according to his client – what was wrong within the company. “I’m convinced that at the right time, you can say anything to anyone – be it in a team meeting or a discussion between two people. Of course, most Dutch don’t do that. But I don’t seem to excel at it either because I sometimes put things so bluntly that people tell me to get lost.”

Engels’ appreciation of factual and clear communication comes from his many years of experience as a project manager, a system architect, a crisis manager and a member of the management team at engineering firm TMC. His advice for development environments: “Speak your mind. Also, about personal stuff. It’s perfectly fine to tell someone his blue shirt bothers you. But statements like ‘Microsoft sucks and Apple is good’ don’t help. Make it factual: are we going to work object-oriented or process-oriented? Are we going to use glass or titanium? What are the advantages? What are the disadvantages? Talking about glass, I don’t need to know the whole history of glassworks. I want the five key criteria – in numbers, not in positives and negatives. If you know the dominant parameters, you also know how to measure them and we can agree on the first development steps to make the measurements possible.”

'Make sure the whole team is at least on the same path.'

Engels emphasizes that in the development of high-tech systems, several roads lead to Rome and that it’s important to stick to the choice made. “Make sure the whole team is at least on the same path, rather than endlessly searching for the only right solution – which, by definition, doesn’t exist.”

But sometimes, even the simplest of things can go wrong. “Once, after a positive conversation with a client, I received the report in colloquial Dutch. I asked if the client representatives had approved the text. Of course, they had not. So I insisted on writing it down in English, presenting it to the client and asking them for their approval. After all, it’s often about decisions with far-reaching consequences. Still, syncing with the customer proved a daunting task.”

The laws of Luud
  • If the financial people take over, the engineering interest becomes secondary; if the engineers take the lead, it will be financially broken
    (About balancing tech and money in high-tech OEMs)
  • The client who asks for a crisis to be averted is half the culprit or part of the crisis in question (About crisis management)
  • I firmly believe in the power of the outsider (About the crisis manager)
  • We talk past each other: one talks in Newtons per square meter, the other in bits per second (About communication and collaboration in high tech)
  • A crisis doesn’t go away by getting rid of the people who put their finger on the sore spots (About stranded development projects)

The outsider

Engels’ extensive technical career started with a study of electrical engineering, after which he joined Sattcontrol, a Swedish industrial automation specialist. He programmed PLCs for egg-grading machines, dairy factories and automated warehouses. Later, he switched to Fortran for PDP and Vax minicomputers.

After five years, Engels moved to Cap Volmac (later Cap Gemini), where he did projects. While he mainly worked in engineering, Cap’s core was business automation. “I learned a great deal about developing computer systems and software according to the rules.”

Engels started for Cap at ASML; he then worked on highway signaling at the Dutch Department of Waterways and Public Works, eventually taking on leadership roles. Later, audits were added to the mix. He estimates that he’s assessed about twenty projects. “After a day of walking around, you know what’s going on and where the project went wrong,” he says. Smilingly: “And certainly not because I’m so smart, or because I saw so much, but mainly because I was an outsider.”

Engels firmly believes in the power of the outsider. “You arrive at companies where things have gone completely wrong and then you’re allowed to walk around and speak to 5-10 people. They all have an opinion about the project in crisis. You get to hear the whole story. People want to pour their hearts out. You hear what’s wrong, and above all: what others aren’t allowed to say.”

The headstrong technician

Technicians are a stubborn, headstrong type – and Engels should know, as he certainly fits that mold. “We’re engineers, aren’t we? We think like this: ‘I’m an electrical engineer and according to my calculations, it’s 5 volts. If you don’t get it, I’ll explain again, but the outcome remains 5 volts. You’re crazy, not me.’ While in projects, it’s mainly about effective collaboration. That’s the difficult part. One talks in newtons per square meter, the other in bits per second. One talks about the goal, the other about the solution. The high tech is one big Tower of Babel. That starts with requirements and continues through to design, integration and testing. Just as well: if I do a project myself and an outsider comes in, he or she will also shoot holes in it.”


Luud Engels will lead the mid-November edition of the System Architect (Sysarch) training in Leuven (Belgium).

Engels prefers to step in when the crisis is at its deepest. Take the Fusion project that ran at Philips at the end of the nineties. Its ambitious goal was to use a single platform to cover the mechanical, electrical and software construction for medical diagnostic systems. The idea was that cost savings through reuse would justify the extensive operation. “The director outlined his problem as follows: every month, thirty new developers joined the project and every month, they told him that completion was delayed for another two months.”

'The outsider is allowed to speak up.'

Engels, again, applied the power of the outsider. “The outsider is allowed to speak up. The deeper the crisis, the more receptive one is to outside messages. Usually, other people have already had a look at it. But often, they put their fingers on sore spots that they weren’t allowed to point to and ended up having to leave. They asked me to replace the current project leader because he couldn’t make up for the delay. But a crisis doesn’t go away when you get rid of the people who put their finger on the sore spots. Instead, I went to help the incumbent project leader. Together, we contained the crisis by adjusting the scope and working with early feedback. One of my laws is: the client who asks for a crisis to be averted is half the culprit or has at least a dominant part in it.”

Is it tunnel vision?

“Please note: you’re talking about very competent people with very relevant arguments and tons of knowledge. But gradually, the solution or working method has been placed in different silos. Very skilled people wear down paths, creating trenches that are so deep that you can barely look over the edge. Everyone has his trench and is defending it stubbornly. You hear people say things like: ‘This isn’t negotiable!’ When you hear that, it points you to where it went wrong and where a possible beginning of the solution lies.”

Where does the solution start?

“The first law of crisis management is containment. With Fusion, it meant that they had to stop adding thirty people per month. Instead, they had to cut twenty a month and reduce scope. The deeper cause – in my opinion – was pure self-overestimation. The platform idea for software alone is a major challenge. But when you start including mechanics and electronics, for all diagnostic products, it becomes too much at once. It’s difficult enough to develop electronics, software and mechanics together for a single system, but trying to develop one platform for different product lines in one project is naive, to say the least. At the time, they also had to work with developers in Bangalore, and they wanted to go from CMM level 2 to level 3 at the same time. That had to stop right away. You need to limit the scope of a project in crisis and postpone long-term improvement initiatives.”

'It’s often the case that the technicians already know what’s wrong and so does management.'

“It’s often the case that the technicians already know what’s wrong and so does management. Both are right, but they won’t reach a solution together. Much later, I did a job at Philips DPS, where I saw that Philips had made significant progress. Putting fingers on sore spots, however, was still not allowed, unfortunately.”

How does this get done the right way?

Start small, says Engels. “You need early feedback, preferably a launching customer. I’ve heard Martin van den Brink say it many times at ASML: put everything together, show me that it works. Then he challenges people by stating: ‘Your physics don’t work.’ There was a lot of that during early integration. Much later, the industry introduced fancy words for it, calling it Scrum, Agile and rapid development. But the point is that you need feedback, and it’s important to start getting it at an early stage. The goal has to be to deliver every six weeks and to deliver something that actually works. If not, you have the means available to find out why it failed, why the physics didn’t work. At that point, you might have to accept that you’re not going to meet your deadline. What you definitely shouldn’t do is bring in more people.”

“When technicians tell you they need more time to investigate something, you have to get suspicious. Van den Brink is also a master at assessing or challenging that.”

'Assign a person responsible to each problem, including deadlines for results and decisions.'

Another necessity: “Make people owners of a problem. Certainly in environments with complex developments, where there isn’t even the beginning of a solution and new inventions are required, everyone feels like the master of their own idea, with their personal insight. We Dutch are also very good at seizing every opportunity to talk about this in a very broad sense. But you simply need to take the next step. That’s the only thing a project benefits from. So if you’re sitting in a room with thirty people and problems come up, the project manager, the crisis manager or the system architect must assign a person responsible to each problem. This also includes deadlines for results and decisions.”

According to Engels, it’s definitely in the culture of ASML, but there was a point in time when it got out of hand there. “They appointed an owner for everything and called him a project leader. McKinsey once did an analysis at ASML of project leaders and project sizes. They found that, on average, there were 1.2 people on each project, including the project leader! Then you run the risk that these owners, these project leaders, start competing over available resources and the underlying issue disappears into the background.”


Engels has extensive experience as a project manager, system architect and crisis manager in the high-tech industry.

The product manager defines the product that will perform well in the market. He determines the available budget – often too little – and negotiates with the system architect whether it can be made for that money. Engels: “It’s a balancing act. With mature products, it works differently, but with a first development, you want a proof of concept as soon as possible. Or at least a confirmation that your ideas are right and that you’re on the right track.”

To what extent should the system architect, like the product manager, talk directly to customers?

“In high tech, that’s beyond dispute. That’s where the product manager and the system architect come together. They have to. The former has more business focus, the latter looks at the technology and whether it’s feasible. They’re two sides of the same coin. This collaboration between the product manager and system architect is becoming more and more commonplace. However, I still see system architects who downplay the necessary coordination with the project manager or operational management. You then run the risk that a solution that perfectly meets market needs will ultimately fail in the realization phase.”

'The project manager sets hard deadlines and a system architect has to work with them.'

In smaller development projects, with ten to twenty developers, one person can take on the role of both project manager and system architect. In larger projects, with tens or hundreds of developers and several dozen suppliers, it’s important to split up. Engels has experience in both roles. “The project manager sets hard deadlines and a system architect has to work with them.”

“The project manager must define which issues the system architect still has to solve and with whom. Together, you discuss the ins and outs, weigh the benefits and concerns, decide on key parameters, and then the project manager calls the system architect: at the end of next week, we’ll make a decision! It’s all about direction, coming up with a format that involves knowledgeable people to arrive at quantified statements with which you can really make an assessment.”

A system architect has a major impact on product development, yet often has a less than visible role.

“He’s an experienced technician, but his value lies primarily in his view of the business. Ninety-nine times out of a hundred, the system architect knows the market in which his product or system is going to land. This is necessary to translate the market and product requirements into the system requirements and then outline the design.”

It takes quite a bit of experience to reach that level. At the same time, Engels observes that the concept of a system architect is subject to inflation. “Nowadays, there are architects all over the place. A software architect is usually a senior software developer, a requirements engineer or someone in charge of engineering. I wouldn’t say anything to the detriment of such a lead engineer. Still, the difference with the system architect is that the latter has to know the business, understand how value is generated and thus understand why it has to be done within a certain amount of time and money.”

“This is also the case in construction. Your architect asks you what you are going to do with your future house and adapts his design accordingly. Are you going to cook a lot, or do you mainly want to drink wine? That’s why Van den Brink does so well at ASML. He goes to customers and explains what kind of litho systems they need. He knows the market like no other. What’s more, he dictates the market. That means he understands the goals and the timing of chip manufacturers like no other, including what their production processes look like. If they talk about critical dimension and overlay, he can explain that his machine can do that and also substantiate why.”

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?' they responded with an 8.4 out of 10.

Six reasons why your digital transformation is failing

The common theme over the last weeks, as I started to talk to more and more folks in companies, is the difficulty of realizing digital transformations. Granted, I work with many people who are expected to drive the digital transformation of their organization, or who have taken it upon themselves to do so, but I believe the challenge is widespread.

Especially in the embedded systems industry, there’s a large group of people who originate in the mechanical or electronics world and can’t see beyond the limits of their technological perspective. With a digitalizing business, mechanics and electronics don’t go away – we still need a chassis and a computing platform. The main difference is that these technologies shift from being differentiating to being commodity (see figure). This requires a shift in perspective: you drive innovation when something is differentiating and you look to minimize cost when something is commodity. When the precious engine, braking system or propulsion chain suddenly needs to be optimized for cost instead of innovation, everyone working with that technology suddenly resists change. But the fact of the matter is that in virtually every industry, the differentiation is now largely generated through digital technologies, ie software, data and AI.

Differentiating versus commodity technologies

We also see a shift in business model. In industries dominated by ‘atoms,’ the primary business model tends to be transactional. You buy a box, use what’s in the box until it’s old and then you buy a new box. Industries that focus primarily on ‘bits’ tend to use continuous business models such as subscription and service models. In these industries, customers expect the offerings that they’re using to get better all the time. This shift may seem trivial but has huge implications on the architecture of your products, the way R&D is conducted, how you support your customers and even the company’s financial model.

With digital transformations, it seems to me that at least six typical behaviors can be identified. First, the “it’s not my problem” syndrome. Here, the patient thinks that digitalization has to do with data being shuffled back and forth between boxes that are outside his or her area of responsibility. As it’s not in scope, it’s not a problem that the individual has to contend with or worry about.

The second challenge is the “too complicated” case. I recently read about a study in which people were given the choice to sit in a chair and think without distractions (like their mobile phone) or to receive mild electric shocks while being allowed to distract themselves with an app of their choosing. Interestingly enough, a large majority preferred to receive electric shocks over being ‘forced’ to think. Digital transformation represents a fundamental paradigm shift and there’s a significant group of people in companies that simply can’t be bothered to logically think through the consequences of this.

The third challenge is the KPI disconnect. Even when individuals and teams accept data-driven practices, one interesting observation that we recently got confirmed in several companies is that although many organizations have top-level KPIs and many teams have local KPIs, there’s no real connection between the two. The consequence is local optimization based on team metrics, as there’s no agreed-upon way to connect local and global KPIs. As local metrics typically are short-term and based on the current, traditional business, they slow down or stop any digital transformation.

The fourth, a consequence of the third, is the “local bastardization of global strategy” challenge. As the link between the company strategy and the team actions is tenuous at best, team members will develop a rhetoric on how what they’re doing today is actually supporting the company strategy. Any overly obvious mismatches are then explained away by referring to the local and short-term challenges that need to be overcome first.

The fifth behavior I run into a lot is the “it’s too early” argument. Here, the protagonist claims to agree with all the points made related to the consequences of digitalization but in the same sentence claims that these implications will only realize themselves years down the line. Consequently, it’s too early to start making changes.

'Customers won’t ask but simply vote with their feet'

The final excuse that gets used almost continuously is that “customers aren’t asking for it.” This seems like a quite reasonable argument until you realize that all companies that waited until customers asked them for new functionality, business models or technologies have gone out of business because they were disrupted by more enlightened competitors or new entrants. Customers will simply go somewhere else when you haven’t predicted their needs. They won’t ask but simply vote with their feet.

Concluding, the realization of digital transformation in many companies is a slow, brutal, uphill battle that leaves proponents frustrated, tired and scarred. I’ve described six typical behaviors of so-called “rejectors” that you should be aware of if you’re to have any hope of overcoming this resistance. The digital transformation is real, but many incumbents are moving too slowly to avoid disruption.

Innovation and character light the path to IMS success

Interview with Martin Langkamp & Martijn Bouwhuis of IMS about system architecting
In today’s high-tech environment, companies of all sizes are looking to stay at the cutting edge of innovation. According to team leaders Martin Langkamp and Martijn Bouwhuis of Almelo-based IMS, the equation is easy. It comes down to a few key factors: keeping the employees interested, keeping the workplace light and focusing on personal development through training.

Dutch innovation in the high-tech sector comes from businesses of all sizes. While big names like ASML and Philips are recognized around the globe, there are also several small and medium enterprises (SMEs) in the Netherlands playing a big role in global high tech. Take, for example, Almelo’s IMS. IMS has been around for just over 20 years, opening its doors in 1999 after it was spun out of Texas Instruments through a management buy-out.

Now, in 2020, the automation and technology expert has delivered more than 750 production lines with an emphasis on the medical device, smart device and automotive domains. “We’ve grown a lot since the early days. Now, we see our role as helping our global customers realize their production goals,” explains Martin Langkamp, technical sales coordinator at IMS. “We do that by delivering our innovative machines all over the world that excel in the high-volume production of small, precise and sometimes extremely complex products.”

Character

While IMS’s global customer base is certainly large, the company itself has a relatively small footprint – employing more than 120 people at its Almelo and Groningen locations in the Netherlands. Despite its small stature, it’s having a big impact on consumer electronics. Currently, the high-tech machine maker is active in delivering machines used in the assembly process for the smart device and automotive sectors, in addition to next-generation headlights and sensors for cars.

“The character of IMS is that we’re always focused on innovation, not just locally, but globally,” highlights Langkamp. “That means we do a lot of international projects, which offers our engineers exciting opportunities to travel, learn and share knowledge. That’s part of our DNA.”

'We use education-based developmental plans in our evaluation process, to help people and the company meet our goals.'

Another focal point in the character of IMS is the focus on the personal development of its employees. “One of our main focuses is on continuing education for our workers. We find that trainings, workshops and conferences are a great way for our engineers to develop both personally and professionally,” comments Langkamp. “In fact, as we look to the future and continue to innovate, the necessary competencies of a position can expand and the engineers may be guided to specific courses to bolster their skills. We actually use education-based developmental plans in our evaluation process, to help people and the company meet our goals.”

Modularity

Recently, IMS found a golden opportunity to utilize training. Looking to continue to grow and push the cutting edge of complex part manufacturing, the company took on a new role for its customers, helping lead them in the design of production machines by offering series-based machines, rather than one-offs.

“For many years, R&D operated more reactively in development, finding solutions for the customers as they arose,” recalls IMS R&D team leader Martijn Bouwhuis. “More recently, however, we’ve started to adopt new methods to become more proactive in the process and we’ve focused our efforts on making standardized products that can be tailored to fit our individual customers.”

To get these standardized products, IMS decided that modular thinking was the best way to achieve the new goals and it started laying the foundational work to get its workforce aligned on the idea. However, it was during the Bits&Chips System Architecting Conference that the team found that their modular approach fit perfectly with the principles of system architecting. Langkamp: “For a few years, we’d already been adjusting our processes, but we were looking for a better structure with more continuity within the whole of the company.”


According to technical sales coordinator Martin Langkamp, one of IMS’s main focuses is on continuing education for its workers. Credit: Fotowerkt.nl

'It was time to update and professionalize our working methods.'

Bouwhuis: “While we were assessing the best way to progress, we found that often in the design process we would focus on subsystems because that’s where the value was added. Somehow, we forgot to look at things from a system level. But as the complexity of the parts our machines are making continues to explode, it’s clear that software engineering has become more important than ever and it was time to update and professionalize our working methods.”

Rather than sending a few team members to a relevant training, IMS reached out to High Tech Institute to develop a customized in-company edition of the System Architecting training, allowing the Almelo-based company to bring in a broad and diverse group of its team. “It’s important in our transition to establish cohesion among all the different disciplines and departments,” says Langkamp. “From mechanical to electrical and software engineers to the sales team, the goal was to get everyone on the same page, thinking at a system level.”

Added value

“The reason we selected High Tech Institute was the strength of its instructors. Their knowledge and expertise matched our needs precisely,” emphasizes Bouwhuis. “What we appreciated the most was that the trainers found ways to trigger discussion, which got our group of about 12 trainees really participating. This interaction between the team and the instructors, all with different perspectives, really enhances the training with a lot of added value.”


“This interaction between the team and the instructors really enhances the training with a lot of added value,” says IMS R&D team leader Martijn Bouwhuis. Credit: Fotowerkt.nl

Does IMS use training to attract or keep its skilled engineers? Is it difficult to compete with larger companies in the high-tech domain?

“Yes and no. Yes, training and education opportunities are a great tool to attract and retain our engineers. But, as far as competing or losing our skilled workers to the bigger companies, no, that’s not the case. In fact, I think the size of IMS, the scope of our work and our approach is something that draws people to us and makes them want to stay,” illustrates Langkamp. “In the Brabant region, it’s pretty common for engineers to bounce around from place to place, but here at IMS and in the Twente region in general, it’s just not as common.”

'Sometimes we refer to IMS as a high-tech playground for engineers.'

“Because we’re small, we’re able to keep things light and fun in the workplace. Of course, we’re extremely professional in working with our customers. But the people here are more than just a number and embracing that mentality means we can operate as a family and have fun,” adds Bouwhuis, joking: “Sometimes we refer to IMS as a high-tech playground for engineers.”

“Yes, exactly. Because of our roots in Texas Instruments, we sometimes joke about having people working here for 40 years, even though the company is only 20 years old,” laughs Langkamp. “By keeping our people interested with exciting projects, a light-hearted, informal workplace and a focus on our workers and their development, IMS is in a strong position to continue innovating.”


Photo credit: Fotowerkt.nl

This article is written by Collin Arocho, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?' they responded with an 8.8 out of 10.

How do I give direct feedback to colleagues who are not used to it?

An engineer asks:

I work with many colleagues from different cultures. Most of them are not used to how we give each other feedback in the Netherlands. Yet it is necessary to indicate what someone can or must improve if something is not going well. How do I tackle that?

The communication trainer answers:

Feedback is about delivering a message to the other person without disrupting the relationship. The direct ‘Dutch’ way may not work well when you work with cultures that are not used to this. The following ten points will help you.

1) As a starting point, it is always good to check with yourself whether you are giving the feedback to help the other person and improve the situation, and not just to confirm your authority or to kick someone’s ass.

2) It is important to invest in building a trusting relationship with your employees before you give them feedback. You’ve probably had the most valuable feedback yourself from people close to you. Building relationships in many countries often happens outside of work hours. So, it can help to pay attention to this regularly.

3) In Asian and Arab countries, avoiding loss of face is important. It can therefore be helpful to start with more subtle feedback to get things done. Suppose you see that one section in a report contains errors. Then mention how good the other section is. The person will pick up on the fact that the first part still needs attention. If you say something like this to an American or Dutch person, he will not understand your feedback and will prefer that you directly say what is wrong, because that is faster. Start subtle first; then, should your message not be picked up, you can always become more direct. Backtracking once the cat is out of the bag is much harder.

'Start subtly, you can always be more direct'

4) Focus your feedback on behaviors and features of the work rather than judging the person. Instead of “Your work is sloppy,” say “This presentation has three errors.” Instead of “This report is incomplete,” you can say “I would like to see an additional table.”

5) To avoid loss of face, you can also use the passive voice instead of the active one. That sounds like this: “The front desk was unoccupied for fifteen minutes this morning” instead of “You were late.” That way, you avoid personal accusations. The addressed person can infer from the comment that it was his responsibility to be on time and that the absence was noticed. In many languages, this passive phrasing is already ingrained. If you are not used to it, it may take some effort.

6) Say what you do want instead of what you don’t. “Stop doing this” just sounds like a rebuke, whether you are 2 or 52. So, “Try to do it this way” instead of “This is not the way.”

7) Addressing the entire team about, for example, adherence to the new work rules makes the pressure on the individual less immediate. Peer pressure can then cause that person to follow the rules anyway without you having to address them directly.

8) Speak softly rather than loudly. A quiet voice can relax a tense situation and makes it easier for the employee to hear your message.

9) Showing respect in your actions for the person you are giving feedback to is especially important. You can show this by spending time together, by asking the other person for advice or by sending someone to them for advice. Involving yourself in the solution also shows that no one has all the answers alone and that you appreciate the other person’s vision. For example, you say: “Let’s see how we can solve this.”

10) Finally, give compliments. Your employee may feel uncomfortable when there is too much specific attention to his personal performance. In that case, it’s better to go out for dinner with the whole group and reward everyone that way. A compliment can then be given in a one-on-one situation.

Don’t be a sheep

During a meeting this week, I was reminded of a famous quote by Margaret Mead: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.” The two senior leaders I was talking to complained about their R&D organization doing everything right on paper from an Agile, data-driven perspective, but still ending up building humongous, inflated features that were released after many months of development.

This is of course a classic example of the feature creep that many companies fall into. When exploring how the situation developed, the discussion made clear that a very small number of influential people in the R&D organization had managed to convince the others that the initial plan of releasing a minimal viable feature wasn’t possible as it would cause angry customers.

I don’t want to focus on feature creep but rather on the ways a vocal minority in an organization, or even society at large, can have an impact that far exceeds the size of the group. In general, I’m a big proponent of a small group of individuals taking charge to initiate change within an organization. Even if the senior leaders pride themselves on leading major changes in their organization, almost always some individuals had been pushing for and championing the change for quite some time before it was picked up by senior management.

The challenge with “the vocal minority,” as I often refer to it, is that their success often depends more on the ability to use rhetoric and debating techniques than on the actual, technical nature of the change that is being advocated. The consequence is that individuals may passionately argue for changes that are detrimental to the organization and its members. To evaluate the relevance and validity of the proposed changes, I typically apply four questions or tactics.

The first test to which I subject a change proposal is to evaluate it against the fundamental principles I consider to be true. One of these is that faster feedback cycles are better than slower ones. This means that inflating a feature, and thereby delaying its release and the associated feedback, violates this principle and outweighs any argument for including more in the feature. My general rule of thumb is that work items should be kept to a size that allows one team to complete them in one sprint.

The second question I ask is whether the proponent of the change has considered a sufficiently broad scope of impact. It’s very easy when proposing a change to focus exclusively on the topic at hand and how to address it, without considering the broader scope. For instance, releasing larger features may offer more relevant functionality to a wider subset of customers, but it may also decrease the (perceived) quality of the system as it’s harder to test a large chunk of functionality than it is to test a small slice.

The third test is to explore second-order effects. In a famous story, Mao Zedong ordered all sparrows in China to be killed as they ate seeds. The second-order effect was an explosion of the locust population, causing a famine resulting in the death of millions of people. In software engineering, a well-known case is incentivizing software engineers to use code from a shared code library to increase software reuse. This has caused all kinds of interesting effects, including engineers first checking in their code into the shared code library and then “reusing it” to get their bonus. Although it often is very difficult to predict second-order effects, it’s generally possible to generate relevant hypotheses that either can be tested or for which at least circumstantial evidence can be collected.

'Complement the beliefs that underlie the reasoning with empirical data'

The final mechanism I use is to explore whether it’s possible to run small-scale experiments that provide additional evidence concerning the proposed change. The challenge with the first three tests is that they’re based on argumentation and reasoning and not necessarily founded in empirical reality. It’s critical to complement the beliefs that underlie the reasoning with tangible, empirical data to increase the confidence that the change will have the intended outcome and avoid unwanted side effects.

Concluding, in my experience, virtually all change in organizations is initiated by a “vocal minority.” This minority often relies on rhetoric and debating techniques to gain influence, rather than the quality of their proposal. This requires all of us to critically reflect on change suggestions. I’ve described four techniques that I use to evaluate these proposals. Rather than submitting to some form of herd mentality, it’s the responsibility of each and every one of us to maintain independent and critical thought, regardless of peer pressure. Don’t be a sheep!

Master the art of software engineering

Interview with Robert Deckers, trainer of the Good software architecture training at High Tech Institute
With the growing reliance on software in an increasingly high-tech world, it’s more important than ever for software architects to master the art of software engineering. Trainers Robert Deckers and Bart Vanderbeke have taken it upon themselves to turn developers into craftsmen.

“A colleague once told me about one of his former project managers, who, upon realizing that the estimates didn’t align with his timeline, just cut them in half to make them fit. I find it unheard of, not only that you’d do such a thing as a project manager but also that people stand for that kind of behavior. You don’t have to scold him, but you can open your mouth. Instead, at the end of the project, when everything has gone haywire, everyone complains about how this has happened to them.”

Inspired by Google executive Fred Kofman and his book “Conscious business,” Bart Vanderbeke calls on software architects to stop playing the victim. “It’s unacceptable and unhealthy,” he claims. “You’re the craftsman. When someone tells you that you need to do something in half the time, or skip the design, or refrain from reviews, you say no – constructively. Software architects are scarce, so you’re in a comfortable position, certainly no position to self-victimize. Don’t hide behind ‘management.’ As a software craftsman, using a term coined by Kofman, you’re ‘unconditionally responsible’ for everything you do or don’t do.”


“You need to take unconditional responsibility, which literally means that you need to have the ability to respond – in a meaningful way,” says Bart Vanderbeke. Credit: Bart Vanderbeke

At NXP in Leuven, Vanderbeke leads a team of fifteen software architects, working on 2.4 GHz radio applications for personal health – think hearing aids, headphones and earplugs. “Tiny systems containing tiny software stacks,” he notes. “But even if you have a codebase of 100k or 200k, like us, software craftsmanship is of paramount importance. Building the hardware takes about a year, followed by maybe five years of software enhancement. I’ve developed a series of lectures to help my colleagues bring out their inner craftsman.”

The non-functional

A kindred spirit, Robert Deckers, too, aims to increase software craftsmanship, but with a focus on software architecture – “the most difficult trick of the trade,” as he calls it. “It already starts with the question: what is software architecture? You can find hundreds of books that try to give an answer. While some are bad, terrible even, most of them are meaningful, but they all tell a different story.” This was one of two triggers that led him to dive into the subject, develop his own view, write his own book and bestow his insights upon others.

'The real complexity is in the non-functional.'

The second trigger was the realization that in traditional methodologies, there’s too much focus on the functional requirements, whereas the non-functionals are the hardest to get to grips with and therefore take up the most time. “Way back when I was an OOTI trainee at Eindhoven University of Technology, I was the software architect for a copier,” reminisces Deckers. “After two months of design, the anomalous system behavior started to rear its ugly head and I realized that we had to do error handling as well – while obvious to someone with 20 years of experience, it hadn’t crossed my newbie mind. When we were finished, to my big surprise, no less than 85 percent of all our code turned out to be for error handling, so only 15 percent of our efforts had been focused on the functionality. That’s when I first experienced that the real challenge, the real complexity, is in the non-functional.”

After the OOTI traineeship – the PDEng Software Technology, as it’s known today – Deckers sharpened his views at several companies, including Philips Research and Sogeti. Since 2013, he’s been running Atom Free IT, coaching organizations and their architects, helping them create architectures, set up the architecture process and embed it. For the last five years, he’s been combining this with a PhD project at the Vrije Universiteit Amsterdam, researching the cognitive aspects of systems engineering.

Come prepared

Vanderbeke and Deckers are the newest additions to the software and systems portfolio of High Tech Institute. Both want to help software architects be better at their work – become real craftsmen. “As a software craftsman, you know how to organize your work and you have the assertiveness not to accept compromises on the way of working. Instead, you go for the optimum, taking into account the influencing variables and conditions. You don’t do things because someone tells you to; because you understand the need, you autonomously decide to do so,” says Vanderbeke, summarizing the values he intends to convey in his workshops.


Robert Deckers stresses the importance of focusing on the non-functional properties, aka the quality attributes.

Learning to say no in a constructive way is a key topic in Vanderbeke’s teachings. “That requires you to come prepared. When you’re asked to plot a course in a project, you need to have a couple of options readily available, not down to the minute detail but to such an extent that you can weigh them and make an informed choice. When someone steps up to you and says something can be done in half the time you estimated and you don’t have your facts straight, he may well be right – you have no way of telling. If you know what you’re talking about, that won’t happen. You can have a constructive conversation and you might be challenged, persuaded even, but you won’t let yourself be blown away by unsubstantiated claims. Someone once asked me if we could speed things up by taking shortcuts, upon which I replied: ‘The only shortcut you can take is to skimp on the specs’ – and that was the end of the discussion.”

'Learning to say no requires you to come prepared.'

“You need to take unconditional responsibility, which means that you need to have the ability to respond – in a meaningful way,” continues Vanderbeke. “In my workshops, I use several small examples, taken from my everyday work, to get my point across. For instance, someone escaping responsibility would say: ‘I cut my estimation in half because my project manager told me to,’ as opposed to someone keeping ownership, a ‘player’ in Fred Kofman’s terms, who would say: ‘I wanted to avoid a fight with my project manager, so I gave in.’ Likewise, a victim would say: ‘I make estimations because our process demands it,’ whereas a player would say: ‘I want to stay with the company, so I use the established process.’ Once they’re aware of these little things, people are more inclined to correct their behavior.”


A good architecture is correct, consistent and communicated. Credit: Robert Deckers

Fish nor fowl

In his training courses, Deckers relays his ideas about good architectures. “The role of architecture is to offer a solution approach for the key system properties that are the hardest to realize. As a system architect, you always have to make sure you’re working towards a solution, providing guidance and serving your stakeholders’ needs, while also keeping an eye out for things that could go wrong if not addressed in the architecture. If you’re not doing this, you’re probably not architecting. I also want software architects to understand that an architecture needs to offer business value and that it must be feasible to build the system within the organization at hand. You can only be an effective architect when you’re prepared to step out of your technology comfort zone.”

'My advice: hang the top five stakeholder concerns on the wall.'

According to Deckers, a good architecture is correct, consistent and communicated. “A system is correct when it addresses the stakeholder concerns and fits the technical environment. The development process has to be consistent. At Philips in Bruges, I once witnessed a software architect testing all preconditions of all the functions he programmed because he wanted his code to be robust. Meanwhile, in the cubicle right next to his, a colleague was using pointers without testing anything because he wanted his code to be fast. Combined in one system, that gives you neither fish nor fowl. You need to be clear on the key properties – my advice: hang the top five on the wall. Finally, an architecture should be described in such a way that you can discuss it with the different stakeholders, which means using different views for different aspects.”

Deckers stresses the importance of focusing on the non-functional properties, aka the quality attributes. He acknowledges that this seems to be at odds with the popular Agile principle of delivering working software as quickly as possible. “People often ask me: how do you match Agile and architecture? My answer to them: you don’t. They’re two different mindsets. Architecture is about looking before you leap, whereas with Agile, you just go and adjust based on the feedback you get. That’s perfectly fine for some businesses, but not for a copier or a medical scanner, where aspects like reliability and safety are known beforehand. The closest you can come to matching Agile and architecture is to bend the rules and dedicate the first few sprints to the key concerns.”


Descending the management funnel, the focus narrows and the risk of conflict grows.

Better decisions

With collaboration, there’s bound to be friction. A software craftsman, therefore, also needs tools for conflict resolution. In his workshops, Vanderbeke presents a management funnel doubling as an inverse conflict pyramid. It goes from wide to narrow in three levels: strategic, tactical and operational – from the what to the how. Descending the funnel, the focus narrows and the risk of conflict grows. When a conflict actually arises, going back up the funnel to try and find a shared goal or principle helps to smooth things over.

“Software architects who disagree about the way to tackle a problem are often at the bottom of the funnel, stuck in their own solutions. Taking them back up and discussing the problem criteria usually ends the stalemate as they establish common ground,” illustrates Vanderbeke. “It’s a very useful instrument in process management as well. When you’re in a meeting that’s going nowhere, revisit the reasons why it was set up in the first place and a way out will present itself almost automatically.”

Stocked toolbox in hand, software engineers are well equipped for craftsmanship. With Deckers, Vanderbeke concludes: “It would be great to see them make better decisions. To see them operate more autonomously. And, at the same time, to see them have more fun in what they do.”

This article is written by Nieke Roos, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’, they responded with an average score of 9.1 out of 10.