After completing the Motion Control Tuning training, you can achieve optimal motion control performance in minutes

Motion control tuning: interview with the trainer and students
Academics who are experts in control theory often have difficulty in designing a controller for industrial practice. On the other hand, many mechatronics professionals who come into contact with control technology lack the theoretical basis to bring their systems to optimum performance. The Motion Control Tuning training offers a solution for both target groups. “Once you’ve gone all the way through it, you can design a perfect control system yourself in just a few minutes,” says course leader Tom Oomen.

How do you ensure that a probe microscope scans a sample in the right way with its nanoscale needle? How can a pick-and-place machine put parts on a circuit board in a flash while still achieving superb precision? How can a litho scanner project chip patterns at high speed and in exactly the right position on a silicon wafer? It’s all about control engineering, about motion control.

It’s this knowledge that’s in the DNA of the Brainport region. Motion control is at the heart of accuracy and high performance. The success of Dutch high tech is partly due to the control technology knowledge built up around the city of Eindhoven in the Netherlands.

Technological developments at the Philips divisions Natlab and Centrum voor Fabricagetechnologie (CFT) made an important contribution to the development of the control technology field in the eighties and nineties. Time and again, however, there was a hurdle to overcome: when engineers in the product divisions started working with the technology, converting the theoretical principles that had been developed into industrial systems proved far from easy.


Students work with a very simple two-mass-spring-damper system.

Training courses Advanced Motion Control and Advanced Feedforward & Learning Control

That is why Philips realised in the 1990s that it had to transfer its knowledge effectively. This resulted in a course structure with a very practical approach. The short training courses of at least three days are intensive, but when participants return to work, they can apply the knowledge immediately.

Motion Control Tuning (MCT) was one of the first control courses set up at Philips CFT in the 1990s by Maarten Steinbuch, currently professor at Eindhoven University of Technology. Today, Mechatronics Academy develops and maintains the MCT training and markets it in collaboration with High Tech Institute, together with the Advanced Motion Control and Advanced Feedforward & Learning Control training courses.

MCT trainer Tom Oomen

Tom Oomen, associate professor at Steinbuch’s section Control Systems Technology of the Faculty of Mechanical Engineering at Eindhoven University of Technology, is one of the driving forces behind these three courses. “The field is developing rapidly,” says Oomen, “which means a lot of theory, but the basis, for example how to program a PID controller, has remained the same.”

The Motion Control Tuning (MCT) training provides engineers with a solid basis. Participants are often developers with a thorough knowledge of control theory who want to apply their knowledge in practice but encounter practical obstacles. The surprising thing is that each edition is always joined by a number of international participants. It says a lot about how the world views Dutch expertise in this field.

Motion control training students can roughly be divided into two groups. The first are people with an insufficient technical background in control technology who nevertheless have to deal with it on a daily basis. They want to learn the basics in order to communicate better with their colleagues. “These people do design controllers, but don’t understand the techniques behind them. They make models for a controller without knowing exactly what a controller can do. This causes communication problems between system designers and control engineers,” says Oomen.

Control engineers traditionally design a good controller on the basis of pictures, the so-called Bode and Nyquist diagrams. “For seasoned control engineers, those diagrams are a piece of cake, but if you’ve never learned to read those figures, it’s still abracadabra. Then you can turn the knobs any way you want, but you’ll never design a good controller,” says Oomen.

Motion Control Tuning features twenty trainers

The best way to teach the essence of the profession to people with insufficient theoretical backgrounds, according to TNO’s Gert Witvoet, is to drag them all the way through it once. Witvoet, who also serves as a part-time assistant professor at Eindhoven University of Technology, is one of the twenty trainers and supervisors involved in the MCT training. “They have to learn how to read such diagrams. They need to understand exactly what they mean. With this training you really learn how control engineers in the industry design controllers, and what the possibilities and limitations of feedback are,” says Witvoet.

The other target group consists of engineers who are theoretically prepared. They are trained in theoretical control technology and have a good background, including knowledge of the underlying mathematics. Most of them are international participants, who come to the Netherlands especially for the motion control training. “These people have moved from academia to industry but have often never designed a controller for an industrial system. They are unable to achieve good performance with modern tools, and the ability to tune classic PID controllers is often lacking,” says Oomen. Witvoet: “In our course they learn real industry practice: how to handle a motion system and come to a good design step by step.”

Tom Oomen says that he looks through ‘theoretical glasses’. Witvoet is more the applications guy. Both of them think it’s cool to teach engineers how to transfer knowledge from state-of-the-art research into practice.

The academic world and industry work in very different ways, although their starting point is the same: a model. Researchers and engineers, however, each choose a different approach. Academics often use physical models with underlying mathematics, differential equations and the like. In practice, however, engineers work with so-called non-parametric models such as frequency response functions. “This is very different from what we work with in the scientific world, and it is what we use in the training,” says Oomen.


Tom Oomen.

MCT training part one is feedback design

Motion control tuning students get started with frequency response functions on the first day. These are quick and easy to obtain and are a means to an end: designing a feedback controller. Participants measure the properties and characteristics of an existing mechatronic system. “A frequency response function follows from these measurements, which shows how the machine behaves,” says Oomen. “Then a model rolls out, which allows you to design a controller for that system.”

In contrast to these rapidly acquired and highly accurate frequency response models, many techniques from academia build a parametric model. For that, they need detailed information on masses, spring stiffnesses, dampers and so on. In practice, this is far too time-consuming, as it is difficult to know all the parameters exactly.

But if you have an existing system, a frequency response is a good alternative. “You apply a suitable signal and simply measure how the system reacts,” says Witvoet. “This way you get a super good frequency response function of the input-output behaviour in just a few minutes, which allows you to design a good controller. If you then also know how to tune such a thing, you can make the best controller for your system, step by step, within a few minutes.”
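
To give an impression of what such a measurement looks like in code, here is a minimal sketch of the classic H1 frequency-response estimate, assuming a NumPy/SciPy environment; the simulated plant, excitation signal and sample rate are illustrative stand-ins for a real machine.

```python
# A minimal sketch of an H1 frequency-response estimate, assuming a NumPy/SciPy
# environment; the simulated plant, excitation and sample rate are illustrative.
import numpy as np
from scipy import signal

fs = 4000.0                                  # sample rate [Hz]
t = np.arange(0.0, 20.0, 1.0 / fs)           # 20 s of measurement
u = np.random.randn(len(t))                  # broadband excitation of the motor

# Stand-in for the real machine: a mass-damper plant, simulated in discrete time.
plant = signal.TransferFunction([1.0], [0.01, 0.5, 0.0]).to_discrete(1.0 / fs)
_, y = signal.dlsim(plant, u)
y = y.ravel() + 1e-4 * np.random.randn(len(t))   # add some sensor noise

# H1 estimator: cross-spectrum of input and output over the input auto-spectrum.
f, Puy = signal.csd(u, y, fs=fs, nperseg=4096)
_, Puu = signal.welch(u, fs=fs, nperseg=4096)
frf = Puy / Puu                              # complex FRF, ready for a Bode plot
```

Averaging over many data segments suppresses the measurement noise, which is why a few minutes of data already give a smooth, reliable curve.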

Students in MCT training use a simple, practical system

Students get started with a very simple two-mass-spring-damper system. One mass is connected directly to the motor; the second mass (the load) is connected to the first mass. The system has position sensors at the motor as well as at the load. The challenge is to design a controller that accurately positions the second mass. Not easy, because the connecting shaft is torsionally flexible.

Oomen: “In practice, systems always measure the load. Just look at a printer. Somewhere there is a motor that moves the carriage via a drive belt. Because you want to know exactly where the ink lands on the paper, you measure the position of the carriage. When you measure at the motor, you never know for sure, because the transmission between the motor and the print head is flexible.”

Gert Witvoet.

Even seasoned researchers in control technology sometimes have trouble understanding stubborn practice. In their experience, everything can be modelled in detail, including the transmission between motor and load. During visits to top international groups, Oomen regularly shows theorists the experimental set-up from the motion control tuning training. “I then ask them if it makes any difference where I measure, at the motor or the load. Starting from theoretical concepts like controllability and observability, they usually answer that it doesn’t matter.”

In the MCT course, however, the trainers show that it is essential where you measure. “If you measure at the motor, then the sky is the limit in terms of performance. Everything is possible. Disturbances can be suppressed up to any frequency. But if you measure – as always in practice – at the load, then you are very limited, because you have to deal with unpredictable behavior due to flexible parts. Then there are significant limitations on the control loops and the performance you can actually achieve. If you want to make a stabilizing controller under these conditions, you have to be very careful; it’s easy to end up with unstable behavior. If you want to know exactly what that’s like, you have to come to the course,” laughs Oomen.
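
The difference Oomen describes can be reproduced in simulation. Below is a minimal sketch, assuming a standard two-mass-spring-damper model with illustrative parameters; the motor-side (collocated) and load-side (non-collocated) transfer functions share the same poles but differ in their zeros and high-frequency phase.

```python
# A minimal sketch of a two-mass-spring-damper plant, assuming the illustrative
# parameters below; the force acts on the motor mass m1.
import numpy as np
from scipy import signal

m1, m2 = 0.5, 0.5        # motor and load mass [kg]
k, c = 2.0e4, 2.0        # shaft stiffness [N/m] and damping [Ns/m]

# Shared poles: m1*m2*s^4 + (m1+m2)*c*s^3 + (m1+m2)*k*s^2
den = [m1 * m2, (m1 + m2) * c, (m1 + m2) * k, 0.0, 0.0]
motor_side = signal.TransferFunction([m2, c, k], den)   # collocated measurement
load_side = signal.TransferFunction([c, k], den)        # non-collocated measurement

w = 2 * np.pi * np.logspace(0, 3, 500)                  # 1 Hz .. 1 kHz
_, mag_motor, phase_motor = signal.bode(motor_side, w)
_, mag_load, phase_load = signal.bode(load_side, w)
# Beyond the resonance, the load-side phase keeps falling by an extra 180
# degrees, which is exactly what limits the achievable control bandwidth.
```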

Henry Nyquist and Hendrik Bode

To give a motion controller stability, classic concepts are necessary. These were devised by Henry Nyquist and Hendrik Bode. Oomen: “In the first half of the last century, Nyquist already devised principles to guarantee the stability of such a control loop. I recently read a book from 1947 in which he described this. We still use this on a daily basis, in combination with those frequency response functions. Both are deeply interwoven. In this way we guarantee the stability of control loops.”

Mention the name Nyquist and you’re also talking about Fourier and Laplace transformations. It may sound complicated, but working with this mathematics in practice doesn’t require a deep theoretical understanding. “We explain these concepts in a very intuitive way that is accessible to everyone,” says Oomen. “The role of these concepts in control design forms the basis, and control engineers encounter them in their work anyway. We think it’s important that people really know them, but it’s not necessary to go deep into the mathematics for that.”

After the basic concepts, the training makes the step to stability. Witvoet: “They learn to lay a good foundation with a picture, a Nyquist diagram. This allows students to test the stability of their system. All mysticism is then gone, because they know what’s underneath and how to use it. Students will then be able to turn the knobs and check whether the closed control loop is stable.”
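
As an illustration of such a stability check, here is a minimal sketch assuming the python-control package; the plant and controller coefficients are made up for the example and are not course material.

```python
# A minimal sketch of a Nyquist-style stability check, assuming the
# python-control package; plant and controller numbers are illustrative.
import control

plant = control.tf([1.0], [0.01, 0.1, 0.0])      # motor with mass and damping
ctrl = control.tf([0.5, 10.0], [0.005, 1.0])     # simple lead compensator
loop = plant * ctrl                              # open-loop transfer function

# Distance to the critical -1 point, summarized as gain and phase margins.
gm, pm, wcg, wcp = control.margin(loop)
print(f"gain margin: {gm:.2g}, phase margin: {pm:.1f} deg at {wcp:.1f} rad/s")

control.nyquist_plot(loop)                       # draw the Nyquist diagram
```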

This is followed by the step to an actual design. The first requirement of such a design may be stability, but in the end it is all about performance. To achieve this, students are given a wide range of motion control tools such as notch, lead and lag filters and PID controllers. “It’s all in the engineer’s toolbox and it’s the prelude to one of the most appreciated afternoons of the course: the loop-shaping game. In this game, students tune the controller as well as possible and squeeze out the performance. If they can do that, they’ll have mastered how a feedback controller works.”
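
A rough impression of what loop shaping with this toolbox looks like in code, again assuming python-control; the lead and notch helpers and all gains and frequencies are illustrative choices, not the course’s reference design.

```python
# A rough loop-shaping sketch, assuming python-control; all gains and
# frequencies are illustrative choices.
import numpy as np
import control

plant = control.tf([1.0], [0.01, 0.1, 0.0])   # rigid-body motion plant

def lead(f_center, width=3.0):
    """Lead filter adding phase around f_center [Hz]."""
    w = 2 * np.pi * f_center
    return control.tf([1 / (w / np.sqrt(width)), 1], [1 / (w * np.sqrt(width)), 1])

def notch(f_res, depth=0.1):
    """Notch filter suppressing a resonance at f_res [Hz]."""
    w = 2 * np.pi * f_res
    return control.tf([1, 2 * depth * w, w**2], [1, 2 * w, w**2])

# One tuning iteration: gain for bandwidth, lead for phase margin,
# a notch to tame a known resonance; then check the margins again.
controller = 50.0 * lead(20.0) * notch(150.0)
gm, pm, _, _ = control.margin(plant * controller)
```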

MCT training part two is feedforward controller design

In addition to the feedback controller for stability and disturbance suppression, each motion system also has a feedforward controller. This tells the system how to follow its path from A to B, also called reference tracking. “You control that with the feedforward controller,” says Oomen. “The most important part of the system’s performance comes from the feedforward control. Here, too, we briefly go into the theory and then immediately start experimenting. It is a very systematic and intuitive approach. Once you’ve done it, you can apply it immediately.”

By actually applying it, participants in the MCT training learn how techniques like mass feedforward and snap feedforward work. “It’s a very systematic approach that allows you to tune the parameters one by one in an optimal way,” says Oomen. “If you master that technique, you can tune the best feedforward controller for your system in just a few minutes, by doing iterative experiments.”
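
The basic idea of mass feedforward can be sketched in a few lines. The setpoint profile, the mass estimate and the tuning loop below are illustrative assumptions, not the training’s actual procedure.

```python
# A minimal sketch of mass (acceleration) feedforward, assuming a simple
# bang-bang setpoint; the mass estimate and profile are illustrative.
import numpy as np

dt = 1e-3
t = np.arange(0.0, 0.5, dt)
a_ref = np.where(t < 0.25, 4.0, -4.0)   # acceleration profile [m/s^2]
v_ref = np.cumsum(a_ref) * dt           # reference velocity
x_ref = np.cumsum(v_ref) * dt           # reference position

m_est = 0.95                            # estimated moving mass [kg]
u_ff = m_est * a_ref                    # feedforward force [N]
# Iterative tuning: run the setpoint, inspect the servo error during the
# acceleration phases, adjust m_est and repeat until that error vanishes.
```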

'Once you have experienced this, you can almost get optimal performance out of the system within a few minutes.'

Once you know how to measure a frequency response function and design feedback and feedforward controls, you can design controllers very quickly. Oomen: “Time is money, of course, and that’s why the entire Dutch high-tech industry does it this way. You can find it in Venlo at Canon Printing Systems and in Best at Philips Healthcare. The smaller mechatronic companies also use these techniques. At ASML in Veldhoven, almost all motion controllers in wafer scanners are tuned in this way. Once you are a little experienced, you can get almost optimal performance out of the system. That’s within a few minutes and, of course, that’s cool.”

MCT training is 100 percent practice

When asked about the relationship between theory and practice, Oomen laughingly says that the MCT training is “100 percent practice”. “All the theory we do is essential to practice,” adds Witvoet. “We explain a number of theoretical concepts, but we do so by means of an application. It’s all about tuning. It’s really a design course and gradually one learns some theory. Every afternoon we work on that system, making frequency response functions and then fine tuning. Feedforward, feedback, it’s a daily job getting your hands dirty and your feet in the mud, because you apply the theory right away.”

'The Motion Control Tuning training is 100 percent practice, every day with your feet in the mud.'

After five days, participants are able to develop a feedback and feedforward controller independently. On the final day, various trainers and experts discuss the developments within their field of expertise.

Oomen: “Within the five days, participants succeed in making controllers with one input and one output, but many industrial systems have multiple inputs and outputs. That has quite some consequences for tuning.” Witvoet: “We show where the dangers lie: where things can go wrong and, when they do go wrong, how to deal with them.”

To design control systems for multiple inputs and outputs, motion control engineers need a stronger theoretical basis. This knowledge of multivariable systems is covered in the five-day Advanced Motion Control training course. “In this course, participants learn in great detail how to make control systems with multiple inputs and outputs,” says Oomen. “We follow the same philosophy and reasoning as in the Motion Control Tuning training.”

On the last day, learning from data is also discussed, a trend that is currently growing rapidly within the control field. “The latest generations of control systems can learn from past mistakes and correct them at the same time,” says Oomen. “In doing so, we use the large amounts of data produced by sensors in machines. This enables us to correct machine faults within a few iterations. It paves the way for revolutionary new machine designs that are lightweight, more accurate, less expensive and more versatile, and it also allows existing machines to be upgraded. On the last day of MCT, I tell you about it for an hour, but in the Advanced Feedforward & Learning Control training course, we take three days to do it.”

This article is written by René Raaijmakers, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?' they responded with a 9 out of 10.

If you already know everything, how will you ever learn something new?

Design patterns training - Testimonial Thermo Fisher Scientific
In the midst of a tight Dutch labor market, companies are working harder than ever to keep and attract new talent. Thermo Fisher software manager Reinier Perquin believes that providing his employees with training opportunities not only helps bring in new personnel, but it also keeps his people fresh. He organized the ‘Design patterns and emergent architecture‘ training for his team.

Thermo Fisher Scientific, a multinational leader in biotechnology product development, employs more than 70,000 people around the world. But how does a company with such a large global footprint manage to keep its workers and continually draw in new employees? According to the software group manager at Thermo Fisher’s Eindhoven office, Reinier Perquin, the main attraction for engineers is the opportunity to work on cutting-edge projects. An example: using advanced software to help solve the problem of global diseases. To get these talented engineers on board, Perquin says, investment in training – both technical and social – is a valuable tool.

'Training budgets are increasingly important to attracting prospective colleagues.'

As a manager within the Thermo Fisher R&D department in Eindhoven, Perquin is routinely interviewing to bring new faces to the software group. What he’s noticed in these meetings: training budgets are increasingly important to attracting prospective colleagues. “In some interviews, it’s one of the first questions that people will ask. We’re seeing that more and more. While we don’t offer individual training budgets, we understand how important it can be, so we have a group budget specifically to encourage our employees to utilize training opportunities,” explains Perquin.


Photo by Vincent van den Hoogen.

Do you prefer internal or external training?

“We offer both to our employees but getting an outside view can be very helpful and that’s why we encourage external training. Our workers can gain new insights and learn about emerging technologies and cutting-edge methods. In my department, we’re seeing that the whole architecture of software is evolving before our eyes. Before, it was closed off but now you see things happening in the cloud or edge computing. That happens because new technology enables that. In software, you must constantly learn and adjust. So, if you don’t invest in yourself, then, in the end, you stand still. These trainings are a great method to enhance skills and learn about novel solutions.”
The ‘Design patterns and emergent architecture’ training took place in-company.

What’s the greatest benefit of offering your employees training?

“Well, first of all, people are really busy with their day-to-day tasks. Sometimes it’s good to step outside and take a break from thinking only about your work. It gives people the opportunity to not only get a break from their daily challenges but to focus on enhancing their personal skill set,” describes Perquin. “Also, it gives our engineers the opportunity to meet people from other companies and build a social and professional network. If people sit still too long without training – especially externally – they start to think in certain ways within their comfort zones. For some problems, you need to think outside of the box – not in absolutes like, ‘We’ve always done it like this, so we’ll continue to do this like this’. That’s the wrong mentality. Trainings help to disrupt this way of thinking.”

What type of courses are your workers choosing?

“Being in software, we often see our employees opting for the ‘Design patterns and emergent architecture’ training, taught by Onno van Roosmalen at High Tech Institute. In software, you see a repetition of certain patterns. By giving these patterns common names, essentially creating a unified software language, our engineers can better communicate and solve problems. Onno and I have a long history, going back to university, so I know the level of the knowledge that’s being taught, and that training is easy to approve for our employees.”

Are there any other trainings you utilize?

“To be honest, we probably spend most of our budget on the soft-skills training – probably more than the technical trainings. Sometimes when people come straight from university, they tend to think that they know everything. Technically, these people can be very strong but often their soft skills are their weakest spot. Everyone wants to believe they’re system architects but I always say, an architect is not a technical person. In that situation, soft skills are more important than the whole technical level. If you already know everything, how will you ever learn something new? Sometimes they don’t realize it and they need time for reflection. That’s something the soft-skills training is incredibly helpful with.”

Do you notice a return on your investment? Does it help output? 

“Absolutely. I don’t see it as we’re losing three days of work; I see it as a worthwhile investment, both for the company and for the individual. I believe it helps in terms of productivity, especially the soft skills. We see very positive changes because people realize that if they want to achieve something, they may need to adopt a different approach. We see that trainees come back communicating ideas more clearly and working better with people and it makes them a far more effective employee. We find that our colleagues come back with new ideas, new energy and new inspiration. It keeps people fresh.”

This article is written by Collin Arocho, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?' they responded with an 8.8 out of 10.

Bridging the hardware-software gap

Trainer Software engineering for non-software engineers
Nico Meijerman joined NTS to help build and expand the company’s software competency. Shortly after arriving at the hardware stronghold, he started to work on bridging the gap between software engineering and the worlds of physics, mechanics and hardware-related disciplines. The result is a workshop in which Meijerman teaches his non-software colleagues the basics of software engineering. Customer and business specifics included.

First-tier supplier NTS Group has quietly been shaping its software engineering competence over the last couple of years. You might not expect this from a company that’s still making most of its money by bending sheet steel, milling parts and assembling systems.

However, embracing software expertise is a natural step for some first-tier suppliers. Over the past decades, NTS has been actively building its system development capabilities. It now develops and manufactures complete machines and modules that are branded and marketed by its customers. With the value of end products shifting to software, it seems a natural move for NTS to develop the competences to catch the digitization wave.

NTS’ Development and Engineering (D&E) department has a headcount of about two hundred engineers, of whom 15 percent focus on software. For a supplier that delivers high-end systems and designs, this is still on the low side, Meijerman argues. “It will grow, because we see software becoming more and more essential for creating value for our customers. We see the software effort increasing in our projects.”


“We see the software effort increasing in our projects”, says Nico Meijerman. 

An intriguing offer

Meijerman has been walking the hardware-software trail for his entire career. He learned to design chips during his studies in Twente and joined Sagantec, a company that both worked on a silicon compiler and developed application-specific ICs for customers. There, he started designing chips, but soon enough he shifted to programming, because the embedded software turned out to take more effort than the hardware itself.

Later, Meijerman taught informatics-related subjects at several departments of the university of applied sciences (HTS) in Arnhem. Subsequently, he joined Philips CFT, where they needed a software engineer who understood what was happening in the mechanical and electrical domains. There, he developed motion control software for ASML’s first PAS 5500 litho scanners. Soon after, he also worked on MRI scanners for Philips Healthcare.

In 2010, Meijerman decided it was time to start his own consultancy company, but a few years later, NTS approached him with an intriguing offer: would he be interested in becoming the group leader for machine control, a team focusing on software and electrical engineering? Helping to build up the software competency seemed a daunting yet very attractive challenge.

After arriving at the hardware stronghold, Meijerman knew he needed to work on his relationship with the NTS mechanics and mechatronics base. He figured a short workshop would help make his new colleagues more familiar with software.

According to Nico Meijerman of NTS, “Mechanical engineers deal with the limitations of physics, software manages complexity.”

Meijerman started interviewing colleagues to get an idea of their needs. First, he talked to the systems engineers – the guys that mostly have a mechanical background. “The most frequent response was that they had no clue about software. I heard remarks like ‘Those guys are always too late’, ‘They never make the things that I really need’ and ‘I can’t work with them because they don’t understand anything’. It was very much a culture of blaming and it was clear that our systems engineers didn’t know what software was doing. They saw it as an unpredictable black box.”

In Meijerman’s contact with the system architects, things started to resonate more. “They at least had an idea of what they would like to know about software. They wanted to know more about programming languages, third-party software, multitasking, real-time behavior, Agile and other basic concepts. They also wanted to know what a software development process looks like.”

'We saw the need to involve clients early in the software development process.'

Before long, Meijerman and the system architects concluded that the customer perspective is of enormous importance. “We saw the need to involve clients early in the software development process. For NTS, this was a high priority because most of its customers have a mechanics background. They know that software has to be included but they have to be educated on the specifics – for instance, on the fact that software is never bug-free. That’s why part of my workshop is also about business models and everything that follows our development activities.”


Nico Meijerman is the trainer for ‘Software engineering for non-software engineers’

Wrong assumptions

In the high tech industry, you often hear that communication is the problem in settings with different disciplines. But at NTS, Meijerman experienced that it’s more about understanding and being able to step into someone else’s shoes. “People do try to communicate. I see that there’s definitely a willingness to talk,” he says. “But hardware and software engineers often live in completely different worlds.”

'Mechanics is about managing the limits of physics, while software is about managing complexity.'

Meijerman explains that mechanical engineers predominantly look at the limitations of physics. “It’s about nanometers, about milliseconds. The stiffness of a construction determines what you can achieve. Components wear out if you use them too long.” Software engineers, on the other hand, do not deal with physics; they try to control complexity. “Mechanics is about managing the limits of physics, while software is about managing complexity.”

The problems often arise from wrong assumptions. “Mechanics rarely ask a software engineer about the degree of complexity. That’s why a software engineer will in most cases say he can fix a machine control problem – except for some very difficult issues. But if you ask him how much effort it’s going to take and how complex it is, you may get a completely different answer. A mechanical engineer looks at things from a physics point of view, not from a complexity point of view. But he should know how much work his question can generate. The lack of understanding of such basic concepts makes it difficult to interact. Equally, software engineers definitely have to improve their knowledge in the field of mechanical engineering.”

Part of Meijerman’s workshop is understanding that software engineering isn’t the same as programming. “Youngsters are learning how to program while at university or during technical education. A lot of people think software engineering is just more programming but that couldn’t be further from the truth. In programming, complexity usually isn’t the issue, as you’ll end up with some hundreds of lines of code. It’s not until you’re dealing with over a hundred thousand lines of code that it starts to get complicated. In high-tech systems this is the case: there are sometimes millions of lines of code, and the only way to tackle challenges of this magnitude is to find a way to work through the problem. That means breaking it down and ensuring that your work is correct. Engineering is about focusing on architecture and design, as well as managing complexity.”

“My goal is to teach participants all aspects of software engineering,” Meijerman concludes. “When they realize that, they understand that they can’t ask their nephew playing with Arduino boards to write a program for them over the weekend.” At the end of the workshop, participants understand more about the intriguing world of software engineering and about the differences and commonalities between software engineering and other disciplines, resulting in better collaboration, better solutions and hopefully more fun in their work.

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Object-oriented techniques are also suitable for PLCs

Onno van Roosmalen and Tim van Heijst have set up a five-day course in which PLC experts and OO specialists learn how to develop a PLC application through object-oriented programming. High Tech Institute will be rolling out this course for the first time in early 2020.

More and more, PLC programmers are facing the same challenges that were previously reserved for large, complex software projects. Object-oriented techniques offer a solution, but in the past they could not be applied one-to-one due to the traditional limitations of PLCs. With the new release of the IEC 61131-3 standard for PLCs, object-oriented features are now available and the possibilities have grown enormously.

Universities of applied sciences and academics regularly dismiss PLCs as an inferior technology: nice for simple machines, but totally inadequate for the complex systems they work on. In the past, they may have had a point. PLCs are fundamentally cyclical, and the cycle time could often throw a wrench in the works. For modern PLCs, however, a cycle of less than ten milliseconds is quite normal. For time-critical applications, there are even PLCs available with a cycle time of less than a millisecond.


Onno van Roosmalen (left) and Tim van Heijst (right) have set up a five-day course in which PLC experts and OO specialists learn how to develop a PLC application through object-oriented programming.


“Packaging machines – even if they have to reach a very high speed – can easily be controlled with a PLC,” says Tim van Heijst, owner of the Codesys specialist, Extend Smart Coding, and lecturer at High Tech Institute. “PLCs are also an excellent platform for robot applications. With the current power of PLCs, there are hardly any applications where they fail.”

This does not mean that suppliers such as B&R, Beckhoff, Rockwell and Siemens can sell their PLCs in the higher segment without a hitch. At larger machine builders, it is sometimes still an internal struggle – ‘do we design everything ourselves or do we choose a PLC?’ – Van Heijst knows from experience. However, the definition of the problem is almost always the same: they want to develop an application as quickly and as well as possible and, if feasible, reuse existing code. “A PLC meets all these requirements. It is a standard product that is well supported by the supplier. If you have a bug in the firmware, the supplier will fix it for you. You can easily connect your actuators and sensors, because all kinds of standardized communication protocols are available. That’s why you can get your system up and running very quickly,” says Van Heijst. An additional advantage is that almost all PLC manufacturers follow the IEC 61131-3 standard. This means that the application software you build is hardware-independent, and switching to a faster PLC – or even an industrial PC – is a breeze.

Decoupling

Whereas PLCs were traditionally used in applications with limited functionality, the possibilities are now so extensive that modern programming techniques are needed to keep everything manageable. With version 3 of the IEC 61131 standard, developers can also opt for an object-oriented approach.


Van Heijst: ‘With the current power of PLCs, there are virtually no applications where they fail.’

“You will then have to deal with important software concepts such as information hiding and encapsulation – the idea that you keep information local and don’t scatter it through the entire system,” says Onno van Roosmalen, independent software engineering consultant and lecturer at High Tech Institute. “In practice, you regularly see a team giving a component extra functionality, causing a cascade of events throughout the entire system. That makes it difficult to add anything. In many web applications you see that encapsulation is broken, but that follows from their nature: making information available. With machine control, that’s a different matter.”

“The idea of information hiding goes hand in hand with the way in which components address each other: the interface. This allows you to hide the detailed shape of your objects and ensure that no implementation details leak out. You then make sure that the user can only do what is currently required, no more and no less,” explains Van Roosmalen. The idea behind this is that the evolution of components can be decoupled. If a new version of a component continues to do what it used to do via an interface, the software built on top of it does not need to be modified immediately. A development team that programs against the component can use the new version without fear of something falling over in the meantime. When encapsulation and interfaces are in order, a maintainable, scalable architecture that can grow with the application emerges almost automatically.
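
The same interface idea can be sketched outside the PLC world. The fragment below uses Python for brevity (IEC 61131-3 offers INTERFACE, METHOD and FUNCTION_BLOCK constructs for the same purpose); all names are made up for the example.

```python
# A minimal sketch of programming against an interface; every name here is
# illustrative, not taken from any real machine-control library.
from abc import ABC, abstractmethod

class Axis(ABC):
    """Deliberately narrow interface: no more than the client currently needs."""
    @abstractmethod
    def move_to(self, position_mm: float) -> None: ...
    @abstractmethod
    def in_position(self) -> bool: ...

class BeltAxis(Axis):
    """One implementation; its internals stay hidden behind the interface."""
    def __init__(self) -> None:
        self._target_mm = 0.0            # encapsulated state, not in the contract
    def move_to(self, position_mm: float) -> None:
        self._target_mm = position_mm    # stand-in for the real motion command
    def in_position(self) -> bool:
        return True                      # stand-in for the real position check

def place_component(axis: Axis) -> None:
    # Client code programs against the interface only, so a new axis version
    # can be swapped in without touching this routine.
    axis.move_to(120.0)
    while not axis.in_position():
        pass
```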

'You can always expand an interface later on, but downsizing is a lot more difficult.'

“The condition, however, is that teams take a defensive stance when designing their interface. You shouldn’t just offer everything that other teams ask for,” says Van Roosmalen. “The more you offer, the more unintended uses there are and the more likely it is that things will break in a new version. You can always expand an interface later, but downsizing is a lot more difficult.”

How does that work in practice? Van Heijst gives an example: “I recently visited a manufacturer of charging stations. They make a lot of variants and work with a wide range of plugs. The company wanted to develop an energy management system that could cope with all these differences. We solved that with PLCs and an object-oriented approach. Now the management system communicates with all charging station versions via defined, generic interfaces, and the company can manage the development of all blocks separately. This reduces the risk of errors, makes reuse easier and gets an application up and running much faster.”

Building knowledge

“For PLC programmers of the old school, the object-oriented approach means a new way of thinking. Software engineers for whom OOP is a piece of cake find that PLCs work just that little bit differently from their familiar PC environment. PLC vendors offer courses, but these are about how to connect your I/O, for example. Real programming courses they are not,” says Van Heijst, “because you don’t learn how to build your own application.”


‘Object-oriented thinking for the PC is almost identical to object-oriented thinking for the PLC,’ says Van Roosmalen.

That is why Van Heijst and Van Roosmalen have set up the training “Object-oriented system control automation”, a five-day course in which PLC experts and OO specialists learn how to develop a PLC application via object-oriented programming. In this way, companies can build up the knowledge themselves and do not have to fall back on system integrators who prefer to take care of everything.

“Object-oriented programming is essentially different from procedural programming,” says Van Roosmalen. “But object-oriented thinking for the PC is almost identical to object-oriented thinking for the PLC. You can use the same software design. Well, with some minor adjustments because the programming concepts for a PLC are a bit more limited.”

“We explain the technique and tell you how to apply it,” adds Van Heijst. “A large part of the training is about how to get to a good software design. It still happens very often that a programmer just goes to work and delivers a working piece of software a few weeks or months later. Then, suddenly, there is a change, and they have to jump through all sorts of hoops to get it done. If you apply the object-oriented methodology properly, you will be free of that problem. Another advantage of OOP.”

This article is written by Alexander Pil, tech editor of High-Tech Systems.

The expat’s guide to working in Dutch high tech

From an endless loop of deliberations to receiving criticism that can sound downright rude, when you’re new to the Netherlands, the Dutch work culture can seem totally weird. To help facilitate integration, tech companies are sending their expat workers to the training “How to be successful in the Dutch high tech work culture”.

As an expat in the Netherlands, you’ve probably already learned that transitioning into the Dutch high tech work culture is a difficult adjustment. If you’re new to the region and you find yourself asking questions like ‘We’re having another meeting?’ or thinking ‘Why are they asking me, it’s not my job’ – welcome to the Dutch work culture, a place that’s sure to have you feeling like a square peg trying to fit into a round hole.

Because this can be such a difficult transition to maneuver, High Tech Institute, together with content partner Settels Savenije & Friedrich, is offering the training “How to be successful in the Dutch high tech work culture”. Here’s your beginner’s guide.

Flat-work society

One of the first things you’ll notice while working as an expat in the Netherlands is that the corporate power structure is typically as flat as a ‘pannenkoek’. The Netherlands isn’t at all into hierarchy. In business, status is nothing, and credibility counts for everything. In the training, participants learn ways to build their reputation by taking ownership and doing what’s necessary to get the job done. They also learn that one of the fastest ways to kill credibility is by being confined to the parameters of a job description.

'Don’t be limited to only what was asked of you.'

“If you think you can, then do. Even if it’s not exactly your job. Don’t be limited to only what was asked of you, and don’t be afraid to take a risk,” proposes course instructor, Claus Neeleman. “It’s also important to be honest and own up to mistakes rather than make excuses. It’s always better to be sorry than to do nothing at all.”

Building consensus

Another indicator of the lack of hierarchy in the Netherlands is the need for consensus. One well-known characteristic of the Dutch high tech workplace is the seemingly endless number of meetings and discussions. Do the words, ‘let’s meet again next week to discuss this further’, sound familiar? You’re thinking, how hard could it be to decide, right? For some, this is difficult to acclimate to. Yes, meetings take time and it might appear to be inefficient, but it’s all designed to build consensus – or what the Dutch refer to as, ‘polderen’. “Polderen is simply about everyone doing their part to come to a consensus and make decisions,” explains Neeleman.

This longstanding method reflects a deeply ingrained cultural value. For centuries, the Netherlands has looked for innovative solutions to confront the threat of water. The lowland mentality is: ‘we’re all on this ship together, so it’s up to everyone to come up with the best possible answer’. Thus, it’s important to participate and give input. It doesn’t matter if you’re an expert or if you have nothing crucial to add. Your responsibility is simply to be part of the discussion, which is another way to build your credibility. Some issues require out-of-the-box thinking from non-experts. “Just ask the little boy who put his finger in the dike,” jokes Neeleman. “The idea doesn’t have to be perfect; it just has to work.”

One of the many practice rounds during the training.
Communication is key

Because of the sheer number of meetings and interaction in the Dutch work environment, good communication skills are a necessity – both verbal and nonverbal. As such, one of the central themes of the one-day training focuses on communication styles and active listening skills. From body language to facial expressions and tone of voice, participants learn not only how to express themselves more effectively, but they also gain experience in how to pick up on the social cues given by others.

During multiple practice rounds, students learn that good communication skills start with the ability to really hear what someone is saying. For this, trainees are taught a three-step process for active listening. First, listen intently. Second, summarize to show that you listened and understood. Finally, ask questions for clarification. While this is simple in theory, cross-cultural communication is not always clear or easy to understand. For many participants, like Bahaa Ibrahiem – a setup tooling and visualization engineer at ASML – this section of the course was a real eye-opener. Ibrahiem: “After more than a year of living and working in the Netherlands, this training has really improved my cultural awareness and my communication with my Dutch colleagues.”

Kick the ball, not the person

Of course, even with active communication skills, when trying to bring together so many personalities and opinions, inherently there are going to be disagreements. This can sometimes result in the exchange of heated discussions or feedback that seems rather harsh. This sort of critical back and forth can be especially difficult for expats that are new to the workforce in the Netherlands. Culturally speaking, the Dutch don’t mince words and are well-known for their directness. All too often, this can leave a foreign colleague befuddled and entirely insecure with the critique.

Participants practise giving and receiving feedback.
Feedback process


'Good feedback is constructive, is to the point and is given simply with the intent to solve a problem.'

“The idea is that feedback shouldn’t be personal. It’s about kicking the ball, not the person,” elucidates Neeleman. “Good feedback is constructive, is to the point and is given simply with the intent to solve a problem.”

This article is written by Collin Arocho, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?' they responded with a 9.1 out of 10.

When your product and your company become more complex, a simple method to manage the process is essential.

Trainer at High Tech Institute: Product configuration management course
A good configuration management process for creating high tech systems provides cost savings, strength and fully transparent development and production. Frank Ploegmakers, trainer at High Tech Institute, talks about obstacles and common mistakes in configuration management. ‘Those responsible for technology, development and operations are not always able to understand the essence of the configuration management complexity.’

Within a high tech organisation, hordes of engineers produce an enormous amount of technical data: partial designs of printed circuit boards, motors, sensors, mechanical and optical components, you name it. Electronics engineers, software designers, optical engineers, mechanical engineers: they all have their own computer tools. Even prototyping itself is shifting to the virtual environment. Remove the design tools from any high tech company and you may as well shut it down.

The discipline of Configuration Management has been developed to control the coherence of all this design information. It ensures that different disciplines can work together on a design, and that the process from design through to the delivered product is controlled.

It is hard to believe, but only a small proportion of high tech machine builders have specified and implemented a configuration management process and method within the appropriate ICT tools. ‘This doesn’t exist in many companies,’ says Frank Ploegmakers, trainer in product configuration management at High Tech Institute. ‘Configuration management tools are needed to exchange, verify, secure, store and structure all design knowledge. I think that only a small proportion of machine builders have documented their development and manufacturing processes, use them in the correct manner and, for example, understand what baselining is.’


‘Understanding complexity is a prerequisite for configuration management,’ says Frank Ploegmakers, lecturer in system configuration management.

Baselining

To explain what baselining is and to clarify related issues, let’s take a trip into an ideal world. In this world, ingenious mechanical, electronics and software engineers deliver perfect partial designs in close consultation. They are – miraculously – all correct the first time round. Everyone is happy: it works! We can produce! The person in charge gives the starting signal and the design department draws up a baseline. This defines the machine design in a precise manner: materials, composition, purchased parts, modules, coherence (think of geometry and quality specifications) and the associated software. Production can start building the machine and the purchasing department can go ahead and order.

If only it were that simple, sighs every technician. In practice, there are many design layers. Improvement follows improvement. Before you know it, the mechanics department is on version 3, the electronics department on version 6 and the software team on version 4.11. That is no disaster either, since once a baseline has been drawn, the machine is still defined in detail.
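
To make the baseline idea concrete, here is a minimal sketch of a baseline as an immutable snapshot of released part versions; the data model and the labels are illustrative, not a description of any real PDM tool.

```python
# A minimal sketch of a baseline as an immutable snapshot of released part
# versions; all names and numbers are illustrative.
from dataclasses import dataclass
from types import MappingProxyType

@dataclass(frozen=True)
class Baseline:
    label: str
    parts: MappingProxyType      # part id -> released version, read-only

def draw_baseline(label: str, parts: dict) -> Baseline:
    return Baseline(label, MappingProxyType(dict(parts)))

machine = draw_baseline("machine-1.0", {
    "mechanics": "3", "electronics": "6", "software": "4.11",
})
# Later improvements never mutate this snapshot; they go into a new baseline,
# so any delivered machine can always be traced back to an exact definition.
```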

Keeping track of hundreds of small and large improvements

In practice, matters are different. Improvements continue to be made even after the baseline has been drawn up. A component is in the baseline at version number 5, but the manufacturer has found something that makes it better or cheaper, so production will have to use version number 6. Even then there isn’t a problem, but in practice many technicians and disciplines all work on their own partial design.

'Weak leadership often hinders full transparency in development and production.'

Before you know it, hundreds of small to large improvements have been let loose upon a baseline. Who still maintains the overall overview? Who still knows the relationship between the product or machine at the customer and the baseline within their own organisation? Ploegmakers: ‘With today’s complex products and systems, you need something that lets you maintain an overview, so that it is clear what each person is doing at each precise moment, whilst all changes to baselines are completely transparent. Many machine builders have laid out their development and manufacturing processes properly; only a few actually use them for what they are intended.’

Software

In software development, by the way, configuration management is commonplace. At the end of a day of development, the engineers check in their software code and a build is run, creating the program with all recent additions. Ploegmakers believes that this working method should also be applied by other disciplines. ‘Strangely enough, companies do software configuration management, but they don’t apply it at system level.’

According to Ploegmakers, this is because many companies do not (yet) realise that this is their major problem. ‘If I say to a software manager: “I am now removing your software configuration system,” he will panic completely, because then he will no longer be able to deliver his software. But in most product or machine building organisations there are employees at a higher level who have to watch over multidisciplinary system integration with tens of thousands or even hundreds of thousands of components, and they lack such a system. When I talk to software people about it, they say to me: “Frank, you’ve just touched a raw nerve.”’

Time stamps

A complicating factor is the time between drawing up the baseline and having a working machine. With software, everyone sees the result the next morning, but with hardware it takes months – with a high chance that changes will slip through that have not been coordinated with everyone.

You prevent that by using a configuration management system, says Ploegmakers. ‘With this system you create complete transparency. The power of baselining is that the entire company works with the baseline. Everyone can see the development and production situation at any given point in time.’

Ploegmakers compares it to a film. ‘You can rewind the entire history. You create time stamps. You simply see a historical development of your product with all the associated benefits. It can be useful to look back at baselines and it is also nice for the customer. You can recall the precise configuration if the client places an additional order.’


Trainer Frank Ploegmakers has seen more than a hundred companies ‘on the inside.’

Background and practical experience

Via LTS, MTS and HTS mechanical engineering, Frank ended up studying Engineering and Construction Informatics at Eindhoven University of Technology (nowadays the Technology Management faculty). Half of that was hard technology; the other half economics, business administration, marketing, philosophy and social psychology. He came into contact with virtual reality and witnessed the first wave of automation and its excesses: major IT projects that spiralled out of control. In this way, he became interested in how you ensure that information technology actually delivers something to a company.

By organising a study trip to China, Frank got his first job. He started in the mid-nineties at WAIDE Consultants, a company that advised Dutch companies on joint ventures to gain access to the Chinese market. Great projects and a great experience, but it was not technical enough for Frank, and after a year and a half he started working at Philips Display Components.

For five years, he and his design support department focused on the further optimisation of picture tube design processes and tools. In addition, the field of product data management (PDM) rose strongly in the late nineties. ‘This involved recording and jointly using worldwide information about the display tubes, the production process and production machines. This had to be properly supported by PDM automation.’

Ploegmakers used much of what he learned at Philips Components at Assembleon, manufacturer of pick and place machines. There, his field of work expanded to the entire creation process: from product creation to logistics, production, delivery and service.

'We built everything from scratch.'

After his Philips days, Ploegmakers worked for four years at engineering firm Irmato Group as director of sales and operations. Together with his team, he helped the company grow from 20 to 135 employees. He learnt a lot on the job. ‘We built everything from scratch.’ In 2008, Ploegmakers started working at various companies as an interim manager and project manager. He has now seen more than a hundred companies from the inside.

Insight and overview

Configuration management is not a problem for IT, the reliability department or the R&D department, emphasises Ploegmakers. ‘This cuts across all departments, from the CTO to the factory floor.’ He believes the real problem often lies with the leadership. ‘Those responsible for technology, development and operations are not always able to understand the essence of the configuration management complexity. Organisations can deliver beautiful configurations of products and machines to customers, but the internal control of these configurations often leaves something to be desired. Business leaders often fail to see that this leads to enormous inefficiency and ineffectiveness.’

Managing and automating business processes starts with the insight into one’s own company and a good overview of the complexity. ‘It starts with a good company model. Many managers are unable to set that up with all teams. But it is necessary if you want to achieve complex products or machines together with a large organisation. If your product and your company become more complex, a simple method to manage the configuration process is essential.’ Once that process and the associated working methods are known, the introduction of the required information technology is easy. ‘Then it can be configured in PDM and ERP systems in no time at all.’

Doesn’t Ploegmakers paint a somewhat too rosy picture with this last statement? ‘No,’ he affirms. ‘The difficult thing is to first understand the complexity. That is an absolute condition for doing configuration management. The implementation of the underlying details is then simple. The old adage “organise first, automate second” still applies.’

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question 'Would you recommend this training to others?' they responded with a 7.3 out of 10.

Lowering the bar by raising the bar: high vacuum

Specialist in vacuum at High Tech Institute
Vacuums seem simple: you pump out the air until you reach the desired low pressure. For a high vacuum, however, simply pumping out the air is not enough; you must take extreme measures. For many engineers, this topic doesn’t come naturally. High Tech Institute teaches them the tricks of the trade.

More and more processes in the high-tech industry require a highly controlled environment. Consider the electron microscopes from Thermo Fisher or the EUV systems from ASML: if air enters their systems, electron beams are scattered and the EUV light is absorbed. A high vacuum is therefore an absolute necessity. Contamination is also a product killer in display production. Any amount of moisture in the air would prove disastrous for OLED materials, and the display would be a total loss.

The bar is getting higher and higher. “As long as I can remember, the pressure in electron microscopes should not exceed 10⁻¹⁰ mbar,” says Mark Meuwese, vacuum specialist at Settels Savenije Van Amelsvoort. “But the requirements are also becoming stricter in other applications. For example, soft x-ray systems used to be able to deal with 10⁻³ mbar. Nowadays, 10⁻⁷ is the new standard. With increasing accuracies come more sensitive sensors that are more susceptible to pollution or disturbance by the atmosphere present.”

Mark Meuwese is involved in the 4-day training ‘Basics and design principles for ultra-clean vacuum’.

“The fuller you build your vacuum system, the greater the chance of contamination,” says Mark Meuwese of Settels Savenije Van Amelsvoort. Up to 10⁻⁸ mbar, it’s all relatively simple, Meuwese knows. “Of course, you still have to work hard, but if you want to go even further, the challenges increase exponentially and the system becomes many times more expensive. A water molecule is a dipole and therefore sticks to surfaces. You can pump it off better if you put enough energy into it. The easiest method for this is to heat the vacuum chamber. But by creating a temperature distribution, you introduce the risk that the evaporated elements will settle on cold surfaces, in the worst case on the sensor, the samples or the product. Moreover, many sensor systems cannot withstand high temperatures. 10⁻⁸ mbar is the limit up to which everything goes well.”

Meuwese does not expect the bulk of applications to require lower pressures in the foreseeable future, though requirements can get stricter for specialized research work. “The limit is at 10⁻¹² to 10⁻¹³ mbar, I estimate. And at that level, you can hardly build a machine. Everything you introduce into the vacuum chamber is too much. The vessel and the pressure sensor are already too polluting, and even the most advanced pump leaks too much back into the system.”

Fingerprint

At its core, vacuum technology is simple. It starts with a vessel to which you connect a pump. You continue to pump air out until the pressure reaches the desired level. In practice, such a bare system is of little use. After all, you want to carry out processes in that vacuum, so everything has to be in the vessel. In fact, you often want the space you are working in to be full of mechanics, sensors and other components. How can you fill a vacuum chamber and still achieve a good vacuum level? This is one of the things you learn at an intensive training like “Basics & design principles for ultra-clean vacuum” at High Tech Institute.

“The more components you put in, the greater the chance of contamination,” says Meuwese, one of the teachers during the training. “The surface alone causes contamination through outgassing, and everything you place in the vessel means more surface, and therefore more outgassing. You have to pay attention to that.”

'A fingerprint lasts for weeks.'

How can you take a vacuum environment into account in your design? “There are a number of do’s and don’ts that we cover during the training. To begin with, there is, of course, a list of materials that are suitable for vacuum. Stainless steel is really good and you can also use aluminum without any problems. Brass, however, is not suitable because it contains zinc, which evaporates at 300 degrees Celsius at 10⁻³ mbar. Many companies have a list of materials and coatings their engineers are allowed to use.”

Rust is also out of the question because it is porous and contains water that outgasses, so thorough cleaning is a way of life. “A simple fingerprint can make you suffer for weeks. There are a surprisingly large number of molecules in a fingerprint, so it takes a long time before everything is gone. And there’s no guarantee you’ll be able to pump it all out,” says Meuwese. Proper cleaning is a profession in its own right and is discussed extensively during the training. Since grease is a no-no, standard ball bearings are a no-go. Designers have to rely heavily on elastic elements such as leaf springs and cross-spring hinges. “Or on ball bearings with ceramic balls, or fully ceramic bearings, since they do not need any lubricant.”

Little legs

Designers must also pay close attention to the shape and construction of components. “For example, they should avoid sharp edges. If you polish a part with a cotton swab or a cloth, remnants will get caught on them,” Meuwese explains. “A bolt in a blind hole traps a volume of air. If you pump down the vessel, that air slowly leaks out. Remember that the gas law states that pV/T is constant: if you want to reach 10⁻⁷ mbar, that small trapped volume becomes ten orders of magnitude larger. Blind holes are also annoying because water remains in them after rinsing, so they are to be avoided. And if you drill a hole to let the air and water out, it should not be too small. Due to capillary action, the water will otherwise remain in the hole.”
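The gas-law arithmetic behind that remark is easy to check. Below is a minimal sketch applying Boyle’s law (p₁V₁ = p₂V₂ at constant temperature); the trapped volume and pressures are illustrative assumptions, not figures from the training.

```cpp
#include <cstdio>

// Back-of-the-envelope sketch (illustrative numbers): a small volume of air
// trapped behind a bolt at atmospheric pressure expands enormously when it
// leaks into a high-vacuum chamber, per Boyle's law p1*V1 = p2*V2.
int main() {
    const double p_atm = 1013.0;    // trapped air pressure [mbar]
    const double v_trapped = 0.01;  // trapped volume [l], i.e. 10 ml (assumed)
    const double p_chamber = 1e-7;  // target chamber pressure [mbar]

    const double gas_load = p_atm * v_trapped;       // [mbar*l]
    const double v_expanded = gas_load / p_chamber;  // [l]
    std::printf("trapped gas load : %.3g mbar*l\n", gas_load);
    std::printf("expanded volume  : %.3g l (ten orders of magnitude larger)\n",
                v_expanded);
    return 0;
}
```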

'Grease is a no-no in a vacuum, so moving is done with elastic elements.'

If you use electrical discharge machining to create a part, there must not be any right angles in the pattern. “That is a different way of thinking. It is not about the most efficient design, but about preventing edges and corners. You have to curve everything and that is always a challenge. With some common sense and experience, you will eventually work it out.”

Even connecting two components in a vacuum is not straightforward. The surfaces are never flat enough to make them fit perfectly. A gap always remains – no matter how small – where air or contaminants are trapped. For the vacuum pump it is more convenient if you separate the two parts with little legs. Half a millimeter will often suffice.

Grease is a no-no in a vacuum, so moving is done with elastic elements.

Cheating

In the past, the High Tech Institute training was mainly about vacuum technology. In recent years, more attention has been paid to ultraclean. “Vacuum is easier to understand; you pump until you reach the desired pressure,” says Meuwese. “For ultraclean, that is just the first step. Afterwards, you fill the vessel again with a ‘clean’ gas, which, for example, no longer contains any water. But how can you backfill without polluting the vessel again? Nowadays, we also deal with that challenge during the course.”

'A vacuum is more thermally challenging than ultraclean.'

For a designer, there is little distinction between vacuum and ultraclean. The biggest difference is in the thermal properties. In a vacuum, heat transfer is very poor because there is no conductive medium: no convection and no conduction, only radiation, and radiation needs a large temperature difference to move significant heat. “In vacuum, therefore, everything becomes hot by definition,” says Meuwese. “Cooling can be done through closed channels with water, along and through the components. Or by making a thermal connection to a cold part of the system. There are also complex alternatives such as a helium backfill solution, where you apply local low pressure with molecules that can transfer heat. Actually, that is cheating,” Meuwese says with a smile.
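How little heat radiation moves at modest temperature differences can be quantified with the Stefan–Boltzmann law, P = εσA(T_hot⁴ − T_cold⁴). A minimal sketch; the emissivity and area are my own illustrative assumptions, not course figures.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch: radiative heat flow between a warm part and its
// surroundings, P = eps * sigma * A * (Th^4 - Tc^4). With a low-emissivity
// (polished metal) surface, only watts trickle away even at +200 K.
int main() {
    const double sigma = 5.670374419e-8;  // Stefan-Boltzmann [W/m^2 K^4]
    const double eps = 0.1;   // emissivity, polished metal (assumed)
    const double A = 0.01;    // radiating area [m^2] (assumed)
    const double Tc = 293.0;  // environment [K]

    for (double Th = 300.0; Th <= 500.0; Th += 50.0) {
        const double P = eps * sigma * A * (std::pow(Th, 4) - std::pow(Tc, 4));
        std::printf("T_hot = %3.0f K -> P = %6.3f W\n", Th, P);
    }
    return 0;
}
```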


“A vacuum is more thermally challenging than ultraclean”, says Mark Meuwese.

Sense

The growing importance of vacuum technology and ultraclean means that more and more engineers must be aware of the matter. Meuwese observes that although the level across the board is rising, there is still much to be gained. “Most people who come from college or university have a sense of technology. They sense that a thick I-profile beam can take more weight than a thin I-beam. They have much less of a natural sense for vacuum. If I tell someone that I can evaporate 10¹⁵ molecules within a certain time and there are 10¹⁸, I am a factor of a thousand off, but they don’t know what that means. A vacuum is more abstract than mechanics. Mbar·liters per second: it does not ring a bell for many engineers.”
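The unit itself is just the ideal gas law in disguise: N = pV/(k_B·T) converts a gas quantity in mbar·l into a number of molecules. A minimal sketch, assuming room temperature:

```cpp
#include <cstdio>

// Sketch: convert a gas quantity in mbar*l into molecules via the ideal gas
// law N = pV / (kB * T). 1 mbar = 100 Pa and 1 l = 1e-3 m^3, so
// 1 mbar*l = 0.1 Pa*m^3 = 0.1 J.
int main() {
    const double kB = 1.380649e-23;  // Boltzmann constant [J/K]
    const double T = 293.0;          // room temperature [K] (assumed)
    const double q_mbarl = 1.0;      // example gas quantity [mbar*l]

    const double pV = q_mbarl * 100.0 * 1e-3;  // [J]
    const double N = pV / (kB * T);
    std::printf("%.1f mbar*l at %.0f K is about %.2e molecules\n",
                q_mbarl, T, N);
    return 0;
}
```

So a throughput of 1 mbar·l/s corresponds to roughly 2.5 × 10¹⁹ molecules per second at room temperature.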

Schools nowadays are paying more attention to the subject. Certainly in the Eindhoven region, more and more students master the basic knowledge. “Coincidentally, I now have a student from Enschede, and the subject is less widely represented there: more at the University of Twente, but much less in higher professional education. The knowledge is closely tied to the Eindhoven region, but something like vapor deposition is used all over the world and you need vacuum knowledge for that.”

This article is written by Alexander Pil, tech editor of High-Tech Systems.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with an 8.3 out of 10.

Value engineering is so much more than just saving a few euros, says a lead system architect

trainer High Tech Institute
After years of practical experience at Philips Healthcare, Goof Pruijsen now offers advice on value engineering and cost management. He provides training on these subjects for High Tech Institute.

‘I really enjoy it.’ Goof Pruijsen does, when people from different technical development disciplines reap the benefits of his views and knowledge. ‘It gives me a wonderful sense of appreciation.’ He himself is immensely curious. It fascinates him to understand in detail what it is that people are considering buying, how and why a product works technically, and how you can improve it in order to improve a business.

Recently he received a big compliment from a lead system architect from ASML who attended a Pruijsen workshop together with his team. ‘I thought we were going to save a few euros, but I learned that value engineering was much more,’ says this system architect. ‘We dealt with some fantastic topics and posed questions about decisions that we had taken at a high level in system architecture. The insights we were left with didn’t only have an impact on costs, but also on the reduction of complexity, risk, time to market and the hours that we spent on engineering.’


Goof Pruijsen: ‘It is precisely the solution-driven approach, used by many teams, which makes them blind to alternatives.’

Value engineering is therefore perfectly suited to Pruijsen, although the definition is a bit boring: it’s about adjusting and changing design and manufacturing based on a thorough analysis. Done well, it often leads to cost reductions. That’s why developers often have a negative association with value engineering: the ‘squeezing’ of a design to save costs.

However, High Tech Institute trainer Goof Pruijsen identifies a much more important value: value engineering builds bridges between marketing, development, manufacturing, purchasing and the suppliers. Precisely this interplay between different disciplines ensures that you can achieve large gains with this approach.

Cost reduction often focuses on the component list of the current solution. This is what Pruijsen calls a beginner’s mistake. ‘You can see that newbies in the profession carry out a so-called Pareto analysis, in which they map out the 20 percent of the components that are responsible for 80 percent of the costs. They then shave something off the most expensive items. It’s not called the cheese cutter method for nothing.’
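The mechanics of such a Pareto analysis are trivial to script, which is part of why it is so tempting. A minimal sketch with made-up part names and prices (not Pruijsen’s material):

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Sketch of a bill-of-materials Pareto analysis: sort parts by cost and list
// the ones that together account for ~80% of the total. All data invented.
struct Part { std::string name; double cost; };

int main() {
    std::vector<Part> bom = {
        {"frame", 120.0},  {"motor", 340.0},  {"power supply", 210.0},
        {"cabling", 40.0}, {"sensor", 95.0},  {"fasteners", 15.0},
    };
    std::sort(bom.begin(), bom.end(),
              [](const Part& a, const Part& b) { return a.cost > b.cost; });

    double total = 0.0;
    for (const auto& p : bom) total += p.cost;

    double running = 0.0;
    for (const auto& p : bom) {
        running += p.cost;
        std::printf("%-14s %7.2f  cumulative %5.1f%%\n",
                    p.name.c_str(), p.cost, 100.0 * running / total);
        if (running / total >= 0.8) break;  // the 80% cut-off
    }
    return 0;
}
```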

This approach is often not very effective, says Pruijsen. ‘When this happens, others have often intervened before. Then there is not much more to be gained and chances are that new interventions will affect the quality. If that comes at the expense of your image, you are even worse off.’

Value engineering therefore starts, according to Pruijsen, with value for the customer. ‘What does the customer want to achieve? Which functions are needed for this? What is the value of that function and what are the costs?’ An example that he often mentions is as follows: it is not about the drill, not about the hole, but about hanging the painting in order to decorate your house. Going back to the ultimate goal makes room for creativity and new solutions and concepts.

Tolerance is the cost driver

Thinking in functions is less well established than most developers think. Pruijsen sees that the solution focus with which many teams work makes them blind to alternatives. ‘They don’t think out of the box.’ It helps – and that requires practice – to analyse an existing solution and to gradually abstract it until the functions are perfectly clear, without describing the solution. ‘Then you can map out the costs functionally and together investigate why these functions are expensive. That is a good start for optimising current and future product generations. I call that cost driver analysis. If you do this well, everyone starts to understand the problem much better and you are already halfway to the solution,’ says Pruijsen.

Tolerance, or accuracy, is a typical example of a cost driver. Narrow tolerances result in more processing time or extra steps. An average power supply is usually not that expensive, but if the voltage ripple has to be very small, the price rises.

'Developers are usually unaware of the consequences of their risk-avoiding copying behaviour.'

You need to take a close look at those tolerances, according to Pruijsen. ‘Are they really needed everywhere, or only locally? Why is this tolerance specified the way it is? That question often goes unasked. Tolerances may have been copied from the previous drawing; designers pay no attention to them, but they do appear on the invoice. Developers are usually unaware of the consequences of their risk-avoiding copying behaviour. If it turns out that a tolerance requirement is not so strict after all, manufacturing suddenly becomes much easier, faster and cheaper. Problems with manufacturability and production yield are often resolved spontaneously.’

Large projects, multiple teams, balanced design

In large projects with multiple sub-teams, every team optimises its own area as much as possible – if only out of ambition. Pruijsen: ‘If the teams don’t understand how the job is distributed across the modules, the chance of imbalance in design and specification is high. You don’t put a Formula 1 engine on the chassis of a 2CV. The performance of the components must be in balance with each other. The task of the system architect is to maintain that balance and prevent over-engineering.’

Pruijsen provides a practical case from his time at Philips Healthcare. X-rays have been used for many years in medical diagnostics and material research. To generate these x-rays, you shoot electrons at high voltage onto heavy metal. At one point, the marketing department asked for a new high-voltage generator: one with more power, better stability and higher reliability. And preferably also cheaper.

'Every step in the labour process also includes an error risk; and you can add to that an additional risk of quality problems and production loss.'

‘A project like this often starts for purely performance- and technology-driven purposes,’ says Pruijsen from experience. ‘In this case, however, we decided to start formally with a value engineering workshop in order to improve the profit margin on the product as well as the technical direction. The old generator was analysed with respect to costs and functions. It turned out that a relatively large amount of money was invested in the many smaller parts (the so-called long tail of the Pareto). There is no single expensive part to put your finger on; the syndrome is one of many components. A many-parts syndrome typically manifests itself in high design costs, high handling costs and high assembly costs for all parts involved. Every step in the labour process also carries an error risk, and on top of that comes an additional risk of quality problems and production loss. The direction for improvement is therefore usually reducing parts through integration, the so-called DFMA (Design for Manufacturing & Assembly).’

Another cost driver was locked in by the concept. In the old concept, the high voltage was safely insulated by submerging everything in an oil tank. That later turned out to be too big, too heavy and unnecessarily expensive.

Pruijsen: ‘We brainstormed each function and built a consistent and optimal scenario. For the high-voltage generation, we could ride on new technology that makes it possible to transform at higher frequencies. That way we could greatly reduce the volume.’

Observing how the generator was actually used brought the biggest breakthrough. The old generator had been developed by maximizing all individual performance requirements, without looking at whether these were useful combinations or not. However, doctors use either a single high-power shot or several images per second at very low power (and some combinations in between). ‘When the engineers saw this, they were indignant. Nobody had ever told them that! The result was a large reduction in required power and a high-voltage tank that was ultimately only a tenth of the original volume.’

Cooling was still necessary, but instead of using large fans, Pruijsen and his team placed the largest heat source at the bottom of the cabinet. ‘This created a convection current. We used the heat source to improve cooling.’ This is an example of ‘reversed thinking’.

‘The end result was a smaller and quieter generator, 35 percent cheaper. Moreover, fewer components were needed and we achieved better reliability. And there was another optimisation: the total space required for the system could be reduced by one cabinet.’

Could it have been even better? Yes, of course it could, says Pruijsen. ‘We were unable to break through one specification point during this process. The generator was specified at 100 kW. It was said that this had to be so according to medical regulations. It took me months to find the source of this misconception. It turned out to be a medical guideline that advises the use of a generator of at least 80 kW in order to be able to make a good diagnosis with greater certainty. So it was a piece of advice, not a regulation!’

This ‘advice’ dated back to 1991. In the intervening twenty years, image processing techniques have progressed so fast that a better result can be obtained with much less power. Eventually, Pruijsen found a product manager who admitted that it was not a legal directive but a so-called tender spec. ‘Because manufacturers have been telling their customers for years that only 100 kW gives sufficient quality, it has become an accepted customer belief.’

‘If the tolerance requirements prove too high but can be relaxed, manufacturing can suddenly become much easier, faster and cheaper,’ says Goof Pruijsen. ‘Problems with manufacturability and production yield are then often resolved spontaneously.’

Managing modular architecture

Pruijsen gives another example. A large module in a production machine was split into a number of small modules, so that a sub-module could be replaced quickly in case of failure. The assumption was that this was cheaper and required less service stock. ‘The increase in the number of critical interfaces with tight tolerance requirements, however, doubled the cost price, and the complexity increased so much that the expected reliability was dramatically lower,’ says Pruijsen. ‘Add to this the additional development costs and production tests. A one-piece design turned out to be the better solution, with the components most at risk of failure placed in an easily accessible location. The lesson: modularity is not about cutting a module into submodules, but about placing your module boundaries and interfaces correctly. In this case, with a view to the best and most cost-efficient service. You have to keep thinking about the consequences and the balance.’

In his value engineering training course, Pruijsen makes it clear how the set-up of a value engineering study works in practice. First, he concentrates on analysis tools and then on creative techniques for improved scenarios. In addition, attention is paid to involving suppliers in this approach.

There is a lot of attention paid to practical training. One third of the training course consists of practical exercises. For example, there is a ‘Lego-car exercise’ in which course participants learn how to tackle cost reduction and value increase. In addition, they also carry out benefit analyses (case: on the basis of which criteria do customers decide to buy a car?), process flow analysis (case: optimisation of a canteen) and function analysis (the core of functional thinking). Many techniques are clarified on the basis of examples.

Pruijsen also asks course participants to prepare a short presentation of up to ten minutes in advance about their business and product. He may choose one to jointly analyse ‘on the spot.’


Goof’s tips for value engineering

Last but not least, here are some tips from Goof Pruijsen in relation to value engineering:

1. Analyze before considering solutions

2. Go back to basic comprehension: what does it do?

3. What makes it expensive and why?

4. Make an inventory of the assumptions and try to destroy them

5. Be creative; don’t limit yourself to thinking of traditional solutions (risk avoiding), but look for the boundaries

6. Bring the solutions together in a total overview and build scenarios

7. Don’t play down the risks, but don’t use them as an excuse for not doing things either. Make them explicit and find mitigations for them

8. Keep an eye on the business side of things. Everyone likes to be creative, but money also needs to be earned. Which scenario best satisfies the financial and organisational preconditions?

9. Go for it!

This article is written by René Raaijmakers, tech editor of High-Tech Systems.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with an 8.3 out of 10.

Multicore programming skills do not come from Dijkstra

Multicore programming in C++ trainer Klaas van Gend
In practice, writing parallel software is still a difficult task. You keep coming up against unforeseen issues if you don’t understand each and every level of the problem, says Klaas van Gend.

In 2019, multicore software should be easier to write than ever. Modern programming languages such as Scala and Rust are maturing, programming frameworks are getting easier to use, and C# and good old C++ are embracing parallelism as part of their standard libraries.

However, in practice, it’s still a messy process. The whole thing turns out to be difficult to synchronize, and once the software works, it often runs little or no faster on a multicore processor. And to make matters worse, it tends to exhibit all kinds of elusive errors.

Parallel programming is just a very tough subject, where you run into all sorts of subtle, unexpected effects if you don’t understand what’s happening at all levels, says Klaas van Gend, software architect at Sioux. ‘I’ve heard people talking about sharing nodes on a supercomputer using virtual machines. But they ruin each other’s processor cache; they just get in one another’s way.’

'At university it was all about Dijkstra, which means mutexes, locks and condition variables. But the moment you take a lock, you ensure that the code is executed on one core while the others temporarily do nothing. So you really only learn how not to program for multicore.'

According to Van Gend, the problem is that many developers never received a pedagogically sound basis during their computer science training. ‘At university it was all about Dijkstra, which means mutexes, locks and condition variables. But the moment you take a lock, you ensure that the code is executed on one core while the others temporarily do nothing. So you really only learn how not to program for multicore,’ he says.
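The effect is easy to demonstrate. In the sketch below (my own illustration, not course material), four threads increment a shared counter: the mutex version takes the lock for every increment and so serializes the cores, while the lock-free atomic version avoids that, though contention still has a cost.

```cpp
#include <atomic>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// Four threads bump two counters: one guarded by a mutex (every increment
// serializes on the lock) and one std::atomic (lock-free, but the cores
// still contend for the cache line holding it).
int main() {
    constexpr int kThreads = 4;
    constexpr int kIters = 1'000'000;

    long locked_counter = 0;
    std::mutex m;
    std::atomic<long> atomic_counter{0};

    std::vector<std::thread> workers;
    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back([&] {
            for (int i = 0; i < kIters; ++i) {
                { std::lock_guard<std::mutex> lock(m); ++locked_counter; }
                atomic_counter.fetch_add(1, std::memory_order_relaxed);
            }
        });
    for (auto& w : workers) w.join();

    std::printf("locked: %ld, atomic: %ld\n",
                locked_counter, atomic_counter.load());
    return 0;
}
```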

That is why Van Gend has taken the multicore training given by his former employer Vector Fabrics out of mothballs. Until a few years ago, Vector Fabrics focused on tooling to provide insight into the perils of parallel software. Together with CTO Jos van Eijndhoven and other employees, Van Gend provided training courses on the subject. The company went bankrupt in 2016, but Van Gend realised at his current employer that the problem is still relevant. After having once again given the training course there, he now also offers it for third parties under High Tech Institute’s flag.


Klaas van Gend is the lecturer of the 3-day training ‘Multicore programming in C++’.

A problem at each and every level

One of the important matters when writing parallel software is finding out how to make it work on multiple levels, explains Van Gend. He always makes this point with a simple example: Conway’s Game of Life, the cellular automaton in which cells in a grid become black or white with each new round, depending on the status of their immediate neighbours. ‘At the bottom level of your program you have to check what your neighbouring cells are. You can do that with two for-loops. And then you have a loop for a complete row, and above that one for the complete set of rows.’

‘Most programmers will begin to parallelize at those bottom loops. That is very natural, because that is a piece of code that you can still understand, that still fits in your mind. But it makes much more sense to begin at a higher level and take that outer loop. Then you divide the field into multiple blocks of rows and your workload per core is much larger.’
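In C++, that outer-loop split can look like the sketch below; the grid representation, thread count and neighbour counting are my own assumptions, not code from the training.

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

using Grid = std::vector<std::vector<int>>;

// The two innermost for-loops: count the live neighbours of cell (r, c).
int neighbours(const Grid& g, int r, int c) {
    const int rows = (int)g.size(), cols = (int)g[0].size();
    int n = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc)
            if (dr != 0 || dc != 0) {
                const int rr = r + dr, cc = c + dc;
                if (rr >= 0 && rr < rows && cc >= 0 && cc < cols)
                    n += g[rr][cc];
            }
    return n;
}

// Parallelize the *outer* loop: each thread gets a contiguous block of rows,
// so the workload per core is large.
void step(const Grid& cur, Grid& next, unsigned nthreads) {
    const int rows = (int)cur.size();
    const int block = (rows + (int)nthreads - 1) / (int)nthreads;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back([&, t] {
            const int lo = (int)t * block;
            const int hi = std::min(rows, lo + block);
            for (int r = lo; r < hi; ++r)
                for (int c = 0; c < (int)cur[0].size(); ++c) {
                    const int n = neighbours(cur, r, c);
                    next[r][c] = (n == 3) || (cur[r][c] == 1 && n == 2);
                }
        });
    for (auto& th : pool) th.join();
}

int main() {
    Grid a(256, std::vector<int>(256, 0)), b = a;
    a[10][10] = a[10][11] = a[10][12] = 1;  // a 'blinker' pattern
    for (int gen = 0; gen < 100; ++gen) { step(a, b, 4); std::swap(a, b); }
    std::printf("cell (10,11) after 100 generations: %d\n", a[10][11]);
    return 0;
}
```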

If you look at matters that way, it soon becomes clear that there are many things to watch out for. There are also programs where the load is variable. ‘For example, we have an exercise to calculate the first hundred prime numbers. There is already more than a factor of one hundred in work between prime number ten and prime number ninety-nine. Then you have to take care of load balancing.’
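One common answer to such a variable load, sketched below under my own assumptions, is dynamic work distribution: rather than pre-slicing the range, each thread grabs the next candidate from a shared atomic counter, so cheap and expensive items even out across the cores.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Trial division: cheap for small candidates, expensive for large ones,
// which is exactly why static slicing would balance poorly.
bool is_prime(long n) {
    if (n < 2) return false;
    for (long d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

int main() {
    std::atomic<long> next_candidate{2};
    std::atomic<int> found{0};

    auto worker = [&] {
        while (found.load() < 100) {                     // first hundred primes
            const long n = next_candidate.fetch_add(1);  // grab the next item
            if (is_prime(n)) found.fetch_add(1);
        }
    };

    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t) pool.emplace_back(worker);
    for (auto& th : pool) th.join();

    // Note: the count may overshoot 100 slightly because several threads can
    // find a prime at the same moment; fine for a sketch.
    std::printf("found %d primes\n", found.load());
    return 0;
}
```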

There are also differences in what you can parallelize: the data or the task. ‘Data parallelism is generally suitable for very specific applications, but otherwise you soon arrive at a kind of decomposition of your task. This can be done with an actor model or with a Kahn process network, but data parallelism can again be part of it. In practice, you will see that you always end up with mixed forms.’

It hasn’t just been about algorithms for some time now; the underlying hardware plays a key role. For example, if the programmer doesn’t take the caching mechanisms of the processor into account, the problem of false sharing may arise. ‘I have seen huge applications brought to their knees,’ says Van Gend. ‘Suppose you have two threads that are both collecting metrics. If you divide those messily, counters from different threads can end up in the same cache line. The two processors then need to work simultaneously with the same cache line, and your cache mechanism constantly drags the line back and forth. That lowers performance greatly.’

For that reason, Van Gend is also skeptical about the use of high-level languages in multicore designs; they have the tendency to abstract away the details of the memory layout. ‘With a language like C++ it is still very clear that you are working on basic primitives. But high-level languages often hastily skim over the details of the data types, which means that the system can never really run smoothly.’
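The metrics scenario Van Gend mentions is easy to reproduce. In the sketch below (a 64-byte cache line is assumed; compile with optimization low enough that the loops aren’t collapsed), two threads each bump their own counter; when the counters share a cache line, the run is typically several times slower.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Two per-thread counters. In 'Shared' they sit in the same cache line and
// the cores drag that line back and forth (false sharing); in 'Padded',
// alignas(64) gives each counter its own line.
struct Shared { long a = 0; long b = 0; };
struct Padded { alignas(64) long a = 0; alignas(64) long b = 0; };

template <typename Counters>
long run_ms(Counters& c) {
    const auto t0 = std::chrono::steady_clock::now();
    std::thread t1([&] { for (int i = 0; i < 100'000'000; ++i) ++c.a; });
    std::thread t2([&] { for (int i = 0; i < 100'000'000; ++i) ++c.b; });
    t1.join();
    t2.join();
    const auto t1n = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1n - t0)
        .count();
}

int main() {
    Shared s;
    Padded p;
    std::printf("same cache line : %ld ms\n", run_ms(s));
    std::printf("padded counters : %ld ms\n", run_ms(p));
    return 0;
}
```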

'If you only partially understand the model, then you will run into problems. It works well for certain specific situations, but it can’t be used everywhere.'

In any case, Van Gend thinks that new languages are no miracle cure for the multicore problem. As a rule, they assume a specific approach that doesn’t necessarily fit the application well. ‘Languages such as Scala or Rust rely heavily on the actor model to make threading easier. If you only partially understand the model, then you will run into problems. It works well for certain specific situations, but it can’t be used everywhere.’

The wrong assumption

The modern versions of C++ also offer additions to enable parallel programming. ‘Atomics are now fully integrated, for example. With them you can often exchange data without stopping anything. We are also working on a library in which the locking is no longer visible to users at all. If it is necessary, it happens without the user seeing it, and with the shortest possible scope, so the lock is released as soon as possible,’ says Van Gend. Here, too, it is important to understand what you are doing. Van Gend, for example, is a lot less enthusiastic about the execution-policy additions to the standard library in C++17. These allow a series of basic algorithms such as find, count, sort and transform to run in parallel by simply adding an extra parameter to the function call. ‘But that only works for some academic examples; in practice, it will not work,’ Van Gend says. ‘These APIs are based on a wrong basic assumption. And in the C# API they have made the same mistake again.’
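For reference, this is what those execution policies look like in use; a minimal sketch (toolchain support varies: MSVC ships it out of the box, while newer GCC versions need an extra library such as TBB).

```cpp
#include <algorithm>
#include <cstdio>
#include <execution>
#include <numeric>
#include <vector>

// C++17 execution policies: the extra std::execution::par argument asks the
// library to run the algorithm in parallel.
int main() {
    std::vector<int> v(10'000'000);
    std::iota(v.begin(), v.end(), 0);
    std::reverse(v.begin(), v.end());

    std::sort(std::execution::par, v.begin(), v.end());  // parallel sort

    const long long sum =
        std::reduce(std::execution::par, v.begin(), v.end(), 0LL);
    std::printf("front=%d back=%d sum=%lld\n", v.front(), v.back(), sum);
    return 0;
}
```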

The problem is that with this approach you can only parallelize separate steps. ‘It stimulates parallelizing each operation individually. You re-partition your dataset for each operation, do something, merge it again and go on to the next operation. It is always parallel, sequential, parallel, sequential, and so on. That is conceptually very clear, but each time you have to wait until all the threads are ready before you can continue. It is a complete waste of time. With a library such as OpenMP, on the other hand, the entire set of operations is simply distributed over the threads, so you don’t have to wait unnecessarily.’
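The contrast can be sketched with OpenMP: instead of forking and joining per algorithm call, one parallel loop carries each element through all the steps back-to-back (a minimal illustration; compile with e.g. g++ -fopenmp).

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1'000'000;
    std::vector<double> data(n, 1.0);

    // One parallel loop, two operations fused per element: no barrier (and
    // no re-partitioning of the dataset) between the steps.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; ++i) {
        data[i] = data[i] * 3.0 + 1.0;  // step 1
        data[i] = data[i] * data[i];    // step 2, immediately after
    }

    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) sum += data[i];

    std::printf("sum = %.0f\n", sum);
    return 0;
}
```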

'The funny thing is that Microsoft also played a large part in the Par Lab at UC Berkeley. This has resulted in a fairly large collection of design patterns for parallel programming, which I deal with extensively in the training course.'

The GCC compiler doesn’t provide any support for these parallel functions. Visual Studio does, because the additions ultimately come from Microsoft. ‘The funny thing is that Microsoft also played a large part in the Par Lab at UC Berkeley. This has resulted in a fairly large collection of design patterns for parallel programming, which I deal with extensively in the training course. Microsoft has shown that they understand exactly how to do it properly.’

This article is written by Pieter Edelman, tech editor of Bits&Chips.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with an 8.6 out of 10.

Accurate machines can’t exist without good thermal management

thermal design and thermal management trainer Theo Ruijl
‘In most companies, thermal design and thermal management is still in its infancy,’ says Theo Ruijl, CTO of MI-Partners and trainer of the ‘Thermal effects in mechatronic systems’ course. Ruijl sees this as a huge deficiency. ‘You can’t build a precise machine if you neglect the thermal aspects.’

The largest errors in a machine are caused by vibrations and fluctuations in temperature. If you don’t have both under control, you can say goodbye to an accurate system. Unfortunately, not all designers are aware of this fact. With a leaf spring you can support a system in a statically determined manner, but many engineers are unaware of the fact that such a leaf spring is also a great thermal insulator. ‘Many developers are lacking in knowledge about thermal effects in mechatronic systems,’ says Theo Ruijl, CTO of MI-Partners and trainer of the ‘Thermal effects in mechatronic systems’ course (TEMS).

In Dutch and Belgian high tech there is a lot of knowledge about dynamics, about good design, about damping. After all, generations of mechanical engineers have grown up with the construction principles of great teachers like Rien Koster and Wim van der Hoek with his Des Duivels Prentenboek (‘The Devil’s Picture Book’). But in most companies, thermal management is still not well covered.

‘Any engineer seeking to achieve a high level of accuracy will sooner or later be confronted with thermal effects,’ says Theo Ruijl, who has been working on thermal effects in mechatronic systems for two decades. ‘Temperature variations, drift, dissipation in an actuator, energy absorption of electromagnetic waves in a lens or mirror: all of these have an impact on the performance of a system. Of course, you can ignore them, and, for a while, things might work well. But if a competitor with good knowledge of thermal aspects suddenly appears, he will overtake you and leave you far behind.’


‘The technical universities produce excellent graduates and post graduates in dynamics and control technology, but they do not train students in the thermal effects in mechatronic systems,’ says Theo Ruijl, thermal effects trainer.

In the high tech industry, developers are struggling with thermal distortions and inaccuracies. ‘At ASML these challenges are currently greater than the dynamic ones,’ says Ruijl. ‘An enormous amount of light is being pumped into these machines. It is inevitable that as a result the wafer heats up and deforms. If that happens nicely and evenly, you are still able to simulate and predict it. Unfortunately, all kinds of non-linear effects occur. Then modelling and compensation become very difficult.’

Thermo Fisher also highlights the subject. Ruijl: ‘Many users of electron microscopes are in the life sciences. They research biological processes that they literally freeze in order to study them properly. That means dissolving them in water and cooling the water down to the freezing point. The ice must be amorphous, not crystalline, because otherwise you can’t see anything under the microscope. You will only get that kind of structure if you cool the sample at lightning speed, at 100,000 to one million Kelvin per second. Then the frozen sample needs to be observed under the microscope. The preparation and positioning pose a huge thermal challenge. How do you keep the sample at the right temperature within high vacuum? And what effect does that have on the sensitive optical and mechatronic systems around it?’

The big loss

The fact that many companies still lack in-depth thermal knowledge is largely due to a gap in education. ‘The technical universities produce excellent graduates and postgraduates in dynamics and control technology, but they do not teach the thermal effects in mechatronic systems,’ says Ruijl firmly. He himself studied with TUE professor Piet Schellekens. ‘Since Piet Schellekens retired fifteen years ago, thermal design and metrology have been neglected. Nobody has taken these issues seriously, not even in Delft or Twente. That is a big loss. There are so many fundamental challenges in this domain. It would really require a dedicated full-time professor.’

With the arrival of Hans Vermeulen a couple of years ago, there has been a part-time professor at the Eindhoven University of Technology who has put the subject on the agenda. For his Mechatronic Systems Design group, however, advanced thermal control is one of many topics. A large part of the permanent staff of Schellekens has since left. ‘In Germany the subject is more on the map,’ says Ruijl. ‘There is a large market for machine tools in which thermal effects play a major role. German machine tool builders and knowledge institutions understand each other well on this point. They run various research projects at the Fraunhofer institutes. TEMS research programs are also running in Switzerland and Spain.’

Recycling

Despite the gap in university education, there are quite a few thermal specialists in the industry. They are all self-made people who have learned the trade in practice. For Ruijl, that process started at Philips almost twenty years ago. ‘For a long time, we have known exactly how to model dynamics and control technology and how to integrate them into machines. In a typical design process, different specialists sit at the table so that you can develop a machine with input from all disciplines. In the old days it sometimes happened at Philips that someone at the end of such a process found out, through a complicated finite-element calculation, that thermally it didn’t work. That is why we started to develop a competence in this field with a focus on mechatronic systems.’


To calculate thermal effects, engineers reuse mathematical techniques from dynamics. This resulted in the concept of thermal mode shapes.

Right from the outset, the specialists discovered that the techniques they had already applied in dynamics could also be used in the thermal domain. ‘In dynamics and control engineering, we use state-space models, and their eigenfrequencies and mode shapes are important quantities,’ Ruijl explains. ‘Such a model is nothing more than a set of differential equations. Thermal effects are also described with differential equations. And to the mathematics it doesn’t matter whether it describes a mechanical-dynamic or a thermal-dynamic system.’

It is not exactly the same, though. In the thermal domain there are no objects that behave like a mass-spring system; the temperature does not overshoot but gradually settles, like a first-order system. Take a metal plate: if you heat it up in the middle, it will cool down as soon as you remove the heat source, but it never gets colder than the environment. The temperature distribution as a function of time can be modelled perfectly.
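That first-order behaviour is easy to see in a lumped-parameter model, C·dT/dt = P − (T − T_env)/R, with thermal capacitance C and thermal resistance R. A minimal sketch with illustrative values (my assumptions, not course material): the temperature rises toward T_env + P·R without overshoot and decays back once the heater switches off.

```cpp
#include <cstdio>

// Forward-Euler simulation of a lumped thermal model,
//   C * dT/dt = P - (T - T_env) / R.
// Unlike a mass-spring system, the response never overshoots and never
// drops below the environment temperature.
int main() {
    const double C = 500.0;     // thermal capacitance [J/K] (assumed)
    const double R = 0.5;       // thermal resistance to environment [K/W]
    const double T_env = 20.0;  // environment temperature [deg C]
    const double dt = 1.0;      // time step [s]

    double T = T_env;
    for (int t = 0; t < 3600; ++t) {
        const double P = (t < 1800) ? 100.0 : 0.0;  // heater on for 30 min
        T += dt * (P - (T - T_env) / R) / C;
        if (t % 600 == 0)
            std::printf("t = %4d s  T = %6.2f C\n", t, T);
    }
    return 0;
}
```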

'It is quite unique how we, here in the Netherlands, look at thermal effects from a mechatronic design approach.'

Ruijl and his colleagues recycled the mathematical techniques from dynamics. ‘We still use tools from, for example, Ansys or Mathworks to perform the calculations. The analyses of mechanical vibration shapes have long been included in those packages. The thermal mode shapes were not, even though the technology was already there. When we started about twenty years ago, we asked Ansys if they could give us access to this feature. It took a long time, but now they have included a button for it. That shows how unique it is that we, here in the Netherlands, look at thermal effects from a mechatronic design approach. It is really different from a pure physics approach, which often involves thermodynamic processes. We link thermal effects to mechatronic systems.’

Consciously incompetent

In order to get the theme fixed in the way of working of its employees, Philips developed a special training course: Thermal effects in mechatronic systems. The three-day course has since found refuge at Mechatronics Academy and is being marketed by High Tech Institute. Alongside Rob van Gils (Philips), Marco Koevoets (ASML) and Jack van der Sanden (ASML), Theo Ruijl is one of the trainers.

‘Of course, you can’t give a full training covering all topics in only three days,’ Ruijl admits. ‘The audience is too broad for that; people from different technical backgrounds come to the training course. Some have never done anything with TEMS, others are already quite experienced. Some are mechanical engineers, others are control engineers.’

Dutch specialists look at thermal effects from a mechatronic design approach. That is unique in the world. For Ruijl, that started years ago with his PhD research supervised by Jan van Eijk and Piet Schellekens.

On the first day, the students receive an introduction to the physics background. ‘Heat transfer by radiation, conduction and convection,’ Ruijl sums up. ‘How do you deal with it? Many facts, tips and tricks. Then we go deeper and do simulations with Matlab and Simulink.’ That lays the foundation. ‘The goal is for everyone to speak the same language afterwards.’

Day two deals with measurement techniques. ‘Measuring temperature is a skill in itself,’ emphasises Ruijl. ‘For a start, there are many different sensor types. But how do you measure accurately? And where? Am I measuring the temperature of the object itself or of the lamp that is shining on it? Together with Jack, I once developed a system to control the water temperature very precisely. With a small coil in the stream we were able to warm it up very quickly and very accurately. Then we made a nice setup for an exhibition, with beautiful Perspex tubes so that everything could be seen very clearly. Unfortunately, there we didn’t manage to get the temperature stable at all. We must have done something wrong, but what? It was so bad that the temperature fluctuated whenever people walked by. In the end, the ceiling lighting in the hall turned out to be influencing the sensor by radiation through the transparent Perspex. You only make a mistake like that once,’ laughs Ruijl.

The students themselves also get to model, using Matlab, although that tool doesn’t have a special toolbox for thermal effects. ‘We also deal with a cryogenic example as a practical case,’ says Ruijl. ‘How do you measure, for example, 77 kelvin? Which materials are best to use? Cryogenics is important for scientific experiments and for builders of electron microscopes.’

'Every design group should include a thermal specialist.'

What is the lesson for the TEMS students? ‘The most important thing is that they understand the language,’ Ruijl replies. ‘We also make them aware of the issues they have to pay attention to and take into account. Consciously incompetent. That is very valuable, because manufacturers with that knowledge can catch mistakes at an early stage by taking another look at the project or by bringing in a specialist. Every design group should always include a thermal specialist.’

This article is written by Alexander Pil, tech editor of High-Tech Systems.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with an 8.9 out of 10.