Bridging the hardware-software gap

Trainer: ‘Software engineering for non-software engineers’
Nico Meijerman joined NTS to help build and expand the company’s software competency. Shortly after arriving at the hardware stronghold, he started to work on bridging the gap between software engineering and the worlds of physics, mechanics and hardware-related disciplines. The result is a workshop in which Meijerman teaches his non-software colleagues the basics of software engineering. Customer and business specifics included.

First-tier supplier NTS Group has quietly been shaping its software engineering competence over the last couple of years. You might not expect this from a company that’s still making most of its money by bending sheet steel, milling parts and assembling systems.

However, embracing software expertise is a natural step for some first-tier suppliers. Over the past decades, NTS has been actively building its system development capabilities. It now develops and manufactures complete machines and modules that are branded and marketed by its customers. With the value of end products shifting to software, it seems a natural move for NTS to develop the competences to catch the digitization wave.

NTS’ Development and Engineering (D&E) department has a headcount of about two hundred engineers, of which 15 percent focus on software. For a supplier that delivers high-end systems and designs, this is still on the low side, Meijerman argues. “It will grow because we see software becoming more and more essential for creating value for our customers. We see the software effort increasing in our projects.”


“We see the software effort increasing in our projects”, says Nico Meijerman. 

An intriguing offer

Meijerman has been walking the hardware-software trail for his entire career. He learned to design chips during his studies in Twente and joined Sagantec, a company that both worked on a silicon compiler and developed application-specific ICs for customers. There, he started designing chips, but soon enough, he shifted to programming because the embedded software turned out to take more effort than the hardware itself.

Later, Meijerman taught informatics-related subjects at several departments of the University of Applied Sciences (HTS) in Arnhem. Subsequently, he joined Philips CFT, which needed a software engineer who understood what was happening in the mechanical and electrical domains. There, he developed motion control software for ASML’s first PAS 5500 litho scanners. Soon after, he also worked on MRI scanners for Philips Healthcare.

In 2010, Meijerman decided it was time to start his own consultancy company, but a few years later, NTS approached him with an intriguing offer: would he be interested in becoming the group leader for machine control, a team focusing on software and electrical engineering? Helping to build up the software competency seemed a daunting yet very attractive challenge.

After arriving at the hardware stronghold, Meijerman knew he needed to work on his relationship with the NTS mechanics and mechatronics base. He figured a short workshop would help make his new colleagues more familiar with software.

According to Nico Meijerman of NTS, “Mechanical engineers deal with the limitations of physics, software manages complexity.”

Meijerman started interviewing colleagues to get an idea of their needs. First, he talked to the systems engineers – the guys that mostly have a mechanical background. “The most frequent response was that they had no clue about software. I heard remarks like ‘Those guys are always too late’, ‘They never make the things that I really need’ and ‘I can’t work with them because they don’t understand anything’. It was very much a culture of blaming and it was clear that our systems engineers didn’t know what software was doing. They saw it as an unpredictable black box.”

In Meijerman’s contact with the system architects, things started to resonate more. “They at least had an idea of what they would like to know about software. They wanted to know more about programming languages, third-party software, multitasking, real-time behavior, Agile and other basic concepts. They also wanted to know what a software development process looks like.”

'We saw the need to involve clients early in the software development process.'

Before long, Meijerman and the system architects concluded that the customer perspective is of enormous importance. “We saw the need to involve clients early in the software development process. For NTS, this was a high priority because most of its customers have a mechanics background. They know that software has to be included but they have to be educated on the specifics – for instance, on the fact that software is never bug-free. That’s why part of my workshop is also about business models and everything that follows our development activities.”


Nico Meijerman is the trainer for ‘Software engineering for non-software engineers’

Wrong assumptions

In the high tech industry, you often hear that communication is the problem in multidisciplinary settings. But at NTS, Meijerman experienced that it’s more about understanding and being able to step into someone else’s shoes. “People do try to communicate. I see that there’s definitely a willingness to talk,” he says. “But hardware and software engineers often live in completely different worlds.”

'Mechanics is about managing the limits of physics, while software is about managing complexity.'

Meijerman explains that mechanical engineers predominantly look at the limitations of physics. “It’s about nanometers, about milliseconds. The stiffness of a construction determines what you can achieve. Components wear out if you use them too long.” Software engineers, on the other hand, do not deal with physics; they try to control complexity. “Mechanics is about managing the limits of physics, while software is about managing complexity.”

The problems often arise from wrong assumptions. “Mechanical engineers rarely ask a software engineer about the degree of complexity. That’s why a software engineer will say in most cases that he can fix a machine control problem – except for some very difficult issues. But if you ask him how much effort it’s going to take and how complex it is, you may get a completely different answer. A mechanical engineer looks at things from a physics point of view, not from a complexity point of view. But he should know how much work his question can generate. The lack of understanding of such basic concepts makes it difficult to interact. Equally, software engineers definitely have to improve their knowledge of mechanical engineering.”

Part of Meijerman’s workshop is understanding that software engineering isn’t the same as programming. “Youngsters are learning how to program while at university or during technical education. A lot of people think software engineering is just more programming but that couldn’t be further from the truth. In programming, complexity usually isn’t the issue, as you’ll end up with some hundreds of lines of code. It’s not until you’re dealing with over a hundred thousand lines of code that it starts to get complicated. In high-tech systems this is the case: there are sometimes millions of lines of code, and the only way to tackle challenges of this magnitude is to find a way to work through the problem. That means breaking it down and ensuring that your work is correct. Engineering is about focusing on architecture and design, as well as managing complexity.”

“My goal is to teach participants all aspects of software engineering,” Meijerman concludes. “Once they grasp what those entail, they understand that they can’t ask their nephew who plays with Arduino boards to write a program for them over the weekend.” At the end of the workshop, participants understand more about the intriguing world of software engineering and about the differences and commonalities between software engineering and other disciplines, resulting in better collaboration, better solutions and hopefully more fun in their work.

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Object-oriented techniques are also suitable for PLCs

Onno van Roosmalen and Tim van Heijst have set up a five-day course in which PLC experts and OO specialists learn how to develop a PLC application through object-oriented programming. High Tech Institute will be rolling out this course for the first time in early 2020.

More and more, PLC programmers are facing the same challenges that were previously reserved for large, complex software projects. Object-oriented techniques offer a solution, but in the past, they could not be applied one-to-one due to the traditional limitations of PLCs. With the new release of the IEC 61131-3 standard for PLCs, object-oriented functions are now available and the possibilities have grown enormously.

Universities of applied sciences and academia regularly dismiss PLCs as an inferior technology: nice for simple machines, but totally inadequate for the complex systems they work on. In the past, they may have had a point. PLCs are fundamentally cyclical, and the cycle time could often throw a wrench in the works. However, for modern PLCs, a cycle of less than ten milliseconds is quite normal. For time-critical applications, there are even PLCs available with a cycle time of less than a millisecond.
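The cyclic execution model that defines a PLC can be sketched as follows. This is a hypothetical Python illustration of the read-evaluate-write scan loop, not how any real PLC firmware is implemented; all names and the toy level-control logic are invented:

```python
import time

def plc_scan_cycle(read_inputs, logic, write_outputs, cycle_time_s=0.010, cycles=3):
    """Run a fixed number of PLC-style scan cycles.

    Each cycle: read all inputs into a process image, evaluate the
    user logic once, write all outputs, then wait out the remainder
    of the fixed cycle time.
    """
    for _ in range(cycles):
        start = time.monotonic()
        inputs = read_inputs()      # snapshot of the process image
        outputs = logic(inputs)     # user program, evaluated once per cycle
        write_outputs(outputs)
        elapsed = time.monotonic() - start
        if elapsed < cycle_time_s:
            time.sleep(cycle_time_s - elapsed)

# Toy example: open a valve whenever the level is below a setpoint.
state = {"level": 40.0, "valve_open": False}

def read_inputs():
    return {"level": state["level"]}

def logic(inputs):
    return {"valve_open": inputs["level"] < 50.0}

def write_outputs(outputs):
    state["valve_open"] = outputs["valve_open"]

plc_scan_cycle(read_inputs, logic, write_outputs)
print(state["valve_open"])  # level 40.0 is below the setpoint, so the valve opens
```

The fixed cycle budget is the crux: whether that budget is ten milliseconds or one determines which applications a given PLC can serve.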


Onno van Roosmalen (left) and Tim van Heijst (right) have set up a five-day course in which PLC experts and OO specialists learn how to develop a PLC application through object-oriented programming.


“Packaging machines – even if they have to reach a very high speed – can easily be controlled with a PLC,” says Tim van Heijst, owner of Codesys specialist Extend Smart Coding and lecturer at High Tech Institute. “PLCs are also an excellent platform for robot applications. With the current power of PLCs, there are hardly any applications where they fail.”

This does not mean that suppliers such as B&R, Beckhoff, Rockwell and Siemens sell their PLCs into the higher segment without a hitch. At larger machine builders, it is sometimes still an internal struggle – ‘do we design everything ourselves or do we choose a PLC?’ – Van Heijst knows from experience. However, the definition of the problem is almost always the same: they want to develop an application as quickly and as well as possible and, if feasible, reuse existing code. “A PLC meets all these requirements. It is a standard product that is well supported by the supplier. If you find a bug in the firmware, the supplier will fix it for you. You can easily connect your actuators and sensors, because all kinds of standardized communication protocols are available. That’s why you can get your system up and running very quickly,” says Van Heijst. An additional advantage is that almost all PLC manufacturers follow the IEC 61131-3 standard. This means that the application software you build is hardware-independent and switching to a faster PLC – or even an industrial PC – is a breeze.

Decoupling

Whereas PLCs have traditionally been used in applications with limited functionality, the possibilities are now so extensive that modern programming techniques are needed to keep everything manageable. With version 3 of the IEC 61131 standard, developers can also opt for an object-oriented approach.


Van Heijst: ‘With the current power of PLCs, there are virtually no applications where they fail.’

“You will then have to deal with important software principles such as information hiding and encapsulation, i.e. the idea that you localize information instead of scattering it throughout the entire system,” says Onno van Roosmalen, independent software engineering consultant and lecturer at High Tech Institute. “In practice, you regularly see that a team gives a component extra functionality, causing a cascade of changes throughout the entire system. That makes it difficult to add anything. In many web applications you see that encapsulation is broken, but that follows from their nature: making information available. With machine control, that’s a different matter.”

“The idea of hiding information goes hand in hand with the way in which components address each other: the interface. This allows you to hide the internal shape of your objects and ensure that no implementation details leak out. You then make sure that the user can only do what is currently required, no more and no less,” explains Van Roosmalen. The idea behind this is that the evolution of components can be decoupled. If a new version of a component continues to do what it used to do via an interface, the software built on it does not need to be modified immediately. A development team that programs against the component can adopt the new version without fear of something breaking in the meantime. When encapsulation and interfaces are in order, a maintainable, scalable architecture that can grow with the application emerges almost automatically.

'You can always expand an interface later on, but downsizing is a lot more difficult.'

“The condition, however, is that teams take a defensive stance when designing their interface. You shouldn’t just offer everything that other teams ask for,” says Van Roosmalen. “The more you offer, the more unintended uses there are and the more likely it is that things will break in a new version. You can always expand an interface later, but downsizing is a lot more difficult.”
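The encapsulation and narrow-interface ideas Van Roosmalen describes can be sketched in ordinary code. This is a hypothetical Python analogue, not course material: IEC 61131-3 offers comparable interface and method constructs in Structured Text, and all names here (`Axis`, `SteppedAxis`) are invented for illustration:

```python
from typing import Protocol

class Axis(Protocol):
    """Narrow interface: clients see only what they currently need."""
    def move_to(self, position_mm: float) -> None: ...
    def position(self) -> float: ...

class SteppedAxis:
    """One implementation; its step-count bookkeeping stays hidden."""
    STEPS_PER_MM = 200

    def __init__(self) -> None:
        self._steps = 0  # encapsulated state, not part of the interface

    def move_to(self, position_mm: float) -> None:
        self._steps = int(position_mm * self.STEPS_PER_MM)

    def position(self) -> float:
        return self._steps / self.STEPS_PER_MM

def home_and_report(axis: Axis) -> float:
    # Client code is programmed against the interface only, so it
    # keeps working unchanged when a new axis implementation with
    # the same interface is substituted.
    axis.move_to(0.0)
    return axis.position()

print(home_and_report(SteppedAxis()))  # 0.0
```

Because `home_and_report` depends only on the two-method interface, a successor to `SteppedAxis` can change its internals freely; clients only break if the interface itself shrinks, which is exactly why Van Roosmalen advises keeping it defensively small.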

How does that work in practice? Van Heijst gives an example: “I recently visited a manufacturer of charging stations. They make a lot of variants and work with a wide range of plugs. The company wanted to develop an energy management system that could cope with all these differences. We solved that with PLCs and an object-oriented approach. Now the management system communicates with all charging station versions via defined, generic interfaces and the company can manage the development of all blocks separately. This reduces the risk of errors, makes re-use easier and gets an application up and running much faster.”

Building knowledge

“For PLC programmers of the old school, the object-oriented approach means a new way of thinking. Software engineers for whom OOP is a piece of cake find that PLCs work just that little bit differently from their familiar PC environment. PLC vendors offer courses, but these are about how to connect your I/O, for example. Real programming courses they are not,” says Van Heijst, “because you don’t learn how to build your own application.”


‘Object-oriented thinking for the PC is almost identical to object-oriented thinking for the PLC,’ says Van Roosmalen.

That is why Van Heijst and Van Roosmalen have set up the training “Object-oriented system control automation,” a five-day course in which PLC experts and OO specialists learn how to develop a PLC application via object-oriented programming. In this way, companies can build up knowledge in-house and don’t have to fall back on system integrators who prefer to take care of everything.

“Object-oriented programming is essentially different from procedural programming,” says Van Roosmalen. “But object-oriented thinking for the PC is almost identical to object-oriented thinking for the PLC. You can use the same software design. Well, with some minor adjustments because the programming concepts for a PLC are a bit more limited.”

“We explain the technique and tell you how to apply it,” adds Van Heijst. “A large part of the training is about how to arrive at a good software design. It still happens very often that a programmer just gets to work and delivers a working piece of software a few weeks or months later. Then, suddenly, there’s a change and they have to jump through all sorts of hoops to get it done. If you apply the object-oriented methodology properly, you’ll be free of that problem. Another advantage of OOP.”

This article is written by Alexander Pil, tech editor of High-Tech Systems.

The expat’s guide to working in Dutch high tech

From an endless loop of deliberations to receiving criticism that can sound downright rude, when you’re new to the Netherlands, the Dutch work culture can seem totally weird. To help facilitate integration, tech companies are sending their expat workers to the training “How to be successful in the Dutch high tech work culture”.

As an expat in the Netherlands, you’ve probably already learned that transitioning into the high tech work culture in the Netherlands is a difficult adjustment. If you’re new to the region, and you find yourself asking questions like: ‘We’re having another meeting?’ or thinking, ‘Why are they asking me, it’s not my job’ – welcome to the Dutch work culture, one place that’s sure to have you feeling like a square peg, trying to fit into a round hole.

Because this can be such a difficult transition to navigate, High Tech Institute, together with content partner Settels Savenije & Friedrich, is offering the training “How to be successful in the Dutch high tech work culture”. Here’s your beginner’s guide.

Flat-work society

One of the first things you’ll notice while working as an expat in the Netherlands is that the corporate power structure is typically as flat as a ‘pannenkoek’. The Netherlands isn’t at all into hierarchy. In business, status is nothing, and credibility counts for everything. In the training, participants learn ways to build their reputation by taking ownership and doing what’s necessary to get the job done. They also learn that one of the fastest ways to kill credibility is by being confined to the parameters of a job description.

'Don’t be limited to only what was asked of you.'

“If you think you can, then do. Even if it’s not exactly your job. Don’t be limited to only what was asked of you, and don’t be afraid to take a risk,” proposes course instructor, Claus Neeleman. “It’s also important to be honest and own up to mistakes rather than make excuses. It’s always better to be sorry than to do nothing at all.”

Building consensus

Another indicator of the lack of hierarchy in the Netherlands is the need for consensus. One well-known characteristic of the Dutch high tech workplace is the seemingly endless number of meetings and discussions. Do the words, ‘let’s meet again next week to discuss this further’, sound familiar? You’re thinking, how hard could it be to decide, right? For some, this is difficult to acclimate to. Yes, meetings take time and it might appear to be inefficient, but it’s all designed to build consensus – or what the Dutch refer to as, ‘polderen’. “Polderen is simply about everyone doing their part to come to a consensus and make decisions,” explains Neeleman.

This longstanding practice reflects a deeply ingrained cultural value. For centuries, the Netherlands has looked for innovative solutions to confront the threat of water. The lowland mentality is, ‘we’re all on this ship together, so it’s up to everyone to come up with the best possible answer’. Thus, it’s important to participate and give input. It doesn’t matter if you’re an expert, or if you have nothing crucial to add. Your responsibility is simply to be part of the discussion, which is another way to build your credibility. Some issues require out-of-the-box thinking from the non-experts. “Just ask the little boy who put his finger in the dike,” jokes Neeleman. “The idea doesn’t have to be perfect; it just has to work.”

One of the many practice rounds during the training.
Communication is key

Because of the sheer number of meetings and interaction in the Dutch work environment, good communication skills are a necessity – both verbal and nonverbal. As such, one of the central themes of the one-day training focuses on communication styles and active listening skills. From body language to facial expressions and tone of voice, participants learn not only how to express themselves more effectively, but they also gain experience in how to pick up on the social cues given by others.

During multiple practice rounds, students learn that good communication skills start with the ability to really hear what someone is saying. For this, trainees are taught a three-step process for active listening. First, listen intently. Second, summarize to show that you listened and understood. Finally, ask questions for clarification. While this is simple in theory, cross-cultural communication is not always clear nor easy to understand. For many participants, like Bahaa Ibrahiem – a setup tooling and visualization engineer with ASML – this section of the course was a real eye-opener. Ibrahiem: “After more than a year of living and working in the Netherlands, this training has really improved my cultural awareness and my communication with my Dutch colleagues.”

Kick the ball, not the person

Of course, even with active communication skills, when trying to bring together so many personalities and opinions, inherently there are going to be disagreements. This can sometimes result in the exchange of heated discussions or feedback that seems rather harsh. This sort of critical back and forth can be especially difficult for expats that are new to the workforce in the Netherlands. Culturally speaking, the Dutch don’t mince words and are well-known for their directness. All too often, this can leave a foreign colleague befuddled and entirely insecure with the critique.

Participants practice giving and receiving feedback.
Feedback process


'Good feedback is constructive, is to the point and is given simply with the intent to solve a problem.'

“The idea is that feedback shouldn’t be personal. It’s about kicking the ball, not the person,” elucidates Neeleman. “Good feedback is constructive, is to the point and is given simply with the intent to solve a problem.”

This article is written by Collin Arocho, tech editor of Bits&Chips.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with a 9.1 out of 10.

When your product and your company become more complex, a simple method to manage the process is essential.

Trainer at High Tech Institute: product configuration management course
A good configuration management process for creating high tech systems provides cost savings, strength and fully transparent development and production. Frank Ploegmakers, trainer at High Tech Institute, talks about obstacles and common mistakes in configuration management. ‘Those responsible for technology, development and operations are not always able to understand the essence of the configuration management complexity.’

Within a high tech organisation, hordes of engineers produce an enormous amount of technical data: partial designs of printed circuit boards, motors, sensors, mechanical and optical components, you name it. Electronics engineers, software designers, optical engineers, mechanical engineers: they all have their own computer tools. Even prototyping itself is shifting to the virtual environment. Remove the design tools from any high tech company and you may as well shut it down.

The discipline of Configuration Management has been developed to control the coherence of all this design information. It ensures that different disciplines can work together on a design, and that the process from design through to the delivered product is controlled.

It is hard to believe, but only a small proportion of high tech machine builders have specified and implemented a configuration management process and method within the appropriate ICT tools. ‘This doesn’t exist in many companies,’ says Frank Ploegmakers, trainer in product configuration management at High Tech Institute. ‘Configuration management tools are needed to exchange, verify, secure, store and structure all design knowledge. I think that only a small proportion of machine builders have documented their development and manufacturing processes, use them in the correct manner and, for example, understand what baselining is.’


‘Understanding complexity is a prerequisite for configuration management,’ says Frank Ploegmakers, lecturer in system configuration management.

Baselining

To explain what baselining is and to clarify related issues, let’s take a trip into an ideal world. In this world, ingenious mechanics, electronics engineers and software engineers deliver perfect partial designs in close consultation. They are – miraculously – all correct first time round. Everyone is happy: that works! We can produce! The person in charge gives the starting signal and the design department draws up a baseline. This defines the machine design in a precise manner: materials, composition, purchased parts, modules, coherence (think of geometry and quality specifications) and the associated software. Production can get started making the machine and the purchasing department can go ahead and order.

If only it were that simple, sighs every technician. In practice, there are many design layers. Improvement follows improvement. Before you know it, the mechanics department is on version 3, the electronics department on version 6 and the software team on version 4.11. That’s not a disaster either, since once a baseline has been drawn up, the machine is still defined in detail.

Observing hundreds of small and large improvements

In practice, matters are different. Improvements continue even after the baseline has been drawn up. A component sits in the baseline at version 5, but the manufacturer has found something that makes it better or cheaper, so production will have to use version 6. Even that isn’t a problem, but in practice many technicians and disciplines all work on their own partial design.

'Weak leadership often hinders full transparency in development and production.'

Then suddenly, hundreds of small and large improvements have been let loose upon a baseline. Who still maintains the overall overview? Who still knows the relationship between the product or machine at the customer and the baseline within their own organisation? Ploegmakers: ‘With today’s complex products and systems, you need something that lets you maintain an overview, so that it is clear what each person is doing at each precise moment, whilst all changes to baselines are completely transparent. Many machine builders have laid out their development and manufacturing processes properly; only a few actually use them for what they are intended.’
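What a configuration management system buys you here can be sketched in a few lines. This is a toy Python illustration of checking the current state of a machine against its baseline; all names and version numbers are invented, and real PDM systems do far more:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Baseline:
    """Immutable snapshot: component name mapped to its released version."""
    name: str
    components: dict

def deviations(baseline: Baseline, current: dict) -> dict:
    """Report every component whose current version differs from the
    baseline, so no change can slip through unnoticed."""
    return {
        part: (baseline.components.get(part), version)
        for part, version in current.items()
        if baseline.components.get(part) != version
    }

machine_b1 = Baseline("machine-rev-A", {"frame": 5, "drive": 3, "control_sw": "4.11"})

# Later, purchasing substitutes drive version 4 and software moves to 4.12.
in_production = {"frame": 5, "drive": 4, "control_sw": "4.12"}

print(deviations(machine_b1, in_production))
# {'drive': (3, 4), 'control_sw': ('4.11', '4.12')}
```

Because every deviation from the baseline is reported explicitly, no uncoordinated change silently reaches production – which is exactly the transparency Ploegmakers argues for.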

Software

By the way, in software development, configuration management is commonplace. At the end of a day of development, the engineers check in their software code and a build is run, creating the program with all recent additions. Ploegmakers believes that this working method should also be applied by other disciplines. ‘Strangely enough, companies do software configuration management, but they don’t apply it at the system level.’

According to Ploegmakers, this is because many companies do not (yet) realise that this is their major problem. ‘If I say to a software manager: “I am now removing your software configuration system,” he will panic completely, because then he will no longer be able to deliver his software. But in most product or machine building organisations, the employees at a higher level who have to watch over multidisciplinary system integration – with tens of thousands or even hundreds of thousands of components – lack exactly such a system. When I talk to software people about it, they say to me: “Frank, you’ve just touched a raw nerve.”’

Time stamps

A complicating factor is the time between drawing up the baseline and a working machine. With software, everyone sees the result the next morning, but with hardware it takes months – with a high chance that changes slip through that haven’t been coordinated with everyone.

You prevent that by using a configuration management system, says Ploegmakers. ‘With this system you create complete transparency. The power of baselining is that the entire company works with the baseline. Everyone can see the development and production situation at any given point in time.’

Ploegmakers compares it to a film. ‘You can rewind the entire history. You create time stamps. You simply see a historical development of your product with all the associated benefits. It can be useful to look back at baselines and it is also nice for the customer. You can recall the precise configuration if the client places an additional order.’


Trainer Frank Ploegmakers has seen more than a hundred companies ‘on the inside.’

Background and practical experience

Via LTS, MTS and HTS mechanical engineering, Frank ended up reading Engineering and Construction Informatics at Eindhoven University of Technology (nowadays the Technology Management faculty). Half of that was hard technology and the other half economics, business administration, marketing, philosophy and social psychology. He came into contact with virtual reality and witnessed the first wave of automation and its excesses: major IT projects that spiraled out of control. In this way he became interested in how you ensure that information technology actually delivers something to a company.

By organising a study trip to China, Frank got his first job. He started at WAIDE Consultants in the mid-nineties. This company advised Dutch companies on Joint Ventures to gain access to the Chinese market. Great projects and a great experience, but it was not technical enough for Frank and after a year and a half he started working at Philips Display Components.

For five years, he and his design support department focused on the further optimisation of picture tube design processes and tools. In addition, the field of product data management (PDM) rose strongly in the late nineties. ‘This involved recording and jointly using worldwide information about the display tubes, the production process and production machines. This had to be properly supported by PDM automation.’

Ploegmakers used much of what he learned at Philips Components at Assembleon, manufacturer of pick and place machines. There, his field of work expanded to the entire creation process: from product creation to logistics, production, delivery and service.

'We built everything from scratch.'

After his Philips days, Ploegmakers worked for four years at engineering firm Irmato Group as director of sales and operations. Together with his team, he helped the company grow from 20 to 135 employees. He learnt a lot on the job. ‘We built everything from scratch.’ In 2008, after four years at Irmato, Ploegmakers started working at various companies as an interim manager and project manager. He has now seen more than a hundred companies on the inside.

Insight and overview

Configuration management is not a problem for IT, the reliability department or the R&D department, emphasises Ploegmakers. ‘This goes beyond all departments, from the CTO to the factory floor.’ He believes the real problem often lies with the leadership. ‘Those responsible for technology, development and operations are not always able to understand the essence of the configuration management complexity. Organisations can deliver beautiful configurations of products and machines to customers, but the internal control of these configurations often leaves something to be desired. Business leaders often fail to see that this leads to enormous inefficiencies and ineffectiveness.’

Managing and automating business processes starts with insight into one’s own company and a good overview of the complexity. ‘It starts with a good company model. Many managers are unable to set that up with all their teams, but it is necessary if you want to create complex products or machines with a large organisation. If your product and your company become more complex, a simple method to manage the configuration process is essential.’ Once that process and the associated working methods are known, introducing the required information technology is easy. ‘Then it can be configured in PDM and ERP systems in no time at all.’

Doesn’t Ploegmakers paint a somewhat too rosy picture with this last statement? ‘No,’ he affirms. ‘The difficult thing is to first understand the complexity. That is an absolute precondition for doing configuration management. The implementation of the underlying details is then simple. The old adage “organise first, automate second” still applies.’

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with a 7.3 out of 10.

Lower bar by raising the bar: high vacuum

specialist in vacuum at High Tech Institute
Vacuums seem simple: you pump out the air until you reach the desired low pressure. However, for a high vacuum, simply pumping out the air is not enough. To achieve this, you must take extreme measures. For many engineers though, this topic doesn’t always come naturally. High Tech Institute teaches them the tricks of the trade.

More and more processes in the high-tech industry require a highly controlled environment. Consider the electron microscopes from Thermo Fisher or the EUV systems from ASML: if air enters these systems, electron beams are scattered and the EUV light is absorbed. A high vacuum is therefore an absolute necessity. Contamination is also a product killer in the production of displays. Any moisture in the air would be disastrous for OLED materials, and the display would be a total loss.

The bar is getting higher and higher. “As long as I can remember, the pressure in electron microscopes should not exceed 10^-10 mbar,” says Mark Meuwese, vacuum specialist at Settels Savenije Van Amelsvoort. “But the requirements are also becoming stricter in other applications. For example, soft x-ray systems used to be able to deal with 10^-3 mbar. Nowadays, 10^-7 is the new standard. With increasing accuracies come more sensitive sensors that are more susceptible to pollution or disturbance by the atmosphere present.”

Mark Meuwese is involved in the 4-day training ‘Basics and design principles for ultra-clean vacuum’.

“The fuller you build your vacuum system, the greater the chance of contamination,” says Mark Meuwese of Settels Savenije Van Amelsvoort. Up to 10^-8 mbar, it’s all relatively simple, says Meuwese. “Of course, you still have to work hard, but if you want to go even further, the challenges increase exponentially, and the system will be many times more expensive. A water molecule is a dipole and therefore sticks to surfaces. You can pump it out more easily if you put enough energy into it. The easiest method for this is to heat the vacuum chamber. But by creating a temperature distribution, you introduce the risk that the evaporated elements will settle on cold surfaces, in the worst case on the sensor, the samples or the product. Moreover, many sensor systems cannot withstand high temperatures. 10^-8 mbar is the limit at which everything goes well.”

Meuwese does not expect the bulk of applications to require lower pressures in the foreseeable future, although the requirements can get stricter for specialized research work. “The limit is at 10^-12 to 10^-13 mbar, I estimate. And for that, you can hardly build a machine. Everything you introduce into the vacuum chamber is too much. The vessel and the pressure sensor are already too polluting, and even the most advanced pump leaks too much back into the system.”

Fingerprint

At its core, vacuum technology is simple: you connect a pump to a vessel and keep pumping air out until the pressure reaches the desired level. In practice, such a bare system is of little use. After all, you want to carry out processes in that vacuum, so everything has to go into the vessel. In fact, you often want the space you are working in to be full of mechanics, sensors and other components. How can you fill a vacuum chamber and still achieve a good vacuum level? This is one of the things you learn in an intensive training like “Basics & design principles for ultra-clean vacuum” at High Tech Institute.

“The more components you put in, the greater the chance of contamination,” says Meuwese, one of the teachers during the training. “The surface alone causes contamination through outgassing, and everything you place in the vessel means more surface, and therefore more outgassing. You have to pay attention to that.”

'A fingerprint lasts for weeks.'

How can you take a vacuum environment into account in your design? “There are a number of do’s and don’ts that we cover during the training. To begin with, there is, of course, a list of materials that are suitable for vacuum. Stainless steel is really good and you can also use aluminum without any problems. Brass, however, is not suitable because it contains zinc, which evaporates at 300 degrees at 10^-3 mbar. Many companies have a list of materials and coatings their engineers are allowed to use.”

Rust is also out of the question because it is porous and contains water that gasses out, so thorough cleaning is a way of life. “A simple fingerprint can make you suffer for weeks. There are a surprisingly large number of molecules in a fingerprint, so it takes a long time before everything is gone. And there’s no guarantee you’ll be able to pump it out at all,” says Meuwese. Proper cleaning is a profession in its own right and is discussed extensively during the training. Since grease is a no-no, conventional ball bearings are a no-go. Designers have to rely heavily on elastic elements such as leaf springs and cross-spring hinges. “Or on ball bearings with ceramic balls, or fully ceramic bearings, since they do not need any lubricant.”

Little legs

Designers must also pay close attention to the shape and construction of the components. “For example, they should avoid sharp edges. If you polish a part with a cotton swab or a cloth, remnants will get caught on them,” Meuwese explains. “A bolt in a blind hole traps a volume of air. If you pump down the vessel, it will leak out. Remember that the gas law states that pV/T is constant. If you want to reach 10^-7 mbar, that small volume becomes ten orders of magnitude larger. Cavities are also annoying because water remains in them after rinsing, so blind holes are to be avoided. And if you drill a hole to let the water out, it should not be too small; due to capillary action, the water will otherwise remain in the hole.”
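Meuwese’s gas-law point can be made concrete with a small back-of-the-envelope calculation (our own illustration; the numbers and the function name are hypothetical, not from the training material):

```cpp
#include <cmath>

// At constant temperature the gas law reduces to Boyle's law: p1*V1 == p2*V2.
// A bolt in a blind hole traps a small pocket of air at ambient pressure;
// when the vessel is pumped down, that pocket corresponds to an enormous
// gas load that must slowly leak out past the thread.
double equivalent_volume_mm3(double trapped_mm3, double p_ambient_mbar,
                             double p_chamber_mbar) {
    return trapped_mm3 * p_ambient_mbar / p_chamber_mbar;
}
```

For example, 1 mm^3 of air trapped at roughly 1000 mbar corresponds to about 10^10 mm^3 of gas at 10^-7 mbar: the ten orders of magnitude Meuwese mentions.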

'Fat is a no-no in a vacuum, so moving is done with elastic elements.'

If you use electrical discharge machining to create a part, there must not be any right angles in the pattern. “That is a different way of thinking. It is not about the most efficient design, but about preventing edges and corners. You have to curve everything and that is always a challenge. With some common sense and experience, you will eventually work it out.”

Even connecting two components in a vacuum is not straightforward. The surfaces are never flat enough to make them fit perfectly; a gap always remains – no matter how small – where air or contaminants get trapped. For the vacuum pump, it is more convenient to separate the two parts with small legs. Half a millimeter will often suffice.

Grease is a no-no in a vacuum, so moving is done with elastic elements.

Cheating

In the past, High Tech Institute’s training was mainly about vacuum technology. In recent years, more attention has been paid to ultraclean. “Vacuum is easier to understand; you pump until you reach the desired pressure,” says Meuwese. “For ultraclean, that is just the first step. Afterwards, you fill the vessel again with a ‘clean’ gas, which, for example, no longer contains any water. But how can you backfill without polluting the vessel again? Nowadays, we also deal with that challenge during the course.”

'A vacuum is more thermally challenging than ultraclean.'

For a designer, there is little distinction between vacuum and ultraclean. The biggest difference is in the thermal properties. In a vacuum, heat transfer is very poor because there is no conductive medium: no convection and no conduction, only radiation, and for that you need a large temperature difference. “In vacuum, therefore, everything becomes hot by definition,” says Meuwese. “Cooling can be done through closed channels with water, along and through the components. Or by making a thermal connection to a cold part of the system. There are also complex alternatives, such as a helium backfill solution, where you apply a local low pressure with molecules that can transfer heat. Actually, that is cheating,” Meuwese says with a smile.


“A vacuum is more thermally challenging than ultraclean”, says Mark Meuwese.

Sense

The growing importance of vacuum technology and ultraclean means that more and more engineers must be aware of the subject. Meuwese observes that although the level across the board is rising, there is still much to be gained. “Most people who come from college or university have a sense of technology. They sense that a thick I-profile beam can take more weight than a thin one. They have much less of a natural sense for vacuum. If I tell someone that I can evaporate 10^15 molecules within a certain time and there are 10^18, I am a factor of a thousand off, but they don’t know what that means. A vacuum is more abstract than mechanics. Mbar liters per second: it does not ring a bell for many engineers.”

Schools nowadays are paying more attention to the subject. Certainly in the Eindhoven region, more and more students master the basic knowledge. “Coincidentally, I now have a student from Enschede, where the subject is less widely represented. It gets more attention at the University of Twente, but much less in higher professional education. The knowledge is concentrated in the Eindhoven region, but something like vapor deposition is used all over the world and you need vacuum knowledge for that.”

This article is written by Alexander Pil, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with an 8.3 out of 10.

Value Engineering is so much more than just saving a few euros – says a lead system architect

trainer High Tech Institute
After years of practical experience at Philips Healthcare, Goof Pruijsen now offers advice on value engineering and cost management. He provides training on these subjects for High Tech Institute.

‘I really enjoy it,’ says Goof Pruijsen about seeing people from different technical development disciplines reap the benefits of his views and knowledge. ‘It gives me a wonderful sense of appreciation.’ He himself is immensely curious. It fascinates him to understand in detail what it is that people are considering buying, how and why a product works technically and how you can improve it in order to improve a business.

Recently he received a big compliment from a lead system architect from ASML who attended a Pruijsen workshop together with his team. ‘I thought we were going to save a few euros, but I learned that value engineering was much more,’ says this system architect. ‘We dealt with some fantastic topics and posed questions about decisions that we had taken at a high level in system architecture. The insights we were left with didn’t only have an impact on costs, but also on the reduction of complexity, risk, time to market and the hours that we spent on engineering.’


Goof Pruijsen: ‘It is precisely the solution-driven approach, used by many teams, which makes them blind to alternatives.’

Value engineering therefore suits Pruijsen perfectly, although the definition is a bit dry: it’s about adjusting design and manufacturing based on a thorough analysis. Done well, it often leads to cost reductions. That’s why developers often have a negative association with value engineering: the ‘squeezing’ of a design to save costs.

However, High Tech Institute’s trainer Goof Pruijsen identifies a much more important value: value engineering builds bridges between marketing, development, manufacturing, purchasing and the suppliers. It is precisely this interplay between different disciplines that ensures you can achieve large gains with this approach.

Cost reduction often focuses on the component list of the current solution. This is what Pruijsen calls a beginner’s mistake. ‘You can see that newbies in the profession carry out a so-called Pareto analysis, in which they map out the 20 percent of the components that are responsible for 80 percent of the costs. They then shave something off the most expensive items. It’s not called the cheese slicer method for nothing.’
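The Pareto analysis Pruijsen describes is straightforward to sketch in code (a minimal illustration of our own; the part costs and the function name are made up):

```cpp
#include <algorithm>
#include <functional>
#include <numeric>
#include <vector>

// Classic Pareto cut: how many of the most expensive parts are needed to
// cover `fraction` (e.g. 0.8) of the total cost of the bill of materials?
int pareto_count(std::vector<double> costs, double fraction) {
    std::sort(costs.begin(), costs.end(), std::greater<>());
    const double target =
        fraction * std::accumulate(costs.begin(), costs.end(), 0.0);
    double running = 0.0;
    for (std::size_t i = 0; i < costs.size(); ++i) {
        running += costs[i];
        if (running >= target) return static_cast<int>(i) + 1;
    }
    return static_cast<int>(costs.size());
}
```

With hypothetical part costs {500, 200, 100, 50, 50, 40, 30, 20, 5, 5}, three of the ten parts already cover 80 percent of the total cost — exactly the skew the cheese-slicer approach then (often unwisely) targets.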

This approach is often not very effective, says Pruijsen. ‘When this happens, others have often intervened before. Then there is not much more to be gained, and chances are that new interventions will affect the quality. If that is at the expense of your image, you are even worse off.’

Value engineering therefore starts, according to Pruijsen, with value for the customer. ‘What does the customer want to achieve? Which functions are needed for this? What is the value of that function and what are the costs?’ An example that he often mentions is as follows: it is not about the drill, not about the hole, but about hanging the painting in order to decorate your house. Going back to the ultimate goal makes room for creativity and new solutions and concepts.

Tolerance is the cost driver

Thinking in functions is less well established than most developers think. Pruijsen sees that the solution focus with which many teams work makes them blind to alternatives. ‘They don’t think out-of-the-box.’ It helps – and that requires practice – to analyse an existing solution and to gradually abstract it from there until the functions are perfectly clear, without describing the solution. ‘Then you can map out the costs functionally and together investigate why these functions are expensive. That is a good start for optimising current and future product generations. I call that cost driver analysis. If you do this well, everyone starts to understand the problem much better and you are already halfway to the solution,’ says Pruijsen.

Tolerance or accuracy is a typical example of a cost driver. Narrow tolerances result in more processing time or steps. An average power supply is usually not that expensive, but if the voltage ripple is very small, then the price rises.

'Developers are usually unaware of the consequences of their risk-avoiding copying behaviour.'

You need to take a close look at those tolerances, according to Pruijsen. ‘Are they really needed everywhere, or only locally? Why is this tolerance specified this way? Often, nobody seems to have considered this. Tolerances may have been copied from the previous drawing; designers pay no attention to them, but they do appear on the invoice. Developers are usually unaware of the consequences of their risk-avoiding copying behaviour. If it turns out that a tolerance requirement is not so strict, manufacturing suddenly becomes much easier, faster and cheaper. Problems with manufacturability and production yield are often resolved spontaneously.’

Large projects, multiple teams, balanced design

In large projects with multiple sub-teams, every team optimises its own area as much as possible – if only out of ambition. Pruijsen: ‘If the teams don’t understand how the job is distributed across the modules, the chance of imbalance in design and specification is high. You don’t put a Formula 1 engine on the chassis of a 2CV. The performance of the components must be in balance with each other. The task of the system architect is to maintain that balance and prevent over-engineering.’

Pruijsen provides a practical case from his time at Philips Healthcare. X-rays have been used for many years in medical diagnostics and materials research. To generate these x-rays, you shoot electrons accelerated by a high voltage onto heavy metal. At one point, the marketing department asked for a new high-voltage generator: one with more power, better stability and higher reliability. And preferably also cheaper.

'Every step in the labour process also includes an error risk; and you can add to that an additional risk of quality problems and production loss.'

‘A project like this often starts for purely performance- and technology-driven purposes,’ says Pruijsen from experience. ‘In this case, however, we decided to formally start with a value engineering workshop in order to improve the profit margin on the product as well as the technical direction. The old generator was analysed with respect to costs and functions. It turned out that a relatively large amount of money was spent on many smaller parts (the so-called long tail of the Pareto). You cannot quickly put your finger on one expensive part; the syndrome is one of many components. A many-parts syndrome typically manifests itself in high design costs, high handling costs and high assembly costs for all parts involved. Every step in the labour process also carries an error risk, and on top of that comes an additional risk of quality problems and production loss. The direction for improvement is therefore usually reducing the number of parts through integration, so-called DFMA (Design for Manufacturing & Assembly).’

Another cost driver was locked in by the concept. To safely insulate the high voltage in the old concept, the generator was completely submerged in an oil tank. That later turned out to be too big, too heavy and unnecessarily expensive.

Pruijsen: ‘We brainstormed about each function and built a consistent and optimal scenario. For the high-voltage generation, we could build on new technology that makes it possible to transform at higher frequencies. That way, we could greatly reduce the volume.’

Observing how the generator was used brought the biggest breakthrough. The old generator was developed by maximizing all individual performance requirements, without looking at whether these were useful combinations or not. However, doctors use either a single high-power shot or several images per second at very low power (and some combinations in between). ‘When the engineers saw this, they were indignant. Nobody had ever told them that! The result was a large reduction in required power and a high-voltage tank that was ultimately only a tenth of the original volume.’

Cooling was still necessary, but instead of using large fans, Pruijsen and his team placed the largest heat source at the bottom of the cabinet. ‘This created a convection current. We used the heat source to improve the cooling.’ This is an example of ‘reversed thinking’.

‘The end result was a smaller and quieter generator, 35 percent cheaper. Moreover, fewer components were needed and we achieved a better reliability. And there was another optimisation, the total space required for the system could be reduced by one cabinet.’

Could it have been even better? Yes, of course, says Pruijsen. ‘We were unable to break through one specification point during this process. The generator was specified at 100 kW. It was said that this had to be so according to medical regulations. It took me months to find the source of this misconception. It turned out to be a medical guideline that advises the use of a generator of at least 80 kW in order to be able to make a good diagnosis with greater certainty. So it was a piece of advice, not a regulation!’

This ‘advice’ dated back to 1991. In the intervening twenty years, image processing techniques have progressed so fast that a better result can be obtained with much less power. Eventually, Pruijsen found a product manager who admitted that it was not a legal directive but a so-called tender spec. ‘Because manufacturers have been telling their customers for years that only 100 kW gives sufficient quality, it has become an accepted customer belief.’

‘If the tolerance requirements prove too high but can be relaxed, manufacturing can suddenly become much easier, faster and cheaper,’ says Goof Pruijsen. ‘Problems with manufacturability and production yield are then often resolved spontaneously.’

Managing modular architecture

Pruijsen gives another example. A large module in a production machine was designed as a number of small modules, so that a sub-module could be replaced quickly in case of failure. The assumption was that this was cheaper and required less service stock. ‘However, the increase in the number of critical interfaces with high tolerance requirements doubled the cost price, and the complexity increased so much that the expected reliability was dramatically lower,’ says Pruijsen. ‘Add to this the additional development costs and production tests. A one-piece design turned out to be the better solution, with the components most at risk of failure placed in an easily accessible location. The lesson: modularity is not about cutting a module into submodules, but about placing your module boundaries and interfaces correctly – in this case, with a view to providing the best and most cost-efficient service. You have to keep thinking about the consequences and the balance.’

In his value engineering training course, Pruijsen makes it clear how the set-up of a value engineering study works in practice. First, he concentrates on analysis tools and then on creative techniques for improved scenarios. In addition, attention is paid to involving suppliers in this approach.

There is a lot of attention paid to practical training. One third of the training course consists of practical exercises. For example, there is a ‘Lego-car exercise’ in which course participants learn how to tackle cost reduction and value increase. In addition, they also carry out benefit analyses (case: on the basis of which criteria do customers decide to buy a car?), process flow analysis (case: optimisation of a canteen) and function analysis (the core of functional thinking). Many techniques are clarified on the basis of examples.

Pruijsen also asks course participants to prepare a short presentation of up to ten minutes in advance about their business and product. He may choose one to jointly analyse ‘on the spot.’


Goof’s tips for value engineering

Last but not least, here are some tips from Goof Pruijsen in relation to value engineering:

1. Analyze before considering solutions

2. Go back to basic comprehension: what does it do?

3. What makes it expensive and why?

4. Make an inventory of the assumptions and try to destroy them

5. Be creative; don’t limit yourself to thinking of traditional solutions (risk avoiding), but look for the boundaries

6. Bring the solutions together in a total overview and build scenarios

7. Don’t play down the risks, but also don’t use them as an excuse for not doing things either. Make them explicit and find mitigations for them

8. Keep an eye on the business side of things. Everyone likes to be creative, but money also needs to be earned. Which scenario best satisfies the financial and organisational preconditions?

9. Go for it!

This article is written by René Raaijmakers, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with an 8.3 out of 10.

Multicore programming skills do not come from Dijkstra

Multicore programming in C++ trainer Klaas van Gend
In practice, writing parallel software is still a difficult task. You keep coming up against unforeseen issues if you don’t understand each and every level of the problem, says Klaas van Gend.

In 2019, multicore software should be easier to write than ever. Modern programming languages such as Scala and Rust are maturing, programming frameworks are getting easier to use, and C# and good old C++ are embracing parallelism as part of their standard libraries.

However, in practice, it’s still a messy process. The whole thing turns out to be difficult to synchronize, and once the software works, it often runs only a little faster on a multicore processor, or not at all. To make matters worse, it tends to exhibit all kinds of elusive errors.

Parallel programming is just a very tough subject, where you run into all sorts of subtle, unexpected effects if you don’t understand what’s happening at all levels, says Klaas van Gend, software architect at Sioux. ‘I’ve heard people talk about sharing nodes on a supercomputer using virtual machines. But they ruin each other’s processor cache; they just get in each other’s way.’

'At university it was all about Dijkstra, which means mutexes, locks and condition variables. But the moment you take a lock, you only ensure that the code is executed on one core whilst the others temporarily do nothing. So, you really only learn how not to program for multicore.'

According to Van Gend, the problem is that many developers never received a proper grounding during their computer science education. ‘At university it was all about Dijkstra, which means mutexes, locks and condition variables. But the moment you take a lock, you only ensure that the code is executed on one core whilst the others temporarily do nothing. So, you really only learn how not to program for multicore,’ he says.

That is why Van Gend has taken the multicore training given by his old employer Vector Fabrics out of mothballs. Until a few years ago, Vector Fabrics focused on tooling to provide insight into the perils of parallel software. Together with CTO Jos van Eijndhoven and other employees, Van Gend provided training courses on the subject. The company went bankrupt in 2016, but Van Gend realised at his current employer that the problem is still relevant. After having given the course there once again, he now also offers it to third parties under High Tech Institute’s flag.


Klaas van Gend is the lecturer of the 3-day training ‘Multicore programming in C++‘.

A problem at each and every level

One of the important things when writing parallel software is finding out how to make it work at multiple levels, explains Van Gend. He always makes this point with a simple example: Conway’s Game of Life, the cellular automaton in which cells in a grid become black or white with each new round, depending on the status of their immediate neighbours. ‘At the bottom level of your program, you have to check what your neighbouring cells are. You can do that with two for-loops. And then you have a loop for a complete row, and above that one for the complete set of rows.’

‘Most programmers will begin to parallelize at those bottom loops. That is very natural, because that is a piece of code that you can still understand, that still fits in your mind. But it makes much more sense to begin at a higher level and take that outer loop. Then you divide the field into multiple blocks of rows and your workload per core is much larger.’
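Van Gend’s advice — parallelize the outer loop over rows rather than the inner neighbour-counting loops — can be sketched as follows (our own minimal illustration, not course material; the row-major grid layout and the names are assumptions):

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

// One Game of Life generation for rows [rowBegin, rowEnd) of a W x H grid
// stored row-major in `cur`, with wrap-around edges; results go into `next`.
void step(const std::vector<int>& cur, std::vector<int>& next,
          int W, int H, int rowBegin, int rowEnd) {
    for (int y = rowBegin; y < rowEnd; ++y)
        for (int x = 0; x < W; ++x) {
            int n = 0;  // count live neighbours with the two inner loops
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = (x + dx + W) % W, ny = (y + dy + H) % H;
                    n += cur[ny * W + nx];
                }
            int alive = cur[y * W + x];
            next[y * W + x] = (n == 3 || (alive && n == 2)) ? 1 : 0;
        }
}

// Parallelize the *outer* loop: each thread gets a contiguous block of rows,
// so the work per core is large and writes go to disjoint parts of `next`.
void parallel_step(const std::vector<int>& cur, std::vector<int>& next,
                   int W, int H, int nThreads) {
    std::vector<std::thread> pool;
    int rowsPer = (H + nThreads - 1) / nThreads;
    for (int t = 0; t < nThreads; ++t) {
        int b = t * rowsPer, e = std::min(H, b + rowsPer);
        if (b < e)
            pool.emplace_back(step, std::cref(cur), std::ref(next),
                              W, H, b, e);
    }
    for (auto& th : pool) th.join();
}
```

The threads only read the shared `cur` grid and write to disjoint row blocks of `next`, so no locking is needed at all.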

If you look at matters that way, it soon becomes clear that there are many things to watch out for. There are also programs where the load is variable. ‘For example, we have an exercise to calculate the first hundred prime numbers. There is already more than a factor of one hundred between prime number ten and prime number ninety-nine. Then you have to handle load balancing.’

There are also differences in what you can parallelize: the data or the task. ‘Data parallelism is generally suitable for very specific applications, but otherwise you soon arrive at some kind of decomposition of your task. This can be done with an actor model or with a Kahn process network, but data parallelism can again be part of it. In practice, you will see that you always end up with mixed forms.’

It is no longer just about algorithms; the underlying hardware plays a key role. For example, if the programmer doesn’t take the caching mechanisms of the processor into account, the problem of false sharing may arise. ‘I have seen huge applications brought to their knees,’ says Van Gend. ‘Suppose you have two threads that are both collecting metrics. If you divide those messily, counters from different threads can end up in the same cache line. The two processors then need to work simultaneously with the same cache line, and the coherency mechanism constantly drags the line back and forth. That lowers performance greatly.’ For that reason, Van Gend is also skeptical about the use of high-level languages in multicore designs; they tend to abstract away the details of the memory layout. ‘With a language like C++, it is still very clear that you are working on basic primitives and you can see that clearly. But high-level languages often skim over the details of the data types, which means that the system can never really run smoothly.’
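The counter scenario Van Gend sketches can be illustrated in C++ (our own example; the 64-byte cache-line size is a common but platform-dependent assumption, and the struct names are made up):

```cpp
#include <atomic>

// Two per-thread counters packed together typically share one cache line:
// every increment by one thread invalidates the line in the other core's
// cache, even though the threads never touch the same variable.
struct PackedCounters {
    std::atomic<long> thread_a{0};
    std::atomic<long> thread_b{0};  // lands right next to thread_a
};

// Forcing each counter onto its own (assumed 64-byte) cache line removes
// the ping-pong: each core keeps its own line in exclusive state.
struct PaddedCounters {
    alignas(64) std::atomic<long> thread_a{0};
    alignas(64) std::atomic<long> thread_b{0};
};
```

C++17 also defines `std::hardware_destructive_interference_size` as a portable value for this padding, although compiler support for it arrived late.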

'If you only partially understand the model, then you will run into problems. It works well for certain specific situations, but it can’t be used everywhere.'

In any case, Van Gend thinks that new languages are no miracle cure for the multicore problem. As a rule, they assume a specific approach that doesn’t necessarily fit well with the application. ‘Languages such as Scala or Rust rely heavily on the actor model to make threading easier. If you only partially understand the model, then you will run into problems. It works well for certain specific situations, but it can’t be used everywhere.’

The wrong assumption

The modern versions of C++ also offer additions to enable parallel programming. ‘Atomics are now fully supported, for example. With these, you can often exchange data without blocking anything. We are also working on a library in which the locking is no longer visible to users at all. If it is necessary, it happens without the user seeing it, and with the shortest possible scope, so the lock is released as soon as possible,’ says Van Gend. Here too, it is important to understand what you are doing. Van Gend, for example, is a lot less enthusiastic about the execution policies added to the standard library in C++17. These allow a series of basic algorithms such as find, count, sort and transform to run in parallel by simply adding an extra parameter to the function call. ‘But that only works for some academic examples; in practice, it will not work,’ Van Gend says. ‘These APIs are based on a wrong basic assumption. And in the C# API they have made the same mistake again.’
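The lock-free data exchange Van Gend mentions can be illustrated with `std::atomic` (a minimal sketch of our own, not code from the course):

```cpp
#include <atomic>
#include <thread>

// A shared counter incremented by several threads without any mutex:
// no core is ever parked waiting for a lock, yet no update is lost.
std::atomic<int> hits{0};

void count_hits(int n) {
    for (int i = 0; i < n; ++i)
        hits.fetch_add(1, std::memory_order_relaxed);
}

void run_two_threads() {
    std::thread t1(count_hits, 100000);
    std::thread t2(count_hits, 100000);
    t1.join();
    t2.join();
}
```

With a plain `int` instead of `std::atomic<int>`, the two threads would race on the increment and silently lose updates.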

The problem is that with this approach, you can only parallelize separate steps. ‘It stimulates parallelizing each operation individually. For each operation, you re-partition your dataset, do something, then make it whole again and go on to the next operation. It is always parallel, sequential, parallel, sequential, and so on. That is conceptually very clear, but every time you have to wait until all the threads are done before you can continue. It is a complete waste of time. With a library such as OpenMP, on the other hand, the entire set of operations is simply distributed over the threads, so you don’t have to wait unnecessarily.’

'The funny thing is that Microsoft also played a large part in the Par Lab at the University of Berkeley. This has resulted in a fairly large collection of design patterns for parallel programming, which I deal with extensively in the training course.'

The gcc compiler doesn’t provide any support for these parallel functions. Visual Studio does, because the additions originally came from Microsoft. ‘The funny thing is that Microsoft also played a large part in the Par Lab at UC Berkeley. That has resulted in a fairly large collection of design patterns for parallel programming, which I cover extensively in the training course. Microsoft has shown that they understand exactly how to do it properly.’

This article was written by Pieter Edelman, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’, they responded with an 8.6 out of 10.

Accurate machines can’t exist without good thermal management

Thermal design and thermal management trainer Theo Ruijl
‘In most companies, thermal design and thermal management are still in their infancy,’ says Theo Ruijl, CTO of MI-Partners and ‘Thermal effects in mechatronic systems’ trainer. Ruijl sees this as a huge deficiency. ‘You can’t build a precise machine if you neglect the thermal aspects.’

The largest errors in a machine are caused by vibrations and fluctuations in temperature. If you don’t have both under control, you can say goodbye to an accurate system. Unfortunately, not all designers are aware of this. With a leaf spring, you can support a system in a statically determinate manner, but many engineers are unaware that such a leaf spring is also a great thermal insulator. ‘Many developers lack knowledge about thermal effects in mechatronic systems,’ says Theo Ruijl, CTO of MI-Partners and trainer of the ‘Thermal effects in mechatronic systems’ (TEMS) course.

In Dutch and Belgian high tech, there’s a lot of knowledge about dynamics, about good design, about damping. After all, generations of mechanical engineers have grown up with the construction principles of great teachers like Rien Koster and Wim van der Hoek and his ‘Des Duivels Prentenboek’ (The Devil’s Picture Book). But in most companies, thermal management is still not well covered.

‘Any engineer seeking to achieve a high level of accuracy will sooner or later be confronted with thermal effects,’ says Theo Ruijl. Ruijl has been working on thermal effects in mechatronic systems for two decades. ‘Temperature variations, drift, dissipation in an actuator, energy absorption of electromagnetic waves in a lens or mirror: all of these things have an impact on the performance of a system. Of course, you can ignore them, and, for a while, things might work well. But if a competitor, who has good knowledge of thermal aspects, suddenly appears, he will overtake and leave you far behind.’


‘The technical universities produce excellent graduates and post graduates in dynamics and control technology, but they do not train students in the thermal effects in mechatronic systems,’ says Theo Ruijl, thermal effects trainer.

In the high tech industry, developers are struggling with thermal distortions and inaccuracies. ‘At ASML, these challenges are currently greater than the dynamic ones,’ says Ruijl. ‘An enormous amount of light is being pumped into these machines. It’s inevitable that the wafer heats up and deforms as a result. If that happens nicely and evenly, you can still simulate and predict it. Unfortunately, all kinds of non-linear effects occur. Then modelling and compensation become very difficult.’

Thermo Fisher also highlights the subject. Ruijl: ‘Many users of electron microscopes are in the life sciences. They research biological processes that they literally freeze in order to study them properly. That means dissolving them in water and cooling the water down to the freezing point. The ice must be amorphous, not crystalline, because otherwise you can’t see anything under the microscope. You will only get that kind of structure if you cool the sample at lightning speed, at 100,000 to one million Kelvin per second. Then the frozen sample needs to be observed under the microscope. The preparation and positioning pose a huge thermal challenge. How do you keep the sample at the right temperature within high vacuum? And what effect does that have on the sensitive optical and mechatronic systems around it?’

The big loss

The fact that many companies still lack in-depth thermal knowledge is largely due to a gap in education. ‘The technical universities produce excellent graduates and postgraduates in dynamics and control technology, but they do not teach the thermal effects in mechatronic systems,’ says Ruijl firmly. He himself studied under TUE professor Piet Schellekens. ‘Since Piet Schellekens retired fifteen years ago, thermal design and metrology have been neglected. Nobody has taken these issues seriously, not even in Delft or Twente. That’s a big loss. There are so many fundamental challenges in this domain. It would really require a dedicated full-time professor.’

With the arrival of Hans Vermeulen a couple of years ago, there has been a part-time professor at the Eindhoven University of Technology who has put the subject on the agenda. For his Mechatronic Systems Design group, however, advanced thermal control is one of many topics. A large part of the permanent staff of Schellekens has since left. ‘In Germany the subject is more on the map,’ says Ruijl. ‘There is a large market for machine tools in which thermal effects play a major role. German machine tool builders and knowledge institutions understand each other well on this point. They run various research projects at the Fraunhofer institutes. TEMS research programs are also running in Switzerland and Spain.’

Recycling

Despite the gap in university education, there are quite a few thermal specialists in the industry. They are all self-made people who learned the trade in practice. For Ruijl, that process started at Philips almost twenty years ago. ‘For a long time, we have known exactly how to model dynamics and control technology and how to integrate them into machines. In a typical design process, different specialists sit at the table so that you can develop a machine with input from all disciplines. In the old days at Philips, it sometimes happened that at the end of such a process, someone with a complicated finite element calculation found out that, thermally, it didn’t work. That’s why we started to develop a competence in this field, focused on mechatronic systems.’


To calculate thermal effects, engineers reuse mathematical techniques from dynamics. This resulted in the concept of thermal mode shapes.

Right from the outset, the specialists discovered that the techniques they had already applied in dynamics can also be used in the thermal domain. ‘In dynamics and control engineering, we use state-space models, and their eigenfrequencies and mode shapes are important quantities,’ Ruijl explains. ‘Such a model is nothing more than a set of differential equations. Thermal effects are also described with differential equations. And to the mathematics, it doesn’t matter whether it describes a mechanical-dynamic or a thermal-dynamic system.’

It is not exactly the same, though. In the thermal domain, there are no objects that behave like a mass-spring system; the temperature does not overshoot but gradually returns, like a first-order system. Take a metal plate: if you heat it up in the middle, it will cool down as soon as you remove the heat source, but it never gets colder than its environment. The temperature distribution as a function of time can be modelled perfectly.

'It is quite unique how we, here in the Netherlands, look at thermal effects from a mechatronic design approach.'

Ruijl and his colleagues recycled the mathematical techniques from dynamics. ‘We still use tools from, for example, Ansys or Mathworks to perform the calculations. The analyses of mechanical vibration shapes have long been included in those packages. The thermal shapes are not, even though the technology is already there. When we started about twenty years ago, we asked Ansys if they could give us access to this feature. It took a long time, but now they’ve included a button for it. That shows how unique it is that we, here in the Netherlands, look at thermal effects from a mechatronic design approach. It’s really different from a pure physics approach, which often involves thermodynamic processes. We link thermal effects to mechatronic systems.’

Consciously incompetent

In order to get the theme fixed in the way of working of its employees, Philips developed a special training course: Thermal effects in mechatronic systems. The three-day course has since found refuge at Mechatronics Academy and is being marketed by High Tech Institute. Alongside Rob van Gils (Philips), Marco Koevoets (ASML) and Jack van der Sanden (ASML), Theo Ruijl is one of the trainers.

‘Of course, you can’t give a full training covering all topics in only three days,’ Ruijl admits. ‘The audience is too broad for that; people from different technical backgrounds come to the training course. Some have never done anything with TEMS, others are already quite experienced. Some are mechanical engineers, others control engineers.’

Dutch specialists look at thermal effects from a mechatronic design approach. That is unique in the world. For Ruijl, that started years ago with his PhD research supervised by Jan van Eijk and Piet Schellekens.

On the first day, the students receive an introduction to the physics background. ‘Heat transfer as radiation, conduction, convection,’ sums up Ruijl. ‘How do you deal with it? Many facts, tips and tricks. Then we go deeper; and we do simulations with Matlab and Simulink.’ Then the foundation has been laid. ‘The goal is for everyone to speak the same language afterwards.’

Day two deals with measurement techniques. ‘Measuring temperature is a skill in itself,’ emphasises Ruijl. ‘In any case, there are many different sensor types. But how do you measure accurately? And where? And do I measure the temperature of the object itself or of the lamp that is shining on it? Together with Jack, I once developed a system to control the water temperature in a precise manner. With a small coil in the stream, we were able to warm it up very quickly and very accurately. Then we made a nice setup for an exhibition, with beautiful Perspex tubes so that everything could be seen very clearly. Unfortunately, we didn’t manage to get the temperature stable anymore. We must have done something wrong, but what? It was so bad that the temperature fluctuated when people walked by. In the end, it turned out that the ceiling lighting in the hall was influencing the sensor by radiation through the transparent Perspex. You only make a mistake like that once,’ laughs Ruijl.

The students also do modelling themselves, using Matlab, although this tool doesn’t have a special toolbox for thermal effects. ‘We also deal with a cryogenic example as a practical case,’ says Ruijl. ‘How do you measure, for example, 77 kelvin? Which materials are best to use? Cryogenics is important for scientific experiments and for builders of electron microscopes.’

'Every design group should include a thermal specialist.'

What is the lesson for the TEMS students? ‘The most important thing is that they understand the language,’ Ruijl replies. ‘We also make them aware of the issues they have to pay attention to and need to take into account. Consciously incompetent. That’s very valuable, because engineers with that knowledge can catch mistakes at an early stage by looking at the project again or by bringing in a specialist. Every design group should always include a thermal specialist.’

This article was written by Alexander Pil, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’, they responded with an 8.9 out of 10.

Technical experts can also be successful in advisory sales

Consultative selling and communication skills trainer Claus Neeleman
Engineers and technical professionals are used to thinking of their advisory role as related to content alone. But if they want the customer or stakeholder to take action on the advice they offer, something else is needed: acceptance from the person for whom the advice is intended. That’s where sales skills come in. Claus Neeleman trains technical experts in successful advisory sales. ‘Once you understand how the sales process works, you can advise much more effectively. Both external and internal customers. Leading to a positive effect on your company’s results.’

Consultative selling, or advisory sales, is an effective sales method and therefore receives a lot of attention. According to trainer Claus Neeleman, this attention is justified. ‘Advisory sales is the best thing for the customer. It’s about finding the best solution for that customer and matching it to your own interest, namely the margin on the products or services that you sell. Advising and selling are therefore both important. The trick is to create value for the customer. That value is in good advice that yields more than what the customer pays for it. In engineering companies, engineers have an important, supporting sales role, because they know exactly what matters in terms of technical content. To sell something in the high tech environment, it’s the content that sells, not the sales talk.’

'Advisory sales is the best thing for the customer, it is about finding the best solution for that customer and matching it to your own interest, namely the margin on the products or services that you sell.'


Claus Neeleman trains technical experts in successful advisory sales.

'The trick is to create value for the customer. That value is in good advice that yields more than what the customer pays for it.'

Neeleman has a friendly personality and an intelligent gaze. He is qualified as an occupational and organisational psychologist and has worked at an assessment agency, at a reintegration agency and, amongst other things, as a regional manager. ‘When you carry out an assessment, you analyse and test people, which I thought was super fun and still do. You find out how to see people’s qualities and pitfalls, with the aim of helping them improve. At the reintegration agency, that didn’t always work, because in that environment, commerce plays a major role. This sometimes results in moral dilemmas. Do you help the person you have to put a lot of energy into, or the person who doesn’t cause much bother? I did this type of work mainly to help people move forward in their careers and their lives, so such choices were not what I wanted. That’s why I decided to become a trainer. Of course, I also took training courses myself and discovered that it’s a fascinating field. Training is something positive. People improve after taking a training course; they like it and are enthusiastic afterwards. That gives me energy. And I find it more fun to talk and be busy with people than to write reports at a desk.’

Lots of practice

Neeleman has been working as a trainer for some eighteen years. He focuses mainly on practical skills. ‘Much of what I teach comes from psychology, combined with insights from the field about how you can influence people and what the effects are. The content of a conversation can be the same, but the strategy for transferring that content to another person may differ. The best approach depends on the situation and the people in question. I firmly believe that practice is the best way to learn how to sell in an advisory capacity. The theory behind it is not complicated at all, but to handle conversations with customers better, you first have to experience what it’s like when you try out different behaviour.’

'To better address conversations with customers you first have to experience what it is like when you try out different behaviour.'

Teacher of the year

For several years now, Neeleman has been giving two training courses at High Tech Institute: ‘Effective communication skills for engineers’ and ‘Sales skills for engineers’. In 2016, he was appointed trainer of the year by High Tech Institute, with an evaluation score of 9.1 out of 10. Trainees called him impressive, inspiring and empathetic as a teacher and said that he is excellent at explaining things and tailors the course right to their needs.


In 2016, Claus became High Tech Institute’s ‘Teacher of the year’.

'To every advice moment belongs a sales moment.'

That is quite special, because selling is not the favourite job of technology professionals…
‘Correct. They also often think that they only give advice. But that’s incorrect. What they don’t see is that they use less effective strategies in conversations with the customer. The result, however, is noticeable: as soon as the customer puts them under pressure, they give a discount that isn’t in their interest. Or they’re too customer-friendly and forget to make agreements about the remuneration for their consultancy work. Or they’re unclear about the costs. During an ongoing contract, every advice moment comes with a sales moment. You have to pay attention to that.

But also in the initial phase of contact with the client, a technician must be sufficiently convincing for the sale of a service or product to succeed. How do you ensure that you come across well and generate trust? How do you give the customer the idea that you’re strong enough to carry out the project? You have to create trust and adapt your communication style to the customer and what’s important to them, both in content and in personal interaction. And if you work together with an account manager, you have to learn to speak one another’s language, so you know what your colleague’s intentions are and what the other person is doing in the sales process. The salesperson must, of course, also know when the content is important.’

That sounds pretty difficult.
‘In reality, it’s not such a big deal! The theory is a tool, a model that tells you which steps to take. Analytical people, such as technicians, can handle this very well. For example, the theory says that you yourself often generate resistance from the customer. This happens, for instance, if you’re more concerned with your own goals than with those of the customer, or if you put too much pressure on them. That’s what we call counter-behaviour. If you constantly know things better than your customer, they will start to object. And if you’re too dominant in the pace at which you discuss things, or try to enforce a decision, this also provokes resistance. Counter-behaviour doesn’t help you sell your solution. But if you connect with your customer and enter into a constructive dialogue, you build things up. The customer then moves along with you much more smoothly. If you encounter resistance during a conversation, you can correct it by adjusting your behaviour, for example by leaving the pace of the conversation more to the customer and by clearly putting their interests first.’

Do you yourself have to change in order to sell better?
‘That’s not necessary at all. You simply remain yourself; you only choose to exhibit different behaviour in certain situations in order to be more effective. If you’re aware of the way a sales process progresses and you know what works, you can determine much more effectively what effect you want to have on others. It’s not about right or wrong. You can reach your goal in many ways. But if you want to bring your story on stage successfully, it certainly helps if you know how to carry out advisory sales. And you can easily do that without forcing yourself into a situation that you don’t like.’

'As soon as you understand the sales process, you can advise more effectively. '

What is the secret of a successful advisory sales conversation?
‘You need two ingredients: a good, sound story and acceptance by the customer. The latter means ensuring that the customer is able to accept your advice. You do this by raising the questions, feelings and doubts that could prevent the customer from accepting your product or service, and by giving good answers to them. Interviewing your customer based on the signals he or she gives you is not easy for technicians, because technicians deal mainly with facts and less with emotions. But with a little practice, they can learn how to do this.’

How does such a conversation proceed?
‘The first phase is the contact phase. Technicians often find it difficult to get through this part and prefer to go straight to the content. But the first phase is important for generating trust and for creating a good personal relationship. In this phase, you also decide what you are talking about. You show that you have thought about the customer’s problem and you indicate that you already have a few ideas. In the contact phase you also agree on your way of communicating with the customer. If you have the same communication style, that’s easy. A customer can also be very directive and want to decide quickly. As a technician you have a tendency to look at a problem from all sides, but this type of customer gets irritated by that. So, if you find that time and money are important goals for a customer, then you have to respond to that information. You will then get more space for the content later in the conversation.

In the second phase you will make an inventory, thus mapping out the customer’s needs. You have proven effective methods for that. As a result, the customer recognises the scope of his/her problem and wants to take action. You cannot achieve that by saying that they have a big problem, you do that by asking questions. This leads to a sense of urgency, the idea that something has to be done.

The third phase is the presentation of your advice, where you show your skills and influence people. In the fourth and final phase you help the customer to come to a decision by taking steps together in the decision process. This is the actual advice work.

All in all, an advisory sales conversation is more about the customer than about you. The customer is king.’

Tips from Claus

‘Be happy with critical questions or reactions, because this is the moment when you have contact about the content. When this happens, don’t try to be smarter or question the question; instead, go deeper into it, because there’s a fear or worry hidden behind such a question. So take a step towards the customer by using criticism as positive input. After all, the customer knows the most about the problem for which he or she needs your advice. The beauty of it is that what you learn during this training course, you can also apply in other situations inside and outside your company. Acceptance comes with every deal. It’s all about influencing.’

This article was written by Mathilde van Hulzen, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’, they responded with a 9.1 out of 10.

Passive damping: increasingly part of a high tech engineer’s standard toolset

Trainer High Tech Institute: Kees Verbaan
Passive damping has been a standard tool for civil engineers and architects for quite some time. Mechanical engineers designing for micron accuracy, however, typically tried to avoid the use of damping. Now that the high tech world has entered the domain of sub-nanometer precision, mechanical engineers are increasingly discovering that passive damping is an effective medicine for contemporary precision ailments.

In recent years, passive damping has been becoming more and more a standard tool for precision engineers. It’s no coincidence that the five-day training course ‘Design principles for precision engineering’ devotes a whole day to this subject. Due to the increasing importance of passive damping for systems with sub-nanometer positioning requirements, High Tech Institute partner Mechatronics Academy has developed a special training course on this topic. Top experts Hans Vermeulen and Kees Verbaan teach this new course, ‘Passive damping for high tech systems’.

Hans Vermeulen first came into contact with passive damping at Philips CFT in the late nineties. Since mid-2000, he has worked at ASML, where this technology has meanwhile been implemented in various subsystems to achieve sub-nanometer precision. He is also a part-time professor at TU Eindhoven for one day a week. Unhindered by the daily hustle and bustle in Veldhoven, Vermeulen is able to focus on, among other things, passive damping. The fact that his lectures in this field started several years ago shows that passive damping is very much in the spotlight.


Hans Vermeulen says that ASML is increasingly using passive damping to achieve sub-nanometer precision.

Colleague-trainer Kees Verbaan received his doctorate in robust mass dampers for motion stages in 2015. He works for the NTS Group, a first-tier supplier for high tech machine design. In his role as system architect, Verbaan sees passive damping technology as becoming well established in many high-end companies.


System architect Kees Verbaan who obtained his doctorate in robust mass dampers, now sees his professional field become well established.

In the world of gross dimensions (centimetres instead of nanometres), passive damping is encountered everywhere. Put your finger on a vibrating tuning fork or nail a large rug to the wall and you’re readily applying passive damping. The automotive industry frequently applies it to car doors. A layer of anti-drumming film renders a good sound experience. When you close the door, you don’t hear the sheet metal resonate annoyingly: the damping layer provides the gentle sound that we associate with quality. The energy doesn’t stay in the material as a continuous vibration but is transferred into heat via a layer of bitumen on the inside of the door. A rather extreme example of a passive damping design is found in Taipei 101, the tallest building in the eponymous Taiwanese capital. Because earthquakes and typhoons occur quite frequently there, the 101-storey building is equipped with a tuned mass damper: a huge spherical mass of more than eight hundred tons that hangs at the top of the building on four ropes and is fitted with large viscous dampers. In the event of vibrations caused by earthquakes or severe storms, the sphere moves out of phase, absorbing a large part of the building’s kinetic energy. ‘Similar techniques are now also entering high tech,’ Hans Vermeulen says. ‘In recent years, damping layers – so-called constrained layers – have been applied in high-precision stages, and tuned mass dampers are being used to suppress disturbing vibrations at specific frequencies to increase the accuracy of the entire system.’

In high tech mechanical engineering, the application of passive damping was avoided and worked around for a long time. This is mainly because designers were able to reach their goals (and often still can) with the traditional approach: using relatively stiff structures in metal or ceramics and metal springs to get predictable behaviour.

Plastics, rubber and composites

Although the use of plastics, rubber materials and composites can significantly reduce unwanted vibrations, their application has never been that popular, because the hysteretic behaviour of these materials potentially makes precision systems unpredictable. Another reason is that, for a long time, analytical tools such as finite element analysis, and the computers needed to run them, didn’t have sufficient computing power to properly predict the influence of passive damping in structures made from such exotic materials. In recent years, however, things have changed.

It may be a truism, but it’s still very true: in the world of high tech systems, the demands for precision are constantly increasing. Semiconductor manufacturers want lithographic machines that can reliably make patterns with sub-nanometer precision. Biotechnologists need microscopes that allow imaging of DNA structures at the atomic level, and medical professionals rely on diagnostic equipment with, if possible, molecular resolution. In all sectors, demands are rising to such an extent that mechanical designers and architects can no longer rely on their standard toolset.

'In the traditional toolset of a design engineer, there used to be three drawers of tools. Now it appears there are six.'

It appears that passive damping can make a very significant contribution here. The approach has proven its effectiveness, also in high tech equipment. ‘The nice thing about damping is that a whole new box of tricks is being used,’ Verbaan says. ‘Precision engineers really benefit from a few additional pieces on their chessboard. I like that, because in a design engineer’s traditional toolkit, there were only three full drawers. Now it turns out that there are three more, full of new types of tools that he didn’t use before.’ He underlines that damping is an extension of the solution space, not a replacement. ‘If you don’t master traditional design, the additions will not bring you much.’

‘When requirements were less demanding, designers were used to the predictable solution space consisting of masses and springs,’ Vermeulen says. ‘In traditional design, you deal with linear relationships, such as those between force and position or stress and strain. To limit the negative effect of amplification at resonance, designers make sure that the natural frequencies in the system are sufficiently high. That translates into light and rigid designs, using low-mass solutions and stiff materials and geometries.’

Monolithic leaf spring

Hooke’s law states a linear relationship between force and position, or stress and strain, for linearly elastic materials. This means that an elastic material returns exactly to its original position, which is convenient: as long as you know the forces acting on the system, you can accurately predict the position. Take the example of a monolithic leaf spring, a solid block of metal into which holes and slots have been machined to create a mechanism based on masses and springs. Such a structure exhibits reproducible linear behaviour, free from hysteresis. From a control perspective, however, this approach can create problems when higher precision is required.


Typical construction with integrated tuned mass damping. Photo: Janssen Precision Engineering.


Example of a monolithic leaf spring. A solid block of metal is processed with holes and slots into a mechanism based on masses and springs. Such a structure exhibits reproducible linear behaviour but has the disadvantage that it ‘sounds like a clock’.

In this type of design, the control system suffers from long-lasting vibrations. Resonances might be excited by forces within the system itself, such as imposed motion profiles, but also by external influences, for example floor vibrations or air displacement. Without damping, these vibrations remain in the system for a long time; the structure cannot get rid of the vibrational energy.

Mechanical engineers tend to say ‘it sounds like a clock’, and in this case, that’s not a positive observation. High-frequency resonances are generally difficult to get rid of via active control. That’s why system designers always try to make sure these types of resonances lie outside the area of interest. This means the first natural frequency is typically designed to be roughly five times above the control bandwidth. That way, the control system isn’t affected in the lower frequency range. Vibrations caused by disturbances do occur, but their effect doesn’t limit performance.

If the demands for accuracy increase, however, designers using the traditional approach will be forced to achieve higher natural frequencies within the design. ‘The demands are increasing,’ says program manager Adrian Rankers of Mechatronics Academy. ‘That will come to an end, because it is not manufacturable anymore.’

Aversion

The traditional approach was sufficient for high tech system designers for many years. But in their search for increasing precision, all high-end system suppliers are now looking at the possibilities of implementing passive damping. Vermeulen: ‘I dare to say that it is becoming standard in the high tech systems industry. Not everyone is familiar with it, but it is expanding.’ Verbaan: ‘The big players such as ASML, Philips, TNO and ThermoFisher have the time to develop their knowledge and conduct research.’

Vermeulen: ‘Damping means that you deviate from the linear elastic behaviour of materials as defined by Hooke’s law. This is because the material converts part of the energy into heat. If you plot force against elongation in a graph, the dissipation is expressed in the hysteresis loop. The surface of this loop is proportional to the dissipated energy: the damping that you can provide to the structure.’

In addition, the stiffness and damping properties of rubber are temperature- and frequency-dependent (for specialists: linear viscoelastic models can be used for rubbers). As a result, these types of damping materials have been avoided for a long time: a system can have different states under the same load conditions. Vermeulen: ‘That means uncertainty in position.’ Precision engineers have an aversion to this. ‘With damping you deviate from the linear relationship. You pass through a hysteresis loop when the force increases and decreases again, and you don’t know exactly how, since not all the forces that affect the system are exactly known. Often there are disturbances from the outside, and then you can end up in a position that was not predicted beforehand. We have actually sought to avoid that uncertainty for a long time. As a result, everyone in the high-tech systems sector has avoided damping and has designed things traditionally using masses and springs. But at a given moment, the possibilities come to an end.’
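Vermeulen’s remark that the surface of the hysteresis loop is proportional to the dissipated energy can be checked numerically. The sketch below (hypothetical stiffness and damping values) integrates ∮F dx over one cycle of sinusoidal motion for a linear spring plus viscous damper; the elastic term cancels over a full cycle, and the loop area matches the viscous-dissipation formula π·c·ω·X².

```python
import math

def loop_energy(k, c, X, omega, n=20000):
    """Energy dissipated per cycle, computed as the area of the
    force-displacement hysteresis loop for x(t) = X*sin(omega*t)
    with F = k*x + c*dx/dt (linear spring plus viscous damper)."""
    E = 0.0
    dt = 2 * math.pi / omega / n
    for i in range(n):
        t = i * dt
        x_dot = X * omega * math.cos(omega * t)
        force = k * X * math.sin(omega * t) + c * x_dot
        E += force * x_dot * dt          # contour integral of F dx
    return E

# Hypothetical mount: k = 1e5 N/m, c = 50 Ns/m, 1 mm stroke at 10 Hz
E = loop_energy(k=1e5, c=50.0, X=1e-3, omega=2 * math.pi * 10)
# The purely elastic part contributes nothing; E approaches pi*c*omega*X**2
```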

Sting

The sting, however, is in the above-mentioned hysteresis loop. It is more complicated to predict behaviour correctly, because the system can, as mentioned, be found in different states. This means that operating and controlling is complex in environments where floor vibrations and small variations in air pressure or temperature cause major disruptions. A soft exhalation over a wafer stage already produces a standing wave with an amplitude of several tens of nanometers, while the stages need to be controlled at sub-nanometer level.

Over the last few decades, the pursuit of the holy grail of completely predictable behaviour of guideways has been expressed in avoiding friction as much as possible, even though friction also provides energy dissipation, and hence damping. ‘In many applications, Coulomb friction is not desired,’ Vermeulen says. ‘Also, rolling elements don’t work in every situation. That is why air bearings are popular. They hardly have any friction.’ IBM already used air bearings in its hard drives in 1961. Lithographic equipment developed in the Sixties and Seventies at the Philips Physics Laboratory was equipped with virtually frictionless oil bearings, and these days air bearing technology is used in multiple systems. Vermeulen: ‘With the classical box of tricks to design frictionless guideways, avoiding play, and applying high-stiffness springs with limited mass, we were able to make the behaviour predictable for a long time. But for nanometer applications and beyond, this is no longer sufficient.’

Wobbly pizza disk

Until recently, the classic approach was fine for designing motion stages for wafer steppers and scanners. By using structural metals and ceramics, such a stage can be made lightweight and stiff. The natural frequencies are high enough not to be limiting for high-bandwidth control. However, the requirement of subnanometer precision makes the introduction of more rigorous steps necessary.

'At the nanometer level it is as if you have to keep a wobbly pizza disk still with your hands.'

During his PhD, Verbaan investigated the influence of passive damping on a positioning system for 450 millimetre wafers. Such a stage has outer dimensions of 600 by 600 millimetres. ‘At the nanometer level it is as if you have to keep a wobbly pizza disk still with your hands,’ says Verbaan. He compared various materials and used finite element analyses to investigate and optimize the influence of mass distribution on performance.

Such a large system is susceptible to multiple resonance frequencies. To be able to control the stage accurately, these resonances must be suppressed. ‘For one frequency it is clear how that is done, and you can also put that into a simple model. But if you have multiple resonance peaks across a broad frequency band, that is virtually impossible. Then you get a model that is too complex to handle.’

That is exactly what engineers encounter in practice. The first ‘hurdle’ that limits system performance is the first natural frequency, the lowest frequency at which an object starts to vibrate violently as the excitation frequency is increased. The traditional approach is to try to increase this frequency. If the means for this are exhausted, attenuation can help to suppress the resonance amplitudes. The first eigenfrequency of a square wafer table is, for example, the torsion mode, in which the two pairs of opposite corners move in phase. But at higher frequencies everything starts to rattle, due to the numerous parts and components that are attached to the table, such as connectors and sensors. ‘Multiple small masses that vibrate at kilohertz frequencies. They will ultimately determine the dynamic behaviour. You are not able to solve this via active filtering in the control system because there are so many of them. With passive damping, however, you can solve all of that,’ says Vermeulen.

Hans Vermeulen shows on a graph how damping can reduce a resonance peak.

Verbaan: ‘What helps is that damping materials such as rubbers and liquids, and the dampers you design with these materials, typically behave very favourably at those high frequencies, primarily because of the frequency-dependent material properties. At low frequencies, they behave like a low-stiffness spring, and therefore give in a little bit, but at higher frequencies, they become viscous.’

Vermeulen and Verbaan’s training course makes it clear that, although you can make the field of damping extremely difficult, there are also very good rules of thumb and several very useful design principles. Verbaan: ‘Our goal is to outline the entire palette of options and ensure that students attending the course can get to a solution using the right approach. You can let modern computers calculate for days or even weeks, but then you have to be a real specialist. We want to provide the course participants with various possibilities for applying damping. They are taught the backgrounds of modelling, and also the simple approach to the problem, so that they can apply damping correctly.’
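Verbaan’s description, a soft spring at low frequencies and viscous behaviour at higher ones, is captured by the simple linear viscoelastic models he refers to. A sketch using a standard linear solid (Zener) model with hypothetical parameters; the real part of the complex stiffness is the storage (spring) behaviour, the imaginary part the loss (damping):

```python
import math

def zener_stiffness(w, k0, k1, tau):
    """Complex stiffness of a standard linear solid (Zener) model:
    a spring k0 in parallel with a Maxwell branch (spring k1 in
    series with a dashpot, relaxation time tau = c / k1)."""
    return k0 + k1 * (1j * w * tau) / (1 + 1j * w * tau)

# Hypothetical rubber mount: k0 = 1e5 N/m, k1 = 4e5 N/m, tau = 1 ms
for f_hz in (1.0, 159.0, 10000.0):
    k = zener_stiffness(2 * math.pi * f_hz, 1e5, 4e5, 1e-3)
    # k.real climbs from ~k0 at low frequency towards ~k0 + k1;
    # k.imag (the damping) peaks near f = 1/(2*pi*tau), about 159 Hz
```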

‘Potential students are people with a design principles background on the one hand,’ says Rankers. ‘They want to apply damping in practice. On the other hand, system architects will also be interested, so that they are aware of the possibilities that damping can offer.’


During the training, Kees Verbaan draws a motion stage that needs to be kept steady in a vertical position. All kinds of forces act on such a table, varying from horizontal motors that accelerate it to vertical actuators that keep the wafer on the table at the correct height. In the first vibration mode, one pair of opposite corners moves up or down simultaneously, while the other corners move in the opposite direction. The resulting motion can be in the order of tens of nanometers, while the stage requires subnanometer position control.

Vermeulen and Verbaan underline that passive damping is not a ‘miracle oil’. An integral design approach is indispensable. ‘I have heard engineers saying: leave that mistake in for now, we’ll solve it later on with controls,’ says Verbaan. He says that people sometimes come to him with systems that don’t achieve the desired performance and ask him if they can use passive damping to fix it. Verbaan: ‘Sometimes, this is dealt with too easily. You cannot simply forget the basics of sound mechanical design. It all starts with a lightweight and stiff design, which indisputably remains necessary, also for the proper functioning of damping. The palette of options is getting bigger, but damping is not a replacement.’

In the course ‘Passive damping for high tech systems’, Verbaan and Vermeulen explain multiple damping mechanisms in detail, such as material damping, tuned mass and robust mass damping, constrained layer damping, and eddy current damping. Starting with damping implementations in other application areas, such as civil engineering and automotive, the focus is on the design, modelling and implementation of passive damping in high-tech systems. Stan van der Meulen, co-trainer of the course, will focus on the application of viscoelastic damping in a semiconductor wafer stage.

This article was written by René Raaijmakers, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’, they responded with an average score of 9.2 out of 10.