Value Engineering is so much more than just saving a few euros – says a lead system architect

trainer High Tech Institute
After years of practical experience at Philips Healthcare, Goof Pruijsen now offers advice on value engineering and cost management. He provides training on these subjects for High Tech Institute.

‘I really enjoy it.’ Goof Pruijsen does, when people from different technical development disciplines reap the benefits of his views and knowledge. ‘It gives me a wonderful sense of appreciation.’ He himself is immensely curious. It fascinates him to understand in detail what it is that people consider buying, how and why a product works technically and how you can improve it in order to improve the business.

Recently he received a big compliment from a lead system architect from ASML who attended a Pruijsen workshop together with his team. ‘I thought we were going to save a few euros, but I learned that value engineering was much more,’ says this system architect. ‘We dealt with some fantastic topics and posed questions about decisions that we had taken at a high level in system architecture. The insights we were left with didn’t only have an impact on costs, but also on the reduction of complexity, risk, time to market and the hours that we spent on engineering.’


Goof Pruijsen: ‘It is precisely the solution-driven approach, used by many teams, which makes them blind to alternatives.’

Value engineering therefore suits Pruijsen perfectly, although the definition sounds a bit dull: it is about adjusting design and manufacturing based on a thorough analysis. Done well, it often leads to cost reductions. That is why developers often have a negative association with value engineering: the ‘squeezing’ of a design to save costs.

However, High Tech Institute trainer Goof Pruijsen identifies a much more important value: value engineering builds bridges between marketing, development, manufacturing, purchasing and suppliers. It is precisely this interplay between disciplines that makes large gains achievable with this approach.

Cost reduction often focuses on the component list of the current solution. Pruijsen calls this a beginner’s mistake. ‘You see newbies in the profession carry out a so-called Pareto analysis, mapping out the 20 percent of the components that are responsible for 80 percent of the costs. They then shave something off the most expensive items. It’s not called the cheese slicer method for nothing.’

This approach is often not very effective, says Pruijsen. ‘By then, others have usually intervened before you. There is not much left to be gained and chances are that new interventions will affect quality. If that comes at the expense of your image, you are even worse off.’

Value engineering therefore starts, according to Pruijsen, with value for the customer. ‘What does the customer want to achieve? Which functions are needed for this? What is the value of that function and what are the costs?’ An example that he often mentions is as follows: it is not about the drill, not about the hole, but about hanging the painting in order to decorate your house. Going back to the ultimate goal makes room for creativity and new solutions and concepts.

Tolerance is the cost driver

Thinking in functions is less well established than most developers think. Pruijsen sees that the solution focus with which many teams work makes them blind to alternatives. ‘They don’t think out of the box.’ It helps – and that requires practice – to analyse an existing solution and gradually abstract it until the functions are perfectly clear, without describing the solution. ‘Then you can map out the costs functionally and together investigate why these functions are expensive. That is a good start for optimising current and future product generations. I call that cost driver analysis. If you do this well, everyone starts to understand the problem much better and you are already halfway to the solution,’ says Pruijsen.

Tolerance or accuracy is a typical example of a cost driver. Narrow tolerances result in more processing time or steps. An average power supply is usually not that expensive, but if the voltage ripple is very small, then the price rises.

'Developers are usually unaware of the consequences of their risk-avoiding copying behaviour.'

You need to take a close look at those tolerances, according to Pruijsen. ‘Are they really needed everywhere, or only locally? Why is the tolerance specified this way? Often, nobody seems to have considered it. Tolerances may simply have been copied from the previous drawing; designers pay no attention to them, but they do appear on the invoice. Developers are usually unaware of the consequences of their risk-avoiding copying behaviour. If it turns out that a tolerance requirement is not so strict after all, manufacturing suddenly becomes much easier, faster and cheaper. Problems with manufacturability and production yield are often resolved spontaneously.’

Large projects, multiple teams, balanced design

In large projects with multiple sub-teams, every team optimises its own area as much as possible – if only out of ambition. Pruijsen: ‘If the teams don’t understand how the job is distributed across the modules, the chance of imbalance in design and specification is high. You don’t put a Formula 1 engine on the chassis of a 2CV. The performance of the components must be in balance with each other. The task of the system architect is to maintain that balance and prevent over-engineering.’

Pruijsen provides a practical case from his time at Philips Healthcare. X-rays have been used for many years in medical diagnostics and materials research. To generate them, you fire electrons, accelerated by a high voltage, onto a heavy-metal target. At one point, the marketing department asked for a new high-voltage generator: one with more power, better stability and higher reliability. And preferably also cheaper.

'Every step in the labour process also includes an error risk; and you can add to that an additional risk of quality problems and production loss.'

‘A project like this often starts for purely performance- and technology-driven purposes,’ says Pruijsen from experience. ‘In this case, however, we decided to start formally with a value engineering workshop in order to improve both the profit margin on the product and the technical direction. The old generator was analysed with respect to costs and functions. It turned out that a relatively large amount of money went into the many smaller parts (the so-called ‘long tail’ of the Pareto). You cannot quickly put your finger on one expensive part; the syndrome is one of many components. A many-parts syndrome typically manifests itself in high design costs, high handling costs and high assembly costs for all parts involved. Every step in the labour process also carries an error risk, and on top of that comes an additional risk of quality problems and production loss. The direction for improvement is therefore usually reducing the number of parts through integration, the approach known as DFMA (Design for Manufacturing & Assembly).’

Another cost driver was locked in by the concept itself. To safely insulate the high voltage, the old design submerged it completely in an oil tank. That later turned out to be too big, too heavy and unnecessarily expensive.

Pruijsen: ‘We brainstormed each function and built a consistent and optimal scenario. For the high-voltage generation, we could build on new technology that makes it possible to transform at higher frequencies. That way, we could greatly reduce the volume.’
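The volume gain Pruijsen mentions follows from the standard transformer EMF relation, a textbook formula (not from the article): for a given voltage, number of turns and flux density, the required core cross-section shrinks in proportion to the switching frequency.

```latex
% Transformer EMF equation: V_{rms} = 4.44 \, f \, N \, A_c \, B_{max}
% Solving for the core cross-section A_c:
A_c = \frac{V_{rms}}{4.44 \, f \, N \, B_{max}}
```

So, all else being equal, doubling the switching frequency $f$ halves the core area $A_c$, which is why switching at higher frequencies lets the high-voltage transformer, and with it the oil volume around it, shrink dramatically.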

Observing how the machine was used brought the biggest breakthrough. The old generator had been developed by maximising all individual performance requirements, without checking whether these were useful combinations. However, doctors use either a single high-power shot or several images per second at very low power (and some combinations in between). ‘When the engineers saw this, they were indignant. Nobody had ever told them that! The result was a large reduction in required power and a high-voltage tank that was ultimately only a tenth of the original volume.’

Cooling was still necessary, but instead of using large fans, Pruijsen and his team placed the largest heat source at the bottom of the cabinet. ‘This created a convection current. We used the heat source to improve cooling.’ This is an example of ‘reversed thinking’.

‘The end result was a smaller and quieter generator, 35 percent cheaper. Moreover, fewer components were needed and we achieved a better reliability. And there was another optimisation, the total space required for the system could be reduced by one cabinet.’

Could it have been even better? Yes, of course it could, says Pruijsen. ‘We were unable to break through one specification point during this process. The generator was specified at 100 kW. It was said that this had to be so according to medical regulations. It took me months to find the source of this misconception. It turned out to be a medical guideline that advises using a generator of at least 80 kW in order to make a good diagnosis with greater certainty. So it was a piece of advice, not a regulation!’

This ‘advice’ dated back to 1991. In the intervening twenty years, image processing techniques have progressed so fast that a better result can be obtained with much less power. Eventually, Pruijsen found a product manager who admitted that it was not a legal directive but a so-called tender spec. ‘Because manufacturers have been telling their customers for years that only 100 kW gives sufficient quality, it has become an accepted customer belief.’

‘If the tolerance requirements prove too high but can be relaxed, manufacturing can suddenly become much easier, faster and cheaper,’ says Goof Pruijsen. ‘Problems with manufacturability and production yield are then often resolved spontaneously.’

Managing modular architecture

Pruijsen gives another example. A large module in a production machine was designed as a number of small modules, so that a sub-module could be replaced quickly in case of errors. The assumption was that this was cheaper and required less service stock. ‘The increase in the number of critical interfaces with high tolerance requirements, however, doubled the cost price, and the complexity increased so much that the expected reliability was dramatically lower,’ says Pruijsen. ‘Add to this the additional development costs and production tests. A one-piece design turned out to be the better solution, with the components most at risk of failure placed in an easily accessible location. The lesson: modularity is not about cutting a module into submodules, but about placing your module boundaries and interfaces correctly – in this case, with a view to providing the best and most cost-efficient service. You have to keep thinking about the consequences and the balance.’

In his value engineering training course, Pruijsen makes it clear how the set-up of a value engineering study works in practice. First, he concentrates on analysis tools and then on creative techniques for improved scenarios. In addition, attention is paid to involving suppliers in this approach.

There is a lot of attention paid to practical training. One third of the training course consists of practical exercises. For example, there is a ‘Lego-car exercise’ in which course participants learn how to tackle cost reduction and value increase. In addition, they also carry out benefit analyses (case: on the basis of which criteria do customers decide to buy a car?), process flow analysis (case: optimisation of a canteen) and function analysis (the core of functional thinking). Many techniques are clarified on the basis of examples.

Pruijsen also asks course participants to prepare a short presentation of up to ten minutes in advance about their business and product. He may choose one to jointly analyse ‘on the spot.’

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’ they responded with an 8.3 out of 10.

Goof’s tips for value engineering

Last but not least, here are some tips from Goof Pruijsen on value engineering:

1. Analyze before considering solutions

2. Go back to basic comprehension: what does it do?

3. What makes it expensive and why?

4. Make an inventory of the assumptions and try to destroy them

5. Be creative; don’t limit yourself to thinking of traditional solutions (risk avoiding), but look for the boundaries

6. Bring the solutions together in a total overview and build scenarios

7. Don’t play down the risks, but also don’t use them as an excuse for not doing things either. Make them explicit and find mitigations for them

8. Keep an eye on the business side of things. Everyone likes to be creative, but money also needs to be earned. Which scenario best satisfies the financial and organisational preconditions?

9. Go for it!

This article is written by René Raaijmakers, tech editor of High-Tech Systems.


Multicore programming skills do not come from Dijkstra

Multicore programming in C++ trainer Klaas van Gend
In practice, writing parallel software is still a difficult task. You keep coming up against unforeseen issues if you don’t understand each and every level of the problem, says Klaas van Gend.

In 2019, multicore software should be easier to write than ever. Modern programming languages such as Scala and Rust are maturing, programming frameworks are getting easier to use, and C# and good old C++ are embracing parallelism as part of their standard libraries.

In practice, however, it’s still a messy process. The whole thing turns out to be difficult to synchronize, and once the software works, it often runs only a little faster on a multicore processor, or not at all. To make matters worse, it tends to exhibit all kinds of elusive errors.

Parallel programming is simply a very tough subject, where you run into all sorts of subtle, unexpected effects if you don’t understand what’s happening at all levels, says Klaas van Gend, software architect at Sioux. ‘I’ve heard people talk about sharing nodes on a supercomputer using virtual machines. But they ruin each other’s processor cache; they just get in one another’s way.’

'At university it was all about Dijkstra, which means mutexes, locks and condition variables. But the moment you take a lock, you only ensure that the code is executed on one core while the others temporarily do nothing. So you really only learn how not to program for multicore.'

According to Van Gend, the problem is that many developers never received a pedagogically sound basis during their computer science training. ‘At university it was all about Dijkstra, which means mutexes, locks and condition variables. But the moment you take a lock, you only ensure that the code is executed on one core while the others temporarily do nothing. So you really only learn how not to program for multicore,’ he says.

That is why Van Gend has taken the multicore training given by his old employer Vector Fabrics out of mothballs. Until a few years ago, Vector Fabrics focused on tooling to provide insight into the perils of parallel software. Together with CTO Jos van Eijndhoven and other employees, Van Gend provided training courses on the subject. The company went bankrupt in 2016, but Van Gend realised at his current employer that the problem is still relevant. After giving the training course there once again, he now also offers it to third parties under the High Tech Institute flag.


Klaas van Gend is the lecturer of the 3-day training ‘Multicore programming in C++’.

A problem at each and every level

One of the important matters when writing parallel software is finding out how to make it work across multiple levels, explains Van Gend. He always makes this point with a simple example: Conway’s Game of Life, the cellular automaton where cells in a grid become black or white with each new round, depending on the status of their immediate neighbours. ‘At the bottom level of your program you have to check what your neighbouring cells are. You can do that with two for-loops. Then you have a loop for a complete row, and above that one for the complete set of rows.’

‘Most programmers will begin to parallelize at those bottom loops. That is very natural, because that is a piece of code you can still understand, that still fits in your mind. But it makes much more sense to begin at a higher level and take that outer loop. Then you divide the field into multiple blocks of rows and your workload per core is much larger.’
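Van Gend’s point about taking the outer loop can be sketched in C++. This is a minimal illustration, not course material: the grid type, the function names and the way rows are split over threads are my own assumptions.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// A minimal Game of Life step, parallelized over blocks of rows (the outer
// loop) rather than over the two innermost neighbour loops.
using Grid = std::vector<std::vector<int>>;

// Count live neighbours with the two small inner for-loops.
int live_neighbours(const Grid& g, int r, int c) {
    int rows = static_cast<int>(g.size());
    int cols = static_cast<int>(g[0].size());
    int count = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0) continue;
            int rr = r + dr, cc = c + dc;
            if (rr >= 0 && rr < rows && cc >= 0 && cc < cols)
                count += g[rr][cc];
        }
    return count;
}

// One generation: each thread gets a contiguous block of rows, so the chunk
// of work per core is large and no two threads write the same output rows.
Grid step_parallel(const Grid& g, int num_threads) {
    int rows = static_cast<int>(g.size());
    Grid next(rows, std::vector<int>(g[0].size(), 0));
    int block = (rows + num_threads - 1) / num_threads;
    std::vector<std::thread> workers;
    for (int t = 0; t < num_threads; ++t) {
        int lo = t * block;
        int hi = std::min(rows, lo + block);
        workers.emplace_back([&g, &next, lo, hi] {
            int cols = static_cast<int>(g[0].size());
            for (int r = lo; r < hi; ++r)
                for (int c = 0; c < cols; ++c) {
                    int nb = live_neighbours(g, r, c);
                    next[r][c] =
                        (g[r][c] ? (nb == 2 || nb == 3) : (nb == 3)) ? 1 : 0;
                }
        });
    }
    for (auto& w : workers) w.join();
    return next;
}
```

Splitting at the row level keeps each thread’s chunk of work large; parallelizing the two neighbour loops instead would generate far more thread-management overhead than useful computation.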

Looked at in that way, it soon becomes clear that there are many things to watch out for. There are also programs where the load is variable. ‘For example, we have an exercise to calculate the first hundred prime numbers. There is already more than a factor of one hundred between prime number ten and prime number ninety-nine. Then you have to take care of load balancing.’
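One common way to handle such a variable load is dynamic work distribution: instead of pre-assigning each thread a fixed range of candidates, every thread grabs the next candidate from a shared atomic counter, so threads that drew cheap numbers simply fetch more work. A sketch of the idea (the function names and the shared-counter scheme are my own, not taken from the training exercise):

```cpp
#include <algorithm>
#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

// Naive trial division -- deliberately more expensive for larger n,
// which is exactly what makes a static work split unbalanced.
bool is_prime(long n) {
    if (n < 2) return false;
    for (long d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Dynamic load balancing: threads pull the next candidate from a shared
// atomic counter instead of being handed a fixed slice up front.
std::vector<long> primes_below(long limit, int num_threads) {
    std::atomic<long> next{2};
    std::mutex result_mutex;
    std::vector<long> primes;
    auto worker = [&] {
        for (long n = next.fetch_add(1); n < limit; n = next.fetch_add(1))
            if (is_prime(n)) {
                std::lock_guard<std::mutex> lock(result_mutex);
                primes.push_back(n);
            }
    };
    std::vector<std::thread> pool;
    for (int i = 0; i < num_threads; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    std::sort(primes.begin(), primes.end());  // threads finish out of order
    return primes;
}
```

With a static split, the thread that happens to get the largest candidates finishes last while the others idle; with the shared counter, all threads stay busy until the work runs out.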

There are also differences in what you can parallelize: the data or the task. ‘Data parallelism is generally suitable for very specific applications, but otherwise you soon arrive at a decomposition of your task. This can be done with an actor model or with a Kahn process network, but data parallelism can again be part of it. In practice, you will see that you always end up with mixed forms.’

It has not just been about algorithms for some time now; the underlying hardware plays a key role. For example, if the programmer doesn’t take the caching mechanisms of the processor into account, the problem of false sharing may arise. ‘I have seen huge applications brought to their knees,’ says Van Gend. ‘Suppose you have two threads that are both collecting metrics. If you divide those messily, counters from different threads can end up in the same cache line. The two processors then need to work simultaneously with the same cache line, and the cache mechanism constantly drags the line back and forth. That lowers performance greatly.’

For that reason, Van Gend is also skeptical about the use of high-level languages in multicore designs; they tend to abstract away the details of the memory layout. ‘With a language like C++ it is still very clear that you are working on basic primitives, and you can see that clearly. But higher-level languages often skim over the details of the data types, which means that the system can never really run smoothly.’
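The cache-line problem Van Gend describes is typically avoided by padding per-thread data out to cache-line size. A minimal sketch, with the caveat that the 64-byte line size and the struct name are assumptions on my part (portable code would consult `std::hardware_destructive_interference_size`):

```cpp
#include <thread>
#include <vector>

// Each thread's counter is aligned to its own 64-byte cache line, so
// concurrent updates don't keep invalidating each other's cache lines
// (false sharing).
struct alignas(64) PaddedCounter {
    long value = 0;
};

long count_events(int num_threads, long events_per_thread) {
    std::vector<PaddedCounter> counters(num_threads);
    std::vector<std::thread> pool;
    for (int t = 0; t < num_threads; ++t)
        pool.emplace_back([&counters, t, events_per_thread] {
            for (long i = 0; i < events_per_thread; ++i)
                ++counters[t].value;  // only thread t touches line t
        });
    for (auto& t : pool) t.join();
    long total = 0;
    for (const auto& c : counters) total += c.value;
    return total;
}
```

Without the `alignas`, counters from different threads can land in the same cache line and the cores drag that line back and forth, exactly the slowdown described above; the result is still correct, just much slower.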

'If you only partially understand the model, then you will run into problems. It works well for certain specific situations, but it can’t be used everywhere.'

In any case, Van Gend thinks that new languages are no miracle cure for the multicore problem. As a rule, they assume a specific approach that doesn’t necessarily fit the application well. ‘Languages such as Scala or Rust rely heavily on the actor model to make threading easier. If you only partially understand the model, then you will run into problems. It works well for certain specific situations, but it can’t be used everywhere.’

The wrong assumption

The modern versions of C++ also offer additions to enable parallel programming. ‘Atomics are now fully supported, for example. With these you can often exchange data without stopping anything. There is also work on a library in which the locking is no longer visible to users at all. If it is necessary, it happens without the user seeing it, and with the shortest possible scope, so the lock is released as soon as possible,’ says Van Gend. Here, too, it is important to understand what you are doing. Van Gend, for example, is much less enthusiastic about the execution policies added to the standard library in C++17. These allow a series of basic algorithms such as find, count, sort and transform to run in parallel by simply adding an extra parameter to the function call. ‘But that only works for some academic examples; in practice, it will not work,’ Van Gend says. ‘These APIs are based on a wrong basic assumption. And in the C# API they have made the same mistake again.’

The problem is that with this approach you can only parallelize separate steps. ‘It encourages parallelizing each operation individually. With each operation you re-partition your dataset, do something, then merge it again and go on to the next operation. It is always parallel, sequential, parallel, sequential, and so on. That is conceptually very clear, but each time you have to wait until all the threads are ready before you can continue. That is a complete waste of time. With a library such as OpenMP, on the other hand, the entire set of operations is simply distributed over the threads, so you don’t have to wait unnecessarily.’
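The difference can be sketched with plain `std::thread` (an illustration of the concept only; the helper, the operations and the function names are invented): the staged version forks and joins once per operation, while the fused version lets each thread run the whole chain over its own slice, with a single join at the end.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Helper: run fn(i) for i in [0, n) across num_threads threads, then join.
template <typename Fn>
void parallel_for(int n, int num_threads, Fn fn) {
    std::vector<std::thread> pool;
    int block = (n + num_threads - 1) / num_threads;
    for (int t = 0; t < num_threads; ++t)
        pool.emplace_back([=] {
            for (int i = t * block; i < std::min(n, (t + 1) * block); ++i)
                fn(i);
        });
    for (auto& t : pool) t.join();
}

// Staged: every operation is parallelized separately, with a full
// fork/join barrier in between -- parallel, sequential, parallel, ...
std::vector<double> staged(std::vector<double> v, int nt) {
    int n = static_cast<int>(v.size());
    parallel_for(n, nt, [&](int i) { v[i] *= 2.0; });  // barrier here
    parallel_for(n, nt, [&](int i) { v[i] += 1.0; });  // and here
    return v;
}

// Fused: each thread runs the whole chain on its own slice, so no thread
// waits at an intermediate step -- the OpenMP-style worksharing idea.
std::vector<double> fused(std::vector<double> v, int nt) {
    int n = static_cast<int>(v.size());
    parallel_for(n, nt, [&](int i) { v[i] = v[i] * 2.0 + 1.0; });
    return v;
}
```

Both versions compute the same result; the fused version simply pays the thread fork/join cost once instead of once per operation.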

'The funny thing is that Microsoft also played a large part in the Par Lab at UC Berkeley. This has resulted in a fairly large collection of design patterns for parallel programming, which I deal with extensively in the training course.'

The GCC compiler doesn’t provide any support for these parallel functions. Visual Studio does, because the additions originally came from Microsoft. ‘The funny thing is that Microsoft also played a large part in the Par Lab at UC Berkeley. This has resulted in a fairly large collection of design patterns for parallel programming, which I deal with extensively in the training course. Microsoft has shown that they understand exactly how to do it properly.’

This article is written by Pieter Edelman, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’ they responded with an 8.6 out of 10.

Accurate machines can’t exist without good thermal management

thermal design and thermal management trainer Theo Ruijl
‘In most companies, thermal design and thermal management are still in their infancy,’ says Theo Ruijl, CTO of MI-Partners and ‘Thermal effects in mechatronic systems’ trainer. Ruijl sees this as a serious deficiency. ‘You can’t build a precise machine if you neglect the thermal aspects.’

The largest errors in a machine are caused by vibrations and fluctuations in temperature. If you don’t have both under control, you can say goodbye to an accurate system. Unfortunately, not all designers are aware of this. With a leaf spring you can support a system in a statically determinate manner, but many engineers are unaware that such a leaf spring is also a great thermal insulator. ‘Many developers lack knowledge about thermal effects in mechatronic systems,’ says Theo Ruijl, CTO of MI-Partners and trainer of the ‘Thermal effects in mechatronic systems’ (TEMS) course.

In Dutch and Belgian high tech there is a lot of knowledge about dynamics, about good design, about damping. After all, generations of mechanical engineers have grown up with the construction principles of great teachers like Rien Koster and Wim van der Hoek, with his ‘Des Duivels Prentenboek’ (The Devil’s Picture Book). But in most companies, thermal management is still not well covered.

‘Any engineer seeking to achieve a high level of accuracy will sooner or later be confronted with thermal effects,’ says Theo Ruijl. Ruijl has been working on thermal effects in mechatronic systems for two decades. ‘Temperature variations, drift, dissipation in an actuator, energy absorption of electromagnetic waves in a lens or mirror: all of these things have an impact on the performance of a system. Of course, you can ignore them, and, for a while, things might work well. But if a competitor, who has good knowledge of thermal aspects, suddenly appears, he will overtake and leave you far behind.’


‘The technical universities produce excellent graduates and post graduates in dynamics and control technology, but they do not train students in the thermal effects in mechatronic systems,’ says Theo Ruijl, thermal effects trainer.

In the high tech industry, developers are struggling with thermal distortions and inaccuracies. ‘At ASML these challenges are currently greater than the dynamic ones,’ says Ruijl. ‘An enormous amount of light is being pumped into these machines. It is inevitable that the wafer heats up and deforms as a result. If that happens nicely and evenly, you can still simulate and predict it. Unfortunately, all kinds of non-linear effects occur. Then modelling and compensation become very difficult.’

Thermo Fisher also runs into the subject. Ruijl: ‘Many users of electron microscopes are in the life sciences. They research biological processes that they literally freeze in order to study them properly. That means dissolving the samples in water and cooling the water down past the freezing point. The ice must be amorphous, not crystalline, because otherwise you can’t see anything under the microscope. You only get that kind of structure if you cool the sample at lightning speed, at 100,000 to one million kelvin per second. The frozen sample then needs to be observed under the microscope. The preparation and positioning pose a huge thermal challenge. How do you keep the sample at the right temperature in high vacuum? And what effect does that have on the sensitive optical and mechatronic systems around it?’

The big loss

The fact that many companies still lack in-depth thermal knowledge is largely due to a gap in education. ‘The technical universities produce excellent graduates and postgraduates in dynamics and control technology, but they do not teach thermal effects in mechatronic systems,’ says Ruijl firmly. He himself studied with TUE professor Piet Schellekens. ‘Since Piet Schellekens retired fifteen years ago, thermal design and metrology have been neglected. Nobody has taken these issues seriously, not even in Delft or Twente. That is a big loss. There are so many fundamental challenges in this domain. It really requires a dedicated full-time professor.’

With the arrival of Hans Vermeulen a couple of years ago, there has been a part-time professor at the Eindhoven University of Technology who has put the subject on the agenda. For his Mechatronic Systems Design group, however, advanced thermal control is one of many topics. A large part of the permanent staff of Schellekens has since left. ‘In Germany the subject is more on the map,’ says Ruijl. ‘There is a large market for machine tools in which thermal effects play a major role. German machine tool builders and knowledge institutions understand each other well on this point. They run various research projects at the Fraunhofer institutes. TEMS research programs are also running in Switzerland and Spain.’

Recycling

Despite the gap in university education, there are quite a few thermal specialists in industry. They are all self-made people who learned the trade in practice. For Ruijl, that process started at Philips almost twenty years ago. ‘We have long known exactly how to model dynamics and control technology and how to integrate them into machines. In a typical design process, different specialists sit at the table so that you can develop a machine with input from all disciplines. In the old days at Philips, it sometimes happened that someone found out at the end of such a process, with a complicated finite-element calculation, that thermally it didn’t work. That is why we started to develop a competence in this field with a focus on mechatronic systems.’


To calculate thermal effects, engineers reuse mathematical techniques from dynamics. This resulted in the concept of thermal mode shapes.

Right from the outset, the specialists discovered that the techniques they already applied in dynamics can also be used in the thermal domain. ‘In dynamics and control engineering, we use state-space models, in which eigenfrequencies and mode shapes are important quantities,’ Ruijl explains. ‘Such a model is nothing more than a set of differential equations. Thermal effects are also described with differential equations. And to the mathematics it doesn’t matter whether it describes a mechanical-dynamic or a thermal-dynamic system.’

It is not exactly the same, though. In the thermal domain there are no objects that behave like a mass-spring system; the temperature does not overshoot but gradually settles back, like a first-order system. Take a metal plate: if you heat it in the middle, it cools down as soon as you remove the heat source, but it never gets colder than its environment. The temperature distribution as a function of time can be modelled perfectly.
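This first-order behaviour can be captured in a simple lumped model, a textbook relation rather than something from the article: a body with heat capacity $C$, coupled to its environment through a thermal resistance $R$, relaxes exponentially and without overshoot.

```latex
% Lumped first-order thermal model with heat input P(t):
C \frac{dT}{dt} = \frac{T_{env} - T}{R} + P(t)
% With the heat source removed (P = 0), the temperature relaxes as:
T(t) = T_{env} + \left(T_0 - T_{env}\right) e^{-t/\tau},
\qquad \tau = R\,C
```

There is no second derivative term (no thermal analogue of mass in a mass-spring system), which is exactly why the temperature decays monotonically toward $T_{env}$ instead of oscillating.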

'It is quite unique how we, here in the Netherlands, look at thermal effects from a mechatronic design approach.'

Ruijl and his colleagues recycled the mathematical techniques from dynamics. ‘We still use tools from, for example, Ansys or Mathworks to perform the calculations. Analyses of mechanical vibration shapes have long been included in those packages; the thermal mode shapes have not, even though the technology is already there. When we started about twenty years ago, we asked Ansys if they could give us access to this feature. It took a long time, but now they have included a button for it. That shows how unique it is that we, here in the Netherlands, look at thermal effects from a mechatronic design approach. It is really different from a pure physics approach, which often revolves around thermodynamic processes. We link thermal effects to mechatronic systems.’

Consciously incompetent

In order to get the theme fixed in the way of working of its employees, Philips developed a special training course: Thermal effects in mechatronic systems. The three-day course has since found refuge at Mechatronics Academy and is being marketed by High Tech Institute. Alongside Rob van Gils (Philips), Marco Koevoets (ASML) and Jack van der Sanden (ASML), Theo Ruijl is one of the trainers.

‘Of course, you can’t give a full training covering all topics in only three days,’ Ruijl admits. ‘The audience is too broad for that; people from different technical backgrounds come to the training course. Some have never done anything with TEMS, others are already quite experienced. Some are engineers, others are control engineers.’

Dutch specialists look at thermal effects from a mechatronic design approach. That is unique in the world. For Ruijl, that started years ago with his PhD research supervised by Jan van Eijk and Piet Schellekens.

On the first day, the students receive an introduction to the physics background. ‘Heat transfer by radiation, conduction and convection,’ Ruijl sums up. ‘How do you deal with it? Many facts, tips and tricks. Then we go deeper and do simulations with Matlab and Simulink.’ That lays the foundation. ‘The goal is for everyone to speak the same language afterwards.’

Day two deals with measurement techniques. ‘Measuring temperature is a skill in itself,’ emphasises Ruijl. ‘For a start, there are many different sensor types. But how do you measure accurately? And where? Am I measuring the temperature of the object itself or of the lamp shining on it? Together with Jack, I once developed a system to control the water temperature very precisely. With a small coil in the stream we were able to warm it up very quickly and very accurately. We then made a nice setup for an exhibition, with beautiful Perspex tubes so that everything could be seen clearly. Unfortunately, we couldn’t get the temperature stable anymore. We must have done something wrong, but what? It was so bad that the temperature fluctuated as people walked past. In the end, it turned out that the ceiling lighting in the hall was influencing the sensor by radiation through the transparent Perspex. You only make a mistake like that once,’ laughs Ruijl.

The students also do modelling themselves, using Matlab, although this particular tool doesn’t have a special toolbox for thermal effects. ‘We also deal with a cryogenic example as a practical case,’ says Ruijl. ‘How do you measure at, for example, 77 kelvin? Which materials are best to use? Cryogenics is important for scientific experiments and for builders of electron microscopes.’

'Every design group should include a thermal specialist.'

What is the lesson for the TEMS students? ‘The most important thing is that they understand the language,’ Ruijl replies. ‘We also make them aware of the issues that they have to pay attention to and that they need to take into account. Consciously incompetent. That is very valuable, because manufacturers with that knowledge can catch mistakes at an early stage by looking at the project again or by getting in a specialist. Every design group should always include a thermal specialist.’

This article is written by Alexander Pil, tech editor of High-Tech Systems.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with a 9.0 out of 10.

Technical experts can also be successful in advisory sales

Consultative selling and communication skills trainer
Engineers and technical professionals are used to thinking of their advisory role as related to content alone. But if they want the customer or stakeholder to take action on the advice they offer, something else is needed: acceptance from the person for whom the advice is intended. That’s where sales skills come in. Claus Neeleman trains technical experts in successful advisory sales. ‘Once you understand how the sales process works, you can advise much more effectively – both external and internal customers – leading to a positive effect on your company’s results.’

Consultative selling, or advisory sales, is an effective sales method and therefore receives a lot of attention. According to trainer Claus Neeleman, this attention is justified. ‘Advisory sales is the best thing for the customer: it is about finding the best solution for that customer and matching it to your own interest, namely the margin on the products or services that you sell. Advising and selling are therefore both important. The trick is to create value for the customer. That value is in good advice that yields more than what the customer pays for it. In engineering companies, engineers have an important, supporting sales role, because they know exactly what matters in terms of technical content. To sell something in the high tech environment, it is the content that sells, not the sales talk.’

'Advisory sales is the best thing for the customer, it is about finding the best solution for that customer and matching it to your own interest, namely the margin on the products or services that you sell.'


Claus Neeleman trains technical experts in successful advisory sales.

'The trick is to create value for the customer. That value is in good advice that yields more than what the customer pays for it.'

Neeleman has a friendly personality and an intelligent glance. He is qualified as an occupational and organisational psychologist. He has worked at an assessment agency, at a reintegration agency, and amongst other things as a regional manager. ‘When you carry out an assessment, you analyse and test people, which I thought was super fun and still do. You find out how to see people’s qualities and pitfalls, with the aim of helping them improve. At the reintegration agency, that didn’t always help, because in that environment commerce plays a major role. This sometimes results in moral dilemmas. Do you help the person you have to put a lot of energy into, or the person who doesn’t cause much bother? I did this type of work mainly to help people move forward in their careers and their lives, so such choices were not what I wanted. That’s why I decided to become a trainer. Of course, I also took training courses myself and discovered that it is a fascinating field. Training is something positive. People improve after taking a training course, they like it and are enthusiastic afterwards. That gives me energy. And I find it more fun to talk and to be busy with people than to write reports at a desk.’

Lots of practice

Neeleman has been working as a trainer for some eighteen years. He focuses mainly on practical skills. ‘Many of the things that I tell you come from psychology together with insights from the field about how you can influence people and what the effects are. The content of a conversation can therefore be the same, but the strategy of transferring that content to another may differ. The best approach depends upon the situation and the people in question. I firmly believe that practice is the best way to learn how to sell, for example, in an advisory capacity. The theory behind it is not complicated at all, but to better address conversations with customers you first have to experience what it is like when you try out different behaviour.’

'To better address conversations with customers you first have to experience what it is like when you try out different behaviour.'

Teacher of the year

For several years now, Neeleman has been giving two training courses at High Tech Institute: Effective Communication Skills for engineers and Sales skills for engineers. In 2016, he was appointed trainer of the year by the High Tech Institute, with an evaluation score of 9.1 out of 10. Trainees called him impressive, inspiring and empathetic as a teacher and say that he is excellent at explaining things and tailors the course right to their needs.


In 2016, Claus became High Tech Institute’s ‘Teacher of the year’.

'To every advice moment belongs a sales moment.'

That is quite special, because selling is not the favourite job of technology professionals…
‘Correct. They also often think that they only give advice. But that is incorrect. What is not seen is that they use less effective strategies in conversations with the customer. The result, however, is noticeable: as soon as the customer puts them under pressure, they give a discount that is not in their interest. Or they are too customer-friendly and forget to make agreements about the remuneration for their consultancy work. Or they are unclear about the costs. During an ongoing contract, a sales moment belongs to every advice moment. You have to pay attention to that.

But also, in the initial phase of contact with the client, a technician must be sufficiently convincing to make the sale of a service or product succeed. How do you ensure that you come across well and generate trust? How do you give the customer the idea that you are strong enough to carry out the project? You have to create trust and adapt your communication style to the customer and what is important to him/her. Both with regard to content and personal interaction. And if you work together with an account manager you have to learn to speak one another’s language, so that you know what your colleague’s intentions are and what the other person is doing in the sales process. The sales person must of course also know when the content is important.’

That sounds pretty difficult.
‘In reality it’s not such a big deal! The theory is a tool, a model that tells you which steps to take. Analytical people, such as technicians, can handle this very well. For example, the theory is that you yourself often generate resistance from the customer. This happens, for example, if you are more concerned with your own goals than with those of the customer. Or if you put too much pressure on them. That is what we call counter-behaviour. For example, if you constantly know better than your customer, they will start to object. And if you are too dominant in the speed at which you talk about things or try to enforce a decision, this also provokes resistance. Counter-behaviour doesn’t help you sell your solution. But if you connect with your customer and enter into a constructive dialogue, you will build things. The customer then moves on with you much more smoothly. If you encounter resistance during a conversation, you can counter it by adjusting your behaviour. For example, by leaving more of the pace of the conversation to the customer and by clearly putting his/her interests first.’

Do you yourself have to change in order to sell better?
‘That’s not necessary at all. You just remain yourself, you only choose to exhibit different behaviour in certain situations in order to be more effective in your performance. If you are aware of the way a sales process progresses and you know what works, you can determine much more effectively what effect you want to have on others. It is not about right or wrong. You can reach your goal in many ways. But if you want to bring your story on stage successfully, it will certainly help if you know how to carry out advisory sales. And you can easily do that without forcing yourself into a situation that you don’t like.

'As soon as you understand the sales process, you can advise more effectively.'

What is the secret of a successful advisory sales conversation?
‘You need two ingredients: a good, sound story and acceptance by the customer. The latter refers to ensuring that the customer can accept your advice. You do this by raising the questions, feelings and doubts that could prevent the customer from accepting your product or service, and giving good answers to them. Interviewing your customer based on the signals s/he gives you is not easy for technicians, because technicians deal mainly with facts and less with emotions. But, with a little practice, they can learn how to do this.’

How does such a conversation proceed?
‘The first phase is the contact phase. Technicians often find it difficult to get through this part and prefer to go straight to the content. But the first phase is important for generating trust and for creating a good personal relationship. In this phase, you also decide what you are talking about. You show that you have thought about the customer’s problem and you indicate that you already have a few ideas. In the contact phase you also agree on your way of communicating with the customer. If you have the same communication style, that’s easy. A customer can also be very directive and want to decide quickly. As a technician you have a tendency to look at a problem from all sides, but this type of customer gets irritated by that. So, if you find that time and money are important goals for a customer, then you have to respond to that information. You will then get more space for the content later in the conversation.

In the second phase you will make an inventory, thus mapping out the customer’s needs. You have proven effective methods for that. As a result, the customer recognises the scope of his/her problem and wants to take action. You cannot achieve that by saying that they have a big problem, you do that by asking questions. This leads to a sense of urgency, the idea that something has to be done.

The third phase is the presentation of your advice, where you show your skills and influence people. In the fourth and final phase you help the customer to come to a decision by taking steps together in the decision process. This is the actual advice work.

All in all, an advisory sales conversation is more about the customer than about you. The customer is king.’

Tips from Claus

‘Be happy with critical questions or reactions, because this is the moment when you have contact about the content. When this happens, don’t try to be smarter or question the question; instead, go deeper into it, because there is a fear or worry hidden behind such a question. So, take a step towards the customer by using criticism as positive input. After all, the customer knows the most about the problem for which s/he needs your advice. The beauty of this is: what you learn during this training course, you can also apply in other situations inside and outside your company. Acceptance comes with every deal. It’s all about influencing.’

This article is written by Mathilde van Hulzen, tech editor of High-Tech Systems.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with an 8.9 out of 10.

Passive damping: increasingly part of a high tech engineer’s standard toolset

Trainer High Tech Institute: Kees Verbaan
Passive damping has been a standard tool for civil engineers and architects for quite some time. Mechanical engineers designing for micron accuracy, however, typically tried to avoid the use of damping. Now that the high tech world has entered the domain of sub-nanometer precision, mechanical engineers are increasingly discovering that passive damping is an effective medicine for contemporary precision ailments.

In recent years, passive damping has become more and more a standard tool for precision engineers. It is no coincidence that the five-day training course Design Principles for Precision Engineering devotes a whole day to this subject. Due to the increasing importance of passive damping for systems with subnanometer positioning requirements, High Tech Institute partner Mechatronics Academy has developed a special training course on this topic. Top experts Hans Vermeulen and Kees Verbaan teach this new course, Passive damping for high tech systems.

Hans Vermeulen first came into contact with passive damping at Philips CFT in the late nineties. Since mid-2000 he has worked at ASML, where this technology has meanwhile been implemented in various subsystems to achieve sub-nanometer precision. He is also a part-time professor at TU Eindhoven for one day a week. Unhindered by the daily hectic pace in Veldhoven, Vermeulen is able to focus, among other things, on passive damping. The fact that his lectures in this field started several years ago shows that passive damping is very much in the spotlight.


Hans Vermeulen explains that ASML is increasingly using passive damping to achieve sub-nanometer precision.

Colleague-trainer Kees Verbaan received his doctorate in robust mass dampers for motion stages in 2015. He works for the NTS Group, a first-tier supplier for high tech machine design. In his role as system architect, Verbaan sees passive damping technology as becoming well established in many high-end companies.


System architect Kees Verbaan, who obtained his doctorate in robust mass dampers, now sees his professional field becoming well established.

In the world of gross dimensions (centimetres instead of nanometres), passive damping is encountered everywhere. Put your finger on a vibrating tuning fork or nail a large rug to the wall and you are readily applying passive damping. The automobile industry frequently applies it to car doors. A layer of anti-drumming film renders a good sound experience. When you close the door, you don’t hear the sheet metal resonate annoyingly: the damping layer provides the gentle sound that we associate with quality. The energy doesn’t stay in the material as a continuous vibration but is transferred into heat via a layer of bitumen on the inside of the door. A rather extreme example of a passive damping design is found in Taipei 101, the tallest building in the Taiwanese capital it is named after. Because earthquakes and typhoons occur quite frequently there, the 101-storey building is equipped with a tuned mass damper: a huge spherical mass of more than eight hundred tons that hangs at the top of the building on four cables and is provided with large viscous dampers. In the event of vibrations caused by earthquakes or severe storms, the sphere moves out of phase, absorbing a large part of the building’s kinetic energy. ‘Similar techniques are now also entering high tech,’ Hans Vermeulen says. ‘In recent years, damping layers – so-called constrained layers – have been applied in high-precision stages, and tuned mass dampers are being used to suppress disturbing vibrations at specific frequencies to increase the accuracy of the entire system.’
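The effect of such a tuned mass damper can be illustrated with a minimal two-mass frequency-response calculation. The Python sketch below uses hypothetical numbers (not Taipei 101 or ASML data): a lightly damped main system gets an auxiliary mass of 5% of its own mass, tuned to the main resonance and fitted with a viscous damper, which sharply lowers the resonance peak.

```python
import math

# Lightly damped main system (hypothetical values).
m1, k1, c1 = 100.0, 1.0e6, 20.0      # mass [kg], stiffness [N/m], damping [Ns/m]
w1 = math.sqrt(k1 / m1)              # main natural frequency [rad/s]

# Tuned mass damper: 5% of the main mass, tuned to the same frequency.
m2 = 0.05 * m1
k2 = m2 * w1 ** 2
c2 = 0.1 * 2.0 * math.sqrt(k2 * m2)  # 10% of critical damping

def amplitude(w, with_tmd):
    """|x1/F| of the main mass at angular frequency w."""
    if not with_tmd:
        return abs(1.0 / complex(k1 - m1 * w ** 2, c1 * w))
    z2 = complex(k2 - m2 * w ** 2, c2 * w)          # TMD branch impedance terms
    z12 = complex(k2, c2 * w)                        # coupling terms
    denom = complex(k1 + k2 - m1 * w ** 2, (c1 + c2) * w) * z2 - z12 ** 2
    return abs(z2 / denom)

# Sweep around the main resonance and compare the peak responses.
freqs = [w1 * (0.7 + 0.6 * i / 400) for i in range(401)]
peak_bare = max(amplitude(w, False) for w in freqs)
peak_tmd = max(amplitude(w, True) for w in freqs)
print(peak_bare / peak_tmd)  # the TMD reduces the peak considerably
```

The single sharp resonance splits into two much lower, well-damped peaks, which is exactly the trade the building (and a precision stage) wants to make.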

In high tech mechanical engineering, the application of passive damping was avoided and worked around for a long time. This is mainly because designers were able to reach their goals (and often still can) with the traditional approach: using relatively stiff structures in metal or ceramics, and metal springs, to get predictable behaviour.

Plastics, rubber and composites

Although the use of plastics, rubber materials and composites can significantly reduce unwanted vibrations, their application has never been that popular, because the hysteretic behaviour of these materials potentially makes precision systems unpredictable. Another reason is that, for a long time, analytical tools such as finite element analysis and the necessary computers didn’t have sufficient computing power to properly predict the influence of passive damping in structures made from such exotic materials. In recent years, however, things have changed.

It may be a truism, but it’s still very true: in the world of high tech systems, the demands for precision are constantly increasing. Semiconductor manufacturers want lithographic machines that are able to make patterns in a reliable way at sub-nanometer precision. Biotechnologists need microscopes that allow imaging of DNA structures at the atomic level, and medical professionals rely on diagnostic equipment with, if possible, molecular resolution. In all sectors, demands are rising to such an extent that mechanical designers and architects can no longer rely on their standard toolset.

'In the traditional toolset of a design engineer there used to be three drawers of tools. Now it appears there are six.'

It appears that passive damping can make a very significant contribution here. The approach has proven its effectiveness, also in high tech equipment. ‘The nice thing about damping is that a whole new box of tricks is being used,’ Verbaan says. ‘Precision engineers really benefit from a few additional pieces on their chessboard. I like that, because in a designer’s traditional toolkit, there were only three full drawers. Now it turns out that there are three more, full of new types of tools that he didn’t use before.’ He underlines that damping is an extension of the solution space, not a replacement. ‘If you don’t master traditional design, the additions will not bring you much.’

‘When requirements were less demanding, designers were used to the predictable solution space consisting of masses and springs,’ Vermeulen says. ‘In traditional design you deal with linear relationships, such as those between force and position or stress and strain. To limit the negative effect of amplification at resonance, designers make sure that the natural frequencies in the system are sufficiently high. That translates into light and rigid designs, using low-mass solutions and stiff materials and geometries.’

Monolithic leaf spring

Hooke’s law states a linear relationship between force and position, or stress and strain, for linear elastic materials. This means that an elastic material returns exactly to its original position, which is nice, because as long as you know the forces that act on the system, you can accurately predict the position. Take the example of a monolithic leaf spring: a solid block of metal that has been machined with holes and slots into a mechanism based on masses and springs. Such a structure exhibits reproducible linear behaviour, free from hysteresis. From a control perspective, however, this approach can create problems when higher precision is required.


Typical construction with integrated tuned mass damping. Photo: Janssen Precision Engineering.


Example of a monolithic leaf spring. A solid block of metal is machined with holes and slots into a mechanism based on masses and springs. Such a structure exhibits reproducible linear behaviour but has the disadvantage that it ‘sounds like a clock.’

In this type of design, the control system suffers from long-lasting vibrations. Resonances may be excited by forces within the system itself, such as imposed motion profiles, but also by external influences, for example floor vibrations or air displacement. Without damping, these vibrations remain in the system for a long time: the structure cannot get rid of the vibrational energy.

Mechanical engineers tend to say: ‘it sounds like a clock,’ and in this case that is not a positive observation. High-frequency resonances are generally difficult to get rid of via active control. That is why system designers always try to keep these types of resonances outside the area of interest. This means that the first natural frequency is typically designed roughly five times above the control bandwidth. Hence, the control system is not affected in the lower frequency range. Vibrations caused by disturbances still occur, but their effect does not limit performance.
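For a simple mass-spring approximation, this 'five times the bandwidth' rule of thumb translates directly into a minimum stiffness. A back-of-the-envelope sketch in Python, with hypothetical numbers not taken from the article:

```python
import math

# Hypothetical example: a 10 kg stage with a 100 Hz control bandwidth.
mass = 10.0           # moving mass [kg]
bandwidth_hz = 100.0  # control bandwidth [Hz]

# Rule of thumb: first natural frequency about 5x above the bandwidth.
f_min = 5.0 * bandwidth_hz                   # 500 Hz

# For a mass-spring system f = (1/(2*pi)) * sqrt(k/m), so:
k_min = mass * (2.0 * math.pi * f_min) ** 2  # required stiffness [N/m]
print(f"first eigenfrequency >= {f_min:.0f} Hz "
      f"-> stiffness >= {k_min:.2e} N/m")
```

Because the required stiffness grows with the square of the frequency, every step up in bandwidth quickly drives the design toward the 'light and stiff' limit the article describes.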

If the demands for accuracy increase, however, designers using the traditional approach will be forced to achieve ever higher natural frequencies within the design. ‘The demands are increasing,’ says program manager Adrian Rankers of Mechatronics Academy. ‘That will come to an end, because at some point it is no longer manufacturable.’

Aversion

The traditional approach was sufficient for high tech system designers for many years. But in their search for increasing precision, all high-end system suppliers are now looking at the possibilities of implementing passive damping. Vermeulen: ‘I dare to say that it is becoming standard in the high tech systems industry. Not everyone is familiar with it, but it is expanding.’ Verbaan: ‘The big players such as ASML, Philips, TNO and ThermoFisher have the time to develop their knowledge and conduct research.’

Vermeulen: ‘Damping means that you deviate from the linear elastic behaviour of materials as defined by Hooke’s law. This is because the material converts part of the energy into heat. If you plot force against elongation in a graph, the dissipation is expressed in the hysteresis loop. The area of this loop is proportional to the dissipated energy: the damping that you can provide to the structure.’ In addition, the stiffness and damping properties of rubber are temperature- and frequency-dependent (for specialists: linear viscoelastic models can be used for rubbers). As a result, these types of damping materials have been avoided for a long time: a system can be in different states under the same load conditions. Vermeulen: ‘That means uncertainty in position.’ Precision engineers have an aversion to this. ‘With damping you deviate from the linear relationship. You pass through a hysteresis loop when the force increases and decreases again, and you don’t know exactly how, since not all the forces that affect the system are exactly known. Often there are disturbances from outside, and then you can end up in a position that was not predicted beforehand. We have actually sought to avoid that uncertainty for a long time. As a result, everyone in the high tech systems sector has avoided damping and has designed things traditionally, using masses and springs. But at a given moment, the possibilities come to an end.’
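The statement that the loop area corresponds to the dissipated energy can be checked numerically. The sketch below (Python, hypothetical values) traces one force-elongation cycle of a spring with viscous damping; the enclosed area, the closed integral of F dx, matches the analytical dissipation per cycle, pi*c*omega*X^2.

```python
import math

# Spring with viscous damping, driven sinusoidally (hypothetical values).
k = 1.0e4   # stiffness [N/m]
c = 50.0    # damping coefficient [Ns/m]
X = 1.0e-3  # motion amplitude [m]
w = 100.0   # angular frequency [rad/s]

def loop_area(n=100000):
    """Integrate the force-elongation hysteresis loop over one period:
    area = closed integral of F dx = integral of F * dx/dt dt."""
    T = 2.0 * math.pi / w
    dt = T / n
    area = 0.0
    for i in range(n):
        t = i * dt
        x_dot = X * w * math.cos(w * t)                   # velocity
        force = k * X * math.sin(w * t) + c * x_dot       # spring + damper force
        area += force * x_dot * dt
    return area

dissipated = loop_area()
analytical = math.pi * c * w * X ** 2  # energy lost to heat per cycle [J]
print(dissipated, analytical)
```

The purely elastic (spring) part of the force integrates to zero over a cycle; only the damping term leaves a net area, which is exactly the energy converted into heat.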

Venom

The venom, however, is in the above-mentioned hysteresis loop. It is more complicated to predict behaviour correctly, because, as mentioned, the system can be in different states. This makes operating and controlling complex in environments where floor vibrations and small variations in air pressure or temperature cause major disruptions. A soft exhalation over a wafer stage already produces a standing wave with an amplitude of several tens of nanometers, while the stage needs to be controlled at sub-nanometer level. Over the last few decades, the pursuit of the holy grail of completely predictable behaviour of guideways has been expressed in avoiding friction as much as possible – even though friction also provides energy dissipation, hence damping. ‘In many applications, Coulomb friction is not desired,’ Vermeulen says. ‘Also, rolling elements don’t work in every situation. That is why air bearings are popular: they have hardly any friction.’ IBM already used air bearings in its hard drives in 1961. Lithographic equipment developed in the sixties and seventies at the Philips Physics Laboratory was equipped with virtually frictionless oil bearings, and these days air bearing technology is used in multiple systems. Vermeulen: ‘With the classical box of tricks – designing frictionless guideways, avoiding play, and applying high-stiffness springs with limited mass – we were able to make the behaviour predictable for a long time. But for nanometer applications and beyond, this is no longer sufficient.’

Wobbly pizza disk

Until recently, the classic approach was fine for designing motion stages for wafer steppers and scanners. By using structural metals and ceramics, such a stage can be made lightweight and stiff. The natural frequencies are high enough not to be limiting for high-bandwidth control. However, the requirement for subnanometer precision makes the introduction of more rigorous steps necessary.

'At the nanometer level it is as if you have to keep a wobbly pizza disk still with your hands.'

Verbaan, during his PhD, investigated the influence of passive damping on a positioning system for 450-millimetre wafers. Such a stage has outer dimensions of 600 by 600 mm. ‘At the nanometer level it is as if you have to keep a wobbly pizza disk still with your hands,’ says Verbaan. He compared various materials, and investigated and optimized the influence of mass distributions on performance with finite element analyses.

Such a large system is susceptible to multiple resonance frequencies. To be able to control the stage accurately, these resonances must be suppressed. ‘For one frequency it is clear how that is done, and you can also put that into a simple model. But if you have multiple resonance peaks across a broad frequency band, that is virtually impossible. Then you get a model that is too complex to handle.’

That is exactly what engineers encounter in practice. The first ‘hurdle’ that limits system performance is the first natural frequency, the frequency at which an object starts to vibrate violently as the excitation frequency is increased. The traditional approach is to try to increase this frequency. If the means for this are exhausted, damping can help to suppress the resonance amplitudes. The first eigenfrequency of a square wafer table is, for example, the torsion mode, in which two pairs of opposite corners move in phase. But at higher frequencies everything starts to rattle, due to the numerous parts and components that are attached to the table, such as connectors and sensors. ‘Multiple small masses that vibrate at kilohertz frequencies. They will ultimately determine the dynamic behaviour. You cannot solve this via active filtering in the control system, because there are so many of them. With passive damping, however, you can solve all of that,’ says Vermeulen.

Hans Vermeulen shows on a graph how damping can reduce a resonance peak.

Verbaan: ‘What helps is that damping materials such as rubbers and liquids, and the dampers you design with these materials, typically behave very favourably at those high frequencies, primarily because of the frequency-dependent material properties. At low frequencies, they behave like a low-stiffness spring, and therefore give in a little, but at higher frequencies, they become viscous.’ Vermeulen and Verbaan’s training course makes it clear that, although the field of damping can be made extremely difficult, there are also very good rules of thumb and several very useful design principles. Verbaan: ‘Our goal is to outline the entire palette of options and ensure that students attending the course can get to a solution using the right approach. You can let modern computers calculate for days or even weeks, but then you have to be a real specialist. We want to provide the course participants with various possibilities for applying damping. They are taught the background of modelling, and also the simple approach to the problem, so that they can apply damping correctly.’

‘Potential students are people with a design principles background on the one hand,’ says Rankers. ‘They want to apply damping in practice. On the other hand, system architects will also be interested, so that they are aware of the possibilities that damping can offer.’


Kees Verbaan draws a motion stage that needs to be kept steady in the vertical direction. All kinds of forces act on such a table, varying from horizontal motors that accelerate it to vertical actuators that keep the wafer on the table at the correct height. In the first vibration mode, two opposite corners move up or down simultaneously, while the other corners move in the opposite direction. The result can be in the order of tens of nanometers, while the stage requires subnanometer position control.

Vermeulen and Verbaan underline that passive damping is not a ‘miracle oil’. An integral design approach is indispensable. ‘I have heard engineers saying: leave that mistake in for now, we’ll solve it later with controls,’ says Verbaan. People sometimes come to him with systems that don’t achieve the desired performance and ask whether passive damping can fix it. Verbaan: ‘Sometimes this is dealt with too easily. You cannot simply forget the basics of sound mechanical design. It all starts with lightweight and stiff design, which indisputably remains necessary, also for the proper functioning of damping. The palette of options is getting bigger, but damping is not a replacement.’ In the course ‘Passive damping for high tech systems’, Verbaan and Vermeulen explain multiple damping mechanisms in detail, such as material damping, tuned mass and robust mass damping, constrained layer damping, and eddy current damping. Starting with damping implementations in other application areas, such as civil engineering and automotive, the focus is on the design, modelling and implementation of passive damping in high tech systems. Stan van der Meulen, co-trainer of the course, will focus on the application of viscoelastic damping in a semiconductor wafer stage.

This article is written by René Raaijmakers, tech editor of High-Tech Systems.

Recommendation by former participants

By the end of the training participants are asked to fill out an evaluation form. To the question: 'Would you recommend this training to others?' they responded with a 9.0 out of 10.

‘Applied optics’ training shifts focus to demos and experiments

optics training
Experts from TNO and trainers from T2prof are putting the finishing touches to the renewed ‘Applied optics’ training courses, which commence in February in Delft and in Eindhoven. The focus is shifting: there is less hard maths, giving way to demos and hands-on experiments.

The ‘Applied optics‘ course from T2prof originates from the Philips Center for Technical Training. High Tech Institute launches the course in an exclusive partnership with T2prof. The first edition dates back to 2003. Shortly before that, ASML had indicated to Philips CTT that it needed a course to give electronic engineers, mechanical engineers and chemical engineers a better understanding of the optical R&D world in which they needed to operate.

The idea behind ASML’s request was to prevent a Babylonian confusion of tongues within research projects. If non-optical engineers were to know more about lenses, reflection, refraction, collimators, lasers and similar, they would be able to work more effectively with optical specialists within the company. This is part of a growing trend in the high tech world. Companies derive their innovative strength less and less from individuals and more and more from multidisciplinary teams. If people are able to work together more efficiently, this in turn benefits development and innovative strength.

Experts from TNO and trainers from T2prof have spent the last few months updating the training to make the content more in line with the latest technological developments. Most of the difficult maths has disappeared. This has created room for more practical matters such as optical systems, aberration correction and the interaction between light and matter. The training course runs both in Eindhoven and in Delft (TNO).

A new timetable also applies to the TNO training in Delft. In Eindhoven the course is spread over sixteen afternoon sessions for a period of eight months. In Delft that becomes eight sessions spanning both afternoon and evening, spread over sixteen weeks. The training is known to be challenging, but in recent years it has, on average, been valued by the participants at more than 8 on a scale of 10.


The ‘Applied optics’ course in Delft will be organized upon request (eight sessions spanning both afternoon and evening). Also, enrollment is open for the ‘Applied optics’ course, starting twice a year in Eindhoven (fifteen afternoons).

Historical baggage

There are normally two types of trainee, says Jean Schleipen, who has been one of the three trainers in the ‘Applied optics’ course for the past four years. ‘One half readily absorbs the content and gets right down to the maths and the homework. They want to master the profession. Others need more of a global picture. Think of marketing people who find it enough to be roughly updated in all optical areas. My aim is, in addition to transferring knowledge, to fascinate and enthuse all participants for our beautiful and important field of science.’

In order to make the content stick and to place it in a broader context, Schleipen deems it essential to give students both historical baggage and deeper background information. ‘We can’t teach all the formulas and mathematical background to non-optical engineers. But it is useful if they know where these calculations come from. If they know that Ampère, Coulomb and Faraday made discoveries in the field of electricity and magnetism in the eighteenth and nineteenth centuries, and that afterwards a genius physicist, James Maxwell, came along who was able to describe electromagnetic forces mathematically. And that when this physicist was juggling with his formulas, all the pieces of the puzzle fell into place and a new physical constant dropped out, closely resembling the speed of light as measured at that time. He felt that there must be a connection. Maxwell’s equations still form the basis of modern optics.’

‘At the end of the nineteenth century,’ continues Schleipen, ‘Hertz discovered the photoelectric effect, followed by the rise of quantum mechanics at the beginning of the twentieth century. New insights showed that particles, such as electrons, could be described both as matter and as a wave. Conversely, light could behave both as a wave and as a particle. Students tell me that they appreciate this kind of knowledge.’

Schleipen also wants to give background information when explaining optical-physical phenomena. ‘If you use a lens to focus a laser beam, the spot has a finite width. But why? You can then indicate that it is due to diffraction and/or refraction of light. But I also want students to understand the cause of this phenomenon. That it stems from the wave character of light. I am firmly convinced that this helps to create understanding.’

'Small demonstrations can be very illuminating.'

In the new training course, developed in collaboration with TNO, Schleipen has made more room for demos. His experience is that small demonstrations can be very illuminating. ‘In a simple practical demo I let students determine the distance between the tracks of a CD, DVD or Blu-ray disc. We shine a laser on a DVD and the students measure the angles of the diffracted beams, just with a tape measure. And gosh: really, to an accuracy of less than a tenth of a micrometer!’ He laughs. ‘Such a test lasts ten minutes, everyone has woken up and we can move on to the next module.’
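For those who want to check the arithmetic behind this demo: the grating equation d·sin θ = m·λ relates the measured diffraction angle to the track pitch. A minimal sketch with assumed example numbers (a 650 nm red laser and a first-order angle of roughly 61.5 degrees, which a DVD would produce):

```python
import math

# Grating equation: d * sin(theta_m) = m * wavelength.
# Solve for the track pitch d from a measured diffraction angle.
wavelength = 650e-9   # red laser, assumed
m = 1                 # first diffraction order
theta_deg = 61.5      # angle measured with the tape measure, assumed

d = m * wavelength / math.sin(math.radians(theta_deg))
print(f"track pitch ≈ {d * 1e6:.2f} µm")  # DVD specification: 0.74 µm
```

With a tape measure you get the angle from the distances between the diffracted spots and the disc, which is why a ten-minute demo can reproduce a sub-micrometre number.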


Interesting natural phenomena are also discussed during the course. ‘Why do we see a rainbow, why are there sometimes two and why are the colors of these two arches inverted?’

A stable discipline

In our country, optical technology can look forward to renewed attention. For example, the Top Sector High Tech Systems & Materials published the Photonics National Agenda last July, and in 2018 substantial subsidies for photonic chips were awarded. However, Schleipen reacts quite neutrally to the question as to whether we are dealing with a renaissance of his field. ‘We certainly play the game, but in the field of optics we are a small country, for example, when compared to Germany. Naturally we have ASML and Signify and a few dozen small and medium-sized companies that do very well in the field of photonics, but certainly not hundreds, as is the case with our Eastern neighbours.’

With this statement, Schleipen doesn’t mean to play down optics. ‘It’s primarily a very stable field because it provides a basis for an extremely wide range of subjects with many areas of application. You can find optical technology in metrology, sensors, inspection, safety, data communication, imaging, in the automotive industry and in the biomedical and life sciences.’

The new course also responds to recent developments in the field of optical phenomena and instrumentation. ‘To give an example: Imec in Leuven developed a new cmos image sensor a few years ago, which had a large range of tiny spectral filters integrated on it. These compact and potentially inexpensive hyperspectral sensors have now found their way into a whole range of new applications in healthcare.’

'Students now experience the material themselves by being able to physically turn the knobs during the weekly practical sessions.'

In fact, the optical field is so broad that not all sub-areas can be covered in sixteen three-hour modules. ‘We could easily add four more modules, but we have to stop somewhere. We don’t deal with life sciences and biomedical technology, but the basic principles are addressed adequately. And above all: students now experience the material themselves by being able to physically turn the knobs during the weekly practical sessions. After the course, participants are sufficiently equipped for all teams where the discussions about optics go into depth. And at home, they can explain in minute detail where the colours of a rainbow come from.’

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’ they responded with an 8.6 out of 10.

Huub Janssen on his lead in Design Principles for Precision Engineering

trainer precision engineering
Huub Janssen from Janssen Precision Engineering is the figurehead of the Design Principles for Precision Engineering training. His ambition: to spread know-how in the vein of Wim van der Hoek.

A longstanding wish of Huub Janssen of Janssen Precision Engineering has been fulfilled: he now shares his knowledge in the same way his mentor Wim van der Hoek once did.

Janssen deems Van der Hoek ‘awe-inspiring.’ In the early 1980s, he was looking for a niche in which to spend his final university years at Eindhoven University of Technology and came across a professor who worked mainly in precision mechanics. ‘Wim invited me to his monthly mornings. There he would put a large sheet of paper on the table and scribble down all kinds of problems. We would discuss them with a handful of students who each had their own graduation assignment, and for two to three hours we would talk about progress and technical problems.’


Huub Janssen is the new figurehead of the Design Principles for Precision Engineering training course.

The main focus was on the content, the technical approach, the concept and how it is put into practice. ‘Everyone freely offered solutions. One graduate would put down his problem and then five or six men would jump up to solve it in various ways. It was quite a game. That stimulation from Wim really appealed to me. I took to it like a fish to water. It goes without saying, I felt at home.’

'I have always enjoyed discussing technical problems with young people. I also do that when coaching my employees.'

In the eighties, Janssen worked at ASML, made production equipment for LCDs at Philips in Heerlen and then started an engineering firm dedicated to precision instrumentation. Education has always attracted him, but in recent decades entrepreneurship has taken priority. ‘Just like Van der Hoek, I have always enjoyed discussing technical problems with young people. I also do that when coaching my employees,’ says Janssen.

Now that employees have taken over part of his duties, his thoughts have automatically turned to knowledge transfer. When approached by Jan van Eijk and Adrian Rankers of the Mechatronics Academy, partner of High Tech Institute, Janssen didn’t have to think twice.

Limburg’s flan

We are talking in the very space that Huub Janssen named after his great inspiration, Wim van der Hoek. Over Limburg’s flan and coffee, the precision engineering entrepreneur raises a subject that engineers often bring up in conversation: the passion he already had for technology in his youth.


Janssen Precision Engineering’s new meeting room is completely surrounded by glass. Huub Janssen named the space after his mentor.

During his high school years, Janssen photographed birds. His challenge was to capture them in flight. He didn’t want to sit behind the camera all day long, so he came up with a solution. In a nesting box, he set up a Praktica – the SLR camera that more or less fitted within his budget – and put together a shutter mechanism with a light beam and photodetector. ‘Everything was arranged so that the Praktica shutter released at the precise moment that the bird flew through the beam. An electric solenoid triggered the self-timer. Not with a normal motor, because it had to be bam! Done.’

He got his entrepreneurial spirit from home. His parents had a fruit company and his father often built machines himself, such as a machine to sort apples. During his last years at university, Huub devised a measuring scale which made it easier to fill fruit trays to a specific weight. Not ordinary scales, because with those you would need to calculate back and forth, and Janssen wanted to avoid that. ‘You could buy those kinds of scales for three thousand guilders, but that was a lot of money back then. I wanted something that would enable you to see in one go whether you had to add or take away a few apples. I was always thinking about things like that.’

He solved it with leaf springs, electronics and an optical sensor. ‘There were all kinds of Van der Hoek design principles in it,’ he laughs, referring to the professor whose monthly sessions he sat in on at the time.

During his final years at university, Janssen developed an instrument that could map out wear and tear in fillings and molars. ‘Interferometry and optics were part of the solution. I had to position in six degrees of freedom within fractions of a micrometer, and I could really let loose with new ideas. Moreover, I also had a real customer, so it had to work eventually.’


Huub Janssen with the piezo knob, a component on which he holds a patent. With this revolutionary concept, based on piezo elements and a rotating mass, steps of 5 nanometres can be made.

After graduating in the eighties, Janssen worked at ASML on the first PAS2500 wafer stepper. ‘I had learned a lot from Van der Hoek, but at ASML I was able to see where things can go wrong. With Van der Hoek you learn to design something statically determined. For example, you get stability with three support points. But not everyone is happy with a three-legged table. At ASML I learnt to understand when to apply specific design principles and when not to.’

'I learnt that you cannot always apply Van der Hoek’s design principles in any situation. You have to know when you can and when you can’t.'

For the PAS2500, they had initially developed a new interferometer to measure the position of the stage in the x and y directions. ‘We did this completely in accordance with the Van der Hoek design principles, with elastic elements and so on. There was no hysteresis, but everything kept vibrating. There, I learnt that you cannot always apply Van der Hoek’s design principles in any situation. You have to know when you can and when you can’t,’ explains Janssen.

After ASML, he joined Philips in Heerlen, where he developed production equipment for LCDs. A few years later he started his own engineering office. ‘During my final university years, I also worked for a real customer with a real technical problem, including the demand for hardware. That was just my thing.’

In 2010 Huub Janssen received the Rien Koster prize in recognition of the high level at which he practices precision technology in his company Janssen Precision Engineering (JPE). In addition to the large amount of advanced work done for clients, the jury also emphasised Janssen’s attention to the coaching and training of his employees. JPE has since recorded thirty patents for its inventions.

Within JPE, more than ten years ago, Janssen started collecting and documenting technical principles and solutions. Initially for his employees, but also for the outside world. Whenever Janssen or his colleagues delve into something or have to come up with a technical solution, they record it. ‘We always have to figure something out or look it up again. How did that technical calculation go again? I thought: let’s do it properly once, and then the next time employees need it, they will also benefit from it. I started documenting the cases on one A4 sheet. Everything is divided into categories such as “engineering fundamentals,” “construction fundamentals,” “dynamics and control” and “construction design & examples.”’

You have to invest time in it, ‘but then you also have something,’ says Janssen. ‘The technical problem and all the formulas that matter have to fit on that sheet of A4. That means only the essential information. In the meantime, it totals about fifty sheets of A4.’ Janssen thought that the information also had marketing value and started to publish it. That is how Precision Point came about, a page on the Janssen Precision Engineering website where everything is accessible. ‘Even a professor at MIT mailed me to ask if he could use the knowledge in his lectures.’ Janssen also bundled the A4 cases in a handy booklet under Albert Einstein’s motto ‘never remember anything you can look up.’ He regularly receives orders from schools, competitors and customers.


Under Albert Einstein’s motto ‘never remember anything you can look up,’ Huub Janssen has documented precision cases. Each case fits on one sheet of A4. The knowledge is available at Precision Point on his website, and also available in print.

It is difficult to say whether the efforts also generate extra business. ‘We can, however, see that interested parties look at our core activities in high tech engineering and at our products after visiting our Precision Point page.’

He said yes to Van Eijk and Rankers’ request to become the figurehead of the Design Principles for Precision Engineering training course because education has always attracted him. Much of the knowledge and experience in the course comes from the Wim van der Hoek ideology.

For former students and colleagues, Van der Hoek can’t put a foot wrong. When they praised him at a party in honour of his 80th birthday, the emeritus professor responded: ‘I am being praised into heaven in a shameful way.’

But after some thought, Janssen manages to dig up a criticism. ‘He liked to talk. He talked pretty quickly, so it was quite difficult for beginning university students, who still had to master the profession, to follow everything. You really had to pay attention, because a lot of information came flying at you in those few hours.’

'Van der Hoek quickly came up with his own ideas about the path that solutions should take.'

Van der Hoek liked to talk, rapidly pointing in which direction to go, and he also had something to say. ‘He quickly came up with his own ideas about the path that solutions should take, and that was often astonishing.’


‘Thirty years ago, positioning at a micrometre was something from another planet.’

What was so special about Van der Hoek’s approach?

‘It has to do with the field. Thirty years ago, positioning at a micrometre was something from another planet. It is a field where you cannot simply apply normal functional elements such as bearings and gears. Even at this moment, it is still unexplored territory for many parties. Worldwide. Until the fifth year at university, we only learnt what other prospective engineers were learning: gears, drive shafts, v-belts and so on. But if you are going to position at a micrometre or a fraction thereof, then you can’t simply use those components. Then you get completely different solution directions and things such as reproducibility and avoiding backlash become important.’

You want to shape the design principles training in the spirit of Van der Hoek. What do you mean?

‘We are talking about design principles for precision engineering. That is the world of complex machines and instruments for the chip industry, astronomy and space travel. To position more accurately than a micrometre, you cannot simply use standard functional elements such as bearings. Then you come to elastic elements, frictionless mechanisms and those sorts of things. After that it becomes exciting, because you are very close to physics.’


Janssen: ‘I can still remember that Van der Hoek asked his students to crawl in thought into a ball bearing.’

Manufacturers must recognise that they cannot buy standard parts from a catalogue. They have to think a bit further and analyse all the problems that may arise. Then you have to imagine things in your head, do ‘thought experiments’: where can things go wrong? If you can see that, the way to the solution is close. ‘I can still remember that Van der Hoek asked his students to crawl in thought into a ball bearing, to imagine the outer ring and inner ring with all the balls in between. We had to make ourselves so small that we were sitting between those spinning balls. Then you see that the ball on one side is against the ring and on the other side has room to play. Next you see that a ball isn’t completely round; it has indentations and doesn’t turn well. If it is an indentation of a micrometre, then it means a micrometre of error. You don’t have to have much experience, but you do need a lot of imagination to be able to do thought experiments.’

What is specific about your contribution to the training?

‘The way solutions are reached is important. I don’t have a lot to do with formulas. Of course, they are needed, but calculating is the last ten percent of the job. Primarily, designers need to get a feel for the details. What should they pay attention to? How do they solve matters? You first need to know where things can go wrong and then come up with a good conceptual direction. I especially want to instil intuition. Calculation techniques will come after that.’

‘That’s why I want to introduce case studies. Van der Hoek did that in his Des Duivels prentenboek, in which he published unsuccessful projects. Participants thus get to work alone and in groups. Then we have a large group discussion. I don’t want a lecture, I prefer interaction.’

This article is written by René Raaijmakers, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’ they responded with an 8.9 out of 10. Besides Huub Janssen, trainers include Dannis Brouwer (University of Twente), Piet van Rens (Settels Savenije), Kees Verbaan (NTS), Chris Werner and Roger Hamelinck (Entechna Engineering).

On to battery-less IoT devices: Ultra-low power

trainer Herman Roebbers Ultra-low power for the Internet of Things
Herman Roebbers is an advanced expert at Capgemini Engineering and has been working on embedded systems and parallel processing since the mid-1980s. He is also an external advisor to the EEMBC working groups Ulpmark, Iotmark and Securemark, and an ultra-low power trainer in the workshop ‘Ultra-low power for the Internet of Things’.

In the pursuit of battery-less IoT, it is important to use energy as efficiently as possible. Using an encryption library as an example, Herman Roebbers shows how small tweaks to the tooling and chip settings alone can have a huge impact on consumption.

How can I reduce the energy consumption of my IoT system towards ultra-low power? This question is becoming more and more relevant as we continue to raise our expectations of IoT devices. Ultimately, the goal is that systems require so little energy that they can harvest it from their environment and no longer need batteries.

To achieve this, we need to work in two directions: increasing harvest yields and reducing consumption. The first is being addressed: new materials and methods to make and post-process solar cells produce ever-higher yields. Progress is also being made in the field of RF energy harvesting. The Delft startup Nowi, for example, has made special chips that are very good at this. Furthermore, a lot of research is being done on new materials to convert temperature differences into energy more efficiently. We are also working hard on increasingly efficient converters that convert harvested energy into the required voltage(s) and ensure efficient energy storage, for example in rechargeable batteries or supercapacitors.

A case study for ultra-low power

An earlier Bits&Chips article gave an overview of all aspects that are important to save energy: from chip substrate, transistor selection, processor architecture and the circuit board to driver, OS, coding tools and coding styles up to the application. In the meantime, the table has been expanded somewhat.

A recent case illustrates the effect of different mechanisms on energy consumption. EEMBC just released a benchmark to determine the energy needed for several typical tls (transport layer security) operations. Tls is part of an https implementation and, as such, is essential for setting up a secure connection. The benchmark has been ported to an evaluation board that supports cryptography through the Arm Mbedtls library.

We can use that process to show what each optimization step delivers. For this purpose, we first perform a baseline measurement each time. The benchmarking framework uses an energy monitor from STMicroelectronics and an Arduino Uno. The Arduino is used as a uart interface towards the device under test (dut, Figure 1).


Figure 1: The setup for measuring power consumption

We also use the development environment Atollic Truestudio 9.0.1 for STM32, which uses a proprietary version of the GCC compiler, as well as the Stm32cubemx software, which can generate (initialization) code for peripherals and thus considerably simplifies configuration.

Step 1: Look at the compiler settings

If we do a baseline measurement with a non-optimized version (setting -O0) at 80 MHz (highest speed) and 3.0 volts, this results in a Securemark score of 505. If we change the optimizer setting to -O1, this makes a huge difference: we’re going to 1336! The optimizer settings for -Og and -O2 don’t make much difference, but if we go to -O3 or -Ofast, things will go even better: 1490.

This demonstrates what you can achieve with the compiler settings alone. The ideal settings, however, can differ per function. In our case, for example, there is no difference between -O3 and -Ofast, but this is not always the case. So, it may pay off to choose the settings per function or per file separately.

With the compiler settings -O2, -O3 and -Ofast, latent programming errors may surface that do not manifest with other settings. Timing can change, and it is necessary to qualify variables that are used in multiple contexts (e.g. normal and interrupt context) as volatile to avoid problems.

Step 2: Look at the pll

Microcontrollers nowadays have very extensive settings for all kinds of clock signals on the chip. One of those settings concerns the phase-locked loop (pll), which acts as a frequency multiplier. It can be used to multiply and divide a low frequency to create all kinds of other clock speeds.

In our case, the frequency of the internal oscillator is 16 MHz. To make 80 MHz out of that, we cannot simply multiply it by five, unfortunately. We have a choice of two settings: the first is to divide by 1, multiply by 10 and then divide by 2. The second option is to divide by 2, multiply by 20 and then divide by 2 again.

That gives different scores: 1462 against 1490. The result in both cases is 80 MHz, but the second method is two percent more economical. The lower the clock frequencies, the less energy you lose, and the sooner you divide the clock frequency, the better.

If you have enough time at your disposal, you can also use the processor without pll, because that’s actually quite an energy guzzler. With the built-in oscillator, we can generate a maximum frequency of 48 MHz, which results in a 4 percent higher score. The disadvantage is that it takes a bit longer: 80/48 = 1.66 times longer to be precise.


The Nucleo L4A6ZG development board from STMicroelectronics offers a lot of tools to optimize energy use.

Step 3: Turn off unnecessary clocks

Now that we have explored a few things, we can choose a setting and further optimize from there. We start quite conservatively: -O1 and a frequency of 80 MHz via our second pll setting. This brings our Securemark score to 1336.

The next step is to turn off all superfluous clocks. In our case, the clock to the uart and all i/o ports can be turned off. This saves between 2.5 and 2.9 mW and gives a score of 1448, meaning 1448/1336 = 1.08 times less energy (8 percent gain).
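In the STM32 HAL that STM32CubeMX generates code for, gating a peripheral clock is one macro call per peripheral. A sketch of what this step looks like; which uart and i/o ports apply depends on the board, so the ones below are examples only:

```c
/* Gate the clocks of peripherals the benchmark does not use.
 * A gated peripheral is unusable until its clock is re-enabled. */
__HAL_RCC_USART2_CLK_DISABLE();
__HAL_RCC_GPIOA_CLK_DISABLE();
__HAL_RCC_GPIOB_CLK_DISABLE();
/* Re-enable later with the matching __HAL_RCC_..._CLK_ENABLE() macros. */
```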

Step 4: Optimize the memcpy function

During the execution of cryptographic functions, the memcpy function is used frequently. Opting for an optimized version yields a five percent gain in the case of GCC. The IAR compiler already provides an optimized version. This allows us to increase our score to 1524. Gain: 5 percent.

Step 5: Tighten the thumbscrews

Now we can see if we can also do it with a low clock frequency. With that, we could lower the core voltage. For our mcu, this core voltage should be 1.2 V for frequencies above 26 MHz. For simplicity, we take 24 MHz, a standard frequency in the menu of the msi oscillator, where the pll can remain off – another 4 percent gain: 1588.

We can also test whether we can safely set the compiler optimizations a bit more aggressively. If we go to setting -O2, we arrive at a score of 1691 – another 6.4 percent gain.

Step 6: Reduce voltages

We have already prepared the clock frequency to allow a lower core voltage. Now we are actually going to set it. The result is beautiful: 2021, almost 20 percent gain!

The power supply voltage can also be a bit lower. We started at 3.0 V, but if we go to 2.4 V, that again gives an improvement of 26 percent. We can go even further to 1.8 V if necessary. We haven’t done that here, but if we extrapolate, we can expect a further saving of a third.

Conclusion

With a few simple measures, energy consumption can already be drastically reduced towards ultra-low power. In our case study, a factor of five to seven is easily achievable compared to a non-optimized version.

'An embedded system without batteries is within reach.'

However, I have limited myself here to tooling and chip settings. With additional measures in other areas, tens of percent extra improvement can be achieved. Therefore, an embedded system without batteries is within reach.

This article is published by Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’ they responded with a 7.6 out of 10.

Design Principles (course) still firmly anchored in Wim van der Hoek’s ideology

Design Principles is one of the most renowned training courses given by the Mechatronics Academy and High Tech Institute. Our ‘mechatronics men’ Jan van Eijk and Adrian Rankers have renewed the training course and asked Huub Janssen to assume the role of course director. The course remains firmly anchored in the foundations laid down by the renowned Professor Wim van der Hoek. The biggest changes are the addition of new top specialists and new focus points. The training course is now known as Design Principles for Precision Engineering.

It is pretty risky to redesign one of the most renowned training courses in the Netherlands’ high tech world. Yet we had no other option. Piet van Rens, who was for a long time the face of the course, wanted to considerably limit his work as a structural engineering trainer. He has a lot of fun in the projects he does for ASML, but his agenda is just too full.


Piet van Rens was for many years the face of the Design Principles training course. He continues as a course trainer but is no longer a course director.

Thus, Van Eijk and Rankers had to go in search of successors. They took advantage of the situation to reformulate the training course itself. For approximately a year now, the training programme has been in the hands of a strong team of specialists from the Dutch precision world. In addition to Van Rens, a handful of top experts have been found to immerse the course participants in trusted fundamental knowledge and insights, as well as in relevant additions to the engineering field. The new faces include Huub Janssen from Maastricht’s Janssen Precision Engineering, Chris Werner and Roger Hamelinck from Entechna Engineering in Eindhoven, precision engineering professor Dannis Brouwer from the University of Twente and Kees Verbaan from NTS. The renewed course’s debut in June 2018 was a success: the training programme was fully booked and awarded an average score of 8.4.

The knowledge and experience in the Design Principles training course comes from the ideology of Wim van der Hoek, the renowned professor of precision technology to whom Dutch high tech owes a lot of its design principles and knowledge. Van der Hoek devised a number of essential design principles in the sixties and seventies, such as the famous hole hinges, with which machine builders could achieve nanometre precision.

'It became an honour for someone to have their design and improvements in the Des Duivels prentenboek.'

In addition, Van der Hoek gained great fame by collecting unsuccessful designs and including them in Chapter 13 of his infamous Des Duivels prentenboek. He stated that you learn best by making mistakes; the easiest and cheapest training is getting to know those mistakes. ‘This reference work became so well known that it became an honour for someone to have their design and improvements added to it,’ says Piet van Rens.

Van der Hoek’s successors, Professors Rien Koster and Herman Soemers, are enriching that basis. ‘The new style of design principle training elaborates on the legacy which we have had in the Netherlands for decades, namely to design properly using the correct design principles,’ says course leader Adrian Rankers of Mechatronics Academy, the partner in charge of the mechatronics training at the High Tech Institute.

Huub Janssen is the new figurehead of the Design Principles training course. Like Piet van Rens, he comes from the school of thought of Professor Wim van der Hoek. The precision engineer honoured his mentor by naming the new meeting and demo room at Janssen Precision Engineering after him.

The updated training course includes countless new elements. For example, there is more attention given to damping and to advanced elastic elements which have a somewhat larger stroke. Elastic elements are often limited in their range of motion, but there are concepts available which achieve larger strokes. This is one of the research topics of Professor Dannis Brouwer from the University of Twente, who imparts a day of training on flexure mechanisms.

Brouwer also covers energy compensation and gravity compensation techniques (think of kitchen cabinets that you can open and close vertically and which stay in any position while moving up and down easily). ‘That includes balancing mass-related issues,’ says Adrian Rankers. ‘As in a complex robot system, where you try to get rid of the reaction forces on the floor by having another body simultaneously make the right movements that precisely compensate the forces. That can be complicated, so we have called it energy compensation. But you can also call it energy balancing.’

Rankers emphasises that the ‘mechatronic context’ recurs throughout the training. ‘On the one hand it imposes additional requirements on the mechanics; on the other hand it offers an alternative solution space. If previously you needed to create a positioning system, you did that with a cam drive and a drive chain up to the element that you had to position properly. In that chain you would encounter all sorts of friction and play – all of which is very annoying. But in a mechatronic motion system you have sensors on your payload. They tell you exactly what the position or position error is. In principle, a bit of friction or play in between won’t worry you, because you have the information and can immediately compensate for it. These kinds of trends shift the choice of subjects for the Design Principles training, although it remains true that you can never get a high-quality system solution with rattling mechanics,’ emphasises Rankers. ‘By trimming less important topics we have made room for new subjects.’


‘We have a long history here of designing properly using correct design principles,’ says Adrian Rankers, director of Mechatronics Academy. ‘Wim van der Hoek started that off; Rien Koster and Herman Soemers are continuing it.’

Course director Huub Janssen has set himself the goal of shaping the design principles course in the spirit of Van der Hoek. ‘We are talking about design principles for precision engineering. That is the world of complex machines and instruments for the chip industry, astronomy and space travel. To position more accurately than a micrometre, you cannot simply use standard functional elements such as bearings. Then you come to elastic elements, frictionless designs and that sort of thing. That is where it gets exciting, because you are very close to physics.’

Janssen says that manufacturers must recognise that they cannot simply buy standard parts from a catalogue. They have to think a bit further and analyse all the problems that may arise. Then they have to imagine things in their heads, do ‘thought experiments’: where can things go wrong? If you can see that, the way to the solution is close.

During his part-time professorship, Van der Hoek asked his students to do thought experiments. Janssen: ‘I can still remember that Van der Hoek asked his students to crawl, in their minds, into a ball bearing, to imagine the outer ring and inner ring with all the balls in between. We had to make ourselves so small that we were sitting between those spinning balls. Then you see that a ball presses against the ring on one side and has room for play on the other. Next you see that a ball isn’t completely round; it has indents and doesn’t turn well. You don’t need much experience, but you do need a lot of imagination.’

It is no coincidence that Janssen wants to enrich the training by injecting experience and exercises. ‘The way solutions are reached is important. I don’t have a lot to do with formulas. Of course they are needed, but calculating is the last ten percent of the job. Primarily, designers need to get a feel for the details. What should they pay attention to? How do they solve matters? You first need to know where things can go wrong and then come up with a good conceptual direction. I especially want to instil intuition. Calculation techniques will come after that.’

'I don’t want a lecture, I prefer interaction.'

He mainly uses case studies, just as Van der Hoek did in his Des Duivels prentenboek. ‘Participants thus get to work both alone and in groups. Then we have a large group discussion. I don’t want a lecture, I prefer interaction.’

Exercises, interaction and working with practical cases are distinguishing features of the training course that Mechatronics Academy and High Tech Institute bring to the market. At other organisations, the training is also available as a three-day variant.

Piet van Rens also has experience as a trainer for this three-day variant. He emphasises that participants in the short version really miss something. ‘Some customers require employees sent by temporary work agencies to have completed a design principles training course. This means that some engineering firms then choose on the basis of cost, or opt for an evening version.’

Van Rens thinks this is not such a sensible choice. Practical exercises are the most valuable component of the Design Principles training course. They ensure that the contents really sink in and that participants actually understand them and apply them to their work. This hands-on element is precisely what gets killed in the shortened version. ‘The three-day and evening training courses are not bad, but they skimp on content, cutting corners to fit into less time. The effect is even stronger when learning after a normal working day; people are tired in the evenings. If you only give lectures, the outcome is really less effective,’ states Van Rens.

This article is written by René Raaijmakers, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’ they responded with an 8.9 out of 10.

ITRI sees crucial role for system architecting to achieve industrial transformation

System architect(ing) training - ITRI Testimonial
For two years in a row, the Industrial Technology Research Institute (ITRI) from Taiwan invited High Tech Institute to help introduce system architecting thinking in its organization. We asked executive vice president Pei-Zen Chang to tell us about ITRI’s ambitions and the role of system architecture in keeping Taiwanese industry competitive in this age of fierce international competition. This is where the system architect(ing) training comes in.

It was July 2017 when Ger Schoeber arrived at Taipei international airport to lecture his first system architecting training in Taiwan. The next day he arrived at his final destination, the renowned Industrial Technology Research Institute (ITRI) at Chutung in the Hsinchu region. There the Dutchman faced a firing squad. Figuratively, that is.

It all began in a very friendly manner. ITRI’s former executive vice president Charles Liu kicked off the training week by introducing Schoeber to sixteen participants, all senior executives and cross-domain project leaders of the Taiwanese institute. Liu told Schoeber with a smile that his colleagues had all prepared well. They had read the material and were actually not so impressed. Some had even asked Liu why they had to clear their agendas for five whole days for this stuff. “I wish you good luck this week,” Liu pronounced firmly.


Pei-Zen Chang, executive vice president of ITRI: “System architecture knowledge will contribute to value creation across Taiwan’s industry.” Photo: ITRI

To Schoeber, Liu’s message was clear: he had to prove that his System Architecting (Sysarch) training was worth the investment for ITRI. He faced five days of lecturing a group at the level of vice president and higher. At that moment he had to swallow hard, Schoeber admits two years later.

ITRI and High Tech Institute got acquainted in 2016. Dr. Jonq-Min Liu, at that time president of ITRI, wanted to strengthen the institute’s systems thinking knowledge to overcome cross-domain problems. Liu directed ITRI College to do an assessment, which subsequently evolved into a recommendation to seek cooperation with High Tech Institute in the Netherlands.

ITRI College identified a system architecting training at High Tech Institute that originated from abundant experience in complex systems development at Philips and ASML. The Dutch institute has been a Philips spin-off since 2010. As the successor of the Philips Centre for Technical Training, it is active in post-academic education for technicians in the open market.

ITRI wanted to introduce the System Architecting course with the goal of training leaders of cross-disciplinary projects. It should help them handle cross-domain planning, management, communication and the resolution of system problems. Edwin Liu, the president of ITRI, firmly believes in a systems approach for his organization: “In addition to continuously deepening scientific and technological innovation and R&D, ITRI must carry out cross-unit and cross-disciplinary cooperation to bring about industrial transformation.”


Ger Schoeber teaching system architecture at ITRI in July 2018. Notice the abundance of paper on the wall, resulting from discussions and learning exercises. By the end of the week, the whole classroom is usually covered with paper.

ITRI’s role in Taiwan

Industrial transformation: that’s what ITRI is all about. The institute is a nonprofit R&D organization engaged in applied research and technical services. Since its foundation in 1973, ITRI has grown into one of the world’s leading technology R&D institutions. It has played a vital role in transforming Taiwan’s economy from a labor-intensive industry into a high-tech one. “ITRI’s mission is to assist Taiwan’s industrial development,” says Pei-Zen Chang, the executive vice president who is responsible for introducing systems thinking at ITRI. “It has been mandated not only to provide assistance in technological development, but also to assist in industrial transformation and development.”


The 2018 System Architecture class with Pei-Zen Chang, the executive vice president of ITRI (sitting 2nd from left) and trainer Ger Schoeber (sitting in the middle).

Taiwan and the Netherlands are similar in size and population. Both countries know: if you are small, you have to be smart. Just like the Dutch, the Taiwanese have relied on their determination and perseverance in the search for optimal economic development models to compete with their larger and stronger neighbors. In this continuous race, the drive to excel in technology has always been a major force for Taiwan, and ITRI plays a crucial role there. Even an imperative role, says Chang: “In this age of fierce international competition, Taiwan’s industrial structure has remained predominantly small-scale. We have a lot of medium-sized enterprises that rely on our innovations.”

The Taiwanese institute has been quite successful since its foundation. It has helped incubate over 270 companies, including famous examples like UMC and TSMC. Its 6,100 employees – over 80 percent of whom hold advanced degrees – produce over a thousand patents annually (an accumulated total of over 27,000). Chang’s message is that ITRI has to continuously help Taiwanese industry transform and upgrade – a role comparable to that of TNO in the Netherlands and Fraunhofer in Germany.

One example is ITRI’s involvement in the fiercely competitive machine tool industry. The R&D institute developed the controllers that helped Taiwanese manufacturers upgrade their products, become world-class and rival their German and Japanese competitors. Taiwan is a top-5 player in machine tools, on par with China. This market continues to be challenging, says Chang. “With the support of our Ministry of Economic Affairs, we have established a smart manufacturing demo line for product equipment performance verification and system commissioning in the field. This will keep us up to speed with Industry 4.0, and such facilities are expected to gradually strengthen the entire system’s capabilities.”

 

Smart logistics

Logistics is another example where systems thinking helped ITRI work with industry on innovative solutions. The institute helped introduce RFID, automation systems, smart pick-up stations and many more logistics technologies in Taiwan. Chang: “In the logistics industry there are many ways to get things delivered quickly. Systems engineering analysis enabled us to better understand the needs of the industry. It was evident to us that the identification and classification of various and voluminous items are key factors. Along the way, ITRI helped steer Taiwan’s logistics and e-commerce companies towards smart logistics and services.”

'Research provides greater value when the development of technologies, components and modules is based on the needs of the industry'

Chang points out that value creation is a prime focus for ITRI. “Research provides greater value when the development of technologies, components and modules is based on the needs of the industry.”

That’s where system architecture comes in. Over the last couple of years ITRI invited industry veterans and system innovation experts to Taiwan to give lectures on system architecting. The institute wanted to infuse stronger system architecture thinking into its managers of various cross-disciplinary projects.

The goal was to establish a common language for project leaders in different fields. Although the technologies in ITRI’s focus fields ‘smart living’, ‘quality health’ and ‘sustainable environment’ can lie far apart, the Taiwanese were convinced that a shared language among the various labs and fields would strengthen innovative R&D cooperation.

Part of ITRI’s strategy to introduce system architecture thinking was High Tech Institute’s system architecting training, a five-day intensive course. Apart from theory, participants spend most of the training working on case studies with in-depth discussion and learning exercises. “Our goal is to gradually build up system architecture thinking,” says Chang. As a common language, participants learn to think and talk according to the so-called CAFCR model.

CAFCR is all about stepping into the customer’s shoes. It forces system developers to look beyond the technology. The letters C, A, F, C and R represent five viewpoints on system architecture: customer objectives, application, functional, conceptual and realization. Only the last two are about technology; the other three concern the customer’s perspective, and that is where the greatest value of the CAFCR framework lies. Most important is the first C, which stands for ‘customer objectives’.

“It is all about the customer,” explains trainer Ger Schoeber. “What exactly is their business? How do they earn their money? What is the living environment of the customer or the colleague who is going to install my subsystem? If you really understand the customer, you will see much clearer what it is that they need in order to do better business. CAFCR forces you to look not only at the technology, but also at the specification and the rationale of the requirements. It allows you to come up with solutions that help customers even more.”

'We keep strengthening the interdisciplinary competence of our labs and nurturing the professional talents for system integration'

ITRI’s senior executives and various project leaders all attended the entire Sysarch course in 2017. A survey among participants showed that the CAFCR model in particular helped project leaders systematically promote and implement the projects they are responsible for. That convinced ITRI to continue with Sysarch in 2018. Chang: “To keep strengthening the interdisciplinary competence of our labs and nurturing the professional talents for system integration.”

Participants valued the extensive experience that Schoeber has in industrial system architecting and system innovation, Pei-Zen Chang points out. “Ger talked about the role of the system architect and its importance in operating a company, and introduced system architecting with detailed explanations, procedures, key drivers and the CAFCR model. Ger also deepened participants’ understanding of the course content through role-play exercises. Using a hypothetical situation of ‘proposing equipment solutions for bedridden senior people’ over the five-day course, he divided the class into four groups and asked each to present their solutions to a company’s senior executive or angel investors.”


Case studies with in-depth discussion form a large part of the Sysarch training.

This proved an effective way to ensure that the participants gained a thorough understanding of the course. “In addition, drawing from his rich practical experience, Ger provided guidance to each group, so that the participants could correctly use the content and methods learned in class,” says Chang. With this systematic and professional curriculum and guidance, the course scored 4.97 points out of 5 in July 2018. “A very high satisfaction rating,” smiles Chang.

Program directors who are designated to lead a cross-disciplinary project and have followed the Sysarch course will take the next step in system architecting at ITRI. Chang: “From now on they will effectively implement their newly acquired planning and maintenance skills, and share their experiences and knowledge with colleagues throughout the institute.”

Once this market- and customer-oriented mindset has been ingrained throughout ITRI, such systems thinking and experience will be disseminated to Taiwan’s industrial sector. “In this way it will help accelerate the sector’s transformation and upgrading to create new value,” says Chang.

Close relationship

Over the past fifty years, key industries from Taiwan and the Netherlands have forged close relationships. Both countries have been able to carve out unique industrial advantages and flourish internationally. Chang points to the Philips TV factories that were set up in Taiwan decades ago. “This helped upgrade our nation’s production knowhow and cultivate our talents,” he says.


During Sysarch 2018 Ger Schoeber discusses the case ‘equipment solutions for bedridden senior people’ with division director Keh-Ching Huang, who is one of the participants.

In the 1980s, Philips bankrolled the creation of TSMC. As the world’s largest semiconductor foundry, TSMC is now one of ASML’s main customers. Recently, ASML acquired the Taiwanese company Hermes Microvision, a specialist in metrology solutions for chip production.

Chang sees a bright future for the relationship between the two countries. He points to the ‘5+2 Industry Innovation Plan’ that the Taiwanese government has been promoting in recent years. “This plan encompasses smart machinery, Asian Silicon Valley, green energy technology, the biomedical industry, the national defense industry, new agriculture and the circular economy,” elaborates Chang. “We reckon that international cooperation is one of the most important means to implement such programs, and there is no doubt that the Netherlands will be an ideal partner for us to work with, given the country’s deep experience in system development and solid foundation in semiconductors, agriculture, circular economy, green energy, precision machinery and so on.”

ITRI underlined this by opening a physical presence at the High Tech Campus in Eindhoven. This office actively promotes scientific cooperation with the Netherlands. Chang: “Since ITRI is expected to take on some of the responsibility for implementing Taiwan’s 5+2 Industrial Innovation Plan, I will spare no efforts to help strengthen the cooperation momentum with the Netherlands, in order to create for both countries high-value technology industries with blue ocean benefits.”

This article is written by René Raaijmakers, tech editor of Bits&Chips.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’ they responded with an 8.4 out of 10.