Microcredentials: digital diplomas tracking your knowledge development

An accredited proof of up-to-date knowledge without having to return to the classroom. Hans Krikhaar is a driving force behind the introduction of microcredentials at the Dutch Society for Precision Engineering. In this interview he shares his view on the opportunities this offers.

Hans Krikhaar experienced it himself: after seven years in the field of construction engineering, returning to his original field of study – mechanical engineering – proved to be quite a challenge. Companies wanted verifiable knowledge in this field and were not willing to give him the opportunity to demonstrate his skills and knowledge on the job. In the end, that opportunity came from Philips Lighting, as Krikhaar had demonstrable experience with computer-aided design that the Eindhoven-based company was investing in. Had he been able to prove his up-to-date knowledge in mechanical engineering through microcredentials, his career might have turned out very differently.

For professionals who start working full-time after graduation, it is important to continue developing their knowledge. Unfortunately, a long-term education program is hard to combine with a job, in terms of both time and cost. Professionals can, however, benefit greatly from shorter training programs, as they can immediately apply what they learn. For one’s position in the market, formal recognition of this knowledge is very important.

Enter microcredentials: recognized digital diplomas or certificates linked to compact, validated courses. Professionals can use these to prove their specifically acquired knowledge or skills without the need to complete a full degree program.

''A system such as microcredentials can help people in similar situations demonstrate their current knowledge, which makes them more attractive for companies.''

From Philips to education

Krikhaar studied mechanical engineering at the University of Twente. He chose Twente because of the space and nature around it.

In the 1980s, he came into contact with computer-aided design while working at Comprimo, a company that developed oil refineries and chemical plants. At the time, construction drawings were still made by hand, and computers were just starting to support this process. However, when he wanted to return to mechanical engineering after seven years in construction engineering, companies were reluctant to hire him. “A system such as microcredentials can help people in similar situations demonstrate their current knowledge, which makes them more attractive for companies,” Krikhaar explains.

Eventually, Krikhaar obtained his PhD at Philips Lighting, on computer-aided design and manufacturing within mechanical engineering, which allowed him to continue his career in that field. He later worked at Calumatic, Philishave, ASML, and as an independent consultant, before becoming a professor of Smart Manufacturing at Fontys Engineering in 2018.

The request to set up microcredentials came during the COVID-19 pandemic, when ASML wanted a Manufacturing Excellence course developed. “In the spirit of lifelong learning, management wanted microcredentials to be awarded to that course,” Krikhaar says. “That’s when I started exploring this form of course validation.”

The Dutch Society for Precision Engineering (DSPE), for which Krikhaar was already active at the time, has had a certification program for post-academic training since 2008, stemming from Philips’ former Center for Technical Training. Courses that the DSPE evaluates are assessed by field professionals for both quality and societal relevance. “The DSPE doesn’t teach courses, they only certify them,” Krikhaar clarifies. “That independence makes our certification particularly valuable, since we’re not judging our own work.”

To keep up with the times, Krikhaar had long believed DSPE should digitize its diplomas and certificates. He connected with Wilfred Rubens, an expert in microcredentials. Drawing on Rubens’ expertise, Krikhaar is now digitizing and transforming the certificates of DSPE-accredited courses.

The value of microcredentials

To safeguard the quality of microcredentials, the DSPE considers four core criteria when awarding them. Firstly, it critically evaluates the course’s learning outcomes: what is the added value for the professional? Secondly, the level of the course is taken into account. Courses range from vocational to master’s level, and this is reflected in the microcredential. The third factor is workload: how many days or sessions does the course take? Finally, the assessment method is important. A diploma is awarded when the participant has demonstrated mastery of the learning outcomes. If there is no individual assessment, a certificate of participation is issued instead.

By taking courses needed for current projects, the professional builds a portfolio of competencies. Microcredentials from these courses can be accessed and downloaded by the professional through a secure system. The credentials can also be linked to their LinkedIn profile, which can benefit their career.

''Precision technology is developing incredibly fast. It is important for people in the field to keep up with their knowledge.''

To date, DSPE has awarded microcredentials to 49 courses. Participants who completed one of these in 2023 or 2024 received digital recognition retroactively. Krikhaar ultimately hopes to see microcredentials attached to over 200 courses.

“This way of certifying needs to gain traction. We aim to achieve this by defining ‘learning pathways’: sets of courses that, once you have completed them all, show that you’ve gained specific knowledge. For example, after a vocational course in milling and turning, you could follow the specified pathway to become an instrument maker at the Leiden Instrument Makers School. Once you complete all the relevant courses, you are officially certified as an instrument maker.”

Microcredentials and the future

Although Krikhaar has reached retirement age, he remains active in precision engineering about three days a week. For example, he organizes the Dutch Precision Week around the precision fair in November. Why is he so invested in microcredentials?

“Precision technology is developing incredibly fast. It is important for people in the field to keep up with their knowledge. In addition to what I’ve said about how microcredentials work, the system can also help colleagues in HR, who often lack technical training, in guiding employees toward the right development paths. The way DSPE works enables them to better support these engineers. I think that’s a great development.”

Krikhaar hopes that DSPE’s microcredentials will eventually be recognized as professional qualifications and intends to keep working towards that goal. The organization has been around since 1954 and is run entirely by professionals, for professionals, which helps safeguard the quality of the certifications. In order to maintain independence, and to not compete with the providers they assess, the DSPE intends to stay away from offering courses itself.

When asked whether he will roll out microcredentials across Europe, perhaps through the European Society for Precision Engineering and Nanotechnology (EUSPEN), Krikhaar is brief: “That’s not something I’ll take on, but if someone else wants to do this, that would be fine.”

This article was written by Marleen Dolman, freelancer for High Tech Systems.

“If you add a little bit of damping, you can gain a lot”

Passive damping is increasingly used by mechanical engineers designing for the high-tech industry. This was the reason for Patrick Houben, mechanical architect at Nobleo Technology, to attend the “Passive damping for high-tech systems” course at High Tech Institute.

Eindhoven-based Nobleo Technology is an engineering firm that takes on in-house development projects. It specializes in software, mechatronics and mechanics in three core areas: autonomous & intelligence solutions, embedded & electronics solutions and mechatronic systems. Patrick Houben has been employed there for two years as a mechanical architect with the business unit Mechatronic Systems. Originally a mechanical engineer, he’s worked his entire career at semicon companies, including Assembléon, when it was still called Philips EMT, and ITEC in Nijmegen.

“What I mainly do at Nobleo now is define the architecture in projects for customers, lay down concepts and support the project team,” Houben explains. “I’m working together with a team of mechatronic engineers. We ensure that customers’ wishes are properly embedded in the products or modules we design for them.”

“At Nobleo, we take care of the entire design process for the customer, including supervising the industrialization of the products in the customer’s supply chain. We do the latter together with Nobleo Manufacturing. We call this Design House+ and it’s catching on well. In addition to product development, we build and test the prototypes. During the industrialization process, we can efficiently incorporate necessary improvements in the design. The customer then has a fully equipped supply chain.”

''We were given good study cases that showed that in a mechanical construction, you often have very little damping.''

Pragmatic, practical and applicable

The reason for taking the “Passive damping for high-tech systems” course at High Tech Institute was twofold, according to Houben: to broaden his technical knowledge and to be able to apply the acquired knowledge in projects for his clients. He had some prior experience with applying damping, but mainly for vibration isolation, for example to isolate highly dynamic modules from external vibrations. “I had no experience with the applications from the course. It was surprising and new to me that damping, or suppressing, a single component can greatly improve system performance.”

The course lasted three days and included practical exercises and about six extensive study cases. Houben particularly liked the fact that the course quickly switched to design rules that were easy to apply. “We were given good study cases that showed that in a mechanical construction, you often have very little damping. And if you add a little bit of damping, you can gain a lot – that was really surprising to me as well. When I look at static components in the machines of our customers, for example, they’re often sandwiched in a long span where they can resonate quite strongly. If you can reduce that with passive damping, you can get better performance and increase bandwidths without much extra cost. I really found that very instructive and practical.”
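
To put a number on that surprise, a back-of-the-envelope sketch (my own illustration, not taken from the course material): for a lightly damped resonance, the amplification at the resonance peak is roughly

Q ≈ 1 / (2ζ)

where ζ is the damping ratio. Bolted or welded steel structures are often quoted at damping ratios of only a few tenths of a percent, so Q can easily run into the hundreds; a passive damper that raises ζ to just a few percent already cuts the resonance peak by an order of magnitude.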

''It was surprising and new to me that damping, or suppressing, a single component can greatly improve system performance.''

In particular, the MRI scanner case, a doctoral research project by a TU Eindhoven student, resonated well with the course participants, Houben observed. “That was a clear and telling case. It involved a Philips MRI scanner where a person was placed in between two horizontal magnetic strips. Because of the positioning of the two strips, the top one could only be supported by two relatively narrow uprights. The stiffness of this construction was suboptimal and as a result of the magnetic movements, the construction started to resonate on the uprights. By applying passive damping in the right place with the right mass and the right specifications, that whole mode disappeared. The damping mass was a simple thirty-pound plate suspended in rubber dampers and hardly added any cost to the scanner.”
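
For readers who want a starting point for such a damper, the classic Den Hartog tuning rules give a first estimate (textbook values, not the actual figures from this case). With mass ratio μ, the damper mass divided by the modal mass of the resonating structure, the damper is tuned slightly below the troublesome mode and given a modest amount of internal damping:

f_damper ≈ f_mode / (1 + μ),   ζ_damper ≈ √( 3μ / (8(1 + μ)³) )

For μ of a few percent, this works out to a damper tuned a few percent below the mode with roughly ten percent damping, which is exactly the kind of mass-on-rubber solution Houben describes.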

Houben also appreciated the practical tip that you can install an oscillator app on your smartphone with which you can map resonances quite accurately and reason about the cause of the problems. “That helps you quickly move toward the right solution. I really liked that in the course – it was very pragmatic, practical and applicable.”

For Houben, the course was surprisingly easy to follow. “I’ve also attended courses that were a bit more difficult. Because I have a classical background in mechanical engineering, I had to build up my knowledge of dynamics, mechatronics and control technology as I progressed through my career. And yes, I sometimes noticed in courses that this was difficult, especially when faced with theoretical calculations. But in this course, it wasn’t that difficult. I especially liked the interaction with the two teachers and how they coordinated with each other. It was very informal and open and there was also a lot of back and forth.”

Opportunities

Houben already sees his colleagues applying passive damping to their projects. For the client he’s currently working for, however, the concept is still new. “I’m thinking about how to introduce the acquired knowledge there, but I definitely see opportunities.”

This article was written by Titia Koerten, editor for High Tech Systems.

AI and the future of systems programming

Kris van Rens looks at the future of systems development and how developer happiness is an important aspect of software engineering.

Artificial intelligence in general and large language models (LLMs) in particular are undeniably changing how we work and write code. Especially for learning, explaining, refactoring, documenting and reviewing code, they turn out to be extremely useful.

For me, however, having a chat-style LLM generate production-grade code is still a mixed bag. The carefully engineered prompt for a complex constrained task often outsizes the resulting code by orders of magnitude, making me question the productivity gains. Sometimes, I find myself iteratively fighting the prompt to generate the right code for me, only to discover that it casually forgot to implement one of my earlier requirements. Sometimes also, the LLMs generate code featuring invalid constructs: they hallucinate answers, invariably with great confidence. What’s more, given the way LLMs work, the answers can be completely different every time you input a similar query or at least highly dependent on the given prompt.

OpenAI co-founder Andrej Karpathy put it well: “In some sense, hallucination is all LLMs do. They’re dream machines.” This seemingly ‘black magic’ behavior of LLMs is slightly incompatible with my inner tech-driven urge to follow a deterministic process. It might be my utter incompetence at prompt engineering, but from where I’m standing, despite the power of generative AI at our fingertips, we still need to absolutely understand what we’re doing rather than blindly trusting the correctness of the code that was generated by these dream machines. The weird vibe-induced feel and idiosyncrasy of LLMs will probably wear off in the future, but I still like to truly understand the code I produce and am responsible for.

Probably, AI in general is going to enable an abstraction shift in future software development, allowing us to design at a higher level of abstraction than we often do nowadays. This might, in turn, diminish the need to write code manually. Yet, I fail to see how using generated code in production is going to work well without the correctness guarantees of rigorous testing and formal verification – this isn’t the reality today.

''An aspect of software engineering where LLMs can make an overall positive difference is interpreting compiler feedback.''

Positive difference

Another application area of LLMs is in-line code completion in an editor/IDE. Even this isn’t an outright success for me. More than once, I’ve been overwhelmed by the LLM-based code completer suggesting a multiline solution of what it thinks I wanted to type. Then, instead of implementing the code idea straight from my imagination, I find myself reading a blob of generated suggestion code, questioning what it does and why. It’s hit-and-miss with these completions and they often tend to put me on the wrong foot. I’ve been experimenting with embedded development for microcontroller units lately and have found that especially with code in this context, the LLM-based completion just takes guesses, sometimes even making up non-existent general-purpose IO (GPIO) pin numbers as it goes. I do like the combination of code completion LLMs with an AI model that predicts editor movements for refactoring. Refactors are often batches of similar small operations that the models are able to forecast well.

An aspect of software engineering where LLMs can make an overall positive difference is interpreting compiler feedback. C++, for example, is notorious for its hard-to-read and often very long compiler errors. The arrival of concepts in C++20 was supposed to accomplish a drastic improvement here, but I haven’t seen it happen. Perhaps this is still a work in progress, but until then, we’re forced to deal with complex and often long error messages (sometimes even hundreds of lines in length). Because of their ability to interpret or summarize compiler messages, combined with their educational and generative features, LLMs with a large context window are fit to process such feedback, making them a great companion tool for C++ developers. There’s an enormous body of already existing C++ code and documentation to learn from, which is a good basis for training an LLM.
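
As a small illustration of what concepts were meant to buy us (my own sketch, not an example from the column; the names Addable, triple and Widget are made up): the unconstrained template below fails with a long instantiation backtrace from inside the function body, while the constrained one is rejected at the call site with a short “constraints not satisfied” diagnostic.

```cpp
#include <concepts>
#include <string>

// A concept describing what the algorithm actually needs from T.
template <typename T>
concept Addable = requires(T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

// Unconstrained: misuse is only detected deep inside the body.
template <typename T>
T triple_unconstrained(T x) { return x + x + x; }

// Constrained with a C++20 concept: misuse is reported at the call site.
template <Addable T>
T triple(T x) { return x + x + x; }

struct Widget {};  // has no operator+

int main() {
    triple(std::string{"ab"});          // fine: std::string satisfies Addable
    // triple(Widget{});                // short error: constraint not satisfied
    // triple_unconstrained(Widget{});  // long error from inside the body
}
```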

Other drawbacks of C++ are the ever-increasing language complexity and the compiler’s tendency to fight rather than help you. Effective use of LLMs to combat these issues might well save the language in the short term. C++ language evolution is slow, but tool potency is tremendous. Given the sheer amount of existing C++ code in use today, the language is here to stay, and any tool that helps developers work with it is appreciated.

''To me, writing code is a highly creative, educational and enjoyable activity.''

Developer happiness

Using LLMs for code generation also takes away part of the joy of programming for me. To me, writing code is a highly creative, educational and enjoyable activity, honing my skills in the process; having a magic box do the work for me kills this experience to some extent – even manually writing the boring bits and the tests has some educational value.

Ger Cloudt, a fellow educator in the software development space, asserts in his work on software quality that organizational quality, of which developer happiness is a part, is half the story. According to him, organizational quality is key as it enables design, code and product quality. Sure, clean code and architecture are important, but without the right tools, mindset, culture, education and so on, the development process will eventually grind to a halt.

LLMs undoubtedly help in the tools and education department, but there’s more to programming than just producing code like a robot. Part of the craft of software engineering – as with any craft – is experiencing joy and pride in your work and the results you produce. Consider me weird, but it can bring me immense satisfaction to create beautiful-looking code with my own two hands.

Revisiting the state of Rust

In late 2022, Kris van Rens wrote about the rise of the Rust programming language, largely in the same application space dominated by C and C++. Did the traditional systems programming landscape really change or was it all much ado about nothing?

According to the Tiobe index, Python is lonely at the top of “most popular programming languages,” with a score of 23 percent. It’s followed by C++ (10 percent), Java (also 10 percent) and C (9 percent). The index tries to gain insight from what people are searching for in search engines, the assumption being that this provides a measure of popularity. As a relatively young language, Rust scores 14th place with a little over 1 percent.

In a concluding summary, Tiobe CEO Paul Jansen writes about Rust that “its steep learning curve will never make it become the lingua franca of the common programmer, unfortunately.” Mentioning a language’s steep learning curve as the barrier to becoming a big success feels slightly dubious from the perspective of how popular C++ is in combination with its complexity at scale. I also think overestimating and emphasizing a learning curve for a language is selling developers short – many companies adopting Rust in production have already shown it’s very manageable.

''When it comes to learning in general, I always tend to keep a positive attitude: people are much more capable than we might think.''

Unique feat

Over the past years, Rust has established itself as a worthy alternative in the field of production-grade systems programming. It’s successfully demonstrating how a language can be modern, performant and safe at the same time. It steadily releases every six weeks, so there’s always something new there – we’re at v1.85 at the time of writing. New features land when ready and most language or library changes tend to be more piecemeal.

As Rust is growing more mature, its popularity and adoption have been gradually increasing over time. The risk factor to adopt it as a production language of choice has worn off, as can be concluded from many companies reporting about it. Google has been rewriting parts of Android in Rust for improved security, Microsoft is rewriting core Windows libraries in Rust and Amazon has been known to use Rust in its AWS infrastructure for a long time already.

Another unique feat worth mentioning is that Rust is part of the mainline Linux kernel next to C. It must be said that the effort to expand support for Rust across kernel subsystems isn’t without contention, but progress is being made with the blessing of Linus Torvalds. It will be very interesting to see how this experiment will advance.

''One of my main observations is that switching back from Rust to C++ makes me feel as if I’m being flung back into the dark ages of systems software development.''

Happy developers

I’ve been heavily using Rust alongside C++ for several years now. One of my main observations is that switching back from Rust to C++ makes me feel as if I’m being flung back into the dark ages of systems software development. This may sound harsh, but honestly, even when using the leading-edge version, C++23, most coding tasks feel painstakingly hard and limited compared to how they would in Rust. In the early days, I would sometimes miss the ability to directly correlate written code to output machine code as can be done in C++, but this is strictly unnecessary in 99 percent of the cases, and modern compilers are much more competent at optimization than humans anyway.

When it comes to the tooling ecosystem and integration, Rust is on another level altogether and much more up to speed with the web development world today. Whereas the C++ language and compiler often fight me to get things right, Rust’s strictness, type system, sane defaults and borrow checker seem to naturally lead me to the right design decisions: the one contends with you, the other guides you. When my Rust code builds successfully and the tests pass, I can leave the project with the peace of mind that the software won’t crash during runtime and the code can’t easily be broken by a colleague. Also, the Rust macro systems and the excellent-quality package ecosystem, with libraries as well as plugin tools for the build system, make a big difference in productivity.

These and other aspects make Rust extremely nice to work with. They make developers happy. There’s a reason why the Stack Overflow developer survey shows Rust as the most widely desired programming language for nine years in a row now.

Dividends

Rust is very much fit for production use, even in critical systems requiring safety certifications (for example by using the Ferrocene toolchain). I see its adoption as a logical move to enjoy the benefits of memory safety, high productivity and increased developer happiness already today, rather than waiting until the current set of tools is up to speed with the rest of the world. Add to that the cross-pollination of becoming a better developer in any other programming language by learning a new one.

When it comes to learning in general, I always tend to keep a positive attitude: people are much more capable than we might think. Yes, the learning curve for Rust is steeper than that of most other languages, but it’s so well worth it and pays dividends in the long term. I would take a steep learning curve and more sane and strict language rules and guarantees over a life with software memory safety bugs any day of the week.

“Calculations that you should be able to do in five minutes on a beer coaster.”

Erik Manders and Marc Vermeulen take on a leading role in the training “Design Principles for Precision Engineering” (DPPE). The duo takes over from Huub Janssen, who was the face of the training for seven years. Part two of a two-part series: training, trends, and trainers.

When it comes to knowledge sharing within the Eindhoven region, the “Design Principles for Precision Engineering” (DPPE) training is considered one of the crown jewels. The course originated in the 1980s within the Philips Center for Manufacturing Technology (CFT), where the renowned professor Wim van der Hoek laid the foundation with his construction principles. Figures like Rien Koster, Piet van Rens, Herman Soemers, Nick Rosielle, Dannis Brouwer, and Hans Vermeulen built upon it.

The current DPPE course, offered by Mechatronics Academy (MA) through the High Tech Institute, is supported by multiple experts. The lead figures among them have the special task of keeping an eye on industry trends. “Our lead figures signal trends, new topics, and best practices in precision technology,” says Adrian Rankers, a partner at Mechatronics Academy responsible for the DPPE training.

When asked about his ‘fingerprints’ on the DPPE training, Janssen refers to his great inspiration, Wim van der Hoek. “I’m not a lecturer nor a professor with long stories. I like to lay down a case, work on it together, and then discuss it. With Van der Hoek, we would sit around a large white sheet of paper, and then the problems would be laid on the table.”

Virtual Play

Janssen says that as a lead figure, he was able to shape the DPPE training. He chose to give participants more practical assignments and discuss those cases in class. Rankers: “Right from the first morning. After we explain the concept of virtual play, we ask participants to start working with it.” Janssen: “Everyone thinks after our explanation: I’ve got it. But when they put the first sketches on paper, it turns out it’s not that simple. That’s the point: because when they do the calculations themselves, it really sticks.”

On the last day of the training, participants are tasked with designing an optical microscope in groups of four. Janssen: “They receive the specifications: the positioning table with a stroke of several millimeters, a specific resolution, stability within a tenth of a micrometer in one minute, etc. Everything covered in this case has been discussed in the days prior: plasticity, friction, thermal center, and more.”

Vermeulen: “The fun part is that people must work together, otherwise, they won’t make it.”

Janssen: “We push four tables together, and the four of them really have to work as a team. Then you see some people reaching for super-stable Zerodur or electromagnetic guidance or an air bearing, and someone else says: ‘Also consider the cost aspect.’”

''With Wim van der Hoek, we would all sit around a large white sheet of paper, and then the problems would be laid on the table.''

Not Easy

Participants experience the difficulty level very differently, regardless of their educational background, Janssen observes: “It depends on their prior knowledge, but it’s challenging for everyone. People are almost always highly educated, but when they need to come up with a design, they often don’t know whether to approach it from the left or right.”

However, he believes it’s not rocket science. “It’s not complex. It’s about calculations that you should be able to do in five minutes on a beer coaster.”

All four of them agree that it’s about getting a feel for the material. “You should also be able to quantify it, quickly calculate it,” emphasizes Vermeulen.

Janssen offers a simple thought experiment: “Take two rubber bands. Hold them parallel and pull them. Then knot them in series and pull again. What’s the difference? What happens? Where do you have to pull hardest to stretch them a few centimeters? Not everyone has an intuitive grasp of that.”
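
For convenience, the back-of-the-envelope answer, assuming each band behaves as a linear spring with stiffness k: in parallel the stiffnesses add, in series the compliances add.

k_parallel = k + k = 2k,   1/k_series = 1/k + 1/k, so k_series = k/2

For the same few centimeters of stretch, the parallel pair therefore takes four times the force of the series pair. That factor of four is exactly the kind of number the trainers want participants to have at their fingertips.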

Rankers: “It’s a combination of creativity and analytical ability. You have to come up with something, then do some rough calculations to see how it works out. Some people approach it analytically, others can construct wonderfully. They may not know exactly why it works, but they have a great feel for it.”

Calculation Tools

Creativity and design intuition cannot be replaced by calculation tools, they all agree. “You can let a computer do the calculations,” says Janssen, “but then you still have to assess it. What if it’s not right? There are thousands of parameters you can tweak. It’s about feeling for construction, knowing where the pain points are. You don’t need a calculation program for that.”

''For every design question, you must go all the way back to the beginning, keep your feet on the ground, and start simple.''

Manders: “We talk about the proverbial beer coaster because you want to make an initial sketch or calculation in a few minutes. If you let a computer calculate, you’re busy for days. Building an initial model takes a long time. But a good constructor can put that calculation on paper in a few minutes. If you then spend an hour on it, you have a good sense of which direction it’s going. I think that’s the core of the construction principles course: simple calculations, not too complicated, choose a direction, and see where it goes.”

White Sheet of Paper

Manders observes that highly analytical people are often afraid to put the first lines on a blank sheet of paper, to start with a concept. “Often, they are so focused on the details that they get stuck immediately. Creatives start drawing and see where it goes.”

For Manders, giving the training is a way to stay connected with the field of construction. “In my career, I’ve expanded into more areas, also towards mechatronics. But my anchor point is precision mechanics. By teaching, I can deepen my knowledge and tell people about the basics. It sharpens me as well. Explaining construction principles in slightly different ways helps me in my coaching job.”

He often learns new things during training. “Then I get questions that make me really think. If it’s really tough, I’ll come back to it outside the course. I’ll puzzle it out at home and prepare a backup slide for the next time.”

Vermeulen says he gets a lot of satisfaction from training a new generation of technicians. “That gives me energy. For the current growth in high-tech, it’s also necessary to share knowledge. That applies to ASML, but also to VDL and other suppliers. If we don’t pass on our knowledge, we’ll all hit a wall.”

''We could emphasize considering the costs of production methods more.''

Complacency

Janssen observes that a certain bias or complacency is common among designers. “When there are many ASML participants in the class, they immediately pull out a magnetic bearing when we ask for frictionless movement. But in some cases, an air bearing or two rollers will do. I’m exaggerating, but designers sometimes have a bias because of their own experience or work environment. With every design question, they really need to go back to the basics, feet on the ground, and start simple.”

Vermeulen: “The simplest solution is usually the best. Many designers aren’t trained that way. I often see copying behavior. But the design choice they see their neighbor make is not necessarily the best solution for their own problem. You could perfectly well use a steel plate instead of a complex leaf spring. It works both ways, but if you choose the expensive option, you’d better have a good reason.”

Quarter

“It’s always fun to see how Marc starts,” says Rankers about Vermeulen’s approach in training. “When he talks about air bearings, he asks participants if they use them, what their biggest challenge is, where they run into problems. In a quarter of an hour, he explores the topic and knows what’s familiar to them. Who knows a lot, who knows nothing, or who will be working with it in a project soon.”

Vermeulen: “In my preview, I go over the entire material without diving deep into it. That process gives me energy. In fact, the whole class is motivated, but the challenge is to really engage them at the start. You don’t know each other yet. But I want to be able to read them, so to speak, to get them involved. They need to be eager, on the edge of their seats.”

So it’s not about the slides, Vermeulen emphasizes once again. “It’s about participants coming with their own questions. They all have certain things in mind and are wondering how to make it work.” That’s the reason for the extensive round of questions at the start. “I ask about the different themes they’re encountering. Then I use that as a framework. When a slide about a topic they mentioned comes up, I go into it a bit. That makes it much easier for them to follow. They stay focused.”

Basic Training

DPPE is a basic training. Manders and Vermeulen don’t expect major changes in the material covered, though they see opportunities to bring the content more up to date.

However, participants must still learn fundamental knowledge and principles. Janssen on stiffness, play, and friction—the topics he teaches: “I spend a day and a half on those, but they’re three crucial things. If you don’t grasp these, you’ll never be a good designer. That’s the foundation.” Concepts like passive damping come up briefly, but that’s a complex topic. No wonder Mechatronics Academy offers a separate three-day training for that.

The “degrees of freedom” topic that Manders teaches is another fundamental element. “That just takes some time. You have to go through it,” says Manders.

Vermeulen: “Then comes the translation to hardware. Once participants are familiar with spark erosion, they need to have the creativity to turn to cheaper solutions in some cases. We could emphasize the critical assessment of production method costs more. If you get a degree of freedom in one system with spark erosion, you shouldn’t automatically reach for this expensive production method next time. We could delve more into that translation to hardware. It’s also good to strive for simplicity there.”

''The core is simple calculations, not too complicated, choose a direction and see where it leads.''

Overdetermined

By the way, Wim van der Hoek also looked critically at costs. Rankers: “A great statement from him was that many costs in assembly are caused by things being overdetermined.”

The terms “determined” or “overdetermined” in precision construction essentially refer to this: A rigid body has six degrees of freedom (3 translations and 3 rotations) that fully define its position and orientation. If you want to move that object in one direction using an actuator, you need to fix the other degrees of freedom with a roller bearing, air bearing, or leaf spring configuration.
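
A simple counting rule makes this concrete (a standard textbook formulation, added here for clarity): the degrees of freedom left over equal six minus the number of independent constraints. A body that must move in exactly one direction therefore needs 6 − 1 = 5 constraints; every constraint beyond that fifth one is overdetermination.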

If you as a designer choose a configuration of constraints that fixes more than five degrees of freedom, the constraints may interfere with each other. Rankers: “That’s called statically overdetermined, and you might get lucky if it works, as long as everything is neatly aligned. The people doing that have ‘golden hands,’ as Wim van der Hoek put it. But that neat alignment must not change, for example due to thermal expansion differences.” Gradients and differences in expansion between the various components play an especially big role.

Rankers: “Of course, it’s impossible to perfectly align everything. It also changes over time during use. So internal forces arise within the object you wanted to hold or position due to the ‘fighting’ between the constraints. If that object is a delicate piece of optics that must not deform, you’ve got a big problem. That means you need to avoid overdetermination in ultra-precision machines.”

Vermeulen: “So if you design it to be better determined, it’s easier to assemble, and that gives you a bridge to costs.”

Rankers also notes that the cost aspect should receive more attention than before. He thinks guest speakers could enrich the training with practical examples, showing affordable and expensive versions of a solution. Vermeulen immediately offers an example where you need to guide a lens. “If you make a normal linear guide, the lens sinks a little on the nanometer scale. You can compensate with a second guide, but then the solution might be twice as expensive and twice as complex. Is that really necessary? So as a designer, you can challenge the optics engineer: ‘You want to make it perfect, but that comes at a high cost. We need to pay attention to these things.’”

This article was written by René Raaijmakers, tech editor of Bits&Chips.

The magic of Precision Engineering

Erik Manders and Marc Vermeulen are taking a leading role in the “Design Principles for Precision Engineering” (DPPE) training. The duo is taking over from Huub Janssen, who was the lead for seven years. Part one of a two-part series: trends in construction principles.

Precision technology is not a fixed concept; this toolkit for high-tech engineers evolves over time. To gain insight into this, High Tech Systems magazine invited Huub Janssen, Erik Manders, Adrian Rankers, and Marc Vermeulen for a discussion about the precision world, the changing trends and requirements in high-tech, and what it’s like to work in this field. In the second part, we will delve into the impact this has on the Design Principles for Precision Engineering (DPPE) training.

Like Janssen, Manders and Vermeulen have been active in high-tech for decades, although their roles and interests differ. Janssen is the owner of a high-tech engineering firm and was the figurehead of the DPPE training for seven years. The new duo setting the broad direction now works at ASML, Manders as Principal Systems Architect for Mechatronics, and Vermeulen as Principal Mechanical System Architect. Adrian Rankers, who previously worked as Head of Mechatronics Research at Philips CFT, is now a partner at Mechatronics Academy (MA) and is responsible for the DPPE training that MA offers through the High Tech Institute.

 

“Thirty years ago, positioning to the micrometer was a field from another planet,” said Janssen in 2019 when he became the face of the DPPE. When he graduated in the mid-eighties, designers were still working with micrometers. “Over the years, this has shifted to nanometers,” he observes today.

Since the early nineties, with his company JPE, he has been developing mechatronic modules for high-tech, scientific instruments for research, and more recently, systems for quantum computers. “If you talk to those physicists now, they talk about picometers without blinking an eye. To me, that almost feels philosophical.”

Erik Manders and Marc Vermeulen have been involved as trainers in the Design Principles for Precision Engineering training for years. The training was originally developed at the Philips Center for Manufacturing Technology (CFT), where both started their careers. Vermeulen has been part of a group of DPPE trainers at Mechatronics Academy for several years. Manders taught the course for many years with Herman Soemers at Philips Engineering Services, until the mechatronics group of this activity was transferred to ASML in 2023.

Not straightforward

The concept of precision technology is difficult to define. It’s a toolbox that offers designers significant room for creativity. Give ten designers the same problem, and you’ll receive different solutions that vary in both direction and detail. The design approach differs greatly depending on the application but is also subject to trends and changing requirements. In a few years, the requirements and approaches may barely change, but look ahead ten years, and the designs and methods that are used to bring them to fruition can be entirely different.

''You keep running into new physical phenomena that previously had no influence and suddenly appear.''

Interferometer Suspension

There is no holy grail, nor are there universal design rules, in precision technology. Best practices differ depending on the market, system, or application. Huub Janssen discovered this when he first joined ASML, fresh out of school. “At first, I learned to build something statically determined from Wim van der Hoek,” he says. “But at ASML, I found that this approach didn’t always work. For the PAS2500 wafer stepper, we initially developed a new interferometer suspension to measure the position of the stage in the x and y directions. This design followed Van der Hoek’s principles, with elastic elements and so forth. But when we tested it, we found that there was no damping. It was reproducible, but everything kept vibrating. It was a disaster. I learned that you can’t just apply certain Van der Hoek construction principles everywhere; you have to know when to use them.”

Increasing Demands

The ever-increasing demands for precision strongly influence design choices. Vermeulen explains, “With increased accuracy, complexity increases. Each time you have to peel away the problem a little further. You continuously encounter new physical phenomena that didn’t matter before but now have an impact. You then need to get to the core: what’s happening physically here?”

Vermeulen gives the example of the application of passive damping on the short-stroke wafer stage of lithographic scanners. “That was quite a hurdle we had to take around 2015, because what you design has to be predictable. If you think in terms of stiffness and mass, that is still possible. But in the beginning, we did not know how a damper would behave.

“Would it age? Creep? We had to understand that completely. That meant modeling how damping affects the dynamics. We couldn’t match that at first, but when we finally got it right, we could match the measurements and the model. Only after we were reasonably sure that we understood it could we take the next step. If you don’t do this properly, it remains guesswork: you can’t predict the behavior well and you will be surprised later.”

Another example is the problems that can arise when increasing productivity. Especially with water-cooled components, it is a challenge to keep this under control. Everyone knows the banging of the water pipe when you quickly close a tap. In the same way, acceleration creates pressure waves in systems with water cooling. “You have to dampen those waves, because pressure pulses cause deformation,” says Vermeulen. “You have to understand how that works.”
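
A standard first estimate of such pressure pulses is the Joukowsky relation, Δp ≈ ρ · c · Δv, with ρ the density of the coolant, c the speed of sound in the filled pipe and Δv the change in flow velocity (a textbook figure, not one from the interview). For water, with ρ ≈ 1,000 kg/m³ and c on the order of 1,000 m/s in a stiff pipe, a velocity change of only 0.1 m/s already gives a pulse of roughly 1 bar, which is more than enough to deform components at the nanometer level.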

Manders adds, “On a micrometer scale, you wouldn’t notice this, but on a nanometer scale, even a glass block deforms if the pressure changes. This is a physical issue at the system level.”

Simplicity

The main approach is to strive for simplicity. This leads to robust and cost-effective constructions. But there’s another important reason to keep things simple. Once a chosen solution is embedded in a product, designers who build on it won’t quickly change that subsystem. “If you opt for complexity, you’ll never be able to remove it,” summarizes Rankers. “If you don’t enforce simplicity from the start, you’ll keep struggling with it. It’ll keep nagging at you.”

Janssen: “If it works, no one dares to touch it. If you build in reserves, no one will later suggest removing them. Because everyone will counter: ‘Are you sure it will still work then?’ You can guess what the outcome will be.”

Vermeulen: “Exactly. No one dares to go back. You start with a design, set up a test rig, and once it has more or less proven itself, you go with it.”

Manders: “You must avoid complex adjustments or calibrations because they will never go away. The project team that comes afterward will say, ‘We’ll just copy this because it works. We’ll do it the same way.’”

These are tough decisions, says Janssen. Design choices can vary greatly and depend on the application and market. “For semiconductor equipment, you want to recalculate everything a hundred times before you build the machine. Designers may build in some reserve to make the construction work. But small margins in various budgets sometimes make a solution impossible or overly complicated. Sometimes you really have to pull out all the stops to achieve that last bit of precision. But once it’s done, you can’t go back.”

At his company JPE, Janssen encourages his designers to sometimes take more risks. “It can often be cheaper. Something thinner and a little less stiff can be finished faster and more cheaply. But you really have to dare to do it.”

Manders: “But sometimes reserve costs almost nothing. By designing smartly, accuracy can often be achieved without going through many extra manufacturing steps. For example, by smartly looking at whether you can mill multiple surfaces in one setup and take advantage of today’s highly accurate milling machines. In any case, it’s important to develop a feel for it.”

''The process of creating a design is magical. You just can’t design the more complex modules alone.''

System architect

Manders started at Philips CFT as a designer. In recent years, he had a more coaching role as a systems architect in the mechatronics department of Philips Engineering Services, which transitioned to ASML in 2023, working with a team of about a hundred colleagues and technicians at suppliers. “Yes, then you’re in a lot of reviews.”

He sees his role as “maintaining the overview between the disciplines.” “I try to be the cement between the bricks. In the end, it has to function. That’s the game.”

Twenty Balls

Janssen chose to start his own company early in his career, Janssen Precision Engineering, later JPE. Manders and Vermeulen, on the other hand, work in a larger organization where they must coordinate with many colleagues and suppliers. “I have to keep twenty balls in the air, all involving challenging technology,” says Janssen, who also sees his job as a hobby. “Meanwhile, I have to look at what the market needs. We’re not a large company, but we have a significant impact worldwide.”

What’s it like in a much larger organization like ASML? Vermeulen says, “Someone who just joined will be working on a very small part. The challenge is to help them understand how their contribution fits into the bigger picture.”

Manders adds, “Thousands of people work on our machines. You can’t immediately grasp it as a newcomer. The complexity is overwhelming.”

The founders at ASML, according to Manders, had the advantage of starting with simpler devices. “They could understand those better, and that was their anchor point when the machines became more complex. People who join later can’t immediately see the whole picture; at first, they can’t see the forest for the trees. They have to grow into it and discover the context over time.”

Conductor

In such a large team, everyone has their role. “What the servologists and flow dynamics experts in my team calculate, I couldn’t do myself,” says Manders, who sees himself more as a conductor. “I try to give less experienced colleagues direction and a feel for the context. Why are we doing this? Where are we heading? You try to make the team play together and create something beautiful. But a good orchestra essentially plays on its own.”

Rankers adds, “On your own, you can’t accomplish these complex modules. It’s like a football team. The coach doesn’t score goals either.”

Vermeulen recognizes this. “I’m responsible for the technology, but also for how we work together. This is probably half of my time: providing leadership. You have influence over how the team collaborates. As a systems architect, you bring everything together and provide direction. You ask your experts what the best solution is from their perspective, and that leads to a balanced design. There can be a hundred or a hundred and fifty people in a team, but how they work together is key.”

''The most important approach is to strive for simplicity.''

Big projects

Manders regrets not constructing things himself anymore, but he finds his current role just as challenging. “Now, I’m more focused on keeping everything balanced and making system choices in large projects.”

Vermeulen relates to this role as a coach. “It’s about zooming out and zooming in. Keeping an eye on the big picture.”

Manders explains, “Lots of one-on-one discussions, crouching next to colleagues, brainstorming where we need to go. Sometimes you have to zoom out and realize you’re on the wrong track. The approach needs to change entirely.”

Manders refers to this as “the charm of designing”. “All the considerations you make with your team lead to something beautiful if it’s done right. It’s exciting to see it grow from the side as an architect. Sometimes, people come up with very surprising ideas at the coffee machine. The process of creating a design is magical. You just can’t design the more complex modules alone.”

Vermeulen adds, “One plus one equals three. One person says something, which sparks an idea in another person. A third then comes up with something surprising, and so on.”

Janssen concludes, “But eventually, someone needs to choose a direction.”

This article was written by René Raaijmakers, tech editor of Bits&Chips.

Revisiting the state of C++

In late 2022, Kris van Rens wrote about the state of C++ at that time and its challengers. A follow-up after two more years.

In 2022, out of discontent with the evolution process of C++, Google pulled a substantial amount of its resources from working on C++ and the Clang compiler front-end. As an alternative, it announced the long-term project Carbon, a successor language that can closely interoperate with C++. This and subsequent events marked a watershed moment for C++ because it faced serious criticism for the amount of technical debt it had accrued and its relatively slow development pace. From that moment on, it seemed, everybody had a (strong) opinion and loudly advertised it – the amount of critique could no longer be ignored by the C++ committee.

Another aspect of C++ that has been under attack is its lack of memory safety. Memory safety in a programming language refers to the ability to prevent or catch errors related to improper memory access, such as buffer overflows, use-after-free bugs or dangling pointers, through built-in features and guarantees in the language itself. This can be extended to general language safety where all undefined behavior and unspecified semantics are eliminated from the language. Language safety is a concept defined on a spectrum rather than a binary property; some languages are more safe than others. Examples of languages considered safe and still relatively low-level are Swift, Ada and Rust.
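
As a minimal illustration of the kind of bug meant here (my own sketch, not an example from the column), the following C++ program compiles cleanly yet has undefined behavior; a memory-safe language rejects the equivalent code at compile time or stops it at runtime.

```cpp
#include <iostream>

int main() {
    int* p = new int{42};
    delete p;                 // the memory is handed back to the allocator

    // Use-after-free: p now dangles, yet nothing in the language stops us.
    std::cout << *p << '\n';  // undefined behavior

    int* q = new int[4];
    q[4] = 7;                 // buffer overflow: one element past the end
    delete[] q;
}
```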

Following the intense proverbial heat of the summer of 2022, a series of blows in the form of public advisories on memory safety explicitly cast C as well as C++ in a bad light. In late 2022, the NSA first came in with a white paper urging us to move away from C and C++. Then, CISA (the US Cybersecurity and Infrastructure Security Agency) started advocating for a memory safety roadmap. In 2023 and 2024, even the White House and US Consumer Reports proclaimed we should take memory safety more seriously than ever and move to memory-safe languages. There were many more events, but suffice it to say that none of them went unnoticed by the C++ committee.

''C is quite a simple language; it’s easy to learn and get started with. However, it’s very hard to become advanced and proficient at it at scale.''

Admittedly, some of the efforts by C++ committee members to rebut the public attacks came across as slightly contemptuous, often almost downplaying memory safety as “only one of the many potential software issues.” This, to me, sounds an awful lot like a logical fallacy. Sure, many things can go wrong, and a safe language isn’t a panacea. However, software development requirements have drastically changed over the last forty years and today, memory safety is a solved problem for many other languages usable in the same application domain. Officially, ISO working group 21 (WG21) established study group 23 (SG23) for “Safety and Security,” tasked with finding the best ways to make C++ a safer language while keeping up other constraints like backward compatibility – not so easy.

Undeniable gap

I’ve worked with various programming languages in production simultaneously over the past decades. What really stands out to me from all my experiences with C and C++ is the sheer cognitive load they put onto developers.

C is quite a simple language; it’s easy to learn and get started with. However, it’s very hard to become advanced and proficient at it at scale. As a simple language, it forces you to manually address many important error-prone engineering tasks like memory management and proper error handling – staple aspects of reliable, bug-free software. There’s plenty of low-level control, yes, but the ceremony and cognitive burden to get things right is just staggering.
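
To make that ceremony concrete, here is a small sketch in the C style the column refers to (my own illustration; the task and the function name first_byte are made up): a routine “read a file into a buffer” job where every error path has to release everything acquired before it, with no help from the language if you forget.

```cpp
#include <cstdio>
#include <cstdlib>

// Returns the first byte of the file, or -1 on any failure.
int first_byte(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;

    unsigned char* buf = static_cast<unsigned char*>(std::malloc(4096));
    if (!buf) {
        std::fclose(f);  // forget this and every failed allocation leaks the file handle
        return -1;
    }

    int result = -1;
    if (std::fread(buf, 1, 4096, f) > 0) {
        result = buf[0];
    }

    std::free(buf);      // both resources must be released on this path too
    std::fclose(f);
    return result;
}

int main() { return first_byte("data.bin") >= 0 ? 0 : 1; }
```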

The same largely holds for C++. It does make things better by actually supporting you to write correct code, for example with the standard library featuring smart pointers for memory management. However, the truckloads of language complexity make it hard to use correctly at scale as well.
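
And the same task with the standard library doing the bookkeeping, the kind of support meant above (again my own sketch): the smart pointer and the stream release themselves on every exit path, including when an exception unwinds the stack.

```cpp
#include <fstream>
#include <memory>

// Returns the first byte of the file, or -1 on any failure.
int first_byte(const char* path) {
    std::ifstream f(path, std::ios::binary);
    if (!f) return -1;

    // unique_ptr owns the heap buffer; no delete[] needed anywhere.
    auto buf = std::make_unique<unsigned char[]>(4096);
    f.read(reinterpret_cast<char*>(buf.get()), 4096);
    return f.gcount() > 0 ? buf[0] : -1;
}

int main() { return first_byte("data.bin") >= 0 ? 0 : 1; }
```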

What’s more, all of these aspects of coding in C and C++ come at no guarantee that things are reliable after compilation. This forces developers to study best practices, use compiler sanitizers and static analyzers and resort to extensive testing, just to be more sure that all is fine. Of course, most of these activities should be part of any healthy software developer mindset, but it’s painful to realize that C and C++ offload the requirement of doing this work to the developer, rather than addressing it in the language directly. Developing a language, as any engineering challenge, is an endless succession of tradeoffs, sure, but there’s an undeniable gap between the capabilities of the ‘legacy languages’ and the needs in the software development space right now. Other, newer languages show that it’s possible to meet these requirements while keeping up the performance potential.

''New features improve the language but also inherently increase the already quite substantial complexity, while all the old footguns and dangers like undefined behavior are still there.''

Years away

Most programming languages are constantly being improved over time. If you’re in the C world, however, probably little to nothing is going to change. For many projects today, therefore, it isn’t the right language choice if you want any language safety at all. There are alternatives available, more fit for purpose – if this is possible given your project constraints and preferences.

For C++, it’s a different story. WG21 is now building up to release C++26, which is going to bring huge features to the table, including (most likely) contracts, executors and even static reflection. Game-changers for sure, but mostly addressing language application potential or, in the case of contracts, improving safety and correctness, but still at the cost of manual labor on the part of the developer using it.

New features improve the language but also inherently increase the already quite substantial complexity, while all the old footguns and dangers like undefined behavior are still there. Educating C++ to novices as a trainer remains, in part, an exercise in steering them away from the pitfalls – not really a natural, convenient way to teach or learn.
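
One example of such a pitfall, the kind a trainer has to steer novices around (a sketch of my own, not taken from any particular course): the loop below looks harmless and compiles without a word, but growing the vector can invalidate the iterators that the range-based for loop is using, which is undefined behavior.

```cpp
#include <vector>

int main() {
    std::vector<int> values{1, 2, 3};

    for (int v : values) {
        if (v == 2) {
            values.push_back(42);  // may reallocate and invalidate the
                                   // hidden iterators driving this loop
        }
    }
}
```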

The ‘parallel universe’ of the Circle C++ language demonstrates how the ostensibly clogged syntax and language definition of C++ is still able to pack many other great features like true enumerators, pattern matching, static reflection and even a borrow checker. Unfortunately, this remarkable one-man show run by Sean Baxter isn’t standardized C++ (and vice versa). Chances are slim that any of these excellent features will land in official C++ anytime soon.

Baxter also has a “Safe C++” proposal, presented to the Safety and Security study group in November of last year. In it, he suggests extending C++ with a “rigorously safe subset” of the language that offers the same safety guarantees as the Rust borrow checker does. I do applaud the effort, but time will tell whether, and in what form, this proposal will make its way through the often seemingly sluggish C++ language development process. C++26 design work has mostly converged and C++29 is still a couple of years away. Add to that the implementation/industrialization time of these spec versions before they really land on our virtual workbenches, and it might well be a decade from now – if we’re lucky.

Greener pastures

Not all is lost, though. The C++ committee is doing great work progressing the language, and the current state of the language and ecosystem is better than ever. It’s just that the gap between what C++ can offer today and what has been shown to be possible in systems programming safety and integrated tooling is huge.

Looking forward a couple of years, I don’t see this gap being filled. Meanwhile, languages like Rust and Swift aren’t standing still. There’s a lot of momentum and prior commitment to C++ in the world, making the industry stick to it, but how long can it sustain the technology gap before industries or application domains move to greener pastures?

ASML system engineer awarded first ECP2 Silver certificate

ASML engineer Buket Şahin has become the first person to receive the ECP2 Silver certificate. For Şahin, that’s just a side effect of her passion for learning. She likes to dig into new fields and has become a better system engineer because of it.

When Buket Şahin was doing her bachelor’s degree in mechanical engineering in Istanbul, she joined her university’s solar car and Formula SAE teams, a decision that quickly made her realise the limitations of her knowledge. It put her on a lifelong track of learning about fields different from her own.

“That’s when it all started”, Şahin recalls. “I saw how necessary it was to learn about other disciplines. Obviously, I knew the mechanical domain well, but suddenly I had to work with, for example, electrical engineers. I couldn’t understand what they were talking about, and I really wanted to.”

Şahin eventually graduated with a bachelor’s degree in mechanical engineering and a master’s in mechatronics, in addition to an MBA. She first worked as a systems engineer in the Turkish defence industry, before making the transfer to ASML in 2012. She started in development & engineering on the NXT and NXE platforms; currently, she works as a product safety system engineer for ASML’s EUV machines.

During that journey, she persistently sought out new knowledge, taking scores of courses in fields such as electronics, optics and mechatronics. At the end of 2024, she became the first person to achieve the ECP2 Silver certificate. ECP2 is the European certified precision engineering course programme that emerged from a collaboration between euspen and DSPE. To receive the certificate, she had to take 35 points’ worth of ECP2-certified courses.

“My goal wasn’t to achieve this certification”, she laughs. “But in the end it turned out I was the first one to get it.”

Helicopter view

Şahin’s position at ASML combines system engineering with a focus on safety. “We are responsible for the whole EUV machine from a safety point of view”, she notes. “This includes internal and external alignment, overseeing the program and managing engineers and architects.”

The team in which she works comprises hundreds of people, with a core team of around fifteen system engineers. One of those positions is the safety-specific system engineer role that she fulfils.

''I need to maintain a helicopter view, but also be able to dig into the parts.''

Taking that wider systems perspective, which combines different fields, is something she likes. It allows her to put into practice the different things she learned throughout her career. “I have broad interests”, says Şahin. “I like all kinds of sub-fields of science and engineering. In systems engineering I can pursue that curiosity. That’s also the reason why I like learning and taking courses so much. As a system engineer you need to know a complex system, and the technical background of the parts. You need to be able to dig deeper into the design. You need to be able to dive into the different disciplines, but at the same time maintain a helicopter view. Maintaining that balance is something that I like very much.”

Buket Şahin at ASML’s experience center.

NASA handbook

Şahin started taking courses as soon as she landed at ASML. She realised that she should expand her knowledge beyond what her degrees had taught her. “They were very theoretical”, she admits. “They weren’t very applied. The research and development industry in Turkey isn’t as mature as it is in the Netherlands, particularly for semiconductors. In the Netherlands there’s a very good interaction between universities and industry. I wanted to gain that hands-on knowledge. So I started with courses in mechatronics and electronics. Then I wanted to learn about optics, a very relevant field when you work at ASML. I just continued from there.”

Curiosity is a driving force for Şahin. “Some courses I took because I needed the knowledge in my work, but others were out of curiosity. I wanted to develop myself and learn new things. The courses allowed me to do that.”

Interestingly, though, she didn’t take any courses on systems engineering. “I was mainly looking to gain deeper knowledge in various technical disciplines”, she looks back. “My first job was as a system engineer, but the way the role is defined varies heavily between companies. System engineers in the semiconductor industry require knowledge of the different sub-fields of the industry. An ASML machine is also very complex, so you really need to keep updating what you know. Things can change fast, and you need to stay up to date. That’s why learning is such a big part of my career.”

She did learn how to be a system engineer within ASML, both by learning on the job and by taking internal courses. “There are internal ASML system engineering trainings”, says Şahin. “That’s why I didn’t need external courses. Also, I had already learned the field from the NASA Systems Engineering Handbook back in Turkey. That’s also the methodology that ASML uses.”

Hands-on knowledge

When Şahin looks back on all the courses she has taken since moving to the Netherlands, it’s the practical ones that stand out. “The most important thing I learned was applied knowledge”, she says. “Going to university taught me the theory, but it’s the day-to-day insights that are important. I particularly like it when courses teach you rules of thumb, pragmatic approaches and examples from the industry itself. That’s the key knowledge for me. It particularly helps when the instructors are from the industry, so they can show us what they worked on themselves.”

Since 2012, learning has also become easier. “When I started, there weren’t as many learning structures to guide you. High Tech Institute today, for example, has an easy-to-access course list. In 2012, however, I had to do much more research; courses weren’t advertised as much and some were only available in Dutch. I had to ask colleagues and find things out for myself. If I had to start today, things would be much easier.”


“If it helps you achieve your goal, it’s very easy to take courses when you’re working at ASML”, says Şahin.

At ASML, they’re happy about Şahin’s new certification and the hunger she shows for learning new things. “My managers always supported me”, says Şahin. “We define development goals, and select the training that would achieve those targets. If it helps you achieve your goal, it’s very easy to take courses when you’re working at ASML.”

''Learning, however, is a goal in itself for me, whether it’s connected to my job or not.''

Şahin is, for now, far from done. For her, the learning never stops. “I just started a master’s programme at KU Leuven. It’s an advanced master’s in safety engineering, and it’s connected to my position at ASML. My short-term goal is to complete this master’s. After that, I want to continue my career here at ASML as a system engineer. Learning, however, is a goal in itself for me, whether it’s connected to my job or not.”

This article was written by Tom Cassauwers, freelancer at Bits&Chips.

 

Software quality is about much more than code

Starting with punch cards in the early 1980s, Ger Cloudt learned valuable lessons about developing good software. The new High Tech Institute trainer shares his insights about the interplay between processes and skills, about measuring software quality and about fostering an organizational culture where engineers can deliver high-quality software.

Ger Cloudt’s first encounter with programming involved the use of punch cards during his electronics studies at Fontys Venlo University of Applied Sciences in the early 1980s. After graduating, he embarked on a career as a digital electronics engineer, focusing on both designing digital circuitry and developing software to control microprocessors. “This was in assembler, and I remember I created truly unstructured spaghetti code,” Cloudt recalls. “Naturally, this made it exceedingly difficult to troubleshoot and fix bugs, teaching me a tough lesson that there must be a better way.”

Fortunately, during his second assignment, Cloudt was paired with an experienced mentor who taught him to begin with structured pseudocode and then convert it into assembler. “This was the first time I experienced that structure can facilitate the creation of robust code and make debugging easier.” The experience eventually led him to transition to software development a few years later.

On May 20, we’re organizing a free webinar, ‘Infamous software failures’, presented by trainer Ger Cloudt. Registration is open.

Process versus skill

Cloudt went on to work at Philips Medical Systems as a software development engineer and later as a software architect, where he learned how processes and skills complement each other. “To execute actions, you need a certain skill level, while to achieve results, actions must be structured by a process. However, the importance of process or skill depends on the type of task. On the one hand, there are tasks like assembly-line work or building Ikea furniture you bought, which involve a strict process but minimal skill requirements. On the other hand, tasks such as painting the Mona Lisa, as Leonardo da Vinci did, rely less on processes but require a high skill level that few possess.”

''I increasingly believed that skill level is more important than processes for software engineers. A process can facilitate applying your skills, but with inadequate skills, no process will help.''

During this period, Cloudt observed a strong emphasis on processes in software development. “This was the period of the Capability Maturity Model’s emergence, aimed at improving software development processes. However, even with processes in place, skills remain essential. In the pursuit of high CMM levels, undervaluing skill is a real danger.” This insight was further reinforced when Cloudt transitioned to management roles at Philips Medical Systems, leading teams of sixty people. “Achieving a specific CMM level quickly turns into a goal in itself, and as Goodhart’s Law states: when a measure becomes a target, it ceases to be a good measure. I increasingly believed that skill level is more important than processes for software engineers. A process can facilitate applying your skills, but with inadequate skills, no process will help.”

Cloudt subsequently learned about the importance of transparency. “In my first quality management role, I had to look at an issue involving the integration of two distinct software stacks. One team developed NFC software, another worked on software for a secure element. Integrating both turned out to be a challenge. When I looked into it more deeply, I discovered that although the teams were testing their software, test failures weren’t monitored systematically. So we created daily updated dashboards showing test results, and the developers discussed the outcomes every day. We even shared the dashboards with the customer. Naturally, everything appeared red initially, but this served as a strong incentive for the developers to improve. Consequently, the project succeeded.”

Learning by sharing

In his role as a software R&D manager at Bosch, Cloudt started to feel the need to share his insights on software quality. He began by sharing articles on the company’s internal social network, as well as on LinkedIn. “I received a lot of positive feedback, particularly from Bosch colleagues,” he says. “So in 2020, I decided to write a book, ‘What is software quality?’. This experience was very enriching, as it made much of my implicit knowledge explicit and revealed gaps in my knowledge as well.”

In a quality committee at Bosch, Cloudt met a young graduate with a master’s degree in quality management. When he asked whether the graduate had taken a course on software quality, the answer was negative. “This prompted me to approach the Engineering Doctorate program at Eindhoven University of Technology, where they invited me to give a guest lecture. Eventually, I became a lecturer for a quality management course.” Cloudt also began speaking about software quality at events, such as a Bits&Chips event in 2021, and he’s currently launching two training programs at High Tech Institute, one for engineers and one for managers. His current role is software quality manager for the Digital Application Platform development at ASML.

Measuring software quality

Software quality as such isn’t measurable, Cloudt maintains, due to the concept’s diversity. “You can measure some specific aspects of software quality, known as ‘modeled quality.’ These include cyclomatic complexity of code, dependencies, code coverage, line count and open bugs. Such metrics are useful, but everyone who sets targets on them should be wary of Goodhart’s Law.”

An essential part of quality remains unmeasurable: transcendent quality. To illustrate this, Cloudt compares it to evaluating a painting. “You can measure paint thickness and canvas dimensions, but you can’t measure the painting’s beauty. The same applies to software quality: you can measure code coverage by your unit tests, but that doesn’t determine whether the tests are good. You need an expert opinion for this, supported by the modeled quality you measure.”

''Never underestimate culture. An organization should foster an environment where software engineers can thrive and deliver excellent design, code and product quality.''

When people think about software quality, they often mention aspects such as modularity, clean code and usability. These are examples of design quality (e.g. modularity, maintainability and separation of concerns), code quality (e.g. clean code, portability and unit tests) and product quality (e.g. usability, security and reliability). However, according to Cloudt, these three types of quality require a frequently overlooked element: organizational quality. “This type of quality determines whether your organization is able to build high-quality software. Aspects such as software craftsmanship, mature processes, collaboration and culture are vital to organizational quality. Never underestimate culture. An organization should foster an environment where software engineers can thrive and deliver excellent design, code and product quality.”

Intended and implemented design

There are several well-known best practices for developing high-quality software, including test-driven development (TDD) and pair programming, alongside static code analysis. Cloudt also adds something less common: static design analysis. “Many people don’t realize that there’s a difference between the intended design and the implemented design of software. Software architects document their intended design in UML models. However, a gap often exists between this intended design and its implementation in code. Keeping this gap small is a best practice. Tools can check for consistency between your code and UML models, issuing warnings when discrepancies arise.”

This gap between intended and implemented design often emerges under time constraints, for example due to project deadlines. “In such cases, you take a shortcut by ‘hacking’ a solution that allows you to meet the deadline, with less emphasis on quality,” Cloudt explains. “This is a deliberate choice to introduce technical debt due to time pressure. While this might be the only immediate solution, addressing this technical debt later is crucial. After the release is delivered, you should set aside some time to develop a proper, high-quality solution. Unfortunately, this doesn’t occur often. Managers should recognize the need to give developers time to reduce this gap and this technical debt to prevent future issues. Through their decisions, managers significantly contribute to organizational quality, directly influencing software quality.”

This article was written by Koen Vervloesem, freelancer for Bits&Chips.

 

Cultivating responsible AI practices in software development

As AI technologies become embedded in software development processes because of their productivity gains, developers face complex security challenges. Join Balázs Kiss as he explores the essential security practices and prompting techniques needed to use AI responsibly and effectively.

The use of artificial intelligence (AI) in software development has been expanding in recent years. As with any technological advancement, this also brings along security implications. Balázs Kiss, product development lead at Hungarian training provider Cydrill Software Security, had already been scrutinizing the security of machine learning before generative AI attracted widespread attention. “While nowadays everyone is discussing large language models, back in 2020 the focus was predominantly on machine learning, with most users being scientists in R&D departments.”

Upon examining the state of the art, Kiss found that many fundamental concepts from the software security world were being ignored. “Aspects such as input validation, access control, supply chain security and preventing excessive resource use are important for any software project, including machine learning. So when I realized people weren’t adhering to these practices in their AI systems, I looked into potential attacks on these systems. As a result, I’m not convinced that machine learning is safe enough to use without human oversight. AI researcher Nicholas Carlini from Google DeepMind even compared the current state of ML security to the early days of cryptography before Claude Shannon, without strong algorithms backed by a rigorous mathematical foundation.”

With the surge in popularity of large language models, Kiss noticed the same fundamental security problems resurfacing. “Even the same names were showing up in research papers. For example, Carlini was involved in designing an attack to automatically generate jailbreaks for any LLM – mirroring adversarial attacks that have been used against computer vision models for a decade.”

Fabricated dependencies

When developers currently use an LLM to generate code, they must remember they’re essentially using an advanced autocomplete function. “The output will resemble code it was trained on, appearing quite convincing. However, that doesn’t guarantee its correctness. For instance, when an LLM generates code that includes a library, it often fabricates a fake name because it’s a word that makes sense in that context. Cybercriminals are now creating libraries with these fictitious names, embedding malware and uploading them to popular code repositories. So if you use this generated code without verifying it, your software may inadvertently execute malware.”

In the US, the National Institute of Standards and Technology (NIST) has outlined seven essential building blocks of responsible AI: validity and reliability, safety, security and resiliency, accountability and transparency, explainability and interpretability, privacy, and fairness with mitigation of harmful bias. “The attack involving fabricated libraries is an example where security and resiliency are compromised, but the other building blocks are equally important for trustworthy and responsible AI. For instance, ‘validity and reliability’ means that results should be consistently correct: getting a correct result one time and a wrong one the next time you ask the LLM to do the same task isn’t reliable.”

''If you’re aware of the type of vulnerabilities you can expect, such as cross-site scripting vulnerabilities in web applications, specify them in your questions.''

As for bias, this is often understood in other domains, such as large language models expressing stereotypical assumptions about the occupations of men and women. However, a dataset of code can also exhibit bias, Kiss explains. “If an LLM is trained solely on open-source code from Github, it could be biased toward code using the same libraries as the code it was trained on, or code with English documentation. This affects the type of code the LLM generates and its performance on code that differs from what it has seen in its training set, possibly doing worse when interfacing with a custom closed-source API.”

Balázs Kiss
Credits: Egressy Orsi Foto

Effective prompting

According to Kiss, many best practices for the responsible use of AI in software development aren’t novel. “Validate user input in your code, verify third-party libraries you use, check for vulnerabilities – this is all common knowledge in the security domain. Many tools are available to assist with these tasks.” You can even use AI to verify AI-generated code, Kiss suggests. “Feed the generated code back into the system and ask it for criticism. Are there any issues with this code? How might they be resolved?” Results of this approach can be quite good, Kiss states, and the more precise your questions are, the better the LLM’s performance. “Don’t merely ask whether the generated code is secure. If you’re aware of the type of vulnerabilities you can expect, such as cross-site scripting vulnerabilities in web applications, specify them in your questions.”

A lot of emerging best practices exist for creating effective prompts, i.e. the questions you present to the LLM. One-shot or few-shot prompting, where you provide one or a few examples of the expected output to the LLM, is a powerful technique for obtaining more reliable results, according to Kiss. “For example, if your code currently processes XML files and you want to switch to JSON, you might simply ask to transform the code to handle JSON. However, the generated code will be much better if you add an example of your data in XML format alongside the same data in JSON format and ask for code that processes the JSON instead.”
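
As a purely invented illustration of that few-shot pattern (the data and wording below don’t come from Kiss’s material), such a prompt could look roughly like this:

Here is a sample of the input our parser currently handles, in XML:
  <reading><sensor>42</sensor><value>3.14</value></reading>
Here is the same data in the JSON format we want to support:
  {"sensor": 42, "value": 3.14}
Now rewrite the parsing function below so it processes the JSON format instead of the XML format: [existing function]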

''With the present state of generative AI, it’s possible to write code without understanding programming. However, if you don’t understand the generated code, how will you maintain it?''

Another useful prompting technique is chain-of-thought prompting – instructing an LLM to show its reasoning process for obtaining an answer, thereby enhancing the result. Kiss has assembled these and other prompting techniques, alongside important pitfalls, in a one-day training on responsible AI in software development at High Tech Institute. “For example, unit tests generated by an LLM are often quite repetitive and hence not that useful. But the right prompts can improve them, and you can also do test-driven development by writing the unit tests yourself and asking the LLM to generate the corresponding code. This method can be quite effective.”
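
As a small, invented sketch of that last workflow (the function name and test cases are hypothetical, not taken from Kiss’s training): the developer writes the tests in main() by hand and asks the LLM to produce an implementation that makes them pass. The body of parse_percentage below merely stands in for what the LLM would generate, so the sketch compiles and runs.

```cpp
#include <cassert>
#include <cctype>
#include <cstddef>
#include <optional>
#include <string>

// In the test-driven workflow, this body would be requested from the LLM;
// it is included here only to make the sketch self-contained.
std::optional<int> parse_percentage(const std::string& text) {
    if (text.size() < 2 || text.back() != '%') return std::nullopt;
    int value = 0;
    for (std::size_t i = 0; i + 1 < text.size(); ++i) {
        if (!std::isdigit(static_cast<unsigned char>(text[i]))) return std::nullopt;
        value = value * 10 + (text[i] - '0');
        if (value > 100) return std::nullopt;  // out of range (also guards against overflow)
    }
    return value;
}

int main() {
    // Hand-written tests that pin down the desired behavior.
    assert(parse_percentage("42%") == 42);
    assert(parse_percentage("0%") == 0);
    assert(parse_percentage("101%") == std::nullopt);  // out of range
    assert(parse_percentage("abc") == std::nullopt);   // not a percentage
    return 0;
}
```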

Here to stay

With all these precautionary measures, one might wonder whether the big promise of AI code generation, increased developer productivity, still holds. “A recent study based on randomized controlled trials confirms that the use of generative AI increases developer productivity by 26 percent,” Kiss notes, with even greater benefits for less experienced developers. Yet, he cautions that this could be a pitfall for junior developers. “With the present state of generative AI, it’s possible to write code without understanding programming. Prominent AI researcher Andrej Karpathy even remarked: ‘The hottest new programming language is English.’ However, if you don’t understand the generated code, how will you maintain it? This leads to technical debt. We don’t know yet what effect the prolonged use of these tools will have on maintainability and robustness.”

Although the use of AI in software development comes with its issues, it’s undoubtedly here to stay, according to Kiss. “Even if it looks like a bubble or a hype today, there are demonstrable benefits, and the technology will become more widely accepted. Many tools that we’re witnessing today will be improved and even built into integrated development environments. Microsoft is already tightly integrating their Copilot in their Visual Studio products, and they’re not alone. However, human oversight will always be necessary; ultimately, AI is merely a tool, like any other tool developers use. And LLMs have inherent limitations, such as their tendency to ‘hallucinate’ – create fabrications. That’s just how they work because of their probabilistic nature, and users must always be aware of this when using them.”

This article was written by Koen Vervloesem, freelancer for Bits&Chips.