AI and the future of systems programming

C++
Kris van Rens looks at the future of systems development and why developer happiness is an important aspect of software engineering

Artificial intelligence in general and large language models (LLMs) in particular are undeniably changing how we work and write code. Especially for learning, explaining, refactoring, documenting and reviewing code, they turn out to be extremely useful.

For me, however, having a chat-style LLM generate production-grade code is still a mixed bag. The carefully engineered prompt for a complex, constrained task often outsizes the resulting code by orders of magnitude, making me question the productivity gains. Sometimes, I find myself iteratively fighting the prompt to generate the right code for me, only to discover that it casually forgot to implement one of my earlier requirements. Other times, the LLM generates code featuring invalid constructs: it hallucinates answers, invariably with great confidence. What’s more, given the way LLMs work, the answers can be completely different every time you input a similar query, or at least highly dependent on the given prompt.

OpenAI co-founder Andrej Karpathy put it well: “In some sense, hallucination is all LLMs do. They’re dream machines.” This seemingly ‘black magic’ behavior of LLMs is slightly incompatible with my inner tech-driven urge to follow a deterministic process. It might be my utter incompetence at prompt engineering, but from where I’m standing, despite the power of generative AI at our fingertips, we still absolutely need to understand what we’re doing rather than blindly trust the correctness of the code generated by these dream machines. The weird vibe-induced feel and idiosyncrasy of LLMs will probably wear off in the future, but I still like to truly understand the code I produce and am responsible for.

Probably, AI in general is going to enable an abstraction shift in future software development, allowing us to design at a higher level of abstraction than we often do nowadays. This might, in turn, diminish the need to write code manually. Yet, I fail to see how using generated code in production is going to work well without the correctness guarantees of rigorous testing and formal verification – this isn’t the reality today.

''An aspect of software engineering where LLMs can make an overall positive difference is interpreting compiler feedback.''

Positive difference

Another application area of LLMs is in-line code completion in an editor/IDE. Even this isn’t an outright success for me. More than once, I’ve been overwhelmed by the LLM-based code completer suggesting a multiline solution of what it thinks I wanted to type. Then, instead of implementing the code idea straight from my imagination, I find myself reading a blob of generated suggestion code, questioning what it does and why. These completions are hit-and-miss and often wrong-foot me. I’ve been experimenting with embedded development for microcontroller units lately and have found that, especially in this context, LLM-based completion just takes guesses, sometimes even making up non-existent general-purpose IO (GPIO) pin numbers as it goes. I do like the combination of code completion LLMs with an AI model that predicts editor movements for refactoring. Refactors are often batches of similar small operations that the models are able to forecast well.

An aspect of software engineering where LLMs can make an overall positive difference is interpreting compiler feedback. C++, for example, is notorious for its hard-to-read and often very long compiler errors. The arrival of concepts in C++20 was supposed to bring a drastic improvement here, but I haven’t seen it happen. Perhaps this is still a work in progress, but until then, we’re forced to deal with complex and often long error messages (sometimes even hundreds of lines in length). Because of their ability to interpret or summarize compiler messages, combined with their educational and generative features, LLMs with a large context window are well suited to processing such feedback, making them a great companion tool for C++ developers. There’s an enormous body of existing C++ code and documentation to learn from, which is a good basis for training an LLM.
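To make the point about concepts concrete, here’s a minimal sketch (the example and names are my own): constraining a template with a C++20 concept lets the compiler report a single unsatisfied requirement at the call site, instead of pages of instantiation errors from deep inside the standard library.

```cpp
#include <algorithm>
#include <iterator>
#include <list>
#include <vector>

// Constrained with a C++20 concept: misuse is reported as one failed
// requirement at the call site instead of pages of template noise.
template <std::random_access_iterator It>
void sort_fast(It first, It last) {
    std::sort(first, last);
}

int main() {
    std::vector<int> v{3, 1, 2};
    sort_fast(v.begin(), v.end());    // fine: vector iterators are random-access

    std::list<int> l{3, 1, 2};
    // sort_fast(l.begin(), l.end()); // error: constraint 'std::random_access_iterator' not satisfied
}
```

In practice, plenty of diagnostics still balloon, which is exactly where an LLM summarizing the compiler output earns its keep.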

Other drawbacks of C++ are the ever-increasing language complexity and the compiler’s tendency to fight rather than help you. Effective use of LLMs to combat these issues might well save the language in the short term. C++ language evolution is slow, but the potential of tooling is tremendous. Given the sheer amount of existing C++ code in use today, the language is here to stay, and any tool that helps developers work with it is appreciated.

''To me, writing code is a highly creative, educational and enjoyable activity.''

Developer happiness

Using LLMs for code generation also takes away part of the joy of programming for me. To me, writing code is a highly creative, educational and enjoyable activity, honing my skills in the process; having a magic box do the work for me kills this experience to some extent – even manually writing the boring bits and the tests has some educational value.

In his work on software quality, fellow software development educator Ger Cloudt asserts that organizational quality – of which developer happiness is a part – is half the story. According to him, organizational quality is key as it enables design, code and product quality. Sure, clean code and architecture are important, but without the right tools, mindset, culture, education and so on, the development process will eventually grind to a halt.

LLMs undoubtedly help in the tools and education department, but there’s more to programming than just producing code like a robot. Part of the craft of software engineering – as with any craft – is experiencing joy and pride in your work and the results you produce. Consider me weird, but it can bring me immense satisfaction to create beautiful-looking code with my own two hands.

Revisiting the state of Rust

C++
In late 2022, Kris van Rens wrote about the rise of the Rust programming language, largely in the same application space dominated by C and C++. Did the traditional systems programming landscape really change or was it all much ado about nothing?

According to the Tiobe index, Python is lonely at the top of “most popular programming languages,” with a score of 23 percent. It’s followed by C++ (10 percent), Java (also 10 percent) and C (9 percent). The index tries to gain insight from what people are searching for in search engines, the assumption being that this provides a measure of popularity. As a relatively young language, Rust ranks 14th, with a little over 1 percent.

In a concluding summary, Tiobe CEO Paul Jansen writes about Rust that “its steep learning curve will never make it become the lingua franca of the common programmer, unfortunately.” Citing a steep learning curve as the barrier to a language becoming a big success feels slightly dubious, given how popular C++ is despite its complexity at scale. I also think overemphasizing a language’s learning curve is selling developers short – many companies adopting Rust in production have already shown it’s very manageable.

''When it comes to learning in general, I always tend to keep a positive attitude: people are much more capable than we might think.''

Unique feat

Over the past years, Rust has established itself as a worthy alternative in the field of production-grade systems programming. It’s successfully demonstrating how a language can be modern, performant and safe at the same time. It releases steadily every six weeks, so there’s always something new – we’re at v1.85 at the time of writing. New features land when they’re ready, and most language or library changes arrive piecemeal.

As Rust grows more mature, its popularity and adoption have been gradually increasing. The perceived risk of adopting it as a production language of choice has worn off, as the many companies reporting on it attest. Google has been rewriting parts of Android in Rust for improved security, Microsoft is rewriting core Windows libraries in Rust and Amazon has long been known to use Rust in its AWS infrastructure.

Another unique feat worth mentioning is that Rust is now part of the mainline Linux kernel alongside C. It must be said that the effort to expand Rust support across kernel subsystems isn’t without contention, but progress is being made with the blessing of Linus Torvalds. It will be very interesting to see how this experiment advances.

''One of my main observations is that switching back from Rust to C++ makes me feel as if I’m being flung back into the dark ages of systems software development.''

Happy developers

I’ve been using Rust heavily alongside C++ for the past several years. One of my main observations is that switching back from Rust to C++ makes me feel as if I’m being flung back into the dark ages of systems software development. This may sound harsh, but honestly, even when using the leading-edge version, C++23, most coding tasks feel painfully hard and limited compared to how they would in Rust. In the early days, I would sometimes miss the ability to directly correlate written code to the output machine code, as can be done in C++, but this is unnecessary in 99 percent of cases, and modern compilers are much more competent at optimization than humans anyway.

When it comes to the tooling ecosystem and integration, Rust is on another level altogether and much more up to speed with the web development world of today. Whereas the C++ language and compiler often fight me to get things right, Rust’s strictness, type system, sane defaults and borrow checker seem to naturally guide me to the right design decisions – fighting versus guiding. When my Rust code builds successfully and the tests pass, I can leave the project with the peace of mind that the software won’t crash at runtime and that the code can’t easily be broken by a colleague. Also, Rust’s macro system and its excellent package ecosystem – libraries as well as plugin tools for the build system – make a big difference in productivity.

These and other aspects make Rust extremely nice to work with. They make developers happy. There’s a reason why the Stack Overflow developer survey has shown Rust as the most admired programming language for nine years in a row now.

Dividends

Rust is very much fit for production use, even in critical systems requiring safety certification (for example by using the Ferrocene toolchain). I see its adoption as a logical move to enjoy the benefits of memory safety, high productivity and increased developer happiness today, rather than waiting until the current set of tools has caught up with the rest of the world. Add to that the cross-pollination effect: learning a new language makes you a better developer in every other programming language you use.

When it comes to learning in general, I always tend to keep a positive attitude: people are much more capable than we might think. Yes, the learning curve for Rust is steeper than that of most other languages, but it’s well worth it and pays dividends in the long term. I would take a steep learning curve and saner, stricter language rules and guarantees over a life with memory safety bugs any day of the week.

''Calculations that you should be able to do in five minutes on a beer mat.''

precision engineering
Erik Manders and Marc Vermeulen take on a leading role in the training “Design Principles for Precision Engineering” (DPPE). The duo takes over from Huub Janssen, who was the face of the training for seven years. Part two of a two-part series: training, trends, and trainers.

When it comes to knowledge sharing within the Eindhoven region, the “Design Principles for Precision Engineering” (DPPE) training is considered one of the crown jewels. The course originated in the 1980s within the Philips Center for Manufacturing Technology (CFT), where the renowned professor Wim van der Hoek laid the foundation with his construction principles. Figures like Rien Koster, Piet van Rens, Herman Soemers, Nick Rosielle, Dannis Brouwer, and Hans Vermeulen built upon it.

The current DPPE course, offered by Mechatronics Academy (MA) through the High Tech Institute, is supported by multiple experts. The lead figures among them have the special task of keeping an eye on industry trends. “Our lead figures signal trends, new topics, and best practices in precision technology,” says Adrian Rankers, a partner at Mechatronics Academy responsible for the DPPE training.

When asked about his ‘fingerprints’ on the DPPE training, Janssen refers to his great inspiration, Wim van der Hoek. “I’m not a lecturer nor a professor with long stories. I like to lay down a case, work on it together, and then discuss it. With Van der Hoek, we would sit around a large white sheet of paper, and then the problems would be laid on the table.”

Virtual play

Janssen says that as a lead figure, he was able to shape the DPPE training. He chose to give participants more practical assignments and discuss those cases in class. Rankers: “Right from the first morning. After we explain the concept of virtual play, we ask participants to start working with it.” Janssen: “Everyone thinks after our explanation: I’ve got it. But when they put the first sketches on paper, it turns out it’s not that simple. That’s the point: when they do the calculations themselves, it really sticks.”

On the last day of the training, participants are tasked with designing an optical microscope in groups of four. Janssen: “They receive the specifications: a positioning table with a stroke of several millimeters, a specific resolution, stability within a tenth of a micrometer over one minute, etc. Everything covered in this case has been discussed in the days prior: plasticity, friction, thermal center, and more.”

Vermeulen: “The fun part is that people must work together; otherwise, they won’t make it.”

Janssen: “We push four tables together, and the four of them really have to work as a team. Then you see some people reaching for super-stable Zerodur or electromagnetic guidance or an air bearing, and someone else says: ‘Also consider the cost aspect.’”

''With Wim van der Hoek, we would all sit around a large white sheet of paper, and then the problems would be laid on the table.''

Not easy

Participants experience the difficulty level very differently, regardless of their educational background, Janssen observes: “It depends on their prior knowledge, but it’s challenging for everyone. People are almost always highly educated, but when they need to come up with a design, they often don’t know whether to approach it from the left or right.”

However, he believes it’s not rocket science. “It’s not complex. It’s about calculations that you should be able to do in five minutes on a beer mat.”

All four of them agree that it’s about getting a feel for the material. “You should also be able to quantify it, quickly calculate it,” emphasizes Vermeulen.

Janssen offers a simple thought experiment: “Take two rubber bands. Hold them parallel and pull them. Then knot them in series and pull again. What’s the difference? What happens? Where do you have to pull hardest to stretch them a few centimeters? Not everyone has an intuitive grasp of that.”
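A quick beer-mat check of that thought experiment, treating each band as a linear spring of stiffness k (my simplification): in parallel, the stiffnesses add, so k_total = k1 + k2 = 2k; in series, the compliances add, so 1/k_total = 1/k1 + 1/k2, giving k_total = k/2. For the same few centimeters of stretch, the parallel pair therefore takes four times the pulling force of the series pair.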

Rankers: “It’s a combination of creativity and analytical ability. You have to come up with something, then do some rough calculations to see how it works out. Some people approach it analytically, others can construct wonderfully. They may not know exactly why it works, but they have a great feel for it.”

Calculation tools

Creativity and design intuition cannot be replaced by calculation tools, they all agree. “You can let a computer do the calculations,” says Janssen, “but then you still have to assess it. What if it’s not right? There are thousands of parameters you can tweak. It’s about feeling for construction, knowing where the pain points are. You don’t need a calculation program for that.”

''For every design question, you must go all the way back to the beginning, keep your feet on the ground, and start simple.''

Manders: “We talk about the proverbial beer mat because you want to make an initial sketch or calculation in a few minutes. If you let a computer calculate, you’re busy for days. Building an initial model takes a long time. But a good design engineer can put that calculation on paper in a few minutes. If you then spend an hour on it afterwards, you have a good sense of which direction it’s going. I think that’s the core of the construction principles course: simple calculations, not too complicated, choose a direction, and see where it goes.”

White sheet of paper

Manders observes that highly analytical people are often afraid to put the first lines on a blank sheet of paper – to start with a concept. “Often, they’re so focused on the details that they get stuck immediately. Creatives start drawing and see where it goes.”

For Manders, training is a way to stay connected with the field of construction. “In my career, I’ve expanded into more areas, also towards mechatronics. But my anchor point is precision mechanics. By giving training, I can deepen my knowledge and tell people about the basics. It sharpens me as well. Explaining construction principles in slightly different ways helps me in my coaching job.”

He often learns new things during training. “Then I get questions that make me really think. If it’s really tough, I’ll come back to it outside the course. I’ll puzzle it out at home and prepare a backup slide for the next time.”

Vermeulen says he gets a lot of satisfaction from training a new generation of technicians. “That gives me energy. For the current growth in high-tech, it’s also necessary to share knowledge. That applies to ASML, but also to VDL and other suppliers. If we don’t pass on our knowledge, we’ll all hit a wall.”

''We could emphasize considering the costs of production methods more.''

Complacency

Janssen observes that a certain bias or complacency is common among designers. “When there are many ASML participants in the class, they immediately pull out a magnetic bearing when we ask for frictionless movement. But in some cases, an air bearing or two rollers will do. I’m exaggerating, but designers sometimes have a bias because of their own experience or work environment. With every design question, they really need to go back to the basics, feet on the ground, and start simple.”

Vermeulen: “The simplest solution is usually the best. Many designers aren’t trained that way. I often see copying behavior. But the design choice they see their neighbor make isn’t necessarily the best solution for their own problem. You could perfectly well use a steel plate instead of a complex leaf spring. It works both ways, but if you choose the expensive option, you’d better have a good reason.”

Quarter

“It’s always fun to see how Marc starts,” says Rankers about Vermeulen’s approach in training. “When he talks about air bearings, he asks participants if they use them, what their biggest challenge is, where they run into problems. In a quarter of an hour, he explores the topic and knows what’s familiar to them. Who knows a lot, who knows nothing, or who will be working with it in a project soon.”

Vermeulen: “In my preview, I go over the entire material without diving deep into it. That process gives me energy. In fact, the whole class is motivated, but the challenge is to really engage them at the start. You don’t know each other yet. But I want to be able to read them, so to speak, to get them involved. They need to be eager, on the edge of their seats.”

So it’s not about the slides, Vermeulen emphasizes once again. “It’s about participants coming with their own questions. They all have certain things in mind and are wondering how to make it work.” That’s the reason for the extensive round of questions at the start. “I ask about the different themes they’re encountering. Then I use that as a framework. When a slide about a topic they mentioned comes up, I go into it a bit. That makes it much easier for them to follow. They stay focused.”

Basic training

DPPE is a basic training. Manders and Vermeulen don’t expect major changes in the material covered, though they see opportunities to bring the content more up to date.

However, participants must still learn fundamental knowledge and principles. Janssen on stiffness, play, and friction – the topics he teaches: “I spend a day and a half on those, but they’re three crucial things. If you don’t grasp these, you’ll never be a good designer. That’s the foundation.” Concepts like passive damping come up briefly, but that’s a complex topic. No wonder Mechatronics Academy offers a separate three-day training for that.

The “degrees of freedom” topic that Manders teaches is another fundamental element. “That just takes some time. You have to go through it,” says Manders.

Vermeulen: “Then comes the translation to hardware. Once participants are familiar with spark erosion, they need to have the creativity to turn to cheaper solutions in some cases. We could emphasize the critical assessment of production method costs more. If you get a degree of freedom in one system with spark erosion, you shouldn’t automatically reach for this expensive production method next time. We could delve more into that translation to hardware. It’s also good to strive for simplicity there.”

''The core is simple calculations, not too complicated, choose a direction and see where it leads.''

Overdetermined

By the way, Wim van der Hoek also looked critically at costs. Rankers: “A great statement from him was that many costs in assembly are caused by things being overdetermined.”

The terms “determined” and “overdetermined” in precision engineering essentially refer to this: a rigid body has six degrees of freedom (three translations and three rotations) that fully define its position and orientation. If you want to move that object in one direction using an actuator, you need to constrain the other five degrees of freedom with, for example, a roller bearing, air bearing or leaf spring configuration.

If you as a designer choose a configuration of constraints that fixes more than those five degrees of freedom, the constraints may interfere with each other. Rankers: “That’s called statically overdetermined, and you might get lucky and have it work, as long as everything is neatly aligned. The people doing that have ‘golden hands,’ as Wim van der Hoek put it. But that neat alignment mustn’t change, for example through thermal expansion differences.” The gradients and differences in expansion of the various components play an especially big role.

Rankers: “Of course, it’s impossible to perfectly align everything. It also changes over time during use. So internal forces arise within the object you wanted to hold or position due to the ‘fighting’ between the constraints. If that object is a delicate piece of optics that must not deform, you’ve got a big problem. That means you need to avoid overdetermination in ultra-precision machines.”

Vermeulen: “So if you design it to be better determined, it’s easier to assemble, and that brings you straight back to costs.”

Rankers also notes that the cost aspect should receive more attention than before. He thinks guest speakers could enrich the training with practical examples, showing affordable and expensive versions side by side. Vermeulen immediately offers an example where you need to guide a lens. “If you make a normal linear guide, the lens sinks a little on the nanometer scale. You can compensate with a second guide, but then the solution might be twice as expensive and twice as complex. Is that really necessary? So as a designer, you can challenge the optics engineer: ‘You want to make it perfect, but that comes at a high cost. We need to pay attention to these things.’”

This article was written by René Raaijmakers, tech editor of Bits&Chips.

The magic of precision engineering

precision engineering
Erik Manders and Marc Vermeulen are taking a leading role in the “Design Principles for Precision Engineering” (DPPE) training. The duo is taking over from Huub Janssen, who was the lead for seven years. Part one of a two-part series: trends in construction principles.

Precision technology is not a fixed concept; this toolkit for high-tech engineers evolves over time. To gain insight into this, High Tech Systems magazine invited Huub Janssen, Erik Manders, Adrian Rankers, and Marc Vermeulen for a discussion about the precision world, the changing trends and requirements in high-tech, and what it’s like to work in this field. In the second part, we will delve into the impact this has on the Design Principles for Precision Engineering (DPPE) training.

Like Janssen, Manders and Vermeulen have been active in high-tech for decades, although their roles and interests differ. Janssen is the owner of a high-tech engineering firm and was the figurehead of the DPPE training for seven years. The new duo setting the broad direction now works at ASML, Manders as Principal Systems Architect for Mechatronics, and Vermeulen as Principal Mechanical System Architect. Adrian Rankers, who previously worked as Head of Mechatronics Research at Philips CFT, is now a partner at Mechatronics Academy (MA) and is responsible for the DPPE training that MA offers through the High Tech Institute.


“Thirty years ago, positioning to the micrometer was a field from another planet,” said Janssen in 2019 when he became the face of the DPPE. When he graduated in the mid-eighties, designers were still working with micrometers. “Over the years, this has shifted to nanometers,” he observes today.

Since the early nineties, with his company JPE, he has been developing mechatronic modules for high-tech, scientific instruments for research, and more recently, systems for quantum computers. “If you talk to those physicists now, they talk about picometers without blinking an eye. To me, that almost feels philosophical.”

Erik Manders and Marc Vermeulen have been involved as trainers in the Design Principles for Precision Engineering training for years. The training was originally developed at the Philips Center for Manufacturing Technology (CFT), where both started their careers. Vermeulen has been part of a group of DPPE trainers at Mechatronics Academy for several years. Manders taught the course for many years with Herman Soemers at Philips Engineering Services, until this activity’s mechatronics group was transferred to ASML in 2023.

Not straightforward

The concept of precision technology is difficult to define. It’s a toolbox that offers designers significant room for creativity. Give ten designers the same problem and you’ll receive different solutions, varying in both direction and detail. The design approach differs greatly depending on the application but is also subject to trends and changing requirements. Over a few years, the requirements and approaches may barely change, but look ahead ten years and the designs and the methods used to bring them to fruition can be entirely different.

''You keep running into new physical phenomena that previously had no influence and suddenly appear.''

Interferometer suspension

There’s no holy grail, nor are there universal design rules, in precision technology. Best practices differ depending on the market, system or application. Huub Janssen discovered this when he first joined ASML fresh out of school. “At first, I learned to build something statically determined from Wim van der Hoek,” he says. “But at ASML, I found that this approach didn’t always work. For the PAS2500 wafer stepper, we initially developed a new interferometer suspension to measure the position of the stage in the x and y directions. This design followed Van der Hoek’s principles, with elastic elements and so forth. But when we tested it, we found that there was no damping. It was reproducible, but everything kept vibrating. It was a disaster. I learned that you can’t just apply certain Van der Hoek construction principles everywhere; you have to know when to use them.”

Increasing demands

The ever-increasing demands for precision strongly influence design choices. Vermeulen explains, “With increased accuracy, complexity increases. Each time, you have to peel the problem back a little further. You continuously encounter new physical phenomena that didn’t matter before but now have an impact. You then need to get to the core: what’s happening physically here?”

Vermeulen gives the example of the application of passive damping on the short-stroke wafer stage of lithographic scanners. “That was quite a hurdle we had to clear around 2015, because what you design has to be predictable. If you think in terms of stiffness and mass, that’s still possible. But in the beginning, we didn’t know how a damper would behave. Would it age? Creep? We had to understand that completely. That meant modeling how damping affects the dynamics. We couldn’t match that at first, but when we finally got it right, we could match the measurements and the model. Only after we were reasonably sure that we understood it could we take the next step. If you don’t do this properly, it remains guesswork: you can’t predict the behavior well and you’ll be surprised later.”

Another example is the problems that can arise when increasing productivity. Especially with water-cooled components, it’s a challenge to keep this under control. Everyone knows the knocking in the water pipes when you quickly close a tap. In the same way, acceleration creates pressure waves in systems with water cooling. “You have to dampen those waves, because pressure pulses cause deformation,” says Vermeulen. “You have to understand how that works.”

Manders adds, “On a micrometer scale, you wouldn’t notice this, but on a nanometer scale, even a glass block deforms if the pressure changes. This is a physical issue at the system level.”
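For a rough feel for the magnitudes involved (my own back-of-the-beer-mat estimate, not from the interview): the classic water-hammer relation says that a sudden change in flow velocity Δv produces a pressure pulse of roughly Δp ≈ ρ·c·Δv, with ρ the coolant density and c the speed of sound in the liquid. For water (ρ ≈ 1,000 kg/m³, c ≈ 1,500 m/s), a velocity change of only 0.01 m/s already gives a pulse in the order of 0.15 bar – plenty to matter at the nanometer scale Manders describes.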

Simplicity

The main approach is to strive for simplicity. This leads to robust and cost-effective constructions. But there’s another important reason to keep things simple. Once a chosen solution is embedded in a product, designers who build on it won’t quickly change that subsystem. “If you opt for complexity, you’ll never be able to remove it,” summarizes Rankers. “If you don’t enforce simplicity from the start, you’ll keep struggling with it. It’ll keep nagging at you.”

Janssen: “If it works, no one dares to touch it. If you build in reserves, no one will later suggest removing them, because everyone will counter: ‘Are you sure it will still work then?’ You can guess what the outcome will be.”

Vermeulen: “Exactly. No one dares to go back. You start with a design, set up a test rig, and once it has more or less proven itself, you go with it.”

Manders: “You must avoid complex adjustments or calibrations because they will never go away. The project team that comes afterward will say, ‘We’ll just copy this because it works. We’ll do it the same way.'”

These are tough decisions, says Janssen. Design choices can vary greatly and depend on the application and market. “For semiconductor equipment, you want to recalculate everything a hundred times before you build the machine. Designers may build in some reserve to make the construction work. But small margins in various budgets sometimes make a solution impossible or overly complicated. Sometimes you really have to pull out all the stops to achieve that last bit of precision. But once it’s done, you can’t go back.”

At his company JPE, Janssen encourages his designers to sometimes take more risks. “It can often be cheaper. Something thinner and a little less stiff can be finished faster and more cheaply. But you really have to dare to do it.”

Manders: “But sometimes reserve costs almost nothing. By designing smartly, accuracy can often be achieved without many extra manufacturing steps – for example, by looking at whether you can mill multiple surfaces in one setup and taking advantage of today’s highly accurate milling machines. In any case, it’s important to develop a feel for it.”

''The process of creating a design is magical. You just can’t design the more complex modules alone.''

System architect

Manders started at Philips CFT as a designer. In recent years, he has had a more coaching role as a systems architect in the mechatronics department of Philips Engineering Services, which transitioned to ASML in 2023, working with a team of about a hundred colleagues and technicians at suppliers. “Yes, then you’re in a lot of reviews.”

He sees his role as “maintaining the overview between the disciplines.” “I try to be the cement between the bricks. In the end, it has to function. That’s the game.”

Twenty balls

Janssen chose to start his own company early in his career: Janssen Precision Engineering, later JPE. Manders and Vermeulen, on the other hand, work in a larger organization where they must coordinate with many colleagues and suppliers. “I have to keep twenty balls in the air, all involving challenging technology,” says Janssen, who also sees his job as a hobby. “Meanwhile, I have to look at what the market needs. We’re not a large company, but we have a significant impact worldwide.”

What’s it like in a much larger organization like ASML? Vermeulen says, “Someone who just joined will be working on a very small part. The challenge is to help them understand how their contribution fits into the bigger picture.”

Manders adds, “Thousands of people work on our machines. You can’t immediately grasp it as a newcomer. The complexity is overwhelming.”

The founders at ASML, according to Manders, had the advantage of starting with simpler devices. “They could understand those better, and that was their anchor point when the machines became more complex. People who join later can’t immediately see the whole picture; at first, they can’t see the forest for the trees. They have to grow into it and discover the context over time.”

Conductor

In such a large team, everyone has their role. “What the servologists and flow dynamics experts in my team calculate, I couldn’t do myself,” says Manders, who sees himself more as a conductor. “I try to give less experienced colleagues direction and a feel for the context. Why are we doing this? Where are we heading? You try to make the team play together and create something beautiful. But a good orchestra essentially plays on its own.”

Rankers adds, “You can’t create these complex modules on your own. It’s like a football team. The coach doesn’t score goals either.”

Vermeulen recognizes this. “I’m responsible for the technology, but also for how we work together. This is probably half of my time: providing leadership. You have influence over how the team collaborates. As a systems architect, you bring everything together and provide direction. You ask your experts what the best solution is from their perspective, and that leads to a balanced design. There can be a hundred or a hundred and fifty people in a team, but how they work together is key.”

''The most important approach is to strive for simplicity.''

Big projects

Manders regrets not constructing things himself anymore, but he finds his current role just as challenging. “Now, I’m more focused on keeping everything balanced and making system choices in large projects.”

Vermeulen relates to this role as a coach. “It’s about zooming out and zooming in. Keeping an eye on the big picture.”

Manders explains, “Lots of one-on-one discussions, crouching next to colleagues, brainstorming where we need to go. Sometimes you have to zoom out and realize you’re on the wrong track. The approach needs to change entirely.”

Manders refers to this as “the charm of designing”. “All the considerations you make with your team lead to something beautiful if it’s done right. It’s exciting to see it grow from the side as an architect. Sometimes, people come up with very surprising ideas at the coffee machine. The process of creating a design is magical. You just can’t design the more complex modules alone.”

Vermeulen adds, “One plus one equals three. One person says something, which sparks an idea in another person. A third then comes up with something surprising, and so on.”

Janssen concludes, “But eventually, someone needs to choose a direction.”

This article was written by René Raaijmakers, tech editor of Bits&Chips.

Infamous software failures

Free webinar

On May 20, 2025, from 3 to 4 PM, High Tech Institute organizes a free webinar on infamous software failures. The webinar is presented by Ger Cloudt, trainer of the new “Software quality for engineers” and “Understanding software quality for managers” courses and author of the book “What is Software Quality?”

Objective
Do you realize how much our society depends on software? Have you ever thought about the role of software in your personal life? A quick look at your immediate environment will convince you that software is ubiquitous. Your smartphone is run by software, your computer is run by software, your vacuum cleaner is run by software, your television is run by software, your car is run by software. Can you imagine any device that isn’t influenced by software in some shape or form? What if the quality of that software is inferior? Let’s have a look at three completely different infamous software failures in this session!

Target audience
Everybody interested in what can go wrong in software.

Program

  • What is software quality?
  • The disappearance of the Mars Climate Orbiter
  • Unintended acceleration causing a person to die
  • CrowdStrike update crippling airports, train stations, hospitals and more
  • Conclusion

Trainer
Ger Cloudt


Revisiting the state of C++

C++
In late 2022, Kris van Rens wrote about the state of C++ at that time and its challengers. A follow-up after two more years.

In 2022, out of discontent with the evolution process of C++, Google pulled a substantial amount of its resources from working on C++ and the Clang compiler front-end. As an alternative, it announced the long-term project Carbon, a successor language that can closely interoperate with C++. This and subsequent events marked a watershed moment for C++, as the language faced serious criticism for the technical debt it had accrued and its relatively slow development pace. From that moment on, it seemed, everybody had a (strong) opinion and voiced it loudly – the criticism could no longer be ignored by the C++ committee.

Another aspect of C++ that has been under attack is its lack of memory safety. Memory safety in a programming language refers to the ability to prevent or catch errors related to improper memory access, such as buffer overflows, use-after-free bugs or dangling pointers, through built-in features and guarantees in the language itself. This can be extended to general language safety, where all undefined behavior and unspecified semantics are eliminated from the language. Language safety is defined on a spectrum rather than as a binary property; some languages are safer than others. Examples of languages considered safe and still relatively low-level are Swift, Ada and Rust.
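As a minimal illustration of the class of bug in question (my own sketch, not taken from the article): the following C++ compiles without complaint, yet its behavior is undefined because a reference outlives the buffer it points into.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values{1, 2, 3};
    const int& first = values.front(); // reference into the vector's current buffer
    values.push_back(4);               // may reallocate, leaving 'first' dangling
    std::cout << first << '\n';        // reads potentially freed memory: undefined behavior
}
```

A memory-safe language either rejects this pattern at compile time or catches it at runtime; in C and C++, it’s up to the developer, a sanitizer or a reviewer to spot it.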

Following the intense proverbial heat of the summer of 2022, a series of public advisories on memory safety explicitly cast C as well as C++ in a bad light. In late 2022, the NSA came first with a white paper urging us to move away from C and C++. Then, CISA (the US Cybersecurity and Infrastructure Security Agency) started advocating for a memory safety roadmap. In 2023 and 2024, even the White House and Consumer Reports proclaimed that we should take memory safety more seriously than ever and move to memory-safe languages. There were many more events, but suffice it to say that none of them went unnoticed by the C++ committee.

''C is quite a simple language; it’s easy to learn and get started with. However, it’s very hard to become advanced and proficient at it at scale.''

Admittedly, some of the efforts by C++ committee members to rebut the public attacks came across as slightly contemptuous, often almost downplaying memory safety as “only one of the many potential software issues.” This, to me, sounds an awful lot like a logical fallacy. Sure, many things can go wrong, and a safe language isn’t a panacea. However, software development requirements have drastically changed over the last forty years, and today, memory safety is a solved problem for many other languages usable in the same application domain. Officially, ISO working group 21 (WG21) instituted study group 23 (SG23) for “Safety and Security,” tasked with finding the best ways to make C++ a safer language while upholding other constraints like backward compatibility – not so easy.

Undeniable gap

I’ve worked with various programming languages in production simultaneously over the past decades. What really stands out to me from all my experiences with C and C++ is the sheer cognitive load they put on developers.

C is quite a simple language; it’s easy to learn and get started with. However, it’s very hard to become advanced and proficient at it at scale. As a simple language, it forces you to manually address many important, error-prone engineering tasks like memory management and proper error handling – staple aspects of reliable, bug-free software. There’s plenty of low-level control, yes, but the ceremony and cognitive burden required to get things right are just staggering.

The same largely holds for C++. It does make things better by actually supporting you in writing correct code, for example with a standard library featuring smart pointers for memory management. However, the truckloads of language complexity make it hard to use correctly at scale as well.
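A minimal sketch of the difference (the Sensor type is a made-up placeholder): with manual management, every exit path has to remember the cleanup; with std::unique_ptr, ownership and cleanup are handled automatically on every path.

```cpp
#include <memory>

struct Sensor {
    void calibrate() { /* ... */ }
};

// Manual management: if calibrate() throws or an early return is added later,
// the Sensor leaks unless every path remembers to call delete.
void manual() {
    Sensor* s = new Sensor();
    s->calibrate();
    delete s;
}

// With a smart pointer, the Sensor is released automatically on every path,
// including exceptions -- one whole class of bookkeeping gone.
void with_unique_ptr() {
    auto s = std::make_unique<Sensor>();
    s->calibrate();
}

int main() {
    manual();
    with_unique_ptr();
}
```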

What’s more, all of these aspects of coding in C and C++ come with no guarantee that things are reliable after compilation. This forces developers to study best practices, use compiler sanitizers and static analyzers and resort to extensive testing, just to be more sure that all is fine. Of course, most of these activities should be part of any healthy software developer mindset, but it’s painful to realize that C and C++ offload the requirement of doing this work onto the developer, rather than addressing it in the language directly. Developing a language, like any engineering challenge, is an endless succession of tradeoffs, sure, but there’s an undeniable gap between the capabilities of the ‘legacy languages’ and the needs in the software development space right now. Other, newer languages show that it’s possible to meet these requirements while keeping up the performance potential.

''New features improve the language but also inherently increase the already quite substantial complexity, while all the old footguns and dangers like undefined behavior are still there.''

Years away

Most programming languages are constantly being improved over time. If you’re in the C world, however, probably little to nothing is going to change. For many projects today, therefore, it isn’t the right language choice if you want any language safety at all. There are alternatives available, more fit for purpose – if this is possible given your project constraints and preferences.

For C++, it’s a different story. WG21 is now building up to the release of C++26, which is going to bring huge features to the table, including (most likely) contracts, executors and even static reflection. Game-changers for sure, but they mostly address the language’s application potential or, in the case of contracts, improve safety and correctness – still at the cost of manual labor on the part of the developer using them.

New features improve the language but also inherently increase the already quite substantial complexity, while all the old footguns and dangers like undefined behavior are still there. Teaching C++ to novices as a trainer remains, in part, an exercise in steering them away from the pitfalls – not really a natural, convenient way to teach or learn.

The ‘parallel universe’ of the Circle C++ language demonstrates how the ostensibly clogged syntax and language definition of C++ is still able to pack many other great features like true enumerators, pattern matching, static reflection and even a borrow checker. Unfortunately, this remarkable one-man show run by Sean Baxter isn’t standardized C++ (and vice versa). Chances are slim that any of these excellent features will land in official C++ anytime soon.

Baxter also has a “Safe C++” proposal, presented to the Safety and Security study group in November of last year. In it, he suggests extending C++ with a “rigorously safe subset” of the language that offers the same safety guarantees as the Rust borrow checker. I do applaud the effort, but time will tell if, and in what form, this proposal will make its way through the often seemingly sluggish C++ language development process. C++26 design work has mostly converged and C++29 is still a couple of years away. Add to that the implementation/industrialization time of these spec versions before they really land on our virtual workbenches, and it might well be a decade from now – if we’re lucky.

Greener pastures

Not all is lost, though. The C++ committee is doing great work progressing the language, and the current state of the language and ecosystem is better than ever. It’s just that the gap between what C++ can offer today and what’s been shown to be possible in systems programming safety and integrated tooling is huge.

Looking forward a couple of years, I don’t see this gap being filled. Meanwhile, languages like Rust and Swift aren’t standing still. There’s a lot of momentum and prior commitment to C++ in the world, making the industry stick to it, but how long can it sustain the technology gap before industries or application domains move to greener pastures?