AI and the future of systems programming

Kris van Rens looks at the future of systems development and how developer happiness is an important aspect of software engineering

Artificial intelligence in general and large language models (LLMs) in particular are undeniably changing how we work and write code. Especially for learning, explaining, refactoring, documenting and reviewing code, they turn out to be extremely useful.

For me, however, having a chat-style LLM generate production-grade code is still a mixed bag. The carefully engineered prompt for a complex, constrained task often outsizes the resulting code by orders of magnitude, making me question the productivity gains. Sometimes, I find myself iteratively fighting the prompt to generate the right code, only to discover that the model casually forgot to implement one of my earlier requirements. At other times, the LLMs generate code featuring invalid constructs: they hallucinate answers, invariably with great confidence. What’s more, given the way LLMs work, the answers can be completely different every time you input a similar query, or at least highly dependent on the given prompt.

OpenAI co-founder Andrej Karpathy put it well: “In some sense, hallucination is all LLMs do. They’re dream machines.” This seemingly ‘black magic’ behavior of LLMs is slightly incompatible with my inner tech-driven urge to follow a deterministic process. It might be my utter incompetence at prompt engineering, but from where I’m standing, despite the power of generative AI at our fingertips, we still absolutely need to understand what we’re doing rather than blindly trusting the correctness of the code generated by these dream machines. The weird, vibe-induced feel and idiosyncrasy of LLMs will probably wear off in the future, but I still want to truly understand the code I produce and am responsible for.

Probably, AI in general is going to enable an abstraction shift in future software development, allowing us to design at a higher level of abstraction than we often do nowadays. This might, in turn, diminish the need to write code manually. Yet, I fail to see how using generated code in production is going to work well without the correctness guarantees of rigorous testing and formal verification – this isn’t the reality today.

''An aspect of software engineering where LLMs can make an overall positive difference is interpreting compiler feedback.''

Positive difference

Another application area of LLMs is in-line code completion in an editor or IDE. Even this isn’t an outright success for me. More than once, I’ve been overwhelmed by the LLM-based code completer suggesting a multiline solution for what it thinks I wanted to type. Then, instead of implementing the code idea straight from my imagination, I find myself reading a blob of generated suggestion code, questioning what it does and why. It’s hit-and-miss with these completions, and they often wrong-foot me. I’ve been experimenting with embedded development for microcontroller units lately and have found that especially with code in this context, the LLM-based completion just takes guesses, sometimes even making up non-existent general-purpose IO (GPIO) pin numbers as it goes. I do like the combination of code completion LLMs with an AI model that predicts editor movements for refactoring: refactors are often batches of similar small operations that the models are able to forecast well.

An aspect of software engineering where LLMs can make an overall positive difference is interpreting compiler feedback. C++, for example, is notorious for its hard-to-read and often very long compiler errors. The arrival of concepts in C++20 was supposed to bring a drastic improvement here, but I haven’t seen it happen. Perhaps this is still a work in progress, but until then, we’re forced to deal with complex and often long error messages (sometimes even hundreds of lines in length). Because of their ability to interpret and summarize compiler messages, combined with their educational and generative features, LLMs with a large context window are well suited to processing such feedback, making them a great companion tool for C++ developers. There’s an enormous body of existing C++ code and documentation to learn from, which is a good basis for training an LLM.

Other drawbacks of C++ are the ever-increasing language complexity and the compiler’s tendency to fight rather than help you. Effective use of LLMs to combat these issues might well save the language in the short term: C++ language evolution is slow, but the potential of tooling is tremendous. Given the sheer amount of existing C++ code in use today, the language is here to stay, and any tool that helps developers work with it is appreciated.

''To me, writing code is a highly creative, educational and enjoyable activity.''

Developer happiness

Using LLMs for code generation also takes away part of the joy of programming for me. To me, writing code is a highly creative, educational and enjoyable activity, honing my skills in the process; having a magic box do the work for me kills this experience to some extent – even manually writing the boring bits and the tests has some educational value.

Ger Cloudt, a fellow educator in the software development space, asserts in his works on software quality that organizational quality – of which developer happiness is a part – is half the story. According to him, organizational quality is key, as it enables design, code and product quality. Sure, clean code and architecture are important, but without the right tools, mindset, culture, education and so on, the development process will eventually grind to a halt.

LLMs undoubtedly help in the tools and education department, but there’s more to programming than just producing code like a robot. Part of the craft of software engineering – as with any craft – is experiencing joy and pride in your work and the results you produce. Consider me weird, but it can bring me immense satisfaction to create beautiful-looking code with my own two hands.

Revisiting the state of Rust

In late 2022, Kris van Rens wrote about the rise of the Rust programming language, largely in the same application space dominated by C and C++. Did the traditional systems programming landscape really change or was it all much ado about nothing?

According to the Tiobe index, Python is lonely at the top of “most popular programming languages,” with a score of 23 percent. It’s followed by C++ (10 percent), Java (also 10 percent) and C (9 percent). The index tries to gain insight from what people are searching for in search engines, the assumption being that this provides a measure of popularity. As a relatively young language, Rust ranks 14th, with a little over 1 percent.

In a concluding summary, Tiobe CEO Paul Jansen writes about Rust that “its steep learning curve will never make it become the lingua franca of the common programmer, unfortunately.” Citing a language’s steep learning curve as the barrier to becoming a big success feels slightly dubious, given how popular C++ is despite its complexity at scale. I also think overemphasizing a language’s learning curve sells developers short – many companies adopting Rust in production have already shown it’s very manageable.

''When it comes to learning in general, I always tend to keep a positive attitude: people are much more capable than we might think.''

Unique feat

Over the past years, Rust has established itself as a worthy alternative in the field of production-grade systems programming. It successfully demonstrates how a language can be modern, performant and safe at the same time. Rust releases steadily every six weeks, so there’s always something new – we’re at v1.85 at the time of writing. New features land when ready, and most language or library changes tend to be piecemeal.

As Rust grows more mature, its popularity and adoption have been gradually increasing. The perceived risk of adopting it as a production language of choice has worn off, as can be concluded from the many companies reporting on it. Google has been rewriting parts of Android in Rust for improved security, Microsoft is rewriting core Windows libraries in Rust and Amazon has long been using Rust in its AWS infrastructure.

Another unique feat worth mentioning is that Rust is part of the mainline Linux kernel next to C. It must be said that the effort to expand support for Rust across kernel subsystems isn’t without contention, but progress is being made with the blessing of Linus Torvalds. It will be very interesting to see how this experiment will advance.

''One of my main observations is that switching back from Rust to C++ makes me feel as if I’m being flung back into the dark ages of systems software development.''

Happy developers

I’ve been heavily using Rust alongside C++ for the past several years. One of my main observations is that switching back from Rust to C++ makes me feel as if I’m being flung back into the dark ages of systems software development. This may sound harsh, but honestly, even when using the leading-edge version, C++23, most coding tasks feel painstakingly hard and limited compared to how they would feel in Rust. In the early days, I would sometimes miss the ability to directly correlate written code to the output machine code, as can be done in C++, but this is strictly unnecessary in 99 percent of cases, and modern compilers are much more competent at optimization than humans anyway.

When it comes to the tooling ecosystem and integration, Rust is on another level altogether and much more up to speed with today’s web development world. Whereas the C++ language and compiler often fight me to get things right, Rust’s strictness, type system, sane defaults and borrow checker seem to naturally lead me to the right design decisions – contend versus guide. When my Rust code builds successfully and the tests pass, I can leave the project with peace of mind: the software won’t crash at runtime and the code can’t easily be broken by a colleague. Also, the Rust macro system and the excellent package ecosystem, with libraries as well as plugin tools for the build system, make a big difference in productivity.
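To make the productivity point a bit more tangible, here’s a minimal sketch of the kind of boilerplate a single derive macro removes. It assumes the widely used serde and serde_json crates as dependencies (added with ‘cargo add serde --features derive’ and ‘cargo add serde_json’); the struct and values are made up for illustration.

```rust
use serde::{Deserialize, Serialize};

// One derive attribute generates JSON (de)serialization code at compile
// time via serde's procedural macros - no hand-written glue code needed.
#[derive(Debug, Serialize, Deserialize)]
struct SensorReading {
    id: u32,
    celsius: f64,
}

fn main() -> Result<(), serde_json::Error> {
    let reading = SensorReading { id: 7, celsius: 21.5 };
    let json = serde_json::to_string(&reading)?; // serialize
    let parsed: SensorReading = serde_json::from_str(&json)?; // deserialize
    println!("{json} -> {parsed:?}");
    Ok(())
}
```

In C++, the equivalent typically means pulling in a third-party JSON library and writing or generating the mapping code yourself; in Rust, it’s one attribute plus two dependency declarations.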

These and other aspects make Rust extremely nice to work with. They make developers happy. There’s a reason why the Stack Overflow developer survey has shown Rust as the most admired programming language for nine years in a row now.

Dividends

Rust is very much fit for production use, even in critical systems requiring safety certifications (for example by using the Ferrocene toolchain). I see its adoption as a logical move to enjoy the benefits of memory safety, high productivity and increased developer happiness today, rather than waiting until the current set of tools is up to speed with the rest of the world. Add to that the cross-pollination effect: learning a new programming language makes you a better developer in every other language you use.

When it comes to learning in general, I always tend to keep a positive attitude: people are much more capable than we might think. Yes, the learning curve for Rust is steeper than that of most other languages, but it’s so well worth it and pays dividends in the long term. I would take a steep learning curve and more sane and strict language rules and guarantees over a life with software memory safety bugs any day of the week.

Revisiting the state of C++

In late 2022, Kris van Rens wrote about the state of C++ at that time and its challengers. A follow-up after two more years.

In 2022, out of discontent with the evolution process of C++, Google pulled a substantial amount of its resources from working on C++ and the Clang compiler front-end. As an alternative, it announced the long-term project Carbon, a successor language that can closely interoperate with C++. This and subsequent events marked a watershed moment for C++, because the language faced serious criticism for the technical debt it had accrued and its relatively slow development pace. From that moment on, it seemed, everybody had a (strong) opinion and loudly advertised it – the amount of critique could no longer be ignored by the C++ committee.

Another aspect of C++ that has been under attack is its lack of memory safety. Memory safety in a programming language refers to the ability to prevent or catch errors related to improper memory access, such as buffer overflows, use-after-free bugs or dangling pointers, through built-in features and guarantees in the language itself. This can be extended to general language safety, where all undefined behavior and unspecified semantics are eliminated from the language. Language safety is a spectrum rather than a binary property; some languages are safer than others. Examples of languages considered safe and still relatively low-level are Swift, Ada and Rust.

Following the intense proverbial heat of the summer of 2022, a series of public advisories on memory safety explicitly put C as well as C++ in a bad light. In late 2022, the NSA came out with a white paper urging us to move away from C and C++. Then, CISA (the US Cybersecurity and Infrastructure Security Agency) started advocating for a memory safety roadmap. In 2023 and 2024, even the White House and Consumer Reports proclaimed that we should take memory safety more seriously than ever and move to memory-safe languages. There were many more events, but suffice it to say that none of them went unnoticed by the C++ committee.

''C is quite a simple language; it’s easy to learn and get started with. However, it’s very hard to become advanced and proficient at it at scale.''

Admittedly, some of the efforts by C++ committee members to rebut the public attacks came across as slightly contemptuous, often almost downplaying memory safety as “only one of the many potential software issues.” This, to me, sounds an awful lot like a logical fallacy. Sure, many things can go wrong, and a safe language isn’t a panacea. However, software development requirements have drastically changed over the last forty years, and today, memory safety is a solved problem for many other languages usable in the same application domain. Officially, ISO working group 21 (WG21) instituted study group 23 (SG23) on “Safety and Security,” tasked with finding the best ways to make C++ a safer language while keeping up other constraints like backward compatibility – not so easy.

Undeniable gap

I’ve worked with various programming languages in production simultaneously over the past decades. What really stands out to me from all my experiences with C and C++ is the sheer cognitive load they put onto developers.

C is quite a simple language; it’s easy to learn and get started with. However, it’s very hard to become advanced and proficient at it at scale. As a simple language, it forces you to manually address many important error-prone engineering tasks like memory management and proper error handling – staple aspects of reliable, bug-free software. There’s plenty of low-level control, yes, but the ceremony and cognitive burden to get things right is just staggering.

The same largely holds for C++. It does make things better by actually supporting you in writing correct code, for example with the standard library’s smart pointers for memory management. However, the truckloads of language complexity make it hard to use correctly at scale as well.

What’s more, all of these aspects of coding in C and C++ come with no guarantee that things are reliable after compilation. This forces developers to study best practices, use compiler sanitizers and static analyzers and resort to extensive testing, just to be more confident that all is fine. Of course, most of these activities should be part of any healthy software development mindset, but it’s painful to realize that C and C++ offload this work onto the developer, rather than addressing it in the language directly. Developing a language, like any engineering challenge, is an endless succession of tradeoffs, sure, but there’s an undeniable gap between the capabilities of the ‘legacy languages’ and the needs of the software development space right now. Other, newer languages show that it’s possible to meet these requirements while keeping up the performance potential.

''New features improve the language but also inherently increase the already quite substantial complexity, while all the old footguns and dangers like undefined behavior are still there.''

Years away

Most programming languages are constantly being improved over time. If you’re in the C world, however, probably little to nothing is going to change. For many projects today, therefore, it isn’t the right language choice if you want any language safety at all. There are alternatives available that are more fit for purpose – if switching is possible given your project constraints and preferences.

For C++, it’s a different story. WG21 is now building up to the release of C++26, which is going to bring huge features to the table, including (most likely) contracts, executors and even static reflection. Game-changers for sure, but they mostly address language application potential or, in the case of contracts, improve safety and correctness – still at the cost of manual labor on the part of the developer using them.

New features improve the language but also inherently increase the already quite substantial complexity, while all the old footguns and dangers like undefined behavior are still there. Teaching C++ to novices as a trainer remains, in part, an exercise in steering them away from the pitfalls – not really a natural, convenient way to teach or learn.

The ‘parallel universe’ of the Circle C++ language demonstrates how the ostensibly clogged syntax and language definition of C++ is still able to pack many other great features like true enumerators, pattern matching, static reflection and even a borrow checker. Unfortunately, this remarkable one-man show run by Sean Baxter isn’t standardized C++ (and vice versa). Chances are slim that any of these excellent features will land in official C++ anytime soon.

Baxter also has a “Safe C++” proposal, presented to the Safety and Security study group in November of last year. In it, he suggests extending C++ with a “rigorously safe subset” of the language that offers the same safety guarantees as the Rust borrow checker does. I do applaud the effort, but time will tell whether, and in what form, this proposal will make its way through the often seemingly sluggish C++ language development process. C++26 design work has mostly converged and C++29 is still a couple of years away. Add to that the implementation and industrialization time of these spec versions before they really land on our virtual workbenches, and it might well be a decade from now – if we’re lucky.

Greener pastures

Not all is lost, though. The C++ committee is doing great work progressing the language, and the current state of the language and ecosystem is better than ever. It’s just that the gap between what C++ can offer today compared to what’s shown to be possible in systems programming safety and integrated tooling is huge.

Looking forward a couple of years, I don’t see this gap being filled. Meanwhile, languages like Rust and Swift aren’t standing still. There’s a lot of momentum and prior commitment to C++ in the world, making the industry stick to it, but how long can it sustain the technology gap before industries or application domains move to greener pastures?


C++ and Rust

Despite a host of up-and-coming alternatives, C++ is still a force to be reckoned with, certainly in the legacy-fraught high-tech industry. In a series of articles, High Tech Institute trainer Kris van Rens puts the language in a modern perspective. In our new 4-day training course, Kris van Rens introduces participants to the language basics and essential best practices.

Every couple of years, C and C++ are declared dead. Recently, Microsoft Azure CTO Mark Russinovich publicly stated that they should be deprecated, in favor of Rust. Though expressed in a personal capacity, it’s an interesting take for someone from Microsoft, which has an enormous C++ code base and many active members in the C++ committee.

Regardless, C and C++ are still very much alive and kicking – each the lingua franca of many industrial software development environments. Yet, Rust is the apparent ‘place to be’ these days in systems programming land. What is it about this language that makes it so attractive? Let’s look at it from a C++ perspective.

Notable differences

First off, Rust offers a clear and objective advantage over C++: guaranteed memory safety for the compiled result. Essentially, Rust trades off compilation time (and compiler complexity) for a safer runtime. The compiler will try to prove to itself that the code you feed it is memory safe, using its type system and sometimes the help of the developer to indicate things like variable or reference lifetime dependencies.
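As a toy illustration of that proof, my own sketch rather than anything from the article: the borrow checker rejects any program in which a reference could outlive the data it points to.

```rust
// This deliberately does NOT compile: the compiler proves at build time
// that `r` would outlive `x` and refuses to produce a binary.
fn main() {
    let r;
    {
        let x = 5;
        r = &x; // error[E0597]: `x` does not live long enough
    } // `x` is dropped here, so `r` would be a dangling reference
    println!("{r}");
}
```

Move the use of ‘r’ inside the inner scope (or give ‘x’ a longer lifetime) and the program compiles fine; the equivalent C++ would compile either way and silently read freed stack memory.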

Multiple independent security surveys of large C and C++ code bases have shown that a consistent and whopping 70 percent of all bugs and security issues are memory safety related. In light of this, trading compile time for a memory-safe runtime seems like a no-brainer. Writing quality C++ using the right tools, tests, best practices and compiler sanitizers can get you there too, but it doesn’t provide hard guarantees from the get-go. The ‘guard rails’ Rust puts up to protect you from shooting yourself in the foot may sometimes be complex or even irritating, but think about the considerable benefits a runtime memory safety guarantee brings to programming concurrent software.

''As software engineers, we should use the right tool for the job.''

Another notable difference from C++ is the way Rust integrates the handling of errors and function result values. Unlike C++, it has no exceptions; it offers mechanisms to deal with errors using regular control flow. Result values must be processed, forcing the developer to implement error handling, so errors rarely fly under the radar. If at runtime things really go sideways – say, memory is exhausted – Rust generates a so-called panic, a fatal error that stops the thread in question (and can be handled gracefully).
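A minimal sketch of what this looks like in practice, using only the standard library (the function and values are hypothetical):

```rust
use std::num::ParseIntError;

// A fallible operation returns a Result instead of throwing an exception;
// the `?` operator propagates the error to the caller via regular control flow.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    let port: u16 = input.trim().parse()?;
    Ok(port)
}

fn main() {
    // The compiler warns when a Result is silently dropped, so errors
    // rarely go unnoticed; here, both outcomes are handled explicitly.
    match parse_port("8080") {
        Ok(port) => println!("listening on port {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
    // parse_port("oops").unwrap(); // would panic, stopping this thread
}
```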

The Rust compiler, built on top of the LLVM compiler back-end, is evidently very pedantic in order to uphold memory safety and code correctness, but it’s quite helpful at the same time. Consider it a pair programmer looking over your shoulder, providing readable error messages and even suggesting potential solutions. Aside from the compiler, most of the Rust tooling ecosystem revolves around Cargo, which is a build system, a package and dependency manager and much more, all in one. Despite the wealth of package managers, build systems and other tools available for C++, setting up a serious C++ project can still be a pain in the rear end.

Being relatively young, Rust has had the luxury of assimilating 40+ years of language development into its syntax and structure. Much of the inspiration was drawn from both imperative languages like C++ and functional languages like Haskell and Scheme. This gives it the subjective benefit of a very ‘modern’ feel. Moreover, many things in Rust are expressions, providing more flexibility in code notation.

In my previous contribution, I argued that the user base of a programming language is of paramount importance. Rust, with help of the Rust Foundation, is gradually building this user base, as can be deduced from many indicators, like the Tiobe index and the Stack Overflow survey, and by the adoption of Rust in the Linux v6.1 kernel – not a minor feat. As more and more security reports claim net positive effects strongly related to the use of memory-safe languages like Rust, the user base will continue to grow. Every pointer (no pun intended) seems to indicate a bright future for Rust.

Not all sunshine and roses

Should we then just deprecate C and C++ in favor of Rust? In my opinion, this poses a false dichotomy. Why should we have to choose between one or the other? As software engineers, we should use the right tool for the job.

Migrating to Rust isn’t all sunshine and roses either. When you have a large body of existing C++ code, Rust isn’t directly going to be of help, unless you’re willing to drop to C APIs for interaction. There are tools for C++-level interoperability, but these are still in their infancy. Of course, you could also rewrite your code – but that isn’t going to be easy, especially not when you lean heavily on advanced templates. And when you have absolute maximum performance requirements, safe Rust may not be up to it (yet?). Furthermore, in stricter environments like automotive or aviation, standards often prescribe the use of a formally specified programming language, which Rust currently is not.

The best advice is to not pin yourself down on a single programming language. Learning multiple languages is generally really beneficial and will improve your competence, style and knowledge in every language you master. So take a look at C++, Rust and other alternatives to broaden your perspective – or just for the sheer fun of it.

Challenging C++

Despite a host of up-and-coming alternatives, C++ is still a force to be reckoned with, certainly in the legacy-fraught high-tech industry. In a series of articles, High Tech Institute trainer Kris van Rens puts the language in a modern perspective. In our new 4-day training course, Kris van Rens introduces participants to the language basics and essential best practices.

“There are only two kinds of programming languages: the ones people complain about and the ones nobody uses.” This is a famous quote attributed to Bjarne Stroustrup, the creator of C++. Hidden in it are a couple of truths.

First and most obvious: no single programming language is perfect for solving every problem in every domain. Especially when a language is advertised as “general-purpose,” like C++, it can be applied nearly everywhere, but chances are there’s a mismatch between the tool used and the tool required. For example, it’s perfectly feasible to write a complete web application in C++, but is it the right tool for the job? Personally, I wouldn’t say so.

Then, as a language evolves and ages, it’s very important that there’s a clear process to deal with (breaking?) changes. A conservative and safe approach is to maintain backward compatibility from the first stable release onward. This is the approach C++ has followed and abided by for decades now – which, unfortunately, also blocks the adoption of some language improvements.

Another hidden truth from the opening quote: the user base is extremely important. You could design the most beautiful, safe, secure and pleasant programming language ever. But what good does that do if only a few people are using it?

Good contenders

A good model for programming language significance is a mechanical flywheel; the larger the user base, the bigger the flywheel and rotational velocity. The user base size is defined by the number of active developers, existing code bases, separate code base interdependencies and other factors like third-party integration support. For C++ at least, this flywheel still has enormous momentum. Yet, there are forces at work slowly eating away at this momentum. Other languages in the systems programming realm are winning over parts of the C++ user base.

Previously, I wrote about the announcement of the Carbon programming language, a C++ successor started by Google, but there are many more alternatives. Some of them, like Zig, Odin and Go, are more aimed at C rather than C++ – I’m not going to cover these here. Then, for the sake of being pragmatic, I’m going to skip languages that are too small or experimental, like Nim, Val, Vale, Cpp2 and Jakt. That leaves only a handful of ‘serious’ alternatives, including Rust, Swift, D and Circle.

''A good model for programming language significance is a mechanical flywheel.''

What makes a language a good contender for large-scale C++ user base adoption? We can start by looking at the properties where C++ generally falls short. For example, does the alternative have a ‘modern syntax’ throughout? Does it feature built-in guaranteed safety of some kind, like memory or math operation safety? Does it come with a tooling/packaging ecosystem? C++ interoperability is another very important aspect. A C-style foreign function interface (FFI) is nice, but it feels like a downgrade if we have to adapt our C++ interfaces to that.

Having one of these properties isn’t enough, though. A modern and clean syntax is very nice but isn’t going to cut it on its own. A great tooling ecosystem is fantastic but, again, not good enough on its own. In my opinion, if you want to have any chance of succeeding C++, you’re going to need at least the following three ingredients: a 10x improvement on some aspect of the language, guaranteed memory safety and good interoperability with existing C++ code.

Ticking the boxes

Looking at our serious alternatives, a language like D is missing the 10x improvement. This is why I think it has never really taken off, even though it’s already 20 years old. One-man project Circle is very impressive, especially for its language development experimentation capabilities. Above all, apart from the feature flags needed to enable language improvements, it’s completely compatible with C++. Unfortunately, Circle isn’t openly governed and lacks guaranteed memory safety. Swift comes with excellent tooling, but it’s too focused on the Apple platform and has only partial, work-in-progress C++ interoperability.

Rust ticks most of the boxes. It even has quite a decent user base already. Although it also lacks true, mature C++ interoperability, it’s the most promising contender today. More on Rust in my next contribution.

The state of C++

Despite a host of up-and-coming alternatives, C++ is still a force to be reckoned with, certainly in the legacy-fraught high-tech industry. In a series of articles, High Tech Institute trainer Kris van Rens puts the language in a modern perspective. In our new 4-day training course, Kris van Rens introduces participants to the language basics and essential best practices.

Last July, the Carbon programming language was officially announced at the CppNorth C++ conference in Toronto, Canada. Carbon is presented as “an experimental successor to C++” and was started as an open-source project, by Google no less. Wait… Google is going to create a C++ successor? Until recently, the company was heavily involved in developing the C++ language and engineering the Clang C++ front-end for the LLVM compiler. With tens of thousands of engineers within Google working on billions of lines of code, choosing the path of a completely new language seems rather bold.

Why would a huge company such as Google venture into such a daring project? Well, it’s a symptom of the state and development of C++. For those who haven’t caught up with the language’s evolution in the past few years: there have been some major discussions. Of course, having discussions is the whole point of the C++ committee meetings, but one topic has been popping up again and again without settlement: whether or not it’s worth improving language design at the cost of backward compatibility.

Leaner governance

C++ has been around for about forty years now and is being used to create performance-critical software all over the world. After a period of relative quiet following the initial ISO standardization in 1998, the committee managed to steadily introduce great improvements every three years since 2011. As a result, the language has grown quite different from what those of us old enough used to work with in the nineties and noughties. The addition of features like concepts, ranges and modules in C++20 alone pack a powerful punch.

At the same time, though, the C++ language evolution process is known to be extremely challenging. The weight of carrying decades of technical debt while maintaining backward compatibility is substantial – too much for some, it seems. Trying to add a significant language feature may cost up to ten years of lobbying, discussions, reviews, testing, more reviews and meticulous wording. Of course, introducing considerable changes in a project with this many stakeholders is no mean feat, but ten years is an eternity in today’s tech world. Another challenge is that the ISO committee is predominantly Western, with a heavy underrepresentation of big Asian C++ users like India and China. These downsides don’t look good, especially in the light of rapidly growing, modern, openly governed (and relatively young) languages like Rust and Swift.


''Still, I think right now is a very important time for C++ to consider its position in the systems programming universe; it can’t ignore the signals any longer.''

Is the technical debt of the C++ language really of such gargantuan proportions that it’s next to impossible to add new high-impact features? One-man army Sean Baxter of the Circle C++ compiler has shown that it’s not. In the past months alone, he single-handedly demonstrated that it’s possible to add considerable features like a true sum type and language-level tuples. Granted, an implementation in a single compiler of a C++ dialect without a thoroughly reviewed proposal is far from an official C++ language feature, but at least it shows how much wiggle room and opportunity there is in the syntax and language as a whole – if we really set our minds to it. It also shows that the burden of technical debt alone isn’t the limiting factor in the language development.

The C++ language governance model isn’t likely going to change anytime soon, being so tied in with the ISO process and the committee stakeholders. Still, I think right now is a very important time for the language to consider its position in the systems programming universe; it can’t ignore the signals any longer. Perhaps a leaner governance structure will help, or allowing for breaking changes to shed technical debt in a future version – who knows. Unfortunately, such substantial changes to the process will most likely take years as well.

Wait and see

Will the drawbacks cause C++ to be eliminated anytime soon? No, definitely not. The sheer momentum of the existing code and user base is overwhelming. ‘Just’ switching to another language isn’t an option for everyone, not even for Google. For that to work out, true interoperability with C++ (not just C) is needed, which is where alternatives like Rust and Swift still fall short. Not for nothing is Google advertising C++ interoperability as a key feature of Carbon, allowing a large existing C++ code base to adopt the language step by step.

At the moment, however, Carbon isn’t much more than a rough specification and announcement. We’ll have to wait and see if it can live up to the expectations. In the meantime, C++ will evolve as well, hopefully positively inspired by the possibilities of Circle and other languages in the field.

 

System architecting for politicians


Over there, under the parasol, cap, sunglasses, beer, that must be our prime minister.
If I arrange another beer, can I join you?

Beer is welcome and if you don’t talk politics, you can join us.

Deal! I’m a political illiterate. I’m giving a training course here and can only talk a little about high-tech system architecting.
 

Sounds interesting! I’ve been on quite a few trade missions and I know the Netherlands plays a leading role there.

 
It certainly does! I’ve had the opportunity to work for companies that could predict which high-tech product they needed to have on the market in three years and put genius researchers and supremely capable engineers to work to reach that goal.

Precisely because it takes a considerable number of different areas of expertise to develop, manufacture and maintain such a high-tech product, a tangle of conflicting requirements arises from that multitude of disciplines. But the successful companies stand out because despite this tangle, they can agree on an approach and thus make the right decisions in a timely manner.
 

That must indeed be enormously complex. But fortunately, those bright minds and handy hands know which calculations and models to apply. In my work, we also apply models, but they’re more fodder for discussion than a path to consensus and correct decisions. With us, it’s more human work.

 
There may be more similarities there than you’d think. All experts in high tech are lords and masters of their fields and often take the stage to showcase exactly that: technical superiority.

On the one hand, you desperately need the expertise, models and calculations to keep those professionals innovating in their fields, digging deeper and deeper tunnels. And on the other hand, every new insight in a certain discipline is used as a weapon to beat the brains out of experts from other tunnels.

Islands arise, sometimes even camps, and the plague is that they all have a valid point.
 

Okay, okay, so it’s human work too. But you said just now that they do come to an agreement. So how do they do that?

 
It’s all about system architecting. They reach working agreements – you can call it an approach – in which the various disciplines provide each other with insight into where, in essence, the contradiction is manifesting itself and for which parameters a balanced solution must be found. So this is not about negotiating or trying to reach consensus, but making jointly weighted choices. Once they all have an overview and agree on the entire system, these bright minds subordinate their own tunnel wisdom to, say, the higher good.


 

Nice that that’s how it works in high-tech, but how different it is with us. No doubt you have seen debates where people are too busy proclaiming their own party truth and unwilling to listen to each other, let alone understand each other. That system architecting doesn’t work with us.

 
I’m going to play devil’s advocate: those debates lack the common purpose that does prevail within successful companies. In the debates, the system goal is conspicuous by its absence.
 

No, it can’t be because of that. For example, we’ve set a very clear goal for nitrogen reduction: 50 percent less by 2030. How concrete do you want a goal to be?

 
Here you touch on a basic error. You see, that reduction is not a system goal. This is exactly where constructive companies differ from politics. Let me explain.

The system goal will include terms such as food quantity, food quality, sustainable operations and conservation of the environment. However, no system has ever been developed with the goal of reducing nitrogen, which is exactly why many protest as soon as you do set that as a goal. Don’t get me wrong, I’m no climate skeptic. I see the excess nitrogen deposition as a negative effect that needs to be fixed.

I’m pretty sure that farmers, citizens and businesses subscribe to the system goal of producing food in a sustainable way in the Netherlands. If you had invited them to keep heading toward that system goal while repairing the nitrogen surplus, you would have got cooperative thinkers instead of counter-thinkers. The system goal always involves a desired effect, and most people therefore want to contribute to it.
 

I see your point. So the Netherlands can be governed by system architects?

 
Not govern, no, but even politicians would benefit from practices and methods such as those used within system architecting:

''Proclaiming system goals results in solution supporters.''

''Proclaiming solutions results in aimless opponents.''

 

So we messed up?

 
By this line of reasoning, certainly yes! However, some things have been messed up in high tech too, and misses will continue, but every mistake is an opportunity to improve. How else do you think system architecting came about?
 

Hey, you weren’t going to talk politics!

 
I didn’t, we just talked about decision making.
 

Trend 4: The W model

High Tech Institute trainer Cees Michielsen highlights a handful of trends in the field of system requirements engineering. For High Tech Institute, he provides the 2-day training ‘System requirements engineering’ several times a year.

 

In high tech, when it comes to system requirements engineering, people often refer to the V model for the process from initial idea to product implementation. This model starts with a functional breakdown, the left leg of the V. In practice, however, you can’t include all requirements in the traditional functional breakdown. Typical examples are the physical properties of products – mass, volume, that sort of information.

It would be wise to expand the V model to what I call the W model. This model starts with two parallel trajectories in the left leg: the functional and the physical flow. Both accommodate separate system aspects: the functional flow is primarily concerned with ensuring that the required functionality is implemented, while the physical flow ensures that the physical aspects are budgeted down to the relevant system elements.

The two left legs join forces at the so-called ‘building block’ level, where the elements of a functional system, eg the braking system of a car, are specified and designed according to their requirements. These elements have both functional and physical characteristics. In the braking system example, one of the building blocks is the brake pedal, which is specified by functional requirements that make clear what the pedal is supposed to do and by physical requirements that specify the constraints concerning the pedal mass, the allowed design envelope, material and more.

Building blocks are defined at the level at which specific functionality is specified, designed and implemented. This doesn’t mean that they’re always single parts; they can be quite complex, like the motor of an electrical car. It is important, however, that they always have two ‘parents’: a functional parent (to ensure that the braking system can rely on the functionality of the brake pedal) and a physical parent (to ensure that the pedal fits the intended location and that it meets the volume restrictions in the driver’s cabin, as well as other interfaces).
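Purely as an illustration of that two-parent structure – the types, names and numbers below are hypothetical, not part of any W model standard – the idea can be captured in a few lines of code:

```rust
// Hypothetical sketch of a building block and its two parents.
struct BuildingBlock {
    name: &'static str,
    functional_parent: &'static str, // ensures the required behavior
    physical_parent: &'static str,   // guards mass, volume and interfaces
    functional_requirements: Vec<&'static str>,
    physical_requirements: Vec<&'static str>,
}

fn main() {
    let pedal = BuildingBlock {
        name: "brake pedal",
        functional_parent: "braking system",
        physical_parent: "driver's cabin",
        functional_requirements: vec!["transmit driver force to the master cylinder"],
        physical_requirements: vec!["mass <= 1.2 kg", "fit the allowed design envelope"],
    };
    println!(
        "{}: functional parent '{}', physical parent '{}'",
        pedal.name, pedal.functional_parent, pedal.physical_parent
    );
}
```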

The use of building blocks prevents models from becoming too detailed. At the same time, it enables practical product management, especially for complex systems, both within the product design and within the manufacturing domain. For an average passenger car, around 400 building blocks are defined; the latest ASML machines have around 2,000.

Once released, the building block design is virtually integrated into the functional and the physical structure, both all the way up to the system level. The purpose is to demonstrate that the design satisfies the requirements at each level, functional as well as physical. These are the two upward legs that form the middle part of the W model.

This approach has several advantages. It brings clear responsibilities at the system level for both functional and physical requirements throughout the system’s life cycle. It enables unambiguous budgeting of physical aspects such as mass and volume downstream. It facilitates the early detection of possible integration issues (“it doesn’t fit, product not balanced, too heavy, interfaces not adhered to”) – during the design phase, before regular product integration. It makes the responsibilities for the ‘functioning’ of the elements explicit – the team designing the braking system must demonstrate that it works according to the requirements; there are no excuses to wait until the whole product is integrated.

It’s good to realize that the average functional subsystem, like a braking system or a level sensor, doesn’t appear on a manufacturing bill of materials. You don’t order a braking system, you order its building blocks. This means that, in the manufacturing or logistic environment, it’s difficult (if not impossible) to say what function a component on the factory floor or even in the end product fulfills. If, however, you would follow the logic as explained in the W model, it would be a piece of cake as the implementation of the building block also traces back to its functional parent(s).

 

 

Trend 3: Traceability

High Tech Institute trainer Cees Michielsen highlights a handful of trends in the field of system requirements engineering. For High Tech Institute, he provides the 2-day training ‘System requirements engineering’ several times a year.

 

The basic idea of traceable requirements is really cool: being able to follow a requirement top-down from its creation to its implementation and bottom-up from its implementation back to its origin. Most textbooks give good-looking diagrams and tables showing how requirements are linked to other requirements and test cases. Unfortunately, these diagrams and tables have little to do with everyday practice.
 

System requirements follow systems engineering principles. Let’s take the example of a waterway lock. At the system level, there will be a requirement stating that the lock shall enable boats to travel upstream and downstream through the canal. How to break this down if we haven’t yet decided how to solve the problem, or in requirements terminology: how to satisfy this requirement?
 

In practice, we see a large variety of solutions, from the magnificent Scottish Falkirk Wheel (a true boat lift) to the Dutch locks at Eefde. Two completely different solutions for the same problem. So, what to do with the traceability of our system requirement?
 

In the case of Eefde, a lock consists of two gates and a chamber. Each component has its own capabilities but none of the components can satisfy the system requirement on its own. That’s why we need the design decisions as an anchor point in the requirements flow (top-down, requirement to design decision to requirement) and in the trace-back (bottom-up). Through the design, we understand that when we open one of the gates, the water level in the chamber will become equal to the level at the opened gate. Subsequently, the boats can move into the chamber through the open gate. Therefore, one of the so-called derived requirements for the gate subsystem is that it can be opened and closed.

''Why do we have this requirement and why does it have this value?''

The questions that we need to answer and for which we need the bottom-up traces are: why do we have this requirement and why does it have this value? The answers can only be found through traces from requirements at the subsystem level to the design decisions one level up, in our case the system level of the waterway lock.
 

System engineers also need to deal with resource budgets, like mass, volume, energy, operational space and material cost. All of these enter the requirements specification at the system level as individual requirements, but they’re never passed on to the subsystems without analyzing them in conjunction, creating a design that takes all the pros and cons into consideration and finally deciding which solution best satisfies these system requirements.
 

In reality, there are many dependencies between, in this case, resource property requirements. Especially in the automotive industry, mass and material cost are two vehicle properties that are strongly intertwined. At the system level, the overall mass budget is established: the vehicle shall weigh less than 1,500 kg, and so is the material cost budget: the total cost of materials shall not exceed 7,000 euros. Then, in case tough decisions must be made, a priority is determined, eg less weight is more important than material cost.
 

The design tries to find a solution for this and allocates budgets for mass and material cost, balancing these requirements and, at the same time, assuming certain capabilities of the subsystems that contribute to these budgets. Finally, when a decision is made on a solution, parts of the mass and material cost budgets are allocated to the subsystems.
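To make the flow-down concrete, here’s a toy calculation with made-up numbers: the system-level mass and material cost budgets from the example above are split into subsystem allocations, and the sums must stay within the system requirements.

```rust
fn main() {
    // System-level budgets from the example: 1,500 kg and 7,000 euros.
    let mass_budget_kg = 1_500.0;
    let cost_budget_eur = 7_000.0;

    // Hypothetical subsystem allocations: (name, mass in kg, cost in euros).
    let allocations = [
        ("body", 450.0, 2_000.0),
        ("powertrain", 550.0, 3_200.0),
        ("chassis & interior", 420.0, 1_500.0),
    ];

    let total_mass: f64 = allocations.iter().map(|a| a.1).sum();
    let total_cost: f64 = allocations.iter().map(|a| a.2).sum();

    // The allocation is only feasible if it respects the system-level budgets.
    assert!(total_mass <= mass_budget_kg, "mass budget exceeded");
    assert!(total_cost <= cost_budget_eur, "material cost budget exceeded");

    println!(
        "allocated {total_mass} kg and {total_cost} euros; margins: {} kg, {} euros",
        mass_budget_kg - total_mass,
        cost_budget_eur - total_cost
    );
}
```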
 

If you’re responsible for one of these subsystems and were given a mass budget of 100 kg, where would you look for answers to the questions: why do I have a mass budget and why is it 100 kg?