Virtual classroom course “Effective communication for engineers”

Jaco Friedrich and his team have developed an online version of the “Effective communication for engineers” training as an alternative to their classroom course. The e-learning program combines interactive assignments with several virtual classroom sessions. The first edition is scheduled for this June.

Learning about communication via an online course calls for more than just a webinar or video conference. Such a course needs to provide lots of interaction: break-out rooms, individual and group exercises, role plays and feedback rounds (both giving and receiving), all under the constant guidance of the trainer. As a result, the first online course for “Effective communication for engineers” will be offered on our completely new online learning platform, together with six live virtual classroom sessions.

The focus of this online course will be on the three most important topics in corporations:

  • Create clear communication and mutual understanding.
  • Convince (mixed) groups of stakeholders and transform resistance into buy-in.
  • Learn the psychology behind your own behavior and that of others.

For those who prefer the classroom training, a new session is available in October.

November 2020: System architect(ing) edition in Leuven, Belgium

High Tech Institute is planning a special edition of the training System architect(ing) in November 2020.

Tech companies in Belgium have shown an increasing appetite for the System architect(ing) (Sysarch) training, and that’s why High Tech Institute decided to plan a special edition in the city of Leuven – which is easily accessible from Brussels, Liège and Antwerp.

Luud Engels will be the trainer for Sysarch in Leuven. Engels is a senior systems architect with extensive experience in the consumer electronics and high-tech industries.

The course will be held at ‘De Hoorn’, located at Sluisstraat 79 in Leuven, near the train station. Ample parking is available within walking distance.

All social distancing measures aimed at preventing the spread of COVID-19 will be taken care of by De Hoorn and High Tech Institute.

This special edition takes place from 16–20 November 2020.

AI engineering part 2: data versioning and dependency management

In my last column, I presented our research agenda for AI engineering. This time, we’re going to focus on one of the topics on that agenda: data versioning and dependency management. Even though the big data era has been with us for over a decade now, many of the companies that we work with are still struggling with their data pipelines, data lakes and data warehouses.

As we mostly work with the embedded systems industry in the B2B space, one of the first challenges many companies struggle with is access to data and ownership issues. As I discussed in an [earlier column](https://bits-chips.nl/artikel/get-your-data-out-of-the-gray-zone/), rather than allowing your data to exist in some kind of grey zone where it’s unclear who owns what, it’s critical to address questions around access, usage and ownership of data between your customers and your company. And of course, we need to be clear and transparent about the use of the data, as well as how the data is anonymized and aggregated before being shared with others.

The second challenge in this space is associated with the increasing use of DevOps. As data generation is much less mature as a technology than, for instance, API management in software, teams tend to make rather ad-hoc changes to the way log data is generated, believing they’re the only consumers and that the data is only used to evaluate the behavior of the functionality they’re working on. Consequently, other consumers tend to experience frequent disruptions to both the data stream and its content.

The frequent changes to data formats and the way data is generated are especially challenging for machine learning (ML) applications, as the performance of ML models is highly dependent on the quality of the data. Changes to the data can therefore cause unexpected degradations in performance. Also, as ML models tend to be very data hungry, we typically want to use large data sets for training and, consequently, combine the data from multiple sprints and DevOps deployments into a single training and validation data set. However, if the data generated by each deployment is subtly (or not so subtly) different, that can become challenging.
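To make this concrete, here’s a minimal sketch of the kind of pre-merge check that helps: it verifies that the log data from different deployments shares the same columns before combining it into a single training set. The file names, column names and the pandas-based approach are purely illustrative assumptions, not part of any actual tooling.

```python
# Minimal sketch of a pre-merge consistency check, assuming each DevOps
# deployment delivers its log data as a separate CSV file. File names,
# columns and the handling policy below are hypothetical illustrations.
import pandas as pd

EXPECTED_COLUMNS = {"timestamp", "device_id", "sensor_value", "sw_version"}

def load_and_check(paths):
    """Load per-deployment data sets and refuse to merge incompatible ones."""
    frames = []
    for path in paths:
        df = pd.read_csv(path)
        missing = EXPECTED_COLUMNS - set(df.columns)
        extra = set(df.columns) - EXPECTED_COLUMNS
        if missing:
            raise ValueError(f"{path}: missing columns {missing}")
        if extra:
            print(f"warning: {path} has unexpected columns {extra}; dropping them")
            df = df[sorted(EXPECTED_COLUMNS)]
        frames.append(df)
    # Only build the combined training/validation set if every slice conforms.
    return pd.concat(frames, ignore_index=True)

# Hypothetical usage:
# combined = load_and_check(["sprint_41_logs.csv", "sprint_42_logs.csv"])
```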

The third challenge is that data pipelines tend to have implicit dependencies that can unexpectedly surface when implementing changes or improvements. Consumers of data streams can suddenly find themselves cut off and, as there typically is significant business criticality associated with the functionality implemented by the consumer, this easily leads to firefighting actions to get the consumer of the data back online. However, even if this may be a nice endorphin kick for the cowboys in the organization, the fact of the matter is that we shouldn’t have experienced these kinds of problems to begin with. Instead, the parties generating, processing and consuming data need to be properly governed and the evolution of the pipeline and its contents should be coordinated among the affected players.

'We’re working on a domain-specific language to model data pipelines'

These are just some of the challenges associated with data management. In earlier research, we’ve provided a comprehensive overview of the data management challenges. In our current research, we’re working on a domain-specific language to model data pipelines, including the processing and storage nodes, as well as their mutual connectors. The long-term goal is to be able to generate operational pipelines that include monitoring solutions that can detect the absence of data streams, even in the case of batch delivery, as well as a host of other deviations.
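To give a flavor of what modeling a pipeline could look like, here’s a hypothetical sketch in Python. It is not the actual domain-specific language from our research, just an illustration of processing and storage nodes, the connectors between them and the kind of monitoring rules one might attach to a connector.

```python
# Hypothetical sketch of a declarative pipeline model; not the actual DSL,
# just an illustration of nodes, connectors and attached monitoring rules.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str            # e.g. "source", "processing" or "storage"

@dataclass
class Connector:
    producer: Node
    consumer: Node
    delivery: str        # e.g. "stream" or "batch"
    max_silence_s: int   # monitoring rule: alert if no data arrives for this long

@dataclass
class Pipeline:
    nodes: list = field(default_factory=list)
    connectors: list = field(default_factory=list)

    def connect(self, producer, consumer, delivery, max_silence_s):
        self.connectors.append(Connector(producer, consumer, delivery, max_silence_s))

# Hypothetical usage: model the pipeline, then generate monitors from the model.
logs = Node("device-logs", "source")
cleaner = Node("cleaning-job", "processing")
lake = Node("data-lake", "storage")

pipeline = Pipeline(nodes=[logs, cleaner, lake])
pipeline.connect(logs, cleaner, delivery="stream", max_silence_s=300)
pipeline.connect(cleaner, lake, delivery="batch", max_silence_s=86_400)
```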

In addition, we’ve worked on a “data linter” solution that can warn when the content of the data changes, ranging from simple changes such as missing or out-of-range data to more complicated ones such as statistical distributions shifting over time. The solution can warn, reject data and trigger mitigation strategies that address the problems with the data without interrupting operations. Please contact me if you’d like to learn more.
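As an illustration of the kinds of checks such a linter performs, the sketch below flags missing values, out-of-range values and a shifted statistical distribution in a numeric data stream. It is not our actual solution; the thresholds are made up and the Kolmogorov-Smirnov test is just one plausible way to detect drift.

```python
# Illustrative sketch of data-linter-style checks; thresholds and the choice
# of drift test are assumptions for the sake of the example.
import numpy as np
from scipy.stats import ks_2samp

def lint_batch(values, reference, low=0.0, high=100.0, p_threshold=0.01):
    """Return a list of warnings for a new batch of a numeric data stream."""
    values = np.asarray(values, dtype=float)
    warnings = []

    n_missing = int(np.isnan(values).sum())
    if n_missing:
        warnings.append(f"{n_missing} missing values")

    out_of_range = int(((values < low) | (values > high)).sum())
    if out_of_range:
        warnings.append(f"{out_of_range} values outside [{low}, {high}]")

    # Compare the new batch against a reference sample from earlier deployments.
    statistic, p_value = ks_2samp(values[~np.isnan(values)], reference)
    if p_value < p_threshold:
        warnings.append(f"distribution drift suspected (KS p={p_value:.4f})")

    return warnings
```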

Concluding, data management, including versioning and dependencies, is a surprisingly complicated topic that many companies haven’t yet wrestled to the ground. The difference in maturity between the way we deal with software and with data is simply staggering, especially in embedded systems companies where data traditionally was only used for defect management and quality assurance. In our research, we work with companies to make a step function change to the way data is collected, processed, stored, managed and exploited. As data is the new oil, according to some, it’s critical to take it as seriously as any other asset that you have available in your business.

AI engineering: making AI real

Few technologies create the level of hype, excitement and fear these days that artificial intelligence (AI) does. The uninitiated believe that general AI is around the corner and worry that Skynet will take over soon. Even among those who understand the technology, there’s amazement and excitement about the things we’re able to do now and lots of predictions about what might happen next.

'Rolling out an ML/DL model remains a significant engineering challenge'

The reality is, of course, much less pretty than the beliefs we all walk around with. Not because the technology doesn’t work, as it does in several or even many cases, but because rolling out a machine learning (ML) or deep learning (DL) model in production-quality, industry-strength deployments remains a significant engineering challenge. Companies such as Peltarion help address some of these challenges and do a great job at it.

Taking an end-to-end perspective, in our research we’ve developed an agenda that aims to provide a comprehensive overview of the topics that need to be addressed when transitioning from the experimentation and prototyping stage to deployment. This agenda is based on more than 15 case studies we’ve been involved with and over 40 problems and challenges we’ve identified.

The AI Engineering research agenda developed in Software Center

The research agenda follows the typical four-stage data science process: getting the data, creating and evolving the model, training and evaluating it and then deploying it. For generic AI engineering, we identify, for each of the stages, the primary research challenges related to architecture, development and process. These challenges are mostly concerned with properly managing data, federated solutions, ensuring the various quality attributes, integrating ML/DL models into the rest of the system, monitoring during operations and infrastructure.
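For readers who prefer code over prose, the four stages can be pictured as a bare skeleton; the function names below are hypothetical placeholders that simply mirror the stages listed above.

```python
# Hypothetical skeleton mirroring the four-stage data science process;
# each function is a placeholder for the real work in that stage.
def get_data():
    """Stage 1: collect, clean and version the raw data."""
    ...

def build_model(data):
    """Stage 2: create and evolve the model."""
    ...

def train_and_evaluate(model, data):
    """Stage 3: train the model and evaluate it against quality targets."""
    ...

def deploy(trained_model):
    """Stage 4: integrate, deploy and monitor the model in the product."""
    ...

if __name__ == "__main__":
    data = get_data()
    model = build_model(data)
    trained = train_and_evaluate(model, data)
    deploy(trained)
```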

In addition to the generic AI engineering challenges, we recognize that different domains have their own unique challenges. We identify the key challenges for cyber-physical, safety-critical and autonomously improving systems. For cyber-physical systems, the challenges are, as one would expect, concerned with managing many instances of a system deployed in the field at customers. For safety-critical systems, explainability, reproducibility and validation are key concerns. Finally, autonomously improving systems require the ability to monitor and observe their own behavior, generate alternative solutions for experimentation and balance exploration versus exploitation.

Concluding, building and deploying production-quality, industry-strength ML/DL systems requires AI engineering as a discipline. I’ve outlined what we, in our research group, believe are the key research challenges that need to be addressed to allow more companies to transition from experimentation and prototyping to real-world deployment. This post is just a high-level summary of the work we’re doing in Software Center; there’s much more to watch and read, and you can contact me if you want to learn more.

Why Agile matters

Recently, I got an e-mail asking me why we should care about Agile if the overall product development process, including mechanics and electronics, is measured in years and is completely waterfall. The question took me by surprise. I’ve been working with Agile practices for the better part of two decades now and for me it’s a given that fast feedback loops are better than slow ones.

However, after more careful reflection, I realized that the question is based on a few assumptions that, in turn, are founded on our beliefs around our ability to predict. The first assumption is concerned with our ability to optimally predict requirements for our products months, quarters or years down the line. In many industries where products contain mechanical and electronic components, the production pipeline requires long lead times. Consequently, the product requirements are formulated long before the start of production. The fallacy is, of course, that requirements change all the time due to new technologies becoming available, changing customer preferences, actions taken by competitors and so on. One rule of thumb in software says that requirements change at a rate of 1 percent per month – a very conservative estimate if you ask me. Even at that rate, a product with a three-year lead time will see roughly a third of its requirements change before the start of production.

So, how to respond to constantly changing requirements? There are fundamentally two approaches. Either you adopt agility and continuously respond to changes or you resist requirement changes, reject all that you can and grudgingly accept those that you really can’t ignore. The result of the latter approach is, of course, an inferior product as it’s based on the best insights from years ago.

The second assumption is that we can predict the effect of our requirements. Requirements are defined because we hope to achieve a specific outcome as a consequence of realizing them. We see this most often with usability requirements, but it basically extends to any quality attribute of the system. Online companies use A/B testing of solutions to determine the effects of different realizations of functions and features on users. These companies don’t do that because they’re so poor at requirements engineering, but because the effect of features and functions is fundamentally unknown when it comes to the way humans respond to software functions.
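As a simple illustration of what such an A/B evaluation boils down to, the sketch below compares the click-through rates of two variants with a chi-square test. The numbers are made up and the test choice is just one common option, not a prescription.

```python
# Minimal sketch of an A/B evaluation: did variant B change the click-through
# rate compared to variant A? All numbers are fabricated for illustration.
from scipy.stats import chi2_contingency

# rows: variant A, variant B; columns: clicked, did not click
observed = [
    [420, 9580],   # variant A: 4.2% click-through out of 10,000 users
    [505, 9495],   # variant B: ~5.1% click-through out of 10,000 users
]

chi2, p_value, dof, expected = chi2_contingency(observed)
if p_value < 0.05:
    print(f"measurable effect: p = {p_value:.4f}, keep the better variant")
else:
    print(f"no measurable effect yet: p = {p_value:.4f}, keep collecting data")
```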

Traditional engineering companies pride themselves on their ability to predict the capabilities of systems before they build them, as engineering offers a set of mathematical tools for modeling, simulating and predicting. These models are typically then confirmed by lab tests and in some cases small-scale tests in real-world contexts before fully committing to a specific design. Although this works quite well in many circumstances, it remains the case that measuring in real-world deployments provides much higher validity than mathematical models and lab tests. As I’ve shared in earlier posts, research by us and others shows that at least half of all the functions in a typical system are never used or used so seldom that the R&D investment is a waste. So, wherever we can use techniques to deploy slices of functionality or features and measure the effect before building more, we should, as it allows for a major improvement in the effectiveness of our R&D.

'We need real-world experiments to continuously improve'

Although many understand that real-world experimentation concerning usability and user behavior is a necessity, the same is true for all quality attributes. Think of all the security fixes that we need to roll out. Often these concern vulnerabilities to threats that were known before the design of the system was finished. It just turned out that the mitigation strategies that engineers designed into the system didn’t suffice. Similarly, do we know for a fact that the current system design gives us the highest performance, the best robustness, the highest energy efficiency? Of course not! Rather than relying on models and lab tests, we need real-world experiments with our products at customers in the field to continuously improve. The models and lab tests are still needed, but mostly to protect us from the downside of less successful experiments before deployment.

Concluding, if you’re able to perfectly predict the optimal set of requirements for a system or product years ahead of the start of production or deployment and if you’re able to accurately predict the effect of each requirement on the user, the customer and the quality attributes of the system, then you don’t need Agile. In all other cases, Agile (both pre-deployment and post-deployment – DevOps) offers the opportunity for a massive improvement in the effectiveness of your R&D (as measured in value created for each unit of R&D). It’s not that we can’t build products using traditional waterfall processes – of course we can, as we’ve done so for decades. The challenge is that we’re much less efficient doing so, which increases the risk of disruption for our company.

Workshop Thermal design & cooling of electronics goes online

We are really proud of our trainers Wendy Luiten and Clemens Lasance, who managed to develop an online version of their workshop “Thermal design & cooling of electronics” as an alternative to their classroom course. The training, which traditionally attracts many trainees from abroad, will now offer easier access via the online course modules. The first edition is scheduled for this May.

The COVID-19 pandemic calls for a different approach to exchanging knowledge. Since video conferencing is indispensable these days, people are getting more comfortable using this type of communication. As a result, the first online course for “Thermal design and cooling of electronics” will be offered through Microsoft Teams.

The online version has been split up into two segments: the thermal design-oriented part, followed by the advanced topics portion. The thermal design section is being extended by two half days, allowing for more opportunities to practice and achieve an active skill level for designing new thermal applications, evaluating existing thermal applications, and assessing computational simulation models. The advanced section builds on this foundation and is scheduled several weeks later. This provides participants with more time to get familiar with the material and facilitates the uptake of the advanced material.

For those who prefer the classroom workshop, a new session is available in November.

Digitalization accelerated

For all the human suffering and economic impact caused by corona, there’s one thing that has surprised me over and over again these last weeks: companies and professionals just adjust, and adjust quickly. Teams and departments that were stuck in old ways of working have suddenly found that it’s entirely possible to work in a remote setup.

During this week’s Software Center steering committee meeting, all the companies present shared how they kept the business going despite everything. Those developing software, meeting customers or doing administrative work were working from home, but things were progressing. Those that required access to complex machinery or worked in manufacturing were still at the company but had taken measures to protect against infection to the best extent possible.

All these new work setups required everyone to spend time adjusting, required more infrastructure in some cases and gave IT departments a busy time. But after the first week or so, most people got into the groove and things seem to be moving forward at largely the same pace.

Now, I’m not at all implying that the current situation is ideal. Some companies have shut down or are working at 40-60 percent of capacity. Many experience loneliness due to the lack of human contact. And for all the video conferencing in the world, nothing beats standing together in front of a whiteboard during a brainstorm session. My point is that we’re able to push forward, to conduct R&D, to drive sales and to keep things going to a much larger extent than what I’d initially feared.

'Necessity is the mother of invention'

And, of course, there’s the notion of digitalization. Working behavior, interactions with customers and activities that were viewed as simply requiring physical presence have now digitalized at a phenomenal pace. Necessity is the mother of invention and it’s clear that things that were considered impossible, or at least sub-par, are suddenly entirely possible and will soon be the norm.

As a leader, you now have a choice to make. Either you change as little as possible with the intent of changing back to the old ways of doing things as soon as possible. Or you use this opportunity to drive as much change as possible and use this as a springboard for accelerating all kinds of changes in your organization, ranging from the business models, interactions with customers and the way sales is conducted to the way you conduct R&D, what and how you automate processes and where you use humans. As the saying goes: Never waste a good crisis!