ASML system engineer awarded first ECP2 Silver certificate

ASML engineer Buket Şahin has become the first person to receive the ECP2 Silver certificate. For Şahin, that's just a side effect of her passion for learning. She likes to dig into new fields and has become a better system engineer because of it.

When Buket Şahin was doing her bachelor's degree in mechanical engineering in Istanbul, she joined her university's solar car and Formula SAE teams. That decision quickly made her realise the limitations of her knowledge and put her on a lifelong track of learning about fields different from her own.

“That’s when it all started”, Şahin recalls. “I saw how necessary it was to learn about other disciplines. Obviously, I knew the mechanical domain well, but suddenly I had to work with, for example, electrical engineers. I couldn’t understand what they were talking about, and I really wanted to.”

Şahin eventually graduated with a bachelor's degree in mechanical engineering and a master's in mechatronics, in addition to an MBA. She first worked as a systems engineer in the Turkish defence industry before moving to ASML in 2012. She started in development & engineering on the NXT and NXE platforms. Currently, she works as a product safety system engineer for ASML's EUV machines.

During that journey, she persistently sought out new knowledge, taking a score of courses in fields such as electronics, optics and mechatronics. At the end of 2024, she became the first person to achieve the ECP2 Silver certificate. ECP2 is the European certified precision engineering course programme that emerged from a collaboration between euspen and DSPE. To receive the certificate, she had to take 35 points' worth of ECP2-certified courses.

“My goal wasn’t to achieve this certification”, she laughs. “But in the end it turned out I was the first one to get it.”

Helicopter view

Şahin’s position at ASML combines system engineering with a view on safety. “We are responsible for the whole EUV machine from a safety point of view”, she notes. “This includes internal and external alignment, overseeing the program and managing engineers and architects.”

The team she works in comprises up to hundreds of people, with a core team of around fifteen system engineers. One of those roles is safety-specific, the one she fulfils.

''I need to maintain a helicopter view, but also be able to dig into the parts.''

Taking that wider systems perspective, which combines different fields, is something she likes. It allows her to put into practice the things she has learned throughout her career. “I have broad interests”, says Şahin. “I like all kinds of sub-fields of science and engineering. In systems engineering I can pursue that curiosity. That’s also the reason why I like learning and taking courses so much. As a system engineer you need to know a complex system and the technical background of its parts. You need to be able to dig deeper into the design. You need to be able to dive into the different disciplines, but at the same time maintain a helicopter view. Maintaining that balance is something that I like very much.”

Buket Şahin at ASML’s experience center. 

NASA handbook

Şahin started taking courses as soon as she landed at ASML. She realised that she should expand her knowledge beyond what her degrees had taught her. “They were very theoretical”, she admits. “They weren’t very applied. The research and development industry in Turkey isn’t as mature as it is in the Netherlands, particularly for semiconductors. In the Netherlands there’s a very good interaction between universities and industry. I wanted to gain that hands-on knowledge. So I started with courses in mechatronics and electronics. Then I wanted to learn about optics, a very relevant field when you work at ASML. I just continued from there.”

Curiosity is a driving force for Şahin. “Some courses I took because I needed the knowledge in my work, but others were out of curiosity. I wanted to develop myself and learn new things. The courses allowed me to do that.”

Interestingly, she didn’t take any courses on system engineering. “I was mainly looking to gain a deeper knowledge in various technical disciplines”, she looks back. “My first job was as a system engineer, but the way the role is defined varies heavily between companies. System engineers in the semiconductor industry require knowledge of its different sub-fields. An ASML machine is also very complex, so you really need to keep your knowledge current. Things can change fast, and you need to stay up to date. That’s why learning is such a big part of my career.”

She did learn how to be a system engineer within ASML, both by learning on the job and by taking internal courses. “There are internal ASML system engineering trainings”, says Şahin. “That’s why I didn’t need external courses. I also learned the field from the NASA Systems Engineering Handbook back in Turkey. That’s also the methodology ASML uses.”

Hands-on knowledge

When Şahin looks back on all the courses she took since she moved to the Netherlands, it’s the practical ones that stand out. “The most important thing I learned was applied knowledge”, she says. “Going to university taught me the theory, but it’s the day-to-day insights that are important. I particularly like it when courses teach you rules of thumb, pragmatic approaches and examples from the industry itself. That’s the key knowledge for me. It particularly helps when the instructors are from the industry, so they can show us what they worked on themselves.”

Since 2012, learning has also become easier. “When I started, there weren’t as many learning structures to guide you. High Tech Institute today, for example, has an easy-to-access course list. In 2012, however, I had to do much more research; courses weren’t advertised as much, and some were even only in Dutch. I had to ask colleagues and find out for myself. If I were starting today, things would be much easier.”


“If it helps you achieve your goal, it’s very easy to take courses when you’re working at ASML”, says Şahin.

At ASML they are happy about Şahin’s new certification, and the hunger she shows to learn new things. “My managers always supported me”, says Şahin. “We define development goals, and select the training that would achieve those targets. If it helps you achieve your goal, it’s very easy to take courses when you’re working at ASML.”

''Learning, however, is a goal in itself for me, whether it’s connected to my job or not.''

Şahin is, for now, far from done. For her, the learning never stops. “I just started a master’s programme at KU Leuven. It’s an advanced master in safety engineering, and it’s connected to my position at ASML. My short-term goal is to complete this master. After that I want to continue my career here at ASML as a system engineer. Learning, however, is a goal in itself for me, whether it’s connected to my job or not.”

This article is written by Tom Cassauwers, freelancer at Bits&Chips.

 

Software quality is about much more than code

Starting with punch cards in the early 1980s, Ger Cloudt learned valuable lessons about developing good software. The new High Tech Institute trainer shares his insights about the interplay between processes and skills, about measuring software quality and about fostering an organizational culture where engineers can deliver high-quality software.

Ger Cloudt’s first encounter with programming involved the use of punch cards during his electronics studies at Fontys Venlo University of Applied Sciences in the early 1980s. After graduating, he embarked on a career as a digital electronics engineer, focusing on both designing digital circuitry and developing software to control microprocessors. “This was in assembler, and I remember I created truly unstructured spaghetti code,” Cloudt recalls. “Naturally, this made it exceedingly difficult to troubleshoot and fix bugs, teaching me a tough lesson that there must be a better way.”

Fortunately, during his second assignment, Cloudt was paired with an experienced mentor who taught him to begin with structured pseudocode and then convert it into assembler. “This was the first time I experienced that structure can facilitate the creation of robust code and make debugging easier.” The experience eventually led him to transition to software development a few years later.

On May 20, we’re organizing a free webinar, ‘Infamous software failures’, presented by trainer Ger Cloudt. Registration is open.

Process versus skill

Cloudt went on to work at Philips Medical Systems as a software development engineer and later as a software architect, where he learned how processes and skills complement each other. “To execute actions, you need a certain skill level, while to achieve results, actions must be structured by a process. However, the importance of process or skill depends on the type of task. On the one hand, there are tasks like assembly-line work or assembling Ikea furniture, which involve a strict process but minimal skill requirements. On the other hand, tasks such as painting the Mona Lisa, as Leonardo da Vinci did, rely less on process but require a high skill level that few possess.”

''I increasingly believed that skill level is more important than processes for software engineers. A process can facilitate applying your skills, but with inadequate skills, no process will help.''

During this period, Cloudt observed a strong emphasis on processes in software development. “This was the period of the Capability Maturity Model’s emergence, aimed at improving software development processes. However, even with processes in place, skills remain essential. In the pursuit of high CMM levels, undervaluing skill is a real danger.” This insight was further reinforced when Cloudt transitioned to management roles at Philips Medical Systems, leading teams of sixty people. “Achieving a specific CMM level quickly turns into a goal in itself, and as Goodhart’s Law states: when a measure becomes a target, it ceases to be a good measure. I increasingly believed that skill level is more important than processes for software engineers. A process can facilitate applying your skills, but with inadequate skills, no process will help.”

Cloudt subsequently learned about the importance of transparency. “In my first quality management role, I had to look at an issue involving the integration of two distinct software stacks. One team developed NFC software, another worked on software for a secure element. Integrating both turned out to be a challenge. When I looked at it deeper, I discovered that although the teams were testing their software, test failures weren’t monitored systematically. So we created daily updated dashboards showing test results, and the developers had daily discussions of the outcomes. We even shared the dashboards with the customer. Naturally, everything appeared red initially, but this served as a strong incentive for the developers to improve. Consequently, the project succeeded.”

Learning by sharing

In his role as a software R&D manager at Bosch, Cloudt started to feel the need to share his insights on software quality. He began by sharing articles on the company’s internal social network, as well as on Linkedin. “I received a lot of positive feedback, particularly from Bosch colleagues,” he says. “So in 2020, I decided to write a book, ‘What is software quality?’. This experience was very enriching, as it made much of my implicit knowledge explicit and revealed gaps in my knowledge as well.”

In a quality committee at Bosch, Cloudt met a young graduate with a Master’s degree in quality management. When he asked whether the graduate had taken a course on software quality, the answer was negative. “This prompted me to approach the Engineering Doctorate program at Eindhoven University of Technology, where they invited me to give a guest lecture. Eventually, I became a lecturer for a quality management course.” Cloudt also began speaking about software quality at events, such as a Bits&Chips event in 2021 and he’s currently launching two training programs at the High Tech Institute, one for engineers and one for managers. His current role is software quality manager for the Digital Application Platform development at ASML.

Measuring software quality

Software quality as such isn’t measurable, Cloudt maintains, due to the concept’s diversity. “You can measure some specific aspects of software quality, known as ‘modeled quality.’ These include cyclomatic complexity of code, dependencies, code coverage, line count and open bugs. Such metrics are useful, but everyone who sets targets on them should be wary of Goodhart’s Law.”
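One of these "modeled quality" metrics, cyclomatic complexity, can be approximated mechanically. As a minimal sketch (not any particular tool's implementation), the following counts decision points in Python source with the standard `ast` module; the sample `grade` function is purely illustrative:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 plus one per decision point."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds two extra branch points
            complexity += len(node.values) - 1
    return complexity

code = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(code))  # → 3
```

Setting a hard target on such a number invites exactly the Goodhart effect Cloudt warns about; the metric is a conversation starter for a review, not a goal.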

An essential part of quality remains unmeasurable: transcendent quality. To illustrate this, Cloudt compares it to evaluating a painting. “You can measure paint thickness and canvas dimensions, but you can’t measure the painting’s beauty. The same applies to software quality: you can measure code coverage by your unit tests, but that doesn’t determine whether the tests are good. You need an expert opinion for this, supported by the modeled quality you measure.”

''Never underestimate culture. An organization should foster an environment where software engineers can thrive and deliver excellent design, code and product quality.''

When people think about software quality, they often mention aspects such as modularity, clean code and usability. These are examples of design quality (eg modularity, maintainability and separation of concerns), code quality (eg clean code, portability and unit tests) and product quality (eg usability, security and reliability). However, according to Cloudt, these three types of quality require a frequently overlooked element: organizational quality. “This type of quality determines whether your organization is able to build high-quality software. Aspects such as software craftsmanship, mature processes, collaboration and culture are vital to organizational quality. Never underestimate culture. An organization should foster an environment where software engineers can thrive and deliver excellent design, code and product quality.”

Intended and implemented design

There are several well-known best practices for developing high-quality software, including test-driven development (TDD) and pair programming, alongside static code analysis. Cloudt also adds something less common: static design analysis. “Many people don’t realize that there’s a difference between the intended design and the implemented design of software. Software architects document their intended design in UML models. However, a gap often exists between this intended design and its implementation in code. Keeping this gap small is a best practice. Tools can check for consistency between your code and UML models, issuing warnings when discrepancies arise.”
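The consistency check Cloudt describes can be sketched in miniature. Assuming a hypothetical intended design captured as a plain dictionary (real tools compare code against UML models), this flags methods the design calls for that the code doesn't implement:

```python
import ast

# Intended design, as an architect might capture it from a UML class
# diagram. The class and method names are purely illustrative.
INTENDED = {
    "OrderService": {"place_order", "cancel_order"},
}

def implemented_classes(source: str) -> dict[str, set[str]]:
    """Map each top-level class in the source to its method names."""
    classes = {}
    for node in ast.parse(source).body:
        if isinstance(node, ast.ClassDef):
            classes[node.name] = {
                item.name for item in node.body
                if isinstance(item, ast.FunctionDef)
            }
    return classes

def design_gaps(source: str) -> dict[str, set[str]]:
    """Methods the intended design requires but the code lacks."""
    impl = implemented_classes(source)
    return {
        cls: missing
        for cls, methods in INTENDED.items()
        if (missing := methods - impl.get(cls, set()))
    }

code = """
class OrderService:
    def place_order(self): ...
"""
print(design_gaps(code))  # cancel_order was never implemented
```

Run in a build pipeline, a check like this turns the gap between intended and implemented design into a visible warning instead of a silent drift.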

This gap between intended and implemented design often emerges under time constraints, for example due to project deadlines. “In such cases, you take a shortcut by ‘hacking’ a solution that allows you to meet the deadline, with less emphasis on quality,” Cloudt explains. “This is a deliberate choice to introduce technical debt due to time pressure. While this might be the only immediate solution, addressing this technical debt later is crucial. After the release is delivered, you should set aside some time to develop a proper, high-quality solution. Unfortunately, this doesn’t occur often. Managers should recognize the need to give developers time to reduce this gap and this technical debt to prevent future issues. Through their decisions, managers significantly contribute to organizational quality, directly influencing software quality.”

This article is written by Koen Vervloesem, freelancer for Bits&Chips.

 

Cultivating responsible AI practices in software development

As AI technologies become embedded in software development processes because of their productivity gains, developers face complex security challenges. Join Balázs Kiss as he explores the essential security practices and prompting techniques needed to use AI responsibly and effectively.

The use of artificial intelligence (AI) in software development has been expanding in recent years. As with any technological advancement, this also brings along security implications. Balázs Kiss, product development lead at Hungarian training provider Cydrill Software Security, had already been scrutinizing the security of machine learning before the widespread attention on generative AI. “While nowadays everyone is discussing large language models, back in 2020 the focus was predominantly on machine learning, with most users being scientists in R&D departments.”

Upon examining the state of the art, Kiss found that many fundamental concepts from the software security world were ignored. “Aspects such as input validation, access control, supply chain security and preventing excessive resource use are important for any software project, including machine learning. So when I realized people weren’t adhering to these practices in their AI systems, I looked into potential attacks on these systems. As a result, I’m not convinced that machine learning is safe enough to use without human oversight. AI researcher Nicholas Carlini from Google DeepMind even compared the current state of ML security to the early days of cryptography before Claude Shannon, without strong algorithms backed by a rigorous mathematical foundation.”

With the surge in popularity of large language models, Kiss noticed the same fundamental security problems resurfacing. “Even the same names were showing up in research papers. For example, Carlini was involved in designing an attack to automatically generate jailbreaks for any LLM – mirroring adversarial attacks that have been used against computer vision models for a decade.”

Fabricated dependencies

When developers currently use an LLM to generate code, they must remember they’re essentially using an advanced autocomplete function. “The output will resemble code it was trained on, appearing quite convincing. However, that doesn’t guarantee its correctness. For instance, when an LLM generates code that includes a library, it often fabricates a fake name because it’s a word that makes sense in that context. Cybercriminals are now creating libraries with these fictitious names, embedding malware and uploading them to popular code repositories. So if you use this generated code without verifying it, your software may inadvertently execute malware.”
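One pragmatic defence against such fabricated dependencies is to check generated code against a list of vetted packages before installing anything. A minimal sketch, with an illustrative allowlist and a made-up package name standing in for a hallucinated one:

```python
import ast

# Vetted dependencies for this project (illustrative allowlist).
# Anything outside it, such as a name an LLM invented, gets flagged
# for human review before it is ever installed.
VETTED = {"requests", "numpy", "json", "os"}

def unvetted_imports(generated_code: str) -> set[str]:
    """Return imported top-level package names not on the vetted list."""
    found = set()
    for node in ast.walk(ast.parse(generated_code)):
        if isinstance(node, ast.Import):
            found.update(a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - VETTED

# 'fastjsonparse2' is a fabricated name of the kind an LLM might emit
snippet = "import requests\nimport fastjsonparse2\n"
print(unvetted_imports(snippet))  # → {'fastjsonparse2'}
```

The allowlist shifts the failure mode from silently installing malware to a deliberate decision about each new dependency.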

In the US, the National Institute of Standards and Technology (NIST) has outlined seven essential building blocks of responsible AI: validity and reliability, safety, security and resiliency, accountability and transparency, explainability and interpretability, privacy, and fairness with mitigation of harmful bias. “The attack involving fabricated libraries is an example where security and resiliency are compromised, but the other building blocks are equally important for trustworthy and responsible AI. For instance, ‘validity and reliability’ means that results should be consistently correct: getting a correct result one time and a wrong one the next time you ask the LLM to do the same task isn’t reliable.”

''If you’re aware of the type of vulnerabilities you can expect, such as cross-site scripting vulnerabilities in web applications, specify them in your questions.''

As for bias, this is often understood in other domains, such as large language models expressing stereotypical assumptions about occupations of men and women. However, a dataset with code can also exhibit bias, Kiss explains. “If an LLM is trained solely on open-source code from Github, it could be biased toward code using the same libraries as the code it was trained on, or code with English documentation. This affects the type of code the LLM generates and its performance on tasks performed on code that differs from what it has seen in its training set, possibly doing worse when interfacing with a custom closed-source API.”

Balázs Kiss
Credits: Egressy Orsi Foto 

Effective prompting

According to Kiss, many best practices for the responsible use of AI in software development aren’t novel. “Validate user input in your code, verify third-party libraries you use, check for vulnerabilities – this is all common knowledge in the security domain. Many tools are available to assist with these tasks.” You can even use AI to verify AI-generated code, Kiss suggests. “Feed the generated code back into the system and ask it for criticism. Are there any issues with this code? How might they be resolved?” Results of this approach can be quite good, Kiss states, and the more precise your questions are, the better the LLM’s performance. “Don’t merely ask whether the generated code is secure. If you’re aware of the type of vulnerabilities you can expect, such as cross-site scripting vulnerabilities in web applications, specify them in your questions.”
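Kiss's advice to name the vulnerability classes you expect, rather than asking a generic "is this secure?", can be captured in a small prompt builder. The function and wording below are a hypothetical sketch, not taken from any specific tool:

```python
def build_review_prompt(code: str, vuln_classes: list[str]) -> str:
    """Ask for a review against explicitly named vulnerability classes."""
    checks = "\n".join(f"- {v}" for v in vuln_classes)
    return (
        "Review the following code. For each of these vulnerability "
        "classes, state whether it is present and how to fix it:\n"
        f"{checks}\n\nCode:\n```\n{code}\n```"
    )

# A web snippet where XSS is the obvious class to name (made-up example)
prompt = build_review_prompt(
    'page = "<h1>" + request.args["name"] + "</h1>"',
    ["cross-site scripting (XSS)", "improper input validation"],
)
print(prompt)
```

The more precisely the prompt pins down what to look for, the more useful the model's criticism tends to be.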

A lot of emerging best practices exist for creating effective prompts, ie the questions you present to the LLM. One-shot or few-shot prompting, where you provide one or a few examples of the expected output to the LLM, is a powerful technique for obtaining more reliable results, according to Kiss. “For example, if your code currently processes XML files and you want to switch to JSON, you might simply ask to transform the code to handle JSON. However, the generated code will be much better if you add an example of your data in XML format alongside the same data in JSON format and ask for code to process the JSON instead.”
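Such a one-shot prompt might be assembled as follows; the sample record and the wording are purely illustrative:

```python
# A paired example pins down the exact data shapes the model should
# assume, instead of letting it guess the JSON structure.
XML_EXAMPLE = "<user><name>Ada</name><age>36</age></user>"
JSON_EXAMPLE = '{"name": "Ada", "age": 36}'

def one_shot_prompt(task: str) -> str:
    """Attach one XML/JSON example pair to the task description."""
    return (
        f"{task}\n\n"
        "Here is the same record in both formats:\n"
        f"XML input:  {XML_EXAMPLE}\n"
        f"JSON input: {JSON_EXAMPLE}\n"
        "Generate code that parses the JSON form instead of the XML form."
    )

print(one_shot_prompt("Rewrite my parser to handle JSON instead of XML."))
```

Adding more example pairs (few-shot prompting) follows the same pattern and further constrains the output.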

''With the present state of generative AI, it’s possible to write code without understanding programming. However, if you don’t understand the generated code, how will you maintain it?''

Another useful prompting technique is chain-of-thought prompting – instructing an LLM to show its reasoning process for obtaining an answer, thereby enhancing the result. Kiss has assembled these and other prompting techniques, alongside important pitfalls, in a one-day training on responsible AI in software development at High Tech Institute. “For example, unit tests generated by an LLM are often quite repetitive and hence not that useful. But the right prompts can improve them, and you can also do test-driven development by writing the unit tests yourself and asking the LLM to generate the corresponding code. This method can be quite effective.”
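The TDD variant Kiss mentions, writing the tests yourself and asking the model for code that passes them, could look like this; the `slugify` function and its tests are made-up examples:

```python
# You write the unit tests by hand; they then serve as the precise
# specification the LLM-generated code must satisfy.
TESTS = '''
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Spaces  ") == "spaces"
'''

prompt = (
    "Write a Python function `slugify` that makes the following tests "
    "pass. Return only the function definition.\n\n" + TESTS
)
print(prompt)
```

Because the tests are human-written, they anchor the generated code to behaviour you actually want, rather than to whatever the model finds statistically likely.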

Here to stay

With all these precautionary measures, one might wonder whether the big promise of AI code generation, increased developer productivity, still holds. “A recent study based on randomized controlled trials confirms that the use of generative AI increases developer productivity by 26 percent,” Kiss notes, with even greater benefits for less experienced developers. Yet, he cautions that this could be a pitfall for junior developers. “With the present state of generative AI, it’s possible to write code without understanding programming. Prominent AI researcher Andrej Karpathy even remarked: ‘The hottest new programming language is English.’ However, if you don’t understand the generated code, how will you maintain it? This leads to technical debt. We don’t know yet what effect the prolonged use of these tools will have on maintainability and robustness.”

Although the use of AI in software development comes with its issues, it’s undoubtedly here to stay, according to Kiss. “Even if it looks like a bubble or a hype today, there are demonstrable benefits, and the technology will become more widely accepted. Many tools that we’re witnessing today will be improved and even built into integrated development environments. Microsoft is already tightly integrating their Copilot in their Visual Studio products, and they’re not alone. However, human oversight will always be necessary; ultimately, AI is merely a tool, like any other tool developers use. And LLMs have inherent limitations, such as their tendency to ‘hallucinate’ – create fabrications. That’s just how they work because of their probabilistic nature, and users must always be aware of this when using them.”

This article is written by Koen Vervloesem, freelancer for Bits&Chips.