It seems that much of my work these days is concerned with bringing AI to the embedded-systems domain, understanding what the implications are and how companies should deal with it. In discussions with technical experts and business leaders, however, I constantly run into the same handful of misconceptions.

First, there are still quite a few people out there who think of ML/DL models as single-point solutions to a specific problem: build the model, integrate it, deploy the resulting software and be done. Although one could operate this way, doing so ignores the vast majority of the benefits you could deliver. Instead, it’s about continuous everything: a constant flow of software updates to the field, a constant flow of data coming back from your systems, and constant retraining and updating of models. We all read about DevOps, DataOps and AIOps, but that really is what it’s all about. The shorter you can make the cycles, the more value you can deliver.
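To make “continuous everything” a bit more tangible, here’s a minimal sketch of such a cycle in Python. Everything in it – the function names, the validation threshold, the stub implementations – is a hypothetical placeholder for whatever your actual pipeline uses, not a prescription.

```python
# A minimal sketch of the "continuous everything" cycle. The four stubs
# below are hypothetical stand-ins for real data-collection, training,
# validation and deployment infrastructure.

import random

def collect_field_data():
    """Placeholder: pull the latest telemetry back from deployed systems."""
    return [random.random() for _ in range(100)]

def retrain(model, data):
    """Placeholder: retrain the model on the freshest field data."""
    return {"version": model["version"] + 1}

def evaluate(model):
    """Placeholder: validate the candidate before anything ships."""
    return random.uniform(0.8, 1.0)

def deploy(model):
    """Placeholder: roll the update out to the fleet."""
    print(f"deployed model v{model['version']}")

model = {"version": 1}
for _ in range(3):  # in production, this loop never terminates
    candidate = retrain(model, collect_field_data())
    if evaluate(candidate) >= 0.9:  # only ship what passes validation
        deploy(candidate)
        model = candidate
```

The loop body is trivial on purpose: the value lies in how fast you can go around it, not in any single pass.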

Second, many view AI as a technical challenge to be solved by R&D, unrelated to the rest of the company. The fact is that it will change your business model. If you’re currently using a transactional business model, where you receive a one-time payment when you sell the product but need to provide software updates throughout its economic life, you’re in trouble: your costs are continuous while your revenue is not. The only way around this is to align your revenue model with your cost model. For most companies I work with, that means combining upfront product sales with a continuous service revenue model. This is actually a great way to generate more revenue from your customers.

Third, especially in companies that build large, complex and expensive systems, such as in automotive, telecommunications or automation, there’s no clear definition or understanding of the actual value provided to customers. The whole package sells and, consequently, the whole package is considered valuable. When moving towards continuous improvement of systems, however, you need to choose where to focus your energy and time. And that requires you to understand what’s actually valuable in your offering, as improving commodity functionality, with ML/DL models or otherwise, doesn’t help your customer. We’ve worked quite a bit with companies on modeling the value of their solutions or products to customers, and it’s surprisingly hard to be precise and concrete.

'You can’t use AI in isolation'

Fourth, you can’t use AI in isolation. Machine- and deep-learning solutions require data for training and operation. That data is generated by software that instruments the system. And of course, ML/DL models are themselves software. The only difference is that AI software is programmed through data and pattern recognition, rather than by humans in an algorithmic fashion. This means you need to be good at all digital technologies.
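As an illustration of that first dependency – the software that generates the data – consider a sketch like the one below. The decorator and the braking-distance function are made-up examples, not any particular library’s API; the point is simply that ordinary instrumentation code in the system produces the records your ML/DL pipeline later trains on.

```python
# A sketch of the instrumentation point: the same software that runs the
# system also generates the training data. Both functions are hypothetical.

import json
import time
from functools import wraps

def instrumented(fn):
    """Log every call so inputs and outputs flow back as training data."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        record = {"ts": time.time(), "fn": fn.__name__,
                  "inputs": args, "output": result}
        print(json.dumps(record))  # in practice: ship to your data pipeline
        return result
    return wrapper

@instrumented
def estimate_braking_distance(speed_kmh: float) -> float:
    """Placeholder system function whose behavior we want data on."""
    return (speed_kmh / 10) ** 2  # simple rule of thumb

estimate_braking_distance(80.0)
```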

The final misconception is that once you’ve trained an ML/DL model and it performs well in prototyping, the hard part is done. This couldn’t be further from the truth. Creating the model is actually the easy part; the hard part is industrializing it. Industrializing means setting up the data pipelines, putting monitoring and logging in place, ensuring correct (or at least acceptable) behavior in all cases, developing solutions for DevOps, DataOps and AIOps, and so on. In an earlier post, I wrote about our AI engineering research agenda, where we capture the major challenges to be addressed.
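To give a flavor of what industrializing entails, here’s a small Python sketch of just one item from that list: monitoring and guarding the behavior of a deployed model. The model, the acceptable output range and the fallback value are all hypothetical stand-ins for what your system would actually require.

```python
# A minimal sketch of one industrialization concern: monitoring a deployed
# model and enforcing acceptable behavior. Model, bounds and fallback are
# hypothetical stand-ins.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-serving")

SAFE_RANGE = (0.0, 1.0)   # assumed acceptable output range
FALLBACK = 0.5            # assumed safe default

def model_predict(x: float) -> float:
    """Placeholder for the trained ML/DL model."""
    return x * 1.2

def guarded_predict(x: float) -> float:
    """Serve a prediction, but log it and clamp unacceptable outputs."""
    y = model_predict(x)
    log.info("input=%s output=%s", x, y)  # feeds monitoring dashboards
    if not SAFE_RANGE[0] <= y <= SAFE_RANGE[1]:
        log.warning("out-of-range output %s, using fallback", y)
        return FALLBACK  # acceptable behavior even when the model misbehaves
    return y

guarded_predict(0.9)  # 0.9 * 1.2 = 1.08, so this triggers the fallback path
```

Multiply this by data pipelines, deployment automation and all the other items on the list and it becomes clear why the model itself is the smallest part of the work.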

AI is moving to the edge – and it should, because it has the potential to deliver enormous value. The challenge is that there are still quite a few misconceptions out there about what this actually means in practice. I’ve discussed five of these and provided my viewpoint. AI belongs on the edge, but there’s a lot of work around it that needs to be put in place at the same time. And as it won’t happen by itself, it’s critical that we get going on this yesterday.