Entrepreneur lesson #1: Too much early funding kills a startup

My grandparents on my maternal side were farmers and had cows. Their mature milking cows were put in fields with lots of lush grass and could eat what they wanted. The young cows that hadn’t yet been bred were put in the fields that had already been grazed clean by the milking cows, or on poor pastures. My initial belief was that this was a cost-saving measure, as the young cows didn’t give milk yet, but that was wrong. The real reason was that feeding young cows just enough that they don’t go hungry but stay lean causes them to grow into bigger cows, with a stronger frame and better milk production.

The funny thing is that the same is true for startups. In my experience, having too much funding during the early stages of a startup is counterproductive and may well cause the venture to fail. Most founders I talk to about this shake their heads in disbelief as they feel they’re constantly scraping by, closing deals by bending over backward for customers and permanently walking around with a feeling that they’re deviating from the company’s original mission.

Also, the fear of running out of money and disappointing the friends and family who invested their hard-earned savings, as well as the employees who depend on you for their paycheck, is real and results in existential angst in anyone with an empathetic bone in their body. Nobody wants to be a failure, and for all the noise of the “celebrate your failures” movement, everyone I know prefers to celebrate other people’s failures.

The point is, however, that in all your jockeying for closing deals with customers, being responsive to their needs and wishes, adjusting your vision to the feedback you’re receiving, you’re navigating the design space of your offering and gradually nailing the content and functionality you need to have to build a successful company.

When a startup has too much funding in the early stages, the founders and employees tend to focus internally, build products based on their own opinions and ignore customer input. Because of this, the constant interaction with the market doesn’t result in constant adjustment to customer input. Instead, the customer feedback is explained away and often translated into features that will be added to the offering later, after the team has built what it’s certain the customer needs.

Another consequence is that the company rapidly builds up to a burn rate that vastly outpaces the revenue coming in, causing a perpetual dependency on raising more money. And, believe it or not, raising money is actually addictive. In lieu of market success, it’s incredibly satisfying to at least have found investors who believe in your dream and are willing to carry the company forward for another couple of months. The problem is, of course, that raising money only confirms your storytelling ability, not the viability of the business. Also, all the time you’re busy raising money, you’re not building the business with customers.

'Startups only really benefit from large amounts of funding after they’ve nailed the offering'

There’s a time when startups really benefit from large amounts of funding: after they’ve nailed the offering to the market, when the investment goes to executing a scaling strategy that allows for regional expansion as well as expansion to serve multiple industries. But this should only happen once you’ve nailed the offering and have the revenue to prove it.

In the end, the balance for investors is to give startups enough funding that they don’t have to raise money all the time and the founders can focus on building the business – and, at the same time, not so much that the company stops feeling the dread of possible extinction (staring into the abyss, as Elon Musk calls it), which is what keeps the focus on adjusting to customer needs rather than to a fictitious, unconfirmed belief about the market. As a founder, most of what you believe about your market, customers and offering is wrong, but you don’t know which part. Constant interaction with the market forces you to kill your darlings to get to a viable product and company that manages to become cash flow positive and, preferably, grow like a weed once you’ve nailed it. But for that, you need to stay lean, just like my grandparents’ cows.

“Without reproducibility, you have nothing”

High-precision mechatronics is one of the strengths of the region. To maximize system performance, it is crucial to have a good metrology and calibration strategy. “Think ahead,” advises Rens Henselmans, teacher at High Tech Institute. “And be aware of what is really needed.”


Suppose you want to build a machine that can drill a hole in a piece of metal. The holes have to be drilled with such a level of accuracy that, once drilled, two separate pieces will fit perfectly together and can be connected with a dowel. What would that machine look like? And how will you reach the required precision? When you drill both holes slightly skewed in the same way, the pin will probably still fit. But if the deviation is not the same from one piece to another, you are screwed. And what if you place two drilling machines next to each other and combine their outputs; what will be the requirements then? Or more extreme, what if you buy the first part in China and the second in the US, what measures are necessary to ensure the dowel fits?

Even in an example as simple as drilling a hole, it turns out that reaching a very high level of accuracy isn’t at all trivial. Parameters such as measurement uncertainty, reproducibility and traceability must be well defined. If you haven’t mastered those as a system designer, you can forget about accuracy.

The term accuracy is often misused, says Rens Henselmans, CTO of Dutch United Instruments and teacher at High Tech Institute. “It is a qualitative concept: something is accurate or not. But there is no number attached to it,” he explains. That in itself is not a bad thing, he has experienced, “as long as everyone knows what is meant. Usually, it concerns the measurement uncertainty. That is, a certain value plus or minus one standard deviation.”

Rens Henselmans: ‘You can’t add calibration to your system afterwards.’


The meter

Reproducibility is often mixed up with repeatability. The latter term describes the variation that occurs when you repeat processes under exactly the same conditions. “Same weather, same time of day, same history,” says Henselmans, summing up the list of boundary conditions. “Reproducibility is the same variation, but under variable conditions, such as a different operator or even a different location. It is the harder version of repeatability, since more factors are in play.” Yet that system requirement is essential. “Without reproducible behavior, you have nothing,” declares Henselmans. “If your machine doesn’t always do the same thing, you can’t correct or calibrate system errors. Reproducibility is the lowest limit of what your machine will ever be able to do, if you could calibrate the systematic errors perfectly.”
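The distinction can be made concrete with a small numerical sketch. The readings below are invented for illustration: one series taken under identical conditions (repeatability) and one pooled across different operators and days (reproducibility).

```python
import statistics

# Illustrative position readings (mm) from the same hypothetical machine.
# Repeatability: repeated runs under exactly the same conditions.
same_conditions = [10.002, 10.001, 10.003, 10.002, 10.001]

# Reproducibility: runs pooled across different operators and days,
# so extra sources of variation enter the spread.
varied_conditions = [10.002, 10.005, 9.998, 10.004, 9.999, 10.006]

repeatability = statistics.stdev(same_conditions)
reproducibility = statistics.stdev(varied_conditions)

print(f"repeatability std:   {repeatability * 1000:.2f} um")
print(f"reproducibility std: {reproducibility * 1000:.2f} um")
```

Because reproducibility adds factors on top of repeatability, its spread can only be as large or larger; in this invented data set, roughly 0.8 µm versus 3.3 µm.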

Then traceability. “Internationally, we have made agreements about the exact length of a meter,” says Henselmans. “At the Dutch measurement institute NMI, they have a derivative of this, and every calibration company has a derivative of that. The deeper you get into the chain, the greater the deviation from the true standard and therefore the greater the uncertainty. When you present a measurement with an uncertainty, you should actually indicate how the uncertainties of all parts in the chain can be traced back to that one primary standard. Very simple, but it is often forgotten when talking about accuracy.”
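The growth of uncertainty down the chain can be sketched numerically. Assuming the contribution at each link is independent of the others, standard uncertainties combine in quadrature (root sum of squares); the names and figures below are illustrative, not real calibration-chain numbers.

```python
import math

# Hypothetical standard uncertainties (in nanometers) at each link of a
# calibration chain, from the primary meter standard down to the shop floor.
chain = {
    "primary standard (SI meter)": 1.0,
    "national institute transfer": 5.0,
    "calibration lab artefact": 20.0,
    "in-house reference": 50.0,
}

# Independent contributions combine in quadrature (root sum of squares),
# so the combined uncertainty grows with every extra link in the chain.
combined = 0.0
for step, u in chain.items():
    combined = math.sqrt(combined**2 + u**2)
    print(f"{step:30s} -> combined uncertainty {combined:6.1f} nm")
```

The deepest link dominates: the combined value ends up just above the largest single contribution, which is why the uncertainty is greatest at the end of the chain.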

Fortunately, that is not always necessary. “When you pattern a wafer, it doesn’t matter at all whether or not the diameter of that wafer is exactly 300 mm,” says Henselmans. “The challenge is to get the patterns neatly aligned. And even if the pattern is slightly distorted, it’s not disastrous, as long as that distortion is the same in every layer. It only gets tricky when you want to do the next exposure on a different machine, or even on a system from another manufacturer. Then they must at least all have the same deviation. Gradually, you come to the point where you want to trace everything back to the same reference and thus, ultimately, to the meter of the NMI.”


Common sense

What is really needed depends strongly on the application and on the budget you are given as a designer. “Technicians are prone to want too much and to show that they can meet challenging requirements. But that often makes their design too expensive,” warns Henselmans. His company, Dutch United Instruments, is developing a machine to measure the shape of aspherical and free-form optics, based on his PhD research from 2009. “At the start of that project, we wanted to achieve a measurement uncertainty of 30 nanometers in three directions. At some point, the penny dropped. Optical surfaces are always smooth and undulating. If you measure perpendicular to the surface with an optical sensor, an inaccuracy in that direction is a one-to-one measurement error. That’s where nanometer precision is really needed. But parallel to the surface, you don’t measure dramatic differences. Laterally, micrometers suffice. That insight suddenly made the problem two-dimensional instead of three-dimensional.”
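The insight that micrometers suffice laterally can be checked with a back-of-the-envelope calculation: on a smooth surface, a lateral position error enters the height measurement only multiplied by the local slope, while a perpendicular error counts one-to-one. The slope and error values below are assumptions for illustration.

```python
import math

# On a smooth, undulating optical surface, a lateral position error dx
# contributes a height error of roughly slope * dx, whereas an error
# perpendicular to the surface is a one-to-one measurement error.
# The numbers below are illustrative, not from a real instrument.

max_slope = math.tan(math.radians(1.0))  # assume at most 1 degree local slope
lateral_error_um = 1.0                   # 1 micrometer lateral uncertainty

height_error_nm = max_slope * lateral_error_um * 1000.0
print(f"height error from 1 um lateral error: {height_error_nm:.1f} nm")
```

With a gentle 1-degree slope, a full micrometer of lateral error costs only about 17 nm of height error, so the nanometer-level effort can be concentrated in the perpendicular direction.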

During the training, Henselmans regularly uses the optics measuring machine from his own company, Dutch United Instruments, as an example.

So always use common sense when thinking about accuracy. “It is okay to deviate from the rules, as long as you know what you are doing,” says Henselmans. The required knowledge comes with experience. “You learn a lot from good and bad examples.” That is why Henselmans uses many practical examples during the training ‘Metrology and calibration of mechatronic systems’ at High Tech Institute, including his own optics measuring machine and a pick-and-place machine. “We do a lot of exercises and calculations with hidden pitfalls so participants can learn from their own mistakes.”


Abbe

As for the metrology in your machine, you have to think carefully about where to place the sensors. “Think of a caliper,” says Henselmans. “The scale there is not in line with the actual measurement. So, if you press hard on the jaws, they tilt a bit and you get a different result. This effect occurs in almost all systems, even in the most advanced coordinate measuring equipment. Between the probe and the ruler in those machines, you’ll find all kinds of components and axes that can influence the measurement.”

Bringing awareness to these effects is what Henselmans calls one of the most important lessons of the training. “It comprises the complete measurement loop with all elements that contribute to the total error budget,” he explains. Generally speaking, you want to keep that loop small and bring the sensor as close to the actual measurement as possible. “Unfortunately, there is often a machine part or a product in the way which makes it difficult to comply with that Abbe principle. Also, you should realize that you are not alone in the world. The metrologist might indeed prefer short distances to achieve the highest accuracy according to the Abbe principle. The dynamics engineer, however, would prefer to measure in line with the center of gravity, otherwise all kinds of swings will disrupt his control loops. The metrologist will argue that these oscillations are interesting precisely because they influence system behavior. Together, they have to find the right balance.”
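The caliper effect Henselmans describes is the classic Abbe error: an offset between the scale and the actual measurement line turns any parasitic tilt into a length error of roughly the offset times the tilt angle. The offset and tilt below are made-up numbers for illustration.

```python
import math

# Abbe error sketch: if the measurement axis (the scale) is offset from
# the line of the actual measurement, a small parasitic tilt of the
# carriage produces a length error of roughly offset * tan(tilt).
# Numbers are illustrative, not from a real instrument.

abbe_offset_mm = 30.0   # scale sits 30 mm away from the measurement line
tilt_arcsec = 10.0      # small parasitic tilt of the slide
tilt_rad = math.radians(tilt_arcsec / 3600.0)

abbe_error_um = abbe_offset_mm * math.tan(tilt_rad) * 1000.0
print(f"Abbe error: {abbe_error_um:.2f} um")  # about 1.45 um
```

Halving the offset halves the error, which is the quantitative reason for keeping the measurement loop small and bringing the sensor as close to the actual measurement as possible.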

Making that decision is one of the discussion points in the course. One important aspect of this discussion is the need to have sufficient knowledge of the various sensors, and their advantages and disadvantages. During the training, interferometers, encoders and vision technology, among others, are therefore explained by specialists.


Reversed spirit level

Once you’ve got the metrology and reproducibility in your system in order, it’s time for calibration. “To correct for systematic errors,” Henselmans clarifies. The second half of the training is about how to do that. “The lesson to be learned is that you can’t add calibration to your system afterwards. You have to consider in advance how you are going to carry out the calibration and where you need which sensors and reference objects. If you wait until the end of your design process, you surely won’t be able to fit them in anymore.”

Before you have painted yourself into a corner, you must have a list of error sources, know which ones you need to calibrate and, especially, how you are going to do that. Henselmans: “During my time at TNO, we once made a proposal for an instrument to measure satellites. A system about a cubic meter in size. We could test that in our own vacuum chamber. We had already set up all kinds of test scenarios when one of the optical engineers pointed out that you had to do a certain measurement at a distance of about seven meters, since that was where the focal point lay. So we had to carry out the calibration in a special chamber at a specialized company in Germany, which cost thousands of euros per day. It’s nice that we found this out before we sent our offer to the client.”

There are certainly calibration tools and reference objects available on the market, but in Henselmans’ experience you get stuck pretty quickly. “Certainly for larger objects, the list of options dries up quickly,” he says. Designers then have to fall back on ingenious tricks like reversal. “A beautifully simple concept,” says Henselmans, and he explains: “Think of a spirit level. You can hold it against a door frame to determine how skewed it is. Then turn the spirit level over and see if the bubble is now exactly on the other side of the center. If not, the vial is apparently not properly aligned within the spirit level. You then have two measurements, so two equations with two unknowns, which means you can calibrate the offset of the spirit level and the tilt of the door at the same time. You can use that trick in more complicated situations, with more degrees of freedom and nanometer accuracy. That means you can get much further than with commercially available tools.”
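The spirit-level reversal can be written out as the two equations Henselmans mentions. The angles below are made up for illustration; in practice, the same algebra works at any accuracy level.

```python
# Reversal sketch (the spirit-level trick): one reading before and one
# after flipping the level gives two equations with two unknowns, so the
# door tilt and the level's own offset can be separated.
# Angles are illustrative, in milliradians.

door_tilt = 2.0      # true tilt of the door frame (unknown in practice)
level_offset = 0.5   # true misalignment of the vial (unknown in practice)

# Normal orientation: reading = door tilt + level offset.
m1 = door_tilt + level_offset
# Reversed orientation: the level's offset flips sign, the door's does not.
m2 = door_tilt - level_offset

# Solve the two equations for the two unknowns:
estimated_door = (m1 + m2) / 2
estimated_offset = (m1 - m2) / 2

print(estimated_door, estimated_offset)  # recovers 2.0 and 0.5
```

The sum of the two readings cancels the instrument error and the difference cancels the door tilt, so both are calibrated simultaneously, without any external reference.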

Even better is to incorporate this technique in your design so that the machine can calibrate itself. “Make it part of the process of your machine,” advises Henselmans. “Then the stability requirement of the system drops drastically, and the system design becomes much simpler.”


This article is written by Alexander Pil, tech editor of High-Tech Systems.

Recommendation by former participants

At the end of the training, participants are asked to fill out an evaluation form. To the question ‘Would you recommend this training to others?’, they responded with an 8.5 out of 10.

Decades of experience drives one-of-a-kind switched-mode power supply training

For more than 40 years, Frans Pansier has worked on designing and developing advanced power supplies, and on teaching and training others in the field. According to him, challenging the mindset of young engineers is how he draws his energy. His favorite part? Sharing his knowledge and information that people simply can’t get at university – or anywhere else.

Power supplies are probably not something you spend a lot of time thinking about when you purchase a new laptop or TV. Most people just plug them into the power source and never think about them again. In reality, though, power supplies are a crucial part of fueling just about every piece of electronic equipment you own. They do this by taking the full power of the alternating current (AC) input from the grid, known as mains, and converting it into the usable voltage that gives life to electronics.

“Essentially every piece of electronic equipment, with the exception of a very few, needs an AC adaptor, externally or internally, to make use of the energy from the mains,” explains Frans Pansier, former Philips and NXP power supply specialist and High Tech Institute instructor with more than four decades of experience in the domain. “Otherwise, the full 230 volts from the mains would fry the electronics and cause a lot of safety issues.”


Credit: Joyce Caboor

Development of modern power supplies really took off during the 1980s. Led by television technology companies, it was brands like Panasonic, Sony, Siemens and Philips, among a few others, that really made power supplies producible for industrial use. “Back then, every part, piece and component had to be developed in-house, because there were no manufacturers of suitable transformers, capacitors, and so on. There was really no market for that sort of thing at the time, so we had to do it all ourselves,” explains Pansier, who joined the Philips television division in 1986 to spend twenty years developing receivers, power supplies and other power electronics.

Outrageous

Conventional wisdom, perhaps guided by Moore’s Law, would suggest that as electronics continue to advance, newly developed technologies will become more efficient and less costly. However, when it comes to powering these modern technological marvels, wisdom is anything but conventional. In fact, according to Pansier, the information lining the textbooks at technical universities bears hardly any relation to reality, and much of what the industry is using today stems from developments out of the Philips consumer electronics division – some forty years ago.

'With power supplies, you get the best performance for the lowest price when you know exactly what you can do with each of the components, and just as importantly, the things you better not do'

With a master’s degree in electrotechnical materials from Delft University of Technology, Pansier was familiar with a full spectrum of electronics components, ranging from semiconductors to magnetics, capacitors and more. But it wasn’t until he got several years of professional experience at Philips that it all came together. “With power supplies, you get the best performance for the lowest price when you know exactly what you can do with each of the components, and just as importantly, the things you better not do,” jokes Pansier. “But let me tell you, there aren’t a whole lot of people in the world that simply have this kind of knowledge.”

In fact, when Pansier looks back at his time at Philips, it becomes even clearer just how strong their development work really was. “In hindsight, I see just how outrageous and cutting edge our work was,” suggests Pansier. “Most evident is that, both then and now, consumer electronics companies are light-years ahead of the TUs when it comes to this technology. It’s not a criticism of the TUs; it’s just that development in the area of power supplies can only come with years and years of experience, not a four-year PhD project. Even today, you’ll find that much of the material being taught at the TUs is the same as what I was learning and working with back in 1980.”


Credit: Joyce Caboor

One of a kind

After years of working on development of power supplies, including the tedious work of patent applications for new designs and technology, Pansier was asked to set up a course, together with other specialists. Realizing how uncommon his experience was, from both the electronic components and industry standpoints, he wanted to help spread his knowledge and really challenge the mindset of younger and less experienced engineers. So, he became a trainer in Philips CTT, teaching about the ins and outs of power electronics, which at the time also focused on the picture tube and how to generate high voltage and deflection.

Pansier: “That course was completely designed by us, and I wrote five or six different parts for the training. It was so unique because, during my work, I visited various factories manufacturing the components and spoke to the design engineers to get the complete story, from characteristics to the physical parts. This information got woven into the one-of-a-kind course.”

By the end of the 90s, though, Philips had abandoned its TV development and the CTT course as well. But compelled to continue sharing information, Pansier took the decades-worth of accumulated knowledge and continued spreading it at NXP, where he worked as a power supply architect. Simultaneously, he worked with TU Delft to help guide students just getting into power electronics, and ultimately back at ‘home,’ as an instructor for High Tech Institute – the legacy of Philips CTT.

In the six-day “Switched-mode power supplies” training, Pansier walks participants through his long tenure in power electronics, helps increase their knowledge and comfort level, and aids them in avoiding a number of the pitfalls that many engineers encounter. “We’ve put a lot of effort into cultivating a training that’s informative and thoroughly comprehensive,” describes Pansier.

“From the boundary conditions of both continuous and non-continuous modes in power electronics, to the basic topologies of power supplies, to the design, simulation and calculation methods needed to evaluate them, and on to reaching compliance standards for safety, reliability, EMI and efficiency – we really cover it all. That’s what makes this course stand out: it offers a unique view of the whole process and system, a view that has been built up over several decades. And the biggest draw for people to come is simple. You simply can’t find this accumulation of information and experience anywhere else.”

This article is written by Collin Arocho, tech editor of Bits&Chips.