The Evolution Of Digital Twins – SemiEngineering

Digital twins are starting to make inroads earlier in the chip design flow, allowing design teams to develop more effective models. But they also are adding new challenges in maintaining those models throughout a chip's lifecycle.

Until a couple of years ago, few people in the semiconductor industry had even heard the term digital twin. Then, suddenly, it was everywhere, causing confusion because it appeared to be nothing more than a new name for existing development models. Some degree of clarity finally is emerging, though not yet about how it will affect the development process. What may change is who the customers for those models are and the ways in which value can be extracted from them, and over time that may influence the models created.

To start, the term digital twin is very broad. "Digital twins allow mimicking of real things through simulation," says Aki Fujimura, CEO of D2S. "Flight simulators are digital twins. Driving simulators are digital twins, especially if you want to model different ways in which a car can crash, or a pilot needs to learn to control the plane under severe duress. It is a good idea to be dangerous only in a simulated context."

The term has been defined by a number of people. "A digital twin is a virtual representation of a physical system that is used to answer questions that cannot be answered, or are difficult to answer, by a physical system," says Marc Serughetti, senior director in the verification group at Synopsys.

But looking at it with too large a context can be problematic. "Digital twins are a great idea, but not a reality yet, at least not to the extent of seamlessly covering all abstraction layers of a complex system," says Rob van Blommestein, head of marketing for OneSpin Solutions. "One of the big issues is interoperability. Right now, there are a few big players that are trying to develop proprietary platforms for modeling a digital twin of a car, for example. In principle, that is great. It would allow manufacturers to have a complete virtual view of the car, including electronic, electrical, mechanical, and physical aspects. The problem is that monolithic platforms are weak by construction."

What has started to become clear are the types of digital twin that exist, and the roles they can perform. "There are three elements of digital twins," says Frank Schirrmeister, senior group director for solutions marketing at Cadence. "One: Digital twins for the development purpose. These are the ones that naturally fall out of the semiconductor flow. It includes things like a virtual platform, which is a digital twin for software development. Two: Digital twins related to the data. If you look at healthcare, they digital twin the hospital to optimize the process. They digital twin the person. The third element is data over time, and this is often connected to maintenance issues. So in the health domain, people look at your digital twin being your chart."

Schirrmeister uses a bathtub curve, as shown in figure 1, to illustrate how this fits into the semiconductor world.

"The digital twin for predictive maintenance has nothing to do with the digital twin used for development," continues Schirrmeister. "This is a model nightmare like you wouldn't believe. In order for this to be effective, you actually need to have models of your thing in real life, and you need to maintain these models over time. So there are three digital twins: for development, for predictive maintenance, and then the operational and production aspect, which is how to optimize your production line."

This ends the notion that models stop being useful at tape-out.

An emerging standard, Portable Stimulus, could become an important part of this. "If high-fidelity and high-accuracy models are used for the individual components of the system, such as a sensor or applications processor, digital twins can help reduce development time by enabling realistic scenario simulation and, crucially, validation," says Chet Babla, vice president for Arm's Automotive & IoT Line of Business. "This can lead to better products in a shorter period of time."

Others agree. "A digital twin can be better than a physical system in that it makes it possible to quickly change parameters, operating conditions, and functions, and overall make the product more configurable, instead of having to rebuild a physical system every time," says Syed Alam, global semiconductor lead at Accenture. "The semiconductor industry is moving toward more configurable systems, and digital twins are a great way to design, test and manufacture them faster and cheaper."

The boundaries between stages are blurring. "Suppose we tape out a design and target a new production facility," says Fram Akiki, vice president of electronics industry strategy at Siemens PLM Software. "The production people may come back and say that based on the design you are using a lot of high-performance transistors, you have added a couple of layers of metal, etc. They may conclude that they need to spend another $30 million or $40 million of capital to make sure they optimize the production line for this design (see figure 2). If they don't spend the money and it turns out to be a bottleneck, you are dead. If you do spend the money and later realize that you didn't need it, you also will be in trouble because of the unnecessary expenditure. This has become even more important today. You want to optimize your manufacturing line early on. It doesn't matter if you are operating under an IDM format or if you are a fabless company, because costs are costs and risks are risks, and someone has to account for them."

The digital twin is a means to an end. "The first question that really needs to be addressed is, 'What questions are you trying to answer?'" says Synopsys' Serughetti. "Why do I build the digital twin?"

Production twins

In semiconductor manufacturing, the concept of digital twins is finding several applications. "We do wafer plane analysis (WPA) from a mask SEM picture to project what that mask will expose on wafer resist by doing a lithography simulation on the extracted contour of a SEM picture," says D2S' Fujimura. "That is a digital twin of the wafer lithography machine."
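In outline, that flow extracts the feature contour from the SEM image and runs a lithography simulation on it to predict what prints on the wafer. The Python sketch below is a deliberately simplified illustration of the idea, assuming a toy optical model (a Gaussian blur plus a fixed resist threshold) and synthetic data; none of the names here reflect D2S's actual tooling.

```python
# Toy wafer-plane-analysis-style flow: binarize a (synthetic) SEM-like
# image to recover the mask geometry, then apply a stand-in lithography
# model to predict the printed resist pattern. The Gaussian blur and
# fixed resist threshold are placeholders for a real optical/resist model.
import numpy as np

def extract_contour(sem_image, level=0.5):
    """Binarize a grayscale SEM-like image to recover mask geometry."""
    return (sem_image > level).astype(float)

def litho_simulate(mask, sigma=2.0, resist_threshold=0.4):
    """Stand-in aerial-image model: optical low-pass (blur) + resist threshold."""
    size = int(6 * sigma) | 1                      # odd kernel width
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    # Separable 2-D blur via 1-D convolutions along rows, then columns.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, mask)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, blurred)
    return (blurred > resist_threshold).astype(float)

# Synthetic stand-in for a mask SEM picture: one rectangular feature plus noise.
rng = np.random.default_rng(0)
sem = np.zeros((64, 64))
sem[20:44, 24:40] = 1.0
sem += rng.normal(0.0, 0.1, sem.shape)

predicted_resist = litho_simulate(extract_contour(sem))
print("predicted printed area (pixels):", int(predicted_resist.sum()))
```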

This concept can be taken further than would be possible when restricted to real data only. "One might want to have a deep learning network that can learn how to classify different types of defects," says Fujimura. "But semiconductor fabs and mask shops are very good at avoiding defects, so it is difficult to collect the millions of specific example pictures of each type of defect needed to properly train the network. Digital twins that generate pictures that look like SEM images, or inspection images out of CAD data, enable a programmer to generate any number of any kind of defect image at will, without having to manufacture one just to take a picture of it."
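A minimal sketch of that data-generation idea, again with made-up geometry: start from a clean, CAD-derived pattern, stamp in a chosen defect type, and render an SEM-like image, yielding as many labeled training examples as needed. The defect shapes and noise model here are illustrative assumptions, not how D2S actually synthesizes images.

```python
# Synthesize labeled, SEM-like defect images from a clean layout so a
# classifier can be trained on defect types too rare to photograph.
import numpy as np

rng = np.random.default_rng(42)
DEFECT_TYPES = ["pinhole", "bridge", "particle"]

def clean_layout(size=64):
    """Stand-in for a CAD-derived pattern: two parallel vertical lines."""
    img = np.zeros((size, size))
    img[:, 18:26] = 1.0
    img[:, 38:46] = 1.0
    return img

def inject_defect(layout, kind):
    """Stamp an idealized defect of the requested type into the layout."""
    img = layout.copy()
    y = int(rng.integers(8, 56))
    x = int(rng.integers(8, 56))
    if kind == "pinhole":            # missing material inside a feature
        img[y - 2:y + 2, 20:24] = 0.0
    elif kind == "bridge":           # short between the two lines
        img[y - 1:y + 1, 26:38] = 1.0
    elif kind == "particle":         # extra material anywhere
        img[y - 3:y + 3, x - 3:x + 3] = 1.0
    return img

def sem_render(img):
    """Add noise and clip to mimic SEM image texture."""
    return np.clip(img + rng.normal(0.0, 0.15, img.shape), 0.0, 1.0)

# Generate an arbitrarily large labeled training set, 100 images per class.
dataset = [(sem_render(inject_defect(clean_layout(), kind)), kind)
           for kind in DEFECT_TYPES for _ in range(100)]
print(len(dataset), "labeled defect images generated")
```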

Even after devices have been deployed in the field, models remain useful. "Post-silicon is a different scope of digital twin, but that's one that may become more important as we move forward and as systems become more complex," says Serughetti. "In the semiconductor world we tend to think of hardware and software. But the reality is, they live in an environment. And if you are starting to think about digital twins in that space, you need to be able to model the hardware, the software, and the environment that has an influence on that hardware and software."

Knowing how your chips are being used is important. "Live data from physical chips, whether being manufactured, assembled and tested in silicon form, on a system or board, or in end deployment, are all being collected and integrated with the digital thread data," says Accenture's Alam. "This gives design and product engineers access to the next level of analysis, which helps them improve their designs, as well as provide data into the semiconductor ecosystem to resolve design and performance issues."

And being able to predict their performance when used in new situations adds value. "Digital twins enable the possibility to test multiple deployment scenarios, including challenging corner cases, with a highly automated and consistent approach, but without the limitation of needing access to physical systems," says Arm's Babla. "This ability to configure, test, analyze, adjust, and test again enables a powerful feedback mechanism to optimize system specifications and finalize or enhance designs."

But some of the necessary models remain elusive. "In order to cope with actual field data, real-time execution is often a requirement for digital twins," says Roland Jancke, head of the design methodology department at Fraunhofer IIS' Engineering of Adaptive Systems division. "The main advantage of a digital twin, as opposed to conventional system-level models, is its existence over the complete system lifecycle. It helps during the development and verification phase, then improves itself later by using field data, thereby keeping an eye on safe operation as well as expected responses."

Over time, those models will reinforce themselves. "Being able to mine data from the digital twin will provide design engineers the next level of confidence in the simulation models they are building," says Alam. "The ability to analyze different test results, as well as power consumption and performance parameters, means that there will be fewer re-spins of a product. The design engineers can produce better configurations and derivatives of devices faster and at a lower cost."

Digital twins shift left

Models within the development flow are changing. "We are seeing a shift left, and a lot of things that in the past were done in series, such as developing the hardware first and the software later once the hardware had matured, are now being done simultaneously," says Siemens' Akiki. "Doing this effectively depends on having effective digital models."

The development of virtual prototypes has been happening for some time. "In addition, accuracy can be traded against speed of modeling by deploying models of varying abstraction levels," says Babla. "For example, a simple programmer's view model of components may suffice for base functionality checking and initial software development, whereas a detailed physics-based image sensor model combined with a full processor RTL model will allow high-accuracy modeling of vehicle or industrial robot dynamics under real-life environmental conditions, such as humidity, temperature or even road surface conditions."
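The pattern Babla describes can be pictured as interchangeable model fidelities behind a single interface, so the same software runs against whichever level of detail a question requires. The sketch below, with purely hypothetical sensor models and numbers, is one way to express that trade-off; it is not a real virtual-prototyping API.

```python
# Two fidelities of the same sensor behind one interface: a fast
# programmer's-view model for software bring-up, and a slower "physics"
# model that folds in environmental effects. All constants are invented.
from abc import ABC, abstractmethod

class ImageSensorModel(ABC):
    @abstractmethod
    def read_frame(self, scene_lux: float) -> float:
        """Return a normalized pixel value for the given scene illuminance."""

class ProgrammersViewSensor(ImageSensorModel):
    """Fast functional model: enough for base software development."""
    def read_frame(self, scene_lux: float) -> float:
        return min(scene_lux / 1000.0, 1.0)        # ideal linear response

class PhysicsBasedSensor(ImageSensorModel):
    """Detailed model: adds temperature and humidity effects."""
    def __init__(self, temperature_c: float, humidity_pct: float):
        self.temperature_c = temperature_c
        self.humidity_pct = humidity_pct
    def read_frame(self, scene_lux: float) -> float:
        dark_current = 0.001 * max(self.temperature_c - 25.0, 0.0)  # thermal noise floor
        haze = 1.0 - 0.002 * self.humidity_pct                      # optical loss
        return min(scene_lux * haze / 1000.0 + dark_current, 1.0)

def software_under_test(sensor: ImageSensorModel) -> str:
    """The same application code runs unchanged against either fidelity."""
    return "bright" if sensor.read_frame(800.0) > 0.5 else "dark"

print(software_under_test(ProgrammersViewSensor()))          # early software dev
print(software_under_test(PhysicsBasedSensor(70.0, 90.0)))   # environmental corner case
```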

And those models are being connected into flows. "Virtual prototyping can start early, before RTL is available," says Serughetti. "Then you go into hybrid, for example, with the virtual prototyping and emulation. And then you can go to prototyping, which is FPGA-based and can also be hybrid. Now you may be talking more about system validation, where you connect to the rest of the system. So there is definitely a flow where all those technologies play a role along the development chain. I don't think they are exclusive of each other. And it's not as if one finishes and the next one starts."

Portable Stimulus can help keep those models in sync. "Not all tests will run on each level of accuracy," says Cadence's Schirrmeister. "Some tests may require you to have enough implementation detail to actually work, where others, which are more abstract elements, will work across all fidelities. Some may require the bare-metal drivers to be available. If you run the same test on a more abstract implementation, then you don't run the real drivers. But you still can run the test itself."
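Conceptually, that means capturing test intent once and retargeting it, with platform-dependent steps such as real driver calls exercised only where the fidelity supports them. The sketch below mimics the concept in plain Python; it is not the Accellera PSS language, and the platform and test names are invented for illustration.

```python
# One abstract test intent, retargeted to platforms of different fidelity.
# Platforms without bare-metal drivers skip the driver path but still
# run (and check) the test intent itself.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    has_bare_metal_drivers: bool

def dma_transfer_test(platform: Platform) -> None:
    # Abstract intent: move a buffer and check the result. Always runs.
    src, dst = list(range(16)), [0] * 16
    if platform.has_bare_metal_drivers:
        # On detailed platforms (RTL/emulation), exercise the real driver path.
        print(f"[{platform.name}] programming DMA registers via driver")
    else:
        # On abstract platforms, model the transfer functionally.
        print(f"[{platform.name}] functional copy, no driver")
    dst[:] = src
    assert dst == src, "DMA result mismatch"
    print(f"[{platform.name}] PASS")

for p in (Platform("virtual-prototype", False),
          Platform("emulation", True)):
    dma_transfer_test(p)
```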

Getting the right abstractions can be tricky. "Models used for a digital twin should run as fast as possible while being as accurate as necessary," says Fraunhofer's Jancke. "The abstraction needs to represent the interactions in the system that are in question, while neglecting details that seem unnecessary for the investigation at hand."

The uptake in virtual prototypes is leading to design gains. "Customers are being more proactive in the specification phase," says Serughetti. "They are benefiting from that because the value of moving to a virtual environment is providing more clarity between hardware and software. There has been a very strong benefit on that side, and the shift left has happened because now you can deploy those verification and validation technologies much more efficiently."

One problem is that digital twins tend to be proprietary. "What the industry needs is an open-source, or at least interoperable, platform where open-source and commercial tools, including EDA tools, can be plugged in seamlessly," says OneSpin's van Blommestein. "This is an obvious need for safety, where having independent tools checking each other is a requirement. But it's not only about safety. It is also important to make sure OEMs and their supply chain have access to the best, most competitive solutions for each specific design and verification task."

Another challenge is the cost of maintaining models over time. "Is the investment you have to put into that digital twin manageable for the return you're trying to get out of it?" asks Serughetti. "When you are at the beginning of the project and there's no physical system, it has a huge value because you can start earlier and you get more information. Once you are later in the project, the question becomes, 'What's my modeling effort for that digital twin? And what is the return for it?' If the effort is too high, then people are not going to maintain it. If you are talking about a project that has a very short timeframe, the modeling effort may be too high to get value in return."

Digital twins created within the semiconductor flow remain primarily for internal consumption. "Digital twins rely on the creation of a flow where these things happen naturally," says Schirrmeister. "They are a natural byproduct of developing something. We are not really there yet, even from RTL down. There are times when you need to take into account not only the functional and timing accuracy, but also thermal effects, electromagnetic effects, power effects and all that. We have not figured that out, and it's highly dependent on which question you want to answer."

The industry long has held the desire to have an executable specification from which implementations can be derived, and this would create the ultimate digital twin. "What if my digital twin were my specification?" asks Serughetti. "If I start modifying it, I start by modifying the specification, and from that I derive the implementation. Unfortunately, I haven't seen it as something that has been practical to realize so far."

Conclusion

The industry is getting more clarity on digital twins, both within the semiconductor development flow and during production and deployment. As the perceived value increases, additional time and effort will be put into creating the appropriate models and the means to keep them in sync. Ultimately, a digital twin has value if it provides a better way to get an answer. The better way could be because it's earlier, it's cheaper, or it carries less risk.

In this time of COVID, when more people are working remotely, digital twins may offer additional value because they can be accessible anywhere and at any time.
