This article appeared in Evaluation Engineering and has been published here with permission.
There has been a great deal of disruptive change within the embedded electronics design community, creating pressure on developers to produce the next generation of advanced digital ICs. The next generation of microcontrollers, ASICs, and FPGAs will all be systems-on-chip (SoCs) to one extent or another, with multiple cores and advanced functionality.
This means it is critical to ensure that data moves efficiently around the chip between blocks using a network-on-chip (NoC). Without a proper on-chip communications solution, a chip would need significantly more memory to operate as effectively without latency, which is not cost-effective. Every pair of blocks that needs high-speed, wide-bandwidth data flow between them should be located as close together as possible, but without a proper bus setup, processor performance will be compromised.
A complex network of interconnections is needed to route the data traffic between blocks, in addition to data from off-chip memory. This could mean over a dozen layers of horizontal interconnections, plus a number of vertical connections between those layers. All this must be dynamically controlled within the NoC, with buffering to smooth and optimize data flow as demand changes, such as when two IP blocks are using the same memory.
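As a rough illustration of that last point, the sketch below models a round-robin arbiter with per-master buffering in front of a single shared memory port. The master names and transactions are invented for illustration; real NoC arbitration and buffering schemes are far more sophisticated.

```python
from collections import deque

class SharedMemoryArbiter:
    """Toy round-robin arbiter: several IP blocks share one memory port.
    Requests are buffered per master; one grant is issued per cycle."""

    def __init__(self, masters):
        self.queues = {m: deque() for m in masters}
        self.order = list(masters)
        self.next_idx = 0  # where the round-robin search starts

    def request(self, master, transaction):
        # Buffering smooths bursts: requests wait here until granted.
        self.queues[master].append(transaction)

    def grant(self):
        # Round-robin: start from the master after the last winner,
        # so no single block can starve the others.
        n = len(self.order)
        for i in range(n):
            m = self.order[(self.next_idx + i) % n]
            if self.queues[m]:
                self.next_idx = (self.next_idx + i + 1) % n
                return m, self.queues[m].popleft()
        return None  # no pending requests this cycle

arb = SharedMemoryArbiter(["cpu", "gpu"])
arb.request("cpu", "read 0x1000")
arb.request("gpu", "read 0x2000")
arb.request("cpu", "read 0x1004")
grants = [arb.grant() for _ in range(3)]
# Grants alternate fairly: cpu, gpu, cpu
```

Even this toy version shows the essential trade-off: the buffer hides contention from the masters, at the cost of added latency when both compete for the same resource.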
We sat down recently with Anne-Françoise Brenton, Sondrel’s NoC expert. The company is known as a provider of high-quality IC designs across multiple end markets, offering a turnkey service from system to silicon supply.
Now, Anne, when we talk about the issues involved in digital chip design, big or small, it’s easy to say it’s all in the NoC, but what does that mean? How does that translate to the designer?
Okay. So in an SoC you have a processor or graphics engine, SPI, whatever interface you can think of, all on this massive SoC. You can see them as LEGO blocks. Unfortunately, they do not have the same footprint, and they don’t talk to each other very nicely. So you need something in between all these blocks that facilitates the transport of information from the CPU to the various IP blocks, to control those IPs, and from the IPs to memory to transport the data. The interconnect sits in the middle, takes the information from one of the blocks, and transmits it to another. It also takes care of changes of format, such as protocol and frequency conversion.
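One small piece of that "change of format" can be sketched concretely: a bus-width conversion, where a bridge splits a wide beat from one block into narrower beats for another. This is a toy model with made-up widths; real NoC bridges also convert protocols, handshakes, IDs, and clock domains.

```python
def downsize_beats(word, in_width=64, out_width=32):
    """Split one wide beat into narrower beats, lowest bits first --
    a toy version of the bus-width conversion a NoC bridge performs."""
    mask = (1 << out_width) - 1
    beats = []
    for _ in range(in_width // out_width):
        beats.append(word & mask)  # peel off the low bits
        word >>= out_width
    return beats

beats = downsize_beats(0x1122334455667788)
# One 64-bit beat becomes two 32-bit beats, low half first:
# [0x55667788, 0x11223344]
```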
So it’s inter-, or I should say intra-chip, within-the-chip communication. Granted, SoCs are expanding tremendously, but we’ve had SoCs around for quite a while?
Yes. In the past, when I started more than 20 years ago, you had very localized data transfer, so communication was really point-to-point, and you could do it quite easily with a set of wires and muxes. But as chips grow in complexity, you need to take performance requirements into account, as well as being able to lay out all of these gates on a floorplan in order to go to the fabrication process.
The layer in the middle, which could be seen as very simple, is just like a traffic light. But as soon as you get a lot of other IP blocks around it, things start to get a little messier. You need to go from one corner of the SoC floorplan to all sorts of places, and you need to take care of frequency constraints, timing constraints, and technology rules while keeping the performance needed for the application. So it’s something that is unique to each given SoC, and unique to a given technology, and it has to be done very carefully.
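To put a number on why simple "wires and muxes" stop scaling, here is a back-of-the-envelope count of the dedicated wires a full point-to-point crossbar would need. The block counts and bus widths are hypothetical; real interconnects share links, which is exactly what a NoC formalizes.

```python
def crossbar_wires(n_masters, n_slaves, bus_width):
    """Wire count if every master gets a dedicated bus to every slave.
    Grows with the product of the block counts -- workable for a few
    blocks, hopeless for a large SoC."""
    return n_masters * n_slaves * bus_width

small = crossbar_wires(4, 4, 64)      # early, localized design
large = crossbar_wires(32, 32, 128)   # a hypothetical many-block SoC
# small = 1,024 wires; large = 131,072 wires
```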
So then at that point, where is the challenge in the NoC?
You need to maintain the performance, so you need a lot of information about the IP blocks themselves: what traffic they will generate, and what kind of bandwidth needs to be sustained in the SoC. You need to be able to model this traffic and be sure that performance will be met for a given use case, with all of these flows running in parallel through the interconnect.
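At its simplest, that kind of use-case check can be sketched as summing the sustained bandwidth each IP demands and comparing it against a link's capacity. The IP names and numbers below are hypothetical; real traffic modeling also accounts for latency, burstiness, and arbitration effects.

```python
def check_use_case(flows, link_capacity_gbps):
    """High-level traffic model: sum the sustained bandwidth each IP
    needs for a use case and compare against one interconnect link."""
    total = sum(bw for _, bw in flows)
    return total <= link_capacity_gbps, total

# Hypothetical camera use case: three IPs sharing one DDR link
flows = [("isp", 6.4), ("display", 3.2), ("cpu", 2.0)]
ok, demand = check_use_case(flows, link_capacity_gbps=12.8)
# ok is True here: 11.6 GB/s of demand fits under 12.8 GB/s
```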
And then you go to the physical implementation. Here you find a new set of challenges, because you have distance to cover and you need to meet a frequency, so you have timing constraints, which are quite severe on a big SoC. Then you go back to your design, your NoC, in order to meet the layout constraints associated with the location of the IPs on the die.
So the challenge is that you need to have a NoC available first in order to start the integration, because all the IPs will be plugged onto it. It’s like a backbone board: you need all your IPs plugged onto this backbone. And you may be refining your NoC design up to the last minute in order to be sure that you will be able to place and route the full SoC.
Now, where does Sondrel put their value-add in this challenge to provide a solution for the engineer?
It’s along the full chain. You have an architect during the early days of a project, helping the customer define the product and understand the performance they want, so that it can be implemented. We need to do early simulation with a high-level model to understand whether it will make sense to have all these IP blocks, for example, sharing one memory.
We have the people who are able to do this early analysis. Then, when you have proven that it will work, you can go to a more precise verification, where you are using an RTL description for the interconnect but still using traffic modeling to be sure that your platform will work.
Final performance verification using the full SoC RTL needs the specific customer software, but this is almost never ready upfront. So you need to continue to use a modelling platform, but here we are using the real RTL for the SoC backbone. This performance-critical path is usually the NoC plus any performance-critical IP, like the DDR controller, but it could also include a low-level cache or any specific IP for which we need to verify performance requirements toward the memory.
So as soon as you have the RTL for the SoC, you have this performance verification element while you are still using a theoretical bandwidth description for the IPs. This tells you that the backbone is not introducing a performance bottleneck. Then you go to the implementation phase, and at each phase we can look back to be sure that the performance is maintained, which is key.
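The "look back at each phase" discipline amounts to a regression check: measured bandwidth per IP (from the current phase's model or RTL simulation) compared against the use-case requirement. A minimal sketch, with hypothetical IP names and numbers:

```python
def regression_check(requirements, measured):
    """Flag any IP whose measured bandwidth (GB/s) at the current
    design phase falls short of its use-case requirement -- i.e.
    a bottleneck introduced somewhere along the backbone."""
    return {ip: measured.get(ip, 0.0) >= need
            for ip, need in requirements.items()}

req = {"gpu": 8.0, "ddr": 12.8}      # from the architecture phase
meas = {"gpu": 8.5, "ddr": 12.0}     # from a later-phase simulation
status = regression_check(req, meas)
# In this hypothetical run, "ddr" misses its requirement and
# would be investigated before moving to the next phase.
```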
Okay. So then how does it tie together then? Where do you step in to aid the company? You know what I mean? Like, I’m an engineer and I’m putting together my SoC. I call you up, where do we go from there? How do you walk the engineer through your process to help them implement your solution in theirs?
So we need to help the customer understand the requirements, or what type of use case they have in mind. We need to translate this description into bandwidth and latency requirements. Then we configure the modeling environment and work very closely with the customer to review modeling results against their expectations.
Then, using the interconnect provider’s technology, you can quickly generate an RTL and start a more precise performance verification process. The team really tries to get all the information from the customer in a natural way and translate it into precise parameters and inputs to generate the interconnect. Once you have this, you have your specification that you give to the SoC team, which takes into account each technology parameter: the technology node, the size of the layout, the size of the blocks. And we try to mitigate issues, always with performance verification as the judge, to maintain the performance that you need.