Tackling the Challenges of Vehicle Computing with a Multi-Kernel OS


What you’ll learn:

  • The challenges of vehicle computing.
  • A look at different software platforms.
  • A new OS technology: multi-kernel.

Connected, autonomous, accessible, and sustainable mobility is an enticing vision that places intense pressure on automotive innovators. The systems on-board tomorrow’s vehicles must deliver high performance with real-time determinism, which can only partly be met by new many-core processing hardware. A fundamental change at the OS level, adopting a distributed microkernel architecture, is also needed.

Automotive Trends Driving Change

Disruptive trends in the automotive market, such as connectivity, autonomous and automated driving (AD), mobility as a service, and drivetrain electrification, impose demands for high-performance computing that must also be extremely energy-efficient and power-conscious. This high-performance computing must coexist with the large numbers of smaller processors that handle traditional body, chassis, powertrain, and other vehicle electrical systems. There can be 30 to 100—or more—of these ECUs, depending on the model and market positioning.

Change is coming to vehicle electrical/electronic infrastructures as more advanced features are introduced. Distributed ECUs are coalescing into larger domain controllers to offset the rapidly rising cost, weight, and complexity of the vehicle’s wiring. Increasingly centralized electrical/electronic (or E/E) architectures are aggregating and integrating ECUs and high-performance computing across multiple domains. In turn, this is driving a move to heterogeneous, many-core processors as the most high-performing, energy-efficient, and lowest-power solution for handling the diverse workloads.

On the other hand, connectivity and automated driving in particular raise demands for new safety standards as well as increased cybersecurity; preventing malicious agents from hacking autonomous vehicles is an issue of national security. Established safety standards like ISO 26262 arguably may not be enough for emerging use cases like autonomous driving. Newer standards such as SOTIF (Safety Of The Intended Functionality, ISO 21448) and UL 4600 are being developed to suit these applications.

To make things even more challenging, establishing public acceptance of what constitutes “safe autonomous driving” requires discussion of the ODD (operational design domain) and OEDR (object and event detection and response) of the ADS (automated driving system). However, this is difficult because both are OEM-specific and tightly coupled to future vehicle specifications that OEMs need to keep confidential. To meet these intensified safety challenges, OEMs and tier 1s need hardware and software architectures that can serve as the foundation for overcoming these hurdles.

At the same time, the emerging architectures must also be highly scalable to enable OEMs to create differentiated product ranges cost-effectively and deliver new models within tough time-to-market targets. Scalability is needed to let manufacturers accommodate performance variations between different vehicle specifications, utilize different hardware platforms of varying cost and complexity throughout their product ranges, implement different applications and features on different models, and manage a variety of system configurations as needed. Not to mention the possibility of deploying and enabling new functionality after physical delivery, over-the-air.

Scalability is also essential so that rapidly evolving heterogeneous many-core processors can be infused into embedded high-performance compute engines. Moreover, it allows OEMs to adopt the latest E/E architectures while new models are being developed.

Upgrading the Software Architecture

Effectively, OEMs are tasked with creating systems that provide a combination of high performance, energy efficiency, real-time determinism, cost-effectiveness, and safety that’s never been achieved before. And as if scaling heterogeneous compute engines weren’t difficult enough, satisfying all of these demands in an architecture that’s suitable for mass production further intensifies the challenge.

To handle these diverse challenges to the overall vehicle E/E architecture, it makes sense to consider not only the hardware, but also the software platform. This is particularly true of the operating-system architecture, which ties together all of the computing elements that are changing so drastically.

A variety of approaches and software platforms are currently used throughout the automotive industry. In addition to standards-based platforms like AUTOSAR, alternatives include Open Source Software (OSS) platforms, such as Linux-based platforms, as well as various proprietary platforms created and managed by OEMs, tier 1s, and independent vendors. All typically vary in terms of the software layers and functional domains they cover.

Figure 1 shows an automotive software platform, based on a service-oriented architecture (SOA), that is suited to handling domain controller-based E/E architectures as well as the more advanced central and zone-based E/E architectures.

1. Diverse software architecture still requires software standards and a scalable approach.

The SOA provides a flexible platform for delivering advanced AD functions such as lane recognition, object detection, and driver status monitoring as services accessed through a standard interface. Moreover, the SOA provides transparency in terms of mapping, whereby the location of the server providing the service is independent of its use. This is key in applying the distributed computing model. Also, the SOA ensures transparency of implementation by maintaining a consistent interface regardless of how the service is deployed within the server.
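As a sketch of this location and implementation transparency, the short Python fragment below uses an illustrative service interface. All names here (`LaneRecognitionService`, `detect_lanes`, the transport callable) are hypothetical, not the AUTOSAR ara::com API; the point is only that the client code is identical whether the server runs in-process or behind a proxy to another core or ECU.

```python
class LaneRecognitionService:
    """Abstract service interface seen by every client (illustrative name)."""
    def detect_lanes(self, frame_id: str) -> int:
        raise NotImplementedError

class LocalLaneService(LaneRecognitionService):
    """Server deployed in the caller's own process."""
    def detect_lanes(self, frame_id: str) -> int:
        return 2  # stubbed result: two lanes detected

class RemoteLaneProxy(LaneRecognitionService):
    """Proxy forwarding the same call over some transport (stubbed here)."""
    def __init__(self, transport):
        self._transport = transport  # e.g. IPC or a network protocol
    def detect_lanes(self, frame_id: str) -> int:
        return self._transport(frame_id)

def client_code(service: LaneRecognitionService) -> int:
    # The client is written once against the interface; the deployment of
    # the server (local vs. remote) is invisible at this point.
    return service.detect_lanes("frame-001")

print(client_code(LocalLaneService()))            # → 2 (local server)
print(client_code(RemoteLaneProxy(lambda f: 2)))  # → 2 (fake remote end)
```

The design choice mirrors the text: clients depend only on the interface, so a service can be redeployed to a different process, core, or ECU without touching client code.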

The architecture shown incorporates the AUTOSAR Adaptive Platform (AUTOSAR AP), which standardizes foundation-layer software and allows for “planned dynamics,” a concept that permits adaptability without compromising the handling of safety-critical processes.

Planned dynamics is achieved through several measures, such as making sure that all processes are registered during system integration and restricting privileges for starting processes. Also, communication between application processes and external entities is managed according to strict policies established during system integration.
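These measures can be pictured as an integration-time manifest that the platform consults at run time. The following is a hedged sketch only; the manifest format and every name in it are hypothetical and not taken from the AUTOSAR specification.

```python
# Integration-time manifest: which processes exist, whether they may be
# started, and which peers they may communicate with. Fixed at system
# integration, consulted at run time (illustrative format).
MANIFEST = {
    "lane_recognition": {"may_start": True,  "peers": {"object_detection"}},
    "object_detection": {"may_start": True,  "peers": {"lane_recognition"}},
    "debug_shell":      {"may_start": False, "peers": set()},  # locked down
}

def start_process(name: str) -> bool:
    # Unregistered or unprivileged processes are refused at startup.
    entry = MANIFEST.get(name)
    return bool(entry and entry["may_start"])

def allow_message(src: str, dst: str) -> bool:
    # Communication follows the policy fixed during system integration.
    entry = MANIFEST.get(src)
    return bool(entry and dst in entry["peers"])

print(start_process("lane_recognition"))                  # → True
print(start_process("debug_shell"))                       # → False
print(allow_message("lane_recognition", "debug_shell"))   # → False
```

Because the set of processes and their allowed interactions are closed at integration time, the system can adapt within that envelope without the unbounded dynamism that would threaten safety-critical processes.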

One of the central concepts in functional safety is freedom-from-interference (FFI). The described transparency of the SOA provides a good foundation for FFI, as the different functionalities of a platform are separated into mostly independent services used by (again, mostly) independent applications.

While that provides a good logical architecture, the actual guarantee of FFI must come from a physical mechanism such as the processor’s memory management unit (MMU). The OS virtualizes this mechanism in the form of OS processes, which are the physical instances of the services and applications.

FFI is essential for safety, and as you can see in Figure 1, many components run as separate processes. These processes need to interact frequently; for example, if an application process needs to use a service that runs as another process, the two must communicate. IPC (inter-process communication) is the OS feature that enables all of this interaction. Because these two processes, originally intended to be protected from each other, must now cross a protection boundary to talk, IPC can be far more costly in performance terms than intra-process communication. This can grow into a significant system performance issue once all of the software is integrated.

Returning for a moment to consider the adoption of many-core processors, fast communication between these cores is becoming more of an imperative, too. From a hardware perspective, high-speed network-on-chip (NoC) infrastructure has evolved to support high-speed, low-latency, inter-core communication.

Given these changing aspects of high-performance computing in the vehicle, bringing demands for unimpeded IPC in software and large numbers of intercommunicating processor cores, traditional single-kernel OSes will increasingly fall short in their ability to service all parts of the system adequately to maintain performance.

A multi-kernel OS architecture, also referred to as distributed microkernel OS (seen in the lower layers of Figure 1), is inherently suited to servicing the large numbers of interlinked cores and processes, hence meeting the future needs of automotive OEMs. eSOL has developed such a multi-kernel OS, dubbed eMCOS, to provide the required performance and scalability as vehicle architectures continue to become more advanced.

In addition to providing the scalability to handle either small or large sets of functions, this distributed microkernel OS also helps deliver fast, deterministic response for real-time control applications in domains such as powertrain. The OS can scale in multiple ways: applications can be connected across the microkernels, and users can customize the adaptation layer to suit their intended purpose.

Distributed Microkernel OS

The distributed microkernel OS is unlike typical microkernel OSes. Because it has no need for cross-core kernel locks to guard against performance-sapping concurrent accesses, the architecture preserves parallelism. In eMCOS, a layered scheduling mechanism also enables hard real-time determinism and permits high-throughput computing combined with load balancing. Standard support is available for multi-process POSIX and AUTOSAR programming interfaces, and there are special-purpose APIs for functions such as distributed shared memory (DSM), fast messaging, NUMA memory management, thread pools, and others.
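The core idea (one independent scheduler per core, coordinated only by message passing) can be modeled in a few lines. This is a toy sketch of the multi-kernel architecture in general, not of eMCOS internals; all names are illustrative.

```python
from collections import deque

class CoreKernel:
    """One kernel instance per core: private run queue, private mailbox."""
    def __init__(self, core_id: int):
        self.core_id = core_id
        self.run_queue = deque()  # private to this core: no cross-core lock
        self.mailbox = deque()    # filled only via messages from other cores

    def spawn(self, task) -> None:
        self.run_queue.append(task)

    def send(self, other: "CoreKernel", msg) -> None:
        # Inter-kernel message passing replaces shared kernel state.
        other.mailbox.append((self.core_id, msg))

    def run_once(self):
        # Each kernel schedules independently and drains its own mailbox.
        results = [task() for task in self.run_queue]
        self.run_queue.clear()
        inbox = list(self.mailbox)
        self.mailbox.clear()
        return results, inbox

k0, k1 = CoreKernel(0), CoreKernel(1)
k0.spawn(lambda: "ctrl-loop")
k0.send(k1, "sensor-frame")
print(k0.run_once())  # → (['ctrl-loop'], [])
print(k1.run_once())  # → ([], [(0, 'sensor-frame')])
```

Because each kernel owns its run queue outright, no scheduling decision ever contends with another core, which is the property that lets the real architecture preserve parallelism as core counts grow.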

Figure 2 shows an example of how the distributed microkernel architecture supports a heterogeneous set of GPGPU, FPGA, many-core, and even MCU devices.

2. A heterogeneous compute environment can be implemented using a multi-kernel.

In addition to the many variants of Arm-based multi-core processors from dual cores to octa-cores and multiple MCU architectures, eMCOS is also being deployed now in advanced many-core processors. One example is the latest Coolidge MPPA (Massively Parallel Processor Array) chips from Kalray, which are aimed at emerging automotive applications as well as fast-growing sectors of edge computing and AI, such as modern data centers, networks (5G), aerospace, healthcare equipment, Industry 4.0, drones, and robots.

The OS lets AUTOSAR CP and AUTOSAR AP run on the same chip (Fig. 3), enabling the devices to combine high speed and real-time determinism with low power consumption. The I/O clusters run eMCOS and AUTOSAR AP, while the computing clusters run multiple instances of AUTOSAR CP with eMCOS AUTOSAR. The AP and CP instances are connected via eMCOS inter-cluster message passing.

3. The AUTOSAR Adaptive Platform and Classic Platform can be hosted on a single chip with eMCOS.

Conclusion

Connectivity, autonomous driving, mobility services, and electric vehicles are important automotive trends that raise expectations for in-vehicle high-performance computing and enhanced E/E architectures. Processing hardware is moving toward heterogeneous many-core compute, initially across multiple chips and ultimately through single-chip integration as chiplets, to keep pace with performance demands. At the same time, new and significant safety challenges are emerging, and OEMs are under pressure to achieve acceptable time-to-market for new features and models.

The software platform is a critical element in solving these challenges. Scalability is imperative, and software based on an SOA is the proven approach. A multi-kernel, or distributed microkernel, OS technology is well-suited to managing emerging hardware architectures while supporting the SOA with inter-kernel message passing.


