A new webinar on how virtualization can help you add new technology to existing designs.

First things first: should you say “hypervisor” or “virtual machine monitor”? Both terms refer to the same thing, but is one preferable to the other?
Hypervisor certainly has the greater sex appeal, suggesting it was coined by a marketing department that saw no hope in promoting a term as coldly technical as virtual machine monitor. But, in fact, hypervisor has a long and established history, dating back almost 50 years. Moreover, it was coined not by a marketing department, but by a software developer.
“Hypervisor” is simply a variant of “supervisor,” a traditional name for the software that controls task scheduling and other fundamental operations in a computer system — software that, in most systems, is now called the OS kernel. Because a hypervisor manages the execution of multiple OSs, it is, in effect, a supervisor of supervisors. Hence hypervisor.
No matter what you call it, a hypervisor creates multiple virtual machines, each hosting a separate guest OS, and allows the OSs to share a system’s hardware resources, including CPU, memory, and I/O. As a result, system designers can consolidate previously discrete systems onto a single system-on-chip (SoC) and thereby reduce the size, weight, and power consumption of their designs — a trinity of benefits known as SWaP.
That said, not all hypervisors are created equal. There are, for example, Type 1 “bare metal” hypervisors, which run directly on the host hardware, and Type 2 hypervisors, which run on top of an OS. Both types have their benefits, but Type 1 offers the better choice for any embedded system that requires fast, predictable response times — most safety-critical systems arguably fall within this category.
Moreover, some hypervisors make it easier for the guest OSs to share hardware resources. The QNX Hypervisor, for example, employs several technologies to simplify the sharing of display controllers, network connections, file systems, and I/O devices like the I2C serial bus. Developers can, as a result, avoid writing custom shared-device drivers that increase testing and certification costs and that typically exhibit lower performance than field-hardened, vendor-supplied drivers.
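To make that benefit concrete, here is a minimal sketch (not taken from the webinar) of what guest-side code might look like when a hypervisor exposes a shared I2C bus to a Linux guest as a standard virtual device. The bus number and slave address are illustrative assumptions; the point is that the application talks to the guest’s stock i2c-dev driver exactly as it would on dedicated hardware, with no custom shared-device driver in sight.

```c
/*
 * Sketch: an application in a Linux guest reading one byte from an I2C
 * sensor through the standard i2c-dev interface. The bus number (1) and
 * slave address (0x48) are illustrative. If the hypervisor presents the
 * shared bus as an ordinary virtual I2C device, this code is unchanged
 * from what it would be on native hardware.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

int main(void)
{
    /* Open the I2C bus exposed to the guest by its regular kernel driver. */
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) {
        perror("open /dev/i2c-1");
        return 1;
    }

    /* Select the target device on the bus (hypothetical sensor at 0x48). */
    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {
        perror("ioctl I2C_SLAVE");
        close(fd);
        return 1;
    }

    /* Read a single byte from the selected device. */
    unsigned char value;
    if (read(fd, &value, 1) != 1) {
        perror("read");
        close(fd);
        return 1;
    }

    printf("sensor value: 0x%02x\n", value);
    close(fd);
    return 0;
}
```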
Adding features without blowing the certification budget

Hypervisors, and the virtualization they provide, offer another benefit: the ability to keep OSs cleanly isolated from each other, even though they share the same hardware. This benefit is attractive to anyone trying to build a safety-critical system and reduce SWaP. Better yet, virtualization can help device makers add new and differentiating features, such as rich user interfaces, without compromising safety-critical components.
That said, hardware and peripheral device interfaces are evolving continuously. How can you maintain compliance with safety-related standards like ISO 26262 and still take advantage of new hardware features and functionality?
Enter a new webinar hosted by my inimitable colleague Chris Ault. Chris will examine techniques that enable you to add new features to existing devices, while maintaining close control of the safety certification scope and budget. Here are some of the topics he’ll address:
- Overview of virtualization options and their pros and cons
- Comparison of how adaptive time partitioning and virtualization help achieve separation of safety-critical systems
- Maintaining realtime performance of industrial automation protocols without directly affecting safety certification efforts
- Using Android applications for user interfaces and connectivity
Webinar coordinates:
Exploring Virtualization Options for Adding New Technology to Safety-Critical Devices
Time: Thursday, March 5, 12:00 pm EST
Duration: 1 hour
Registration: Visit TechOnLine