Wednesday, May 20, 2015

Reimagining digital instrument cluster design

Guest post by Jason Clarke, vice president, sales and marketing, Crank Software

Technology in cars has been advancing at an impressive rate. From rich infotainment systems to intelligent digital instrument clusters, today’s automobile has become a reality that many of us could only imagine a few years ago. But while the technology has changed, the driver has stayed the same. Drivers still need to get from point A to point B as efficiently and safely as possible, while perhaps listening to some favorite road trip tunes on the journey.

What has changed for drivers is the sheer volume of information that is available while behind the wheel. Today’s vehicle can tell you more than the fact that you are desperately in need of finding the nearest gas station. It’s smart enough to let you know when you are getting close to hitting the neighbor’s garbage can… again. It can alert you to traffic pattern changes, road hazards, inclement weather, your affinity for your lead foot, and the fact that your spouse is texting you to remind you to pick up the dry cleaning. It can also effortlessly re-route you back to the dry cleaners after you realize you’ve forgotten, providing you with helpful turn-by-turn navigation in your instrument cluster.

That’s a lot of information. And it’s only a small slice of what’s available to today’s driver. The simplicity, reliability, and safety capabilities of platforms by QNX Software Systems make it possible to have a wide range of technologies and features in a single vehicle, offering up an abundance of data for driver consumption.

So, how do we make this data useful for drivers? What do we need to consider when designing the UI for digital instrument clusters?

How much information does the driver REALLY need?
Information should help with, not intrude on or distract from, the task at hand — driving. The point of having more data available to drivers isn’t to show it all at the same time. That’s visually noisy and complex. Complex isn’t better; context is better. Turn-by-turn information can be displayed in the instrument cluster, based on communication from the navigation system. Video of the car’s surroundings can be displayed when parking assist services are engaged. Advanced Driver Assistance Systems (ADAS) can present alerts in the cluster for immediate hazards and objects.

Using tools that support rapid prototyping of design scenarios empowers teams to deliver the best user experience possible, serving up only the most relevant information. Using Storyboard Suite from Crank Software, teams can quickly cycle through design prototypes and perform testing on real hardware, focusing on the needs of the driver.

How do we best visualize the data?
It’s critical that drivers see and interpret displayed information as easily and quickly as possible. Clear visual representation of data is required, so it’s important to keep design considerations at the forefront in the development process. This is where the graphic designer comes in.

Crank Software’s Storyboard Suite allows the graphic designer to be integrated into the development process from concept to final HMI delivery, working in parallel with the engineers to ensure that fine details and subtle design nuances aren’t lost. With Storyboard Suite, designers don’t hand over a mockup to a developer to visually represent with code and then walk away. As the graphics change and evolve to satisfy usability requirements, the designer stays engaged throughout the entire process, helping to deliver a polished HMI.

Automotive cluster designed and developed with Crank Software Storyboard Suite, running on QNX Neutrino OS

Can we respond quickly to design change?
Remaining focused on the usability of the end design is critical to ensuring the safest driving experience. Delivering a high-performance, user-centric HMI requires testing, design refinements, retesting, and even further changes. This isn’t a linear process. While an iterative process is important, it’s often cost-prohibitive because it can introduce lengthy redesign cycles. Storyboard Suite gives teams the functionality to prototype and iterate through designs easily, using features such as Photoshop Re-import to quickly evaluate design changes on hardware and shorten development cycles. In addition, support for collaboration enables teams to share design and development work, thereby reducing the load on individuals and further optimizing time and resources.

A faster development process coupled with a user-focused end design is the key to delivering a highly usable and safe digital instrument cluster to market on schedule and within budget.

A digital instrument cluster developed with Storyboard Suite will be on display at TU-Automotive Detroit in the QNX Software Systems booth, #C92, and the Crank Software booth, #C113. Check out a previous Crank Software and QNX Software Systems collaboration with a Storyboard Suite UI in a QNX technology concept car.


Jason Clarke has over 15 years of experience in the embedded industry, in roles that span development, sales, and marketing. Jason heads up Crank Software’s marketing and sales initiatives.

Visit Crank Software here.


Tuesday, May 12, 2015

Top 5 challenges of digital instrument clusters

Guest post by Olli Laiho, director, product marketing, Rightware

Digitalization of the modern car is progressing at breakneck speed, with research showing that over 70% of cars will ship with a digital display in the cluster by 2017 (Automotive User Interfaces 2014, IHS Automotive, 2014). While digital user interfaces have long been available in the center stack of the vehicle, they are now quickly making their way into the heart of the car’s dashboard — the instrument cluster. However, the migration from traditional, physical instrumentation to the digital Human Machine Interface (HMI) is posing various challenges for auto manufacturers. Here are the top five challenges Rightware is seeing today.

1. Deliver a winning user experience
With the digital cluster, auto manufacturers must deliver a user experience that makes consumers insist on having a digital cluster and makes them think they could never live without one. The car companies need to increase their investment in digital user experience design in order to provide consumers with a digital driving experience they’ll love.

User experience is all about... the user! With the help of target group research, auto manufacturers need to find the key use cases and features for different buyer profiles. While more senior buyers appreciate a digital design featuring traditional big gauges and needles combined with maps in the middle, millennials long for a cluster that connects them with their personal data at the right time, while having a modern look and feel with a real wow effect.

QNX Software Systems' technology concept car 2014 based on the Mercedes CLA 45, featuring a cluster created with Rightware Kanzi®

2. Find the right design-cost-performance combination
In creating HMIs such as digital clusters, finding the right balance among design, cost, and performance becomes essential. It’s all about:

Design — Delivering a stunning user experience
Cost — Minimizing software development, hardware, and maintenance costs
Performance — Choosing the right OS, System-on-a-Chip (SoC), etc.

Automotive user interface designers need to learn to design with the capabilities of the cluster’s hardware and software platform in mind. They need to create user experiences that strengthen the auto manufacturer’s brand image while still being possible to implement with the chosen tool chain and hardware and software platforms.

Choosing the SoC that can deliver the best user experience at the best price is essential. While proper automotive SoC benchmarking tools are not yet available in the market, auto manufacturers need to invest in their own measurements and trials for finding the right cost/performance level of the SoC for their project.

QNX Software Systems' technology concept car 2015 based on the Maserati Quattroporte, showing system diagnostics in the cluster created with Rightware Kanzi

3. Reduce development time
Consumers have become accustomed to having access to the latest technology and innovations on their mobile devices. That expectation has now extended to HMIs in the car.

To meet consumer expectations, the automotive industry must shorten the development time of new vehicles and determine how to provide compelling software upgrades during the car’s lifecycle. Digital clusters need to be designed for upgradeability from the ground up. Through upgrades, the cluster should provide the necessary access to new app platforms and innovations. Streamlining the software development process and choosing the right tool chain for HMI development are key to creating HMIs faster and with more valuable features.

4. Accelerate update cycles
Consumers use their mobile devices daily and have learned to expect a constant update cycle that brings new features and enhancements to their device. This “update drug” has created a trend where customers are always waiting for the next update to their beloved devices — and always looking for more.

Until now, there have been few tangible software upgrades for a car during its lifetime. For example, when you pick up your car from service, you’ll often see a line on the bill that says “software updates,” yet when you leave the garage you can discern no difference in how the car behaves.

Auto manufacturers need a plan for providing consumers with constant software upgrades that deliver value during the entire lifecycle of their vehicle. Upgrading the digital cluster doesn’t have to mean making it look like next year’s model, but the upgrade should give consumers either features that add value or a clear visual difference that they recognize as an upgrade. Increasing the upgradeability of HMIs in the car will be a major opportunity for improving customer retention.

5. Establish design ownership
As automotive devices evolve into the digital age, they will also transform the way auto manufacturers create designs for their customers. Unlike on a mobile device, HMI design will be specific not only to the manufacturer’s brand, but also to each vehicle model. Digital screens will give automotive UI designers the flexibility to create unique designs, and they will need full control of the UI framework to be able to deliver these stunning user experiences.

Consumers are increasingly connected 24/7 to ecosystems from companies such as Google and Apple. Due to the increase in consumer demand, these technologies are also making their way into the car cockpit in various forms — from simple content integration (SMS, mail, media) to sandboxed but comprehensive solutions like Apple CarPlay and Android Auto.

Automotive companies must invest in creating branded digital user experiences that can rival and exceed any third-party designs in the vehicle. They should invest in a UI solution and operating system that can deliver the design as intended.

Audi Q7 Virtual Cockpit, running on QNX Neutrino OS, featuring a cluster created with Rightware Kanzi



Visit Rightware at TU-Automotive Detroit (booth #C115) to witness next-generation HMI demos built with Kanzi and to get a first look at a brand new Kanzi product. You’ll also find Rightware’s technology in the QNX booth (#C92).



Olli Laiho has been working in software development for over 15 years. An avid car enthusiast, Olli heads Rightware’s global marketing activities.

The Rightware Kanzi UI Solution and the QNX Neutrino OS can already be found together in several vehicles, including the Audi TT, Audi Q7, and the Audi R8. Rightware has created several digital clusters for QNX technology concept cars, including the 2014 Mercedes CLA 45 and the 2015 Maserati Quattroporte.

Visit Rightware here.


Thursday, May 7, 2015

Getting in sync with brought-in devices

Building a head unit that needs to sync with smartphones, media players, memory cards, and USB sticks? With the QNX CAR Platform, you won’t be left to your own devices.

Paul Leroux
In previous posts, I discussed how the QNX CAR Platform for Infotainment is adept at juggling multiple concurrent tasks. For instance, it can perform 3D navigation, process voice signals, provide active noise control, display vehicle data, manage audio, run multiple application environments, and still deliver a fast, responsive user experience. If that’s not enough, it can also detect and play content from an array of media devices, including local drives, SD cards, and iPods, as well as Bluetooth, DLNA, and MTP devices.

When they plug a media device into a car’s head unit, most users expect immediate access to the device content; they also want to browse the content by metadata, such as genre, title, or artist. To present this content, the head unit must perform metadata synching. The question is, how can the head unit make the content instantly available, even when the media device contains thousands of files that may take many seconds or even minutes to fully synchronize?

To complicate matters, users often want to switch from one media source to another. For instance, a user listening to music stored on a DLNA device may ask the head unit to switch to an Internet radio station. From the user’s perspective, the switch should be fast, simple, and intuitive.

Handling device attachments (and detachments) gracefully.
The head unit must also cope with the vagaries of user behavior. For instance, if the user yanks out a USB media stick during synching or playback, the system should recover gracefully; it should also provide appropriate feedback, such as displaying a menu that asks the user to choose another media source. Likewise, if the user yanks out the media device and re-inserts it, the system shouldn’t get confused. Rather, it should simply resume synching content where it left off.

Handling scenarios like these is the job of the QNX CAR Platform’s multimedia architecture.

Architecture at a glance
The multimedia architecture integrates several software components to automatically detect media devices, synchronize metadata with media databases, browse the contents of devices, and, of course, play audio and video files. Together, these components form three layers:

  • Human machine interface, or HMI
  • Multimedia components
  • OS services



Let’s look at each of these layers in turn, starting with the HMI.

At the top of the HMI layer, you’ll see the Media Player, a reference application that allows end-users to control media browsing and playback. Developers can customize this player or write their own player apps, using APIs provided by the QNX CAR Platform.

The Media Player comes in two flavors, HTML5 and Qt 5. To communicate with the architecture’s multimedia engine (mm-player), the HTML5 version uses the car.mediaplayer JavaScript API while the Qt version uses the QPlayer library. In addition to these interfaces, custom apps can use the multimedia engine’s C API. All three interfaces — car.mediaplayer, QPlayer, and C API — provide an abstraction layer that allows a media player app to:

  • retrieve a list of accessible media sources: local drives, USB storage devices, iPods, etc.
  • retrieve track metadata: artist name, album name, track title, etc.
  • start and stop playback
  • jump to a specific track
  • handle updates in playback state, media sources, and track position

The interfaces that provide access to these operations aren’t specific to any device type, so player apps can work with a wide variety of media hardware.
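
To make the abstraction concrete, here is a minimal sketch, in C, of the kind of operations a player app performs against such an interface. The mp_* names and types are hypothetical, invented purely for illustration; they are not the actual car.mediaplayer, QPlayer, or mm-player C interfaces, but the flow (enumerate sources, read metadata, start playback) mirrors the operations listed above.

/* Hypothetical sketch: the mp_* names and types below are invented for
 * illustration and are NOT the actual car.mediaplayer, QPlayer, or
 * mm-player C interfaces. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    int         id;      /* handle used in browse and playback requests */
    const char *name;    /* e.g. "USB stick", "iPod", "SD card"         */
} media_source_t;

typedef struct {
    const char *artist, *album, *title;
} track_metadata_t;

/* Stubs that return canned data, standing in for calls into the
 * multimedia engine. */
static size_t mp_get_media_sources(media_source_t *out, size_t max)
{
    if (max == 0) return 0;
    out[0] = (media_source_t){ .id = 1, .name = "USB stick" };
    return 1;
}

static int mp_get_track_metadata(int source_id, int track, track_metadata_t *md)
{
    (void)source_id; (void)track;
    *md = (track_metadata_t){ "Example Artist", "Example Album", "Example Title" };
    return 0;
}

static int mp_play(int source_id, int track)
{
    printf("playing track %d on source %d\n", track, source_id);
    return 0;
}

int main(void)
{
    media_source_t sources[8];
    size_t n = mp_get_media_sources(sources, 8);      /* enumerate sources */

    for (size_t i = 0; i < n; i++) {
        track_metadata_t md;
        if (mp_get_track_metadata(sources[i].id, 0, &md) == 0)
            printf("%s: %s - %s (%s)\n",
                   sources[i].name, md.artist, md.title, md.album);
    }

    if (n > 0)
        mp_play(sources[0].id, 0);                     /* start first track */
    return 0;
}

A real player app would also subscribe to updates in playback state, media sources, and track position rather than polling, as noted in the list above.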

The media player can quickly access and display a variety of metadata (artist name, album name, track title, etc.) stored in a small-footprint SQL database.



Multimedia components layer
If you look at the top of the multimedia components layer, you’ll see a box labeled mm-player; this is the architecture’s media browsing and playback engine. The mm-player does the dirty work of retrieving metadata, starting playback, jumping to a specific track, etc., which makes custom player apps easier to design. It also supports a large variety of media sources, including:

  • local drives
  • USB storage devices
  • Apple iPod devices
  • DLNA devices, including phones and media players
  • MTP devices, including PDAs and media players
  • devices paired through Bluetooth

To perform media operations requested by a client media player, mm-player works in concert with several lower-level components that help navigate media-store file systems, read metadata from media files, and manage media flows during playback. The components include a series of plugins (POSIX, AVRCP, DLNA, etc.) that interface with different device types. For instance, let’s say you insert an SD card. The POSIX plugin supports this type of device, so it will learn of the insertion and inform mm-player of the newly connected media source; it will also support any subsequent media operations on the SD card.
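
The plugin approach itself is a familiar pattern: the engine keeps a table of plugins, asks each one whether it can handle a newly reported device type, and routes subsequent operations to the plugin that claims the device. The sketch below illustrates that pattern with hypothetical names and paths; it is not the actual mm-player plugin interface.

/* Illustrative plugin-dispatch sketch; the names, paths, and interface
 * itself are hypothetical, not the actual mm-player plugin API. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;                             /* e.g. "posix", "dlna" */
    int (*can_handle)(const char *device_type);   /* claim a device type  */
    int (*browse)(const char *mountpoint);        /* list folders/tracks  */
    int (*play)(const char *track_url);           /* start playback       */
} media_plugin_t;

/* A trivial "posix" plugin for block devices such as SD cards and USB sticks. */
static int posix_can_handle(const char *type)
{
    return strcmp(type, "sdcard") == 0 || strcmp(type, "usb") == 0;
}
static int posix_browse(const char *mnt) { printf("browsing %s\n", mnt); return 0; }
static int posix_play(const char *url)   { printf("playing %s\n", url);  return 0; }

static const media_plugin_t plugins[] = {
    { "posix", posix_can_handle, posix_browse, posix_play },
    /* additional plugins (iPod, AVRCP, DLNA, MTP, ...) would follow */
};

/* Called when a lower layer reports a newly attached device. */
static const media_plugin_t *find_plugin(const char *device_type)
{
    for (size_t i = 0; i < sizeof(plugins) / sizeof(plugins[0]); i++)
        if (plugins[i].can_handle(device_type))
            return &plugins[i];
    return NULL;
}

int main(void)
{
    const media_plugin_t *p = find_plugin("sdcard");    /* SD card inserted */
    if (p != NULL) {
        p->browse("/fs/sdcard");                         /* example mountpoint */
        p->play("file:///fs/sdcard/music/track01.mp3");  /* example track URL  */
    }
    return 0;
}

Because the engine only ever talks to the plugin table, supporting a new device type means adding a plugin rather than touching the player apps.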

If you look again at the diagram, you’ll see several other components that provide services to mm-player. These include:

  • mm-detect — discovers media devices and initiates synchronization of metadata
  • mm-sync — synchronizes metadata from tracks and playlists on media devices into small-footprint SQL databases called QDB databases
  • mm-renderer — plays audio and video tracks, and reports playback state (see the playback sketch after this list)
  • io-audio — starts audio device drivers to enable the output of audio streams
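
In the QNX CAR Platform, player apps normally reach mm-renderer indirectly, through mm-player, but mm-renderer itself exposes a C client API that follows a connect, create-context, attach-output, attach-input, play sequence. The fragment below is a minimal sketch of that sequence; the context name, output URL, and file path are example values, and the exact calls and arguments should be confirmed against the QNX multimedia documentation for your platform version.

/* Minimal mm-renderer playback sketch. The context name, output URL, and
 * file URL are example values; confirm exact calls and arguments against
 * the QNX multimedia documentation for your platform version. */
#include <stdio.h>
#include <mm/renderer.h>

int main(void)
{
    mmr_connection_t *conn = mmr_connect(NULL);   /* default mm-renderer service */
    if (conn == NULL) {
        fprintf(stderr, "mmr_connect failed\n");
        return 1;
    }

    /* "myplayer" is an arbitrary example context name. */
    mmr_context_t *ctxt = mmr_context_create(conn, "myplayer", 0, 0600);
    if (ctxt == NULL) {
        fprintf(stderr, "mmr_context_create failed\n");
        mmr_disconnect(conn);
        return 1;
    }

    /* Route audio to the default output, attach a track (example path), play. */
    if (mmr_output_attach(ctxt, "audio:default", "audio") < 0 ||
        mmr_input_attach(ctxt, "file:///media/usb0/music/track01.mp3", "track") < 0 ||
        mmr_play(ctxt) < 0) {
        fprintf(stderr, "playback setup failed\n");
    }

    /* ... wait for playback to finish or for a stop request ... */

    mmr_stop(ctxt);
    mmr_context_destroy(ctxt);
    mmr_disconnect(conn);
    return 0;
}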

OS services layer
The lowest layer of the multimedia architecture includes device drivers and protocol stacks that, among other things, detect whether the user has inserted or removed any media device. The following diagram summarizes what happens when one of these services detects an insertion:

  1. User inserts the device.
  2. The corresponding driver or protocol stack informs device publishers of the insertion.
  3. The publishers write the device information to Persistent Publish Subscribe (PPS) objects in a directory monitored by the mm-detect service; a minimal monitoring sketch follows this list. (Read my previous posts here and here to learn how QNX PPS messaging enables loosely coupled, easy-to-extend designs.)
  4. To start synchronizing the device’s metadata, mm-detect loads the device’s QDB database into memory and passes the device’s mountpoint and database name to mm-sync.
  5. mm-sync synchronizes the metadata of all media files on the device.
  6. mm-sync uses media libraries to read file paths and other information from media tracks found on the device. It then copies the extracted metadata into the appropriate database tables and columns. Applications can then query the QDB database to obtain metadata information such as track title and album name.
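
As a rough illustration of step 3, here is a minimal sketch of a PPS consumer that watches a directory of device objects. It opens the directory’s .all special object with the wait,delta options, so each read() blocks until an object changes and returns only the changes. The directory path and the mount_point attribute name are examples chosen for illustration, not necessarily the objects and attributes that mm-detect actually monitors.

/* Sketch of a PPS consumer watching a directory of device objects. The
 * directory path and the mount_point attribute are examples, not
 * necessarily what mm-detect monitors on a given system. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* ".all" merges every object in the directory; "wait,delta" makes each
     * read() block until something changes and return only the changes. */
    int fd = open("/pps/qnx/mount/.all?wait,delta", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[2048];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        /* PPS data is line oriented: "@object" names the object that
         * changed, followed by "attribute::value" lines. */
        printf("PPS update:\n%s\n", buf);
        if (strstr(buf, "mount_point::") != NULL) {
            /* A real consumer would parse the mountpoint here and kick
             * off metadata synchronization for the new device. */
        }
    }

    close(fd);
    return 0;
}

This loose coupling through PPS objects is what lets publishers and consumers such as mm-detect evolve independently, as discussed in the posts linked above.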

These steps may describe how the architecture detects and synchronizes with devices, but they can't capture the efficiency of the architecture and how it can deliver a fast, responsive user experience. For that, I invite you to check out this video on the QNX CAR Platform. The section on multimedia synchronization starts at the 1:32 mark, but I encourage you to watch the whole thing to see how the platform performs multimedia operations while concurrently managing other tasks:



Media browsing and playback
I’ve touched on how the multimedia architecture automatically detects and synchronizes devices. But of course, it does a lot more, including media browsing and media playback. To learn more about these features, visit the QNX CAR Platform documentation on the QNX website.


Previous posts in the QNX CAR Platform series:
 
  • A question of getting there — wherein I examine how the platform gives customers the flexibility to choose from a variety of navigation solutions
  • A question of architecture — wherein I discuss how the platform simplifies the challenge of integrating multiple disparate technologies, from graphics to silicon
  • A question of concurrency — wherein I address the a priori question: why does the auto industry need a platform like QNX CAR in the first place?

Tuesday, May 5, 2015

Bringing safety assurance to automotive instrument clusters

Guest post by Chris Giordano, director of global business and software support, DiSTI Corporation

Digital instrument clusters in automobiles are here, and almost any aviator could tell you this change was coming. Since the 1970s, pilots have benefited from the use of digital screens in the cockpit to depict and convey aircraft status information.

The technology came as a response to the growing number of elements that were competing for space within the cockpit and for the pilot’s attention. What was needed was a way to process the raw aircraft system and flight data into an easy-to-understand picture of the aircraft’s situation: position, orientation, altitude, speed. Engineers at NASA Langley Research Center teamed with industry partners to develop the display concepts that would become the foundation of today’s primary flight displays (PFD).

Notional example of a primary flight display

By the early 1980s, as software continued to replace the functionality found in hardware components, certification had become more complicated. Potential flaws could be prevalent in both the hardware and the software. To alleviate this problem, standards for software development for aircraft systems emerged. In the U.S., DO-178 became the standard, and the Europeans ratified the equivalent ED-12. These standards not only required a logical assessment and validation of a system’s inputs and outputs, but also dove further into the development cycle to prove that procedures were in place to prevent and minimize the risk of a system failure. As a result, whenever a passenger walks down the jetway and onto their flight, these software standards help ensure they arrive safely.

In the past decade the automotive industry has progressed through a similar expansion in software use. Today, electronics and software drive 90% of all innovation. Electronics and software also determine up to 40% of the vehicle’s development costs. Anywhere from 50% to 70% of the development costs for an Electronic Control Unit (ECU) are related to software (Challenges in Automotive Software Engineering, Manfred Broy, Institut für Informatik Technische Universität München, 2006). New vehicles are monitoring complex engines, providing route guidance, communicating with other networks, avoiding accidents, and serving up media. Each new feature adds to system complexity, furthering the need to use software development best practices in order to avoid a big bowl of spaghetti code.

Notional example of an advanced instrument cluster start-up system check

The need for safety in embedded system software becomes more pressing as graphics-based instrument clusters continue to replace traditional analog gauge clusters. Enter ISO 26262, the standard for functional safety of electrical and electronic components in production passenger vehicles. Formally released in November 2011, the standard establishes the state of the art for the automotive industry and assures the functional safety of these systems.

By using the QNX Neutrino OS and the DiSTI GL Studio toolkit, a development team can reduce the time and effort required to certify their solution to the automotive ISO 26262 functional safety standard up to Automotive Safety Integrity Level D (ASIL D), the highest classification of safety criticality defined by the ISO 26262 standard. This compliance allows automakers and Tier 1s to use this solution to meet safety certification requirements within the scope they choose.

This QNX Neutrino OS and DiSTI GL Studio solution will be on display at this year’s TU-Automotive Detroit. Check it out in the QNX booth, #C92, and the DiSTI booth, #A21.

Visit the DiSTI blog here.


Chris Giordano has been developing and supporting commercial HMI software for over 16 years and has been the lead engineer or program manager for 58 different visual programs at The DiSTI Corporation. Currently, Chris manages DiSTI’s Global Business and Software Support and is the program manager for several automotive OEM and Tier 1 supplier companies that utilize DiSTI’s GL Studio for their HMI development efforts. Chris worked very closely with the team at DiSTI that took GL Studio through the ISO 26262 certification process.