Approaching the Complexities and Subtleties of an Automotive UX

Concept car for Jaguar

In the early years of my career, every user experience I designed comprised a single modality and a single visual UI (with accessibility accommodations). Recent years have yielded an ever-expanding ecosystem of modalities, including products with smaller UIs and even products with no UI at all.

With these new modalities, the process of building a meaningful UX has become quite complex. But I spent the better part of two years working with a brilliant team on an in-vehicle infotainment (IVI) system for an automobile client (under NDA), and gained some valuable insights.

The overall goals of this project were to conduct generative research on the target audience, explore features and technologies necessary to fulfill the needs/desires of that audience, and conduct evaluative research and user testing with a partial prototype. Our research and test results would help inform the client about which features would be both meaningful and successful in their automobiles.

Crafting Scenarios and a User Journey

Once we had our very high-level journey map, we could begin considering how to get from beginning to end. We faced a number of challenges, but three in particular garnered a lot of our attention:

  • Assessing an appropriate balance of control between driver and automobile (and its autonomy)
  • Working with a potential for multiple modalities (aural, visual, and haptic)
  • Understanding and accommodating cognitive load while driving

Balancing Control Between Driver and Automobile

A great way to explore this balance was to use familiar metaphors to help us understand the relationship between driver and car, and ultimately the level of control the driver has. We gave the metaphors agency by turning them into personas of a sort, complete with names and avatars.

Let’s look at how two agents — a dog and a butler — might help us compare two types of user experiences.

This afforded us an opportunity to explore two very different relationships for the overall UX, and assess what kind of balance of control would be appropriate. The dog would react to our needs while requiring training to learn about our preferences. The butler, on the other hand, would anticipate our needs but would need deeper permissions to learn our preferences.

Accommodating Multiple Modalities for Input and Output

In this case, we looked at inputs like mobile devices, wearables, sensors, internal controls, voice, and even IoT inputs like GPS and external infrastructure systems. Outputs might be visual displays (including a Heads-Up Display, or “HUD”), haptic feedback, aural cues, mechanical devices, and environmental systems (light, temperature, etc.).

Every scenario presented myriad challenges, each requiring us to define the source of input and the method of feedback. The perfect activity for this was “How Might We?” (HMW). This is one of my favorite UX activities because everyone can be involved and it yields rapid results.

Let’s look at how this works, using a scenario in which a drawbridge would disrupt a mapped trip. We would begin by generating a number of HMW questions around the scenario:

  • “HMW alert the driver before she sees the obstacle?”
  • “HMW know about a bridge draw in advance?”
  • “HMW help the driver reroute mid-trip?”

Once we had generated a series of questions for each scenario, we could begin to map out input/output modalities. This would help us build mental models for use with multiple modalities — including defining the primary function for each of the four visual displays — creating the groundwork for a didactic UX that would dramatically reduce a driver’s cognitive load.
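To make that mapping concrete, here is a minimal sketch (in Python, purely illustrative; the scenario name, modality labels, and structure are assumptions rather than the project’s actual artifacts) of how a scenario from the HMW exercise might be paired with candidate input and output modalities:

```python
# Hypothetical sketch of a scenario-to-modality mapping. The scenario
# names and modality labels are illustrative, not the project's actual artifacts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModalityMap:
    scenario: str                                      # the situation the driver faces
    inputs: List[str] = field(default_factory=list)    # where the signal originates
    outputs: List[str] = field(default_factory=list)   # how the system responds

drawbridge = ModalityMap(
    scenario="Drawbridge opens along the mapped route",
    inputs=["GPS", "external infrastructure feed", "voice command"],
    outputs=["HUD alert", "aural cue", "reroute prompt on the center display"],
)

# Each HMW question is answered by pairing one or more inputs with the
# least-distracting output for that moment in the journey.
print(drawbridge)
```

Thinking about scenarios in this structured way made it easier to see which display or output channel should own a given moment, rather than letting every alert compete for the same screen.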

Storyboarding and User Testing

Our facilitator walked participants through our storyboards, one for each of our two final agents, which enabled participants to see themselves in a relationship with the automobile. Their reactions helped us understand which features and which agent would be most desirable.

Crafting a Dimensional Prototype for Testing Cognitive Load

We used a virtual world built in the Unity game engine to emulate the view through a windshield. We then mapped out a predefined path through a city that would let us realize our entire journey, introducing characters and other environmental variables as necessary. Participants were seated in a virtual automobile cockpit, complete with a steering wheel, pedals, and four visual displays.

Our facilitator sat in the passenger seat and helped guide the test drivers through the journey. They had to pay attention to traffic, pedestrians, traffic lights, and other components of a real driving experience, while interacting with an array of prompts that were embedded into the game environment, based on the scenarios in our timeline.
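As a rough, language-agnostic illustration of how prompts like these can be embedded along a predefined route, the sketch below (in Python for readability; the actual prototype’s Unity scripts would be C#, and none of the names here come from the project) ties each scenario prompt to a waypoint and fires it when the virtual car gets close enough:

```python
# Hypothetical illustration of prompt-trigger logic along a predefined path;
# the real prototype was scripted inside Unity (C#), which is not shown here.
import math

# (x, z) waypoints along the simulated city route, each tied to a prompt.
scenario_prompts = [
    {"pos": (120.0, 45.0), "prompt": "Drawbridge ahead - offer reroute", "fired": False},
    {"pos": (310.0, 80.0), "prompt": "Heavy pedestrian traffic - suggest caution", "fired": False},
]

TRIGGER_RADIUS = 15.0  # metres from a waypoint at which its prompt fires

def show_prompt(text):
    """Stand-in for routing a prompt to the HUD, audio, or haptics."""
    print(f"[IVI prompt] {text}")

def update(car_pos):
    """Called once per frame with the virtual car's current (x, z) position."""
    for item in scenario_prompts:
        if item["fired"]:
            continue
        dx = car_pos[0] - item["pos"][0]
        dz = car_pos[1] - item["pos"][1]
        if math.hypot(dx, dz) <= TRIGGER_RADIUS:
            item["fired"] = True
            show_prompt(item["prompt"])
```

The important point is not the code itself but the design choice it represents: prompts arrive at specific, pre-planned moments in the drive, so every participant experiences the same scenarios under comparable cognitive load.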

While this environment was a simulation, it had the components necessary to help participants get a feel for using our features while driving. This was far more realistic than storyboards, wireframes, or even a single-screen experience. We learned a lot from these sessions, and our findings led us to confident recommendations for the full IVI system.

Conclusion

Some of the data in these images was modified/cloaked to honor an NDA. An early version of this article was originally published to the Optimal Workshop blog in advance of my talk on voice-based UX systems at UX New Zealand 2017.

