In the early years of my career, every user experience I designed comprised a single modality and a single visual UI (with accessibility accommodations). Recent years have yielded an ever-expanding ecosystem of modalities, including products with smaller UIs and even some with no UI at all.
With these new modalities, the process of building a meaningful UX has become quite complex. But I spent the better part of two years working with a brilliant team on an in-vehicle infotainment (IVI) system for an automobile client (under NDA), and gained some valuable insights.
The overall goals of this project were to conduct generative research on the target audience, explore features and technologies necessary to fulfill the needs/desires of that audience, and conduct evaluative research and user testing with a partial prototype. Our research and test results would help inform the client about which features would be both meaningful and successful in their automobiles.
Crafting Scenarios and a User Journey
We began with a number of scenarios that would help us define a spectrum of prospective feature sets. Then we compiled those scenarios into an abbreviated user journey spanning part of a single day. This would eventually be crafted into an immersive prototype for testing and showcasing. The first step, however, would be to outline the journey that we would use for the duration of the project.
Once we had our very high-level journey map, we could begin considering how to get from beginning to end. We faced a number of challenges, but three in particular garnered a lot of our attention:
- Assessing an appropriate balance of control between driver and automobile (and its autonomy)
- Working with a potential for multiple modalities (aural, visual, and haptic)
- Understanding and accommodating cognitive load while driving
Balancing Control Between Driver and Automobile
First, we needed to decide who would be in control. Did people want a partner or a manager? Autonomy or dependency? An anticipatory or reactionary system? Did people even want their automobiles outsmarting them?
A great way to answer these questions was to use familiar metaphors to help us understand the relationship between driver and car, and ultimately the level of control the driver has. We turned each metaphor into an agent, essentially a persona, complete with a name and an avatar.
Let’s look at how two agents — a dog and a butler — might help us compare two types of user experiences.
This afforded us an opportunity to explore two very different relationships for the overall UX, and assess what kind of balance of control would be appropriate. The dog would react to our needs while requiring training to learn about our preferences. The butler, on the other hand, would anticipate our needs but would need deeper permissions to learn our preferences.
Accommodating Multiple Modalities for Input and Output
With a journey map and agents in place, we could begin to explore options for interactions between driver and automobile. This would require taking inventory of input/output modalities, then connecting the dots between them.
In this case, we looked at inputs like mobile devices, wearables, sensors, internal controls, voice, and even IoT inputs like GPS and external infrastructure systems. Outputs might be visual displays (including a Heads-Up Display, or “HUD”), haptic feedback, aural cues, mechanical devices, and environmental systems (light, temperature, etc.).
Every scenario presented myriad challenges, each requiring us to define the source of input and the method of feedback. The perfect activity for this was “How Might We?” (HMW). This is one of my favorite UX activities because everyone can be involved and it yields rapid results.
Let’s look at how this works, using a scenario when a drawbridge would disrupt a mapped trip. We would begin by generating a number of HMW questions around the scenario:
- “HMW alert the driver before she sees the obstacle?”
- “HMW know about a bridge draw in advance?”
- “HMW help the driver reroute mid-trip?”
Once we had generated a series of questions for each scenario, we could begin to map out input/output modalities. This would help us build mental models for use across multiple modalities — including defining the primary function of each of the four visual displays — laying the groundwork for a didactic UX that would dramatically reduce a driver’s cognitive load.
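To make that mapping concrete, here is a minimal sketch of the kind of structure it produces. Everything here is hypothetical: the modality names, scenario fields, and example mappings are invented for illustration, not the project's actual taxonomy.

```python
from dataclasses import dataclass, field

# Hypothetical modality inventory; illustrative names only.
INPUTS = {"voice", "steering_controls", "mobile_device", "wearable", "gps"}
OUTPUTS = {"hud", "center_display", "cluster_display", "aural_cue", "haptic"}

@dataclass
class ScenarioMapping:
    """Links one 'How Might We?' question to candidate input/output modalities."""
    question: str
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)

    def validate(self) -> bool:
        # Guard against modalities outside the agreed inventory.
        return self.inputs <= INPUTS and self.outputs <= OUTPUTS

# Invented mappings for the drawbridge scenario.
drawbridge = [
    ScenarioMapping("HMW alert the driver before she sees the obstacle?",
                    inputs={"gps"}, outputs={"hud", "aural_cue"}),
    ScenarioMapping("HMW help the driver reroute mid-trip?",
                    inputs={"voice", "steering_controls"},
                    outputs={"center_display"}),
]

assert all(m.validate() for m in drawbridge)
```

Keeping each HMW question tied to a bounded inventory like this is one way to spot when a scenario quietly assumes a modality the system doesn't offer.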
Storyboarding and User Testing
User tests would be a critical step to see how our features would be received by the target audience. It was too early to build a prototype, but storyboards would help people visualize each scenario on the journey map.
Our facilitator walked participants through our storyboards, one for each of our two final agents, which enabled participants to see themselves in a role with an automobile. Their reactions helped us understand which features and what agent would be most desirable.
Crafting a Dimensional Prototype for Testing Cognitive Load
We now had a pretty solid understanding of where we could go with our IVI system, and were ready for our journey to be realized as a working prototype. We wanted to test the features in a simulated driving experience, to see how the UX would hold up under a high cognitive load. The prototype would need to guide participants through our journey of an imaginary day.
We used a virtual world from a Unity game environment to emulate the view through a windshield. We then mapped out a predefined path through a city that would enable us to fully realize our entire journey, introducing characters and other environmental variables as necessary. Participants were seated within a virtual automobile cockpit, complete with a steering wheel, pedals, and four visual displays.
Our facilitator sat in the passenger seat and helped guide the test drivers through the journey. They had to pay attention to traffic, pedestrians, traffic lights, and other components of a real driving experience, while interacting with an array of prompts that were embedded into the game environment, based on the scenarios in our timeline.
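Conceptually, those embedded prompts behaved like one-shot events triggered as the vehicle progressed along the scripted route. The sketch below illustrates that idea in Python rather than the actual Unity implementation; every name, message, and trigger value is invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prompt:
    trigger_m: float   # distance along the scripted route, in meters (hypothetical)
    message: str
    modality: str      # e.g. "aural", "hud"

# Invented example prompts keyed to points on the route.
PROMPTS = [
    Prompt(1200.0, "Drawbridge ahead, reroute available", "aural"),
    Prompt(2500.0, "Low fuel, station on route", "hud"),
]

def due_prompts(odometer_m: float, fired: set) -> list:
    """Return prompts whose trigger point has been passed, firing each only once."""
    due = [p for p in PROMPTS if p.trigger_m <= odometer_m and p not in fired]
    fired.update(due)
    return due

fired = set()
print([p.message for p in due_prompts(1300.0, fired)])  # first prompt fires once
print([p.message for p in due_prompts(1300.0, fired)])  # empty: already fired
```

Tying prompts to route position rather than wall-clock time keeps the test repeatable across participants who drive at different speeds.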
While this environment was a simulation, it had the necessary components to help participants get a feel for using our features while driving. This was far more realistic than storyboards, wireframes, or even a single-screen experience. We learned a lot from these sessions, and our findings led us to confident recommendations for the full IVI system.
Crafting a UX for an automobile environment is quite complex. Truly understanding the relationship between driver and automobile was mission critical, and was a determining factor in our success. Adding in the complexities of multiple modalities and cognitive load made for a tall order for our team. While there are many ways to approach a project like this, our approach proved successful, and we learned a lot that we could carry with us moving forward.
Some of the data in these images was modified/cloaked to honor an NDA. An early version of this article was originally published to the Optimal Workshop blog in advance of my talk on voice-based UX systems at UX New Zealand 2017.