UX Design

Audio Auditioning

Design of a module for iHeartMedia's radio talent to preview and edit audio before it airs
Aug 2021

Background

The project and my role

As a UX Design Intern at iHeartMedia, I was tasked with designing the entire workflow for talent to audition audio tracks.

Problem

What is audio auditioning?

iHeartMedia's Sound+ is a platform for talent (radio hosts) to run live shows from anywhere in the world with only a laptop.

When talent goes to insert a specific song, piece of news, live call, or any other audio into the live playlist, they need the ability to

  1. Preview an audio track (off the air)
  2. Set a custom start and end point for a track

This functionality is referred to as auditioning, and I was tasked with designing it.

Why auditioning is relevant - use cases

The ability to audition tracks is vital for talent during live shows. Through conversations with talent, I identified the core use cases that my end solution would need to enable.

As a radio talent running a live show, I want

  • to make sure the selected song Champion is the right one
  • to remove the 15 seconds of silence at the start of this Taco Bell ad
  • to skip the long drawn out intro of the song For Whom The Bell Tolls
  • to play one line of the song Stayin' Alive while a survivor tells a story

How might we enable radio talent to audition audio during a live broadcast?

Research

Studying existing patterns

I began my research with a thorough look at the existing audio playout system (called NextGen) for patterns and flows relevant to auditioning. I examined patterns ranging from audio playback to voice tracking, both fundamental parts of the software.

Voice tracking interface in NextGen, where talent can record themselves speaking between tracks

Understanding the user

I also took advantage of the rich research and data already collected by the UX team, which included detailed user personas, journey maps, interview transcriptions, and much more. In addition to saving me a lot of time, this helped me learn about users' pain points and build a holistic picture of the workflows and scenarios that needed to be designed for.

Persona of an on-air talent - the primary user (*not created by me)

Competitive Analysis

The most insightful part of the research process was an in-depth look at similar audio streaming and editing software. The goals were to

  1. Examine how/when auditioning occurs in other software
  2. Identify common design patterns and conventions
  3. Further understand the taxonomy/visual language of audio editing

To do this, I documented the relevant flows (via screenshots) from each platform, abstracted and wireframed their core patterns, and outlined key takeaways for each.

Define

Auditioning Workflow

With an understanding of how auditioning is performed in other software and how talent use the existing software, I created a simple diagram to illustrate the various actions that need to be designed for and how they fit together.

Design

Sketches

I began my design process on paper, sketching out multiple ideas and variations of patterns based on my research. This exploration served as a launch pad for the rest of the design process. Right off the bat, I identified two important factors to consider:

  1. Auditioning needs to happen in place - Sound+ is a fast-moving, time-sensitive platform, so navigating the user away from the main screen or obscuring too much from view would likely be unhelpful
  2. Modes and signifiers will be crucial - since auditioning is not a linear process (it can happen at any time and for various reasons), it needs to be clear what state the interface is in and what actions are available at all times

Early wireframes and exploration

I translated my paper sketches to Figma, mocking them up in low fidelity to preserve the sketches' simplicity and keep the focus on the auditioning patterns. Some ideas from the sketches were immediately ruled out, and new ones were identified.

With the ideas mocked up, I presented them to the UX team to gather initial feedback. I iterated on these wireframes until the team and I felt confident about 1-2 patterns that I could then test in mid-fidelity.

More developed wireframes for testing

After narrowing down the range of feasible designs, I created mid-fidelity wireframes to flesh out the patterns in greater detail and to have something visually presentable to show users in a usability test.

Detailed mockups

After a round of guerrilla testing, I created high-fidelity designs and a fully functional prototype for conducting in-depth usability testing.

Full prototype

Testing

Guerrilla and Usability Testing

To ensure the patterns and behaviors being established were usable and aligned with users' expectations, I conducted two rounds of user testing: guerrilla testing and usability testing.

Screen grab from guerrilla test

Detailing takeaways

I outlined the insights and action items from both rounds of testing in a diagram like the one below. This made it easier for me and other team members to grasp the main learnings at a glance.

Development

Detailing specs for implementation

With all the designs completed and validated by users, I thoroughly documented how the auditioning module would work, specifying everything from what each button does to resizing behavior.

Conclusion

Outcomes

  • According to the UX team and other stakeholders, the auditioning designs I created were very helpful and fit right into the rest of the Sound+ system
  • Based on feedback from talent in usability testing, the patterns and workflow I created were easy to use and exciting