
MY DESIGN PROCESS

Human-Musician Interaction

 

"What is the most effective way a robot can help with musical improvisation?" 

This project was one of the first times I had the opportunity to apply human-centered design skills in the robotics space. Our team was interested in enhancing non-verbal communication between musicians in a jazz combo while still allowing spontaneous decisions to be made.

In a team of five, I was mainly responsible for user research, interaction flow design, and the final poster design. Our team was also strong at collaborative design synthesis: we played to each member's strengths in their assigned tasks while giving one another constant feedback. It was a unique design process in that we had to consider not only social and spatial factors but also musical taste.

1. User group observation

Insights from observation

Our team observed a jazz combo consisting of drums, bass, piano, guitar, tenor saxophone, and bass clarinet. Jazz is characterized by improvisation; no two performances of the same song by the same people are ever the same, and much of the performance is decided spontaneously.

 

 

  • All cues are non-verbal because nobody wants the music interrupted

  • Numerous decisions are made in the middle of a performance, such as who will take solos and in what order, how long each solo will be, when the band should come back in after rhythm-section solos, and how the piece should end as a group.

  • There is also no defined conductor or leader.

  • Eye contact is one of the biggest components of group performance success, and it was also the single largest issue we saw: musicians often failed to look at each other to give or receive cues because they were too focused on their sheet music or simply weren't paying attention.

2. Initial brainstorm

From our observations, our team proposed building a robot focused on facilitating more eye contact. Specifically, the bot would provide proactive encouragement or reactive signaling, and it could be controlled either by a non-performing third party or by a member of the performing group. The bot could give players visual signals about transitions, solo assignments, and so on, acting like a subtle conductor. A sketch of this control flow follows below.
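As an illustration, here is a minimal sketch of how a non-performing operator might trigger cues. The cue names and key bindings are hypothetical; at this stage our design only specified that the bot signals transitions, solo assignments, and endings.

```python
# Hypothetical cue protocol: a non-performing operator presses a key,
# and the robot performs the matching visual signal.
CUES = {
    "s": "assign_solo",        # point toward the next soloist
    "t": "signal_transition",  # e.g., bring the band back in
    "e": "signal_ending",      # wind the piece down as a group
}

def run_controller(send_to_robot):
    """Read single-key commands from the operator and forward cues."""
    while True:
        key = input("cue> ").strip().lower()
        if key == "q":          # quit
            break
        cue = CUES.get(key)
        if cue:
            send_to_robot(cue)

# Example: print cues instead of driving real hardware.
if __name__ == "__main__":
    run_controller(lambda cue: print(f"robot performs: {cue}"))
```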

 

 

 

 

To gain more insights into visual cues in non-verbal creative tasks, our team designed a lab experiment in which:

  • A group has to silently draw a cohesive scene, deciding together what to draw and finishing within a time limit

    • Task 1: each person has their own sheet of paper but works adjacent to the others

    • Task 2: each person has a separate sheet and works on it independently

We recorded the task (view video here), then coded it by the number of times participants gave eye-contact (or other non-verbal) signals:
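A minimal sketch of the tallying step, assuming each coded signal was logged as one row of a CSV with participant, task, and signal-type columns (the file name and column names are hypothetical):

```python
# Tally coded non-verbal signals per task, assuming a hypothetical
# CSV log with columns: participant, task, signal.
import csv
from collections import Counter

def tally_signals(path="coded_signals.csv"):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["task"], row["signal"])] += 1
    return counts

if __name__ == "__main__":
    for (task, signal), n in sorted(tally_signals().items()):
        print(f"{task}: {signal} x{n}")
```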

 

 

 

Through the task and benchmarking, our team identified the importance of non-verbal cues, empathic intelligence, and use of space in communication during creative tasks. We then designed the interaction flow with this information in mind.



Diagram of Observed Group

Teamwork task design

3. Prototyping

Design Requirements:

  • Able to both retreat into the environment and extend into view when necessary

  • Able to redirect members' attention

  • Should be able to intervene ahead of decisions

Benchmarking

Prototype I:

  • Critical motions: mobility, raising the head, and turning the head

  • Incorporates different expressive behaviors by changing the frequency of motion in each case

  • Associates movements with expressive behaviors

  • Final thoughts: mobility will likely distract from the task rather than facilitate it; perhaps incorporate other parts of this design into the final product

Prototype II:

  • Articulated arms: three arms with pivot points that extend for expressive motions

  • Jumper wires attached to each arm for motion

  • Multiple arms allow for more motions at the same time

Prototype III:

  • Circular design can direct attention in a 360-degree sweep

  • Heads can retreat out of sight, targeting individual musicians and cueing only when necessary

  • Multiple heads swivel and raise independently

  • The mechanism can rotate within the can

4. User testing

  • Our team set up a Qualtrics survey that showed participants videos of the robot's movements and asked one open-ended question along with two Likert-scale questions, to see (1) whether people could identify our robot's expressions and (2) whether they were confident in their identifications. A sketch of the summary step follows below.
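A minimal sketch of how the two Likert items could be summarized once exported from Qualtrics; the column names and example rows are hypothetical placeholders, not our actual responses.

```python
# Summarize hypothetical Likert responses (1-5 scale): could people
# identify the expression, and how confident were they?
import statistics

responses = [  # placeholder rows, not real data
    {"identified": 4, "confidence": 3},
    {"identified": 2, "confidence": 4},
    {"identified": 5, "confidence": 5},
]

for item in ("identified", "confidence"):
    scores = [r[item] for r in responses]
    print(f"{item}: mean={statistics.mean(scores):.2f}, "
          f"stdev={statistics.pstdev(scores):.2f}")
```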

User testing insights:

  • Without the sound component, our robot's movement was confusing and didn't make sense to most participants

  • The beat-change motion may have to be more dramatic than we expected

  • Environmental factors affect participants' interpretation of robot motion

    • For the last motion specifically, because our video showed two students writing, participants' answers circled around “homework,” “writing,” and “students” instead of the attention-directing motion itself

  • The robot's appearance was repeatedly described as “not like a robot” and “not noticeable,” which is exactly what we want: it won't be distracting

Interaction flow chart

5. Final design & task

Final robot design

  • Incorporates elements from all three prototypes

  • A robot with a round, lantern body and two articulated arms

  • LEDs that pulse idle beats

  • Each arm can rotate about 180 degrees and can simultaneously bend and extend (see the sketch after this list)

  • The rice-paper lantern itself is noticeable and aesthetically pleasing but not “loud” or distracting
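A minimal sketch of these motions, assuming the arms are driven by hobby servos and the lantern LED by PWM on a Raspberry Pi via gpiozero; the write-up does not specify the electronics, so the pins and timings here are hypothetical.

```python
# Two-arm cue motions plus an LED "idle beat", assuming hobby servos
# and a PWM LED on hypothetical GPIO pins driven via gpiozero.
from time import sleep
from gpiozero import Servo, PWMLED

left_arm = Servo(17)     # hypothetical pin assignments
right_arm = Servo(18)
lantern_led = PWMLED(27)

def idle_beat(bpm=120):
    """Pulse the lantern LED on the beat while no cue is active."""
    period = 60.0 / bpm
    lantern_led.pulse(fade_in_time=period / 2, fade_out_time=period / 2)

def cue_transition():
    """Extend both arms into view, hold, then retreat."""
    left_arm.max()
    right_arm.max()
    sleep(1.0)
    left_arm.min()
    right_arm.min()

def cue_solo(side="left"):
    """Extend one arm toward a musician to assign a solo."""
    arm = left_arm if side == "left" else right_arm
    arm.max()
    sleep(1.5)
    arm.min()

idle_beat()        # LED pulses in the background
cue_solo("right")  # example cue
```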

Final user testing

To test our final robot, our team recruited two musicians, Michael and Jacob, and provided them with jazz sheet music. The tune chosen was Equinox by John Coltrane, a song neither of them had seen before. This was a conscious decision: we wanted to see whether our robot could help the two musicians look at each other more instead of staying buried in the page, the problem identified in our very first observations. The two performed with the robot centered between them to give cues, and they could react to it as they wished. We ran the experiment in two rounds.

  • Round 1: The musicians performed without any discussion of the robot's motions; we did not inform them of the intended meaning of its actions. They were also not allowed to decide ahead of time on the order of solos, tempo, intros and outros, etc.

  • Round 2: The musicians were allowed to briefly discuss what they thought the robot's motions meant, but they still were not allowed to make performance decisions ahead of time.

Interactive video

Key insights from user testing:

  • Combined motions could convey expressions we did not intend

    • e.g., shaking side to side while lowering to signal a dynamic change

  • The musicians said they would actually have liked more involvement from the robot

    • More involvement at points other than the ends of choruses and transitions

    • They liked the first trial better: with neither of them having any idea what the robot meant, they had more room for freedom and interpretation

    • It also helped them make more eye contact, because they would check to make sure the other person was on the same page

  • They liked that the robot was not human

    • Something with more body would have made it seem like there was another person in the room to account for and coordinate with, rather than something just providing subtle cue suggestions


6. Final poster

MY ROLE

UX Designer

 

WHAT I DID

  • Ideation

  • User Research

  • Benchmarking

  • Interaction Flow Design

  • User Testing

  • Poster Design

 

TEAM

Justin Tung

Ryan Enderby

Abenezer Lemma

Sarah Choe

Paris (Peggie) Hsu

Project Website

http://info5410dp3.strikingly.com/

2016

 

 

 
