Victor Shepardson

Investigating the Lived Experience of Statistical AI with Microphenomenology and Notochord

When

Thematic Session 4: Creativity and Expressivity (Tuesday, 11:05)

Abstract

High-dimensional statistical models like GPT-3 for text and Stable Diffusion for images promise to transform creative work, and similar techniques are emerging in the musical domain. These models possess a sort of statistical intelligence derived rather crudely from inhuman quantities of data, yet at times strikingly capable. As these models get faster and their user interfaces improve, they are bound to become more interactive, more ready-to-hand, and easier to improvise with. To study the impending collision of this statistical AI with the embodied experience of creative work, we designed Notochord, a real-time model for MIDI sequences. More so than similar models, Notochord (or instruments derived from it) can be used by a musician from within the flow of music, via MIDI controller or live coding. Notochord enables very fine-grained interactions: part of each MIDI event can come from a performer and part from the model in various ways, so an instrument builder can program Notochord to implement a harmonizer, a pitch selector, machine accompaniment, and other kinds of ‘intelligent’ musical instruments. Because musicians can interact with Notochord in an embodied way, it is also a scientific instrument uniquely equipped for investigating the moment-to-moment lived experience of interaction with statistical AI: intelligent musical instruments built on Notochord can elicit the experience of embodied interaction with statistical AI. We propose to document those experiences using the microphenomenological interview method, which recovers the fine details of brief moments of lived experience. In this talk I will briefly demonstrate Notochord and then discuss ongoing work from our investigation of musicians’ lived experience with it.
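The idea of splitting a single MIDI event between performer and model can be sketched in a few lines. The following is a hypothetical toy illustration, not Notochord's actual API: the performer fixes some fields of an event (e.g. velocity), and a stand-in statistical model (here a hard-coded pitch-transition table) samples whatever is left unspecified.

```python
# Toy sketch (NOT Notochord's real interface) of per-event partial
# specification: performer-supplied fields are kept, missing fields
# are sampled from a simple probability model.
import random

# Assumed toy distribution: next-pitch probabilities conditioned on
# the previous pitch, standing in for a learned sequence model.
TRANSITIONS = {
    60: {62: 0.5, 64: 0.3, 67: 0.2},
    62: {60: 0.4, 64: 0.6},
    64: {60: 0.3, 62: 0.3, 67: 0.4},
    67: {60: 0.6, 64: 0.4},
}

def complete_event(prev_pitch, fixed, rng=random):
    """Return a full MIDI-like event dict. Fields present in `fixed`
    come from the performer; the rest are sampled by the model."""
    event = dict(fixed)
    if "pitch" not in event:
        dist = TRANSITIONS[prev_pitch]
        pitches, weights = zip(*dist.items())
        event["pitch"] = rng.choices(pitches, weights=weights)[0]
    if "velocity" not in event:
        event["velocity"] = rng.randint(40, 100)  # placeholder prior
    return event

# Harmonizer-like use: performer plays a dynamic (velocity),
# the model chooses the pitch.
rng = random.Random(0)
ev = complete_event(prev_pitch=60, fixed={"velocity": 80}, rng=rng)

# Pitch-selector-like use: performer fixes the pitch,
# the model fills in the velocity.
ev2 = complete_event(prev_pitch=60, fixed={"pitch": 64}, rng=rng)
```

By changing which fields are fixed per event, the same completion routine yields qualitatively different instruments, which is the kind of fine-grained division of labour the abstract describes.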

Bio

I am a doctoral student in the Intelligent Instruments Lab at LHI. Previously I worked on neural models of speech as a machine learning engineer and data scientist. Before that I was an MA student in Digital Musics at Dartmouth College and a BA student in Computer Science at the University of Virginia. My interests include machine learning, artificial intelligence, generative art, audiovisual music, and improvisation. My current project involves building an AI-augmented looping instrument and asking what AI means to people, anyway.

Published Oct. 22, 2022 7:39 PM - Last modified Oct. 22, 2022 7:39 PM