Open State

Mathew Emmett (Composer), Adam Benjamin (Other)

Research output: Non-textual form › Sound/Music

Abstract

Open State is a contemporary dance soundscape developed iteratively over two years across two venues in Japan and the UK. The research uses audio sensors and augmented video conferencing to capture vocal disfluencies from dancers alienated by speech disorders and physical impairment. Through technologically supported choreographic encounters, a hidden language is revealed between disabled and non-disabled dancers, providing material for a spatial and sound environment that redresses existing barriers to communication.

A digital environment enabling multicultural performers with disabilities to communicate was created as an ‘intelligent’ platform. Rather than the ‘classical’ model of mute bodies and blank spaces upon which the choreographer works (Melrose & Butcher 2005), the project advanced a technology-space interface that extends accessibility through virtual exchange. The dancers’ articulations were processed in Praat, a software tool commonly used by linguists to analyze the phonetics of speech. A script was written to record the intensity of the audio signal and the frequency of sounds ranging in pitch from 75 to 300 Hz. This numeric information, in combination with sibilant and hesitant words, was translated into sonic values in Pure Data by patching a sine-wave oscillator to the data sets. The choreographer Adam Benjamin created a dance performance in response to the data-specific soundscape. Open State received funding from the Art and Culture Promotion Fund, Arts Council Tokyo & the Great Britain Sasakawa Foundation. The work premiered at Tokyo Art Centre, Japan, in 2015 and at The House, Plymouth, in 2017.
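
The abstract describes a two-stage pipeline: frame-wise intensity and pitch (75 to 300 Hz) were extracted from the dancers’ vocalisations with a Praat script, and those values were mapped onto a sine-wave oscillator in Pure Data. The sketch below is a minimal, hypothetical approximation of that chain in Python/NumPy only; it is not the project’s actual Praat script or Pure Data patch, and every function name and parameter in it is illustrative.

```python
# Hypothetical sketch only: the work itself used a Praat script and a Pure
# Data sine-oscillator patch. This NumPy version merely approximates that
# chain: frame intensity + pitch restricted to 75-300 Hz -> sine tones.
import numpy as np


def analyse_frames(signal, sr, frame_len=2048, hop=512, f_min=75.0, f_max=300.0):
    """Return per-frame (intensity in dB, pitch in Hz), pitch limited to 75-300 Hz."""
    intensities, pitches = [], []
    lag_min, lag_max = int(sr / f_max), int(sr / f_min)
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
        intensities.append(20.0 * np.log10(rms))  # rough intensity measure in dB
        # crude autocorrelation pitch estimate, searched only in the 75-300 Hz band
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        pitches.append(sr / lag)
    return np.array(intensities), np.array(pitches)


def sonify(intensity_db, pitch_hz, sr=44100, frame_dur=0.05):
    """Map each analysis frame to a short sine tone: detected pitch -> oscillator
    frequency, intensity -> amplitude (standing in for the Pure Data patch)."""
    t = np.linspace(0.0, frame_dur, int(sr * frame_dur), endpoint=False)
    amp = 0.8 * (intensity_db - intensity_db.min()) / (np.ptp(intensity_db) + 1e-9)
    return np.concatenate([a * np.sin(2 * np.pi * f * t) for a, f in zip(amp, pitch_hz)])


# Example with a synthetic 150 Hz tone standing in for a recorded vocal fragment.
sr = 44100
test = np.sin(2 * np.pi * 150.0 * np.arange(sr) / sr)
loudness, pitch = analyse_frames(test, sr)
audio = sonify(loudness, pitch)  # could be written to a WAV file for playback
```

Here the mapping is rendered offline simply to make the intensity-to-amplitude and pitch-to-frequency translation concrete; in the work described above that translation was realised as a Pure Data oscillator patch driven by the Praat data sets.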
Original language: English
Publication status: Published - 19 Jul 2015

Keywords

  • choreography
  • disability
  • integrated dance
  • soundscape
