Mapping Strategies - Gesture to Sound Relationships


#1

Hi Glovers (Will I be done for copyright?),

I am Josh, and I am currently writing my dissertation for my Masters in Music Production (MA). My dissertation is devising and evaluating a standardised MIDI mapping framework for a bespoke gestural control system I have developed.

I am currently looking into how users of physical gesture systems (like the Mi Mu gloves) go about mapping a gesture to sound. For instance, on my system I am restricted to a very limited set of gestures. These gestures are movement along the X, Y and Z axes (left - right, up - down, front - back). My general strategy for mapping gesture to sound or events is as follows (there is a rough sketch of the idea in code just after this list):
Y axis - controls parameters such as volume, filter cutoff and the dry/wet of effects such as reverb/delay.
X axis - creative effects such as beat repeat division, reverse audio etc.
Z axis - reserved for anything that needs a sense of space, like reverb size and delay time.
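
To give a concrete (if simplified) picture, something like the Python sketch below captures the idea. The CC numbers are placeholders rather than my actual Live assignments, and it assumes each axis arrives as a normalised 0.0-1.0 value:

```python
# Rough sketch of the axis-to-parameter scheme above. The CC numbers are
# arbitrary placeholders, not the actual assignments in my Live set.
AXIS_TO_CC = {
    "y": 7,    # volume / filter cutoff / dry-wet style parameters
    "x": 20,   # creative effects (beat repeat division, reverse audio, etc.)
    "z": 21,   # "space" parameters (reverb size, delay time)
}

def axis_to_cc_value(normalised_position: float) -> int:
    """Clamp a 0.0-1.0 axis position and scale it to a 0-127 CC value."""
    clamped = max(0.0, min(1.0, normalised_position))
    return round(clamped * 127)

# Example: hand halfway up the Y axis -> CC 7, value 64
print(AXIS_TO_CC["y"], axis_to_cc_value(0.5))
```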

My system is very basic and has limited usage; however, it becomes much more interesting and creative when Ableton's macro dials and effects racks are brought into play.

I am reaching out to ask: how do you create your mappings, and what strategies do you use when creating them?

Thanks

Josh


#2

Not sure if this helps you, because as you said, it's about MIDI mapping in general. Never mind, I am facing a “similar” problem at the moment. Maybe you have already found this in your research, but take a look at the Bridge project from MiMu, which provides the data they use to recognise gestures and postures, to find some inspiration.

Do you have some kind of UI to create the mappings so far?


#3

Hi Chris,

I have a UI which I have been developing over the last 18 months. I have used an Xbox 360 Kinect and a Wiimote to pick up the gestures using Synapse. The data from Synapse is then fed into a patch I have created in Max 7 called ‘Kinect2OSC’, through which the gestures are mapped into Ableton Live. There are a number of links and examples of my work on my website which you can check out if you like.
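
If it helps to get a feel for that flow without opening Max, here is a much-simplified Python stand-in using the python-osc library. The OSC address pattern and port are assumptions about how Synapse is set up, not a definitive spec:

```python
# Simplified Python stand-in for the Kinect2OSC idea, using python-osc.
# The address pattern and port are assumptions, not a definitive Synapse spec.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_right_hand(address, x, y, z):
    # Synapse-style joint message: three floats for the hand position.
    # In the real patch these get scaled, gated and turned into MIDI CCs.
    print(f"{address}: x={x:.2f} y={y:.2f} z={z:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/righthand_pos_body", on_right_hand)  # assumed address pattern

# Synapse broadcasts on localhost; the port number here is an assumption.
server = BlockingOSCUDPServer(("127.0.0.1", 12345), dispatcher)
server.serve_forever()
```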

Thanks :slight_smile:

Josh


#4

Hi Glovers,

Just an update after thinking about my project: I am no longer going to look at the gesture-to-sound relationship; instead, I am interested in knowing what your gesture-to-parameter relationships are like, as I discussed in the previous post.
[This is a repeat from the earlier post]
The reason I am asking is that I am trying to develop a standardised mapping framework for my gestural control system, and I want to hear from users of gestural control systems which gestures they relate to different parameters. For example, I relate volume to the Y axis and panning to the X axis.

I am reaching out to ask: how do you create your mappings, and what strategies do you use when creating them?

Thanks

Josh


#5

Hey Josh,

In case you missed it, I just found that paper referenced on the mi.mu website. It may address your question, and it also references another (I think) interesting piece of work, namely:

[23] T. Winkler. Making motion musical: Gesture mapping strategies for interactive computer music. In ICMC Proceedings, pages 261–264, 1995.

I took a look at your Kinect project, awesome work! I don’t know how the Kinect detects gestures, but I have to find out soon… of course it analyses the information from the different cameras and so on, but I mean the software side… did you read any papers explaining the algorithms behind the analysis, like dynamic time warping or HMMs?


#6

Hey Chris,

Thanks for the link to the paper; it is a great paper and I used it in my undergraduate project.

As for my Kinect project: the gestures are detected by a piece of software called Synapse (http://synapsekinect.tumblr.com). This data is then fed into Max 7, where it is passed through a number of gates that allow the Wiimote to change which parameter is selected. The data is packaged in Max 7 as CC values, which are sent on to any third-party MIDI-accepting application, in my case Ableton Live.
I didn’t read much about the algorithms behind the Synapse software, mainly because I am a Creative Music Technologist and not a computer programmer, so it all went a bit over my head.
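
If it helps to picture the gating idea, here is a much-simplified Python sketch using the mido library. The CC numbers and the parameter bank are placeholders, and in reality this all lives inside the Max 7 patch rather than Python:

```python
# Much-simplified stand-in for the Max 7 gating: a Wiimote button press cycles
# which parameter (CC number) the hand position is routed to; everything else
# is ignored. CC numbers and the default MIDI port are placeholders.
import mido

PARAMETER_BANK = [7, 20, 21]   # e.g. volume, beat repeat division, reverb size
selected = 0                   # index advanced by a Wiimote button press

outport = mido.open_output()   # opens the system's default MIDI output port

def on_wiimote_button():
    """Cycle the gate to the next parameter in the bank."""
    global selected
    selected = (selected + 1) % len(PARAMETER_BANK)

def on_hand_position(normalised: float):
    """Route the gesture only to the currently selected parameter."""
    value = round(max(0.0, min(1.0, normalised)) * 127)
    outport.send(mido.Message("control_change",
                              control=PARAMETER_BANK[selected],
                              value=value))
```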

Thanks for taking the time to look at my work :slight_smile: If you want to talk further about my project, feel free to drop me a line (my email address is available on my website).

Josh