Hi Glovers (Will I be done for copyright?),
I am Josh, and I am currently writing my dissertation for my Masters in Music Production (MA). My dissertation looks into devising and evaluating a standardised MIDI mapping framework for a bespoke gestural control system I have developed.
I am currently looking into how users of physical gesture systems (like the MiMu Gloves) go about mapping a gesture to sound. For instance, on my system I am restricted to a very limited set of gestures: movement along the X, Y and Z axes (left-right, up-down, front-back). My general strategy for mapping gesture to sound or events is as follows:
Y axis - controls parameters such as volume, filter cutoff, and the dry/wet of effects such as reverb/delay.
X axis - creative effects such as beat-repeat division, reversed audio, etc.
Z axis - reserved for anything spatial, like reverb size and delay time.
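For anyone curious, the axis-to-parameter scheme above boils down to scaling a normalised hand position into a MIDI CC value. Here is a minimal sketch in Python; the CC numbers and the 0.0-1.0 position range are hypothetical examples for illustration, not my actual configuration:

```python
# Sketch of the axis-to-parameter mapping described above.
# The CC numbers below are hypothetical placeholders, not a real setup.
AXIS_CC = {
    "y": 7,    # e.g. volume (or filter cutoff, effect dry/wet)
    "x": 20,   # e.g. beat-repeat division, reverse audio
    "z": 21,   # e.g. reverb size, delay time
}

def axis_to_cc(axis: str, position: float) -> tuple[int, int]:
    """Map a normalised axis position (0.0-1.0) to a (cc_number, cc_value) pair."""
    position = min(1.0, max(0.0, position))      # clamp noisy out-of-range readings
    return AXIS_CC[axis], round(position * 127)  # MIDI CC values run 0-127

# Example: hand halfway up the Y axis
print(axis_to_cc("y", 0.5))  # (7, 64)
```

In practice the resulting (CC number, value) pair would be sent out over a virtual MIDI port for Ableton to pick up, but the scaling logic itself is this simple.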
My system is very basic and has limited usage on its own; however, once the macro dials and FX racks in Ableton are brought into play, it becomes very interesting and creative.
I am reaching out to you to find out how you create your mappings, and what strategies you use when designing them?