DIY 3D sound control


Surface V2 is currently in development, redesigned from the ground up: it now features a mechanical surface, data processing on an Arduino, and serial MIDI signals that can be interpreted directly by the DAW of your choice! It supports up to 16 parallel inputs on 3 axes, and if I ever get around to completing the whole thing it'll be awesome! xD

Until then, please enjoy the Surface V1:

(The code for Surface V2 can be found on my Github)


The SURFACE is a sound control surface that allows x-, y- and z-axis control of multiple simultaneous inputs. It generates control data which can be used to drive digital synthesizers in third-party software like SuperCollider or Max4Live.

The Surface.

The control surface is a construction of rubber bands and textiles, designed to give tactile feedback on finger position and pressure. Its lack of a traditional keyboard underlines its attempt to leave behind the known intervals of classical scales and encourages the musician to investigate the possibilities that lie beneath and in between them.

Data Processing.

Data is read from a Microsoft Kinect camera placed under the control surface. The depth data is used to calculate finger position and pressure (currently in the Processing PDE). An ID is assigned to each detected finger to track its position over time – a necessary step to ensure solid multitouch performance. The blob tracking itself is a three-step process:

– downsampling of the input data
– blob detection through a flood-find algorithm
– tracking through position/distance properties

In a last step, the gathered information is sent as OSC messages, containing scaled values with a range of currently 1024 steps. (Further investigation is needed to determine the optimum ranges / format of the generated OSC messages.)
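To give a rough idea of the detection and tracking steps, here is a minimal Python sketch of the same approach (not the actual Processing code – the grid values, threshold and matching radius are made up for illustration; depth below the threshold counts as a touch, the blob centroid is the finger position and its mean depth stands in for pressure):

```python
from collections import deque

def detect_blobs(grid, threshold):
    """Flood-find blob detection on a downsampled depth grid.
    Cells with depth below `threshold` count as touches."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or grid[y][x] >= threshold:
                continue
            # flood-find all connected touch cells of this blob
            queue, cells = deque([(x, y)]), []
            seen[y][x] = True
            while queue:
                cx, cy = queue.popleft()
                cells.append((cx, cy))
                for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                            and grid[ny][nx] < threshold:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            # centroid = finger position, mean depth = pressure proxy
            n = len(cells)
            blobs.append({
                "x": sum(c[0] for c in cells) / n,
                "y": sum(c[1] for c in cells) / n,
                "z": sum(grid[c[1]][c[0]] for c in cells) / n,
            })
    return blobs

def track(prev, blobs, next_id, max_dist=3.0):
    """Assign stable IDs: match each new blob to the nearest previous
    blob within `max_dist`; unmatched blobs get fresh IDs."""
    unused = list(prev)
    for b in blobs:
        best, best_d = None, max_dist
        for p in unused:
            d = ((b["x"] - p["x"])**2 + (b["y"] - p["y"])**2) ** 0.5
            if d < best_d:
                best, best_d = p, d
        if best is not None:
            unused.remove(best)
            b["id"] = best["id"]
        else:
            b["id"] = next_id
            next_id += 1
    return blobs, next_id
```

Running `detect_blobs` every frame and feeding the result through `track` with the previous frame's blobs is enough to keep IDs stable as long as a finger moves less than `max_dist` cells between frames.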


Supercollider / Max4Live

The generated data is sent via UDP as standard OSC messages and further interpreted by SuperCollider or Max4Live.
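As a rough idea of what goes over the wire, here is a minimal hand-rolled OSC 1.0 message sender in Python. The `/finger` address and argument layout are my illustration, not necessarily the messages the Surface actually sends; 57120 is SuperCollider's default language port:

```python
import socket
import struct

def osc_pad(b):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address, *args):
    """Build a minimal OSC 1.0 message with int32 arguments:
    padded address, padded type-tag string, big-endian int32 payload."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "i" * len(args)).encode("ascii"))
    for a in args:
        packet += struct.pack(">i", a)  # big-endian int32
    return packet

# hypothetical layout: /finger <id> <x> <y> <pressure>, values in 0..1023
msg = osc_message("/finger", 0, 512, 300, 87)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57120))
```

On the SuperCollider side an `OSCdef` (or `OSCresponder` in older versions) listening on that address can then map the values onto synth parameters.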



I’m quite pleased with the tracking algorithm’s performance – in my opinion, ~1 ms is a very acceptable value (considering that this is only the second tracking algorithm I’ve ever written). A remaining minor problem is the time needed to get the current depth data from the Kinect camera. My guess is that there is a synchronisation problem, but I don’t intend to investigate further, as the plan is to omit the camera entirely and move to an Arduino-controlled pressure-sensor matrix for data generation.