Miha Ciglar

Ultrasonic toys


Miha Ciglar is a composer and researcher in the field of audio technologies. In 2008, Ciglar founded the Institute for Sonic Arts Research (IRZU). He is the initiator and curator of the international sonic arts festival EarZoom, which has taken place annually in Ljubljana since 2009. In 2011 he founded Ultrasonic audio technologies, a start-up company developing a wide range of products, including new musical interface controllers based on non-contact tactile feedback and computer vision, directional speakers based on modulated ultrasound, and several mobile applications combining music making and gaming.

Syntact™ is a new, "hands-free" musical interface/controller utilizing a non-contact tactile feedback technology based on airborne ultrasound. High-energy ultrasound creates a force field in mid-air that can be sensed in a tactile way. The interface allows musicians to feel the actual sound (its temporal and harmonic texture) while a computer vision system interprets their hand gestures, allowing them to virtually mold and shape the sound, i.e. change its acoustic appearance, directly with their hands. The method of generating tactile feedback in multimedia applications using airborne ultrasound was first proposed by Hoshi et al., whose group created a tactile display for adding tactile sensation to holographic images.

Syntact™ consists of 121 ultrasound transducers arranged on a concavely shaped surface. The piezoelectric transducers operate at their resonant frequency of 40 kHz. The input audio signal modulates the amplitude of a 40 kHz sine carrier, which drives the transducers. Because the high-frequency content is filtered out both by our ears and by the tactile sensors of our skin, the effective output is perceived as (roughly) equal to the input audio signal. The acoustic energy projected through all 121 transducers is concentrated at the instrument's focal point, 25 cm above the transducers, at the center of the virtual sphere that defines the concavity of the surface. This point is equidistant from all of the transducers, so all 121 acoustic signals arrive with equal phase. As a consequence, the sonic energy adds up at this point and raises the acoustic pressure to its maximum, establishing a strong pressure field that the skin can sense in a tactile way.
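The amplitude-modulation scheme described above can be sketched in a few lines. This is only an illustrative reduction, not Syntact's actual driver code; the sample rate, normalization and carrier parameters are assumptions chosen so the 40 kHz carrier is representable.

```python
import numpy as np

FS = 192_000          # assumed sample rate, high enough to carry 40 kHz
CARRIER_HZ = 40_000   # resonant frequency of the piezo transducers

def am_modulate(audio, fs=FS, carrier_hz=CARRIER_HZ):
    """Amplitude-modulate a 40 kHz sine carrier with an audio signal.

    The audio is shifted into [0, 1] so the carrier's envelope follows
    the input waveform; ears and skin filter out the carrier itself,
    leaving (roughly) the original signal as the perceived output.
    """
    t = np.arange(len(audio)) / fs
    envelope = 0.5 + 0.5 * audio / np.max(np.abs(audio))  # into [0, 1]
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# 100 Hz test tone (0.1 s) modulating the ultrasonic carrier
audio = np.sin(2 * np.pi * 100 * np.arange(FS // 10) / FS)
drive = am_modulate(audio)
```

Because the 121 transducer signals arrive at the focal point in phase, their pressure contributions sum coherently there, which is why the tactile sensation is strongest 25 cm above the array.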

The general idea behind Syntact™ is to enable playful interaction with sound, so one of the crucial components of the instrument is its feed-forward / motion-sensing section. Since 2010, the input section of Syntact™ has undergone several conceptual revisions. One of the more successful implementations, based on acoustic feedback, is still used by the author in his performances. In a later feed-forward solution, three ultrasonic receivers placed around the emitter array analyzed the amount of acoustic energy reflected by the musician's hand in different positions. The final realization, however, uses a computer vision system to track and analyze the motion, location and shape of the hand.
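One simple way to turn the three reflected-energy readings into a control signal is an energy-weighted centroid over the receiver positions. The receiver layout and the centroid reduction below are assumptions for illustration; the analysis actually used in Syntact is not detailed in the text.

```python
import numpy as np

# Hypothetical layout: three receivers at 120-degree spacing around the array.
RECEIVER_ANGLES = np.deg2rad([0.0, 120.0, 240.0])
RECEIVER_POS = np.stack([np.cos(RECEIVER_ANGLES),
                         np.sin(RECEIVER_ANGLES)], axis=1)

def estimate_hand_offset(energies):
    """Estimate the hand's lateral offset as an energy-weighted centroid.

    `energies` are the reflected-ultrasound levels at the three
    receivers; stronger reflection toward one receiver pulls the
    estimate toward it.
    """
    w = np.asarray(energies, dtype=float)
    return (w[:, None] * RECEIVER_POS).sum(axis=0) / w.sum()

centered = estimate_hand_offset([1.0, 1.0, 1.0])  # balanced -> near origin
tilted = estimate_hand_offset([2.0, 1.0, 1.0])    # pulled toward receiver 0
```

A centroid of this kind only gives a coarse lateral position, which helps explain why the final realization moved to computer vision for full motion, location and shape tracking.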

Syntact™ can be seen as a new musical instrument/controller, so it was important to match that novelty in its feed-forward section as well. During testing and development it became evident that, with only raw image-descriptor data to work with, creating interesting and diverse real-time compositions would demand a lot of effort from the user. The goal was therefore a mapping solution attractive to a wide range of users, while still exposing the low-level image-descriptor data so that experienced musicians can create individual mappings.

The default mapping of the interface is now based on a pre-defined, sophisticated concept that allows easy and playful generation of meaningful and diverse musical structures. Sound generation is based on MIDI files provided by the user: the multi-track files can be pre-composed by the user, or any existing MIDI tracks can be used. With different hand gestures the musician then triggers different instruments or instrument groups, which generate output according to the pitch and timing information in the MIDI files "played" (silently) in the background. The MIDI files therefore only define the possibility of a note occurring at a certain time; whether it actually sounds is further conditioned by a combination of image descriptors. While the possible onset times are quantized to a selected grid of smallest time units (e.g. sixteenth notes), the pitch can also be reorganized in real time through different gestures, guided by an automatic analysis of the harmonic progressions in the selected MIDI composition. The result is a musical structure in which the pitch is always "right" and every event is always "in time". The hand gestures define the dynamic variations, the temporal density of events, and some basic harmonic alterations of the pre-selected / pre-composed piece.
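The gating logic described above can be sketched as follows: the score defines candidate notes on a quantized grid, and a gesture-derived set of active tracks decides which candidates actually sound. The data layout and the track-set gesture mapping are simplifying assumptions, not Syntact's published implementation.

```python
from dataclasses import dataclass

SIXTEENTH = 0.25  # grid unit in beats: onsets quantize to sixteenth notes

@dataclass
class Note:
    track: str    # instrument / instrument group the note belongs to
    pitch: int    # MIDI pitch number
    onset: float  # onset in beats, already on the sixteenth-note grid

# Tiny stand-in for a user-supplied multi-track MIDI file.
SCORE = [
    Note("bass", 36, 0.0), Note("bass", 36, 1.0),
    Note("lead", 67, 0.25), Note("lead", 69, 0.5),
]

def notes_at(score, beat, active_tracks):
    """Return notes whose quantized onset falls on `beat` and whose
    track is currently enabled by a hand gesture.

    The score only defines *possible* events; the gesture-derived
    `active_tracks` set decides which of them actually sound, so the
    output stays "in time" and "in tune" by construction.
    """
    q = round(beat / SIXTEENTH) * SIXTEENTH  # snap to the grid
    return [n for n in score if n.onset == q and n.track in active_tracks]

hits = notes_at(SCORE, 0.26, {"lead"})  # gesture enables only the lead track
```

Real-time pitch reorganization would add a second stage that remaps each candidate's pitch against the analyzed harmonic progression before it sounds; that stage is omitted here.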

See more in ENTER+ / Slovenia