/ ˈluː.ki.dus /
( A Microscope Performance )




LU.KI.DUS. is a live cinema and sound performance focused on scale, pattern, and awareness. We developed a technique to automate a microscope's stage position through computer-controlled motors. Our goal is to expose the unseen yet omnipresent events that unfold at the microscopic scale, crafting a distinctive kind of microscopic world that leans toward the experimental rather than the scientific.

Using different combinations of crystals, contaminated water, and alcohol, we set ourselves on a journey to find the unknown. The audience is invited to witness this journey. Only a few rehearsals took place: we were presenting live matter / life to the public, and nobody can rehearse for life.

Juan Manuel Escalante & Kurt Kaminski

Mechanical advisor: Akshay Cadambi
Music: Juan Manuel Escalante
Chemicals and Reactions: Kurt Kaminski



Photograph: Curtis Roads    

Photographs taken at Elings Hall / UCSB (US)








To achieve a cinematic effect and reduce vibration caused by hand movement, we designed a 3D-printed platform. Both axes (X and Y) of the microscope are controlled by stepper motors. Kurt Kaminski's previous Houdini experience, from his time at DreamWorks, came into play to model the platform. Akshay Cadambi calculated the exact rotation needed for the gears.
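The gear arithmetic behind that calculation can be sketched as follows. All of the values below (motor step count, gear tooth counts, stage travel per knob turn) are illustrative assumptions, not the figures actually computed for the performance rig:

```python
# Illustrative sketch of the stepper/gear arithmetic (assumed values,
# not the actual numbers computed for the microscope platform).

STEPS_PER_REV = 200     # full steps per motor revolution (typical 1.8° stepper)
MOTOR_TEETH = 16        # teeth on the motor gear (assumption)
KNOB_TEETH = 48         # teeth on the gear fixed to the stage knob (assumption)
MM_PER_KNOB_REV = 1.0   # stage travel per full knob revolution (assumption)

def steps_for_travel(mm):
    """Motor steps needed to move the stage `mm` millimetres."""
    gear_ratio = KNOB_TEETH / MOTOR_TEETH   # motor revolutions per knob revolution
    knob_revs = mm / MM_PER_KNOB_REV
    return round(knob_revs * gear_ratio * STEPS_PER_REV)

print(steps_for_travel(0.5))  # with these assumed values: 300 steps
```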










A Processing sketch captured the live feed from the microscope. Before displaying it on the big screen, it applied basic mirroring effects and slight code-driven distortions. These distortions were applied at our discretion during the concert.
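The mirroring effect is conceptually simple. Here is a minimal sketch in plain Python, operating on a frame stored as a row-major grid of pixel values; the actual performance code was a Processing sketch working on the camera texture:

```python
# Minimal sketch of a horizontal mirror effect (the real code was a
# Processing sketch; this is a plain-Python illustration).

def mirror_horizontal(frame):
    """Replace the right half of each row with a mirror of the left half."""
    out = []
    for row in frame:
        half = row[: len(row) // 2]
        out.append(half + half[::-1])
    return out

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
print(mirror_horizontal(frame))  # [[1, 2, 2, 1], [5, 6, 6, 5]]
```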




A 15-minute electronic music soundtrack was written for the live visual content. The soundtrack is composed of five movements (as seen in the sketch above):

1. Opening
2. Caffeine
3. Acrylic
4. Crystals
5. Spring homage

It featured, mainly, droning sounds, atmospheres, and medium- to high-pitched sequences. Both the wave equation and the acoustic vibration principles seen in class were implemented to generate the inputs for these sequences.



Layers of sound for the "Skin care" movement.


Spatializing sound in Elings Hall.


Record of sound channels per movement


Motifs and general composition ideas.

First substance mixing registry.


Secondary map of the five movements.


A. Graphic output of wave-equation images was taken as the source to generate some of the sequences.
These values were saved as a TXT file.
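That step can be sketched as follows, assuming a 1D finite-difference wave equation; the grid size, step count, and output filename are illustrative assumptions, not the actual parameters used:

```python
# Sketch: simulate a 1D wave equation with finite differences and dump the
# sampled values to a TXT file (all parameters are illustrative assumptions).

N, STEPS, C = 32, 40, 0.5       # grid points, time steps, Courant number

u_prev = [0.0] * N
u = [0.0] * N
u[N // 2] = 1.0                 # initial "pluck" in the middle of the string

rows = []
for _ in range(STEPS):
    u_next = [0.0] * N
    for i in range(1, N - 1):
        # standard explicit update: u_next = 2u - u_prev + C^2 * (spatial Laplacian)
        u_next[i] = 2 * u[i] - u_prev[i] + C**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next
    rows.append(" ".join(f"{v:.4f}" for v in u))

with open("wave_values.txt", "w") as f:
    f.write("\n".join(rows))
```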


B. These numbers were then used as sequence INPUTS in SuperCollider. (Important note: this data was designed to be plugged into different \synths. This increased our possibilities for composing a sound piece, since the exact same numbers could generate different tonalities and timbres when connected to different instruments, i.e. \synths. It also meant we would get results other than white noise.)
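The idea of feeding one number sequence into different instruments can be sketched like this; the two mapping functions are hypothetical stand-ins for the SuperCollider \synths, not the actual instruments used:

```python
# Sketch: the same raw values become different pitch material depending on
# which "instrument" mapping receives them (hypothetical stand-ins for the
# SuperCollider \synths).

values = [0.12, 0.87, 0.45, 0.33]   # values as read from the TXT file

def drone_synth(v):
    """Map a raw value into a low drone register (55-110 Hz)."""
    return 55.0 + v * 55.0

def bell_synth(v):
    """Map the same raw value into a high bell register (880-1760 Hz)."""
    return 880.0 + v * 880.0

print([round(drone_synth(v), 2) for v in values])
print([round(bell_synth(v), 2) for v in values])
```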

An example of a synth is screen-captured below.



C. SuperCollider reads the file, then controls the timing and sequence repetitions (loops) of the piece (so that they do not all play at the same time).

An example of the above appears in the following image. The first section of code reads the TXT file and stores it as an Array. All of the frequencies are stored in the first position of a nested array.
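In plain-Python terms, the structure looks roughly like this; the frequency values and wait time are assumptions, and the generator is only a loose analogue of a SuperCollider Routine:

```python
# Sketch, in Python, of the SuperCollider layout described above: a nested
# array whose first position holds all the frequencies, stepped through by
# a routine with a fixed wait time (values and timing are assumptions).

nested = [
    [220.0, 330.0, 440.0],   # position 0: all the frequencies
    [0.50, 0.50, 1.00],      # later positions: other data (e.g. durations)
]

def routine(data, wait=0.25):
    """Yield (frequency, wait) pairs the way an SC Routine would play them."""
    for freq in data[0]:
        yield freq, wait

for freq, wait in routine(nested):
    print(freq, wait)
```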

The highlighted section of the code contains a sample Routine that shows how the data is actually inserted into one \synth, in this example a Gendy unit generator (with samples below).



D. Finally, all the data was sent to Logic X, which equalized, compressed, and fine-tuned the sounds for live performance (reverberation and slight distortion were also applied at this stage). Total sound channels used: 18, plus 4 buses. For the live performance, the file was reconfigured from stereo to surround (DST 7).*

* Notice the sound spatialization on each track.




Diagram of the process:








Modified in Logic X with a flanger, reverberation, and compression/EQ. These sounds are used to create an atmosphere before the electronic part.

To avoid a completely randomized result, a % (modulo) operation is used.
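A minimal sketch of that trick, with an assumed range (folding into one octave of semitone offsets); the actual constraint used in the piece may differ:

```python
# Sketch: constraining a random stream with modulo so the output stays
# inside a musically useful range instead of being fully random
# (the range here is an illustrative assumption).

import random

random.seed(7)   # fixed seed so the sketch is repeatable
raw = [random.randrange(10_000) for _ in range(8)]

# % 12 folds any integer into one octave of semitone offsets (0-11).
semitones = [v % 12 for v in raw]
print(semitones)
```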


A linear-interpolating sound generator based on the equations above, used to create a semi-random melody. This melody can be heard during the climax of the piece.
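Linear interpolation between randomly chosen anchor pitches can be sketched like this; the anchor range and step counts are assumptions for illustration:

```python
# Sketch of a linear-interpolating melody generator: pick random anchor
# frequencies, then fill the gaps between them by linear interpolation
# (anchor range and step count are illustrative assumptions).

import random

random.seed(3)
anchors = [random.uniform(220.0, 880.0) for _ in range(4)]

def interpolate(a, b, steps):
    """Linearly interpolated values from a to b, excluding b itself."""
    return [a + (b - a) * i / steps for i in range(steps)]

melody = []
for a, b in zip(anchors, anchors[1:]):
    melody += interpolate(a, b, steps=4)
melody.append(anchors[-1])           # close the line on the final anchor

print([round(f, 1) for f in melody])
```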



A dynamic stochastic synthesis generator conceived by Iannis Xenakis. Three frequencies are given to each Gendy trigger; in this case, the data comes from the bitmap images.
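Gendy's core idea, random walks over the breakpoints of a waveform, can be roughly sketched as follows. This is a loose paraphrase for illustration, not SuperCollider's actual Gendy implementation:

```python
# Rough sketch of dynamic stochastic synthesis: each waveform breakpoint's
# amplitude does a bounded random walk from cycle to cycle (a loose
# paraphrase of Xenakis's idea, not SuperCollider's Gendy UGen).

import random

random.seed(11)

def gendy_cycles(n_points=12, n_cycles=3, step=0.2):
    """Return `n_cycles` lists of breakpoint amplitudes in [-1, 1]."""
    amps = [0.0] * n_points
    cycles = []
    for _ in range(n_cycles):
        for i in range(n_points):
            amps[i] += random.uniform(-step, step)   # random-walk step
            amps[i] = max(-1.0, min(1.0, amps[i]))   # clip to audio range
        cycles.append(list(amps))
    return cycles

for cycle in gendy_cycles():
    print([round(a, 2) for a in cycle])
```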


The bitmap data is used to generate noise. The values are inserted into the trigger of an impulse. These sounds are wave/ocean-like; they are used as a transition after the opening and as a passage separation in the middle of the electronic part (Crystals).


A single note with a long release duration.


Single number as sound.


Equation average as a single note.


A single note with a bitmap-defined impulse.




A sequenced forest. 3 seconds between each.


Sequenced numbers from an Array. 1.5 sec between each.



Same note in a lower range.


Several notes in a sequence.





Same sequence. 0.125 sec between each.



3 notes (from three images) with a space of 1 second.





Randomizing the frequency multiplier (1 and 2).



15 Gendys being put together.








Higher pitched sounds.









Here is a diagram of the specific places within the composition where these sounds occur. Note the opening, the dust, the tubular bells, and the white-noise waves.



The full soundtrack of the performance lasts 15 minutes (the soundtrack presented in the video is a modified version). Above, the full version.

Download full version as:
MP3 (17.3 MB)
WAV (230 MB)





