Computation is odd. It is one of the strangest things we have discovered, and eighty years on we still fail to fully grasp its inner workings. And so we struggle to keep the unseen ramifications of our decisions under control. Faced with such a strange yet essential beast, the only sane strategy is to try and tame it.
Betablocker is an artistic contribution to understanding low-level computation at its intersection with livecoding practice. It offers strategies for presenting computation as tangible material, e.g. by making process and dynamics as audible as the code that describes them.
The talk gives insights into years of artistic research practice covering algorithmic composition, sound generation and autonomous coding, viewed in the light of self-manipulating code.
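The core idea of making a self-manipulating process audible can be sketched as a toy virtual machine. This is an illustrative assumption, not Betablocker's actual instruction set: a tiny VM over circular memory whose program can rewrite itself, with every executed step emitting a pitch so the process is heard alongside the code.

```python
# Toy sketch (NOT Betablocker's real ISA): a self-modifying VM over
# circular memory; each executed step emits a MIDI-style pitch.
SIZE = 16

def run(mem, steps=32):
    mem = (mem + [0] * SIZE)[:SIZE]       # pad program to full memory size
    pc, pitches = 0, []
    for _ in range(steps):
        op = mem[pc] % 4
        arg = mem[(pc + 1) % SIZE]
        if op == 1:                        # INC: increment cell `arg`
            mem[arg % SIZE] = (mem[arg % SIZE] + 1) % 256
        elif op == 2:                      # self-modification: copy opcode into memory
            mem[arg % SIZE] = mem[pc]
        elif op == 3:                      # JMP: jump to `arg`
            pc = arg % SIZE
            pitches.append(36 + mem[pc] % 48)
            continue
        pitches.append(36 + mem[pc] % 48)  # every step sounds a note
        pc = (pc + 2) % SIZE
    return pitches

melody = run([1, 5, 2, 0, 3, 0])           # a short program that rewrites itself
```

Because code and data share one circular memory, the program gradually rewrites itself, and the emitted melody drifts with it: the dynamics of the process become as audible as the code.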
»Oxidising the spectrum« (Belfast, 2004) is a life-form music generator exploring the possibilities of microbial electrochemistry in the compositional environment. It is a collaborative work between Ricardo Climent (music composition) and Quan Gan (chemical engineering). An »ensemble« of Microbial Fuel Cells (MFC) can be built with relatively unsophisticated equipment and is capable of generating complex low-voltage patterns. As the microorganisms oxidise the fuel (a carbohydrate), they start a process of voltage charge and discharge, whose output data formed the basis for constructing a musical system. However, rather than mapping the life cycle of these cultures directly into sound, the creative process explored ways of destabilising the living conditions of the cells, in order to reverse-engineer their cycles and patterns in search of musical expression. The performer of the microbial ensemble is restricted to manipulating the biological and chemical system (as »the instrument«) and cannot operate the computer, which receives the MFC data and maps it into musical parameters. The microbial ensemble comprises Janthinobacterium lividum, Pichia anomala, Saccharomyces diastaticus, a mix of four further yeasts, and Proteus vulgaris.
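The computer-side mapping can be sketched minimally. The voltage range, note range and linear mapping below are invented assumptions for illustration, not the actual system used in the piece:

```python
# Hypothetical sketch: parameter-mapping an MFC voltage trace onto
# MIDI note numbers. Ranges and the linear mapping are assumptions.

def mfc_to_notes(voltages, v_min=0.0, v_max=0.8, low_note=36, high_note=84):
    """Linearly map each voltage sample (in volts) to a MIDI note number."""
    notes = []
    span = v_max - v_min
    for v in voltages:
        x = min(max((v - v_min) / span, 0.0), 1.0)   # clamp to [0, 1]
        notes.append(round(low_note + x * (high_note - low_note)))
    return notes

# A synthetic charge/discharge cycle, standing in for real MFC data.
cycle = [0.1 + 0.07 * (i % 10) for i in range(20)]
notes = mfc_to_notes(cycle)
```

Destabilising the cells' living conditions then amounts to perturbing the input series, which the mapping turns into audible changes of contour.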
The talk focuses on the sonification of neurodynamics and biosignals and presents visions for the music of the future. A Brain-Computer-Music-Interface (BCMI) transforms brain activity into audio or sonified signals, resulting in sound or music, and could be realized as a real-time, thought-controlled music instrument. Further engineering design may implement the Human-CMI and the »Neurosynthesizer«. Just as a synthesizer generates sound using control voltages, the Neurosynthesizer should be able to process and control sound synthesis through neuronal activity. These novel music devices could create new music genres. Potential applications for art performances as well as for therapy, e.g. for the treatment of auditory processing disorder, chronic depression and autism, will be discussed.
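The control-voltage analogy can be illustrated with a minimal sketch, assuming a synthetic EEG signal and an invented band-to-parameter mapping (neither is part of the actual Neurosynthesizer design):

```python
# Illustrative sketch of the control-voltage analogy: band power from a
# synthetic EEG signal drives a synthesis parameter, much as a control
# voltage drives an analogue module. Signal, ranges and the final
# mapping are assumptions made for this example.
import math

def band_power(samples, fs, lo, hi):
    """Crude band power via a direct discrete Fourier sum over the band."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

fs = 128                                               # sample rate in Hz
t = [i / fs for i in range(fs)]                        # one second of samples
eeg = [math.sin(2 * math.pi * 10 * x) for x in t]      # strong 10 Hz "alpha"
alpha = band_power(eeg, fs, 8, 13)                     # alpha-band power
cutoff = 200 + 50 * alpha                              # drive a filter cutoff (Hz)
```

In a real-time instrument the same computation would run on successive short windows, so that the performer's ongoing neuronal activity continuously steers the synthesis.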
Sonification can be approached as a practice based on data. As such, sonification is indifferent towards distinctions along popular categories such as art and science. The sonic arts, however, have a lot to say about the sonic aspects of sonifications, and scientific practices offer methods to deal with data, which in turn audibly influence the result. In my presentation, I will take listening modes as a point of departure, which offer ways to shed light on sonification as a practice that combines data handling and creative sonic decision-making. I will illustrate my talk with examples of sonifications in multisensorial artistic as well as scientific contexts, and I will argue that a listening-mode-based approach contributes to sonifications beyond the art-science distinction.
The term molecular sonification encompasses all procedures that turn data derived from chemical systems into sound. Nuclear magnetic resonance (NMR) data are particularly well suited for molecular sonification, as their range of resonant frequencies spans only a few tens of kHz. The structure of the molecule being analysed is directly related to the features present in its NMR spectra. It is therefore possible to select molecules according to their structural features in order to create sounds in preferred frequency ranges and with the desired frequency content and density. The talk focuses on data sources, molecule selection and sonification strategies of commonly used spin-½ systems including the 1H, 13C, 15N, 19F and 31P nuclei. Implications of using chemical data in music composition are discussed.
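One straightforward strategy is additive synthesis over the spectral peaks. The sketch below assumes a pre-extracted peak list; the peak frequencies, intensities and the fixed downscaling factor are invented for illustration, not real spectra:

```python
# Minimal additive-synthesis sketch of molecular sonification: each
# (frequency, intensity) NMR peak becomes one sine partial. The peak
# list and the `scale` factor are illustrative assumptions.
import math

def sonify_peaks(peaks, duration=1.0, fs=8000, scale=0.05):
    """Render (frequency_hz, amplitude) peaks as summed sine partials.
    `scale` shifts the tens-of-kHz NMR range into the audible band."""
    n = int(duration * fs)
    out = [0.0] * n
    for freq, amp in peaks:
        audible = freq * scale                 # e.g. a 20 kHz offset -> 1 kHz tone
        for i in range(n):
            out[i] += amp * math.sin(2 * math.pi * audible * i / fs)
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]             # normalise to [-1, 1]

# Invented 1H-style peak list: (offset in Hz, relative intensity).
signal = sonify_peaks([(8000, 1.0), (12000, 0.5), (20000, 0.25)])
```

Because the peak positions follow from molecular structure, choosing molecules by structural features directly shapes the frequency content and density of the resulting sound.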
Sonification as a widespread phenomenon did not emerge before both various kinds of datasets and suitable technical media for reinterpreting them in the sonic domain became easily accessible. Despite earlier examples of analogue sonification, as a scholarly subject it only appeared in the digital age. A universal representation of data is indeed a technical precondition for sonification, but not necessarily a digital, i.e. discrete and symbolic, one. Most likely, cultural as well as technical circumstances contributed to the emergence of sonification, both in scientific research and in the arts. My presentation will identify a few of them, namely the increased significance of sound and some related media conditions, along with recent novel approaches to data from artistic perspectives.
In recent decades, sonification has become increasingly popular in the domains of art and science popularisation. In this talk, I argue that sonification grips the public imagination through the promise of sublime experiences. In the public discourse surrounding sonification, sound is often framed as immersive and emotional, in contrast to the supposedly detached sense of vision. This reinforces the idea that sound has no place in specialist science, inadvertently undermining the efforts of sonification researchers to establish sonification as a scientific method.
The Auditory Culture of Science: Trained Ears, Auditory Displays and the Prehistory of Data Sonification
This talk ventures into the prehistory of sonification, the auditory equivalent of data visualization. By looking at key examples from diverse fields such as medicine, the life sciences, physics and the geosciences, it locates the role of hearing, acoustic technologies and practices of trained listening within the modern experimental sciences. More specifically, it shows how acoustemic practices, i.e. sonic practices and modes of listening used to generate new insights, evidence and knowledge, moved from auditory observation of acoustic phenomena to deliberate transduction of signals and data structures into sound events. By connecting the infrastructures of epistemic listening to surrounding cultural, political, intellectual and material contexts, the paper portrays a lively and fertile auditory culture of scientific practice and discusses the discursive and material grounds which gave rise to scientific sonification as a field of research and practice in the 1980s and 90s.
Model-Based Sonification (MBS) is a technique for sonifying data based on the data’s inherent structure. In contrast to Parameter Mapping Sonification, in MBS the data serve to define dynamical systems, akin to physical models, which users can excite interactively, in turn receiving the system’s response as the auditory representation. The ability to hear the inherent features of a dataset makes MBS a suitable option in exploratory data analysis, where explicit knowledge about the data is absent. The talk will demonstrate a technique called Particle Trajectory Sonification as an example of MBS used to analyse the cluster structure of high-dimensional data.
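The principle can be sketched in one dimension. This is a simplified toy, not the published method: the choice of Gaussian attraction, the damping term and the Euler integration are assumptions made to keep the example short.

```python
# Toy sketch of Particle Trajectory Sonification (1-D, simplified):
# a test particle moves in a potential formed by the data points, and
# its velocity trace becomes the audio signal, so cluster structure
# shapes the resulting timbre. Parameters are illustrative assumptions.
import math

def particle_sonification(data, start, steps=2000, dt=0.01, sigma=0.5, damping=0.999):
    pos, vel = float(start), 0.0
    audio = []
    for _ in range(steps):
        # Force: sum of Gaussian attractions toward each data point.
        force = sum((d - pos) * math.exp(-((d - pos) ** 2) / (2 * sigma ** 2))
                    for d in data)
        vel = (vel + dt * force) * damping     # Euler step with slight damping
        pos += dt * vel
        audio.append(vel)                      # velocity trace = sound signal
    return audio

# Two 1-D clusters; a particle released nearer one of them falls into
# its potential well and oscillates, which is heard as a decaying tone.
data = [0.0, 0.1, -0.1, 3.0, 3.1, 2.9]
sig = particle_sonification(data, start=0.8)
```

Different excitation points (different `start` values) probe different regions of the dataset, which is how the interactive excitation in MBS lets a listener explore cluster structure without explicit prior knowledge of the data.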
The term ‘Functional Sound’ denotes sound designed to serve specific purposes. Most sound can be regarded as functional, e.g. language is used for communication, an alarm bell triggers a warning, and movie soundtracks can enhance emotional experience. Our mission is to expand the spectrum in which sound can be useful and interacted with. This talk will present several examples of how we can develop interactive sound systems that serve particular functions, such as a communicative tool for emotional expression or speech intelligibility augmentation.
The talk discusses music composition based on sonification as a speculative vantage point in the field of arts and science. Much more than in the sonified ‘Thing-in-itself’, I am interested in the epistemic aspects occurring in the underlying transfer. The semiotic dichotomy of re-presentation appears as a useful origin for artistic research: How can composers bypass conventions in music while addressing non-musical subjects? How can we compose sonic experience through dealing with sonification? What is the scientific use of non-linguistic information? The utilization of various types of sonification also discloses the relations of intuition and formal processes, as it enforces counter-intuitive and unconventional compositional directions.