Background

Johannes Goebel: The Institute for Music and Acoustics (1992)

Listeners, performers, audience; sounds, instruments, tone, microphone, loudspeaker; composers, sheets of music, screens, conductors; un-heard music, old music with unseen images, sounds rotating around the dancers; with the right sound, carrying off actors together with the audience into the underworld.

Scientific research into the foundations of sound-emitting material and its perception, narrowing down the theoretically possible and the practically desirable; artistically making technical possibilities accessible, contrasting aspiration and realization: implementing these activities requires very complex tools, flexible administration, and the right mix of discipline and freedom.

We hope to offer future visitors to and colleagues at the Center, which is presently under construction, the possibility of experiencing such a dynamic musical sphere. Artistic work and research will interlace. At present, the aim is to create the content-based, organizational, and spatial foundations.

So, what music should be produced at the Institute for Music and Acoustics? Should it be E-music, as in »Ernst« [serious] – to stick with the categories of administrated music – or U-music, as in »Unterhaltung« [entertainment], or even EU-music, as in »gute« [good]? Linked to these questions is that of admission to the institute: who can do what here? First, a comparison: musical »fast food« will hardly be produced or served here. But that does not mean that tasteful things are excluded. On the contrary, the claim is to use only the freshest ingredients – no preservatives; and digestibility is not a question of the cuisine alone, but also of the constitution of the guests. Hence, the criteria for what sound is made have nothing to do with the musical style this or that piece can be identified with, but rather with how artistic presentation, technical realization, and mediation to the ear can culminate in a »fulfilling« time for all involved (and time that is fulfilling is not the kind of time you want to kill). Music as a pure time-based art – that is, an art produced and received in a different context than, for instance, images – must be made possible aesthetically and, at the same time, discussed meaningfully, to avoid having such a mighty institution as the Center turn into a mere administrator. Experimentation will be possible to the same extent as exemplary performances of already existing music. What is decisive is not the »degree of innovation« or the aesthetic school, but simply whether the space of the building and the time made available by the people working and recording there are filled with conscious effort.

Composition Environment

In recent years, the work of composing with computers has gone through a major wave-like movement. During the 1960s and 1970s, work on programs was carried out on mainframes, which allowed for great variety but tested the composers' patience – everything took an incredibly long time! Then, in the 1980s, digital technology asserted itself in the consumer market of the music sphere. The musical instrument industry adopted a standard, thus providing at least a common foundation (»MIDI«) for the exchange of information among devices from various manufacturers. So although the »digital« was musically located in the »E« sphere, where it was researched and invented, it was the »U« sphere that took it up for mass production. When you turn on the radio, the consequences can still be heard today: violins, drums, spatial effects – everything comes from the computer.
 
Many people also placed great hopes in the industrial development of new devices for the »E« sphere, since the mainframes were very expensive. In several areas, such as live electronics – the modification and control of sounds as they are being generated – new, less expensive possibilities did emerge. And yet, after a while, a sense of disappointment began to set in among many circles with regard to complex sound synthesis – sounds tailored to the ear and the compositional exploitation of the computer's special possibilities. From this arose the desire to expand further the foundations developed in the years before MIDI, and to implement them in a machine-independent way.

The Open System

Herein lies an important area for the Institute for Music and Acoustics. Together with the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, we are working on the installation of an environment for composers that integrates the experiences of recent years in sound synthesis, sound processing, and machine-independent, computer-controlled composition. To this end, Heinrich Taube (Center for Art and Media) and William Schottstaedt (CCRMA) have developed Common Music and Common Lisp Music. One aspect is, without doubt, the speed with which composers can achieve their acoustic objectives. And yet »real time« does not signify a limitation: should the imperative of imagination call for a sound that takes more time to compute than it subsequently takes to hear, the composition environment allows this, too.
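
To give an impression of how such an open system presents itself to a composer, the following minimal sketch is written in the style of a Common Lisp Music instrument. The names it uses – definstrument, make-oscil, with-sound and the like – follow the interface documented for Common Lisp Music in later years rather than anything specified in this text, and the instrument simple-tone and the output file name are purely illustrative; the environment actually installed at the Center may differ in detail.

    ;; A minimal sine-wave "instrument" in the style of Common Lisp Music.
    ;; definstrument turns the sample loop into efficient code; whether that
    ;; loop runs faster or slower than real time does not matter to the score.
    (definstrument simple-tone (start duration frequency amplitude)
      (let* ((beg (floor (* start *srate*)))            ; first sample index
             (end (+ beg (floor (* duration *srate*)))) ; one past the last sample
             (osc (make-oscil :frequency frequency)))   ; sine oscillator
        (run
          (loop for i from beg below end do
            (outa i (* amplitude (oscil osc)))))))      ; write one sample to channel A

    ;; A short "score": three tones rendered into a sound file, regardless of
    ;; whether the machine could have delivered them in real time.
    (with-sound (:output "tones.snd")
      (simple-tone 0.0 1.0 440.0 0.2)
      (simple-tone 1.0 1.0 550.0 0.2)
      (simple-tone 2.0 2.0 330.0 0.2))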
 
When one considers how long it takes for musicians to master their instruments, or for composers to master the techniques necessary for writing for a traditional orchestra, then one sees that there is little reason to expect things to be any different with computers. With the computer, other, new possibilities for musical work become available. These have to be researched, learned, and worked out: what can be done with the computer that could not be achieved otherwise? What tricks are necessary for implementing an idea? What does the already existing tradition of electronic music sound like? Many questions must be dealt with in a practical manner. In its artistic application, the computer is a very specific tool and does not, per se, answer any aesthetic or artistic questions. Thus, the composition environment created at the Center is designed as an open system that composers can »master«, and it is made available to them in a flexible form. (Of course, introductory courses are also offered.)

The Rapid System

As a counterpoint to such an open system, the availability of industrially manufactured electronic music devices (including, among others, the devices mentioned above that communicate via MIDI) will be important. In most cases, these devices are optimized for a limited range of possibilities. In this way, synthesizers, samplers, instruments for live electronics, MIDI-controllable mixers, etc., can be drawn on – for purely musical productions as well as for musical-theatrical stagings and other cross-genre realizations.

The Development of Special Instruments

For the realization of some artistic ideas, it will be necessary to develop instruments that do not yet exist. These could, for example, provide new control possibilities for electronic sound production, but they could also be acoustic instruments with which players produce hitherto »unheard« sounds. A loudspeaker orchestra, which creates new sound spaces for certain performances, is also part of this field. The appropriate ZKM workshops will be made available for this.

Research

The impulse behind the research and scientific activities at the Institute for Music and Acoustics emerges from the tension created by the artistic examination and application of – primarily electronic – tools and instruments. Electronic sound generation has existed for nearly 100 years. This contrasts with the development of music, which is as old as humanity itself: in each culture, it assumed a unique expressive form in the shape of special acoustic instruments, singing techniques, compositional scope, and, ultimately, the ear – to which the human being remains attached. Because the actual generation of sound, the making of music, has been separated from the experience of listening by the reproducibility of musical performances – vinyl records, radio, audiotape, and, finally, digital media such as the CD – and by their permanent accessibility, music has acquired an entirely different function. The final ramification of this has now been reached: sounds are no longer generated mechanically by muscle power, but indirectly, namely, electronically. And with the use of the computer in music, the final phase in making music quantifiable has been attained. The purely physiological resolution of the sense organ, the ear, is faced with a quantitatively equivalent »instrument.«

This gives rise to a plethora of possibilities – and questions. And all of them demand a response. The computer, with the techniques it enables, can be used to examine them more precisely. Among other fields is psycho-acoustics, which focuses on the transition from the acoustic event into the »inner life« of the human being: in what ways are the acoustic, physiological, and psychological »dimensions« determined? What do we hear, and how? What are we aware of, and how? And what is the connection between the acoustic event and its evaluative interpretation (see the introductory paragraph at the beginning of this text)? The computer is simultaneously microscope and test tube for what can reach our ear. This is of interest not only to acousticians, psychologists, scholars of music, and communications scientists; composers also dig further here: how do I achieve an ice-cold or a moist-warm sound? How can my constructivist idea find its counterpart in an appropriate sound? And this very form of expression already reflects the gap separating it from the digital tool. Will we be able to find a transition between manual skill, science, and art, as was optimally achieved, for example, in the development of the concert piano, from the pianoforte through to its contemporary »optimum« use? However, in our field, the realization of an instrument requires not only cabinetmakers, iron-foundry craftsmen, engineers, and piano-makers, but also electricians, programmers, and signal-processing experts. They will work together at the Institute for Music and Acoustics on what »holds the ear together in its innermost depths« and on what artistic qualities can emerge from digital, quantifiable technology.

The Future Spatial Structure of the Institute for Music and Acoustics

In the future, the work outlined above will require a spatial structure and arrangement that facilitates both the interweaving of artistic work and research and collaboration with the Institute for Visual Media. In addition to the video studio, there will be a 250-square-meter sound recording studio, which will also be available for chamber concerts and small-scale experimental performances. A floor for dance events is also envisaged here. This studio is attached to a control room of almost 50 square meters, large enough for optimum listening as well as for holding events with smaller groups, such as advanced training sessions. A machine room is attached to this in turn, housing all the sound-generating devices required in the control room. At the same time, the technical facilities installed here will be directly connected to the small control room mentioned below and to the control room of the media theater. A voice space complements this complex.
 
A second area – consisting of two smaller recording studios, a listening room with various loudspeaker systems (also usable as a recording studio), and a room for scientific-experimental setups – is grouped around a small control room. Thanks to room-in-room construction, these rooms also offer optimal sound insulation and production possibilities. The third complex comprises five »studios«, which are intended for work on artistic projects. They are not built as room-in-room constructions, but offer a higher degree of sound insulation between them than, for example, the office spaces. Furthermore, an anechoic chamber for scientific measurements and experiments will be installed. The fourth zone is formed by the office spaces. These, too, are connected to the digital and analogue audio network, which means that they can be used for more than purely »office« functions.