First performance of »Fantasie#1« (2019) for radio telescope, artificial intelligence, and self-playing organ by the duo Quadrature in collaboration with Christian Losert as part of the PODIUM festival 2019 in Esslingen

2020-11-13

Compositions for Cognitive Systems

How New Machine Learning Algorithms Could Supply Desiderata Still Lacking in Music

From intelligent agents to neural sound synthesis: many new possibilities of using AI in music are by now tangible, possibilities that reach far beyond generating musical scores and open up new forms of co-creativity between humans and machines.

BY YANNICK HOFMANN

»We can all pack up and go home — hip hop is now available as a plugin for PC, […] equipped with a digital beat-button that offers all styles, from Lil Jon to Pete Rock.«[1]
Will these four polemical lines from a 2004 rap song become reality?

An advertising agency in the USA trained a machine learning model with MIDI data and lyrics by the hip hop artist Travis Scott, and earlier this year it released the song »Jack Park Canny Dope Man« by its moderately authentic deepfake replica called Travis Bott. The claim that AI created the song, however, is obviously far from the truth, because in 2020, too, AI is neither intrinsically motivated to create its own rap songs nor able to autonomously produce music that is fit for publishing. Philippe Esling, head of the research group Artificial Creative Intelligence and Data Science at IRCAM (Institut de Recherche et Coordination Acoustique/Musique, English: Institute for Research and Coordination in Acoustics/Music) in Paris, points out that AI does nothing on its own, and that there is always an actual person doing post-production on the results.[2]

Models like these, which create symbolic music at the push of a button, pervert the core idea of algorithmic composition, namely to generate symbolic music automatically from formally describable operations. Based on machine learning with large amounts of data, so-called deep learning, they could lead into an artistic and creative dead end in which they are used for the AI-based mass production of music for advertising or for products of the video game industry.

Furthermore, various new applications in the category of computer-aided composition have been published in recent years which are so banal that they are highly unlikely to interest composers of contemporary music: with »Amper Music«, »Jukedeck«, »Watson Beat«, and »Flow Machines«, people with no composing skills at all can produce a finished musical product from just a few individual inputs.

The ability of artificial neural networks to recognize patterns in vast amounts of data is, however, in no way limited to the symbolically representable sound processes of note-based music. Many new possibilities have long been available that go far beyond merely generating musical scores: from new approaches in audio signal processing and sound synthesis to the use of so-called intelligent agents. These new techniques will not put composers and producers out of work, just as the invention of electronic storage technology, the synthesizer, and the computer did not in the past.

 

Artist Damian T. Dziwis connects live coding and artificial intelligence

A Paradigm Shift: From Symbolic to Neural AI

Through the narrative of tech blogs and driven by the AI hype of recent years, the terms AI and artificial neural networks are often used synonymously. This could give the impression that AI is not even ten years old. In fact, some AI techniques were already being introduced into music from the late 1950s, long before it was feasible to utilize neural networks. This is substantiated by the canonical early works of algorithmic composition, such as the computer-generated string quartet »Illiac Suite« (1957) by Lejaren Hiller and Leonard M. Isaacson, whose automated generation of scores also uses generative grammars and Markov chains. Or Iannis Xenakis, who used Markov chains to compose »Analogique A« for nine strings (1958) and »Analogique B« for four-channel audiotape (1959). Xenakis addressed the use of probabilistic methods in manifestos of composition theory and developed algorithmic composition software based on stochastic processes. Many more examples of the application of classic AI methods based on the symbolic information processing of »good old-fashioned artificial intelligence« (GOFAI) could be mentioned here.
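The Markov-chain principle behind these early pieces can be illustrated in a few lines. The following Python sketch is purely didactic: the toy training melody and the first-order model are assumptions chosen for illustration and do not reproduce Hiller's or Xenakis's actual procedures.

```python
import random
from collections import defaultdict

# Toy training melody (invented for illustration only).
training_melody = ["C4", "D4", "E4", "G4", "E4", "D4", "C4", "E4", "G4", "C5", "G4", "E4"]

# First-order Markov model: record which pitch follows which.
transitions = defaultdict(list)
for current, following in zip(training_melody, training_melody[1:]):
    transitions[current].append(following)

def generate_melody(start="C4", length=16):
    """Random walk through the transition table."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        # Fall back to a random pitch if the current state has no successor.
        melody.append(random.choice(options if options else training_melody))
    return melody

print(generate_melody())
```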

Some of the superficial discourses revolving around the subject of music and AI must seem like a rehash to the producers of computer-aided algorithmic compositions. As early as the 1980s, for example, all the stops were being pulled out in symbolic AI research in order to distill the artistic signature of composers from their œuvre. Johann Sebastian Bach composed over 370 chorales, which in this context represent a particularly popular pool of data, because computer-generated accompaniments to chorales that convincingly imitate Bach's original style seem to be a benchmark for AI researchers. In the meantime, deep learning-based AI models such as »DeepBach« stand alongside rule-based expert systems like David Cope's »Emmy« (Experiments in Musical Intelligence) or Kemal Ebcioglu's »CHORAL«.

Although it may be interesting to observe how two different AI paradigms have been used to solve the same problem, as a rule these models serve as scientific proofs of concept rather than triggering a musical revolution. That a neural network trained on Bach chorales can nevertheless be integrated into a contemporary musical composition is demonstrated by the audiovisual performance »Fantasie#1« (2019) for radio telescope, artificial intelligence, and self-playing organ by the duo Quadrature in collaboration with media artist and developer Christian Losert. Here, signals from space recorded by a self-built radio telescope are converted into MIDI data and fed into the neural network. It begins »to fantasize familiar melodies with alien sounds«[3], and during the performance transmits Bach-like sound patterns to the manual of a self-playing organ.
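How such a sonification pipeline might look in principle can be sketched as follows. This is not Quadrature's and Losert's actual implementation: the synthetic »telescope« signal and the mapping to MIDI note numbers are assumptions chosen purely for illustration.

```python
import random

# Stand-in for a stream of normalized radio-telescope measurements (illustrative only).
signal = [random.random() for _ in range(32)]

def sample_to_midi_note(value, low=36, high=96):
    """Map a normalized signal value (0.0-1.0) to a MIDI note number."""
    return int(round(low + value * (high - low)))

# Each measurement becomes a note event; a MIDI library could then pass
# these events on to a neural network or a self-playing instrument.
note_events = [{"note": sample_to_midi_note(v), "velocity": 64} for v in signal]
print(note_events[:4])
```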

 

 

Radio telescope in front of the Esslingen town church on the occasion of the first performance of »Fantasie#1« (2019) for radio telescope, artificial intelligence, and self-playing organ by the duo Quadrature in collaboration with Christian Losert

Co-creativity of Human and Machine

As early as the 1960s, Karlheinz Stockhausen had no doubt that many things which had so far been regarded as achievable only by professional musicians of average skill after the requisite training could also be done by machines.[4] However, Philippe Esling finds the applications described above, which generate scores or audio, rather useless for musicians and composers, if the goal is not to create music at the push of a button.[5]

One of the aspects researched in the interdisciplinary field of computational creativity, which is located at the interface of science, technology, art, and philosophy, is the partial or total automation of musical tasks by software agents. These are computer programs that react to inputs, make autonomous decisions, and are capable of adapting their behavior.
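In its most reduced form, such a software agent can be sketched in a few lines of Python. The musical »rules«, intervals, and adaptation scheme below are invented for illustration; the agent systems discussed here are of course far more elaborate.

```python
import random

class ImprovisingAgent:
    """Minimal agent: reacts to input, decides autonomously, adapts its behavior."""

    def __init__(self):
        self.imitation_bias = 0.8  # probability of imitating the human player

    def react(self, heard_pitch):
        # Autonomous decision: imitate, or answer with a contrasting interval.
        if random.random() < self.imitation_bias:
            response = heard_pitch
        else:
            response = heard_pitch + random.choice([-7, -5, 4, 7])
        # Adaptation: gradually favor contrast the longer the duet lasts.
        self.imitation_bias = max(0.2, self.imitation_bias - 0.05)
        return response

agent = ImprovisingAgent()
print([agent.react(pitch) for pitch in [60, 62, 64, 67, 65, 60]])
```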

The Belgian artist Peter Beyls is considered a pioneer of artistic work with musical agent systems in the field of collaborative improvisation between humans and machines. In the 1980s, at the Artificial Intelligence Lab of Brussels University, he developed the computer program »OSCAR« (OSCillator ARtist), an expert system capable of improvising live with human musicians. Because expert systems sooner or later reach their limits, Beyls described his long-term goal of implementing the ability to learn in the program. Today this can be done with the advanced tools of machine learning.

Intelligent agents based on machine learning algorithms are the speciality of the composer and postgraduate student Artemi-Maria Gioti, who is working with Gerhard Eckel at the Institute for Electronic Music and Acoustics of the University of Music and Performing Arts Graz on the research project »Inter_agency: Composing Sonic Human–Computer Agent Networks«. With this project, they are pursuing the idea of co-creativity between humans and machines and attempting to create the conditions for a creative exchange on an equal footing. Last year, Gioti produced an interactive composition for a robotized drum kit and a human drummer. She first had to develop a system that listens and understands mechanically, so that the machine can recognize different instruments and playing techniques. The discipline of machine listening pursues the goal of teaching computers to comprehend audio content; it combines methods of audio signal processing and machine learning to obtain meaningful information from natural sounds, everyday sounds, and recorded music. In this context, the use of machine learning represents a substantial step forward in research on the co-creativity of human and machine.
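In a very reduced form, such a machine-listening pipeline might combine standard audio features with a classifier, for example along the following lines. The file names and labels are hypothetical, and the libraries librosa and scikit-learn stand in here for whatever toolchain a composer actually uses.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labelled examples of different playing techniques.
examples = [("snare_hit.wav", "stick"), ("brush_sweep.wav", "brush"),
            ("rim_click.wav", "rim"), ("cymbal_roll.wav", "roll")]

def describe(path):
    """Summarize a short recording as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

features = np.array([describe(path) for path, _ in examples])
labels = [label for _, label in examples]

# Train a classifier so that incoming audio can be mapped to a playing technique.
classifier = RandomForestClassifier(n_estimators=100).fit(features, labels)
print(classifier.predict([describe("unknown_gesture.wav")]))
```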

 

 

Drummer Manuel Alcaraz Clemente and a robotized drum kit perform Artemi-Maria Gioti's composition »Imitation Game« as part of the Giga-Hertz Award Festival 2019 at the ZKM | Center for Art and Media Karlsruhe

For the collaborative composition project »CECIA« (Collaborative Electroacoustic Composition with Intelligent Agents), which Gioti carried out in 2019 in collaboration with Kosmas Giannoutakis at the ZKM | Hertz-Lab on a cloud platform specially developed for it, intelligent agents were programmed which analyzed the compositional preferences of five composers and sound artists, and generated electroacoustic miniatures. The human artists then cast a democratic vote on whether the miniatures would remain in the final composition. The project ended with the successful premiere of a musically coherent, electroacoustic composition.

Neural Sound Synthesis

A field that still offers much scope for research and development is neural sound synthesis. A few years ago, within Google's research project Magenta, the neural synthesizer »NSynth« was developed for designing timbre and published as open source software. Sound characteristics are learned from a body of existing sounds serving as training data, between which it is then possible to interpolate in order to create new sounds. The Italian composer Martino Sarolli used this technique for his electroacoustic composition »Lapidario_01«, which sonifies silicon crystals and for which he received the Giga-Hertz Special Award in the category of artificial intelligence at the ZKM in 2018.
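The underlying idea of interpolating in a learned sound space can be sketched schematically. The encoder and decoder below are only placeholders standing in for a trained model such as NSynth; everything about them is an assumption made for illustration.

```python
import numpy as np

def encode(audio):
    """Placeholder: a trained model would map audio to a latent timbre vector."""
    return np.random.randn(16)

def decode(latent):
    """Placeholder: a trained model would reconstruct audio from a latent vector."""
    return np.tanh(np.cumsum(latent))

z_flute = encode(np.zeros(16000))  # stand-in for a flute recording
z_bell = encode(np.zeros(16000))   # stand-in for a bell recording

# Linear interpolation between the two latent codes yields hybrid timbres.
hybrids = [decode((1 - a) * z_flute + a * z_bell) for a in np.linspace(0.0, 1.0, 5)]
print(len(hybrids), hybrids[0].shape)
```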

The electro-acoustic composer Martino Sarolli (on the left) has received the 2018 Giga-Hertz Special Award in the field of artificial intelligence

The sounds of »GRANNMA« (Granular Neural Music and Audio) are completely unpredictable. This neural sound synthesis process was developed by the artist and programmer Memo Akten with the aim of creating sounds that are only vaguely reminiscent of the training data. For the performance »Ultrachunk« (2018) with composer and vocal soloist Jennifer Walshe, the neural network was trained on improvised vocal solos by Walshe. During the live performance, she improvises in a duet with an artificial yet familiar-seeming version of her own voice that can be manipulated in real time.

In terms of unpredictability, this comes full circle to one of the earliest artistic experiments in the field of neural sound synthesis: the »ETANN Synthesizer« (Electrically Trainable Analog Neural Network) is considered the first analog neural synthesizer and was developed for the live-electronic feedback performances of composer-performer David Tudor. Tudor used it for »Neural Network Plus« and »Neural Synthesis Nos. 1–9« (1992–1994).

Thus, what passes for AI technology depends on the technological zeitgeist and on the dynamically changing concept of »intelligence«. The computer musician and author Curtis Roads already wrote on this topic in the 1980s: »The exact limits of AI are hard to grasp, because they are connected to what people perceive as intelligent behavior.«[6] According to Roads, for some people AI represents what we have not yet achieved, regardless of which problems have already been solved: a possible reason why many AI approaches are in danger of being forgotten in the face of the new and impressive possibilities of neural artificial intelligence.

 

[1] Olli Banjo feat. Eizi Eiz, »Durch die Wand,« on Olli Banjo, CD Sparring, Headrush Records, 2004.

[2] See Julia Benarrous, »Artificial intelligence and music: A tool shaping how composers work,« at https://medium.com/@julia.benarrous/artificial-intelligence-and-music-a-... (last accessed 7.6.2020).

[3] PODIUM Festival Esslingen, »Quadrature,« at https://bebeethoven2020.com/fellows/quadrature/ (last accessed 7.6.2020).

[4] Karlheinz Stockhausen (1978), cited in Curtis Roads, »Research in music and artificial intelligence,« ACM Computing Surveys (CSUR) no. 2 (1985) 186.

[5] See Javier Nistal, »Deep beers: A chat with Philippe Esling,« at https://mip-frontiers.eu/2020/01/04/Philippe_Esling.html (last accessed 7.6.2020). 

[6] Curtis Roads (1985), »Research in music...,« 163. 

 

This article first appeared in German in the journal »Neue Zeitschrift für Musik« no. 4 (2020).

Yannick Hofmann (*1988 in Offenbach am Main, Germany) lives and works in Karlsruhe as a media artist and curator. At the ZKM | Center for Art and Media Karlsruhe, he leads the research and development project »The Intelligent Museum«. He teaches at the Karlsruhe Institute of Technology (KIT).