The moon who makes the birds dream in the trees

by Andrew Pfalz

for the NeurIPS 2019 workshop on creativity


This work was produced by training a model to predict audio one sample at a time. The model generated all of the audio heard in the piece by listening to a segment of a famous work from the Classical canon and trying to guess what comes next. The results were arranged to create the final piece. I am presenting this work to raise a number of ethical questions relevant to art created with AI. These questions fall into three categories: authorship, privilege, and algorithmic integrity.
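
As a rough illustration of what sample-by-sample prediction can look like, the sketch below sets up a small LSTM in PyTorch that guesses the next audio sample from the samples that precede it, then feeds its own guesses back in to generate new audio. The architecture, window handling, and generation loop here are assumptions made for illustration, not the implementation used for the piece (see the links at the end of this note).

# A minimal, hypothetical sketch of sample-level prediction with an LSTM
# (PyTorch). The architecture and generation loop are illustrative guesses,
# not the implementation used for the piece.
import torch
import torch.nn as nn

class SamplePredictor(nn.Module):
    """Predicts the next audio sample from the samples that precede it."""
    def __init__(self, hidden_size=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, time, 1), raw audio samples scaled to [-1, 1]
        out, state = self.lstm(x, state)
        return self.head(out[:, -1]), state  # guessed next sample, recurrent state

def generate(model, seed, n_samples):
    """Listen to a seed segment, then feed each guess back in as the next input."""
    model.eval()
    audio = seed.tolist()
    with torch.no_grad():
        pred, state = model(seed.float().view(1, -1, 1))  # warm up on the seed
        for _ in range(n_samples):
            audio.append(pred.item())
            pred, state = model(pred.view(1, 1, 1), state)
    return audio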

Is it morally imperative to give credit for the borrowed source material if it is so transformed that the original source cannot be easily recognized? Who is the author of such a work? Does it make more sense to ask who contributed the most to the final product? Was it the composer of the source material, the performer who interpreted it, the designer of the algorithm, the model that actually generated the new audio, or the arranger who shaped that audio into a new piece?

The machine used to train this model was purchased by a large research university and required special electrical and cooling infrastructure to run and maintain. If taxpayers funded it, and we all suffer the effects of climate change driven, in part, by running large high-performance computing centers, shouldn't more than an elite handful of researchers have access to such exclusive hardware?

The algorithm that generated this audio allows the model to learn while it makes predictions. What can we say the model learned, based on the output it generated? Does it matter what the model learned? Is it wrong to cherry-pick the results in order to hide weaknesses in the algorithm? Wouldn't sharing bad results along with good ones help others gauge the success of their own experiments? If the project isn't open source, or if the algorithm isn't presented so as to be easily understood, does it count as a contribution to the field? If the process is part of the art, then don't artists bear some responsibility to explain themselves? If the results of an algorithm are difficult to reproduce, does this take away from the value of the artifacts, or does it make them all the more precious?
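
One way a model could keep learning while it predicts is to take a gradient step against the true next sample from the source recording after each guess, as in the hedged sketch below. It reuses the SamplePredictor interface from the earlier sketch, and the loss, optimizer, and context length are assumptions for illustration rather than the actual procedure.

# A hypothetical sketch of learning during prediction (PyTorch): after every
# guess, the model sees the true next sample from the source recording and
# takes one gradient step. The loss, optimizer, and context length are
# assumptions, and `model` follows the SamplePredictor interface above.
import torch

def listen_and_learn(model, source, optimizer, context=1024):
    """source: 1-D float tensor of audio samples from the borrowed recording."""
    criterion = torch.nn.MSELoss()
    guesses = []
    for t in range(context, len(source)):
        window = source[t - context:t].view(1, -1, 1)  # recent samples as input
        target = source[t].view(1, 1)                  # the true next sample
        pred, _ = model(window)
        guesses.append(pred.item())                    # keep the model's guess
        loss = criterion(pred, target)                 # how wrong was it?
        optimizer.zero_grad()
        loss.backward()                                # learn from the mistake
        optimizer.step()
    return guesses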

A more detailed description of the algorithm is available at apfalz.github.io/rnn/rnn_demo.html.

The code used to produce this work is available at github.com/apfalz/audio_lstm.