Topic outline

  • General

    • Description

In composer Edgard Varèse's words, "music is organized sound". But what else can be organized from sound besides music? Pleasant and annoying noises certainly, and perhaps time itself? And what aesthetic, phenomenological and political roles do technical media such as software code and algorithms, Spotify, MIDI, and the MP3 play in this work of organization?

In this praxis seminar we will use synthesizers, recorders and specialized programming languages for sound (p5.sound, Pure Data), together with transmedial techniques like sound visualization, data sonification and frequency shifting, to examine, explore and reproduce familiar sonic phenomena ranging from noise pollution to ASMR, from Zoom glitches to smartphone notification sounds, from "the halfalogue" to vaporwave. No prior technical knowledge of music or software is required for this course, only open ears, a laptop and headphones.

Broadly speaking, the journey of this seminar is the following: we start from how micro-scale mechanics of sound operate in machinic and human bodies using FM synthesis, then stretch out in time toward meso- and macro-levels of meaning with drones and field recordings, and then return back to the micro-level with granular (re)synthesis.

Further use as an Open Educational Resource (OER) is encouraged: this work and its contents are – unless otherwise stated – licensed under Creative Commons (CC) BY-NC 4.0. Please cite according to the TULLU rules as "Software and the sonic subconscious of the digital" by Mace Ojala, licensed under CC BY-NC 4.0. Otherwise-licensed contents and works are excluded from the license (see https://moodle.ruhr-uni-bochum.de/course/view.php?id=53702).

    • Outcome

Each participant will make a 10-minute audio paper (Groth & Samson, 2016). The audio paper on this seminar is a relatively freeform, creative, academic, non-musical audio composition with about 3 minutes of text, and mostly a soundscape about a chosen topic which relates to the digital world. The audio paper is a research project, conceived and conducted from the get-go in a way which is better documented in sound than in writing.

    • How to adapt/adopt

Here are a few suggestions for teachers and educators who would like to adapt/adopt this OER in full or in part.

• This is a praxis seminar – doing > talking. Don't fall for the unfortunate theory vs. praxis dichotomy, however. Mace's philosophical position is that the antonym of "practical" is "unpractical" rather than "theory". Praxis is theory is praxis.
• Sound unfolds over time. Spend the necessary time listening together. In a classroom this can feel unfamiliar; it is suggested you sit with this discomfort. A decent sound system and an acoustically neutral room are advantages, but make the best use of what you have.
• Make field trips if feasible; only so much can be said about sound in a classroom. Field trips could be individual, pair, or group trips. The original seminar visited the research facilities of the Institute for Communication Acoustics, and the campus radio CT at Ruhr-University Bochum. Public places are great destinations.
• Live-program if possible. During the first half and later on, a few simple, custom synthesizers are explored. Program them live in p5.js and Pd if feasible, and explain the thinking as you go along.
• If participants have experience in music, make use of it. This is however not a music seminar, so make sure to challenge, contextualize and deconstruct the authority of musical knowledge.
    • Background of this OER

Software and the sonic subconscious of the digital draws from and brings together software studies, sound studies, anthropology and creative programming. It was originally a full-semester seminar for six students at the Institut für Medienwissenschaft at Ruhr-University Bochum during Summer 2023, designed and taught by Mace Ojala as part of the Freiraum 2022 project media practice knowledge. Each student produced a 10-minute audio paper on their selected topic.

      At the time of production of this OER in August 2023, the student audio papers are being produced into a limited edition C-cassette release.

      (Photo by Mace Ojala)

    • Literature

      Core literature
      Secondary literature
• Blackwell, Alan F., and Nick Collins. "The Programming Language as a Musical Instrument". In Proceedings of the 17th Annual Workshop of the Psychology of Programming Interest Group, PPIG 2005, Brighton, UK, June 29 - July 1, 2005, 11. Psychology of Programming Interest Group, 2005. https://ppig.org/papers/2005-ppig-17th-blackwell/.
• Cascone, Kim. "The Aesthetics of Failure: 'Post-Digital' Tendencies in Contemporary Computer Music". Computer Music Journal 24, no. 4 (2000): 12–18. https://doi.org/10.1162/014892600559489.
• Church, Luke, Chris Nash, and Alan F. Blackwell. "Liveness in Notation Use: From Music to Programming". In Proceedings of the 22nd Annual Workshop of the Psychology of Programming Interest Group, PPIG 2010, Madrid, Spain, September 19-21, 2010, 2. Psychology of Programming Interest Group, 2010. https://ppig.org/papers/2010-ppig-22nd-church/.
• Döbereiner, Luc. "Models of Constructed Sound: Nonstandard Synthesis as an Aesthetic Perspective". Computer Music Journal 35, no. 3 (2011): 28–39. https://doi.org/10.1162/COMJ_a_00067.
• Ernst, Wolfgang. "Experimenting with Media Temporality: Pythagoras, Hertz, Turing". In Digital Memory and the Archive, 184–92. University of Minnesota Press, 2013. https://www.jstor.org/stable/10.5749/j.ctt32bcwb.17.
• ———. "Toward a Media Archaeology of Sonic Articulations". In Digital Memory and the Archive, 172–83. University of Minnesota Press, 2013. https://www.jstor.org/stable/10.5749/j.ctt32bcwb.16.
• ———. "Towards a Media-Archaeology of Sirenic Articulations: Listening with Media-Archaeological Ears". The Nordic Journal of Aesthetics 24, no. 48 (2015). https://doi.org/10.7146/nja.v24i48.23066.
• Groth, Sanne Krogh, and Kristine Samson. "The Audio Paper. From Situated Practices to Affective Sound Encounters". Seismograf, 28 June 2019. https://doi.org/10.48233/seismograf2106.
• Haworth, Christopher, and Georgina Born. "Music and intermediality after the internet: aesthetics, materialities and social forms". In Music and Digital Media: A Planetary Anthropology, edited by Georgina Born, 378–438. UCL Press, 2022. https://doi.org/10.14324/111.9781800082434.
• Kittler, Friedrich. "Real Time Analysis, Time Axis Manipulation". Translated by Geoffrey Winthrop-Young. Cultural Politics 13, no. 1 (2017): 1–18.
• Krämer, Sybille. "The Cultural Techniques of Time Axis Manipulation: On Friedrich Kittler's Conception of Media". Theory, Culture & Society 23, no. 7–8 (1 December 2006): 93–109. https://doi.org/10.1177/0263276406069885.
• Labelle, Brandon, ed. The Listening Biennial Reader. Vol. 1: Waves of Listening. Errant Bodies Press, 2022.
• Lison, Andrew. "New Media, 1989: Cubase and the New Temporal Order". Computational Culture, no. 8 (15 July 2021). http://computationalculture.net/new-media-1989-cubase-and-the-new-temporal-order/.
• Maguire, Ryan. "The Ghost in the MP3". In Proceedings of the International Computer Music Conference. Athens, 2014. http://hdl.handle.net/2027/spo.bbp2372.2014.038.
• Mansoux, Aymeric, Brendan Howell, Dušan Barok, and Ville-Matti Heikkilä. "Permacomputing Aesthetics: Potential and Limits of Constraints in Computational Art, Design and Culture". In Ninth Computing within Limits 2023, 2023. https://doi.org/10.21428/bf6fb269.6690fc2e.
• Miyazaki, Shintaro. "Algorhythmics: A Diffractive Approach for Understanding Computation". In The Routledge Companion to Media Studies and Digital Humanities. Routledge, 2018.
• ———. "Listening to Algorhythmics". Presented at Humanising Algorithmic Listening, Workshop 1, Sussex, 27 July 2017.
• ———. "Probing tapping listening". Presented at Exploring Edges, Lausanne, n.d.
• Roads, Curtis. Microsound. MIT Press, 2004. https://mitpress.mit.edu/9780262681544/microsound/.
• Schulze, Holger. Sonic Fiction. The Study of Sound. Bloomsbury Academic, 2020. https://www.bloomsbury.com/uk/sonic-fiction-9781501334795/.
• ———. "The Sonic Persona and the Servant Class". On Sounds Absurd podcast, episode 2. 24:45 minutes. 2020.
• ———. "What Is an Audio Paper?" Sound Studies Lab Blog, 6 September 2016. http://www.soundstudieslab.org/what-is-an-audio-paper/.
• Snape, Joe, and Georgina Born. "Max, music software, and the mutual mediation of aesthetics and digital technologies". In Music and Digital Media: A Planetary Anthropology, edited by Georgina Born, 220–66. UCL Press, 2022. https://doi.org/10.14324/111.9781800082434.
• Tagg, Philip. "From Refrain to Rave: The Decline of Figure and the Rise of Ground". Popular Music 13, no. 2 (1994): 209–22.
• Tagg, Philip, and Karen E. Collins. "The Sonic Aesthetics of the Industrial: Re-Constructing Yesterday's Soundscape for Today's Alienation and Tomorrow's Dystopia". Darlington, 2001.
    • Audio papers

      Other sources

• The Composing with Process podcast series by Mark Fell and Joe Gilmore on Radio Web MACBA.
• The Son[i]a podcast series on Radio Web MACBA.
• The PROBES podcast series by Chris Cutler on Radio Web MACBA.
  • Introduction. Sounding and listening

• Take a walk, 20-45 minutes in length, through a familiar territory and listen to it. What do you hear? What machines do you encounter, either directly or indirectly? Are there machines you do not hear? What sounds are missing? Describe the sounds briefly; are they loud or soft, high or low, pleasant or unpleasant, distinct or ambient? Why? What makes a similar sound? What is hard to describe with the vocabulary you currently have?
    • For exploration

• Read Karen Collins, Studying Sound, chapters 1 and 2, maybe the intro too
        • Do exercise 1.8 soundmarks.
        • Do exercise 1.9 sonic fingerprint.
• Do exercise 1.17, an online hearing test (google for it). If your city has companies offering free hearing tests, do one.
        • Do exercise 1.32 listening to media
      • Read either
  • Lets build a synthesizer

• Listen to the cover of a-ha's Take on Me by YouTube user Power DX7. Do the other '80s hit songs in the video sound familiar? There are plenty more videos like this on YouTube if you search for "dx7". Have you got strong opinions on 1980s pop music, or "the '80s sound"? What would you say are the differences between the typical 1980s sound and the typical sound of the 1970s or the 1990s?

• Install and poke around in Dexed, a free software FM (frequency modulation) synthesizer, to explore more complex FM tones. It comes with many preset sounds, and you can modify them as you want. Start simple. Many interesting sound presets made by other people are available online, including all the DX7 sounds. You can use the computer keyboard from A to L for the white keys of the 4th octave, and the row above for the black keys, like a piano. Each piano key corresponds to a musical note, which is actually a specific frequency; you can google for the frequencies.

    • Simon Hutchinson talks about FM synthesis and computer games (6 minutes).
    • Three little synthesizers

Here are three synthesizers, written in the programming language p5.js, a flavour of the ubiquitous JavaScript. You can program in p5.js online at editor.p5js.org.

      Synthesizer 1

A simple sine tone. You can see in the reference documentation for p5.Oscillator that you can enter a frequency parameter on line 2, where the oscillator is created.

function setup() {
  sound = new p5.Oscillator(); // e.g. new p5.Oscillator(440) for a 440 Hz tone
  sound.start(); // start producing sound
}
      

What extreme tones could you create? What is the range of frequencies of your hearing, your laptop speakers, and your headphones? If you have pets, test whether they seem to hear higher or lower than you. What about your parents? What happens if you run multiple synthesizers at the same time? You can also try it on your phone; does it sound different? Where in the world can you encounter sine waves? Do sine waves even exist, or are they a mathematical ideal?
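If you want a nudge for the exploration, here is one hedged variation (the numbers are just examples to change): the p5.Oscillator constructor also accepts a starting frequency.

function setup() {
  // try values near the edges of hearing, e.g. 30 or 15000
  sound = new p5.Oscillator(15000);
  sound.amp(0.2); // keep the volume down, high tones can be piercing
  sound.start();
}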

      Synthesizer 2

This one adds interaction with a slider. The function setup runs exactly once when you press Play on the editor, and from then on the function draw typically runs at 60 FPS = 60 Hz = 60 times per second.

function setup() {
  sound = new p5.Oscillator();
  sound.start();
  slider = createSlider(200, 1000); // a slider ranging from 200 to 1000
}

function draw() {
  sound.freq(slider.value()); // set the frequency from the slider, 60 times per second
}
      

In the reference documentation for createSlider you can see that it takes minimum and maximum values as its two arguments inside the ( and ) on line 4.
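As a side note, createSlider can also take optional third and fourth arguments, a starting value and a step size, in case you want finer control; for example:

slider = createSlider(200, 1000, 440, 1); // min, max, starting value, step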

      Synthesizer 3

The third synthesizer is an FM, i.e. frequency modulation, synthesizer; it uses multiple oscillators organized so that the second one (the "modulator") changes the frequency of the first one (the "carrier"). These arrangements are called algorithms.

base_freq = 220;        // carrier frequency in Hz
modulation_ratio = 3;   // modulator frequency as a multiple of the carrier
modulation_depth = 100; // how strongly the modulator shakes the carrier

let carrier, modulator;

function setup() {
  createCanvas(400, 400);
  
  carrier = new p5.Oscillator(base_freq);
  modulator = new p5.Oscillator(base_freq * modulation_ratio);
  
  modulator.disconnect();  // don't send the modulator itself to the speakers
  modulator.amp(modulation_depth);
  carrier.freq(modulator); // route the modulator into the carrier's frequency

  carrier.start();
  modulator.start();
}
      

What values for base_freq, modulation_ratio and modulation_depth on lines 1, 2 and 3 seem familiar, musical, extreme or unpleasant? Could you combine the programs for synthesizers 2 and 3 by building sliders into the FM synthesis, as sketched below? Or could you add a second modulator to modulate the first modulator?
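Here is one possible way to combine synthesizers 2 and 3 – a sketch rather than the only solution: three sliders drive the base frequency, ratio and depth, and draw() reads them 60 times per second.

let carrier, modulator;
let freq_slider, ratio_slider, depth_slider;

function setup() {
  createCanvas(400, 400);

  freq_slider = createSlider(50, 1000, 220);  // base frequency
  ratio_slider = createSlider(1, 10, 3);      // modulation ratio
  depth_slider = createSlider(0, 500, 100);   // modulation depth

  carrier = new p5.Oscillator();
  modulator = new p5.Oscillator();

  modulator.disconnect();  // as before, keep the modulator out of the speakers
  carrier.freq(modulator); // connect the modulator to the carrier's frequency

  carrier.start();
  modulator.start();
}

function draw() {
  // read the sliders 60 times per second, like in synthesizer 2
  carrier.freq(freq_slider.value());
  modulator.freq(freq_slider.value() * ratio_slider.value());
  modulator.amp(depth_slider.value());
}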

• A recording of synthesizer 3 above, with modulation_ratio 2 on the left and 3 on the right, recorded with Audacity. Can you convince yourself that for each cycle of the carrier, the modulator changes it twice on the left, and three times on the right?

A waveshape with modulation ratios 2 and 3.
      Screenshot of a waveform by Mace Ojala (CC BY-NC-SA), with Audacity (GNU GPL v2)
    • For exploration

      • Modify the above software synthesizers.
• Record your modifications. Try to google for "record internal audio" or something like that. Windows computers might have this feature built in with Windows Recorder. On macOS, use Soundflower on Intel machines, or BlackHole on Apple Silicon (M1 and M2) machines. Use your smartphone if nothing else works.
  • Time, silence and data sonification

• The International Conference on Live Coding (ICLC) 2023 took place in Utrecht. "Live coding", aka "on-the-fly coding", is a musical and artistic practice of making music with computer programming. The community is very open, has a strong punk/D.I.Y. ethos, and concerts are often accompanied by workshops. ICLC 2023 was complemented by a month of local satellite events around the world. The party mode is called an "algorave".
       
       
      Photos from ICLC by Mace Ojala.
       
You can see the talks/presentations on YouTube at https://www.youtube.com/@incolico, and there is also an archive there.
       
    • Synthesizer with an amplitude envelope

The FM synthesizer from earlier, this time with an amplitude envelope. Press a key to play the sound envelope.

      
base_freq = 220;
modulation_ratio = 2;
modulation_depth = 400;

let carrier, modulator, envelope;

function setup() {
  createCanvas(400, 400);
  
  carrier = new p5.Oscillator(base_freq);
  modulator = new p5.Oscillator(base_freq * modulation_ratio);
  
  modulator.disconnect();
  modulator.amp(modulation_depth);
  carrier.freq(modulator);

  carrier.amp(0); // silent until the envelope opens it up
  carrier.start();
  modulator.start();
  
  // attack time, attack level, decay time, decay level
  envelope = new p5.Envelope(0.05, 1, 0.1, 0);
  
  freq_slider = createSlider(200, 1000); // unused so far – try wiring it to carrier.freq() in draw()
}

function keyPressed() {
  envelope.play(carrier); // run the envelope on the carrier's amplitude
}
      

      What are some extremely long sounds? How long has the morning choir of birds been going on? Birds evolved around 60 million years ago, but when did they start singing? When did the ocean start to make a noise? How long is the Big Bang? Do sounds ever go away and end, or do they just decay to very, very low in amplitude and frequency? What about some short sounds? How long is your heartbeat?

      Synthesizer with sequencer

This program runs through a sequence of data points, and uses them to change the pitch of a little data melody.

      
      data = [14, 22620, 16350, 9562, 14871, 17773];
      
      base_freq = 220;
      modulation_ratio = 2;
      modulation_depth = 100;
      
      let carrier, modulator, envelope;
      
      function setup() {
        createCanvas(400, 400);
        
        carrier = new p5.Oscillator(base_freq);
        modulator = new p5.Oscillator(base_freq * modulation_ratio);
        
        modulator.disconnect();
        modulator.amp(modulation_depth);
        carrier.freq(modulator);
      
        carrier.start();
        modulator.start();
        
        carrier.amp(0);
        
        envelope = new p5.Envelope(0.1, 1, 0.1, 0);
      }
      
      function keyPressed() {
        playSequence(data);
      }
      
function playSequence(data) {
  datum = data.shift() / 10; // take the next data point, scaled down, as a frequency
  console.log(datum, data);
  carrier.freq(datum);
  // modulator.freq(datum / 10);
  envelope.play(carrier);
  
  if(data.length > 0) {
    setTimeout(() => {
      playSequence(data); // schedule the next note
    }, datum / 10); // milliseconds, ie. 1000 = 1 second
  }
}
      

Can you guess where the numbers in data, set on the first line, come from? They are steps Mace took during week 16 of 2023... according to his smartphone. Try your own data, e.g. your steps, the number of calories in your meals, how many emails you received in the past year, or the weather in your neighbourhood. The sequence can be as long as you want. Think about different value ranges, e.g. human hearing between 20-20000 Hz for carrier frequencies, and that a timeout of 1000 milliseconds is one second. You can also use the envelope to change the frequencies or amplitudes of the carrier or modulator(s).

Try out different envelope parameters by making the beginning ("attack") or end ("release") of the sound longer, or change the timeout between sounds.
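If your own data lives in a very different range, p5's map() function can rescale it into the audible range; a minimal sketch (the data values here are made up):

raw_data = [3, 18, 7, 12, 5]; // made-up data

function rescale(values) {
  // rescale from the data's own range into a comfortable
  // slice of the audible range, here 220 to 880 Hz
  return values.map(v => map(v, min(values), max(values), 220, 880));
}

// e.g. playSequence(rescale(raw_data)); – note that playSequence
// above additionally divides each datum by 10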

• An exhibition documentary about the work of Ryoji Ikeda, by the Zentrum für Kunst und Medien (ZKM) in Karlsruhe. Ikeda's work is an extreme, immersive example of "data sonification", turning data into sounds. Can you find some less extreme data sonifications online?

    • For exploration

      • Choose an episode of the Composing with Process podcast production by Mark Fell and Joe Gilmore for MACBA RWM, listen through it, and summarize it. What was it about, and why did you choose it? What was interesting, what was boring, what was surprising? A fun story? Something strange? What sounds did you hear, from the perspectives of causal, semantic and reduced listening?
      • Read one of the following two texts
• Döbereiner, Luc. "Models of Constructed Sound: Nonstandard Synthesis as an Aesthetic Perspective". Computer Music Journal 35, no. 3 (2011): 28–39. https://doi.org/10.1162/COMJ_a_00067.
• Haworth, Christopher, and Georgina Born. "Music and intermediality after the internet: aesthetics, materialities and social forms". In Music and Digital Media: A Planetary Anthropology, edited by Georgina Born, 378–438. UCL Press, 2022. https://doi.org/10.14324/111.9781800082434.
      • Download and install Pure Data (alias "Pd"), a free and open source programming language. We'll use it in the next section.
      • Watch Sound Simulator's video Let’s Start With Just 3 Objects! Intro To Pure Data (Pure Data Tutorial #1)
  • Let's explore sound and senses by building more synthesizers

    • See Casey Conner's 42 Audio Illusions & Phenomena – Psychoacoustics

      There are plenty more in the multipart 42 Audio Illusions & Phenomena – Psychoacoustics playlist. Can you explore some of the ideas using p5.js or Pd? Which one of the audio illusions and phenomena would be simplest to start with?

• Discuss the episodes of the Composing with Process podcast production by Mark Fell and Joe Gilmore for MACBA RWM that you have listened to. Some of the episodes explore for instance looping, scales, time and other sonic/subconscious phenomena. Could some of the sounds even make you physically ill?

If you like repetition, check out the one-minute segment on ostinato, i.e. insistence, from Oscar of Underdog Electronic Music School's video about music theory for techno.

What other sounds – outside music – are bearable, ordinary or even pleasant because they are ostinato, i.e. insisting and repeating? Think about everyday sounds in your life.

      Algorave generation; we love repetition

You can read about the UK Criminal Justice and Public Order Act of 1994, which targeted "repetitive music" and rave culture, in Poppy Reid's little article It's been almost 25 years since the UK Government tried to ban raves, or by googling for something like "uk rave ban" or "repetitive music ban".

Also discuss the paper: Döbereiner, Luc. "Models of Constructed Sound: Nonstandard Synthesis as an Aesthetic Perspective". Computer Music Journal 35, no. 3 (2011): 28–39. https://doi.org/10.1162/COMJ_a_00067. It claims to address specifically "nonstandard" synthesis. What would you say are, by contrast, "standard" techniques?

• An FM drone synth in Pd, corresponding to the first FM synthesizer we programmed with p5.js, called Synthesizer 3 above in the section Let's build a synthesizer. Note that the [vslider] which serves as a volume knob has a range from 0 to 1; with the default vslider range of 0-127 we can observe some strange (read: fun) visual artefacts, since the oscilloscope array has a range of 0-1.

      FM drone program by Mace Ojala (GNU GPL v3), using Pure Data (BSD-3-Clause). Screenshot by Mace Ojala (CC BY-NC-SA)

Can you read the p5.js and Pd versions side by side, and see how the objects correspond to one another?

      FM drone programs by Mace Ojala (both GNU GPL v3), using p5.js (left; GNU LGPL) Pure Data (right; BSD-3-Clause). Screenshot by Mace Ojala (CC BY-NC-SA)

If you make the values the same in both programming languages, do you hear minuscule or obvious differences in their sound? Surprisingly, with this Pd program you can make the carrier and modulator frequencies as well as the modulation depth negative, < 0. Does this mean time flows backwards? What if you make the volume negative, will the universe collapse in a reverse Big Bang? Can you do this in p5.js?

• The second program (Pd programs are called "patches") is a version of the simple FM drone, this time with an amplitude envelope. Click the ⧇ object labeled trigger to produce sound. This corresponds to the envelope FM synthesizer programmed in p5.js earlier.

      FM drone with amplitude envelope by Mace Ojala (GNU GPL v3), using Pure Data (BSD-3-Clause). Screenshot by Mace Ojala (CC BY-NC-SA)

The envelope generator is the [line~] object and its parameters above it. The [1 100(, [delay 500] and [0 500( objects control the envelope attack, sustain (= note length) and release times. Try changing the values to produce a short sound, a long sound, a slowly appearing sound or a slowly disappearing sound. Can you connect the envelopes not only to amplitude but also to carrier and modulator frequencies? This is next-level FM synthesis 🙂 The DX7 already had this feature, but you could expand the delay and message objects to produce crazy envelopes with more than three segments. How about six segments? Nine? What about 99?

      The Prophet-5 on that track is a very nice synthesizer, but since it is analog rather than digital, it's out of the scope of this seminar.

• This Pd program ("patch") extends the two above. This one responds to keypresses and plays notes. The functionality is implemented by the [keyname] object on the right, which is routed to different messages to change the carrier frequency.

      FM drone with amplitude envelope and key input by Mace Ojala (GNU GPL v3), using Pure Data (BSD-3-Clause). Screenshot by Mace Ojala (CC BY-NC-SA)

You could explore parametrizing the keyboard further; perhaps some keys would produce shorter, longer or louder sounds, or change the modulation parameters? We use the same word keyboard for the ⌨ and 🎹. Could you invent an entirely new keyboard? Or could you tune the above to a familiar (or unfamiliar) musical scale, knowing that musical notes are names for certain frequencies?
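On that last point, a hedged helper: in twelve-tone equal temperament each semitone multiplies the frequency by 2^(1/12), with MIDI note 69 defined as A at 440 Hz. Pd has the [mtof] object for this conversion, and p5.sound ships a midiToFreq() function; the arithmetic itself looks like this:

function noteToFreq(note) {
  // MIDI note 69 = A4 = 440 Hz; an octave (12 semitones) doubles the frequency
  return 440 * pow(2, (note - 69) / 12);
}

// noteToFreq(60) gives about 261.6 Hz, the "middle C"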

    • For exploration

      • Download and install Audacity. You'll find tutorials on YouTube if you want.
• Listen to Nicole De Brabandere and Graham Flett (2016). Hearing on the Verge: Cuing and aligning with the movement of the audible. Seismograf. https://doi.org/10.48233/seismograf1603
• Listen to Robert Willim (2019). Mundania: Just above the noise floor. Seismograf. https://doi.org/10.48233/seismograf2301
      • Watch (and listen!) to Hildegard Westerkamp Listening for the State of Our World, on World Listening Day 2021.
• Listen to radio.earth for precisely 11 minutes. Listen attentively, without using your phone or otherwise multitasking. Take notes, preferably with pen+paper, and do all of causal, semantic and reduced listening (check Chion's Three Modes of Listening, or Karen Collins's Studying Sound chapter 1).
      • Watch (and listen!) PMTVUK TV piece with Hayley Suviste
  • Sound capture

• Discuss and summarize the two audio papers by Nicole De Brabandere and Graham Flett (2016), and by Robert Willim (2019). What are they about, and how do they make their arguments? What do you hear in the audio papers? Organize your answer in terms of causal, semantic and reduced listening.

Consider your experience listening to radio.earth, and listening to soundscapes, the concept denoting the sonic equivalent of a landscape. This is one possible modality we can adopt in listening: analogous to landscape painting or photography, often nothing in particular gets our attention; the focus is on the background rather than the foreground. Often the foreground is empty.

      A landscape view of Landschaftspark Duisburg-Nord (photo by Mace Ojala).

A landscape view west from Bochum's Bismarckturm toward the Bergbaumuseum (photo by Mace Ojala).

What sounds might you expect, remember, fear, or hope to hear in the above landscapes? What spectral and temporal qualities do those sounds have? What might these places have sounded like in 1923, and what might they sound like in 2123? What about an hour, a week, a month, or a thousand years from now? What might they sound like to a child? To an immigrant? To a factory owner? To a family? To a dog, or a raven? To Siri or Google Assistant or Amazon Alexa? To an MP3 compression algorithm? Which sounds are gone from these soundscapes, and are some of the sounds extinct? One of the photographs is from a Facebook group. Can you imagine a Facebook group which would consist of posts of historical sounds of a city, rather than photographs? One of them is from a Landschaftspark, i.e. a landscape park. If your task was to design and build a "soundscape park", what would you do?

      You can find collected soundscapes online, see for example radio aporee and Record the Earth. Some museums have collected sounds of e.g. work. Do you have a collection of sounds? What's in it, and how is it organized?

    • MP3

• This bleep was recorded on the 1st floor of building GB on the RUB campus when Mace updated his transponder to get access to the seminar room on the 8th floor. It is therefore a sound of power, of access control, of legitimate rights, and of computer software.

• After a few media operations (amplify, mix to mono, compress to MP3, align with original, invert, mix down, amplify) in Audacity, we discover a "ghost" in an MP3. This ghost is the compression artefact of the MP3 algorithm.

      Screenshot of a spectrogram by Mace Ojala (CC BY-NC-SA), with Audacity (GNU GPL v2)

What does the ghost sound like? How would you analyze its causal, semantic and reduced, i.e. spectral, characteristics? What about aesthetic ones?

Try other MP3 compression settings. What happens if you repeat the operations on another audio file? Or use another compression algorithm than the familiar MP3? Which ones do you know and can find in Audacity?
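If you would rather script this null test than click through Audacity, here is a sketch in plain JavaScript with the Web Audio API (the file names are hypothetical, and the crucial "align with original" step is still up to you – see the note on encoder/decoder delay below):

const ctx = new AudioContext();

async function loadBuffer(url) {
  const response = await fetch(url);
  return ctx.decodeAudioData(await response.arrayBuffer());
}

async function playGhost() {
  const original = await loadBuffer('bleep.wav'); // hypothetical file names
  const compressed = await loadBuffer('bleep.mp3');
  const a = new AudioBufferSourceNode(ctx, { buffer: original });
  const b = new AudioBufferSourceNode(ctx, { buffer: compressed });
  const invert = new GainNode(ctx, { gain: -1 }); // phase inversion
  a.connect(ctx.destination);
  b.connect(invert).connect(ctx.destination);
  const t = ctx.currentTime + 0.1;
  a.start(t); // whatever the two signals share cancels out;
  b.start(t); // what remains is the "ghost"
}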

For more on the MP3, you can check out Jonathan Sterne's book MP3: The Meaning of a Format, a media studies classic, published by Duke University Press in 2012. See also Sterne's paper The death and life of digital audio in Interdisciplinary Science Reviews, 2006, vol. 31, no. 4.

• Ryan Maguire's paper The Ghost in the MP3 was published in the Proceedings of the International Computer Music Conference, 2014. The abstract:

      The MPEG-1 or MPEG-2 Layer III standard, more commonly referred to as MP3, has become a nearly ubiquitous digital audio file format. First published in 1993 [1], this codec implements a lossy compression algorithm based on a perceptual model of human hearing. Listening tests, primarily designed by and for western-european men, and using the music they liked, were used to refine the encoder. These tests determined which sounds were perceptually important and which could be erased or altered, ostensibly without being noticed. What are these lost sounds? Are they sounds which human ears can not hear in their original contexts due to our perceptual limitations, or are they simply encoding detritus? It is commonly accepted that MP3's create audible artifacts such as pre-echo [2], but what does the music which this codec deletes sound like? In the work presented here, techniques are considered and developed to recover these lost sounds, the ghosts in the MP3, and reformulate these sounds as art.

• Ryan's project webpage theghostinthemp3.com has audio and image examples, and an explanation of his process. Ryan also has them on Soundcloud if you want to focus on listening.

      Image Example 2. White, Pink, and Brown Noise - Lowest Possible Bit Rate MP3 (8kbps) by Ryan Maguire, from The Ghost in the MP3 project website
• Aphex Twin's Windowlicker (Warp Records, 1999), whose video, directed by Chris Cunningham, caused some controversy at the time...


There are also some intense visuals to be seen in a spectrogram, related to the transmedia operations of looking at sounds and listening to images.

• You can read more about haunted sounds and spectralities in the works of Mark Fisher, especially What is Hauntology?, published in Film Quarterly in 2012. The term "hauntology" (haunting + ontology) was coined by Jacques Derrida in his Spectres of Marx (1993 [1994]). Jonas Čeika puts it well in his video Hauntology, Lost Futures and 80s Nostalgia, speaking about the digital via computer games and other media objects.

    • "SPEAR is an application for audio analysis, editing and synthesis. The analysis procedure (which is based on the traditional McAulay-Quatieri technique) attempts to represent a sound with many individual sinusoidal tracks (partials), each corresponding to a single sinusoidal wave with time varying frequency and amplitude."

Very strange 🙂
• There was an observant question during the seminar about why the MP3 file is longer than the original WAV file. The reason gets at the nitty-gritty of the MP3 file format and its encoder and decoder delays. See questions 1 and 2 in this FAQ of LAME, the MP3 encoder implementation Audacity uses.

• One of the students has explored hauntology, breakbeats, computer fans and vaporwave in their music production.

      Ole also shared his Eratekk breakcore project album IFLF v1.1. What on earth is "breakcore", you might ask? Maybe listening leaves no need for clarification.

    • Recorders

      Photo by Mace Ojala

Try to acquire, either borrow or buy, a high-quality audio recorder, such as a Zoom H4n Pro. Typically you can listen to your recordings directly on the recorder, or access the SD card with an external reader. You can also connect the recorder to a computer, either as an SD card reader or as a sound card.

    • For exploration

• If you succeeded in acquiring a sound recorder, watch a 10-20 minute tutorial or review on YouTube about the sound recorder you got. Use brand and model names for searching. What associations do the brand name (e.g. "Zoom", "Tascam", "Sony", "Nagra") and model name (e.g. "H4n Pro", "DR-05x", "SD") give you?
• Read Anette Vandsø (2016). Listening to Listening Machines. On Contemporary Sonic Media Critique. Leonardo Music Journal.
• Record sounds of machines, especially computers, digital devices and systems, and software. Experiment with recording close and far, landscapes and objects (e.g. the bleep), short sounds (seconds) and long sounds (minutes or tens of minutes), etc. Use headphones while recording. Record at least 30 minutes of material per person. Make sure everyone gets to use the recorders. Be experimental and HAVE FUN!!
  • Listening to machines and algorhythmics

    • Field recordings

      Perhaps you want to listen more closely to computer mice and keyboards? Mechanical keyboards are fashionable with plenty of nerding out on YouTube. People really like to listen to keyboards. Do you have a favorite? Does listening to recordings make you sensitive to hearing keyboards "in the wild"? Which words would you use to describe keyboard sounds? What can we infer from listening... what is the user doing, are they happy or sad, who are they? What would be your mediatheoretical take on this whole phenomenon?

For reading on the revival of mechanical keyboard culture, check David Rambo's Building, Coding, Typing, published in 2021 in Computational Culture, issue 8.

      This article describes the technoculture of custom mechanical keyboards, with an emphasis on the author’s experience of building, programming, and relearning to type Colemak on a non-standard mechanical keyboard called the Planck. The Planck is not merely a technical object but a metonymy for a technical activity. Drawing on Gilbert Simondon’s philosophy of individuation and theory of the technical object, the author proposes that technology is first and foremost a process that modifies phenomena by intentionally intervening in the media that condition them. Human experience and cultural meaning are no less integral to technology’s purposeful coordination of heterogeneous processes than the physical and electronic components. With building, coding, and typing on the Planck as a guide, the article argues that proprioceptive preference, coded customization, visual taste, materials quality, consumer fads, community belonging, and personal expression all factor into the technological individuation of the mechanical keyboard technological activity. In particular, it criticizes both Simondon’s theory of the technicity of technical objects and Wolfgang Ernst’s media archaeology for keeping factors other than physical materiality outside the domain of technical media. Ultimately, this article contributes to Eric Schatzberg’s work on the technology concept. Its formulation of technology as its own individuation intervenes between predominant media theories that rely on a either a paradigm of prosthesis or the materiality of nonhuman actors to explain the nature of technology.

You might want to look into autonomous sensory meridian response (ASMR) on YouTube, TikTok etc. What is it about? Is it creepy or cool? Disgusting or erotic? Perhaps you want to try out ASMR in your audio paper...

Here is artist Carolyn Kirschner's take on ASMR, Iconoclash Slow Squeeze, at Matter. Non-Matter. Anti-Matter, ZKM Karlsruhe.

    • Electromagnetic radiation, EMR

This extremely simple contraption, a coil, picks up electromagnetic radiation (EMR). It is basically the same as a guitar pickup.

Here are seven minutes of EMR recordings from devices in Mace's home, such as computer screens, a transformer, LEDs, a small speaker, a synthesizer, a smartphone connecting to the internet, et cetera. Do you recognize some of the sounds?

      A simple D.I.Y. coil for listening to EMR (Audio and photo by Mace Ojala).

Consider trying out EMR listening; it's very simple, especially with the help of a good-quality recorder which also has a good-quality amplifier.

See Shintaro Miyazaki's video Probing, tapping, listening above from 15:45 onwards, as well as the TV spot with Hayley Suviste, also above, where they listen to street advertisements.

🩇 Suggested bonus reading: Thomas Nagel's absolutely classic philosophy of mind paper What Is It Like to Be a Bat?, published in 1974 in The Philosophical Review, Vol. 83, No. 4, pp. 435-450.

• Here is a little digital instrument for you which combines concepts about synthesis we learned earlier in the course with our recordings. What the program does is transfer the amplitude, i.e. loudness, of a field recording to the pitch of a synthesizer. Can you still recognize the original sound? Can you make a copy of the program and use your own recordings to drive the sound? Could you imagine some other computer program which takes data from one source and uses it to generate something new in some other context?
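A guess at how such a program might look in p5.js (the file name is hypothetical): a p5.Amplitude object follows the loudness of a looping recording, and draw() maps that level onto an oscillator's pitch.

let recording, amplitude, sound;

function preload() {
  recording = loadSound('recording.wav'); // hypothetical file name
}

function setup() {
  createCanvas(400, 400);
  recording.disconnect(); // follow the recording without hearing it
  recording.loop();
  amplitude = new p5.Amplitude();
  amplitude.setInput(recording);
  sound = new p5.Oscillator();
  sound.start();
}

function draw() {
  // getLevel() returns roughly 0–1; rescale that to a pitch range
  let level = amplitude.getLevel();
  sound.freq(map(level, 0, 0.3, 100, 1000));
}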

• Here is another digital instrument for you which manipulates a recording at the sample level, typically 44100 samples per second, and distorts it in unusual ways. Does this kind of distortion remind you of sounds you have heard somewhere earlier? Try making a copy, uploading a few seconds of your own recording experiments, and changing line 4 to point to your file. Use the three sliders to make new sounds.

      Watch out, this can get really loud and noisy!

    • Blaues Rauschen

The Blaues Rauschen festival is a sound festival in the Ruhrgebiet. The program is quite diverse, ranging across music, field recordings, synthesis, sound performance, noise etc., but all experimental and very digital. Such a festival experience can be a possible source of audio paper inspiration. Speaking with the festival organizers, artists and audience members is always a good idea; don't hesitate!

       

      Tomoko Sauvage with her work Audio Bowls (photo by Mace Ojala).

      Ji Youn Kang with New Work Live (photo by Mace Ojala).

• Mace encountered this sound when riding his bike home in Dortmund from one of the Blaues Rauschen concerts. Can you recognize it? Does it cause some affect in you, and what could the sound mean? What do its waveform and spectrum look like? How would you describe the sonic qualities of the sound? Tip: it's a field recording from the street.

    • For exploration

      • Make more recordings; use headphones to monitor, and listen to your recordings afterwards. Have fun!
• Read Winthrop-Young, Geoffrey. "Translator's introduction to Real Time Analysis, Time Axis Manipulation". Cultural Politics 13, no. 1 (2017): 1–5.
• Read one of the following; you might have encountered this canon item in another seminar already 🙂
• Kittler, Friedrich. "Real Time Analysis, Time Axis Manipulation". Translated by Geoffrey Winthrop-Young. Cultural Politics 13, no. 1 (2017): 1–18.
        • The above Kittler in original German, if you find it.
• Krämer, Sybille. "The Cultural Techniques of Time Axis Manipulation: On Friedrich Kittler's Conception of Media". Theory, Culture & Society 23, no. 7–8 (1 December 2006): 93–109. https://doi.org/10.1177/0263276406069885.
• Read the very beginning of Curtis Roads' Microsound (MIT Press, 2001), pages 2-9, up until the section Infinite Time Scale.
• Draft 3 audio paper ideas. Think of a potential name for your audio paper, and some references. Write down a paragraph on each idea.
  • Granular synthesis and microsound

    • Granular synthesis

The world of sounds larger than a sample but smaller than a note is called "microsound". Granular synthesis is a family of techniques for organizing "grains", small audio objects, in time, and serves as a "microscope" into sound.

Figure from Curtis Roads, Microsound (MIT Press, 2001), p. 5

Granular techniques can be used on synthetic sounds, or as effects on recorded or live sound. Besides experimental stuff, some practical applications of granular synthesis include time-stretching without affecting the pitch (e.g. changing playback speed on YouTube, or a DJ changing the speed of a record to beat-match) and space effects, e.g. reverberation. Streaming audio over the internet is also a kind of "granular synthesis", as audio is split into small TCP/IP packets for transmission and re-organized at the receiving end. Sometimes packets don't arrive in the correct order, causing the glitching that is now, after Corona, so familiar to all of us from Zoom meetings. You might have learned about the packet switching technologies of the Internet in another media studies seminar.

Friedrich Kittler's famous text Real Time Analysis, Time Axis Manipulation discusses similar ideas; it is about the same topic as granular synthesis, namely the extremely precise control that computer technology enables, affords and dictates. Kittler is a founding figure of so-called "German media studies", and worked here at RUB from 1987 until 1993. You can read more about Kittler and his legacy in a dedicated special issue of Theory, Culture & Society.

Some classics on the topic of granular synthesis, and the related "pulsar synthesis", include Curtis Roads' book Microsound (2001, MIT Press), which was a reading last week, the oeuvre of Autechre, and Kim Cascone's album Pulsar Studies (2000, Tiln).

You can explore granular synthesis based on recordings with EmissionControl2, a free granulator program created by Curtis Roads and others. Use it to explore your field recordings.

      Screenshot of granulation process by Mace Ojala (CC BY-NC-SA), using EmissionControl2 (GNU GPL v3)

Check out also the programs Granular scanner and Granular kiss, which emulate audio glitching. Both are built in p5.js for you to learn about granular synthesis; see if you can follow the code in the functions called granulate() and granulate_p5(), respectively. Tip: read the code between { and } aloud. You can make a copy of the program via the menu File→Duplicate on the p5.js web editor, and upload your own recordings. If you are already a bit familiar with p5.js, you could try manipulating the code and see what happens 😀 It is unclear whether audio files longer than a few minutes work – experiment! You can always cut up your longer recordings with Audacity, as well as change the encoding from m4a (MP4), MP3, AAC or other formats to WAV, which is uncompressed audio.
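To see the core of the idea without leaving p5.js, here is a minimal granular player in the same family – a sketch under assumptions: p5.sound is loaded, and a recording has been uploaded as field.wav (a hypothetical name). It fires short grains from random positions in the file:

let source;

function preload() {
  source = loadSound('field.wav'); // hypothetical file name
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  // fire a grain every 10th frame, i.e. about 6 grains per second at 60 FPS
  if (frameCount % 10 === 0 && source.isLoaded()) {
    let cue = random(source.duration());  // random position in the recording
    let grain_length = random(0.02, 0.1); // 20-100 millisecond grains
    // arguments: startTime, rate, amp, cueStart, duration
    source.play(0, 1, 0.5, cue, grain_length);
  }
}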

    • Tools for audio paper

      So far we have learned about the following tools which you can mix-and-match for building your audio paper:

      • Audacity (throughout, and AK Medienpraxis workshop)
• Audio loopback devices for recording audio coming out of your computer (seminar Let's build a synthesizer)
• Dexed (seminar Let's build a synthesizer)
• FM synthesis with p5.js (seminars Time, silence and data sonification and Let's build a synthesizer). Adopt and repurpose programs linked on this Moodle page.
• Pure Data aka Pd (seminar Let's explore sound and senses by building more synthesizers)
      • High quality audio recorders (seminar Sound capture)
      • Electro magnetic radiation with a coil (seminar Listening to machines and algorhythmics)
      • EmissionControl2 (seminar Granular synthesis and microsound for real; audio paper redux)
    • For exploration

      • Find and collect some academic sources for your audio paper. There are plenty in the literature section of this OER.
      • Start developing your script.
      • Collect/curate some audio material, based on your choice of audio paper topic.
• Import audio files into Audacity and get comfortable moving pieces around, i.e. composing.
  • Audio paper workshop

• WDR Hörspielspeicher re-published the following audio drama

      On the Tracks - Auf der Suche nach dem Sound des Lebens

Here's the blurb from WDR, translated from the German:

"Every person is a cosmos, and consequently carries it around with them. Where to? The radio play follows unknown people on the street. Out of the protocols of these pursuits and the music of Console emerges the soundtrack of seven lives."

      WDR Hörspielspeicher republication of Andreas Ammer's 50 minute audio drama On the Tracks - Auf der Suche nach dem Sound des Lebens, originally published by WDR in 2002.

Radio dramas are quite a marginal and interesting format. If a public broadcaster like WDR (the Finnish YLE and the Danish DR also have some amazing productions, both ongoing and in the backlog) wouldn't produce them, nobody would! On the other hand, the second wave of podcasting is going well, and audio books are quite popular too. How would you characterize how audio dramas, podcasts and audio books differ from one another? How is an audio paper (Krogh Groth and Samson 2016) different from an audio drama? How are they similar? What is good for what purpose? What that we have learned on this seminar would be useful for making audio dramas? Do you know some so-called "audio games"?

If you listen to On the Tracks - Auf der Suche nach dem Sound des Lebens, how are the creators composing everyday sounds, music, tone and body-noises to create a mood, an atmosphere, a story and a sense of place and time? Is it convincing? What would you do otherwise? What machines, computers and software can you imagine in the drama?

    • A soundscape from Klagenfurt

      Audio by Mace Ojala (CC BY-NC-SA)

      Audio, photo and spectrogram image from Audacity (GNU GPL v2) all by Mace Ojala (all CC BY-NC-SA).

Isolating that very high-pitched sound object close to 19000 Hz between 00:45 and 00:57: first with a high-pass filter at 16000 Hz, then pitch-shifting -1 octave and -3 octaves to bring it within human hearing range. The original was picked up by the high-fidelity recorder, but is almost inaudible, especially in the noisy street context.

      Spectrogram image by Mace Ojala (CC BY-NC-SA), using Audacity (GNU GPL v2).

      -1 octaves (by Mace Ojala, CC BY-NC-SA)

      -3 octaves (by Mace Ojala, CC BY-NC-SA)

This is the sort of thing we find in our sonic subconscious when we go through some (psycho/techno)analysis beyond our everyday perception! ;) What does causal, semantic and reduced listening (Chion 1993) tell us about that sound?
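A rough p5.js approximation of the same excavation, hedged (the file name is hypothetical): a p5.Filter in 'highpass' mode strips away the street below the whine, and lowering the playback rate transposes the result down – rate 0.5 is one octave, 0.125 three octaves, though unlike Audacity's pitch shift this also slows the recording down. Because the rate change happens before the filter in this signal chain, the cutoff has to be scaled down too (16000 Hz × 0.125 = 2000 Hz):

let street, filter;

function preload() {
  street = loadSound('klagenfurt.wav'); // hypothetical file name
}

function setup() {
  createCanvas(400, 400);
  filter = new p5.Filter('highpass');
  filter.freq(2000);      // the 16 kHz cutoff, scaled by the rate of 0.125
  street.disconnect();    // detach from the speakers...
  street.connect(filter); // ...and route through the filter instead
  street.rate(0.125);     // three octaves down: ~19 kHz lands near 2.4 kHz
  street.loop();
}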

    • Digital Audio Workstations (DAW)

Besides our familiar Audacity, we can get a little peek into two other, commercial digital audio workstation (= DAW) software packages, namely Ableton Live and FL Studio. You can check out Maya-Roisin Slater's interview with Gerhard Behles and Robert Henke, titled One adds and the other subtracts, about Live and how it rose from the needs of their project Monolake – an interesting story of art and technology merging in software, and then getting a life (pun intended) of its own. FL Studio, too, is a development of Fruity Loops, an influential, loop-based sample player software. Both, as well as Audacity and Pure Data (Pd) which we used to make our own synthesizers, can host "virtual" instruments and effects from the lively market of "VSTs". Live, FL Studio, Audacity and other DAWs are organized around the timeline view with tracks, automations, instrumentation etc. Such a left-to-right view is not unfamiliar from sheet music. For the history of this view and its implications, you can see Andrew Lison's paper "New Media, 1989: Cubase and the New Temporal Order" in Computational Culture, link in the course syllabus.

(timeline view of Steinberg Cubase 1.51, quoted from Andrew Lison's "New Media, 1989: Cubase and the New Temporal Order" in Computational Culture, 2021)

Seriously stranger, non-linear, process-oriented and "Bergsonian", or even queerer (non-straight), time-axis manipulation (Kittler, see also Winthrop-Young; Krämer) is achieved with software such as Pure Data (Pd), MaxMSP and SuperCollider instead.

The track or "performance" view of DAWs often simulates, even skeuomorphically, a studio mixing desk... which was elevated to the status of an instrument itself by legendary Jamaican dub studio engineers such as Lee "Scratch" Perry, King Tubby and Scientist... itself a major inspiration for the "dub techno" sound, looping back to Monolake mentioned above, as well as for "dubstep" from the UK.

Here is YouTube user mihrantheupsetter demonstrating the style of performing with the mixing desk.

And here is the classic Berlin sound, aka Mark Ernestus and Moritz von Oswald, aka Basic Channel.

Observe how the sound of the equipment (noise, compression, distortion, what can be done with the controllers) plays into the work. Observe and appreciate the sound of your equipment in your audio paper.

      A definition for culture:

      culture
      praxis influences technology influences praxis
    • For exploration

Work on your audio paper, and prepare to present your (draft) project in Audacity and/or other audio editing software.

      • What is the current status?
      • What is your research question?
      • What are your sources... literature, sounds, ideas, designs, inspirations... stuff from other seminars is more than welcome
      • How is audio the unique and best format to conduct this research?
      • What do you need help or feedback with?

      Work together and collect feedback to improve the paper content (interestingness, relevance, focus etc.) and implementation (levels, noise, pacing and flow, project organization etc.).

  • fin!

    • Thank you!

      This marks the end, thank you everyone for choosing to think about software and the sonic subconscious of the digital!

Complete your audio paper, and make it available for instance via SoundCloud, or on social media such as Instagram. What if it were available physically on cassette, vinyl, USB stick or something else? Use it to initiate a conversation with peers, friends, family or the public.

If you have enjoyed this open educational resource (OER) or have found something for your teaching, studies, or general inspiration, it would be great to hear from you! Comments, critique, and other discussion are also welcome of course; you can email me at mace.ojala@ruhr-uni-bochum.de.

Keep your ears open! 🙂