The world of sounds larger than a sample but smaller than a note is called "microsound". Granular synthesis is a family of techniques for organizing "grains", small audio objects, in time; it serves as a "microscope" into sound.
Figure from Curtis Roads, Microsound (MIT Press, 2001), p. 5.
Granular techniques can be used on synthetic sounds, or as effects on recorded or live sound. Besides experimental uses, practical applications of granular synthesis include time-stretching without affecting the pitch (e.g. changing the playback speed on YouTube, or a DJ changing the speed of a record to beat-match): short, overlapping grains are played back while the read position moves through the source at a different rate. Another application is spatial effects such as reverberation. Streaming audio over the internet is also a kind of "granular synthesis": the audio is split into small TCP/IP packets for transmission and re-assembled at the receiving end. Sometimes packets don't arrive in the correct order, causing the glitching that is now, after Corona, so familiar to all of us from Zoom meetings. You might have learned about the packet-switching technologies of the Internet in another media studies seminar.
Some classics on the topic of granular synthesis and the closely related pulsar synthesis include Curtis Roads' book Microsound (MIT Press, 2001), which was last week's reading, the oeuvre of Autechre, and Kim Cascone's album Pulsar Studies (Tiln, 2000).
You can try exploring granular synthesis based on recordings with EmissionControl2, a free granulator program created by Curtis Roads and others. Use it to explore your field recordings.
Screenshot of granulation process by Mace Ojala (CC BY-NC-SA), using EmissionControl2 (GNU GPL v3)
Check out also the programs Granular scanner and Granular kiss, which emulates audio glitching. Both are built in p5.js for you to learn about granular synthesis; try if you can follow the code in the functions called granulate() and granulate_p5(), respectively. Tip: read the code between { and } aloud. You can make a copy of a program via the menu File→Duplicate in the p5.js web editor and upload your own recordings. If you are already a bit familiar with p5.js, you could try manipulating the code and see what happens 😀 It is unclear whether audio files longer than a few minutes work, so experiment! You can always cut up your longer recordings with Audacity, as well as convert them from M4A (MP4), MP3, AAC or other formats to WAV, which is uncompressed audio.
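If you want a feel for what a granulate() function can look like before opening the editor, here is a minimal sketch in p5.js with the p5.sound library. It is not the actual code of Granular scanner or Granular kiss; the file name, grain length and stretch amount are made up for illustration.

```javascript
// Minimal granular playback sketch (p5.js + p5.sound).
// Not the code of Granular scanner / Granular kiss, just the same idea:
// play many short grains while moving a read position through the file.

let sample;            // the loaded recording
let readPosition = 0;  // where in the file (in seconds) the next grain starts

function preload() {
  // Hypothetical file name: replace with one of your own recordings.
  sample = loadSound('fieldrecording.wav');
}

function setup() {
  createCanvas(400, 100);
  frameRate(25); // draw() runs about 25 times per second, so ~25 grains per second
}

function draw() {
  background(220);
  fill(0);
  text('granulating... position ' + nf(readPosition, 1, 2) + ' s', 10, 50);
  if (sample.isLoaded()) {
    granulate();
  }
}

function granulate() {
  const grainLength = 0.08; // 80 ms grains
  const rate = 1.0;         // playback rate of each grain (changes pitch)
  // p5.SoundFile.play(startTime, rate, amp, cueStart, duration)
  sample.play(0, rate, 0.5, readPosition, grainLength);

  // Advance the read position more slowly than real time: the recording
  // is stretched in time while the pitch of each grain stays the same.
  readPosition += grainLength * 0.25;
  if (readPosition > sample.duration() - grainLength) {
    readPosition = 0; // loop back to the beginning
  }
}

function mousePressed() {
  userStartAudio(); // browsers often require a click before audio can start
}
```

A fuller granulator would also put a short fade-in and fade-out (a "window") on each grain to avoid clicks, and often randomizes the grain position a little; those are good things to look for when you read granulate() and granulate_p5().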
Tools for audio paper
So far we have learned about the following tools, which you can mix and match to build your audio paper:
Audacity (throughout, and AK Medienpraxis workshop)
Audio loopback devices for recording audio coming out of your computer (seminar Let's build a synthesizer)
Dexed (seminar Let's build a synthesizer)
FM synthesis with p5.js (seminars Time, silence and data sonification and Let's build a synthesizer). Adopt and repurpose the programs linked on this Moodle page; a minimal FM sketch also appears after this list.
Pure Data, aka Pd (seminar Yeah more synthesizers)
High quality audio recorders (seminar Sound capture)
Electromagnetic radiation with a coil (seminar Listening to machines and algorhythmics)
EmissionControl2 (seminar Granular synthesis and microsound for real; audio paper redux)
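For reference, here is a minimal FM synthesis sketch in p5.js with the p5.sound library, of the kind we worked with in the synthesizer seminars. It is a generic illustration rather than one of the linked programs; the frequencies and modulation depth are arbitrary starting points.

```javascript
// Minimal FM synthesis sketch (p5.js + p5.sound): a modulator oscillator
// wobbles the frequency of a carrier oscillator.
let carrier, modulator;

function setup() {
  createCanvas(400, 100);

  carrier = new p5.Oscillator('sine');
  carrier.freq(220);        // base frequency in Hz
  carrier.amp(0.5);
  carrier.start();

  modulator = new p5.Oscillator('sine');
  modulator.disconnect();   // don't send the modulator to the speakers
  modulator.freq(55);       // modulation rate in Hz
  modulator.amp(110);       // modulation depth in Hz
  modulator.start();

  carrier.freq(modulator);  // let the modulator drive the carrier's frequency
}

function draw() {
  background(220);
  // Map the mouse to modulation rate and depth so you can play with the sound.
  modulator.freq(map(mouseX, 0, width, 1, 440));
  modulator.amp(map(mouseY, 0, height, 0, 440));
}

function mousePressed() {
  userStartAudio(); // browsers often require a click before audio can start
}
```

Moving the mouse changes how fast and how deeply the modulator wobbles the carrier; at slow rates this is heard as vibrato, and at audio rates it turns into new timbres.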
For exploration
Find and collect some academic sources for your audio paper. There are plenty in the literature section of this OER.
Start developing your script.
Check chapters 8 (Sound and Meaning) and 9 (Sound for Story) of our textbook Studying Sound by Karen Collins.
Remember that your audio paper is a research undertaking.