Seeking the Unstable: A Practice Amidst Digital Analogies

By Marcel Sagesser
B.A., Bern University of the Arts, 2003
M.A., Bern University of the Arts, 2007
M.A., Zurich University of the Arts, 2010

Thesis submitted in partial fulfillment of the requirements for the Degree of Master of Arts in the Department of Music at Brown University

Providence, Rhode Island
May 2018

This thesis by Marcel Sagesser is accepted in its present form by the Department of Music as satisfying the thesis requirements for the degree of Master of Arts.

Date: Todd Winkler, Advisor

Approved by the Graduate Council

Date: Andrew G. Campbell, Dean of the Graduate School

Acknowledgments

First, many thanks to the four violists Ashley Frith, Jordan Dykstra, David Schnee and Zsolt Sörés, who have not only spent so many hours with me in the studio and on stage, but have also taken the risk of not knowing what exactly would happen in the concerts. Their ideas have contributed much to this work, and without their sounds, Pulsations [Bitumen] would not exist. For reading and reviewing my writing, for motivating me to polish my many ideas, and for contributing inspiration for my music, I am grateful to Kiri Miller, Kristina Warren and Todd Winkler. For continuously engaging me in critical conversation, I thank my mentors Vijay Iyer, James Moses, Eric Nathan, Ed Osborn, and Lu Wang. For contributing resources and critical thoughts, very special thanks to my colleague Brian House, who is also the developer of the Braid software that I used in this project. The live concert at Brown University in February 2018 was primarily supported by the Department of Music, by Butch Rovan and the Brown Arts Initiative. Thank you.

Contents

Introduction: Defining a Field of Inspiration
    Digital and Analog
    Instability
    Bitumen and the Video Projections
First Instability: Playing the Meta-Instrument
    The Performative Crisis
    My Performance Instrument
    Stuttering and Swarming
    Sound Material: Digitally Shaped Viola
    Sound Material: Sine Tones
    Sound Material: Electronic Beats
    Playing the Sound Materials
    The Idea of the Gate
Second Instability: The Violist's Workstation
    Why the Viola?
    "Ensemble-ness" of the Ensemble
    The Violists and the Mechanical Pulse
    A Videogame or a Notation System?
    The Notation's Digital Aesthetics
    Inspiration: Graphic Notation, Tabs & Blocks-on-Grids
    The Violist as a Software Instrument
Conclusion: A Field of Potentialities
Bibliography
Appendices
    Excerpts from the First Interview with Violist Ashley Frith, November 2017
    Excerpts from the Second Interview with Violist Ashley Frith, April 2018
    Excerpts from the Interview with Violist David Schnee, March 2018
    Excerpts from the Interview with Violist Zsolt Sörés, February 2018

Figures

Figure 1: Live performance of Pulsations [Bitumen]
Figure 2: Stills from the three-channel video work
Figure 3: Playing the meta-instrument
Figure 4: The on-screen interface for performance
Figure 5: Distributions used for deviating pulses
Figure 6: Self-made deviation plugin
Figure 7: Applying a looped amplitude envelope pattern
Figure 8: The meta-instrument is controlling the algorithm
Figure 9: Braid code for a rhythm pattern
Figure 10: Musical notation of the afore-given Braid code
Figure 11: The model of an analogue synthesizer with a 'Low Frequency Oscillator'
Figure 12: Self-built Max For Live device that is the basis for the notation system
Figure 13: Network receiving part of the notation system
Figure 14: Paper list as mnemonic for performance: overview of rhythm presets
Figure 15: On-screen notation for Pulsations [Bitumen]
Figure 16: On-screen notation for Pulsations [Bitumen]
Figure 17: On-screen notation for Pulsations [Bitumen]
Figure 18: The author's notation system versus a potential equivalent in traditional staff notation
Figure 19: "Blocks-on-grids" notation
Figure 20: The author's notation system versus a potential equivalent in traditional staff notation

Media Samples

Media Sample 1: Full documentation of the live performance of Pulsations [Bitumen]
Media Sample 2: Uniform equal distribution
Media Sample 3: Gaussian standard normal distribution
Media Sample 4: Live performance, irregular "swarming" section
Media Sample 5: Live performance, irregular "swarming" section
Media Sample 6: Demonstration of gated live viola
Media Sample 7: Demonstration of sine tones
Media Sample 8: Demonstration of Braid-generated beats
Media Sample 9: Long amplitude envelopes for viola
Media Sample 10: Short amplitude envelopes for viola
Media Sample 11: Jordan Dykstra demonstrates the graphic notation
Introduction: Defining a Field of Inspiration

Digital and Analog

Pulsations [Bitumen] is a one-hour-long music work for an ensemble of four violists, an electronics performer, and speakers, with accompanying visuals. The central idea driving the creative process of Pulsations [Bitumen] is to challenge the boundary between the "digital" and the "analog," along with the various "instabilities" that result from crossing from analog to digital or from digital to analog. This becomes audible and visible primarily through conceptualized forms of mechanical pulse and its various deviations, through the design of a "meta-instrument" that combines analog, human-made timbre (the viola) with digitally made temporal contour, and through the graphic notation system that I developed in the scope of Pulsations [Bitumen]. The two terms are widely used as a pair of opposites, foremost in electronic music practices, in their electromechanical sense: analog is a way of measuring the contours of the world via an electric current that oscillates in a continuous field of voltage, while digital systems are based on a previously defined space made up of discrete values. As Roger Moseley observes, "Analog modes trace continuity, materially delineated by vector, contour, or waveform; at the same time, like metaphors, they span and measure the gap that separates the resembled from the resembling" (Moseley 2016, 85). In this gap between the resembled and the resembling lies the virtual, in which many parts of my projects are situated. Analog and digital in this sense open up a vast field of metaphors, which have served as inspiration in the creation of Pulsations [Bitumen]. The chart below lists the various metaphors that I have made productive in this process, many of which I will investigate in this essay.
Digital                          Analog
Simulation of real-world         Real-world
Discrete                         Continuous
Machine-made                     Human-made
Inorganic                        Organic
Artificial / Stable              Natural / Decay
Artificial amplitude envelopes   Timbre of viola
Virtual                          Real

The thread introduced here will subsequently lead me to discuss my music as sampling music, or a live remediation, drawing on Bolter and Grusin's definition: "we call the representation of one medium in another remediation, and we will argue that remediation is a defining characteristic of the new digital media" (Bolter and Grusin 1999, 45). In Pulsations [Bitumen], the live viola, while continuously audible in the concert space in its acoustic version, is remediated through a digital system of algorithms and amplitude envelopes and made audible via speakers as "live samples." I will hence investigate the use of recording technology in the live concert. The materials that I prepare beforehand in the studio are not a fixed "text" or recording [1], but raw materials, which Joseph G. Schloss would call my "paint" [2], and it is this paint that must change in the moment I bring it to the stage. In their article "Live mediation: performing concerts using studio technology," Yngvar Kjus and Anne Danielsen describe this practice as

[1] What I imply with "text" is the conventional practice of Western art music, which is based on a score that is often, especially in traditional musicology, considered the work of art. With "recording" I am referring to the practice that has become popular in recent decades, especially in electronic dance music contexts: the work of art is present not as a score, but as fixed media tracks (i.e. different stems for percussion, bass, synthesizers) that are then live-mixed on stage and thereby interpreted.
the bringing of materials "from the studio to the temporal, spatial and social settings of the live performance" (Kjus and Danielsen 2016, 335). In this essay I will also look at what is at play when the musicians (the four violists and myself) make music together on stage. I will describe how each violist is surrounded by a "workstation" consisting of a computer screen and headphones, and how their acting resembles the playing of a computer game, while I create and update the rules for this "game" in real time. Hence, I have the four violists at my disposal as quasi "software instruments" or "plugins," yet they constitute an agentive [3] potential. It is this particular "use" of the violists that I believe is most characteristic of Pulsations [Bitumen] and that is my original contribution to the field.

https://www.youtube.com/watch?v=5mt-3IfszdA
Media Sample 1: Full documentation of the live performance of Pulsations [Bitumen]

[2] Schloss, in his book Making Beats, describes a common misunderstanding around hip-hop: "if you believe that musicians should make their own sounds, then hip-hop is not music, but, by the same token, if you believe that artists should make their own paint, then painting is not art. The conclusion, in both cases, is based on a preexisting and arbitrary assumption" (Schloss 2004, 23).

[3] In this essay, when using "agency," I am drawing on Laura Ahearn's 2001 paper: "The provisional definition I offered at the outset of this essay—that agency refers to the socioculturally mediated capacity to act—leaves a great deal unspecified. For example, where is agency located? Must agency be human, individual, collective, intentional, or conscious? Some studies of agency reinforce received notions about Western atomic individualism, while others deny agency to individuals, attributing it instead only to discourses or social forces.
It is absolutely crucial that theorists consider the assumptions about personhood, desire, and intentionality that might unwittingly be built into their analyses" (Ahearn 2001, 130). When describing the violists in Pulsations [Bitumen], I understand agency as different from the sole momentary making of choices, for I aim to include the embodied knowledge, experience and "habitus" (Bourdieu 1977) that the performers bring into the concert, along with their expectations and assumptions, all of which build on a larger "cultural archive" that they have acquired during their lifetimes (Tomlinson 2015, 15).

Figure 1: Live performance of Pulsations [Bitumen] with four violists, on-screen notation, an electronics performer, speakers and video projections

Instability

Being in the piece myself as the electronics performer, I was invested in designing a "meta-instrument" (consisting primarily of recording studio technology) that affords the improvisational (or a practice in which composition and improvisation move closer together), thus rendering my practice analog, as an analogy to the playing of an acoustic instrument: bodily, in-the-moment movement in a continuous rather than a discrete space. This is an undertaking that I do not take quite literally, as will be revealed later on. In "From Outside the Window: Electronic Sound Performance," Pauline Oliveros describes her primary concern in electronic music: "I was not interested in studio techniques for constructing a composition. Clearly, my devotion was to improvisational real-time performance" (Oliveros 2011, 469). Oliveros, as a logical consequence, was interested in operating her electronics in an unconventional way. "Soon, through improvisation I was creating my first electronic music. … I had created a very unstable nonlinear music-making system" (Oliveros 2011, 469).
With her hands on two dials she controlled the frequencies of two inaudible oscillators, of which the difference tone was the audible result. It was improvisation, along with a built-in instability, that rendered her practice a practice of playing. What strikes me most in Oliveros's words here is the term unstable. When I read her treatise, I realized that I had, in my thinking and in my artistic practice, crafted an (admittedly reductive!) notion of the digital as a pure, stable, boring counterpart to the analog as full of noise, errors, organicism, and variation. Retroactively, I realized that in most of my previous artistic work I had been, unconsciously, creating digital systems that somehow allow, afford, or even provoke the unstable, by which I mean a certain tension created by spontaneity and precarious moments in performance, in fact as much for the audience members as for the performers themselves. Following Oliveros, I realized I needed to design my system for instability more consciously. It is Pauline Oliveros who made me deeply connect the seeking of instability to my inspirational field of digital/analog, as the title of this essay suggests.

Bitumen and the Video Projections

The music performance of Pulsations [Bitumen] is accompanied by a three-channel video work, which consists of three independent slide shows of still imagery of asphalt surfaces (for which bitumen is a synonym), projected onto three walls of the theater, behind and around the musicians. A software algorithm continuously chooses new subsections from large images and visibly, or almost visibly, downgrades them in resolution, as illustrated in figure 2. Asphalt, a technological product that is made and spread on roads by machines, is used precisely for its regularity, as it minimizes the resistance with a car's tires, unlike an unpaved road or a meadow. Yet a closer look at asphalt's "regular" surface unveils a remarkable detail.
The tiny grains of material of which the machine-made surface consists are far from regular in their arrangement. On the contrary, the particles could not look more irregular, with their different sizes, shapes and colors, and yet they are in a way reminiscent of digital pixels (or is it the other way around?). In this sense, the road is remarkably similar to the computer screen, which is made up of single pixels that only in their totality, and seen from a distance, form a seemingly seamless surface. I am using this comparison to point to differences between inner texture (when zooming in) and outer shape (when zooming out). The basis for my video installation is a set of real-world photographs of asphalt that have been digitized. The installation that I designed algorithmically chooses, every now and then, a new subsection of the large-scale, high-resolution pictures and drastically downgrades it in resolution. The viewer is thus "tricked": they are never quite sure whether the "bad," pixelated look of the asphalt is poor digital quality, a digitally created simulacrum, or a purposefully chosen aesthetic. My installation remediates the photograph of analog, real-world asphalt with digital artifact and contour (the intentional downgrading in resolution). Thus, what I am doing in the video installation is conceptually deeply connected to my music-making practice, as the same principles of remediation are utilized. While on the visual level I take photographs of real-world asphalt and shape them digitally, on the sonic level I take the human-made viola sound and shape it with digital temporal contour, namely amplitude envelopes.
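The core of this visual procedure, choosing a random subsection and pixelating it, can be sketched as follows. This is a minimal, self-contained illustration of the principle rather than the installation's actual code; the function names, the grayscale pixel grid, and the block-averaging strategy are my own simplifications for this essay.

```python
import random

def choose_subsection(image, crop_w, crop_h, rng=random):
    """Pick a random crop_w x crop_h subsection of a 2-D grayscale pixel grid."""
    h, w = len(image), len(image[0])
    x = rng.randrange(w - crop_w + 1)
    y = rng.randrange(h - crop_h + 1)
    return [row[x:x + crop_w] for row in image[y:y + crop_h]]

def downgrade(image, factor):
    """Pixelate: replace each factor-by-factor block of pixels by a single
    averaged value, so that the grid becomes visible when the result is
    displayed at the original size."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [image[j][i]
                     for j in range(y, min(y + factor, h))
                     for i in range(x, min(x + factor, w))]
            avg = sum(block) // len(block)
            row.extend([avg] * min(factor, w - x))
        for _ in range(min(factor, h - y)):
            out.append(list(row))
    return out
```

A larger `factor` corresponds to a more drastic downgrading; varying it over time is what produces the "visibly or almost visibly" degraded slides described above.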
In both cases, I am crafting [5] a new material that is made up of "organic" inner texture and digital outer shape, and in both cases the gap between the original (asphalt, or viola) and the digital copy (downgraded asphalt picture, or live viola samples) provokes a fascinating "interplay," as described by Bolter and Grusin: "The interplay happens, if at all, only for the reader or viewer who happens to know both versions and can compare them" (Bolter and Grusin 1999, 45). For both the viola and the asphalt, it is true that the audience members know both the original and the copy. With asphalt, we assume that everybody in the audience knows how it looks in the real world and will thus make the connection between my remediated version and the original. With the viola it is even more striking: not only is the remediated viola audible through the speakers, but the original viola (its acoustic, direct sound) is simultaneously audible in the concert space. It is as if the audience heard eight violas where there are in fact only four.

[4] An "amplitude envelope" is the function of amplitude over time, or the temporal course of what we perceive as a single note of an instrument, namely its "starting transient and subsequent decay of sound" (Gough 2007, 585). Gough adds, "In general, the envelope has a starting transient" and is an "important a factor in the recognition of any musical instrument" (Gough 2007, 584).

[5] The term "craft" is interesting as it involves a "habitus" (Bourdieu 1977) that reflects my artistic practice. Retroactively, I observe that I have built a personalized, embodied knowledge that has become my habitus, namely the construction of novel sounds that are a mix of "organic" and "artificial" material. "Crafting" sounds has thus become a primary part of my practice. In the words of Schloss, I am constantly making and remaking my own "paint" (Schloss 2004, 23).
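The basic principle of imposing an artificial amplitude envelope on a continuous signal, in a loop, can be sketched as follows. This is a schematic illustration only: in Pulsations [Bitumen] the envelopes are realized inside Ableton Live and Max For Live, and the linear attack and release shapes, as well as the function names, are assumptions made for clarity here.

```python
def envelope(length, attack, release):
    """A simple linear attack-sustain-release amplitude envelope,
    expressed as per-sample gain factors between 0.0 and 1.0."""
    env = []
    for i in range(length):
        if i < attack:
            env.append(i / attack)                  # rising starting transient
        elif i >= length - release:
            env.append((length - 1 - i) / release)  # decay back to silence
        else:
            env.append(1.0)                         # sustain at full amplitude
    return env

def apply_looped_envelope(samples, attack, release, period):
    """Gate a continuous signal with an endlessly looped envelope,
    turning a sustained tone into a pulsed train of 'live samples'."""
    env = envelope(period, attack, release)
    return [s * env[i % period] for i, s in enumerate(samples)]

# Applied to a constant-amplitude input, the loop yields a periodic
# rise and fall in loudness:
pulsed = apply_looped_envelope([1.0] * 8, attack=2, release=2, period=4)
# → [0.0, 0.5, 0.5, 0.0, 0.0, 0.5, 0.5, 0.0]
```

Shortening `attack` and `release` relative to `period` makes the gating harder-edged, which corresponds to the difference between the long and short envelope settings demonstrated in media samples 9 and 10.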
I added the word "Pulsations" to the title only days before the performance, so as to illustrate the differences between human-felt and machine-driven pulses, as I will outline in the first chapter. Choosing the work's title establishes a philosophical territory that is the underlying inspiration not only of Pulsations [Bitumen], but also of this essay. A key aspect of this project, then, is a creative process deeply rooted in the seemingly simple questions of what is digital and what is analog, along with the more pragmatic concern: how can I render my studio and my live practice more improvisatory and more temporally immersive, in the sense of Pauline Oliveros, while keeping up the high level of machine-time precision and the "techno-dystopian" aesthetics that blend the organic with the digital? Pulsations [Bitumen] is a tentative answer to this question.

The structure of this essay follows the path of "action" in the live concert of Pulsations [Bitumen]: the first chapter offers an investigation of the designing and the playing of my "meta-instrument" for live performance, and also discusses fundamental principles of mechanical pulse that constitute the basis of my piece. In the second chapter, I look at what is on the other side of my instrument: the violists and their "workstations," consisting of headphones with a (complex) click track and a computer screen that displays graphic notation. Defining the violists' playing as "software instrument plugins," "live samples" and a sort of "game playing" will reveal my conceptualization of a semi-human, semi-digital playing that could arguably be seen as my original contribution to the field. I must note that this essay is idiosyncratic to my artistic practice realized under my artist moniker 'Marcel Zaes.' What I am seeking to accomplish is thus an account of a situated thinking/writing of a lived experience as a performer-composer.
It is my hope that this essay can offer, through writing, an additional emotive dimension that a purely musicological analysis could not.

Figure 2: Stills from the three-channel video work; similarities and differences between real-world photography and digital simulacrum are made evident

First Instability: Playing the Meta-Instrument

The Performative Crisis

In order to fully account for my "instrument," I need to make a brief excursus on how it came into being. Pulsations [Bitumen] started out almost three years ago, when I found myself getting bored with my musical practice: not so much with the resulting sound as with the methodologies I used. Going more and more "digital," that is to say, relocating more and more components of my practice into the computer, and from the recording studio into my bedroom [6] (which sounds like a dumb metaphor, yet in my case was effectively true, as at the time I lived in a tiny one-bedroom studio and produced music in the kitchen corner next to my bed), surely gave me more flexibility than I had ever had before. Yet it was back then that I first realized that I never quite felt like a musician. I clearly started to miss the performative component of my practice: the live concert performance that was more than a mere playing back of a fixed-media work along with some real-time mixing. I should note here that it was not pure "expressive agency" (Kjus and Danielsen 2016, 334) or any sort of proof of "creative authorship" (Kjus and Danielsen 2016, 334) that I was lacking, for I was less interested in building some sort of artificial liveness that would authenticate my sounds for an audience in the sense of common EDM practices such as David Guetta's, where the pushing of a button (that most likely is not connected to anything) is performed as a dramatic bodily move. Rather, it was about how my own body was invested in the creation of sound.
Above all, I missed that performative aspect not only in live performance, but also while working on my music in the bedroom studio. While surely many other musicians involved with digital music would rely on their MIDI keyboards to record bass lines, hi-hat lines, and melodic overdubs, and while doing so would introduce a certain performativity (one based on as few parameters as when a note gets played, and how loud), I was never quite the person to feel comfortable with a keyboard or any MIDI controller. It felt like too unsatisfying an approach both to the piano and to my music, which often does not use a tempered pitch system. Rather than trusting my body and my embodied musicality to find the right "feel," that is, "their own ways of relating to an isochronous pulse" (Iyer 2002, 398), I would try to mentally analyze the feel I was looking for and would program it solely with the cursor. The programming itself is of course a bodily act, executed with mouse and keyboard, and so is the next stage: playing back the programmed music was another bodily act, an act of perception and of embodied cognition. It was my body that would then tell me whether the crafted line felt right or not. And it normally did not; thus it was a constant process of going back and forth between mind and body, between programming and listening.

[6] "Bedroom" is often used in music studies to describe how, with state-of-the-art technology, foremost young people can produce music in their home studios: "Indeed, the bedroom has become a metonym for a new cultural politics of access and empowerment" (Toynbee 2000, 95).
Borrowing Nina Sun Eidsheim's term, making music in the studio was a "vibrational practice" (Eidsheim 2015, 17) that entirely linked my body to the pure physical vibrations of the material my speakers were made of. [7] It was these very membranes of whatever speakers I was using at the time that turned the digitally programmed into a physical experience, and oftentimes I realized how particular characteristics of a certain speaker influenced my composing purely by amplifying or cutting off certain vibrations more than others. This conglomerate of a computer, a mouse and a keyboard, some software, some cables, speakers, and the surrounding room became my instrument as much as my studio. Including myself, it became an "assemblage" (Deleuze and Guattari 1987, 343) that would shape my practice.

This performative crisis led me to what I described in the introduction: a desire to render my practice more improvisational while keeping its aesthetics as "digital" as possible. What I normally did in previous pieces was to construct music in the studio that would later be adapted for performance. Arguably, this approach is the default practice in much popular and electronic music. [8] Reproducing studio music in a live context was thus what Michael Veal calls "versioning" [9], for the practice of live-mixing stems and tracks would create different versions of an existing recorded basis.

[7] In her book Sensing Sound: Singing and Listening as Vibrational Practice, Nina Sun Eidsheim thoroughly redefines what music is. In her account, music is situated in a "continuous field" (Eidsheim 2015, 156) consisting of material vibration in a fluid continuum between human bodies and technological material. Music is thus by no means reducible to its sounding, nor to its social and technological components; these are just single components of it (Eidsheim 2015).
Discarding the idea of a studio-produced recording as the basis for live performance meant, for me, also discarding the idea of composing as an act that precedes the live performance. What seems a minor change is, as a matter of fact, the collapse of what had been my established practice for live performance.

My Performance Instrument

With Pulsations [Bitumen], I am profoundly redesigning my electronics and studio setup as an "instrument" in Oliveros's sense, and my role has become that of an improviser. The basis of my performance instrument is an Ableton Live set extended with home-made, self-built extensions written with Max For Live and Braid. [10] Figure 3 represents my "assemblage" in performance, and the screenshot in figure 4 depicts the screen of my computer during performance.

[8] "DJ and/or MC performance traditions in later electronica and hip-hop forms in which mix components, breaks and stems are controlled, than popular music genres in which traditional 'instrumentalist' approaches are manifested, where performers control music at small event level" (Knowles and Hewitt 2012, 2). It must also be said that a version of this live-mixing practice can also be found in Western art music, namely "'acousmatic diffusion' [that] was exemplified by prominent composers in the tradition such as Bernard Parmegiani" (Knowles and Hewitt 2012, 4) or Karlheinz Stockhausen.

[9] The term "versioning" was coined by Michael Veal to describe the Jamaican dub practice, which is based on prerecorded "stems" that, for each performance, are mixed down in real time to result in a new version. For detail, see Veal 2007.

[10] Ableton Live is a widely used commercial software for music production and computer-based live performance; for detail, cf. www.ableton.com. Max For Live is an extension package for Ableton Live, created and sold by Ableton as well, which allows self-programmed Max patches to be fully integrated into Ableton Live sets. Thus, self-built, experimental, or unconventional code can be operated within the commercial interface of Ableton Live. Braid is a MIDI-based rhythm generator written by Brian House as an extension to the programming language Python 3. For detail, cf. www.braid.live.

Figure 3: Playing the meta-instrument, a setup consisting of computer, MIDI devices, keyboard, and mouse

Figure 4: The on-screen interface for performance: Ableton Live with Max For Live extensions

Using the 'Clip View' in Ableton Live [11], there is no timeline given or visually represented. Instead, the 'Clip View' consists of independent elements that are endlessly looped, which is the idea of a sequencer.

Mechanical Pulsations

As the metric reference for Pulsations [Bitumen], I am using Ableton Live's internal metronome at an unchanged tempo of 92 beats per minute, which is constantly running, although not always audible, and sometimes audible at multiples or divisions of 92, i.e. at 46, 61.33, 138, or 184 beats per minute. Having a mechanical grid, that is, a regular pulse as the metric basis of my work, along with cyclic repetition as the primary building tool, naturally leads me to the question: now, what can I do to the pulse and to the repetition? In my practice I am deeply influenced by popular electronic music genres, especially those grounded in African American music practices, which build on mechanical time and cyclic repetition [12] and their "funk" [13], and I draw on Jeff Mills' statement that "techno music should endorse new thinking and new approaches to what can be done with sound and rhythm" (Sicko and Brewster 2010, 145).

[11] Choosing Ableton's 'Clip View' with an ongoing pulse as my instrument for performance is a choice that arguably references a practice common for EDM live performances in the Western world as of 2018.

[12] Susan McClary observes that more and more music is appropriating what she calls "cyclic repetition" (McClary 2004, 289) from African American music.
She refers to music that is pulse-driven and that is characterized by formal building blocks that are repeated, such as ongoing chord progressions with little variation, bass lines, or sampled or programmed drum beats. McClary shows how these characteristics are taken from African American popular music contexts and utilized in global popular music and Western art music. Ben Neill observes that many academic composers since the 1980s have started to incorporate such elements of African American popular music in their work (Neill 2004). 13 Tricia Rose, when defining the “Six Guiding Principles for Progressive Creativity, Consumption, and Community in Hip Hop and Beyond” in her book The Hip Hop Wars, deeply criticizes that many listeners are seduced by the pure funk of Hip Hop tracks, that is, the rhythm section and its capacity to incite ecstatic bodily movement, while they are blending out the lyrics and the story being told. For her, “The funk is the Achilles’ heel for lovers of black music” (Rose 2008, 264). I acknowledge Rose’s perspective inasmuch as I acknowledge some extent of cultural appropriation in my practice as it references African American “funk” along with the affective potential for bodily movement, yet I argue that my practice denies this potential for long periods of time that are exactly not “funky,” as they are based on open-time viola textures or irregular rhythmic “swarms.” African American popular electronic music genres, starting with “the quantized, otherworldly sounds of the Roland TR-808, an analog-synthesis drum machine popular in the early 1980s” (Iyer 1998, 159), are based on cyclic repetition and a quantized, mechanical pulse. Rendering my mechanical pulse-based practice more unstable was my goal, not only on the level of the viola playing, as will be illustrated later, but also through the inclusion of “deviation” in the process of digital pulse making itself. That is, I was looking for a digital analogy of the viola playing.
I was seeking to simulate continuous forms of musical time in the digital realm by looking for ways to deviate from the quantized grid. What I am building on is a concept that is common to most digital audio workstations under the name “humanization”: rendering the pulse and the loudness irregular and, by so doing, more “human.” This simulation of human behavior is obviously reductive, for it merely renders irregular what otherwise is regular, while the human and social creation of pulse is, as suggested earlier, a far more complex activity. Although the “making human” part of the humanizer must prove a conceptual failure, its attempt at making something more syncopated and more “funky” is a worthwhile consideration when working with mechanical time keepers. I looked closer at how humanizers work and found that there are two different histories of this concept. In popular music production, with the advent of digital audio workstations (DAWs), artists who programmed beats on their machines started to analyze the microtiming of human drummers, namely how much the off-beats are swung or syncopated, and implemented those measurements as “groove patterns” in their quantization settings in the DAWs. By “humanizing,” they are quantizing their strictly straight beats towards a syncopated, human-like “funk,” as is described by Jono Buchanan (Buchanan 2014) and by Gareth Michael Jones, a beat-maker and audio technology blogger on the Resident Advisor blog (Jones 2013). Even drum machines that existed at the advent of digital audio workstations were sometimes equipped with a “shuffle mode” that added syncopation. In Western art music contexts, on the other hand, with analog synthesis, artists began to desire some irregularity for their extremely regular and stable oscillators and hence drew on principles such as using noise as a source of random values to control the modulators for oscillators.
I did not use the syncopation/groove method in Pulsations [Bitumen]; instead, I started to experiment with different ways of random generation. I noted that what many artists use when working with the “random” object in the Max software is the uniform equal distribution. Through experimentation I realized that using the Gaussian standard normal distribution instead offered a subtler deviation from the regular grid. Figure 5 compares the deviating pulses produced with each of the two distributions and makes evident that the uniform equal distribution produces more arbitrary results, while the Gaussian normal distribution produces slighter deviations and remains closer to the original grid. Figure 5: Distributions used for deviating pulses; top row: uniform distribution, function curve (left) and resulting pulse (right); bottom row: Gaussian standard normal distribution, function curve (left) and resulting pulse (right) https://soundcloud.com/marcelzaes/deviation-from-regular-pulse-1/s-BqNl9?in=marcelzaes/sets/thesis/s-9exK3 Media Sample 2: Uniform equal distribution https://soundcloud.com/marcelzaes/deviation-from-regular-pulse/s-i2YcZ?in=marcelzaes/sets/thesis/s-9exK3 Media Sample 3: Gaussian standard normal distribution Stuttering and Swarming I consequently equipped every single voice in my Ableton Live set with a home-made Gaussian deviation module that simply delayed the central regular pulse by a normally distributed value from the regular grid. If my knob was at 0, the maximum deviation was 0, and hence the pulse ran on time. If I opened the knob up to, say, 246 milliseconds, it would produce deviations mostly between 0 and 20 milliseconds, along with some outliers of up to 246 milliseconds. Figure 6 depicts my self-built “deviation knob” along with its inner Max For Live patch.
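The behavior of this deviation module can be sketched in Python. This is a hypothetical re-implementation for illustration only (the actual module is a Max For Live patch); in particular, the scaling of the Gaussian's standard deviation relative to the knob value is an assumption, chosen so that a knob setting of 246 milliseconds yields delays mostly between 0 and 20 milliseconds with occasional larger outliers:

```python
import random

BPM = 92
BEAT_MS = 60_000 / BPM  # roughly 652 ms between quantized pulses

def deviated_pulse(n_beats, knob_ms, sigma_factor=0.08):
    """Delay each quantized pulse by the absolute value of a Gaussian.

    knob_ms is the 'deviation knob': it caps the maximum delay and
    (via the assumed sigma_factor) scales the spread, so most delays
    stay near zero with occasional outliers up to knob_ms.
    """
    pulses = []
    for i in range(n_beats):
        grid_time = i * BEAT_MS
        delay = abs(random.gauss(0, knob_ms * sigma_factor))
        pulses.append(grid_time + min(delay, knob_ms))
    return pulses
```

With the knob closed (`knob_ms=0`) the pulse runs exactly on the grid; opened wide, toward 1000 milliseconds, the output no longer reads as a regular pulse at all.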
While opening the knob only a little would produce the quite common “humanizer” effect, the upper extreme turned out to be particularly striking: opening up the distribution to roughly 1000 milliseconds produced something that did not even remotely resemble a regular pulse, or, in other words, did not resemble a pulse at all. I used these irregular pulses not only to drive the electronic voices in my Ableton Live set, but also sent them out to the four violists in the form of irregular click tracks. Ashley Frith, one of the violists, in an interview conducted with her after the performances, said: “I was able to feel the irregular pulse as a rhythm” (Frith 2018).14 The extreme position of my deviation hence created rhythm where normally there would be pulse, and it was a quite particularly shaped rhythm: the single voice indeed began to “stutter,” for it produced an irregular feeling that reminded me of the linguistic phenomenon of stuttering. Furthermore, as each violist’s metronome (and also each of my electronic voices) ran individually through such a deviation module, the overall rhythmic output was not a single stuttering voice but an array of different stutters that reminded me of lags, glitches, or a swarm. This technique, having the violists deviate via an irregular individual version of the pulse, is my first way of confusing the boundaries between the digital and the acoustic music realms. The irregularity occurs at the level of the meter itself, a direct translation of a digital phenomenon into acoustic music. This is one way I am rendering the viola playing a more “digital” practice. Violist Zsolt Sörés explains: 14 For transcription see appendices. The irregular click track, to me, was like an instruction of ‘try to follow the click track,’ but it’s impossible to follow. You probably would have to rehearse endlessly. The problem is that there is a delay between hearing the next click and then the realization of what this rhythm will be.
Your brain will try to find some scheme in this click track; what is its rhythm if it’s not regular, but you really can’t because it’s irregular. So, you will get the next click always at a very indeterminate moment. It’s really like a work [a job], namely: how can you try to follow it? You always make a big effort and will never succeed [laughs out loudly]. You definitely need a brain, intuition, but also passion for these irregular sections. You have to be at once inside and outside. (Sörés 2018) Figure 6: Self-made deviation plugin, programmed in Max For Live; screen interface for performance (left) and the inner life of the plugin (right) Thus, my instrument was capable of moving the musical action on a continuous axis from a quantized uniform grid to slight stutter to a glitchy, chaotic swarm, with all the in-between possibilities, via one single “deviation knob” on my Midi controller. This is one of several “playable” elements of my instrument. For demonstration, I am including here a specific section from the dress rehearsal, in which the formerly regular grid slowly started to deviate and swarm. https://youtu.be/5mt-3IfszdA?t=5m12s Media Sample 4: Live performance, irregular “swarming” section (minute 5:12) https://youtu.be/5mt-3IfszdA?t=14m2s Media Sample 5: Live performance, irregular “swarming” section (minute 14:02) Sound Material: Digitally Shaped Viola Now that I have described my interface (Ableton Live with Max For Live) and the metric unit at its base, let us move on to the sound materials that I have at my disposal for live performance. The materials that I will describe in the following belong to two different typologies of sound—constructed, that is, fully technological sounds; and “organic” sounds (the live viola). However, both typologies are equally digitally shaped in time via digital amplitude envelopes.
First and most characteristic (both visually and in terms of sound) are the four violas, which technically speaking appear as a “software instrument plugin” in my Ableton Live set. It is me “playing them,” meaning that, in a constant stream of momentary decisions, I set the rules of the “game” for them, namely as instructions that then appear in the form of graphic notation on their computer screens. This stream of momentary decisions encompasses choices for pitch, timbral quality, and a characteristic for the rhythm pattern that they will execute accordingly. The sound played by the four violists was subsequently picked up by microphones and passed back into my computer, where it was processed with artificial amplitude envelopes before being spatially remediated via four speakers placed around the audience. The clearest example of this process, as illustrated in figure 7, is that I have them play long sustained tones with a long fade-in and fade-out (communicated to them via the graphic on-screen score), while I apply a looped pattern of short amplitude envelopes to their microphoned signal. What will be audible in the space, in addition to the direct acoustic viola, is a digitally created percussive pattern consisting of viola-provided timbre. As is shown in figure 7, there lies a high degree of uncertainty as to what the violist will be producing at every new iteration of the gate-opening (amplitude envelope) pattern.
Figure 7: Applying a looped amplitude envelope pattern (bottom) to a non-synced open-time loop of the viola (top); the resulting pattern is to some extent unpredictable https://soundcloud.com/marcelzaes/materials-gated-live-viola/s-ZA3Nt?in=marcelzaes/sets/thesis-1/s-k2roT Media Sample 6: Demonstration of gated live viola Andrea Young, in her 2007 thesis, offers a striking definition of this variation from succession to succession of a pattern: If a pattern does not change over time then the sound would be purely repetitive, and can be described as static. The more noise, the more irregularity in time interval, and the less repetition, will create a more dynamic, a more rhythmic sound. Most importantly, rhythm will be considered the state between these extremes; rhythm is itself the dynamic structure moving between periodicity and aperiodicity. (Young 2007, 30) And she concludes, saying that, If we can discuss fluid forms, discrete and continuous sounds, then the movement between these stages can be described, with mathematical articulation, as the rhythm behind the overall form. (Young 2007, 74) It is here that my seeking for the in-between of the continuous and the discrete, as is particularly evident with the gated viola “live sampling,” produces a rhythm that is, in Young’s sense, non-static. It is this rhythm, then, that constitutes the ground of Pulsations [Bitumen]. It is, however, important to state that the digitally shaped four violas (the “live sampling music”) coming out of the speakers are always accompanied by the acoustic, unamplified direct sound of the four violas in the space. My “playing” of the violists thus consists of two actions: first, the choice of a specific set of instructions communicated to them via the notation system, and second, the playing of amplitude envelope patterns that will shape the violas’ sound artificially in time.
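The unpredictability of this gating, the same short envelope applied again and again to a sustained tone that keeps evolving underneath it, can be illustrated with a toy Python sketch. The 220 Hz tone under a slow swell is a crude stand-in for the live viola signal, not the actual processing chain:

```python
import math

SR = 8000  # toy sample rate

def viola(t):
    """Stand-in for the sustained viola: a 220 Hz tone under a slow swell."""
    return math.sin(2 * math.pi * 220 * t) * (0.5 + 0.5 * math.sin(2 * math.pi * 0.2 * t))

def gated(onsets_s, gate_s=0.05):
    """Open the same short triangular amplitude envelope at each onset.

    Because the underlying signal keeps evolving, each opening of the
    gate 'samples' a different moment of the tone: identical envelope,
    unpredictable content.
    """
    n = int(gate_s * SR)
    bursts = []
    for onset in onsets_s:
        window = []
        for i in range(n):
            env = 1.0 - abs(2 * i / n - 1)  # triangular attack-release
            window.append(env * viola(onset + i / SR))
        bursts.append(window)
    return bursts

bursts = gated([0.0, 0.5, 1.0, 1.5])
peaks = [max(abs(s) for s in w) for w in bursts]  # differ from burst to burst
```

Even this minimal model shows the point: the four bursts share one envelope shape yet carry different sonic content, because the source keeps moving between gate openings.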
Figure 8: The meta-instrument is controlling the algorithm that is shaping the violist’s sound Sound Material: Sine Tones Second, I have two sine tone oscillators at my disposal, in the bass frequency range, in the form of a home-made Max For Live patch; they slowly fade in and out (a very slow cyclic repetition, unchanged throughout, on which I have no influence), and they will always observe the momentary graphic notation setting for their exact pitch. In performance, I decide whether they are fully audible, whether I apply short ‘Attack Decay Sustain Release’ (ADSR) amplitude envelopes to them so that they appear as looped percussive patterns (“bass lines”) instead of sustained long tones, or whether they are not audible at all. https://soundcloud.com/marcelzaes/materials-sine-tones/s-0Nvep?in=marcelzaes/sets/thesis-1/s-k2roT Media Sample 7: Demonstration of sine tones Sound Material: Electronic Beats Third, I have beat loops, consisting of individual loops for kick drum, snare, and hi-hat. They have been made ahead of time using the Braid rhythm algorithm, which triggers extremely short amplitude envelopes that shape noise/waveform oscillators. Much like the digital shaping of the viola tones, the Braid algorithm constructs beats out of sustained waveforms. Braid is a Python 3 extension that will produce endless variations of a Midi beat that is entered into the algorithm. Figures 9 and 10 illustrate this process.
Figure 9: Braid code for a rhythm pattern in 4/4 with random variation; note that the line starting with t11.pattern is the Midi code for the rhythm pattern; each parenthesis is one of the four beats, while the brackets define several possibilities for each beat; ‘K’ stands for kick, ‘S’ for snare, and ‘H’ for hi-hat Figure 10: Musical notation of the Braid code given above; there are two possibilities for beat one (kick-snare or rest-kick), three possibilities for beat two, and so on; in performance, each succession of this pattern will result in a new combination of all these possibilities, yielding an ever-changing continuous beat https://soundcloud.com/marcelzaes/materials-beats/s-9gwg2?in=marcelzaes/sets/thesis-1/s-k2roT Media Sample 8: Demonstration of Braid-generated beats For sound, what I call “kick,” “snare” and “hi-hat” has, in fact, nothing to do with these instruments per se. They consist of filtered analog pink noise and sine/square tones with a self-designed amplitude envelope and hence are entirely synthetic, and yet they reference the aforementioned instruments in their musical function. This is another example of remediation in my work, for these instrumental musical functions get adopted and reused in another medium, while the old medium (the real snare drum) is undoubtedly referenced. In performance, my playing consists of assembling these loops with mouse and keyboard into ever new combinations of beat sections, or in not having them audible at all. Additionally, using my “deviation knob” will render the beats irregular, stuttering swarms of percussion. Playing the Sound Materials Now that I have outlined the materials at my disposal, we shall have a closer look at how I am playing them in the concert. As described before, my role as a performer is the role of an improviser. While the materials are prepared and at my disposal, my composing with them is executed in the moment and achieved via touching the Midi devices and the keyboard and mouse.
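The variation mechanism that figures 9 and 10 describe, one alternative chosen independently for every beat at each pass of the loop, can be mimicked in plain Python. This is a hypothetical sketch, not Braid's actual API; only beat one's two alternatives (kick-snare or rest-kick) are taken from figure 10, while the alternatives for the remaining beats are invented for illustration:

```python
import random

# One bar in 4/4. Each beat is a list of alternatives; each alternative
# is a pair of eighth-note slots. 'K' kick, 'S' snare, 'H' hi-hat, '.' rest.
# Beat one follows figure 10 (kick-snare or rest-kick); the other beats
# are placeholder inventions.
PATTERN = [
    [("K", "S"), (".", "K")],              # beat 1: two possibilities
    [("H", "H"), ("S", "."), (".", "H")],  # beat 2: three possibilities
    [("K", "."), ("K", "H")],              # beat 3
    [("S", "H"), (".", ".")],              # beat 4
]

def next_bar(pattern):
    """One cycle of the loop: independently pick one alternative per beat,
    so every pass yields a new combination and an ever-changing beat."""
    return [random.choice(alternatives) for alternatives in pattern]

bar = next_bar(PATTERN)
```

Each call to `next_bar` corresponds to one cycle of the loop, so the beat changes continuously without ever abandoning its underlying 4/4 pattern.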
What I am doing in performance is, in the words of Kjus and Danielsen, “live improvisation with studio technology” (Kjus and Danielsen 2016, 330); that is, I “designed [my] setup[ ] in a way that accommodated intuitive forms of temporal sequencing” (Kjus and Danielsen 2016, 335). In the process of developing this work, I was constantly going back and forth between directly improvising with the computer and creating algorithms to make momentary decisions outside of my control that relieve me of responsibility. Finding this balance between the two extremes, which could be said to be a continuous investigation in my practice, is a trade-off. This is closely linked to the initial project of rendering my practice more interactive and improvisatory: the more interactive it becomes, the more parameters need to be played manually in a direct action-to-sound relationship. At the same time, I am caught in the constant realization that I can only deal with a certain, quite limited number of parameters at once, and that every parameter I add will necessarily lead to a lesser degree of attention for another parameter. In addition to this, having an instrument that operates on the basis of a fixed pulse makes it, in the words of Vijay Iyer, a “meta-instrument” from the very beginning. Iyer is referring to the “hip-hop DJ” who does not create the pulse of the music as an embodied action, and thus “treats the turntables as a kind of percussion meta-instrument” (Iyer 1998, 145). The Idea of the Gate Because my instrument is detached from any sort of direct action-to-sound relationship, my acting is positioned in another realm. I am outsourcing the creation of rhythm to my algorithmic instrument. Neither do I create those audible rhythms myself, nor do the violists create them, for the algorithm is framing the violists’ sound temporally, and hence is a performer itself.
I like to think of this like the actor of the New York-based Hungarian expatriate “Squat Theatre” group described by Auslander, who “would throw open a huge garage door; at that point, whatever happened to be taking place on the street outside was framed for the audience as part of the production” (Auslander 2005, 9). What Auslander describes on a visual level applies almost exactly to my work on a sonic level, albeit on a different scale. The Squat Theatre’s gate opens probably only once per performance, while my gate opens countless times during one performance. But, as I will explain later, the play with scale is inherently built into my instrument. The amplitude envelopes will open up the violists’ signals for an instant of time, thus creating a sort of sonic window through which we hear into the violists’ “close-up” action that otherwise remains unamplified as acoustic direct sound, as illustrated in figure 8. Much like in the Squat Theatre’s show, opening this sonic window affords only very little control over what sound is momentarily played by the violist, and this is exactly where my mechanism crucially deviates from what is normally called “sampling music.” When making sampling music, one will not only decide on the starting and ending points of the sample and on the precise amplitude envelope, but also on the sonic content. Each new iteration of that sample will result in an exact mechanical copy, which is not the case with my amplitude envelopes: the exact same envelope will produce a different result, as the violist, meanwhile, may have moved on to the next sustained note, or, even if still playing the same note, will necessarily produce a slightly different timbre for the reasons that I will describe later: the fragility of the viola. My play with the gate (which I am using as a synonym for my amplitude envelopes) is thus adding another element of instability. Let us have a look at the control level of the gate.
Now, in Auslander’s example, it is a human performer who decides when to open and close that garage door. In the realm of music, a comparable situation is the playing of an analog synthesizer, in which multiple processes going on in the background are outsourced to an external ‘Low Frequency Oscillator’ that will constantly change the timbral, amplitudinal or temporal characteristics of a sound wave and to which the performer has only limited momentary access. The performer is instead, most commonly via a controller keyboard, solely playing the ‘Voltage Controlled Amplifier,’ that is, the amplitude envelope. At the push of a key, the envelope sets in and opens up the otherwise covertly oscillating sound wave. The idea of this, much like in my work, is that each single push of that same key will result in a slightly different tone. Figure 11: The model of an analog synthesizer with a ‘Low Frequency Oscillator’ modulating the wave Yet another example of triggering an envelope (or a gate) to open up is the computer game Guitar Hero by Harmonix, as discussed by Kiri Miller: “the player functions as the gatekeeper for prerecorded material; correct fretting/strumming allows each note to make its way from the game console to the speakers. If the player misses a note, that note drops out of the audio playback” (Miller 2012, 89). According to Miller, it is this latter fact that contributes to “the overall gameplay experience because it provides sensory evidence that the player is producing the sounds that come from the speakers” (Miller 2012, 104).
Strikingly, Miller captures the essence when referring to the performer as “gatekeeper.” What is different here from all the other examples is that the signal being gated is not an independent or even totally random signal onto which the performer opens a window for an instant, like the actor in New York; rather, this background signal is precisely planned for a certain pattern of gate openings, and if the gate is opened at the wrong moment, the result will sound like a mistake. In Miller’s description of Guitar Hero, it is the performer who controls the gate, while the background signal has been planned and prerecorded by Harmonix’ developers beforehand. Now, in Pulsations [Bitumen], neither am I playing the envelope myself, nor has the background signal been planned or synchronized to the gate-opening pattern beforehand. Instead, I have programmed an algorithm that, at the push of a button, will create a new random pattern with always slightly different opening and closing times—a looped rhythm pattern—but will then run on its own in a cyclic repetition until otherwise directed. On a second button push, it can be stopped, or changed to a freshly generated random pattern. As said before, the rhythmic basis of my work is a rigid pulse that runs in the background and ensures that any possible onset of such an amplitude envelope occurs as a quantized event on the pulse. If the “deviation knob” is activated, the gate-opening pattern will obviously observe the deviated pulse and trigger the gate irregularly. Thus, it is the level on which I am acting as a performer that is different from the performer in the New York garage theater and in Guitar Hero.
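That button-triggered pattern generator might be sketched as follows. This is a hypothetical reconstruction from the description above, not the actual Max For Live patch; the sixteenth-note grid resolution and the opening density are assumptions:

```python
import random

BPM = 92
STEP_MS = 60_000 / BPM / 4  # sixteenth-note grid (assumed resolution)

def new_gate_pattern(n_steps=16, density=0.4):
    """'Button push': generate a fresh random looped gate pattern.

    Each step of the quantized grid is either a gate opening (True) or
    closed (False); the pattern then cycles unchanged until the next
    button push replaces or stops it.
    """
    return [random.random() < density for _ in range(n_steps)]

def onsets_ms(pattern):
    """Onset times within one cycle, quantized to the running pulse."""
    return [i * STEP_MS for i, on in enumerate(pattern) if on]

loop = new_gate_pattern()
```

With the “deviation knob” engaged, each of these quantized onsets would additionally be delayed by the Gaussian deviation described earlier, so the same looped pattern would begin to stutter.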
In the latter, as Kiri Miller argues, even though the performer is not technically producing the very sound that comes out of the speaker but rather reproducing prerecorded sound material, they nonetheless “feel responsible for producing the music through moment-to-moment embodied engagement with [what Schutz calls] the ‘inner time’ of the song” (Miller 2012, 111). In other words, most of these performers feel like live musical performers. It is exactly this “ambivalence and even paranoia” (Miller 2012, 86) that results from the “split between a recorded sound and its source” (Miller 2012, 85) for which Kiri Miller, building on R. Murray Schafer’s term, has coined the term “schizophonic performance” (Miller 2012, 85) to describe the linking of bodily gestures to prerecorded sounds in games such as Guitar Hero. For my role as a performer, I do not feel like an instrumental performer, exactly because I am not interacting with the sonic result on a moment-to-moment basis, nor am I precisely deciding upon that “inner time” of sound (Schutz et al. 2011). In Miller’s words, my performance does not “[rely] on mastery of a specific technical interface” inasmuch as “if [I] stop playing [my] instrument controllers or play them inaccurately,” I will not—unlike Miller’s Guitar Hero players—“disrupt or destroy the game’s musical output” (Miller 2012, 11). I am more a meta-instrumentalist than a schizophonic performer. Nevertheless, I am deciding on the overall activity or inactivity of the algorithm, and I am using bodily feedback: I am constantly listening in real time, and if I do not like what the algorithm is producing, I will either direct the algorithm to change, or to stop acting. Consequently, rather than playing music, I am playing the algorithm.
The algorithm, then, is a source of what Oliveros calls “unstable,” for its exact shape will always be randomly generated upon pushing the button, and the material upon which it opens up the gate is not entirely predictable beforehand. The amplitude envelope patterns that shape the violas—and indeed also my sine tone oscillators and beat materials—fall into two different realms of time: short and long. The “long” category will lend the viola playing a sampling music aesthetic: an artificial (mostly very short) attack, followed by viola timbre that is longer than 200 milliseconds, and an artificial release (mostly rather short). As “short,” following Alexander Case, I consider gate openings shorter than 200 milliseconds: “Signals that are shorter than 200 ms can be particularly difficult to hear. Most percussion falls into this category” (Case 2007, 6). The viola timbre at such short openings is audible not so much as live samples; since in their shortness these fragments are hard to recognize as viola timbre, they appear more as “percussion.” Media samples 9 and 10 illustrate this difference. https://soundcloud.com/marcelzaes/long-amplitude-envelopes-for/s-UhEFO?in=marcelzaes/sets/thesis-2/s-EZbPg Media Sample 9: Long amplitude envelopes for viola. https://soundcloud.com/marcelzaes/short-amplitude-envelopes-for/s-OTa7w?in=marcelzaes/sets/thesis-2/s-EZbPg Media Sample 10: Short amplitude envelopes for viola. Second Instability: The Violist’s Workstation Why the Viola? There are several reasons why four violas were chosen for this work. The physical qualities of the viola lead to an emitted sound that is deeply fragile. The playing of a single note in fact requires the performer’s capability of balancing all partials involved in that sound. This timbral balance is continuously in flux, as the bow and the left hand’s pressing finger are constantly in motion, especially upon first activating a note.
The initial noisy attack must turn rapidly into the desired sound, or, in terms of physics: “For low-pitched stringed instruments …, it is very important that the Raman bowed waveform is established very quickly, otherwise there will be a significant delay in establishing the required pitch” (Gough 2007, 603). According to Gough, “There are several potential sources of fluctuations in the envelope of musical instruments,” such as the “irregularities in the sound of any bowed instrument due to inherent noise in the slip-stick bowing mechanism” (McIntyre, Schumacher, and Woodhouse 1981, quoted in Gough 2007, 585). What Alexander Case says of the electric guitar and the bass, I believe, equally holds for the viola: it is an instrument characterized by unclarity in spectrum, “broad in spectral content, when it is loud, [and] it masks much of the available spectral range … to compete with signals living in a much higher range” (Case 2007, 6). This physical fact is amplified with four violas, which implies that we have four instances of the same four strings with similar resonating wooden bodies, in a constant slight detuning (by the nature of physics), thus adding up to a quite unstable sounding body. This is much different from an instrument like the piano, where the generation of a tone is packaged in a complex mechanism that is then activated by the player with only a few variables to play with: when, how strongly, and for how long—not to speak of the “infinite variation via permutational and combinatorial processes” (Moseley 2016, 67) that the piano affords. Conceptually, the viola could itself be thought of as an analog modular synthesizer: the left hand deciding upon the pitch is, so to speak, the oscillator that is running “in the background,” while the right hand with the bow is the amplitude envelope that “opens up” that pitch for an instant.
This metaphor reinforces my argument in its built-in fragility as much as it constitutes another analogy to analog synthesis. I believe it is the coordination between the left and the right hand that makes the viola’s sound constantly precarious, for micro differences in timing between the two hands will result in noise, distortion and artifacts. The violist can never quite be sure what exact sound is coming out upon the onset of the bow, and hence needs the first moment of playing as feedback to adjust timbre, pitch, bow speed and pressure. Another consideration is the viola’s traditional role in an ensemble. It is neither a bass instrument, nor a treble instrument that usually carries the melody, but a harmonic “filler” in between. For ensemble timing, this implies that in a string trio “the violin’s lead voice tends to play ahead by 5 to 10 ms, the cello tends to follow, and the viola’s middle voice showed [in a study] a net lag of another 5 to 10 ms” (Drake and Palmer 1993, quoted in Iyer 2002, 399). That lag, which is necessary for a filler voice, I believe, has become embodied by the Western classically trained violist, who, generally speaking, generates notes that have a less articulated transient than those produced by a violinist. What must sound quite pejorative as a description is, in fact, a declaration of my own commitment to the viola. Its physical fragility, along with the viola’s political “shadow” position, makes it particularly attractive to me. The choice of having four violas in my work—as opposed to, say, four electric organs or four pianos—constitutes at least one, if not several, elements of Oliveros’ “unstable.” “Ensemble-ness” of the Ensemble The violists in Pulsations [Bitumen] find themselves in a “workstation” that is more akin to playing a videogame than to traditional ensemble playing in both Western art music and popular music contexts.
The spacious placement of the four players already reduces the social-visual “ensemble” component, and the screens in front of them, together with the earbuds in their ears, support the videogame comparison, for they immerse the players in a virtual reality: the headphones detach them to some degree from the outer sonic environment, and the constant staring at the screen detaches them to some degree from their visual surroundings. It is almost as if the four violists were fully immersed in a video game world wherein they make music, much like the players of Lord Of The Rings Online described by William Cheng. Cheng says that music scholars have attempted “to establish online environments as vital social settings that are not essentially peripheral or subservient to the physical world” (Cheng 2012, 41). And yet, what the audience witnesses during the show is an ensemble of violists making music together, not only in the remediated “live sampling” but also simply acoustically and visibly in the space. It is this togetherness that is particularly striking in Pulsations [Bitumen], as will become evident in my comparing the violists to a “software instrument”: they exist as autonomous entities, much like several software instruments in one and the same track, where the “togetherness” is merely constituted by their acting in the same moment in the same room, and obviously by the planning of their togetherness from the outside: by me, assembling their score in the moment. Ashley Frith says, Sometimes I felt a part of the ensemble, and other times I felt less connected…I’m assuming a big part of that is the lack of necessity for communication with the other musicians. … overall I felt in touch with the energy of the audience, and sometimes with the activity of the audience as well.
(Frith 2018)

As Ashley describes, the four violists in one space, albeit dispersed and detached from each other, will necessarily exhibit some reacting to each other, because they are listening to each other’s direct acoustic sound in the space. This is reinforced by Zsolt Sörés: “when someone [another violist] is playing lots of notes, C – D – G etc., … I felt it’s more interesting when we played less; more minimal, more just one thing. Let’s repeat this one thing only” (Sörés 2018). Ashley Walton, in her 2015 article, accounts exactly for what I understand as the agentive potential of the four violists in Pulsations [Bitumen]: she proposes to “Understand[] musicians as a self-organized system, coupled such that they constrain each other’s musical performance” and outlines “how musical meaning emerges from the coordinated, yet complex turn taking manifested in spontaneous musical expression” (Walton et al. 2015, 7).

The Violists and the Mechanical Pulse

With Pulsations [Bitumen], my primary project was rendering the technology-based music more human. Including four violists in the otherwise digital music practice, beyond having them act as agentive “videogame players,” also introduces another “humanizing” component: namely in how they relate, as humans, to the mechanical pulse that is made audible to them via the in-ear click track. The mechanical metronome, embodied and performed in and via the violists, renders the pulse social. A pulse, in ensemble music making, is a felt and embodied time among all participating bodies, as theorized by Vijay Iyer. He writes, “Individual players have their own feel, that is, their own ways of relating to an isochronous pulse. … A musician can pop out of a polyphonic, rhythmically regular texture by a ‘deviation’ from strict metricality” (Iyer 2002, 398).
The violists’ relating of their playing to a metronome is such a “deviating” activity: the metronome is clearly designed to ensure their rhythmical stability, especially because they are dispersed around the stage and pointing in diverse directions. And yet, in the “meta-instrument” that I am crafting, in which the violists oftentimes play notes in a steady pulse following the metronome, while my digital amplitude envelopes shape their notes by opening and closing the “gate,” there will always be tiny imprecisions due to human temporal deviations on a microtiming level, as the violists start their notes either slightly before the gate opens or slightly after. How exactly they relate to the pulse is informed by a rich set of embodied experience formed in their previous musical activities. With this idea of omitting the “ensemble-ness” of the ensemble (in having the violists dispersed around the stage and separated by immersing each of them in their own “virtual reality” in a workstation with screen and headphones, which render them rather four soloists than an ensemble), the steady mechanical pulse that constitutes the basis of my work at once produces synchronicity between the players as much as it produces a-synchronicity between the digital temporal envelopes and the human relating to the mechanical pulse. Even though they relate to the same metronome, they never really play together. This fact is amplified by my rendering the metronome itself irregular at times, so as to purposefully design the system for rhythmic instability.

A Videogame or a Notation System?

Let me describe the violists’ workstations in more detail. Their computers run the Max software that is in charge of algorithmically assembling and displaying graphic envelope patterns, which serve as notation.
As their Max patches are linked to my main Ableton Live set via a self-built Max For Live plugin and a wireless local area network (WLAN), I decide on the primary characteristics of each pattern, yet the final generation and graphic rendering of the notation is left to their machines. As a result, even if I “send” all four violists the same instruction, each computer will generate a slightly different version. For example, even if all violists see patterns consisting of short triangular shapes, one performer will see a pattern with 7 beats, while the others may receive a pattern of 6, 8, or also 7 beats. The click track that they receive via the earbud serves as their metric reference. Whatever pulse they are hearing in their ear—as explained in chapter one, the click track may jump suddenly to any divider or multiple of 92 beats per minute—they consider as their momentary metric unit, even if it is irregular.

Figure 12: Self-built Max For Live device that is the basis for the notation system

In my Ableton Live set, the violists’ computers literally appear as a software instrument, as depicted in figure 12. On this interface, I am given the possibility to spontaneously compose the next instruction for the violists. It lets me choose a pattern preset from 1-15, while 16 stands for the instruction “silence” (cf. figure 14). Then, I may choose the notation to display either “tone” or “noise,” or the exact in-between, or “random,” in which case the computer will choose either of those timbres. I will also choose one of the four simple pitch pools, that is, four different scales15 (all in C), plus will decide whether the violist’s notation will only display the first note of that scale (obviously C only), or the first four notes of the respective scale, or the full scale (in which case the performer chooses themselves), or whether I want the computer to randomly pick “any one note” out of the scale.
Last, I am given the options “short black” or “long black,” which stand for different transitions into the new notation “page” that is being generated. Once I have decided and pushed all those buttons, I may either send that new notation to all four violists, or to a specific subset or single violist, by pressing their names. On that click, all my decisions are encoded as a list of numbers, which is then transmitted via the UDP protocol over the local wireless network to the violist’s or violists’ computer(s). On their machine(s), this list of numbers is decoded, as outlined in figure 13.

Figure 13: Network receiving part of the notation system Max patch; an incoming list of raw numbers is interpreted by the patch and includes information for the “tone-noise” ruler, the rhythm envelope pattern, and the pitch scale

15 Note that for the scales I am adopting a quite arbitrary choice of four scales that are offered by Brian House’s Braid software, which are: “Suspended Flat,” “Gamelan Pelog,” “Gamelan Slendro,” and “Phrygian,” and I use all these scales in the key of C. While their original “function” and cultural history is unimportant to me, I am literally using these scales as a sort of random generator for harmony, as the precise selection of pitch is quite uninteresting to me.

The primary part of this process is the set of presets for rhythm patterns that I have defined beforehand—not as final composed patterns, but as probabilities for narrowed random generation. The list depicted beneath in figure 14 is an overview of all the presets at my disposal. During the performance, I had this list printed on paper next to the computer as a mnemonic for improvisation.
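The encode/send/decode chain described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the actual Max For Live device or Max patch: the parameter order in the number list, the port number, and all function names are my assumptions.

```python
import json
import socket

# Hypothetical flat encoding of one instruction "page" as a list of
# numbers (the real parameter order in the Max patch is not documented):
# [preset 1-16, timbre mode, scale 1-4, pitch mode, transition 0-1]
def encode_instruction(preset, timbre, scale, pitch_mode, transition):
    return [preset, timbre, scale, pitch_mode, transition]

def decode_instruction(numbers):
    keys = ("preset", "timbre", "scale", "pitch_mode", "transition")
    return dict(zip(keys, numbers))

def send_instruction(numbers, host, port=7400):
    # One UDP datagram per instruction over the local wireless network;
    # host and port stand in for a violist's machine.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(numbers).encode("utf-8"), (host, port))
```

Sending to a subset of violists would simply loop over the chosen hosts; because each receiving machine finishes the random generation itself, the same datagram can still yield four different notation pages.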
Figure 14: Paper list as mnemonic for performance: overview of rhythm presets

The following figures demonstrate the outcome of the notation system (what is visible to the violists) when one of the presets from figure 14, along with the other choices for timbre and pitch in figure 12, is applied. In figure 15, preset #7 has been applied along with “random” timbre and a random single note of the “sus flat” scale. The violist is thus instructed to repeat a 7-beat pattern on a G# (any octave they prefer), varying the dynamics according to the black rectangular shapes, while the timbral quality should match some pitch with some noise component. How this is achieved is left to the violist. In figure 16, preset #9 has been selected, resulting in a randomly generated pattern that, as the list above specifies, is 9-12 beats long, consists of “sine or triangle up” shapes with lengths between 5-7 steps (whereas the last, one-step-long shape is just the filler for the loop), is at a rather low overall dynamic level (thus no shape exceeds the 50% height threshold), and must include “no rests,” hence every beat will be filled with some shape. The iteration in figure 17 is based on preset #12 with a “sine down” in “open time.” The grid has disappeared, which for the performer means that they may move freely in time. However, they were instructed ahead of the performance that once they have decided on a duration for this shape, they need to keep that duration while endlessly repeating the shape. For example, this fade-out could take 3 seconds, but it could also take 3 minutes. Note that in figure 17, the pitch bar now contains several options. For each new iteration of the loop, the performer is thus free to choose anew from the pitch classes given, and from the octaves available on their instrument. The octave is left to the violist throughout.
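The “probabilities for narrowed random generation” behind a preset such as #9 (a pattern 9-12 beats long, built from shapes 5-7 steps long, kept under the 50% height threshold, with no rests and a short filler shape closing the loop) could look roughly as follows. This Python sketch stands in for the actual Max patch; every name and numeric range beyond those stated in the text is an assumption.

```python
import random

def generate_preset_9():
    """Narrowed random generation modeled on the description of preset #9:
    fill a 9-12 beat loop with "sine or triangle up" shapes 5-7 steps
    long, keep every amplitude under 0.5 (the 50% height threshold),
    leave no rests, and close the remainder with one shorter filler."""
    length = random.randint(9, 12)
    shapes, beats_left = [], length
    while beats_left >= 5:
        steps = random.randint(5, min(7, beats_left))
        shapes.append({"kind": random.choice(["sine_up", "triangle_up"]),
                       "steps": steps,
                       "level": random.uniform(0.1, 0.5)})
        beats_left -= steps
    if beats_left:  # filler shape, as visible at the end of figure 16
        shapes.append({"kind": "filler", "steps": beats_left,
                       "level": random.uniform(0.1, 0.5)})
    return length, shapes
```

Because every violist’s machine runs such a generation independently, the same preset number produces four related but non-identical pages, which is exactly the instability described above.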
Figure 15: On-screen notation for Pulsations [Bitumen]; screenshot of Max software

Figure 16: On-screen notation for Pulsations [Bitumen]; screenshot of Max software

Figure 17: On-screen notation for Pulsations [Bitumen]; screenshot of Max software

https://www.youtube.com/watch?v=lUek150ainI
Media Sample 11: Jordan Dykstra demonstrates the graphic notation

In the demonstration video (media sample 11), violist Jordan Dykstra demonstrates how the notation works while we see what was on his screen. It is striking to observe minor deviations in his interpretation from what is written. For example, starting from 00:38, he is given an 11-beat pattern with 6 soft and 5 slightly louder steps. In the first cycle he plays 5 soft and 5 louder notes. In the following cycles, as he is given the possibility to choose freely from the three given pitches, I notice that before each pitch change he is especially imprecise in timing and counting. Also, note how the first of the louder beats, which is beat 7, will always be slightly emphasized, as if to start a musical phrase, albeit in the middle of the actual pattern. David Schnee, another participating violist, said,

if [I was playing] a 4-beat pattern but the last beat in the pattern was the emphasis, then I started to feel beat four as beat one. So, to me it remains as a question if one should try to notate all patterns as more musical phrases, that is, with emphasis at the beginning, or on the contrary if you even intend this shifted, skewed character of accents that are not at the beginning? (Schnee 2018)

And he goes on, asking,

Can I even play a loud beat four exactly as loud as a loud beat one? Or will I play them slightly differently? Or, from the outside, would someone in the audience even feel a pulse, just because there is no meter that is perceivable?
Nobody will say ‘oh, this is a four-beat measure, and now it’s a seven-beat measure’ (Schnee 2018)

Jordan, in the demonstration video, is clearly creating musical phrases where no such phrases are visible in the notation, which corresponds to what David is saying. Another observation is Jordan’s rhythm becoming quite irregular around 1:22 (the click track given here was regular). Undoubtedly, Jordan’s dynamics, phrasing, articulation and rhythm deviate to some degree from what is displayed in the notation. Not only is this deviation component genuinely my interest, as it aligns with Oliveros’ unstable, but it is also my aim to provoke their interpretation of a notation system they do not know yet. It is in applying their available embodied knowledge and experience, in accessing their musical habitus, that they turn my “neutral” (not yet belonging to any formerly known stylistic school) notation into music, simply by applying, in the case of Jordan, his internal feel for groove and musical phrase. It is exactly this contrast between my digital notation and their seeking individual analogies with other music they know from before that makes the viola playing in Pulsations [Bitumen] a “digital analogy.” In the third pattern in this demonstration video, beginning at 1:47, Jordan is provided with an open time pattern: a simple, exponentially ascending amplitude envelope whose length is left to him. While my original thought was that he would be entirely free in deciding on the length of this pattern, I note a striking correlation with the affordances of the viola, or better, its bow: his pattern is executed at exactly the length of his bow, so that the exponential dynamic shape is not interrupted by a bow change. In the full performance video, the same holds for Ashley at 9:44 and for David at 58:10. Only Zsolt would occasionally create open time envelopes that are several bow strokes long.
It is crucial to notice that the graphic shapes given to the violists made them unwilling to disrupt the continuity of such a shape, so that my desire for “open time” is restricted by the affordances of the viola’s bow length. Now that I have described Jordan’s deviations in this demonstration video, let me delineate the notation system developed for Pulsations [Bitumen] in even more depth. It is a prescriptive notation system which is a composite of graphics and text. Its underlying algorithmic principles make it an on-screen notation. The most crucial fact to note is this notation system’s limited scope: it is genuinely idiosyncratic to my musical practice. The notation system I have developed displays some musical parameters at incredibly high accuracy and resolution while intentionally ignoring other parameters, or better, condemning them to a deeply reductive existence. First, such a reduction accounts for my initial thoughts on notation—however complex—not being able to capture music at all. Second, it encompasses a specific function: it was my primary idea to create a “human sequencer,” that is, a set of patterns that are repeated over and over. While the machine repeats a pattern as an identical mechanical copy without ever getting tired, it is very different for my human version of the sequencer: the violist will continuously change their perception and acting, as each repetition adds in an accumulative way to their craft and practice; thus no succession will sound exactly the same. As Ashley points out, she got tired16 after many repetitions.
Thinking of the violists as a “human synthesizer” is in line with thinking of them as a “software instrument.” The patterns which the violists are asked to repeat consist mainly of a complex array of amplitude envelopes at a pattern length (number of beats) that differs from what many Western vernacular folk music traditions and popular music traditions use as their “step length;” namely, it is not four or an integer multiple of four. Consequently, in my notation system, dynamics are displayed at an arguably uncommonly high resolution, while timbre and pitch are given as quite reductive, text-based instructions, such as “noise” and the pitch class “C#.” One can imagine how many different things “noise” and the pitch class “C#” could mean for a viola, starting from intonation, tuning, vibrato, playing technique, phrasing, octave, etc. Even what “C#” means for intonation, on a string instrument, is subject to constant negotiating amongst all ensemble participants. Is it the minor second of a scale in C, or a major third in an A scale? In other words, the simple representation of a melody, that is, a sequence of different pitches, is simply not an affordance of this notation system, which makes it by no means an adequate solution for transcribing existing music or an alternative to the conventional staff system. For the representation of dynamics as an amplitude envelope, let us briefly compare what could be a default notation in Western art music with what is possible in the notation system of Pulsations [Bitumen].

Figure 18: The author’s notation system versus a potential equivalent in traditional staff notation

16 Cf. appendix page 62.
It could be argued that the left side in this comparison may look more familiar to people accustomed to digital music software, Midi roll editors, waveforms and “clips” in contemporary digital audio workstations,17 while the right side will probably be easy to understand for someone trained in Western art music. As a matter of fact, the move from traditional notation to something more “opened,” to me, is a manifold move. It is political, inasmuch as it will not exclude someone who has not received training in Western art music. It reflects my aesthetics, as it accounts for a notion of digital music making which is deeply technology-driven. Lastly, it supports my project of creating an unstable element, for nobody will know yet how to deal with it, and above all, no stylistic “school” exists yet for this notation, to which potential performers could possibly conform. Ashley, in an interview that I conducted with her after the first rehearsal with the new notation system, talks about there being different “worlds” which come with their specific sets of rules for how something has to be interpreted and how it has to sound. Ashley says,

even if you never play let’s say Bach, you have never played Bach before, but we all know how Bach sounds. So, for the most part you can sort of guess, I think. There is a lot of guessing that I think happens. Okay, so it’s like, that’s the note, that’s how long you hold it for, and then there’s so much of that where you just know what’s coming, like even Bach who is so complex, and so… he was so surprising in his cell of music in which he was playing, but still for a lot of us we memorize those surprises in those terms.
So yeah there’s something about that, and then the familiarity of the lines on the staff, like even if you take away notes, there’s something about that notation, which isn’t to say that this [the Pulsations notation system] couldn’t become very familiar, if it’s something that is just done, right, and this now is my first time doing it. (Frith 2017)

It is my claim that, unlike with Bach, in my notation system Ashley does not yet know the “rules,” and that this absence of a stylistic default creates, for her, some degree of anxiety. In the words of Schutz, I am crafting a notation system for which, arguably, there does not yet exist a “socially derived musical knowledge” (Schutz et al. 2011, 167), but which is then, for my musicians, negatively defined by its difference from the various notation systems they know.

17 For waveform notation in electroacoustic music, cf. (Chippewa 2018).

The Notation’s Digital Aesthetics

It is, however, worth looking at the “digital-ness” of both notation systems. In fact, as representations of music in the graphic realm, they are both equally digital inasmuch as both systems rely on quantization to an intrinsically defined resolution. Classical Western staff notation quantizes pitch usually in a chromatic field of 12 pitches per octave, while each pitch is fixedly assigned to a specific line or gap in the staff. In the temporal domain, its resolution is dramatically higher. Being based on subdivisions of subdivisions indeed makes its rhythmic accuracy almost infinite, though not continuous. One could use 32nd, 64th, or 128th notes, at which point the very limitation of the high resolution becomes evident: its readability for a classically trained musician is not intuitively given at such extreme values.
On the other hand, in the notation system developed for Pulsations [Bitumen], on the pitch axis there is no difference from staff notation except one: while it is equally based on the chromatic field, the octave is not given, hence its resolution is even poorer. In the temporal domain, it is not based on subdivisions of, let us say, a 4/4 measure, but rather upon an additive principle of adding up single steps. Its unit (the beat) is always the same, while the underlying grid (the click track) changes in tempo. No subdividing of a single beat is possible. The only way of achieving a higher resolution is raising the tempo of the click track. Thus, while both notation systems are by nature digital in terms of being based on discrete values, I argue that my notation system is slightly more digital for its direct aesthetic inspiration in the domain of digital audio workstations. Simulating a step sequencer, the given pattern is to be looped over and over, until eventually a new pattern is generated upon my pushing a button. What makes this act even more sequencer-like for the violists is that it is executed along with a mechanical pulse: a regular measured time that is visually present as the underlying grid. What is more, and here my notation deviates deeply from staff notation: information is given in a modular way. Unlike staff notation, which packages as much information as possible about pitch, rhythm and articulation into few graphic signs, I have defined split areas on the screen that stand for pitch, for timbre, and for the envelope pattern. The performer has to assemble all the parameters in their mind and come up with their personal interpretation of what is written. This modular system reflects the logics of many music apps and software, where single parameters are displayed in separate windows, or are hidden and only retrievable upon request. It is the visual characteristics of such apps that have heavily influenced my notation system.
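As a toy illustration of this sequencer-like grid on which amplitude shapes sit, here is a minimal text-mode sketch in Python: time runs left to right (one column per click-track beat) and amplitude is quantized onto a few vertical levels. This is my own hypothetical code, not the Max rendering used in the piece; the example pattern is the 11-beat page (six soft beats, five slightly louder ones) described earlier in connection with media sample 11.

```python
# Toy "blocks on grids" rendering of one notation page.
def render_pattern(levels, rows=4):
    """levels: one amplitude value in 0.0-1.0 per beat; returns a
    text grid, highest amplitude row first."""
    grid = []
    for row in range(rows, 0, -1):
        threshold = row / rows
        line = "".join("█" if lvl >= threshold - 1e-9 else "·"
                       for lvl in levels)
        grid.append(line)
    return "\n".join(grid)

# An 11-beat pattern: 6 soft beats followed by 5 slightly louder ones.
pattern = [0.25] * 6 + [0.5] * 5
print(render_pattern(pattern))
```

Even this crude rendering makes the additive principle visible: there is no way to place anything between two columns; finer rhythms require a faster click track.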
The visual design of such applications, namely cubes, clips and blocks arranged on visual grids—be it on-screen Midi editors, piano rolls or clips in a session view, or be it hardware “controllers” with pads and knobs—typically follows a Tetris kind of look: they are, in simple words, “blocks on grids.” In most cases, the horizontal axis will stick to the Western tradition of displaying time from left to right, while the vertical axis will commonly be used for pitch, dynamic values or filter openings. Figure 19 depicts such a customary way of displaying a melodic pattern in the realm of digital electronic music, which served as inspiration for me.

Figure 19: “Blocks-on-grids” notation in Ableton Live; here, the grid represents the 16th note, the black blocks represent notes, the vertical axis depicts low to high pitch, the horizontal axis depicts time, and the red envelope depicts amplitude; here in a quantized “curve” that reflects the 16th note grid

Inspiration: Graphic Notation, Tabs & Blocks-on-Grids

I cannot omit referencing those practices that have dealt with unconventional representations of sound as prescriptive means for performance. John Cage, when he fabricated his tape editing “plan” on paper and called it “Williams Mix,” thereby elevated such paper plans for tape montage—as commonly used by sound engineers as a memory aid in the analogue age of music production—into the realm of art music. After Cage, there have been many significant contributions to what is commonly called “graphic notation:” notably the groundbreaking work of Earle Brown—“December 1952”—and the graphic scores of Xenakis and Feldman. According to David Cline, “Morton Feldman’s graphs merit attention because they altered the course of Western classical music, … becoming some of the very first musical works presented in a strikingly new notation to receive public attention” (Cline 2016, 1).
As he explains, “Feldman’s use of the term ‘graphs’ in connection with these works reflects the fact that he composed them on printed graph paper. It also reflects his mode of presentation, which is more graph-like than conventional staff notation, with time notated proportionately along the horizontal axis in a manner comparable with the proportional specification of a variable on the abscissa of a line or scatter graph” (Cline 2016, 3). It is the visible presence of the underlying grid in my notation system that is deeply reminiscent of Feldman’s graph paper and its proportional relationship to time. Another crucial influence is tablature notation, which dates back to medieval music practices and is used in contemporary popular music in different forms such as guitar tabs. Notating my pitch choices is a form of tablature notation. Ultimately, in contemporary art music and especially in electroacoustic practices, a wide array of animated, graphic, interactive or otherwise unconventional forms of notation have been developed, for example what Jef Chippewa calls “waveform display notation,” which is essentially based upon how audio signals are graphically represented in digital audio workstations (Chippewa 2018). Or the various attempts to notate timbre, as outlined by David Gray (Gray 2018). Computer games constitute another site of creative use of prescriptive notation, such as the examples discussed by Kiri Miller. However, my point here is not so much to present any form of complete history of graphic representation of sound, but rather to claim how ubiquitous these various forms of tabs and “blocks-on-grids” have become in both art music and popular music with the emergence of music apps.
That the digital era has initiated a paradigm shift in values and aesthetics is well discussed; for example, Jace Clayton observes that “Copy, cut, and paste downgraded to become the default operations we all use to push data around on the screen” (Clayton 2016, 149). Or, as Adam Sinnreich puts it, “because technology and culture coevolve, continually nudging one another in new directions, we have seen drastic changes in our aesthetic and symbolic environments” (Sinnreich 2007, xv). Jason Stanyek, in his 2014 “Forum on Transcription,” discusses the goal of visual representation of sound (Stanyek 2014). I believe that the ways in which sound is visually represented have deeply contributed to that paradigm shift in music. If, as Paul Théberge writes, the piano along with staff notation was the default music making interface of the nineteenth century, inasmuch as “orchestral music was typically composed at the piano, and piano transcriptions of orchestral works were prepared as a matter of course, for study and performance without any sense of either medium being ‘trivialized’ in the process” (Théberge 1997, 159), then the computer and mobile apps are today’s piano.18 They constitute the default for human interaction with sound in many contemporary practices, and the screen will usually display sound in some form of tabs and blocks-on-grids. Along with these various blocks-on-grids, musical parameters are detached from each other, and music is becoming more modular, to the extent that many parameters disappear for the user on some background layer of the interface, and/or are outsourced to algorithms, LFOs, or random modules that take over control.

Figure 20: The author’s notation system versus a potential equivalent in traditional staff notation

Let me discuss another comparison with staff notation.
My claim here is that the Pulsations [Bitumen] notation system prioritizes the entire amplitude envelope, as opposed to staff notation, which traditionally focuses much more on the ‘attack’ part of the ‘attack-decay-sustain-release’ envelope.

18 I acknowledge that in between the piano and the various computer apps there has undoubtedly existed the tape and particularly the multitrack tape machine, which already brought in modularity, but I will argue that it is only the various customarily available music apps that have achieved the same popularity and ubiquity (within a privileged sociocultural milieu) as the piano, while the multitrack tape machines were available only to an academic and industrial elite.

Unconventional amplitude courses require additional signs in staff notation. David Schnee told me after the last performance that “for us [musicians], if you read a ‘crescendo’ sign on a traditional score, which basically is a linearly increasing shape, but then you never play a crescendo as a linear getting louder. It’s always situational” (Schnee 2018). As much as the pure capability of displaying complex amplitude envelopes could be understood as a major gain of this new notation system, it also poses a problem for the performers, who cannot rely on any previously acquired knowledge of how to interpret these complex shapes.

The Violist as a Software Instrument

In my Ableton Live set, via a self-built software device, I play the violist’s notation; hence, to some degree I am coding the violist’s acting in the moment. Since the violist, as opposed to a real software instrument, is an agentive participant in the system, and is never just executing the score, I could see their agentive acting as a part of my algorithm. That is, they constitute a “random” component in my system—though random only for me, as I am never entirely in control of their acting.
This algorithmic character is amplified by the notation system, which leaves little more room for decisions than would be the case with traditional staff notation. What I am getting at here, with this odd metaphor, relates to my initial prompt of rendering the viola playing more digital, and thus more akin to my musical practice. Pulsations [Bitumen], in this sense, is a simulacrum of a synthesizer; therefore we could call it a “human synthesizer.” When Dixon writes about Auslander that “he clearly discerns that the dominant aesthetic force is the digital, into which the live is incorporated [Auslander speaking of digital live video]” (Dixon 2007, 123), he is referring exactly to the phenomenon that I am seeking to create: the live violist, while interpreting the graphic notation, becomes a part of the digital system themself—but, beyond that, my graphic notation system itself represents my digital way of thinking that stresses an aesthetic known from digital music or gameplay practices. Having the four violists both as agentive algorithms in my system and as players in my game led the performance, for me, to many fascinating yet precarious moments.

Conclusion: A Field of Potentialities

Now, one must ask, why am I more interested in playing the algorithm than playing the actual music? I doubt that I can give a satisfying and conclusive answer, for what seems a humble question is in fact a complex matter. I believe that many notions of what music making means are still largely wedded to the era of Rock, or, if you will, to the playing of classical music of the romantic period: namely, a virtuosic mastering of the instrument in the very moment of performance. Arguably, it is this notion that is remediated in games such as Guitar Hero. There have also existed games that model the making of electronic dance music, which comes with its own understanding of what virtuosic mastery means, such as the games Frequency and Amplitude by Harmonix.
I am alluding here to the practice of music making that the advent of the computer has enabled,19 which is equally an embodied, “vibrational practice” (Eidsheim 2015, 10) that includes bodily and material feedback from the sounds being created in a computer, and yet has shifted onto another level of play. In EDM, it is rather about playing the “live mediation” (Kjus and Danielsen 2016, 336) and about playing the dancers. An answer would thus be that my acting as a performer is nothing more than a reflection of a state-of-the-art performance practice as practiced by billions of kids worldwide. It is my algorithm that shapes the violas rhythmically and, by doing so, is capable of executing incredibly fast, machine-regular rhythms that I would simply not be capable of playing myself. Or, were I capable of such machine-like rhythmic virtuosity, the audience would probably suspect the machine as its cause, much like in Auslander’s account of “chatbots,” algorithms that participate in web chats and pretend to be human; Auslander quotes Heather Peel, who “advises [human chatroom users] that if you type too fast, lurk in the chatroom without participating actively in the conversation, or use too many automated functions in your chat responses, you may be mistaken for a bot [a machine]” (Auslander 2002, 19).

19 I am referring here to a post-DJ/vinyl practice, since with vinyl DJing there is still a crucial ‘hand/ear coordination’ necessary that will gradually disappear with computer software, where the required skills are less in the realm of real-time action, and more in the in-advance planning of the next action that is then executed by the system. For this, cf. Dan Sicko, who states that “for now, the reality of kids in America’s heartland buying Technics turntables or Ableton Live software and developing ‘hand/ear coordination’ is promising enough” (Sicko and Brewster 2010, 81).
My original contribution to the field is then probably not so much my acting on stage as a performer, but rather the design of a meta-instrument that seamlessly integrates live violists (as real human “humanizers”) into my digital music making practice and is responsible for the characteristic sound of Pulsations [Bitumen]. Having earlier defined my conglomerate of computer, software, MIDI controllers and myself as an “assemblage” for live performance, I need to revise this account here and add the four violists, who have become a crucial part of my algorithm. Let me turn back to the amplitude envelopes that I use in this work and that I have described earlier as “gates” that frame the audio signal. I find the idea of framing fascinating for analyzing my work. It is the framing that liberates the action in the street (in the Auslander example that I have used in this essay) from its normal context and re-contextualizes it as a staged, theatrical action instead, if only for the theater audience who is looking at this frame. This equally applies to sampling music in general, but even more so to my “live sampling music.” The technological amplitude envelopes frame the viola’s sound for those who listen to it (also) via the technological remediation. These amplitude envelopes detach the viola sound from the instrument inasmuch as they add a technological copy to its real envelope20 as played by the performer. What I am creating here is a re-synthesized tone that is, as a matter of fact, a novel sound, one that could exist neither as a purely human-made sound nor as a purely technological one, and which is added to the acoustic direct sound of the violas in space. It is a “techno-organic” composite already on the level of sound, and I am not yet even speaking of its conspicuous social potential and implications.
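The gating of the viola signal by digitally shaped amplitude envelopes can be illustrated with a minimal sketch. This is emphatically not the software used in Pulsations [Bitumen]; it is an illustrative Python rendering, under the assumption of a simple piecewise-linear ADSR shape, of how a technological envelope is superimposed on a signal that already carries its own acoustically played envelope.

```python
import numpy as np

def adsr_envelope(n_samples, sr=44100, attack=0.01, decay=0.05,
                  sustain=0.6, release=0.1):
    """Piecewise-linear ADSR amplitude envelope, values in 0..1.

    attack/decay/release are durations in seconds; sustain is a level.
    The sustain plateau fills whatever time remains.
    """
    a = int(attack * sr)
    d = int(decay * sr)
    r = int(release * sr)
    s = max(n_samples - a - d - r, 0)  # sustain fills the remainder
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack ramp up
        np.linspace(1.0, sustain, d, endpoint=False),  # decay to sustain level
        np.full(s, sustain),                           # sustain plateau
        np.linspace(sustain, 0.0, r),                  # release ramp down
    ])
    return env[:n_samples]

sr = 44100
t = np.arange(sr // 2) / sr                 # half a second of samples
viola_like = np.sin(2 * np.pi * 220.0 * t)  # stand-in for the captured viola signal
# the "gate": the technological envelope framing (multiplying) the signal
gated = viola_like * adsr_envelope(len(t), sr)
```

The multiplication in the last line is the “frame”: the digitally drawn attack and release are imposed on top of whatever transient the performer actually bowed, producing the doubled, techno-organic envelope described above.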
In this sense, Pulsations [Bitumen] is constantly troubling the digital, whilst going against the digital with the digital. The digital-ness that it creates requires human participation. The “game world” thereby created is at once real and imaginary and behaves much like Ian Bogost’s description of videogames as “procedural representations” that “are capable of generating moving images in accordance with complex rules that simulate real or imagined physical and cultural processes” (Bogost 2007, 35), which in the case of Pulsations [Bitumen] happens not on the visual, but on the auditory level. Kjus and Danielsen offer an interesting line of thinking in their analysis of electronic music live performance: “Incorporating acoustic instruments into the live performance is one way to negotiate the performance’s overall relationship between actions and sounds” (Kjus and Danielsen 2016, 335).

20 The theorizing of my “double viola” is complex, for the adding of a digitally shaped envelope via the speakers adds to the acoustic envelope of the viola in the room; hence, there are always two envelopes juxtaposed in the space. Many studies have shown that the transient of a note, that is, the ‘attack’ part of the amplitude envelope, is at least as important as the timbre itself for the recognition of an instrument. Thus, while the acoustically played envelope of the viola will obviously reinforce the viola’s “viola-ness,” my digitally created ADSR envelopes will transform the viola into something that is not necessarily perceived as a viola; cf. “The starting transient provides an immediate clue to the ear enabling the listener to recognise the instrument being played quickly” (Gough 2007, 584).
While it could be said of Pulsations [Bitumen] that what is audible to the audience is a composite of acoustic viola sound and digitally mediated technological music, it is quite the contrary on the visual level: four violists play their violas in a way that is familiar to most audience members. I argue that it is this complex relationship, or even mismatch, between the audible and the visible that renders this work yet again ambiguous: what the audience sees is not entirely what they hear, and vice versa. What violist Zsolt Sörés said in the quote that I included in the first chapter, namely “You have to be at once inside and outside” (Sörés 2018), beautifully concludes this essay, as it offers one especially important insight: that he felt quite isolated within his “workstation” or “game world.” Later in the same interview, he contradicted his earlier statement and said that he felt quite relaxed within his “workstation” precisely because he knew that I was there and would “care about his sound” (Sörés 2018) from an outside perspective. He expected me to heavily process his viola sound and turn him into a remediated meta-violist, and he expected me to change his on-screen score if I did not like what he was playing. He went on:

if you [as a participating violist] ignore what is outside, then what you’re creating won’t be relevant. This way of sound-making that this algorithm and this score provoke, and also this workstation which I really like—that’s just a really wide spectrum of being in a certain space-time that we call ‘concert.’ It’s so interesting, never before have I seen such a score. So, this is not the way of music making that I know.
(Sörés 2018)

The fact that he told me these things only after the concert series was over proves, to me, that the network I created with this work was fundamentally relational, and that so many interactions happened on an unconscious or unspoken level that a generalizing statement on what Pulsations [Bitumen] is cannot be made. Like the world in general, it is experienced by each participating member—which includes the audience as active listeners—in an individual way that is internally negotiated between experience and brought-along knowledge. Yet the technological agents responsible for remediation—speakers for the audience, and screens and click track for the violists—amplified the individualizing and individualized notions of the piece. In my eyes, it is a piece that offers quite a vast number of readings. As I have stated throughout this essay, it can be read as sampling music, as a live electronics practice, as a music game, or even as a viola quartet, just as much as the genre references allow it to be read as popular beat music, as electronica, as classical music or as academic art music. In addition, it could be perceived as a composed as much as an improvised piece of music. I have insisted so profoundly on unstable elements throughout this essay because they are at least partly responsible for creating ambiguities and a certain tension—an attentiveness to and anticipation of action evoked by the work. If Pulsations [Bitumen], in the end, was capable of creating an ambiguous field of potentialities, then I think it was successful. I will conclude this essay with an interview fragment from violist Ashley Frith, which I will leave uncommented.
There’s so much of just getting it “right”, so then when it’s like the first time I’m just going in this mode of trying to get it right which is a beautiful practice to just get it wrong [laughs out loudly], just to be okay with getting it wrong, and it’s closer to what you’re looking for, because when I’m so concerned about it then my brain… I need so much focus, mental focus, because it’s new, also, but the repetition too is another thing, I’m just staying in it and I am just being there. I’m counting irregular measures if you will, and getting it to that… (Frith 2017)

Bibliography

Ahearn, Laura M. 2001. “Language and Agency.” Annual Review of Anthropology 30: 109–37. https://www.jstor.org/stable/3069211.
Auslander, Philip. 2002. “Live From Cyberspace: Or, I Was Sitting at My Computer This Guy Appeared He Thought I Was a Bot.” PAJ: A Journal of Performance and Art 24 (1): 16–21. https://doi.org/10.1162/152028101753401767.
———. 2005. “At the Listening Post, or, Do Machines Perform?” International Journal of Performance Arts and Digital Media 1 (1): 5–10. https://doi.org/10.1386/padm.1.1.5/1.
Bogost, Ian. 2007. Persuasive Games. Cambridge, Mass.: MIT Press.
Bolter, J. David, and Richard Grusin. 1999. Remediation: Understanding New Media. Cambridge, Mass.: MIT Press.
Bourdieu, Pierre. 1977. Outline of a Theory of Practice. Translated by Richard Nice. 1st English edition. Cambridge, United Kingdom: Cambridge University Press.
Buchanan, Jono. 2014. “Understanding Groove.” Resident Advisor. https://www.residentadvisor.net/features/2094.
Case, Alexander U. 2007. Sound FX: Unlocking the Creative Potential of Recording Studio Effects. Amsterdam; Boston: Focal Press.
Cheng. 2012. “Role-Playing toward a Virtual Musical Democracy in The Lord of the Rings Online.” Ethnomusicology 56 (1): 31. https://doi.org/10.5406/ethnomusicology.56.1.0031.
Chippewa, Jef. 2018.
“Typology and Problematics of Fixed Notation for the Representation of Electroacoustic and Digital Media.” EContact!, no. 19.3 (January). http://econtact.ca/19_3/chippewa_notation-waveform.html.
Clayton, Jace. 2016. Uproot: Travels in Twenty-First-Century Music and Digital Culture. New York: Farrar, Straus and Giroux.
Cline, David. 2016. The Graph Music of Morton Feldman. Cambridge, United Kingdom: Cambridge University Press.
Deleuze, Gilles, and Félix Guattari. 1987. A Thousand Plateaus: Capitalism and Schizophrenia. 2nd edition. London: Continuum.
Dixon, Steve. 2007. Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. Reprint edition. Cambridge, Mass.: MIT Press.
Drake, Carolyn, and Caroline Palmer. 1993. “Accent Structures in Music Performance.” Music Perception: An Interdisciplinary Journal 10 (3): 343–78. https://doi.org/10.2307/40285574.
Eidsheim, Nina Sun. 2015. Sensing Sound: Singing and Listening as Vibrational Practice. Durham; London: Duke University Press.
Gough, Colin. 2007. “Musical Acoustics.” In Springer Handbook of Acoustics, edited by Thomas D. Rossing, 533–667. New York: Springer. https://doi.org/10.1007/978-0-387-30425-0.
Gray, David. 2018. “The Visualization and Representation of Electroacoustic Music.” EContact!, no. 19.3 (January). http://econtact.ca/19_3/gray_visualization.html.
Iyer, Vijay. 1998. “Microstructures of Feel, Macrostructures of Sound: Embodied Cognition in West African and African-American Musics.” Ph.D. dissertation, University of California, Berkeley.
———. 2002. “Embodied Mind, Situated Cognition, and Expressive Microtiming in African-American Music.” Music Perception 19 (3): 387–414. https://doi.org/10.1525/mp.2002.19.3.387.
Jones, Gareth Michael. 2013. “Music Production: What Methods Do You Use to ‘Humanize’ Hip-Hop Drum Patterns?” https://www.quora.com/Music-Production-What-methods-do-you-use-to-humanize-hip-hop-drum-patterns.
Kjus, Yngvar, and Anne Danielsen. 2016. “Live Mediation: Performing Concerts Using Studio Technology.” Popular Music 35 (3): 320–37. https://doi.org/10.1017/S0261143016000568.
Knowles, Julian, and Donna Hewitt. 2012. “Performance Recordivity: Studio Music in a Live Context.” June 2012. http://arpjournal.com/performance-recordivity-studio-music-in-a-live-context/.
McClary, Susan. 2004. “Rap, Minimalism, and Structures of Time in Late Twentieth-Century Culture.” In Audio Culture: Readings in Modern Music, edited by Christoph Cox and Daniel Warner, 289–98. New York: Continuum.
McIntyre, Michael, R. T. Schumacher, and Jim Woodhouse. 1981. “Aperiodicity in Bowed-String Motion.” Acta Acustica united with Acustica 49 (September).
Miller, Kiri. 2012. Playing Along: Digital Games, YouTube, and Virtual Performance. Oxford; New York: Oxford University Press.
Moseley, Roger. 2016. Keys to Play: Music as a Ludic Medium from Apollo to Nintendo. Oakland, Calif.: University of California Press.
Neill, Ben. 2004. “Breakthrough Beats: Rhythm and the Aesthetics of Contemporary Electronic Music.” In Audio Culture: Readings in Modern Music, edited by Christoph Cox and Daniel Warner, 386–91. New York: Continuum.
Oliveros, Pauline. 2011. “From Outside the Window: Electronic Sound Performance.” In The Oxford Handbook of Computer Music, edited by Roger T. Dean, 467–72. Oxford: Oxford University Press.
Rose, Tricia. 2008. The Hip Hop Wars: What We Talk about When We Talk about Hip Hop—and Why It Matters. Philadelphia: Basic Books.
Schloss, Joseph Glenn. 2004. Making Beats: The Art of Sample-Based Hip-Hop. Middletown, Conn.: Wesleyan University Press.
Schutz, Alfred, H. L. van Breda, Maurice Natanson, Arvid Brodersen, Ilse Schütz, Aron Gurwitsch, Helmut R. Wagner, et al. 2011. Collected Papers V: Phenomenology and the Social Sciences. Phaenomenologica 11, 15, 22, 136, 205, 206. London; New York: Springer.
Sicko, Dan, and Bill Brewster. 2010. Techno Rebels: The Renegades of Electronic Funk.
Second edition. Detroit, Mich.: Painted Turtle.
Sinnreich, Aram Arthur. 2007. “Configurable Culture: Mainstreaming the Remix, Remixing the Mainstream.” Los Angeles, Calif.: University of Southern California. http://cdm15799.contentdm.oclc.org/cdm/ref/collection/p15799coll127/id/590228.
Stanyek, Jason. 2014. “Forum on Transcription.” Twentieth-Century Music 11 (1): 101–61. https://doi.org/10.1017/S1478572214000024.
Théberge, Paul. 1997. Any Sound You Can Imagine: Making Music/Consuming Technology. Middletown, Conn.: Wesleyan University Press.
Tomlinson, Gary. 2015. A Million Years of Music: The Emergence of Human Modernity. New York: Zone Books.
Toynbee, Jason. 2000. Making Popular Music: Musicians, Creativity and Institutions. London: Arnold.
Veal, Michael. 2007. Dub: Soundscapes and Shattered Songs in Jamaican Reggae. Middletown, Conn.: Wesleyan University Press.
Walton, Ashley E., Michael J. Richardson, Peter Langland-Hassan, and Anthony Chemero. 2015. “Improvisation and the Self-Organization of Multiple Musical Bodies.” Frontiers in Psychology 6 (April). https://doi.org/10.3389/fpsyg.2015.00313.
Young, Andrea. 2007. “Rhythm in Electronic Music.” The Hague, Netherlands: Royal Conservatory of The Hague. https://doi.org/10.13140/RG.2.1.2638.6408.

Appendices

Excerpts from the First Interview with Violist Ashley Frith, November 2017

Oral conversation at Brown University, in a rehearsal for Pulsations [Bitumen]. [Marcel explains the system, then a rehearsal with viola and electronics]

Ashley: I feel that there were a couple moments where there was a rhythm that I could… like where I was like, okay, my brain was computing the instructions, I think it’s all just so new, like I feel like I was with this language for a while, … I could feel it very well and sort of get into the rhythm of it, and ya I think there’s a lot of, it’s like very clear and understandable, my brain was just struggling to like stay in the language, it’s just so new, and I’m just trying it out.
Marcel: What were the things that you were struggling with?

A: So just like even just coming back to a page like this, it would take me a second to compute like… even if I can associate these as notes, the black things on the page as notes, but just thinking, seeing this as the thing I was hearing in my ear [the metronome], okay, that’s the pulse that is going by, and then I’m just sort of fitting in that way, and that was a notation to grasp on to, to like divide and count. So, there was a sort of anxiety about like getting it wrong which was really interesting because… it was also just going by quick enough that it was… there was a lot of computing that I had time to do, sort of just a little bit before, a little bit after, just fitting it in in that way. I’m sure there is so much intentionality in there of that being the relationship to it, but ya I just… you know just when that page appeared with the [text instruction] “noise” and there were no notes [laughs loudly], so I was like, oh right, you just explained this, so my brain was not working; of course just make no… It’s really interesting, it’s a curious way to just explore time, and be with time, even though I was struggling with it a little bit, I was… there was also something very freeing about it, I would just need to get into that world. Just practice being in that space of interpretation.

M: What was more constraining, the click track that was kind of not really… there’s no freedom to that, or is it more the graphic representation that you have to understand really quickly or put those three parts together in your brain?

A: I think that… it was the three parts, putting in my brain, and, okay, I’m doing this now. And that was the trickiest part personally. The click track was… yeah, you have to follow that, but it was also the steadiest thing, right, so that was like instructions, it’s like, okay, go with this, that’s the thing that’s going.
And in some ways if it was a little bit slower or something, like, initially, for the first time doing it, there might be a space to just like interpret things in a different way, mentally. But I’m not sure what, you know, if that’s a part of it, you know what I mean? Like too much space in time to… because if I have a longer beat, then my brain can easily just be, okay, fit in here, fit in there, as opposed to being like… it goes by so quickly, it’s like, you really just… you can only just… it’s very quick, you only just… like, you’re guessing! It’s just like a ballpark area! If it’s a longer beat, then there’s room for just mental decisions that are probably a little bit more accurate than what’s on the page, but ya…

M: But how is that different than if you’re playing Bach from a sheet of music, or if you sight-read a new Bach sonata?

A: So, I guess, so you don’t have the click, right, so there is a sense of pulse you’re clinging on to, and also, I think we cling a lot to, okay, you’re holding this note for this long, that note for that long, and it’s also very familiar. Even if you never play let’s say Bach, you have never played Bach before, but we all know how Bach sounds. So, for the most part you can sort of guess, I think there is a lot of guessing that I think happens. Okay, so it’s like, that’s the note, that’s how long you hold it for, and then there’s so much of that where you just know what’s coming, like even Bach who is so complex, and so… he was so surprising in his cell of music in which he was playing, but still for a lot of us we memorize those surprises in those terms. So yeah there’s something about that, and then the familiarity of the lines on the staff, like even if you take away notes, there’s something about that notation, which isn’t to say that this [graphic notation] couldn’t become very familiar, if it’s something that is just done, right, and this now is my first time doing it.
M: A pitch system like this, as opposed to that [referring to Bach], I could easily just include five lines here, but what do you think is the difference? Is there a difference?

A: I think there is choice, right, so I can choose an octave, and personally there is a momentary hesitation with that because it’s not like, okay I’m playing that note, here, but it’s like so which one do I play, which can be a really freeing thing, but it’s interesting to just feeling so married to notation, where it’s like you’re just so used to so many just saying “play this note on that string, that register.” So, for me this has something to do with it, there is a lot of choice, there’s a lot of choice there.

M: Did you ever work with [the software] Garage Band or with [other] music software?

A: No. I have worked just on Sibelius and stuff like that but that’s just notation. [pauses] There’s probably an added level of being recorded and not just practicing on my own, […] there’s so much of just getting it right, so then when it’s like the first time I’m just going in this mode of trying to get it right which is a beautiful practice to just get it wrong [laughs out loudly]. Just to be okay with getting it wrong, and it’s closer to what you’re looking for, because when I’m so concerned about it then my brain… I need so much focus, mental focus, because it’s new, also, but the repetition too is another thing, I’m just staying in it and being there. I’m counting irregular measures if you will, and getting it to that…

M: Did you end up counting?

A: I was, but initially I think I was just going with it and just reading it, but then I found if there is any misstep or anything I just wanted to make sure I’m getting back to one, then I started counting.
Especially when there wasn’t—what I actually kind of liked in some way—when it wasn’t notated, when the beats weren’t all there in the ones that were measured, I got into a groove, ya because then I was just counting, and somehow that felt easier to me, but it was also, I didn’t have to line up with the pitch, but there was another one, the one we were crescendoing and that had two measures, that one [points to the screen], I was counting to five […]. It did feel more free, but again I think it’s just like there’s something about accuracy. And I can accurately start around beat three, and on the fifth beat, that’s easier than approximating each beat, like landing somewhere, it’s again just like getting used to the notation, just practicing that. The other interesting thing was the sizes of the notes, the height of the notes. It was really interesting that I found myself—especially with the sound—so there was a lot of exploration of how soft can I get, so this is a new language, for me, and because it was going so fast there wasn’t a lot of prep time, or planning,

M: …Or correction time for each single note…?

A: Right, I was just: keep going and try again, the next that came around, and then the one that was notated, the dynamics, that was really interesting because I think, again, there’s such a mapping out of dynamics even within a longer phrase, right, so then there was this thing where I’m playing these notes which was really great to hear just these different sounds but it was going at a rate that I found I would play a note and I was like, oh wow, I was way too loud for what needed to come next, and then my timbre was totally getting distorted, it felt a lot more like pressure sound coming out, just because I was differentiating between the dynamics.
That was just such an interesting mental quality to explore how I was just struggling to just start in a certain way, to start in a certain tempo, you know what we have is just this little arrow [referring to an earlier version of the notation where there sometimes appeared an upward or downward arrow next to pitch choices as an instruction to play slightly higher or lower], and there’s so much of that, that’s all just interpreted, right? But maybe there is something about again being just so attached to, okay, we’re starting here and this is where the phrase is going, and then it cuts off there, or it’s going to this sharp thing and then we die back down, and this letter is just notation that is like so… engrained, it’s like, what do I do?, but this is like a forte-piano if I think about it [points to the on-screen score], and…

M: But you didn’t think about it while doing it?

A: No, but… all my normal tools, I guess, were just sort of like… but it’s again just practicing. I was a little anxious, but again it was just like… wanting to do it well, and that just kept coming up, you know, I was just like sort of all over the place, and wasn’t executing it that well, and that’s just a good emotional practice, that anxiety and that worry, of just, can I do this?, you know am I interpreting this, the first time through, am I interpreting this as close to what’s on the page, what’s on the screen, as possible? And there is a school, there are schools of interpretations that we default to, right, so we say, okay, Baroque music, this is the timbre that they used, this is what we use and this is how we interpret that…

M: So, is there a default school for this?

A: I don’t know! Maybe that’s also the scary part of it, right, where I don’t have that, oh yeah this is how people play this music. It’s freeing in a lot of ways if I just go with that, as opposed to just… but initially it’s a little scary because you’re like, what am I going for here?
You know, or we have these pinnacles and places, even composer by composer, right, like this is what we’re going for here and this is what we’re going for there, and totally just created by so many people, changed and interpreted over hundreds of years, but we still cling to that. …

M: Have you played Feldman?

A: No.

M: Did you ever have the feeling of a musical phrase?

A: Totally, for some of them that I did for long enough, it totally became like this thing that I was feeling, and I had to let go whether or not it was correct, that was something I kept coming back to, when I was feeling it and I was, oh wait, I was second-guessing that pulse, or that my own memory, that I had just taken the sound in, but there were definitely moments that… even for the harmonics one with the crescendo, there was just this thing that I was feeling that, and I had a little pattern [a sequence of pitches] going, which I don’t know if that was okay but it just felt good, I was just flowing with this sound. And there was another one, oh yeah, the one with the crescendo and decrescendo, so that was really interesting because again that was one of those things where it’s like—we just had this conversation about too harsh bow pressure—but then when you add the fade-in and fade-out, so that’s something that I have to work with, just like, can I so create that sound and then still adding the… because so much of that is pressure, that I’m thinking about, so it’s like… [plays the viola]. But I struggle, because I had to explore that, because bow speed and pressure is so much of fading in and fading out, so again I was like, oh my language or my tools are not there anymore, so how am I gonna create this shape with keeping this particular sound? So that was really interesting. And then I would… I found it was much more ordinary sound a lot of times, but I was also just flowing around with those notes.
There was another one where it felt easier just counting 7, so there’s a little rhythm with that sort of without having the beats sort of latched out, so that was an interesting experience, which again I just had to, like, how do I make this? I just had to give myself some possibilities and some options to do. But [it’s] really neat, it was like enjoyable, there were moments where I was stressing out [laughs out loudly] you know, for the first time and I was like, just in the sound.

M: Which were these enjoyable moments?

A: So, I think it was the ones I didn’t have to… the ones I wasn’t trying to be ahead or behind or in-between the beat, which I think would just take more time to like adjust. So, there were these where I was just sort of counting and fitting in and exploring, and just figuring out the sound quality and just moving within the pitches, like just moving around. Something really enjoyable that came out of this one, and then the other one with the harmonics and the crescendos and the… maybe the “noise” ones were kind of interesting, they were just fast, and then there also was some cool thing to those just with the beat, just going, but then ya, the dynamics… that became another thing…

M: But the only thing this system really can’t do is melody?

A: Hm, well, it can’t give you a melody, but a melody can come out of it.

M: Is this [notation system] too easy or too complicated?

A: More on the complicated side, complicated in the sense that I just have to spend some time with the language… so it’s not a surprise each time [when a new page shows up]. It’s a newer thing for me [using this kind of notation], it’s not a world that I have been in for a long time, which I am very excited about, …it’s just more foreign, than…, like Laura [a friend of hers] who was in this world for ten, fifteen years… So, there is always a bit of anxiety for it just being a new world.
Excerpts from the Second Interview with Violist Ashley Frith, April 2018

Email conversation in April 2018.

Marcel: How participatory did the piece feel? Did you feel overwhelmed by too many decisions to make (interpreting the graphic score), or did you feel constrained by not having enough freedom or possibility for expression?

Ashley: The piece felt very participatory, it felt like I played a huge part in deciding what to play. I never felt too overwhelmed with decisions, but the experience worked a different part of my brain…constantly combining options. There were only a few moments when I felt stuck, or a slave to the machine (Marcel, the machine☺), when something would go on until I was exhausted but didn’t feel like I could stop, even though I knew I was allowed to.

M: Did you feel private and hiding behind your screen/hidden behind technology (you alone with that computer screen that tells you what to do), or did you feel as part of the ensemble and in touch with the audience?

A: I never felt private or hidden, definitely not any more than behind a stand of music. With so many options and room for musical decisions, I often felt more vulnerable. Sometimes I felt a part of the ensemble, and other times I felt less connected… I’m assuming a big part of that is the lack of necessity for communication with the other musicians. Each space and city [when touring with the production in four cities] definitely felt drastically different from each other, but overall I felt in touch with the energy of the audience, and sometimes with the activity of the audience as well.

M: What was your experience with the click track? What happened with your playing (a) if you were hearing a regular click track, (b) if you were hearing an irregular click track, and (c) if there was no click track?

A: The click track mostly felt okay. I tended to be less relaxed when it was present, and when it became irregular I became very stressed.
I think I just really wanted to always be with it, and I had to accept that that wasn’t possible… which helped tremendously. After that, I was able to feel the irregular pulse as a rhythm I was imitating, sometimes more successfully than others.

Excerpts from Interview with Violist David Schnee, March 2018

Oral conversation on the phone in Swiss German, translated by the author.

Marcel: The fact that you cannot foresee what will come next [what score/pattern], how did you deal with that?

David: I surely had to get used to not knowing what would come next. I learned not to be like ‘gosh, what will be next and will I get it?,’ but rather just learned to accept whatever came next. I wasn’t surprised anymore. I just kept playing whatever I was playing and accepted the fact that every now and then something new would show up. It was in the second concert that I fully understood how to take things in as the case may be. Perhaps it was also because less action happened in the second concert; my patterns remained for longer durations before they eventually changed, so there were many fewer changes than in the first concert. I got used to the long durational quality of the concert. I stopped expecting new patterns to arrive soon, and even started to expect each new pattern to stay for quite a long time. For us [violists/musicians], a ‘crescendo’ sign on a traditional score basically is a linearly increasing shape, but you never play a crescendo as a linear getting-louder. It’s always situational. The longer I played the same pattern, the more I started to feel it as a musical phrase. Sometimes I also shifted patterns in my mind to render them more musical.
Say, if the first event starts on the third beat, and then goes like ‘loud – medium – silence – medium – silence,’ then I started to feel the loudest event on beat three as the ‘one.’ Some of your patterns are much more easily readable as musical phrases than others, but to me it is also a question whether your intention was to really have these patterns felt as musical phrases, or whether you wanted to keep them as abstract as possible? The faster I was capable of reading the pattern as a musical phrase, the faster I was also capable of… [pauses] ‘putting’ it into the groove.

M: Does this mean that you were constantly counting, internally?

D: Yes, actually, I was always counting, I think. Especially for longer patterns. Say, if I had an 11-beat pattern, I would always count 11 just to ensure that I was really playing in 11. But if it was a 3-beat pattern that went like ‘loud – soft – soft’ [sings], or a similar 4-beat pattern, then I stopped counting and felt it internally. But, as I said before, if it was a 4-beat pattern but the last beat in the pattern was the emphasis, then I started to feel beat four as beat one. So, to me it remains a question whether one should try to notate all patterns as more musical phrases, that is, with the emphasis at the beginning, or on the contrary whether you even intend this shifted, skewed character of accents that are not at the beginning? Can I even play a loud beat four exactly as loud as a loud beat one? Or will I play them slightly differently? Or, from the outside, would someone in the audience even feel a pulse, just because there is no meter that is perceivable? Nobody will say ‘oh, this is a four-beat measure, and now it’s a seven-beat measure.’

Excerpts from Interview with Violist Zsolt Sörés, February 2018

Oral conversation on the phone.

Marcel: Compared to what you usually do with your viola and your objects, now you were limited, as you had no objects, no electronics; you only had your viola. So what did you end up doing with it?
Zsolt: I tried to focus on the sound making, for sure. I produced meta-sounds, and I focused on how I can create the sounds that are required of me. It is of course impossible to play a hundred percent solid, only a robot could do that, so human beings always do everything a bit differently. It’s a question of how you read a score. I think it’s a big mistake if classical musicians want to repeat last night’s performance and want to play a piece in the same way as they played it the night before; they really would like to recreate the same thing as always, but I think this is not an intended part of the music, as you have to recreate it every time very differently.

M: Do you position yourself outside of the classical music world? In what realm do you position yourself?

Zs: For me, music creates a space. I don’t think that you have to fill that space up with your music, just because the sound itself will create that anyway. And every time, that space is a different one. Hence, our music is always changing. So, playing Pulsations [Bitumen], for me, is more like a meditation about my sound creation practice. About how I express myself. And that is very interesting, because it is not very common to experience this.

M: Going back to the word “repetition” that you used to describe classical musicians who want to repeat a piece over and over: what is repetition in Pulsations [Bitumen]?

Zs: Yes, sure, [there is] looping. I felt a difference between the loops when I restarted a sequence, because I just had the score page [with the pattern] and no idea for how long I would be playing it—so, I had to be “ill” [“crazy”]… That immediately eliminated the objective sense of time and became more of an inner timing, as I didn’t know for how long I would be playing this—I will just be looping my gesture, maybe eternally? So, this is a way of doing a meditation, I think, and that metaphor really helped me to understand your music.
I would love to ask the other musicians if they used a similar metaphor.

M: Have you done editing on the computer? …with LFOs or [music] software?

Zs: My viola sound is like a recreation of a digital style of looping, yes, really, but I have never before thought of that—maybe with the difference that every loop iteration, with my viola, will be different, and not exactly the same. What is the context, and how is it different if two violas play a duo, or if it is a trio, or solo, or quartet, or a quintet with you, of course? When someone is playing lots of notes (C, D, G, etc.) in one score, you know. And I felt it’s more interesting when we played less; more minimal, more just one thing. Let’s repeat this one thing only. This is what I felt in all of the concerts, in all three or four concerts, including the dress rehearsal. Every time we played the full 60-minute version, for me it was always most interesting to go ‘back to the basics’ and to focus, myself, on that very sound that is in one tone, one note. In how many different ways can I play this one sound? It was about finding the right way. I never believed in the absolutist idea of music making, or in the absolutum of sound that consists of only one single way of doing it, but what I felt is that I wanted to get better and better at what is required of me. How can I apply my technique to these slots throughout the score?

Zs: It’s a very ‘human’ composition; it’s not only about the machine or about the music of the machine. Even the rhythmic looping is very human, because it was very personalized; everyone could be very individual in it, and this was really great. I really enjoyed that everyone is so different. Generally, in improvised music, I like it more if each participant has their individual ‘language’ or sound, because then everything is heterogeneous.
The collective music, then, is how the individuals add their own worlds and mix them together, so it’s a music that you cannot imagine alone, and that you cannot make alone.

M: Is, then, Pulsations [Bitumen], even though I hate categories and categorizing, rather improvised or composed music?

Zs: I don’t like the word ‘improvisation’ either; I think of great improvisation rather as instant composition, and on the other hand, improvisation was always part of composed music; just think of Baroque music practices. Composition and improvisation really are the same thing; they are not opposites. The methodology might be the only difference between the two.

M: So, then, Pulsations [Bitumen] is Baroque music? I only gave you a rhythmic structure and harmonies, and what you make out of this was left to you?

Zs: If this piece helps me to develop my own language within it, then it is successful. The irregular click track, to me, was like an instruction to ‘try to follow the click track,’ but it’s impossible to follow. You probably would have to rehearse endlessly. The problem is that there is a delay between hearing the next click and the realization of what this rhythm will be. Your brain will try to find some scheme in this click track, some rhythm if it’s not regular, but you really can’t, because it’s irregular. So, you will always get the next click at a very indeterminate moment. It’s really like work [a job]: how can you try to follow it? You always make a big effort and will never succeed [laughs out loudly]. You definitely need a brain and intuition, but also passion for these irregular sections. You have to be at once inside and outside. It is really like work [like a job]: you have your brain, your mindset, your intuition, your senses. You need to be inside and outside; if you ignore what is outside, then what you’re creating won’t be relevant.
This way of sound making that this algorithm and this score provoke, and also this ‘workstation’ (which I really like)—that’s just a really wide spectrum of being in a certain space-time that we call ‘concert.’ It’s so interesting; never before have I seen such a score. So, this is not the way of music making that I know.