Originally appeared in French in Revue & Corrigé magazine, 2012.
How do you make music with spoken language? How do you notate and listen to speech in a musical setting? Is there a dividing line between semantic and musical sound?
The more I look into this, the blurrier the borderline between language and music, meaning and sound, speaking and singing appears. It seems to me that this divide can no longer be articulated as a dichotomy, but rather requires a more complex set of references and perspectives.
This article is a collage of interviews I conducted in different places and times within the first few months of 2011 with five of my favorite speech-music makers.
They are the composers Peter Ablinger from Austria and Arturas Bumšteinas from Lithuania, the composer-vocalists Jennifer Walshe from Dublin and Tomomi Adachi from Tokyo, and the slightly exceptional case of Alex Waterman from New York City, usually a cellist, composer and musicologist. The reason I've questioned Alex here is his ongoing work as the editor of a comprehensive publication of Robert Ashley's opera scores.
I brought them together in this virtual conversation that navigates several common themes while taking some unexpected twists and turns.
READ IT SAY IT SING IT
Alessandro Bosetti: Robert Ashley's music sounds very much like speech, but is it speech that we hear, or is it rather singing?
Alex Waterman: You cannot just read his work. It's impossible. You have to sing it. You sit down there and you work it out, you do it on pitch, you work it with a metronome, you record it, and ultimately, when it's really good, it sounds like speech.
Bob (Ashley) always says: "Talk and music don't mix." You can't talk and make music at the same time.
Alex Waterman: The most important thing to consider when looking at Robert Ashley's scores is that they are made for his band, which is made up of musically trained and non-musically trained people alike. The notation is produced with the abilities of those people in mind.
What you see on the page in a typical Robert Ashley score looks like a Microsoft Word document. You see text in lines, where the lines are numbered and the numbers act as rehearsal numbers and bar numbers at the same time. He has very crude ways of representing the rhythm and the music: in certain parts the words are underlined, and sometimes, if he is squeezing more syllables into a beat, he uses a smaller font size in order to compress the text into the single beat. It's a very simple, immediately understandable way of representing how something should be read.
And there is an enormous amount of information that is not on the page!
What we hear as the music in the background, the "orchestra" as Ashley calls it, is not on the page at all. It only exists as MIDI files. If one were to recreate this music, one option would be to go back and try to reconstruct, with old technology, how those MIDI patches must have sounded. But that is not really interesting, and it's actually not the point.
Jennifer Walshe: I normally don't bother notating the pieces I make for myself in a form accessible to other interpreters. That's because I play them, and that's the deal. But when I write pieces for other vocalists I am actually incredibly meticulous about the score and about how the notation works.
In this case I use a combination of different things: normal score notation and the International Phonetic Alphabet to get specific sounds. I also use photographs and recordings, or stricter score notations augmented by them. Once I really wanted the performers to do a falsetto in the way of Pharrell Williams or Justin Timberlake, so I made them a CD of all those pop boys to listen to. "He wants his cowboys to sound like how he thinks cowboys should sound" is a piece for the Vokal Solisten in Stuttgart that uses a lot of different vocal techniques, ranging from really abstract percussion sounds in the mouth to plain singing. Sometimes I would just say "The way you are singing here is like that scene of that movie", or "You are like this person who has just gone through that emotion", trying to give them a way into the sound which is connected to their bodies or their experiences.
Jennifer Walshe: Early on, when I started improvising with my voice, I felt – or the establishment felt – that I had to be able to notate it, otherwise it would not be valid compositionally. This was a big challenge for me. The first piece that I did sort of write for my own voice, a piece called "As mo Cheann" (out of my head), is for voice and violin, because I was working with my improviser friend Johnatan Chen and I thought I'd write a piece that he and I could play together. It won the Kranichsteiner Musikpreis in Darmstadt. I remember I had just written a rough score, and the jury came backstage after I performed and said: "Give us the score." I said, "Oh, but it's just sketches." So they said, "You will be eliminated from the competition unless you hand over the score." I remember I had to write my name on it, because it didn't even have my name on it, and then it went off to the jury. Afterwards they said to me, "Our big concern was whether or not you could notate your sounds." They felt that if I had improvised the sounds that I had made – even if I had improvised them exactly the same every time – but didn't know how to write them down, I would not have got the prize.
Peter Ablinger: In the mid-90s I started to work with electronic media and with programmers who created specific tools to analyze the materials I was bringing in – not just spoken materials, at the beginning, but every kind of sound – and to transform them, pixel by pixel, into a musical score. It would be like taking a photograph, laying a grid over it and then, pixel by pixel, transferring it onto a canvas with a paintbrush.
I decided very quickly not to even try for the maximal approximation, but rather to create these specific situations with different fields of sounds and perception. I learned several things by opposing a field recording on one end and a musical possibility on the other.
I learned that music is already a grid by itself, since it is always constructed in steps. On the vertical level, in the spectrum, and on the horizontal, rhythmic level, you always deal with discrete steps – especially on the piano, where you always have the half tones.
The observation goes more in one way than in the other: it goes from music to the sounds. The music becomes the observer of these other sounds. Music is already a kind of historic grammar that we have learned and worked with over thousands of years. But we don't really have a grammar for how to deal with the other sounds. It does not exist. We don't really listen to the things outside of music. We do it only in functional ways: we hear when a car comes by, so we can step aside. It's just functional, we don't listen to them aesthetically.
Alessandro Bosetti: So, how do you write this down?
Peter Ablinger: I usually proceed in two steps: the first step is using an analyzing tool that analyzes the spectrum. The second step is that of extracting score information from that analysis. If I took everything that came out of the analysis we would probably not hear anything, or we would not find it interesting…
Alessandro Bosetti: So you choose?
Peter Ablinger: I have to choose, yes, absolutely. As I have learned in this kind of work, there is no objective way of doing that. In the very early pieces I tried to use some kind of "objectivism", for example working only with absolutely regular grids, to be as little subjective as possible. I eventually came to accept that this objectivism is just an illusion.
Therefore I worked with my… intention. I think perception needs intention. Without having an intention we don't see anything or hear anything. And the intention is already a kind of selection of a specific grid. It's like having a library of grids and having to choose. You say "I am in this or that mood" and choose accordingly. You choose to see a situation like the movie "Casablanca", or you choose to look at this tree in the same way as if it were a painting by Manet.
Alessandro Bosetti: How do you notate for instruments in your "speech pieces"? How do you deal with rhythm?
Peter Ablinger: My notation is completely traditional: bars, meter, notes, rhythms, pitches and sometimes dynamics – not in all pieces…
But how do I approach a specific speaking voice? That is very different. Sometimes I hear a very specific rhythm in it, and this may lead me to imitate its tempo. But then it never fits!
Sometimes there are five syllables but just four sounds. They sound as if they were five.
So it's the grid that influences our perception, making four out of five.
In other pieces I don't even attempt this kind of relationship and work with very long grids. Long durations. In the visual sense that would mean a picture with very large pixels, and with very large pixels the picture becomes abstract. There is no naturalistic relation anymore between the music and the speech, and I find it again very interesting that, whether I do it in a mimetic way – that is to say, concrete – or in an abstract way, it is always the same method. Concrete and abstract are opposite only in our brain.
Tomomi Adachi : Coming and going between the abstract and the concrete is very interesting.
Alessandro Bosetti: Can you identify a "tradition" of speech music? Could you give a very subjective sketch of it, according to your tastes and influences?
Tomomi Adachi: I usually call it "sound poetry", and it originated with DADA. Schwitters' work is still an important reference point for me. But I believe my work belongs to music rather than to poetry. I am uncomfortable with the term "speech music" because I don't use real "speech" in my works so much. My material is voices which sound like "speech", and what is important is the resemblance to "speech".
Apart from the solo voice improvisations, where in most cases I use electronics, I use the voice in the context of compositions for a temporary group including non-musicians, and for my solo voice. The materials are basically voices which sound like speaking, although I don't use meaningful words so much. They follow the Japanese phonetic system, but are not real Japanese words. It's interesting that some non-Japanese listeners believe those words may have a meaning in Japanese. Both Japanese and non-Japanese speakers are fascinated by the same characteristics of these artificial words. I think it is an interesting phenomenon – related to the late Wittgenstein's thought.
I choose types of notation which I suppose performers can easily understand. I mostly use Western five-line notation. I don't use clefs, and the pitch in the score is not precise: I just suggest intonations. And in some cases I use text descriptions.
Alessandro Bosetti: How did you come to use the voice as your main instrument?
Tomomi Adachi: After trying many instruments, I chose the voice since it is the cheapest and the easiest one to carry, and also a good material for live processing. Inevitably I embraced the tradition of sound poetry, since I was not convinced that my practice belonged to singing.
Arturas Bumšteinas: Everything suddenly changes when the voice appears in a soundscape. The human ear starts striving for communication. It is up to the composer to define how much communication he wants to put out there through the voice he is employing. I would choose a place somewhere in between those two choices – music and communication. I'd like to use a beautiful free voice which speaks as if it were singing, and sings so naturally that it resembles everyday speech. This is a dream…
Singing in itself is an ambiguous form of speech: speech which is granted the privilege of going in all three directions – that of language, that of music and that of noise.
I can't avoid using the voice, but I don't like using an exalted voice – operatic singing, screaming, pompous melo-declamation, etc. – I prefer whisper and croon, as well as simple speech. I always thought that amateur vocalists are better than classically trained ones, and that so-called normal everyday speech is much more interesting than the acted, staged, fake actor's voice – the voice of deceit.
If I have to give you one example from my work, it would be Antiradical Opera, a composition based on a libretto by the American writer Jesse Glass. For this piece I worked with around 20 musicians (vocalists and instrumentalists, both professional and amateur). Along with traditional notation, I used a self-invented number notation where I indicate the direction of the melody and its pitches with the help of a few numbers. The realization of such notation is always highly subjective. When you collect several such realizations and try putting them together you get a very unexpected but always exciting result, which I would call a virtual choir. It is a very simple method and had already been used many times before my work, but the subjectivity of the final result is what interests me the most, and it cannot be repeated twice.
I believe that there are several kinds/ways of listening to sounds. To any sounds – be it speech, a violin, an explosion or a birdsong. So-called reduced listening permits us to hear any acoustic phenomenon only by its superficial features. Though maybe the word "superficial" is not the right one in the case of reduced listening, since the so-called surface of the sound is its essential substance. One way to enjoy language as music would be to concentrate on its non-semantic aspects. And, on the contrary, another way of listening to sounds would be the semantic one, the one which makes you think of the source of the emitted sound – what is it? A shotgun, a train, an alto flute, the wind?…
Jennifer Walshe: I am very interested in neurology and neuroscience, and in particular in neuroplasticity. Neuroplasticity – as described in the works of neurologists like V. S. Ramachandran and Paul Bach-y-Rita – means that the brain can continually change: our brains are plastic rather than fixed.
Even in patients who had a stroke some years ago, you can actually make these changes for the better happen.
One of the books I was reading in this regard is called "The Singing Neanderthals" by Steven Mithen. His whole theory is that music and language developed together.
He talks about different processing modules in the brain. Broca's area processes language, and if Broca's area gets knocked out in somebody's brain, that person can't speak anymore. Maybe she understands exactly what is said, but she cannot say anything back. Referring to music, he talks about people who get one module of the brain knocked out by a lesion or an aneurysm: if you play them the sound of a trumpet and the sound of a clarinet, they can't tell them apart. They hear music and they can't figure out whether it is a fast march or a slow symphony or something. This is called amusia. Those people still have the area of the brain that processes noise working just fine, so they can tell the difference between wind blowing and a dog barking. This is fascinating to me because I write music where I use the dog barking and the other noise sounds, and I am also using the music sounds. My brain must be wired so that those two modules have a lot more connections between them. In this music you are using different centers; you are making the brains of people walking into your concerts fire in different places. There would be the sound processing module as well as the noise processing module as well as the language processing module, but by continually switching between those three, or by continually having them fire, you are forcing the brain to keep making these connections. In that sense the connection with language becomes more blurred, and more interesting to me.
This is what I tell my students: "You can't understand this music because the neural connections between the parts of your brain that process sound and noise haven't been developed enough!"
Peter Ablinger: With the automatic piano pieces (developed in collaboration with researchers at the University of Graz: a device that translates the spectral analysis of spoken recordings into clusters played by a robotic mechanism applied to the instrument's keyboard – author's note) one can observe really well that you can either hear only a chaotically playing piano or only a voice. You have only these two options. You can never hear both at the same time, which I find amazingly exciting. It gives me an insight into what our brain does; it gives us an inside view of what happens when we perceive. It makes a physical adventure, an experience, out of active perception.
It tells me that we don't just passively perceive what happens, but really do something when we perceive. We create something; we produce what we perceive. It's not there – we produce it. When I read about it in the scientific publications of the last thirty years it sounds very abstract, but in this piece it is very direct, erlebbar (experienceable).
Alessandro Bosetti: So it's like a Kippbild (an optical illusion where the observer vacillates between two alternative interpretations)?
Peter Ablinger: Yes, sometimes like a Kippbild, or sometimes like those 3D pictures. If you look at them you just see an abstract ornament. Only by looking "through" the picture do you see the 3D image. You can't have both at the same time, the ornament and the 3D picture.
Alessandro Bosetti: What distinguishes speaking from singing? If there is an ambiguity about whether what we hear is speech or song, is it important to decide and define it?
Tomomi Adachi: I have a clear definition for "speech" and "song". "Song" is a melody plus words. The melody can exist by itself; the words exist independently. In "speech", the musical aspects and the linguistic aspects are naturally connected, but it is always possible to divide them again. In other words, "song" is multimedia, whereas "speech" is intermedia.
I am personally not interested in singing at all. When I began to listen to music – I was 14 years old, and it was mainly Western classical music – I really could not see why so much of it had to include singing parts. Singing is too emotional and private. I thought music must be rational and universal. My vocal music is music with voice by a vocalist who hates singing. It is connected with speaking much more than with singing.
Alessandro Bosetti: But over the years it seems to me that your vocal art has exposed some very emotional and private aspects. Do you still feel the way you felt about speech when you started listening to music? Do you still think speech and singing are so opposed? I am thinking of Robert Ashley's work right now, which is very ambiguous in that sense…
Tomomi Adachi: Of course, my idea about music has changed since I was 14. Nevertheless, I still think music should be rational and universal in some sense, although I know that reason is limited and universality is an unfinished dream.
I don't deny that you find my vocal works emotional, but that is not a fact happening in my mind; it is rather something you find in my/our performance. For me, emotion is a physical chain reaction. In this sense, my music is emotional. When you hear my scream, you might feel some emotion from it, but this doesn't mean my own fear or anger. It is just a material for me, a material that causes physical reactions in the performer. My focus is on these reactions in the performer and on the relationships among the materials, which in this case are the vocal sounds.
Also, I still don't understand "singing" well, though my viewpoint has maybe gotten wider. I guess "singing" conveys "emotion" as a self-evident subjective matter. I am working on another kind of "emotion". Music somehow has a magical power to conceal it. I don't like to exploit this magical power.
I see myself as a child of conceptualism and minimalism rather than of expressionism. My question is: why could the conceptualists not deal with emotion? It seems it was an aesthetic problem rather than a conceptual one.
Alex Waterman: The issue of voices is that of "What can exist at the limits of language? What can exist at the limits of logos? What can exist at the limits of our conception of logic?" – and singing is of course that supplement, the thing which is outside of sense. It is the thing that sometimes makes absolutely no sense. It makes a mess of words. We don't even understand what the words are anymore. That is why the voice is so powerful. The voice is also the thing that gives body to the text. In a court of law, for example, a voice is the thing that you need in order to make something legal, to give it power. It can't just exist on the page; it has to be spoken. You can't just write down on a piece of paper "You are guilty". It has to be said out loud. And once it is said out loud, you are sent to jail.
THE LIMITS OF LANGUAGE
Jennifer Walshe: I speak English, but I am Irish and we speak a different-sounding English. I was also taught some German, so there were always a lot of languages swimming around in my head. When I learned of the International Phonetic Alphabet I thought it was incredibly useful. The vowels that I say in English are very difficult for a German person to say. I love the idea that there is no one language that uses all of the vowels, and that there is an overlap between different languages. With the International Phonetic Alphabet you can get a little more precision.
Tomomi Adachi: I feel compelled to write my works in an English frame. Most of my recent works are written alphabetically, with English notes, even when I write them for Japanese performers. Otherwise the work will reach only a very limited number of people. I am also irritated by the fact that I cannot include crafty texts in my works. For example, one of my works, "Anata 5", is written in Japanese; non-Japanese listeners understand its rhythmic and accumulative structure, but only Japanese speakers understand its intricate text. I hesitate to present the work to non-Japanese speakers. And I don't have the ability to write such a tricky text in English.
On the other hand, a multilingual working process is exciting. My recent text-sound compositions / sound poems are built from Japanese phonemes transformed into alphabetical form. It is a disassembling of Japanese by means of a phonocentric idea. I guess this process could not happen between English and Greek, for example. My works shirk nationalism on this point, because they are based on a somehow polluted language.
Alessandro Bosetti: What do you mean by "phonocentric idea", and what do you mean by "polluted language"?
Tomomi Adachi: Alphabetical languages are phonocentric: they are based on speech (I am referring to Jacques Derrida). On the other hand, Chinese, for example, is based on writing, where one Chinese ideogram represents one meaning by itself. Japanese uses both Chinese characters and Japanese syllabic letters in one sentence. Japanese syllabic letters were originally based on the figures of Chinese characters. There was an interesting discussion about these letters in 18th-century Japan, and that discussion lies at the origin of Japanese nationalism. My recent compositional technique is based on these structural aspects of the Japanese language and its history. Its process implies writing the same text in the alphabet and in Japanese syllabic letters and applying a multi-layered transformation to the text. I call this transformation of the Japanese syllabic system by the alphabetical system "polluted language". I invade the Japanese language with an alphabetical thought. This doesn't happen between English and Greek, because both of them are phonocentric languages.
Alessandro Bosetti: English seems to have established itself as the lingua franca of contemporary art and experimentation. Still, English is not your mother tongue. How do you deal with the progressive disappearance of other languages from the scene and the repertoire? How often do you decide to use your own language? Do you think that standing up for it would automatically imply a nationalistic position?
Arturas Bumšteinas: I don't think that the use of your mother tongue would imply a nationalistic position. It of course depends on the context, but taking into account the current context of the generic use of the English language in the arts, I would say it would be very beautiful, and also brave, to exercise such sincerity and use your mother tongue as the default one. I always welcome such examples. In my own case, I think I still haven't traveled enough around other languages to be able to come back to my native one. The identity is not fully formed yet. But maybe it is never going to be fully formed… It would be logical to try to work not in an abstract, removed isolation where any language is OK, or where a generic lingua franca is OK, but in a context-conscious locality. If I write a piece for my Polish friend I want it to feature the Polish language, unless I want to teach him some Lithuanian.
Alessandro Bosetti: Would it be possible to translate your language-based works into another idiom?
Arturas Bumšteinas: Most of them, yes, but they would need to be reinterpreted and re-recorded from scratch. I still believe this would be the same piece, just a new version of it.
Alex Waterman: I consider myself to be a copyist. And I think that our idea of a copyist nowadays is much more limited than what it used to be. Traditionally, most copyists were singers. If we go back to medieval music, all the way up to the early days of music printing in the 15th century, almost all copyists were singers. So when I talk about a "sung reading" it's really literal, because the copyist, in order to copy, has to sing. They sing in their head: when you look at the manuscript you are copying, you sing it, and as you move to the page you are copying to, you hear the voices in your head and you copy from the voices in your head. That is why, oftentimes, the copy is different. It's not because the copyist is making mistakes, but because he sings it differently, and that's what he puts down on the page.
Alessandro Bosetti: So that's something similar to what you set out to do by working on this book edition of Robert Ashley's scores, rewriting them out of the single parts and notebooks…
Alex Waterman: This whole project is really designed to make it possible to read a single book all together – four people sitting at a table and reading out loud from a single book.
We (Alex and …. quote needed) want to break through the privatization of reading. We believe that reading out loud is a beautiful practice, something we would love to see and hear more of. We are just very interested in how words get off the page.
We are essentially trying to interpret the scores using typography and design as a means of making the rhythm and the intonation of the words accessible to practitioners and general audiences alike.
Alessandro Bosetti: Why do you feel compelled to do it?
Alex Waterman: Why do I do that? Music is how I learn. If I am reading a text by Jacques Rancière, or Godard, for example, I am always approaching it from a musical angle: it has to come alive, it has to come off the page. That's a really musical way of conceiving of the world. I didn't realize it for a long time. I thought about it as two separate things: I thought that I was really interested in philosophy and reading, and that on the other side I played music. And I saw them as separate things for a very long time, until I finally realized that the only way I'd ever understood anything was to listen to it.
And the same is true of the way I read: I can only understand it if I listen. I know a number of other readers who are like that. In my case it's extremely important that I make the words on the page my own.
That's why the copyist's work is such an important thing.
I am reproducing Bob's (Ashley's) work, and that process of reproduction means that it has to go through my voice, it has to go through my body, it has to come through my hands.
Peter Ablinger likes to refer to photorealism as one of his early influences, from when he was still a visual art student in Austria.
Now one of the leading contemporary composers, based in Berlin, he still feels this influence as current. He has never ceased bringing daily occurrences from the "real world" into his music as a means of questioning the ways our perception mechanisms work.
Ablinger has composed many pieces derived directly from spoken language, most significantly the series "Voices and Piano", in which he portrays and translates many iconic historical voices into piano pieces. The cycle is still a work in progress: 35 such portraits have been completed, while the whole program should comprise 80 pieces and circa 4 hours of music. His practice with speech starts with a detailed spectral analysis that leads to a score for piano, played along with – or better, "compared to", in Ablinger's own vocabulary – the original recordings. What may seem a mechanical process invariably ends up in unexpected musical and perceptive magic.
Tomomi Adachi is a vocal performer, composer and electronic-instrument builder from Tokyo. His eclectic field of activities ranges from exhilarating solo performances, where he processes his voice via self-made electronic devices activated by sensors, to compositions for vocal ensemble, most notably the now-defunct "Tomomi Adachi Royal Chorus", which he describes as a "punk chorus". He is also a passionate interpreter of the avant-garde and Fluxus traditions, especially with regard to the lesser-known but important tradition of Japanese sound poetry. A hyperactive and captivating performer, he describes himself as "a set of incoherent subjects, hopefully".
Arturas Bumšteinas is a composer and interdisciplinary artist from Vilnius, Lithuania. Born in 1982, he is the youngest participant in this conversation. His work plays with constant shifts between media and disciplines, often translating images, language and pre-existing music into new creations. He has composed electroacoustic and acoustic music for a number of different ensembles and created works of video and radio art. In "Bumšteinas plays Baldessari sings LeWitt" he enacts a further step of the telephone game initiated by John Baldessari in 1972, with his sung renditions of Sol LeWitt's statements on conceptual art, which Bumšteinas dubs in real time on several instruments.
Jennifer Walshe is one of today's most prolific composers and improvisers. Originally from Ireland, she lived for long periods in New York City before moving to London a year ago to become a composition professor at
Her work is extremely diverse – she has written for a vast array of instruments and ensembles, embracing a myriad of eclectic references – but it somehow centers on the voice and its possibilities. Walshe is a vocalist herself, and oftentimes the main interpreter of her own music. Text, narratives and mischievous humor play a big role in her work, which is that of a creator of impossible universes through music.
The speaking, singing or noise-making voice in her music oscillates between referential and non-referential modes, making use of an extremely broad palette of unconventional sounds.
Alex Waterman has a busy career as a cellist, composer, curator and musicologist, but the reason I am talking to him here is his current project of bringing together the scores of Robert Ashley's operas in a comprehensive publication. And Robert Ashley's music is what we have been talking about in this conversation. So far, scores for those pieces – among the most influential text-sound pieces of our times – have existed only as individual notebooks given to the members of Ashley's ensemble, and were never published as comprehensive scores.
This book would make them available to interpreters and practitioners.