How does the constant presence of music in modern life—on iPods, in shops and elevators, on television—affect the way we listen? With so much of this sound, whether imposed or chosen, only partially present to us, is the act of listening degraded by such passivity? In Ubiquitous Listening, Anahid Kassabian investigates the many sounds that surround us and argues that this ubiquity has led to different kinds of listening. Kassabian calls for a new examination of the music we do not normally hear (and, by implication, of the music we do), one that examines the way it is used as a marketing tool and a mood modulator, and that explores the ways we engage with it.
Ubiquitous Listening: Affect, Attention, and Distributed Subjectivity
Music Not Chosen
There are many kinds of music that belong in this book, given that its topic is all those musics that we listen to as secondary or simultaneous activities, often without choice. These include, of course, film and television music (as discussed in chapters 2 through 4), but also music on phones, music in stores (see chapter 6), music in video games, music for audiobooks, music in parking garages, and so on. Jonathan Sterne's "Sounds Like the Mall of America" (1997) long ago confirmed my suspicion about that music: we hear more of it per capita than any other music.
The 21 October 2000 issue of The Economist had a graph showing annual world production of data, expressed in terabytes. According to researchers at the University of California-Berkeley's School of Information Systems and Management, about 2.5 billion CDs were shipped in 1999. Music CD production far outstrips that of newspapers, periodicals, books, and cinema. And most of this music is being heard often, if not most often, as a secondary activity. If we add cinema, television, and video games as partially musical media, then stunning amounts of music are created and produced in any year, and the vast majority of it is not destined for attentive engagements.
Music to Follow You from Room to Room
In the early days of the new millennium, by most reckonings, the capacity to listen to music anywhere and everywhere was a trend that would continue to increase for some time to come. One mark of that might be Bill Gates's ideas for the "house of the future." All residents would have unique microelectronic beacons that would identify their wearers to the house. Based on your stored profile, then, "Lights would automatically come on when you came home.... Portable touch pads would control everything from the TV sets to the temperature and the lights, which would brighten or dim to fit the occasion or to match the outdoor light.... Speakers would be hidden beneath the wallpaper to allow music to follow you from room to room" (CNN.com 2000). The Cisco Internet Home Briefing Center imagined a similar musical environment: "Music also seems to have no boundaries with access to any collection, available in virtually any room of the house through streaming audio. A Digital Jukebox or Internet Radio eliminates the limitations of local radio, and can output music, sports and news from around the world." (Cisco n.d.).
These ideas are among the most basic and least radical in the field known as ubiquitous computing. First articulated in the late 1980s by Mark Weiser of Xerox PARC (Weiser 1991; Gibbs 2000), ubiquitous computing became a very active field of research. It was concerned with "smart rooms" and "smart clothes," with the seamless integration of information and entertainment computing into everyday environments. This would be akin to the penetration of words, or reading, in everyday life. Texts were first centrally located in, for example, monasteries and libraries; next, books and periodicals were distributed to individual owners; now, words are almost always in our field of vision, on labels, bookshelves, files, and so on. Written language is ubiquitous, seamlessly integrated into our environments.
From the perspective, for example, of the Broadband Residential Laboratory built by Georgia Tech in the late 1990s, these "stereo-piping tricks of 'smart' homes ... [are] just a starting point" (as quoted in Gibbs 2000). Their Aware Home has several audio and video input and output devices in each room, and several outlets and jacks in each wall. The MIT Media Lab, as Sandy Pentland said, went in a different direction. They "moved from a focus on smart rooms to an emphasis on smart clothes" (Pentland 2000: 821), because smart clothes offer possibilities that smart rooms don't, such as mobility and individuality. For example, the Affective Computing Research Group "built a wearable 'DJ' that tries to select music based on a feature of the user's mood" as indicated by skin conductivity data collected by the wearable computer (Picard 2000: 716).
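The wearable "DJ" idea can be made concrete with a toy sketch. This is not the Media Lab's actual system; the thresholds, the track names, and the crude mapping from skin conductance to an arousal level are all invented for illustration:

```python
# Hypothetical sketch of a skin-conductivity-driven music selector.
# All thresholds and track names are invented; a real system would be
# calibrated per wearer and far more sophisticated.

TRACKS = {
    "low": ["ambient drone", "slow strings"],
    "medium": ["mid-tempo pop", "soft funk"],
    "high": ["uptempo dance", "drum and bass"],
}

def arousal_level(skin_conductance_microsiemens: float) -> str:
    """Crudely bucket a galvanic skin response reading into an arousal level."""
    if skin_conductance_microsiemens < 2.0:
        return "low"
    if skin_conductance_microsiemens < 8.0:
        return "medium"
    return "high"

def select_track(skin_conductance_microsiemens: float) -> str:
    """Pick the first track tagged for the wearer's current arousal level."""
    level = arousal_level(skin_conductance_microsiemens)
    return TRACKS[level][0]  # a real system might rotate or randomize

print(select_track(1.2))   # low reading -> a calming track
print(select_track(10.5))  # high reading -> an energetic track
```

Even this toy version exposes the assumption Picard's group was making: that a single physiological feature is a reliable proxy for what a listener wants to hear.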
What Do We Know about Most of the Music We Hear?
Music scholarship across the disciplines is utterly unprepared to think about such practices, even ten years after I first published a version of this chapter. There are few studies of the music that follows us from room to room, variously called programmed music, background music, environmental music, business music, or functional music (Gifford 1995; Bottum 2000). One landmark study is Joseph Lanza's book Elevator Music (first published by St Martin's Press in 1995 and expanded in a second edition from University of Michigan Press in 2003). Elevator Music is first of all a history of music in public space, and secondly a defense of the intramusical features that were part of elevator music in its prime: lush strings, absence of brass and percussion, and a consonant harmonic language. The book is an invaluable resource, and it makes some fascinating arguments; for example, Lanza suggests that elevator music is the quintessential twentieth-century music because it focuses, as do many of the century's technologies, on environmental control.
Sterne's "Sounds Like the Mall of America" takes another tack. Commoditized music, Sterne argues, has become "a form of architecture-a way of organizing space in commercial settings" (1997: 23). Not only does the soundscape of the mall predict and depend on barely audible, anonymous background music of the "Muzak" type, but it also shapes the very space itself. The boundaries between store and hallway are acoustically defined by the different music played in each space: "To get anywhere in the Mall of America, one must pass through music and through changes in musical sound. As it territorializes, music gives the subdivided acoustical space a contour, offering an opportunity for its listeners to experience space in a particular way" (31). For Sterne, the issue is one of reification-music has become a commodity relation that supplants relations between people and that presupposes listener response.
Ola Stockfelt offers yet another perspective on listening and context; in "Adequate Modes of Listening" (1997), he argues that modes of listening develop in relation to particular genres-he calls these "genre-normative modes of listening"-and the style itself develops in relation to its listening situation. He says: "Each style of music ... is shaped in close relation to a few environments. In each genre, a few environments, a few situations of listening, make up the constitutive elements in this genre.... The opera house and the concert hall as environments are as much integral and fundamental parts of the musical genres 'opera' and 'symphony' as are the purely intramusical means of style" (136).
In Stockfelt's argument, modes of listening, listening situation, and musical style coproduce each other. In terms of background music, this helps explain the musical parameters we all know. What Stockfelt calls "dishearkening" has produced a particular set of practices for arranging background music. (The word he uses in Swedish is literally "to hear away from," which shares with "dishearken" a rather higher implication of agency than I think is useful.) There is a focus on moments of pleasant "snapshot listening" rather than development over time, and a focus on comforting timbres (legato strings) over vivid ones (brass).
None of these studies, however, can cope with the world of ubiquitous computing proposed by Xerox PARC and the MIT Media Lab, nor with some of the developments that came from those early projections, and certainly not with what is in the process of becoming. Prevailing scholarly notions of listening, subjectivity, and agency, even in the most innovative works, will not account for the music we wake up to.
I call the music that first began with radio and the Muzak corporation (originally called Wired Radio, Inc.), the kind of music that we listen to as part of our environment, "ubiquitous music." Muzak in particular, by broadcasting music into commercial spaces, established that music could come from "nowhere" and invisibly accompany any kind of activity. Mark Weiser's description of ubiquitous computing is the best description of this phenomenon I've ever seen, even though he was describing something that would not be developed until sixty or more years after Muzak's first broadcast.
Where Did This Music Come From?
I lead a happy life. Every day I wake in the best of possible moods and dance my way around the room as I get dressed. Then, while I prepare a pleasant breakfast in my tiny kitchen, several happy bluebirds land on my windowsill and twitter cheerfully. Outside, a tall man in coat-and-tails tips his hat and bids me good-day. A half-dozen scruffy children chase a hoop down the street, shouting gleefully. One of them cries out, "Mornin' mister!"
Ah yes, life is wonderful when you live in a musical from the fifties. Now, perhaps you're wondering, "How could this possibly be true?" Well, I have the unspeakable good fortune to live directly behind my local supermarket and each morning I wake up to a careful selection of merry tunes which easily penetrate my thin walls to rouse me from my slumber. (Schafer n.d.)
So begins Tokyo resident Own Schafer's eloquent, elegant think piece about Muzak. Sedimented here is a trace of one of functional music's siblings, film music and musical theater. To tell functional music's history, one might begin with the music hall, or even earlier. Another trace could be followed to radio, and from there to music in salons and gazebos. Or from workplace music to work music and chants. Strangely, these remain untold histories of the omnipresence of music in contemporary life in industrialized settings.
Two histories are told-an industrial one and a critical one. The former begins with General George Owen Squier, chief of the U.S. Army Signal Corps and creator of Wired Radio, the company now called Muzak. This history, best represented by Lanza's book and Bill Gifford's FEED feature "They're Playing Our Song," continues through shifts in technologies and markets, to Muzak's "stimulus progression" patents, to the 1988 merger with small foreground music provider Yesco (Gifford 1995), bankruptcy, the rise of competitors such as DMX, and ultimately satellite radio.
The other documented history is a counterhistory, a story of how a music came into being that could be confused with functional music, but is of course nothing like it-ambient music. That history begins with Erik Satie's experiments in the teens and twenties with "musique d'ameublement" (furniture music), soars through Cage's emphases on environmental sound and on process, and leads inevitably to-Brian Eno, from whose mind all contemporary ambient music has sprung. (For versions of this story, see any of the scores of ambient websites, and especially Mark Prendergast's The Ambient Century, 2003.)
This history goes to great pains to distinguish ambient from background music on the grounds of its available modes of listening. As musician/fan Malcolm Humes put it in a 1995 online essay: "Eno ... tried to create music that could be actively or passively listened to. Something that could shift imperceptibly between a background texture to something triggering a sudden zoom into the music to reflect on a repetition, a subtle variation, perhaps a slight shift in color or mood" (Humes 1995). What is important to defenders of the faith is ambient music's availability to both foreground and background listening. But since the mid to late 1980s, background music has become foreground music. In the language of the industry, background music is what we call "elevator music," and foreground music is works by original artists. While background music has all but disappeared, you can now hear everyone from Miriam Makeba to the Moody Blues to Madonna to Moby in some public setting or other and quite possibly all of them at your local Starbucks. (See chapter 6 for a discussion of music in Starbucks, coffeehouses, and retail spaces more generally.)
Foreground music seems to make talking about music in public spaces impossible-and perhaps it should be. Certainly there is a several-decades-long history of debate about the dissolution of public space and the public sphere. As Japanese cultural critic Mihoko Tamaoki has argued in her work on coffeehouses, Starbucks transforms customers not into a public, but into an audience. Moreover, she argues, "Starbucks now constitutes a 'meta-media' operation. It stands at once in the traditional media role, as an outlet for both content and advertising. At the same time, it is actually selling the products therein advertised. And these, in turn, are themselves media products: music for Starbucks listeners" (Tamaoki n.d.: 4). This is Starbucks's genius as a music label; it is a meta-media operation that produces its own market for its own product all at once, in what once might have been public space. But if we focus too closely on the distribution of recordings, we will not fully address the problems foreground music poses to contemporary listening.
How Do We Listen to Foreground Music?
Public and popular discourse about music in business environments has hardly registered the change from background to foreground. By and large, people talk about music in business environments as annoying and bad, with frequent references to Muzak and strings, as if there had been no change whatsoever. The reason, I want to argue, is that they are not discussing music, but rather a mode of listening about which most of us are at best ambivalent, thanks in no small part to the disciplining of music in the Western academy.
In the wake of Michel Foucault, critiques of music's disciplinary practices have been well argued. We have discussed canon formations, architecture, and training; we have argued about analysis and we have talked about transcription. We have discussed, at length, the expert listening held in such high regard by Theodor Adorno and so carefully cultivated by Western art music institutions such as the academy and symphony orchestras. It is perhaps primary among the forces that produce and reproduce the canonical European and North American repertoire.
But in all these discussions, we have not taken our own collective insights quite seriously enough. Logically, if expert, concentrated, structural listening produces the canon, don't other modes of listening produce and reproduce other repertoires? (While a bit of work has been done in this tradition on the rock canon, I think we have a long way to go before the critique of canonicity is widely spread throughout art music scholarship, much less popular music studies and ethnomusicology.)
This is, I believe, Stockfelt's most important point. Through a study of changes in arrangements of Mozart's Fortieth Symphony in G minor, he argues that different settings, different sets of musical features, and different modes of listening are coproductive. Text, context, and reception create each other in mutual, simultaneous, and historically grounded processes. But as foreground music programming has increased, this combination or mutual dependence seems less and less consistent or predictable. When anything can be foreground music, does it still make sense to talk about a specific background music mode of listening?
Do We Hear or Listen?
One possibility is to think of this most disdained activity as hearing rather than listening. This idea appears repeatedly, including in the sales literature of programmed music companies. But the distinction poses some interesting problems. In Merriam-Webster's Dictionary of English Usage, each term is defined by the other:
hear (transitive verb)
1 : to perceive or apprehend by the ear
2 : to gain knowledge of by hearing
3 a : to listen to with attention : heed  b : attend ("hear mass")
4 a : to give a legal hearing to  b : to take testimony from ("hear witnesses")

hear (intransitive verb)
1 : to have the capacity of apprehending sound
2 a : to gain information : learn  b : to receive communication ("haven't heard from her lately")
3 : to entertain the idea (used in the negative: "wouldn't hear of it")
4 : often used in the expression "Hear! Hear!" to express approval (as during a speech)

listen (transitive verb)
archaic : to give ear to : hear

listen (intransitive verb)
1 : to pay attention to sound ("listen to music")
2 : to hear something with thoughtful attention : give consideration ("listen to a plea")
3 : to be alert to catch an expected sound ("listen for his step")
One obvious problem with the distinction is the circularity of the definitions, but that is, as we know, in the nature of language. That notwithstanding, we could probably agree that hearing is somehow more passive than listening, and that consuming background music is passive. Certainly everyone-from Adorno to Muzak-seems to think so.
The connotation of passivity in the term hearing is precisely why I prefer listening. To the extent that hearing is understood as passive, it implies the conversion of sound waves into electrochemical stimuli (i.e., transmission along nerves to the brain) by a discretely embodied unified subject (i.e., a human individual). Yet our engagements with programmed music surely extend beyond mere sense perception, and as I will suggest below, mark us as participants in a new form of subjectivity.
Is there, then, a programmed music mode of listening? Here I want to offer an anecdote as a beginning of an answer. When I was teaching at Fordham College at Lincoln Center, I asked the students in my popular music class to write an essay on a half-hour of radio broadcasting. Ryan Kelly, a member of the New York City Ballet corps de ballet at the time and now an artist/choreographer, began his essay by identifying himself as a non-radio listener. He described sitting down to listen to the tape to begin his essay, and ten minutes later finding himself at the kitchen sink washing dishes. This is, of course, only one story, but an eminently recognizable one. We are so used to music as an accompaniment to other activities that we forget we are listening.
A friend described to me a proto-Ubicomp kind of system he had set up-he had speakers under his pillow, so that he could sleep listening to music without disturbing his wife and without the intrusion of headphones. (He also listens to music constantly at work.) He was profoundly articulate about this matter-he thinks of music as an "anchor," keeping his mind from spinning off in various directions. Last year, I received a pillow with a jack and little speakers in it as a gift-what my friend had to rig ten years ago is now a product. Parents of children with attention deficit disorder are often advised to put on music while their kids work, for just such purposes.
From its inception, Gifford says, Muzak was about focusing attention in this sense. Workers' minds "were prone to wandering. Muzak sopped up these non-productive thoughts and kept workers focussed on the drudgery at hand" (Gifford 1995). (See also Christina Baade's Victory through Harmony: The BBC and Popular Music in World War II, 2011, for documentation of discussions about how music would help in alleviating boredom and increasing productivity.) One of my daughter's babysitters, Anett Hoffmann-and many of my students at Fordham-reported leaving the radio or MTV on in different rooms, so that they were never without music. They said it fills the house, makes the emptiness less frightening. Muzak's own literature says, "Muzak fills the deadly silence." Of course, now the silence is far more frequently filled with headphone listening to MP3 players, but the logic is the same.
These have always been background music's functions. We learned them from Muzak, and now they are a part of our everyday lives. As Muzak programming manager Steve Ward says: "It's supposed to fill the air with sort of a warm familiarity, I suppose.... If you were pushing a cart through a grocery store and all you hear is wheels creaking and crying babies-it would be like a mausoleum" (as quoted in Gifford 1995). All these listeners and music programmers and writers share a sense of listening as a constant, grounding, secondary activity, regardless of the specific musical features.
A Ubiquitous Mode of Listening?
Those of us living in industrialized settings (at least) have developed, from the omnipresence of music in our daily lives, a mode of listening dissociated from specific generic characteristics of the music. In this mode, we listen "alongside," or simultaneous with, other activities. It is one vigorous example of the nonlinearity of contemporary life. This listening is a new and noteworthy phenomenon, one that has the potential to demand a radical rethinking of our various fields.
I want to propose that we call this mode of listening "ubiquitous listening" for two reasons. First, it is the ubiquity of listening that has taught us this mode. It is precisely because music is everywhere that Ryan forgot he was doing an assignment and got up to wash the dishes.
Second, it relies on a kind of "sourcelessness." Whereas we are accustomed to thinking of most musics, and most cultural products, in terms of authorship and location, this music comes from the plants and the walls and, potentially, our clothes. It comes from everywhere and nowhere. Its projection looks to erase its production as much as possible, posing instead as a quality of the environment.
For these reasons, the term ubiquitous listening best describes the phenomenon I am discussing. As has been widely remarked, the development of recording technologies in the twentieth century disarticulated performance space and listening space. You can listen to opera in your bathtub and arena rock while riding the bus. And it is precisely this disarticulation that has made ubiquitous listening possible. Like ubiquitous computing, ubiquitous listening blends into the environment, taking place without calling conscious attention to itself as an activity in itself. It is, rather, ubiquitous and conditional, following us from room to room, building to building, and activity to activity.
However, the idea of ubiquitous listening as perhaps the dominant mode of listening in contemporary life raises another problem: if there is a ubiquitous mode of listening, does it produce and accede to a set of genre norms? In the articles on which this chapter is based, I argued for something like a ubiquitous genre based on production practices, mode of listening, and some other features, but I now think that was not quite right. Rather, I think genre has receded significantly in importance, as Melissa Avdeeff (2011) has argued in her ethnographic study of young women listeners. Since the earliest days of electronic dance music, genres have proliferated according to all sorts of parameters, most obviously bpm (beats per minute). In addition, the widespread use of streaming sites like Last.fm, Spotify, and even YouTube has meant that there is as much reason to differentiate your music (as both a producer and a consumer) as there is to ally it with others. Genres like steampunk (Abney Park, Dr Steel, The Cog is Dead) and dark cabaret (Birdeatsbaby, the Dresden Dolls) are enabled by the Internet's ability to disperse and collect information and interest; twenty years ago, these would almost certainly have been individual quirky bands with small, local followings. In other words, before the Internet, most bands would try, because of the contours of production and distribution in the music and media industries, to create music that could fit existing genre categories.
This shift, following the widespread penetration of the Internet and digital distribution into everyday musical life, entails other significant changes in this argument as well. When I was first writing on this topic, I looked into how researchers were imagining future uses of music. I focused on ubiquitous computing, from which I took the term ubiquitous to describe the omnipresence of music in contemporary everyday life in developed and especially urban nations and contexts. Even basic searches for, say, the house of the future led to a great many proposals for how listening to music would be changed by ideas coming from ubiquitous computing researchers.
What was first called ubiquitous computing seems to have settled on new names, such as pervasive computing and ambient intelligence, and ideas about wearable computing that were coming into focus in places like the MIT Media Lab have established a field of their own. In pervasive computing, ideas revolve around ways for your environment to sense and identify you-such as RFID (radio frequency ID) tags-in order for it to know what kind of music (and lighting and temperature and visual arts) you prefer. The projections here include the music following you around as you walk through your home, and the home's ability to coordinate other features of the environment to your music, which is in turn coordinated to your preferences.
In wearable computing, sensors on the inside of clothing lie in contact with your skin, informing a small onboard computer of your skin temperature, heart rate, and so on, so that it can respond to your emotional needs. Wearable computing has been used to develop ski jackets that are aware of your physical state and can call for emergency assistance in case of an accident, clothing that can detect the onset of epileptic seizures, ways to detect when a student is losing interest so that an automated learning system can change approaches, and other very useful possibilities.
Each of these has significant appeal, but also substantial flaws. I subscribed to the journal Pervasive Computing for about three years, and never once did I see anyone discuss how to handle two competing sets of instructions from different RFID tags. I have yet to see an article on wearable computing that accounts for more complex needs and desires on the part of the wearer-the MIT Media Lab's Wearable DJ, for instance, presumed it knew what to program for a wearer given her or his physical state. (Perhaps the utter disappearance of this project is a sign that the researchers realized the problems with this approach?) An article from 2010 points out some of these issues in its abstract: "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human's emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user's needs" (Lee and Kwon 2010: 1). So the problems that have attended both pervasive/ubiquitous computing and wearable computing since their outset still persist. (See Cook and Song 2009 for a survey of the recent state of play at the intersection of these fields of research.) Intriguingly, wearable computing is now mainly being researched, as the examples I listed earlier suggest, for medical and educational purposes, and a certain level of pervasive computing is on the market. But both wearable and pervasive computing are also appearing in art projects of many kinds. For example, Arduino, an open-source platform created for artists and designers, is enabling projects like little fabric speakers and many other new kinds of art forms.
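The competing-instructions problem can be stated concretely. A minimal sketch, with invented tags, profiles, and a deliberately naive policy (nothing here reflects any real pervasive computing system): when the room senses two tags, one obvious move is to play only what both stored profiles accept, falling silent when they share nothing.

```python
# Hypothetical sketch of the competing-RFID-tags problem: two residents,
# two stored music profiles, one room. Profiles and genre labels are
# invented for illustration; a real system would need a far richer policy.

PROFILES = {
    "tag-alice": {"opera", "ambient", "jazz"},
    "tag-bob": {"jazz", "metal"},
}

def genres_for_room(tags_present):
    """Intersect the preferences of every tagged occupant the room senses."""
    profiles = [PROFILES[t] for t in tags_present if t in PROFILES]
    if not profiles:
        return set()  # empty room (or unknown tags): no basis for a choice
    # An empty intersection means no music satisfies everyone present.
    return set.intersection(*profiles)

print(genres_for_room(["tag-alice"]))             # one occupant: their own set
print(genres_for_room(["tag-alice", "tag-bob"]))  # two occupants: {'jazz'}
```

The sketch shows how quickly the difficulty bites: the intersection policy produces silence for any two occupants with disjoint tastes, and every alternative (union, priority ranking, taking turns) embeds a social judgment the literature largely left undiscussed.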
Ambient intelligence has found its métier in the home of the future. When I first began this research, the home of the future was all about Internet wiring that would make your whole record collection available at the push of a button. Now that that's the stuff of everyday life, there's actually very little about music in the homes of the future, such as the currently available model featured on the UK's "Gadget Show," which can remember your preferences for lighting, awaken you, draw your bath, and close your curtains in response to a variety of voice or RFID cues. Like this one, most "homes of the future" hardly show any new musical ideas at all. One recent home of the future, Living Tomorrow in Brussels, barely mentions that music is available around the house. Another, "South Korea's Home of the Future," has walls that coordinate images on the digital wallpaper to the mood of the music you've chosen-hardly a very advanced imagination of the role of music (or of digital wallpaper, for that matter). In yet another, a German-designed cylindrical chair called the Sonic Chair-it looks a bit like a tin can tipped on its side-surrounds you with three speakers (left, right, and above), while the soft back cushion you lean against serves as the bass speaker. None of these is as intriguing and full of possibilities as a hard drive containing all of your thousands of music tracks, accessible from anywhere, seemed in the late 1990s, when I was first researching these issues.
But there is a new form of technology that is bridging the gap between pervasive and wearable computing in a very intuitive, widespread way, and it is changing the nature of our relationships to music just as surely as its predecessor, the MP3 player, did. Smart phones-and the apps you can load onto them-offer an intriguing scale of entertainment and programming. They are limited in some simple ways, and most apps don't expect you to remain engaged with them for extended periods. Nonetheless, they are creating a whole new world of musical possibilities for users.
In awarding Steve Jobs the 2010 Innovation Award in the area of Consumer Products and Services, the Economist offers some perspective on the economic performance of the iPhone:
As of April, 2010, the App Store offered more than 185,000 applications for the iPhone written by more than 28,000 developers. More than 4 billion apps, some free, some for sale, had been downloaded as of April, 2010. Apple provides 70% of revenues from an application sold in the store to the software developer and while that may have contributed tens of millions of dollars to its sales, it's not a huge revenue generator given the size of the company. More importantly, the wide diversity of the applications has increased the functionality of the iPhone, contributing to its enviable consumer popularity. Apple had sold more than 51 million iPhones as of March 31, 2010. Market watcher IDC said Apple's iPhone was the third-best selling smartphone in 2009 with a 14.4% share of the market as sales for the year soared 82%.
These statistics are already quite out of date, and their implications are staggering. For just one small example, on 9 May 2012, Rovio Mobile posted a video to YouTube announcing that 1 billion copies of the Angry Birds games had been downloaded. Sales of apps for Android smart phones passed 1 billion in December 2011.
There are some interesting controversies around the iPhone that will bear consideration as this market develops, such as differences among smart phones and their policies. A PhD student told me when we were discussing this project that when she was shopping for a smart phone, people told her there is more freedom in writing apps for Android, since they are not as carefully screened as iPhone apps are before being included in the iTunes store. An app developer agreed; according to him, Apple's intensive testing means that all products available in the iTunes AppStore are quite strong and without glitches, but that it also discourages certain more experimental approaches (Warren Bakay, personal conversation, 25 February 2011).
Nonetheless, there is a seemingly endless supply of music apps for the iPhone. There is no way to pin down a meaningful figure, given the rate of release of new apps, but as of 23 January 2012, the Apple iPhone app page says there are over five hundred thousand apps available. While these apps are too new to say anything very definitive about, so far most music apps seem to fall into one of nine genres:
1. Listening apps: There are a number of these, including some PC standbys such as Spotify and Last.fm, as well as things like Tuner Internet Radio, which gives you access to thousands of radio stations worldwide.
2. Music management apps: My favorite of these is Moodagent; it was to research this app that I originally got my iPhone. It has five sliders (happy, tender, sensual, angry, and tempo) that you can set to varying levels, and according to those choices the app will create a playlist from the music on your iPhone. It was voted third-best music app this year by the readers of Laptop magazine.
3. Apps to identify music: Shazam and SoundHound are popular versions of this type of app. The idea is that you point your iPhone at the source of the music and the app identifies the song.
4. Mixing and DJ apps: These range from simple DJ apps that let you smooth transitions from one tune to the next to apps that let you scratch and remix. 3DJ offers a range of remix apps by genre, such as Dance Remix; others in the line cover ska, reggae, and soul. There are also apps from individual DJs, though I haven't yet found one of these with good reviews.
5. Music games: There are many examples of games that depend on music for part or all of the game play. One such game is Balls, which offers the user a startling range of temperaments and a few instruments, according to which the balls on the screen make music. They make sounds by hitting each other or bouncing off the wall, and the user can affect the way the balls move either by tipping the phone so that the balls react to "gravity" or by pushing them with a finger.
6. Instruments: Music Ally, a music blog, and many other sources have rated Ocarina as one of the top five or top ten music apps, and a band called the Mentalists used it as one of their i-instruments to make a cover of MGMT's "Kids." As of March 2009, seven hundred thousand people had bought Ocarina. (You play Ocarina by blowing into the microphone and pressing on images of circles on the screen of the phone. But it also has a GPS component, in which you can see a rotating map of the globe that lights up everywhere that someone is playing Ocarina, and you can listen in to what other people are playing.)
7. Music education apps: Karajan is an interesting example of this category. It was selected by Music Think Tank, a website for the music industry, in May 2010 as one of the four best apps that can make you a better musician. It allows the user to practice recognizing intervals, beats per minute, key signatures, and scales and modes, and it offers many basic ear-training exercises. There is a wide range of such apps available, including ones to help instrumental students practice sight-reading, rhythmic skills, and so on.
8. Music tools: These include tuners for specific instruments, metronomes of various kinds, dictionaries, and more. I have both Guitar Tuner and Drum Beats (a little drum machine app) on my phone.
9. Generative music: Bloom, Air, and Trope are the best-known generative music apps, made by Brian Eno in various collaborations, though there are many others, from simple free ones like RjDj to more professionally oriented ones like Mixtikl. The term itself is Eno's, according to a range of sources, including Wikipedia, although it long precedes smart phone apps. Each of the apps from Eno's group has multiple moods and several adjustable parameters, including delay. The sounds are coordinated with visuals, and they can be used either actively, with the user tapping and dragging on the screen to create both visual and audio events, or passively, where the app generates audiovisual content in a particular ambient-style mood.
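The generative principle these apps share can be illustrated with a minimal sketch. This is an illustration of the general technique only, not Eno's actual algorithm; the function names (`generate`, `passive_seed`) and parameters (`delay`, `repeats`, `decay`) are my own labels for the kinds of adjustable settings the apps expose. A few seed events, whether tapped by the user or invented by the app, are echoed back at a delay with diminishing intensity, so that minimal input blooms into an evolving ambient texture:

```python
import random

# A pentatonic pitch set (MIDI note numbers): ambient apps often restrict
# choices to scales in which nearly any combination sounds consonant.
PITCH_SET = [60, 62, 65, 67, 69, 72, 74]

def generate(events, delay=8, repeats=3, decay=0.5, length=32):
    """Expand a few seed events into a longer ambient sequence.

    events:  list of (time, pitch) pairs, e.g. from user taps
    delay:   time steps between successive echoes of a note
    repeats: how many echoes each note receives
    decay:   velocity multiplier applied to each successive echo
    """
    timeline = []
    for t, pitch in events:
        velocity = 1.0
        for r in range(repeats + 1):
            echo_time = t + r * delay
            if echo_time < length:
                timeline.append((echo_time, pitch, round(velocity, 3)))
            velocity *= decay
    return sorted(timeline)

def passive_seed(n=3, length=32):
    """Passive mode: the app invents its own seed events."""
    return sorted((random.randrange(length // 2), random.choice(PITCH_SET))
                  for _ in range(n))

# Three "taps" from the user become a dozen echoing (time, pitch, velocity)
# events; in passive mode, passive_seed() would supply the taps instead.
for event in generate([(0, 60), (3, 67), (5, 72)]):
    print(event)
```

The active/passive distinction the apps offer maps directly onto whether the seed events come from the user's fingers or from a function like `passive_seed`; either way, the same echoing logic does the rest.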
The Eno generative music apps have appeared on many "best of" lists, especially Bloom. But some of the user comments may offer a bit of insight into how they are understood and used. These are taken from the iTunes Preview page for Bloom, which was the earliest of the three.
by Justin Norman
This is quite a unique app for simple composition and it offers cheap musical entertainment for a short amount of time. The idea behind it is great: easily compose moody soundscapes on the go, even if you're not a musician. The game's design basically makes it impossible to fail. For the money, Bloom delivers a good number of different variables to work within, and it is fairly enjoyable. However, I quickly lost interest in it and moved on to other, non-musical applications, saving music creation for my desktop computer. Others might find it more interesting than myself, but I found the musical confines the app provides to be a bit too limiting to mess around with for long.
This is a pleasant little application that becomes quietly addictive. It's beautiful, and I can't praise it's [sic] calming ability enough. My kids enjoy it as well! I wasn't sure it would be, but it turned out to be well worth the few dollars!
I was really [sic] forward to using this app. But all the moods are WAY TOO SIMILAR. The sounds they have and how they repeat the user's input seem the same to me. There is NOT ENOUGH VARIETY. I wish I had NOT PURCHASED IT.
It seems quite clear that Bloom is not meant to hold a user's attention for long periods of time, nor is it designed for musicians to use (although one comment on the announcement of Trope in Synthtopia, a web magazine of synth and electronic news, says, "Without these apps, my intros would be so much lamer ... the price is perfect as well if this was a rating system from 1 to 10, I give it an 11."). Because there are no other listeners, and because the music and visuals that one produces are completely transient (there is, for instance, no "save" function), these generative music apps are toys. Or relaxation tapes: they can be used in "listen" or "create" modes, and listening is quite relaxing. Or sleep inducers: like many audio iPhone apps, they come with a sleep timer. What they do not sustain, and this is no surprise, given their creators, is long, focused listening.
A New Environment?
In general, music apps vary in relation to the level of activity they require, the duration of interest they are likely to command, the degree of attention they may occupy, and so on. They are certainly not in any sense a single type of activity. But many of them can be thought of as ways of managing one's audio environment. It then becomes clear, I think, that it may well be productive to think of a group of iPhone apps as a cross between wearable and pervasive computing: on the one hand, they are small, always with you, and responsive to your mood, like wearable computing; on the other hand, they both interact with and create your environment, like pervasive computing.
To bring my point home, I want to remind you of the Gadget Show's eHome, which remembers your preferences and responds to your voice. Not only can one control one's audio environment in the here and now, but one can control one's home environment from hundreds or even thousands of kilometers away, with the touch of an iPhone. In practice, this means not only that you can rest assured that you didn't forget to turn the lights off when you left home, but also that you can come home to a hot bath waiting for you, with the music you prefer at that moment. Or, for a lot less money, and on the level of the micro-environments I'm interested in, Aqueous, an audiovisual relaxation app, provides you with choices of water sounds, nature sounds, and musical tracks simultaneously, so that you can run your fingers in virtual water and create a sound environment with waves, birds, crickets, and Balinese trance music with just a few taps of your finger on the screen.
This is a new way of engaging (or, to a significant extent, creating) our sonic environments. Rather than thinking about the acoustic qualities of building materials and activities in the neighborhood, or about the quality, complexity, and placement of speakers in a surround sound system, apps can offer small-scale creations of small-scale environments: environments that are the size of an iPhone screen, for instance, and can be heard by only one person. The place of these small bits of programming in environmental experience is creating a whole new category of environments.
My point is simply this. iPhone apps are a new "size" of interaction with environment, a new place of processing between wearable and pervasive computing, a new set of audio-visual relations, and a new form of soundscape management. They offer users worlds of possibilities, figuratively and literally, for somewhere between nothing and twelve euros, the price of the most expensive music app I've seen so far. They enable users to become composers not only in the sense that interests Jacques Attali, DJ Spooky, and Brian Eno alike, but also in the way that the programmers of Balls are imagining, where the app helps them create aleatory compositions in twenty different temperaments, with eight timbres and from one to seven vectors to control. These apps are enabling users to compose (in many different senses of this word) their sound and visual environments in small units for highly variable lengths of time. They require methods of analysis that can understand these very small-scale activities that do not have a major place in people's lives, that take place in the margins between times and places, and that do not obviously suggest a mode of study. Nonetheless, they are begging us to try to understand the changing nature of the many new "environments" they put on offer.
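The kind of composing the Balls programmers imagine can be sketched in a few lines. This is an illustrative reconstruction, not the app's actual code, and the function names are mine: an n-tone equal temperament is simply an octave divided into n equal frequency ratios, and an aleatory piece is a chance-driven walk through those pitches.

```python
import random

def equal_temperament(divisions, base_freq=220.0):
    """Frequencies (Hz) of one octave divided into `divisions` equal steps.

    Twelve divisions gives the familiar Western chromatic scale; other
    values give the less familiar temperaments an app like Balls offers.
    """
    return [base_freq * 2 ** (step / divisions) for step in range(divisions + 1)]

def aleatory_line(divisions, notes=16, seed=None):
    """A chance-driven melody: each note is drawn at random from the scale."""
    rng = random.Random(seed)
    scale = equal_temperament(divisions)
    return [rng.choice(scale) for _ in range(notes)]

# An eight-note line in 19-tone equal temperament; re-running with a
# different seed (or none) yields a different piece each time.
melody = aleatory_line(19, notes=8, seed=42)
print([round(f, 1) for f in melody])
```

The user's "composing" here consists of choosing the parameters (which temperament, how many notes) rather than the notes themselves; chance, or in the app's case the physics of bouncing balls, supplies the rest.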
What Is Ubiquity?
Mark Weiser's ideas about ubiquitous computing were unbelievably prescient. But so, too, were George Owen Squier's ideas about listening. The ubiquity of listening, the omnipresence of music in daily life that grew throughout the twentieth century and continues to grow in the twenty-first, would shock a listener from before the invention of recorded music. Of course there are enormous variations among listeners according to all sorts of parameters, including individual proclivities, but it would be a very challenging task to try to get through a week without coming into contact with music, and for most of us in the industrialized world, a musicless day would be highly unlikely indeed.
As I have been arguing, listening as a simultaneous or secondary activity, that is, ubiquitous listening, has profound implications in many directions: it modulates our attentional capacities, it tunes our affective relationships to categories of identity, it conditions our participation in fields of subjectivity. If the insights of the past thirty years of theories of culture are to be taken seriously, insofar as even the most widely divergent have insisted that engagements with culture are an integral part of both the social and the individual, however those two categories are understood, then surely ubiquitous listening, which is involved in many kinds of cultural activities (from restaurants to movie theaters to clothes shopping to domestic life), should be taken very seriously. But more than that: if, as I am arguing, music is articulated to affect, can help summon or suppress attention, and is a medium through which we know ourselves, then
the quantity of music we hear, as well as the frequency ranges, volume, timbral qualities, meaning processes, and affects demand much closer attention than they have been given so far;
the music to which we listen so frequently should be taken very seriously indeed, not least in relation to how and why we encounter the musics we do and what we might be doing with them.
What I am advocating is a whole new field of music studies, in which we stop thinking about compositional process, or genre, or industrial factors as the central matters. Those approaches have yielded some very important insights, and no doubt they will continue to do so. But as a transdisciplinary scholarly community-across music history, music analysis, popular music studies, ethnomusicology, communications, media studies, sociology, anthropology, cultural studies-we have had far too little to say about most of the relationships between most musical events and most people in the industrialized world. Whether it is through living behind a supermarket in Tokyo, like Owen Schafer, quoted earlier in this chapter, or the audiences of television musical episodes (see chapter 4), or my own listening to Armenian jazz albums (see chapter 5), these complicated, shifting, amorphous encounters require much more thought than they have had to date, both because they influence other forms of listening (recall Ryan Kelly earlier in this chapter) and because they are profound engagements with the field of culture/distributed subjectivity/labor that is everyday life.