Here’s a little pamphlet I call “Harmonic Magic for Songwriters.” You can download it as a pdf and read it in the bath or on a vision quest (both at once?), or just eyeball it right here on the blog. Careful with the hair dryer near the tub.
The Alibi—Albuquerque’s weekly paper that does great music writing, not to mention political writing and all else—published a series of interviews with me this month. Robin Babb did the writing and was great to work with. She read my works, asked great questions, listened to and read through all my answers, and then represented me in a way that is much smarter and more polished than I actually presented myself to her. Here’s part I, called An Interview with David Bashwiner—Pt. I: The Musician (published Dec. 10, 2015). The second part is a bit longer and can be found here at David Bashwiner: A Theorist—Part II of the Interview (published Dec. 17, 2015).
This is a pdf of a chapter I contributed to Barbara M. Stafford’s A Field Guide to a New Meta-Field: Bridging the Humanities-Neurosciences Divide (University of Chicago Press, 2011). The complete book can be found here.
The following questions were asked of me by graduate students at the University of Iowa enrolled in an advanced class in music cognition taught by Prof. Jennifer Iverson. The students had read my chapter, “Lifting the Foot: The Neural Underpinnings of the ‘Pathological’ Response to Music,” found in Barbara M. Stafford’s A Field Guide to a New Meta-Field: Bridging the Humanities-Neurosciences Divide (University of Chicago Press, 2011). These questions largely had to do with that chapter, although they are all applicable to the phenomenon of musical emotion generally (and/or neuroaesthetics). Any citations included in my answers can be found in the reference list at the end of this blog entry. And if you’d like to add a question, please do. I’ll try to answer promptly and thoughtfully.
This question may sound anti-progressive, but I don’t mean it that way. I think understanding how our bodies and minds react to music and sound is very important and appreciate the knowledge we’ve gained from research like yours. That said, here is my (series of) question(s):
As we move toward a more perfect understanding of our physical and emotional relationship to music, are we eliminating the need for creativity? If we know how to create a piece of music that perfectly evokes a certain emotion in a certain situation (or perfectly compels us to move our bodies in a certain manner), why would we ever create a second one? Is this field of research moving toward a world of “perfect” music?
Brilliant question. Subcortical regions of the brain that can be thought of as the engines of emotion are in no way separate from the cortical regions of the brain that are more closely associated with information processing. Consider what a dream is (or likely is, to at least some extent): a stimulation of subcortical affective activity, which leads to a narrative being generated and then navigated to make sense of the affect. Emotions require thoughts, even if they have to produce them themselves. Thoughts, therefore, will never be absent from emotions.
If you all of a sudden became unintelligent, or even became amnesic, perhaps it would be possible for the same piece of music to press an emotional “button” in you over and over again. (It would be quite fascinating to test this on those with anterograde amnesia—do they tire of pieces of music less rapidly?) But my guess is that, in most cases, we involve ourselves so much cognitively in the works of art we enjoy, and in the emotions we get from them, that we are constantly changing relative to them. We don’t really hear the same work in the same way twice, and eventually a given work will bore us, as will a given technique, style, set of timbres, even affective agenda. (This is the “Heraclitean” argument for why we like to rehear music. See Meyer 1961 and Jackendoff 1991—also my dissertation, chapter 4, which is on expectation, rehearing, and modularity.)
Neural pathways, rhythm, and moral frigidity
Two questions regarding Dr. Bashwiner’s “Lifting the Foot”:
1. Your discussion of the neural pathways that musical stimuli travel on their way to their final destination (the “musculature” in fig. 7.6) provides two pathways – the “high road” – the slower method that presumably allows for more “pure contemplation”, and the “low road”, which likely is the pathway of choice for the “startle reflex”. Throughout the article the primary discussion of the type of musical stimuli involved concerns itself with consonance, dissonance, and the mathematical relationships between tones and overtones – that is to say, harmonic considerations. As the subject of the article concerns itself with the dance-ability of the waltz (presumably used as a metaphor for any music that stimulates one to “lift the foot”), I find it interesting that no mention is made of the rhythmic component of music. Are rhythm and harmony interpreted differently in the brain? Do they travel the same or different neural pathways? Is the distinction between the two irrelevant to the brain? There is considerable music that is primarily rhythmic, with little concomitant harmonic content; how is this music processed by the brain? Do Hanslick and Helmholtz address rhythm in their writings, and if so, in what light?
This is an excellent criticism of my article. You’re correct, and I didn’t even realize it until now. There is no way that the pitch content is all that makes for danceability, and yet that’s the only thing I talk about in any depth from the physiological/neurological perspective. Thanks for showing me the way forward.
As I mention in the article, Hanslick’s position is that, “when the South Sea Islander bangs rhythmically with bits of metal and wooden staves and along with it sets up an unintelligible wailing, this is the natural kind of ‘music,’ yet it just is not music” (69-70). In other words, Hanslick recognizes that rhythm is a significant part of musical experience, but he denies it has anything to do with beauty or aesthetics. I don’t fully remember Helmholtz’s position, but I think he also ignores rhythm.
In teaching, pitch and rhythm are kept pretty separate. This parallels the situation in the mind: we have no problem talking about just the pitch content of a phrase, or just its rhythm. But in the brain, they may be less separate. Earlier neurological studies sought to identify modules in the brain/mind that were devoted specifically to rhythm vs. meter (e.g., Liégeois-Chauvel et al., 1998; see Peretz and Zatorre, 2005, p. 94 for a review), but the results seem to have been pretty inconsistent. Some found meter to be on the left and rhythm on the right; others found the reverse. One thing that is agreed upon is that when a temporal pattern has discernible metrical patterning (e.g., 2:1 or 3:1 compared to 2¼:1), the brain shifts the way it processes the information: the putamen, a structure within the basal ganglia, gets more involved, and as a result others can get less involved (Grahn & Brett, 2007; Brown et al., 2006; Geiser et al., 2012). The cerebellum is also frequently implicated in rhythm studies, especially for fine-tuning of temporal behaviors. Thus, in entrainment to a beat, both the cerebellum and the putamen would be involved; whereas if there is no predictable pulse, only the cerebellum would be involved, not the putamen. It’s important to recognize that processing temporal patterns against the background of a pulse allows for more efficient processing (Geiser et al., 2012). That may be why we like metrical stuff more than totally ametrical, unpredictable stuff (musically speaking, that is); and it may also be why we like our rhythms to be not totally perfectly executed, as if by computer, but a bit more flexible, breathable.
For the only true account I know of in which tone and rhythm are treated together, see Musacchia et al. (2013). On groove neuroimaged, see Janata et al. (2011).
2. On a lighter note, in the final sentence of the article, you mention that “as once we shed our Victorian moral frigidity, so too may we someday see the aesthetical beauty in even the most pathological of musical pleasures.” Are you suggesting that those of us still deluded by the antiquated promises of purity and chastity (be they emotional, intellectual, or otherwise) are by definition incapable of appreciating the sheer decadence of the Rolling Stones and the Beatles?
Not at all. The aim here is more at music theorists. We still tend to have a preference for analyzing the high-brow classics over the low-brow, and we tend to look at them in ways that are “purely formal,” i.e., aesthetical rather than pathological. I think we don’t fully understand even the great high-brow classics if we look only at aesthetical responses and not pathological responses. I believe they’re intertwined.
So the critique is not of appreciation, but of analysis. I think we all appreciate the Beatles, and some of us even the Rolling Stones.
Aesthetical vs Physiological
In your article, it seems that you suggest a preference for a physiological, “pathological” approach to music perception. If so, how is music different from random sound in terms of its neurological/physiological effect with regard to our perception? What is the role of musical syntax in the physiological approach?
Also an excellent question.
Language is a good starting point in answering this one. If we hear someone speaking in a foreign language that we do not understand, there’s still plenty of information that we can get from listening. We can hear how old or young the speaker is, his/her gender; we can hear how excited he/she is, and roughly what emotional state he/she is in. We can pick out which are the most important words in a given utterance, and whether individual sentences are questions or answers. We can hear when a new topic has begun and a previous one ended. We get all this from the paralinguistic features of language: intonation contour, temporal patterning, timbre, volume. (There can be accents in any of these dimensions.)
All of these “paralinguistic” or “prosodic” features of language are shared with music, but there’s clearly more to music than just these. Otherwise, to invoke Wittgenstein (the same passage that led to the naming of Huron’s “Wittgenstein’s Puzzle”)—why, when you wish to hear a minuet, do you wish it to be a certain minuet? We hear tunes as distinct tokens, not as graded, only slightly different versions of the same thing.
Just as we process spoken sound as consisting of distinct phonemes (not merely graded differences in timbral space), so too do we process sung sound as consisting of distinct pitch degrees within scalar space, and having distinct durational proportions within a metrical space. This is what we call “syntax” in music. (Aniruddh Patel’s Music, Language, and the Brain is a great resource on this.)
Most definitely, music is completely uninteresting without syntax. The only problems are that: a) music theory focuses almost entirely on syntax (these are Hanslick’s “tonally moving forms”); and b) music theory as a result mistakenly attributes much of the effect of the “paralinguistic” or “prosodic” features of musical sound to its syntax.
The most difficult thing to do presently is figure out how and why it is that processing the syntax of musical sound leads to affect. After all, we don’t get tons of affect from processing the syntax of linguistic utterances (though we get some, as in garden path sentences; see Jackendoff, 1991). In language, we get lots of affect from semantics as well. But music is more evocative than much if not all speech. Where does all that affect come from? It seems to me that it comes partly from musical syntax, and partly from musical “prosody.” We need to up the focus on prosody, and especially up the inquiries into how prosody and syntax interact in music, and in musical experience.
More subtle physiological responses
Dr. Bashwiner, Thank you for agreeing to visit with our class. I really enjoyed your paper, especially the prose. Your work was very engaging and even though it contained neuroscience terminology, it was still very accessible to the lay music theorist reader. My question stems from another source that we read earlier this semester by David Huron. He speaks about physiological responses like frisson in his book Sweet Anticipation. While the bodily reaction to dance music is to dance, other absolute music can evoke more subtle physiological responses, like chills or elevated heart rate. Can you speak to how the neuro-affective apparatus explains these responses in detail?
This is another excellent question. Chill responses to music are among the best understood from the neurological perspective.
The first to investigate them was Goldstein (1980), who called them “thrills.” He found that people experienced them fairly consistently in response to music that they liked, and that there was some consistency to where in the music they experienced them. He also found that opioid-blockers reduced the thrill experience for about 30% of participants, suggesting it might be an opioid-induced effect.
A more recent study by Grewe and colleagues (2007) supported the finding that chills would be found fairly consistently in the same portion of a piece across numerous sessions by a single participant. Physiological measurements were taken, and chill experiences correlated with skin conductance responses (SCRs) in most cases (292 out of 399). The pieces were analyzed psychoacoustically (by computer), not music theoretically—they looked only at loudness, sharpness, roughness, and fluctuation. Chill moments tended to correspond with changes in loudness, roughness, and fluctuation, although the data are somewhat messy and require a careful read before drawing any conclusions. Most importantly, though, it seems there’s a somewhat consistent relationship between musical/sonic structure and chill responsiveness.
Blood and Zatorre (2001) were the first to study this using brain imaging. They found that during intensely pleasurable chill responses, greater blood flow was seen in or near the nucleus accumbens, a region of the ventral striatum of the basal ganglia associated in numerous studies with the experience of pleasure.
In 2011, a study from the same lab (Salimpoor et al., 2011) reported that the increased blood flow in the nucleus accumbens seen in the 2001 study was in fact due to dopamine binding in the nucleus accumbens. (They were now using more sophisticated techniques, allowing not only for the tracking of blood flow, but also the tracking of dopamine binding.) Fascinatingly, though, while they found increased dopamine binding in the NAc leading up to the point in time when the chill was experienced, post-chill they found dopamine binding elsewhere, in the caudate nucleus (also in the basal ganglia, part of the dorsal rather than the ventral striatum).
Recalling Meyer’s expectation theory of 1956, it may very well be that there is an anticipatory phase leading up to a chill moment that is neurologically distinguishable from the consummatory phase of appreciating it. The former is dopaminergic and involves binding in the nucleus accumbens; the latter involves dopamine binding in a different region, the caudate nucleus.
I have a slightly different take on this. Salimpoor and colleagues studied the involvement of dopamine but not the opioids. The former is associated with ‘wanting,’ the latter with ‘liking’ (Berridge and Robinson, 1998; Kringelbach & Berridge, 2010). The caudate nucleus is not associated with pleasurable liking; rather, it is associated with cognition. Hence, I would posit that dopamine binding in the caudate nucleus following a musical chill would reflect a coding of the fact of the reward, rather than causing the reward experience itself. The anticipatory phase is probably reflected by dopamine binding in the nucleus accumbens (as Salimpoor and colleagues demonstrate), but the consummatory pleasure is probably achieved through opioid binding in nucleus accumbens and ventral pallidum (see Berridge and Kringelbach, 2010, for more on the neuroscience of pleasure generally, especially the Aldridge and Berridge chapter on the ventral pallidum).
As I understood it, after comparing two ideas about music perception, you are suggesting a neuroaesthetic approach. In some parts of the article you challenge the idea that music’s capacity to be meaningful depends on the listener’s attention and emotional state. We might be able to find some examples of music that has similar effects (in quality and quantity) on different people, children, and even animals. Should we consider such music empty of aesthetic meaning and related only to the physical and feeling side of perception?
Great question. I don’t at all think that there could be a music that was pleasurable if we were able to somehow completely get rid of thinking. It seems pretty clear to me, for instance, that my dog gets little pleasure from the music he and I both listen to. I can get pleasure from music even when I’m not actively paying attention to it; but if I were unconscious or in a coma, I probably would not get much pleasure from it. (Or if I were a dog.)
That said, consciously directed attention is only one, very limited faculty that we use when we appreciate music. It is the one that we understand best. But there’s much that goes on beneath the level of conscious awareness (when we listen to music) that we should be aware of.
There’s something called “misattribution” that happens all the time. When we go to a film, for instance, we may leave the theater thinking we really liked the acting, and the story. We may not at all think about the directing, or the lighting, or the music. We may be completely unaware of when music was playing at all, or how many lights were on in a particular scene, etc. Yet these things very much influence how much we like the acting and the story. We, in a sense, misattribute their effects to the acting and story.
I think that even in absolute musical contexts, we do a lot of misattributing. We pay careful attention to words sung, to melodies, and sometimes harmonic progressions. There’s lots of other stuff going on in music—the paralinguistic or prosodic stuff—that hugely influences how we feel about the melody and words and harmony. But we misattribute these effects to whatever it is we’re consciously attending to.
To understand what we’re not consciously able to access, we have to use indirect methods, which we tend to call “science.”
In more direct response to your question, I don’t think it’s possible to get pleasure from music that is purely physical and not at all mental. I don’t think so because I can’t envision a situation in which music could be physical without being mental, unless the listener was in a coma, etc. I’ve tried to make music for animals in the past, using their own sounds, their own rhythms, their own social situations…. but to little avail. I definitely do not think it is impossible to do so—but even doing so would rely on some thinking on the animal’s part, some mental activity. I do not think I could create music that would make, say, my desk start dancing, or my coffee cup wiggle.
In your account of Hanslick’s terms and frame, “pathological” could almost always be read as “bodily”. I’m intrigued by the concurrent use of the term “pathological” to mean both “related to emotions” (a use that is now obsolete) and “related to disease in the body.” http://www.oed.com/view/Entry/138800?redirectedFrom=pathological#eid It’s very intriguing from a new-musicology perspective to consider the implications of the body = pathology construction. Do you have anything to say about how this issue has been taken up in feminist musicology and/or disability studies? Do you think the 19th century idea that the body (bodily response) is pathological might still be operating today?
Unfortunately I don’t have anything interesting to say on this front. You point out to me something I was not fully aware of—that I take advantage of the fact that “pathological” currently means “diseased” in order to allow the absurdity of Hanslick’s position (as I see it) to speak for itself. I’m embarrassed to say I have not gone further than this. Perhaps we can talk more about this in future. It’s certainly an avenue to be explored.
Neuroaesthetic Approach – A New Language of Music?
On page 259 of Lifting the Foot you conclude that the aesthetical aspect of the response to music is inseparable from the pathological. Could you specifically talk about what a neuroaesthetic approach to talking about music might be? Are we waiting for a new language of music?
In some sense my push toward neuroaesthetics is aimed at musical scholars – theorists, historians, philosophers, aestheticians generally, and so on. It’s not totally aimed at composers, at least not proximately. But, at the same time, the reason I got into “neuroaesthetics” in the first place is because I was composing film music and was looking for a reference guide of sorts, to give me some sense of how to arouse and modulate certain forms of emotion in certain ways. And I simply couldn’t find anything of value at all. I found a lot of philosophical discussion of musical “meaning” – but it seemed mostly about whether music has meaning, not how it influences meaning. Topic theory, too, though more practical in a sense, I found to be too cerebral, too dependent upon enculturation and conscious awareness—cogitation, as I like to call it. Psychology I also found to be too general—fast music was found to arouse (or at least portray) happiness and fear, compared to slow music for sadness and “peacefulness.” But there is so much more to music than its speed, and psychology had at that time not quite caught up with music’s complexity. (It really still hasn’t, though it has made great strides. Juslin & Sloboda, 2010, is an amazing resource.) So, I do think that neuroaesthetics can ultimately be incredibly inspiring to composers. But I think that’s less direct than the influence it is likely to have more proximately on scholars and researchers.
It seems as though you suggest that intervals/sonorities that are based on simple proportions are more pleasing than those based on irrational proportions. Does this mean that a 12-tone equal temperament system may not be the best solution for creating pleasing sonorities? Should we use just intonation instead? If so, what are the implications for the use of this system (or another tuning system, perhaps) in the future? Thank you!
As I mentioned during our Skype discussion the other day, I make no claim to be an expert on the pleasantness of justly tuned intervals (and those tempered to be close to justly tuned). I pretty much take it as an assumption that justly tuned intervals are pleasing, that the simpler ratios (e.g., 2:1, 3:2, 4:3, 5:4, etc.) are more “consonant” and “pleasing” than the more complex ratios (e.g., 9:8, 15:16, 23:16). (See chapter 1 of my dissertation for a more in-depth tracing of such ideas throughout music history.)
Given this assumption, it follows that 12-tone equal temperament is particularly good at reproducing a “pleasing” perfect fifth (and 12th, etc.). The 3/2 fifth is 702 cents, and the 12-tone equal tempered fifth is 700 cents. Two cents difference (one fiftieth of a semitone) is effectively indiscernible (although I guess you can hear the warbles). But certainly many of the other intervals are less close to their just counterparts.
Other temperament systems are great for other purposes. One I’ve always meant to spend more time with is 19-tone equal temperament, which gives not only good fifths (695 cents) but also good thirds and sixths—and it has a septimal minor seventh (roughly the 7:4 ratio), which I also hope to explore compositionally sometime. There are good Wikipedia pages on 19-tone equal temperament and the septimal minor seventh, and Joseph Yasser has a cool book on the former from the 1930s.
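The cents figures above are easy to check for yourself. Here is a minimal sketch in Python, assuming only the standard conversion of a frequency ratio to cents (1200 × log₂ of the ratio), which compares how closely 12-tone and 19-tone equal temperament land on a few just intervals:

```python
import math

def cents(ratio):
    """Convert a frequency ratio to cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

def edo_deviation(ratio, edo):
    """Deviation in cents of the nearest step of an equal division
    of the octave (EDO) from the given just ratio."""
    step = 1200 / edo
    just = cents(ratio)
    nearest_step = round(just / step) * step
    return nearest_step - just

for name, ratio in [("perfect fifth 3:2", 3 / 2),
                    ("major third 5:4", 5 / 4),
                    ("septimal seventh 7:4", 7 / 4)]:
    print(f"{name}: just = {cents(ratio):.1f}c, "
          f"12-TET off by {edo_deviation(ratio, 12):+.1f}c, "
          f"19-TET off by {edo_deviation(ratio, 19):+.1f}c")
```

Run as written, this reproduces the numbers quoted above: the just fifth is about 702 cents, which 12-TET misses by only about 2 cents and 19-TET (at roughly 695 cents) by about 7; for the major third the ranking reverses, with 19-TET about 7 cents off against 12-TET’s 14. It also shows that the 7:4 is captured only roughly by either system, which is worth keeping in mind when exploring the septimal intervals compositionally.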
All of this is to say that the work I do accepts 12-equal temperament as the current norm, but nothing is tied to it, and I definitely think there can be pleasure out there that is linked to other systems. As you’ll find if you read Easley Blackwood’s book on alternate tuning systems, the modulatory capacities within each different equal tempered world are totally different, and they too promise new and undiscovered possibilities for pleasure.
Dear Dr. Bashwiner, Your article does a smooth job of interweaving disparate findings from a variety of sources. Did you find it challenging to balance writing about technical subjects, such as physiology, with keeping an accessible tone for an audience of potential non-specialists?
I mentioned this during our Skype session too: because of the subject matter I deal with, I am always writing for non-specialists. A main claim that I make in my research work is that emotional responses to music are simply not capable of being understood from any one isolated perspective alone. Emotion itself is psychological and biological, having social and experiential influences as well. Music, similarly, is acoustical, physiological, psychological and social. Put all of these together, and you get so many fields that you will never find someone who is an expert in all of them.
I also mentioned that for the most part I prefer an audience of educated non-experts. I like to think that if I explain my ideas clearly and creatively enough, even the most complex ideas can be understood by educated non-experts (e.g., a grandmother, a mother, a granting committee member, a reader of the New York Times). I highlight the word “creatively” because it is harder, no doubt, to communicate an idea effectively to a non-expert in the same amount of detail as to an expert. I aim to keep the complexity at the ideal level irrespective of the reader’s expertise, and then use metaphors and examples to bootstrap the reader up to the level necessary to understand the ideas in their full richness.
Basilar membrane and emotional responses
For the most part, musical climax is the section of a piece that evokes a strong emotional response from a listener. These passages are often marked by the feeling of inexorable tension that is subsequently released. Since tension and dissonance go hand in hand (to a degree), how might a listener’s emotional reaction be connected to the excitement of the basilar membrane?
Climax is a fascinating subject of study, and too few people have studied it, let alone written about it (Huron, 2006, being a notable exception). Perhaps shockingly, almost every generalization we have about what we like in music and why goes out the window for climax— and yet we wouldn’t like works anywhere near as much as we do if they didn’t have these climaxes. I myself have thought very little about it, so I won’t be able to offer much in the way of assistance except to say—yes, go for it, study climax!
Tension and dissonance do pretty much go hand in hand, and climax is dependent upon tension. Also, from my “Lifting the Foot” article, you know that the basilar membrane is the original source, physiologically, of dissonance. Helmholtz found, for instance, that there is no such thing as dissonance in the purely acoustic realm; it arises only once the sound begins to be transduced into neural impulses. Lateral inhibition leads the processing of one tone to interfere with the processing of another tone if the two fall within a critical band. Lateral inhibition takes place along the basilar membrane.
But not only there. If you play, for instance, A 440 into your left ear, and the B-flat just above it into the right ear, you will still experience dissonance. This means that dissonance is not only a product of the “analog-to-digital” translation process that takes place in the basilar membrane. It’s also in the inferior colliculus (my dissertation has more on this, also Houtsma & Goldstein, 1972) and probably elsewhere (auditory cortex). At least one species of monkey has been shown not only to recognize dissonance vs. consonance, but also to be able to generalize these categories (Izumi, 2000). Thus the physiological/neurological basis of consonance/dissonance may not be specific to humans but may be shared at least by other primates. This doesn’t necessarily mean they like it better (McDermott and Hauser, 2004), though at least one chimpanzee does (Sugimoto et al., 2009), as do baby chicks (Chiandetti, 2011).
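The A-440-versus-B-flat case can be put in numbers. Here is a minimal sketch in Python; it assumes the equivalent-rectangular-bandwidth (ERB) approximation of Glasberg and Moore (1990) as a convenient stand-in for the critical band, which is only one of several published estimates:

```python
import math

def erb(f_hz):
    """Equivalent rectangular bandwidth (Hz) at centre frequency f_hz,
    per the Glasberg & Moore (1990) approximation."""
    return 24.7 * (4.37 * f_hz / 1000 + 1)

a4 = 440.0
b_flat = a4 * 2 ** (1 / 12)      # equal-tempered semitone above A4, ~466 Hz
diff = b_flat - a4               # ~26 Hz apart
band = erb((a4 + b_flat) / 2)    # ~74 Hz wide around their midpoint

print(f"A4-Bb spacing: {diff:.1f} Hz; critical band there: ~{band:.0f} Hz")
print("within one critical band -> roughness expected:", diff < band)
```

The two tones sit well inside a single critical band, so their excitation patterns overlap on the basilar membrane and interfere, which is the standard peripheral story for the roughness of the minor second. As the binaural version of the experiment shows, though, that peripheral account cannot be the whole story.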
For recent neuroscientific work on musical tension (in humans), see Lehne et al. (2013).
In sum, and as a blunt answer, I wouldn’t expect there to be any sort of straightforward relationship between the emotional experience of musical tension and anything taking place along the basilar membrane. That said, what takes place along the basilar membrane takes place at multiple levels in the brain (inferior colliculus, primary auditory cortex, and probably dorsal nucleus of the medial geniculate body of the thalamus). This “lemniscal pathway,” therefore, probably maintains categorical distinctions between consonance and dissonance, and information in this pathway is probably transmitted in relatively distinct ways throughout the rest of the brain, interacting differently with higher cognitive centers and subcortical emotion centers (including endocrinological, attentional, motivational, etc.). Thus, yes, there is likely to be a physical correlative to tension, dissonance, and thus climax in the brain; but the basilar membrane is only a tiny part of that picture.
If some sector of music theory is devoted to understanding the ‘neuro-affective apparatus’ and how it responds to music, do you think there is room for this research in the medical field in understanding its potential for aiding in improving health? For example, collecting measurables to determine the role certain musics can have on improving mental health (during or outside therapy), coupling with cancer treatment, or patients recovering from injury or treatment while hospitalized (or after).
Yes, definitely. The fact that there is a biennial or triennial international conference called “The Neurosciences and Music” attests to the fact that there are numerous neuroscientists working seriously on understanding music’s influence within the body. This means that enough people believe music to be capable of affecting health that they either devote research hours to it, or they fund those research projects. Music is also used more and more frequently in hospitals now (e.g., my colleague Patrice Repar’s Arts in Medicine program). Music demonstrably affects the immune system, hormone levels, and movement ability in Parkinson’s patients. I’m sure there are many more examples that I’m not thinking of off the top of my head.
Metaphor and Physiological Semantics
Thank you for your time to “meet” with us. Your article “Lifting the Foot” inspired my curiosity when you argued in your conclusion that, without a developed musical schema to account for the relationship between the physiological and cognitive realms, any answers arising from investigations of auditory phenomena will only result in metaphors; yet, only two paragraphs earlier, you state that metaphor “is a legitimate mode of thought” in its function to “explain the as yet unexplained” (261). To me, it seems that music semantics, in particular, falls in the category of “as yet unexplained”. If this is the case, then since we have not established or universalized any form of musical semantics to account for the auditory sensations we experience, is it possible for metaphor to continue to function as a legitimate means to account for the auditory experiences that we have until such a category in music can be validly formed?
Yes. Metaphor is definitely a legitimate mode of thought. What I wish to impart is that, as far as science goes, it’s a beginning rather than an end. Using a metaphor helps us get our minds around an idea that is difficult to express, or to understand by other means. That said, we often rely on metaphors as if they were the final word, as if they were explanatory or even causal.
This is very much the case for understanding music’s relationship to the emotions. Many of us feel that music arouses emotions, and yet we don’t understand how it does so. When we brainstorm about it, we recognize a) that music sometimes resembles things in the world that we feel emotion about; and b) that music sometimes even resembles emotions themselves, or the expression of emotions. Once we point out that music resembles emotion, its expression, or something that evokes emotion, we naturally leap to supposing that this relationship may be a causal one.
The causal reading is problematic, however. Peter Kivy (1980) invokes the idea of the kidney-shaped swimming pool: of course the pool doesn’t stand for or signify a kidney. If someone urinates in the pool, to take the analogy farther (a bit too far, I recognize), it would be silly to blame the shape of the pool. Resemblance in that case is obviously not causal of anything.
But with musical emotion, we often reason that, since music resembles emotion, emotional expression, and/or things that give rise to emotion; and since music gives rise to emotion, it must follow that the resemblance causes the giving rise. This is an Occam’s razor sort of explanation, but it simply doesn’t follow. To make a big deal out of resemblance, and then to call it quits, is to cover up the fact that a causal explanation is still missing. In my opinion, this is what we do when we make too big a deal about metaphors.
When it comes to musical semantics, we know both more and less than what we know about emotion. The sound of the Dies Irae, in whatever composition it occurs, calls up a recognition in my mind that the composer is invoking the semantic idea of death or a funeral. Does it make me feel scared or sad? I really don’t think so. If it is played loudly, suddenly, darkly, powerfully, grandly—any combination of these—I think I would be more inclined to experience some terror. And because I have the semantic token of “death” or “funeral” in my head, I may then start thinking of eternal damnation, or some such thing, and think about this at a greater intensity, and with greater darkness, than if the Dies Irae had been played on recorders by third graders. In short, I tend to take greater interest in music’s emotional effects than in its ability to reference ideas—its significance more than its signification. And I do think a lot of interest in musical meaning is misplaced when it focuses on signification over significance. I think of signification as icing on the cake of significance—icing which wouldn’t exist without the cake itself. Or at least wouldn’t be music. (For cupcakes in music, see chapter 6 of my dissertation. Seriously.)
Aldridge & Berridge (2010), “Neural Coding of Pleasure: ‘Rose-tinted Glasses’ of the Ventral Pallidum.” In Kringelbach, M., and Berridge, K. C. (eds.), Pleasures of the Brain, 62–73. Oxford, UK: Oxford University Press
Bashwiner, D. M. (2010). Musical Emotion: Toward a Biologically Grounded Theory. (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses A&I (Order No. 3408503).
———. (2011). Lifting the foot: the neural underpinnings of the “pathological” response to music. In B. M. Stafford (Ed.), A Field Guide to a New Meta-Field: Bridging the Humanities-Neurosciences Divide (239–266). Chicago: University of Chicago Press.
———. (2013). Musical Analysis for Multimedia: A Perspective from Music Theory. In S.-L. Tan, A. J. Cohen, R. Kendall, and S. Lipscomb, eds., Psychology of Music in Multimedia, Oxford University Press.
Berridge, KC, & Robinson, TE (1998). What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Res Rev, 28, 309–369.
Blackwood, Easley. 1986. The Structure of Recognizable Diatonic Tunings. Princeton University Press.
Blood, A. J. and R. J. Zatorre. 2001. “Intensely Pleasurable Responses to Music Correlate with Activity in Brain Regions Implicated in Reward and Emotion.” Proceedings of the National Academy of Sciences 98, no. 20: 11818–11823.
Chiandetti, C. and Vallortigara, G. 2011. Chicks Like Consonant Music. Psychological Science 22: 1270.
Geiser, E., Notter, M., and Gabrieli, J. D. E. 2012. A Corticostriatal Neural System Enhances Auditory Perception through Temporal Context Processing. The Journal of Neuroscience 32(18): 6177–6182.
Goldstein, A. 1980. “Thrills in Response to Music and Other Stimuli.” Physiological Psychology 8: 126–9.
Grahn, J. A. and M. Brett. 2007. “Rhythm and Beat Perception in Motor Areas of the Brain.” Journal of Cognitive Neuroscience 19, no. 5: 893–906.
Grewe, O., Nagel, F., Kopiez, R., and Altenmüller, E. (2007). Listening to music as a re-creative process: physiological, psychological, and psychoacoustical correlates of chills and strong emotions. Music Perception 23(4), 297-314.
Hanslick, E. 1986. On the Musically Beautiful: A Contribution Towards the Revision of the Aesthetics of Music. Trans. G. Payzant. Indianapolis, IN: Hackett. Originally published 1854. Translation based upon the eighth edition, 1891.
Helmholtz, H. L. F. 1954. On the Sensations of Tone as a Physiological Basis for the Theory of Music, second edition. Trans. and ed. A. J. Ellis. New York: Dover Publications, Inc. First edition 1863. Second edition 1877. Second edition in English translation originally published 1885.
Houtsma, A. J. M., and J. L. Goldstein. 1972. “The Central Origin of the Pitch of Complex Tones: Evidence from Musical Interval Recognition.” Journal of the Acoustical Society of America 51, no. 2: 520–29.
Huron, D. 2006. Sweet Anticipation: Music and the Psychology of Expectation. Cambridge, Massachusetts: The MIT Press.
Izumi, A. 2000. “Japanese Monkeys Perceive Sensory Consonance of Chords.” Journal of the Acoustical Society of America 108, no. 6: 3073–78.
Jackendoff, R. 1991. “Musical Parsing and Musical Affect.” Music Perception 9: 199–230.
Janata, P., Tomic, S. T., and Haberman, J. M. 2012. Sensorimotor Coupling in Music and the Psychology of the Groove. Journal of Experimental Psychology: General 141: 54–75.
Juslin, P. N. & Sloboda, J. A. (Eds.). 2010. Handbook of Music and Emotion: Theory, Research, Applications. Oxford: Oxford University Press.
Kivy, P. 1980. The Corded Shell: Reflections on Musical Expression. Princeton: Princeton University Press.
Kringelbach, ML, Berridge, KC, (Eds) (2010). Pleasures of the brain. Oxford: Oxford University Press.
Lehne, M., Rohrmeier, M., and Koelsch, S. 2013. Tension-related activity in the orbitofrontal cortex and amygdala: an fMRI study with music. Social Cognitive and Affective Neuroscience. doi:10.1093/scan/nst141.
Liégeois-Chauvel, C., I. Peretz, M. Babaï, V. Laguitton, and P. Chauvel. 1998. “Contribution of Different Cortical Areas in the Temporal Lobes to Music Processing.” Brain 121: 1853–67.
McDermott, J. and M. D. Hauser. 2004. “Are Consonant Intervals Music to their Ears? Spontaneous Acoustic Preferences in a Nonhuman Primate.” Cognition 94: B11–B21.
Meyer, L. B. 1956. Emotion and Meaning in Music. [EMM.] Chicago: University of Chicago Press.
———. 1967. “On Rehearing Music.” In Music, the Arts, and Ideas: Patterns and Predictions in Twentieth-Century Culture, 42–53. Chicago: University of Chicago Press. Originally published 1961.
Musacchia, G., Large, E., and Schroeder, C. E. 2013. Thalamocortical mechanisms for integrating musical tone and rhythm. Hearing Research 308: 50–59.
Patel, A. D. 2008. Music, Language, and the Brain. Oxford: Oxford University Press.
Peretz, Isabelle, and Robert J. Zatorre. “Brain Organization for Music Processing.” Annual Review of Psychology 56 (2005): 89-114.
Salimpoor, VN, Benovoy, M, Larcher, K, Dagher, A, Zatorre, RJ (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nat Neurosci, 14, 257–262.
Sugimoto, T., H. Kobayashi, N. Nobuyoshi, Y. Kiriyama, H. Takeshita, T. Nakamura, and K. Hashiya. 2010. “Preference for Consonant Music over Dissonant Music by an Infant Chimpanzee.” Primates. doi:10.1007/s10329-009-0160-3.
Wittgenstein, L. 1966. Lectures and Conversations on Aesthetics, Psychology, and Religious Belief. Berkeley: University of California Press.
Yasser, Joseph. 1932. A Theory of Evolving Tonality. American Library of Musicology.
“Quality” refers to whether a triad is major, minor, diminished, or augmented. (The qualities of seventh chords are discussed in a separate post.)
A “triad” is a collection of three notes, but not just any three notes. These notes are always selected as every other note of a scale. Given the scale of C major (white notes of the piano), the notes C-E-G form a C major triad, the notes D-F-A form a D minor triad, and so on, as follows:
C-E-G C major
D-F-A D minor
E-G-B E minor
F-A-C F major
G-B-D G major
A-C-E A minor
B-D-F B diminished
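The table above is mechanical enough to compute. Here’s a minimal Python sketch (the names `triad`, `SEMITONE`, and so on are mine, not anything official) that stacks every other note of the C major scale and labels each triad’s quality by the half-step sizes of its third and fifth:

```python
# Sketch: build each white-note triad by stacking every other note of the
# C major scale, then name its quality from the semitone sizes of the
# third and fifth above the root.
SCALE = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
SEMITONE = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}
QUALITY = {(4, 7): 'major', (3, 7): 'minor', (3, 6): 'diminished', (4, 8): 'augmented'}

def triad(degree):
    """Triad on a 0-based degree of the C major scale: (notes, quality)."""
    notes = [SCALE[(degree + i) % 7] for i in (0, 2, 4)]
    root = SEMITONE[notes[0]]
    third = (SEMITONE[notes[1]] - root) % 12
    fifth = (SEMITONE[notes[2]] - root) % 12
    return notes, QUALITY[(third, fifth)]

for d in range(7):
    notes, qual = triad(d)
    print('-'.join(notes), qual)
```

Running it prints the same seven triads and qualities as the table above, from C-E-G major through B-D-F diminished.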
What makes C-E-G major and D-F-A minor? Consider where the black keys fall relative to the white keys in these chords. The interval from C to E is actually larger than the interval from D to F, because the former has two black keys in between, the latter only one (see Basic Scales: Major and Minor). C-E is called a major third, while D-F is a minor third; the first has four half steps and the second has three. Axiomatically, a three-half-step third is minor in quality, while a four-half-step third is major in quality (see Intervals).
A major chord consists of a root note, another note that is a major third above the root, and another note that is a perfect fifth above the root. A minor chord also consists of a root note with a perfect fifth above it, but the note that is a third above the root is a minor third above rather than a major third. The notes C-E-G spell C major; the notes C-Eb-G spell C minor; the notes D-F-A spell D minor; the notes D-F#-A spell D major.
Also axiomatically, a diminished chord consists of a minor third and diminished fifth above the root, while an augmented chord consists of a major third and augmented fifth above the root. Thus, using a root note of B, we get the following:
B major: B-D#-F#
B minor: B-D-F#
B diminished: B-D-F
B augmented: B-D-Fx (F-double-sharp).
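To double-check spellings like these, measure the half steps above the root. Here is a hypothetical helper (my own naming) that handles sharps, flats, and the double-sharp:

```python
# Hypothetical helper: classify a triad's quality from the half-step
# distances of its third and fifth above the root.
PITCH = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def pitch_class(name):
    """Pitch class of a note name; '#' raises, 'b' lowers, 'x' is double-sharp."""
    pc = PITCH[name[0]]
    pc += name.count('#') - name.count('b') + 2 * name.count('x')
    return pc % 12

QUALITY = {(4, 7): 'major', (3, 7): 'minor', (3, 6): 'diminished', (4, 8): 'augmented'}

def quality(root, third, fifth):
    t = (pitch_class(third) - pitch_class(root)) % 12
    f = (pitch_class(fifth) - pitch_class(root)) % 12
    return QUALITY[(t, f)]

print(quality('B', 'D#', 'F#'))  # major
print(quality('B', 'D', 'F#'))   # minor
print(quality('B', 'D', 'F'))    # diminished
print(quality('B', 'D#', 'Fx'))  # augmented
```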
There are very few pieces of music that contain all versions of a triad. For instance, I can think of no pieces that use the chords C major, C minor, and C diminished all at once, or even in very modest proximity with one another. Yet there are an infinite number of pieces that use the chords C major, D minor, and B diminished within close proximity of one another, often over and over again. Why is this? In short, it is because the former set of three chords, though they share the same root note, are not closely related to one another, while the latter set are.
The following is a close-knit family of triads: C major, D minor, E minor, F major, G major, A minor, and B diminished. Just about every folk song on the planet (and someday Mars, too) uses these chords and only these chords (though often transposed as a whole to other keys). These are simply the chords that are generated by playing every third note of a C major scale, i.e., the white notes of the piano. Try it yourself. You can use exactly the same hand formation, playing C-E-G, and then shifting the whole hand rather thoughtlessly up one note to create the chords D-F-A, E-G-B, etc. It’s fun. For a few minutes.
What’s more fun is to try this in different keys (because it’s harder). Do it in G first, and just remember that every F will be sharped (in your iii chord, your V chord, and your viiº chord). Now do it in F, flattening the fourth degree of the scale—B-flat—in the ii chord, the IV chord, and the viiº chord. And now try it in some more difficult keys, just by beginning with a major scale in that key and forming chords out of every other note of the scale. This is actually fun for more than a few minutes. I’d go so far as to say for several.
I snuck some Roman numerals into the previous paragraph. For those of you who are not Italian, don’t fret. They are easily understood. The C major chord, in the key of C major, is the I chord; the D minor chord in the key of C major is the ii chord, and so on. The I chord is formed from the first degree of the scale, or “do” in solfège; the ii chord is formed from the second degree of the scale (or “re” in solfège), and so on. Capital Roman numerals are for chords that are major in quality, while lowercase numerals are for minor chords. The diminished chord gets the º sign (viiº). See, nothing to worry about!
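Since the convention is purely mechanical, it can be sketched in a few lines of Python (the names are mine, and the list of major-key triad qualities is the one given earlier):

```python
# Sketch: Roman numerals for the seven diatonic triads of a major key,
# following the convention above: uppercase = major, lowercase = minor,
# lowercase plus º = diminished.
ROMAN = ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII']
MAJOR_KEY_QUALITIES = ['major', 'minor', 'minor', 'major', 'major', 'minor', 'diminished']

def numeral(degree, quality):
    n = ROMAN[degree]          # degree is 0-based: 0 = tonic ("do")
    if quality == 'major':
        return n
    if quality == 'minor':
        return n.lower()
    return n.lower() + 'º'     # diminished

print(' '.join(numeral(d, q) for d, q in enumerate(MAJOR_KEY_QUALITIES)))
# → I ii iii IV V vi viiº
```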
The last thing to do is derive the triads in minor. This is admittedly somewhat complex, given the unwieldiness of having three different forms of minor. It’s best to start with natural minor, and we’ll use A as the root (to allow us to think only in white notes). The triads we get are the following:
A B C D E F G A
min dim Maj min min Maj Maj min
i iiº III iv v VI VII i
If you start messing around with these chords, however, you’re going to feel like you’re playing “Greensleeves,” or any other folk song in minor. This doesn’t really sound like Western classical music, in other words. To get that special sound, we have to raise the seventh degree—the leading tone—transforming the scale from natural minor to harmonic minor. Note, however, that we do this only on the V and viiº chords, not on the III chord:
A B C D E F G# A
min dim Maj min Maj Maj dim min
i iiº III iv V VI viiº i
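Both tables can be reproduced with a short script. This sketch (my own naming, using pitch classes rather than note names) raises the leading tone only in the V and viiº chords, leaving III alone:

```python
# Sketch: triad qualities in A minor. With harmonic=True, the seventh
# scale degree (G -> G#) is raised only in the V and viiº chords,
# as described above; the III chord keeps its natural G.
QUALITY = {(4, 7): 'major', (3, 7): 'minor', (3, 6): 'diminished', (4, 8): 'augmented'}
A_NATURAL_MINOR = [9, 11, 0, 2, 4, 5, 7]   # A B C D E F G as pitch classes

def qualities(scale, harmonic=False):
    out = []
    seventh = scale[6]
    for d in range(7):
        pcs = [scale[(d + i) % 7] for i in (0, 2, 4)]
        if harmonic and d in (4, 6):       # V and viiº only
            pcs = [(p + 1) % 12 if p == seventh else p for p in pcs]
        t = (pcs[1] - pcs[0]) % 12
        f = (pcs[2] - pcs[0]) % 12
        out.append(QUALITY[(t, f)])
    return out

print(qualities(A_NATURAL_MINOR))
print(qualities(A_NATURAL_MINOR, harmonic=True))
```

The first line prints the natural-minor qualities (min, dim, Maj, min, min, Maj, Maj); the second shows v becoming major and VII becoming diminished.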
One occasionally sees the sixth scale degree raised as well, as in melodic minor (ascending). Raising this scale degree gives us a minor ii, a major IV, and a diminished chord on the raised sixth. These are almost always treated as incidental alterations rather than as substantive harmonies. In other words, don’t plan to raise the third of your iv chords willy-nilly, or make your ii minor rather than diminished.
One final note, about augmented chords: if you were to raise the seventh degree of the scale in the III chord (in harmonic minor), you would end up with an augmented chord (in A minor, the chord would be C-E-G#). But this effectively never happens. A rule of thumb to memorize is that augmented chords always (or almost always) have dominant function—i.e., V chord function. This means that if you encountered the chord C-E-G# and thought you had an augmented III in A minor, you’d almost certainly be wrong. Rather, you’d probably have an augmented V chord in a completely different key, namely F major (C is V in the key of F). This is an idea that will be returned to in a later post.
A scale is an ordered collection of notes. In Western classical music, there are two main types of scale: major and minor. The minor scale has three commonly accepted forms: natural, harmonic, and melodic. Other scales, such as the octatonic and whole-tone, will be discussed in a later post.
The major scale is easy to derive if you have a piano in front of you, or if you know the first seven letters of the alphabet relatively well. Start on C and keep going till you get back to C again: C-D-E-F-G-A-B-C.
The natural minor scale is even easier. Do the same as above, but begin on A instead: A-B-C-D-E-F-G-A.
To get a harmonic minor scale, take the above A natural minor scale and raise the seventh, i.e., raise the G to G#: A-B-C-D-E-F-G#-A.
To get melodic minor (ascending), do the same as above but raise the sixth as well: A-B-C-D-E-F#-G#-A. On the way down in melodic minor, one traditionally reverts to natural minor. Thus ascending melodic minor raises the sixth and seventh (F# and G#), while on the way down these notes are lowered to their natural forms, F and G.
So how does one translate these principles to other keys? There are two ways. One is simply to apply a key signature. The key of G major has one sharp, so the notes in a G major scale will all be natural except for the F, which will be sharped: G-A-B-C-D-E-F#-G. For E minor, apply the same principle. The scale E natural minor will be E-F#-G-A-B-C-D-E; the scale E harmonic minor will be E-F#-G-A-B-C-D#-E; the scale E melodic minor (ascending) will be E-F#-G-A-B-C#-D#-E.
I say that’s the easy way because it’s relatively mindless. You can simply look up in a table what the key signature of a given key is and then apply the transformation. You will also quickly memorize this “table” if you haven’t already done so. (Or you should.) But you won’t really understand what a scale is if you only do it this way. The best way to understand what scales are and where they come from is to consider their whole-step/half-step patterns.
To do this, we’ll superimpose the C major scale over the chromatic scale:
C D E F G A B C
C – C#/Db – D – D#/Eb – E – F – F#/Gb – G – G#/Ab – A – A#/Bb – B – C
Note that there are two different sizes of “space” or “interval” between the notes of the major scale. The intervals C-D, D-E, F-G, G-A, and A-B are larger in size than the intervals E-F and B-C. All of these intervals between adjacent letter names are called “steps,” but the former set are called “whole steps” and the latter “half steps.” The terms “tone” (or “whole tone”) and “semitone” are also used.
The whole-step/half-step pattern of a major scale is thus w-w-h-w-w-w-h.
We can derive our friend the G major scale based upon this pattern. Beginning on G, we ascend by whole step [w] to A, again by [w] to B, now by [h] to C, now by [w] to D, now again by [w] to E, now by [w] to F#, and finally by [h] to G. We have arrived at the same answer as we did by using the key signature method.
Now let’s look at the minor scales. I personally tend not to think in whole-step/half-step patterns for all the versions of the minor scale. Rather, I do for natural minor, and then I derive harmonic and melodic from the natural.
The whole-step/half-step pattern for natural minor is w-h-w-w-h-w-w. Starting on D, for instance, this yields D-E-F-G-A-Bb-C-D—which is exactly what we would get if we were to use the key signature method (D being the relative minor of F, and F having one flat in its key signature). To get harmonic minor we now raise the seventh, yielding: D-E-F-G-A-Bb-C#-D. And to get melodic minor (ascending), we raise the sixth as well: D-E-F-G-A-B-C#-D.
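All of these derivations follow one procedure: walk the step pattern up the letter names, giving each letter whatever accidental it needs to land on the target pitch. Here is a Python sketch of that procedure (the function and constant names are mine, not from any library):

```python
# Sketch: spell a scale from a tonic and a whole-step/half-step pattern
# (2 = whole step, 1 = half step). Each scale uses each letter name once;
# the accidental is whatever makes that letter hit the target pitch.
LETTERS = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
NATURAL = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

MAJOR          = [2, 2, 1, 2, 2, 2, 1]   # w-w-h-w-w-w-h
NATURAL_MINOR  = [2, 1, 2, 2, 1, 2, 2]   # w-h-w-w-h-w-w
HARMONIC_MINOR = [2, 1, 2, 2, 1, 3, 1]   # raised seventh
MELODIC_MINOR  = [2, 1, 2, 2, 2, 2, 1]   # raised sixth and seventh (ascending)

def spell(tonic, pattern):
    letter = tonic[0]
    pitch = NATURAL[letter] + tonic.count('#') - tonic.count('b')
    idx = LETTERS.index(letter)
    notes = [tonic]
    for step in pattern:
        pitch += step
        idx = (idx + 1) % 7
        letter = LETTERS[idx]
        diff = (pitch - NATURAL[letter] + 6) % 12 - 6   # nearest accidental
        notes.append(letter + ('#' * diff if diff > 0 else 'b' * -diff))
    return notes

print(spell('G', MAJOR))          # G major gets its F#
print(spell('D', NATURAL_MINOR))  # D natural minor gets its Bb
```

The same function handles any key: spell('Bb', MAJOR) yields Bb-C-D-Eb-F-G-A-Bb.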
Before continuing on, derive a B-flat major scale and all three versions of B-flat minor. No cheating!
Okay, now check your answers:
Nat. Min.: Bb-C-Db-Eb-F-Gb-Ab-Bb
Harm. Min.: Bb-C-Db-Eb-F-Gb-A-Bb
Mel. Min.: Bb-C-Db-Eb-F-G-A-Bb
Songs are not trivial. They are lifeblood. You have to use a song to get at truth. You have to escalate through it, you have to explore it fully. Almost “encyclopedically,” I would say. That sounds cold-blooded, but there’s a kind of cold-blooded determination you have to have to force yourself to explore every corner of your trauma, all the darkness surrounding the light. We like to direct the spotlight at what we’d like to be the truth, the whole truth. It may be truth, but if it is it may also be just one side of the coin. Flip that coin, and write a song about what’s on the back. To understand what’s in the spotlight—to really understand it, to engage with it, to embrace it—go into the darkness surrounding it and illuminate that. It will be ugly, and that’s how you know you’ve found something good, something real, something beautiful, something human and not pretend, something you.