An interesting five minute piece exploring findings on the chemical reactions occurring in the brain as a result of being exposed to story:
The video below can be found at NPR.org (3 min, 34 sec).
Humans innately go in circles when the horizon — the simplest form of context — is removed from our toolbox of perceptions. Is the circle a default protection mechanism? An adaptive trait that, as cavemen, saved us from disappearing into the wilderness when we inevitably got lost? And what implications does this have for narrative artists: how can we take advantage of it?
It seems to me that storytellers — playwrights, novelists, epic poets, etc — have intuitively known about this tendency for millennia. It is literally a trope employed in many stories: the characters, lost in the dark, walk in circles (think Scooby Doo). Aside from literal characters lost in the landscape, circle plots aren’t rare; many plots circle back to where they began, or execute an upward spiral (a form of circle, natch) that returns to where we began, but with new context. The good ol’ “rule of three” could be seen as a series of loops wherein the story returns to an idea a couple of times, each time adding something new. No doubt there are dozens of ways we could apply this understanding as narrative engineers persuading audiences to navigate our stories.
How much do your characters talk about themselves?
Recent scientific research suggests that in everyday life we talk about ourselves a lot:
“If you’re like most people, your own thoughts and experiences may be your favorite topic of conversation. On average, people spend 60 percent of conversations talking about themselves—and this figure jumps to 80 percent when communicating via social media platforms such as Twitter or Facebook.” Read the full text at: The Neuroscience of Everybody’s Favorite Topic: Scientific American.
Just because current science says we talk about ourselves around sixty percent of the time doesn’t mean we absolutely need to check our manuscripts to see that our characters match that standard. It might help the storytelling be more “realistic” if we do so, but “realistic” is not really an aesthetic standard so much as a concept to be balanced among a large number of factors in a given narrative. Science-says need not be Simon-says.
However, tracking self-referential dialogue can be an interesting notion to play with, especially in a story that is dialogue-driven. It can be useful to go through your play script, for example, and label each line of dialogue as 1) self-referential, 2) neutral or not applicable, or 3) reflective of external concerns or others. From this data, one could build a simple chart and determine approximately how much of a character’s expression is about himself and how much refers to people, things, or ideas outside himself. Using 60 percent as a baseline, we have a benchmark for whether characters are behaving naturally, at least according to the study cited above. Finding that one of your characters talks about himself far above or below that average might reveal something you hadn’t consciously crafted into the character. Either an overabundance or a paucity of self-referential dialogue might be reinforcing the actions a character is taking, or might be undermining them; both can be interesting choices, but they generate divergent audience responses.
This kind of charting is, to some degree, subjective to the author’s intent behind the individual lines. And it is tempered by an author’s control in generating subtext that belies the surface expressions of the lines; any given snippet of dialogue may seem outwardly directed but actually carry a subtext that is self-referencing, or conversely, seem selfish but actually be meant as selfless. To be as accurate as possible, a chart of a character’s self-referential dialogue should track what a character means — what the author intends, and the meanings she places underneath the character’s words — more so than the literal surface of what the character says. As a further caveat, even if a character talks about himself only 15 percent of the time, that is no ironclad guarantee he is less self-centered: actions not reflected anywhere in a character’s dialogue can outweigh everything that character says. Even so, while there is clearly a good deal of latitude for interpretation in judging how self-centered any character’s dialogue is, charting how often a character talks about himself is potentially a valuable tool.
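As a rough sketch of the charting idea above (assuming you have already hand-labeled each line of dialogue for one character; the labels and tallies below are invented purely for illustration), the bookkeeping might look like this:

```python
from collections import Counter

# Hand-labeled dialogue lines for one character. Each label is one of:
#   "self"     - self-referential
#   "neutral"  - neutral or not applicable
#   "external" - reflective of external concerns or others
# These labels are a made-up sample, not from any real script.
labels = ["self", "self", "external", "neutral", "self",
          "external", "self", "self", "self", "self"]

counts = Counter(labels)          # tally each category
total = len(labels)
self_share = counts["self"] / total

print(f"self-referential: {self_share:.0%} of {total} lines")
print(f"deviation from the ~60% conversational baseline: {self_share - 0.60:+.0%}")
```

Scaling this up is just a matter of keeping one list of labels per character; and remember that the 60 percent figure is the conversational average reported in the study cited above, not a target to hit.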
Here’s another interesting compilation of findings about what neuroscientists are seeing in the brain when we experience story. It’s worth the read, especially for how meta-analysis of fMRI studies has been confirming that our neurons fire in parallel ways: when chewing a raisin, or just reading about munching raisins, the same areas of the brain light up. And, as a nice, lemony glaze on this carrot cake of info, the author even goes so far as to remind us that we’re attracted to this neurological research because it confirms a bias we already have:
These findings will affirm the experience of readers who have felt illuminated and instructed by a novel, who have found themselves comparing a plucky young woman to Elizabeth Bennet or a tiresome pedant to Edward Casaubon. Reading great literature, it has long been averred, enlarges and improves us as human beings. Brain science shows this claim is truer than we imagined. The Neuroscience of Your Brain on Fiction – NYTimes.com
There’s nothing wrong with confirming a bias like this, right? I mean, lemon frosting is just the thing: a tang to cut the sweet, yet decadent enough to make me feel swell… Still, I guess there is one small question: if we assume the scientists are getting the science right — not an entirely safe assumption, actually, since correlation is not always causation — but if we assume the science is right, isn’t it just telling us story is important to humans? Don’t I already know this? Doesn’t everyone already know that story is important?
Well, some folks might argue, but not many. I can’t find anyone to argue about whether story is important; everyone seems to agree it is, many scientists now included. I can find people to argue about the neuroscience, but not the bias itself. Who is out there saying story or metaphor aren’t important to us as a species?
So, for the science of narrative, what does neurology really have to tell us? For our tribe (authors), crowing about research such as this might seem like an exercise in self-congratulation, assuming the research proves true. Sometimes, frankly, that’s what is going on. Sometimes we just want to be told we’re doing something valuable for the species. But that’s not always the case. In fact, from the same article I quote above comes the following little revelation that might be of tremendous, rubber-meets-the-road value to story engineers everywhere:
…a team of researchers from Emory University reported in Brain & Language that when subjects in their laboratory read a metaphor involving texture, the sensory cortex, responsible for perceiving texture through touch, became active. Metaphors like “The singer had a velvet voice” and “He had leathery hands” roused the sensory cortex, while phrases matched for meaning, like “The singer had a pleasing voice” and “He had strong hands,” did not. The Neuroscience of Your Brain on Fiction – NYTimes.com
Does that get your attention? (Or, should I say: does this velvet news prickle your leathery nape the way it does mine?) Again, assuming the science is correct, as narrative artists we are activating the brain more acutely when we use language that evokes texture or touch. It’s true that creative writing teachers have been saying things like this for a century or six, but now scientists are kinda sorta suggesting it too! Perhaps there’s still room to argue about whether activating these regions with sensation-centric language might be counterproductive to conveying certain ideas in certain contexts. And again, the proof is far from complete, or very deep.
Still, bias confirmed? Yep. Is it likely a good idea to employ sensory-conscious language in every story you tell, perhaps even in a blog post like this, where the only literal, physical reality is the slick glide of my thumb on a sleek glass touchpad, and the clack of square, black keys? You bet your lemon frosting it is… Now, where’s the neuroscience on the effect of mixing metaphors? That would be extremely valuable.
The power of metaphor is being mapped by brain scientists…?
I first encountered neuroscientist Robert Sapolsky on Radio Lab, a podcast I follow closely to pick up tidbits that apply to my field of practice. Upon learning about the guy, I went back and looked up anything I could find that he’d written. (Well, anything he’d written for laymen, that is, since much of his academic writing is likely beyond me.) Below is something Sapolsky wrote for civilians, and it applies directly to the science underpinning narrative devices such as metaphor. The whole article is a great read and worth the time for anyone interested in the physiological reasons why metaphor is so powerful.
Consider an animal (including a human) that has started eating some rotten, fetid, disgusting food. As a result, neurons in an area of the brain called the insula will activate. Gustatory disgust. Smell the same awful food, and the insula activates as well. Think about what might count as a disgusting food (say, taking a bite out of a struggling cockroach). Same thing.
Now read in the newspaper about a saintly old widow who had her home foreclosed by a sleazy mortgage company, her medical insurance canceled on flimsy grounds, and got a lousy, exploitative offer at the pawn shop where she tried to hock her kidney dialysis machine. You sit there thinking, those bastards, those people are scum, they’re worse than maggots, they make me want to puke … and your insula activates. Think about something shameful and rotten that you once did … same thing. Not only does the insula “do” sensory disgust; it does moral disgust as well. Because the two are so viscerally similar. When we evolved the capacity to be disgusted by moral failures, we didn’t evolve a new brain region to handle it. Instead, the insula expanded its portfolio. [Bold emphasis is mine.]
Sapolsky’s entire essay can be found at This Is Your Brain on Metaphors – NYTimes.com.
Note: Although there is growing dissent and push-back in the scientific community concerning how the media has misrepresented recent advances in neurology by blowing them out of proportion or taking them out of context, Sapolsky is one guy I tend to trust in this regard. Sapolsky himself is making arguments based on well-regarded evidence from his own findings and the research of those he trusts. Information coming directly from him is not quite the same as some layman — me, or some journalist, or the evening news — making leaps or taking license that the research itself doesn’t support.
It seems there’s something about the process of going through a multi-stepped procedure that provokes in people feelings of control, above and beyond the role played by any associated religious or mystical beliefs.
See the short review of recent scientific research on the power of ritual quoted above at: BPS Research Digest: Rituals bring comfort even for non-believers.
I won’t argue here how theater likely arose from ritual; google “theater+ritual” and you’ll see the arguments for the emergence of theater from the rituals of the ancient Greeks, etc. Nor is there any need to belabor how Beckett and many other modern theater artists can be viewed through a ritualistic lens; google “Beckett+ritual” and you get plenty. Theater theorists and scholars in the 20th century mined this vein well, and made strong arguments. It’s worth looking into, but not my main point here, and I think most would stipulate that between the Greeks and Beckett are likely many thousands of examples of ritual in theater, ritual as theater, theater as ritual, and a mountain of evidence that theater and ritual remain bound together. For narrative artists working in front of an audience, there is plenty from ancient times through present day to tell us the value of ritual.
However, science is working to prove empirically the effect ritual has on us, and such proof may well expand our understanding of how best to employ it as narrative artists.
Over at BPS Research Digest (one of my favorite places to sift through advances in the behavioral sciences for how they might affect narrative science), Christian Jarrett sums up some interesting new research on how ritual gives one a sense of control over the future — or, more precisely, over things one cannot control, such as grief or loss. The ramifications are, again, pretty obvious for theater artists. It’s nice to see science proving something we practice in theater: catharsis. But can ritual prove a powerful tool for all kinds of narrative? My shoot-from-the-hip answer would be to point to Joseph Campbell’s Hero’s Journey and say: it’s hard to argue with Campbell’s extensive work showing that story itself is derived from ritual. From that point of view, it doesn’t really matter how a story is conveyed to you; it could be a live performance in the theater, or words on a page, or even a haiku story tweeted and viewed on your iPhone, and it would be a descendant of ritual as Campbell explains it. Again, Google will tell us there is plenty of contemporary theory about ritual in all types of storytelling — novels and prose and poetry from all traditions, East and West, ancient and present. But what I’m tracking most here in the science is the why and the how. I already know ritual is important, but scientific proofs may lead us to more concrete answers for why ritual matters to us and how best to use it as narrative artists.
At some point, if you are someone who has gone through a training program for creative writing, you have likely been introduced to some form of “automatic writing” exercise. When I was in training, I usually felt molested by in-class writing exercises that were about “freeing the muse” or “tapping into the visceral,” as opposed to concrete exercises that explicitly demonstrated the mechanics or tactics of narrative craft. Automatic writing exercises never worked for me. More than once I was made to sit and listen to Mozart and/or Van Morrison and/or Van Halen, or stare at a Picasso nude, and then told to write whatever came out of my pen without controlling it. And furthermore, I was told the resultant mishmash might have real 24-carat gold hidden in it… Yet nothing but pyrite or coprolites has ever appeared in my automatic writing, unfortunately. My own experience, however, is simply anecdotal — not a very good way to judge the possibilities or mechanisms in the writer’s brain that might be at work in automatic writing. I know some (reasonable) people who find automatic-writing-esque exercises (see this wiki, which gets the underlying theory mostly right) to be aces.
Don’t misunderstand: as a playwright, poet and fiction writer, I’m not without my own mysterious processes. I do have characters sometimes talk to me out of the blue, and I do sometimes sit down and let them talk on the page with no pre-ordained idea of plot or story. But this is not the same as automatic writing. At least, I don’t think it is, and it doesn’t match the definitions one can find for automatic writing.
So that’s my prejudice: I’m a skeptic when it comes to automatic writing, but willing to allow there might be something to it even if I cannot experience it myself. Then, a few months ago, I stumbled onto the video above (courtesy of NPR.org, 3 min, 5 sec) and asked myself: have I been missing something? If a man can lose his ability to read but still retain the ability to write — if the processes are that separate in the brain — perhaps there is something to it. It’s possible I’m conflating automatic writing with brain injury in an inappropriate way; tell me off in the comments if you think so. But when you watch the video, ask yourself this: might your hand (or, really, the part of the brain that moves it) know something you don’t? Yes, the example in the video is about a writer who lost his reading ability but still knew his plot; he’s not really doing a version of automatic writing as typically defined. But still: the hand knows… What else might the hand know? Can I be taught to talk to the hand?