Science Proves We Go In Circles

The video below runs 3 min, 34 sec.

Humans innately go in circles when the horizon — the simplest form of context — is removed from our toolbox of perceptions. Is the circle a default protection mechanism? An adaptive trait that, as cavemen, saved us from disappearing into the wilderness when we inevitably got lost? And what implications does this have for narrative artists: how can we take advantage of it?

It seems to me that storytellers — playwrights, novelists, epic poets, etc — have intuitively known about this tendency for millennia. It is literally a trope employed in many stories: the characters, lost in the dark, walk in circles (think Scooby Doo). Aside from literal characters lost in the landscape, circle plots aren’t rare; many plots circle back to where they began, or execute an upward spiral (a form of circle, natch) that returns to where we began, but with new context. The good ol’ “rule of three” could be seen as a series of loops wherein the story returns to an idea a couple of times, each time adding something new. No doubt there are dozens of ways we could apply this understanding as narrative engineers persuading audiences to navigate our stories.

Stupid Pet Tricks Can Be Smart

Scientists finally proved it: puppies and kids get more empathy than adults…! (Cue “Thus Spake Zarathustra” and let Strauss’ bombast wash over you.)

But stay with me here. An important implication for narrative craft hides inside this patently obvious “discovery.”

Some science seems like simple, common sense. It is so obvious that we don’t even think of it as science. For instance, when that sledgehammer slips from your hand it will fall toward the earth and smash your big toe so flat that it’ll be nigh impossible to pry off your flip-flop. That’s the science of gravity. Or, if you step on a rake just right, the handle will spring up and smack you in the face, blinding you momentarily so that you’re sure to step on all the other rakes inexplicably strewn across the yard. That’s the science of leverage (plus the science of cartoons)… My point is this: obvious science is not bad science just because it is obvious. In many cases it may be the most important science you can use precisely because it is always there, and therefore in your power to capitalize on it.

So, how does the science of puppies and kids being cute help you as a narrative artist? Let’s start with what the research actually tells us:

“The fact that adult human crime victims receive less empathy than do child, puppy, and full grown dog victims suggests that adult dogs are regarded as dependent and vulnerable not unlike their younger canine counterparts and kids.” via

If that sentence is a bit convoluted for you (as it is for me), you can read the short article at the link above, or take my word for it: what the study reveals is that, in the abstract, we have more empathy for children and dogs suffering abuse than we do for adult human victims of abuse. The operative notion here is that our culture views children and dogs as generally “vulnerable and dependent.” Another way of saying the same thing: children and dogs do not have the informed agency to make good decisions concerning their safety, and therefore, in this view, can never deserve abuse. Adults, the study suggests, are held to a higher standard; adults are more likely to be held accountable for the decisions they’ve made — even when we have no idea what those decisions were — that led to their being abused or beaten.

Setting aside the implications of “blame the victim” psychology for another day, for narrative artists the most basic ramification of this research is pretty straightforward: if you want your audience’s empathy and attention, place a child or dog at risk. Do that, and you’ll get people to empathize with little effort. You can see this practice in action during every hour of primetime television on every night of the week. Police procedurals tend to specialize in this technique, and it is little different when Simon Cowell dresses down a naive high school folk singer.

However, we need not use this information so blatantly in order to capitalize on the underlying theory at work here; we need not grab the low hanging fruit and constantly spotlight crimes against the most vulnerable among us. Assuming you don’t want to write about dogs and children in distress, we can instead focus on the underlying implication of the science and translate one level up from the literal vulnerability exemplified by dogs and kids. In the research quoted above, scientists assert that in our cultural narrative — what we generally agree to be the truths of our society — adult humans are perceived as less vulnerable and therefore automatically garner less empathy. But this is true only in the abstract, when a scientist asks you — apropos of nothing, and with little context — if you’d feel worse for a dog that takes a beating or a fully grown adult who’s being beaten. Most people feel for the dog over the human. Yet, with sufficient context the reverse can be true: an adult can be found to be extremely vulnerable if his agency is abridged, and an abstract dog can morph into a specific monster seemingly teeming with agency. (For evidence supporting this argument, and for fun, see the Cujo trailer here.) Perceived vulnerability through loss or modification of agency is the pivotal tool at issue, more so than simply putting cuteness in the crosshairs and relying on the cultural narrative to earn your audience’s eyes. In many story situations, it is the duty of the plot to exemplify and specify the vulnerability of a character so that an audience begins to empathize. And a fundamental way to dramatize vulnerability in an adult character is to modify agency.

For an example of loss of agency as an effective tool to earn empathy and attention, look at Lear. King Lear does it to himself: he makes the decision to give away his agency — his power — over his kingdom. The Lear we see in the first part of the play is not a guy we can feel very much empathy toward. His pride is ugly to behold. However, once his agency is surrendered and he begins to realize that he has no power left to affect the outcome of the kingdom, or even his own fate, we start to empathize with him. A king like Lear is nothing like a puppy or a child. At rise he is a fully capable adult inhabiting one of the most powerful societal and political roles one can imagine. And still we empathize with him as the story unfolds because he is increasingly at the mercy of the world around him, unable to protect himself, and unable to make decisions that will extricate him from the hole he’s intentionally stepped into. (Yes, Shakespeare’s King Lear is more nuanced than the way I’ve just described it, and I am intentionally glossing it to make a point.)

Lear is an extreme example of loss of agency. A narrative need not focus on a single, giant reduction of agency such as Lear’s in order to earn an audience’s empathy. Ken Kesey’s novel Sometimes a Great Notion chronicles the story of Hank Stamper, the leader of a family logging clan in Oregon. Throughout that narrative we see Hank lose agency one little piece at a time because of his beliefs; he loses a friend here, and a contract there; his brother turns against him; the town turns against him; the union turns against him; his wife is alienated from him; his father is literally torn in half, and his best friend dies trying to help him as Hank continues to fight for the agency to do what he thinks is best. Each loss, by itself, removes a small piece of agency — removing possible escape routes from a dire fate — until Hank is alone with a final monumental task that threatens his life. And yet Hank takes on the final task anyway, unwilling to yield his agency, even when it comes down to a final decision between literally dying while trying, or living under the yoke of another’s decision making.

Aside from those two striking examples, often agency is not the central idea of the story, and possibly not even key to the story. But as a plot tool, modifying a character’s agency by narrowing the field of his or her choices can earn the audience’s empathy at any given point in a narrative. Reducing a character’s agency can even make us empathize when a character is not otherwise very appealing, as more than one recent cable TV series has shown. Your story doesn’t have to focus on a child darting out in front of a speeding truck in order to make the science of cute dogs and kids work for you. Instead, simply put the focus on the truck driver and take away her agency to do anything other than hit the kid or hurtle off a cliff to her death. Now you have an adult character we can empathize with no matter which dark choice she makes.

Everybody’s Favorite Topic: Themselves

How much do your characters talk about themselves?

Recent scientific research suggests that in everyday life we talk about ourselves a lot:

“If you’re like most people, your own thoughts and experiences may be your favorite topic of conversation. On average, people spend 60 percent of conversations talking about themselves—and this figure jumps to 80 percent when communicating via social media platforms such as Twitter or Facebook.” Read the full text at: The Neuroscience of Everybody’s Favorite Topic: Scientific American.

Just because current science says we talk about ourselves around sixty percent of the time doesn’t mean we absolutely need to check our manuscripts to see that our characters match that standard. It might help the storytelling be more “realistic” if we do so, but “realistic” is not really an aesthetic standard so much as a concept to be balanced among a large number of factors in a given narrative. Science-says need not be Simon-says.

However, tracking self-referential dialogue can be an interesting notion to play with, especially in a story that is dialogue-driven. It is potentially useful to go through your play script, for example, and label each line of dialogue as 1) self-referential, 2) neutral or n/a, or 3) reflective of external concerns or others. From this data, one could build a simple chart and determine approximately how much of a character’s expression is about himself and how much is referring to people/things/ideas outside himself. Using 60 percent as a standard baseline, we have a benchmark to see if characters are behaving naturally, at least according to the study cited above. Finding that one of your characters talks about himself far above or below the average might reveal something you hadn’t consciously crafted into the character. Either an overabundance or a paucity of self-referential dialogue might be reinforcing the actions a character is taking, or might be undermining them, both of which can be interesting choices, but generate divergent audience responses.
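
As a minimal sketch of the bookkeeping, the tally might look like the following Python, assuming you have already hand-labeled each line of dialogue yourself (the character names, label scheme, and sample data here are illustrative, not prescriptive):

```python
from collections import Counter

# Hypothetical hand-labeled data: (character, label) pairs, where each
# label follows the scheme above: "self" (self-referential),
# "neutral" (neutral or n/a), or "external" (about people/things/ideas
# outside the character).
labeled_lines = [
    ("LEAR", "self"), ("LEAR", "self"), ("LEAR", "external"),
    ("FOOL", "external"), ("FOOL", "neutral"), ("FOOL", "self"),
]

BASELINE = 0.60  # the conversational average reported in the study

def self_reference_report(lines):
    """Return each character's share of self-referential dialogue
    and its deviation from the 60 percent baseline."""
    totals, selfs = Counter(), Counter()
    for character, label in lines:
        totals[character] += 1
        if label == "self":
            selfs[character] += 1
    return {
        character: {
            "self_share": selfs[character] / totals[character],
            "vs_baseline": selfs[character] / totals[character] - BASELINE,
        }
        for character in totals
    }

report = self_reference_report(labeled_lines)
for character, stats in report.items():
    print(f"{character}: {stats['self_share']:.0%} self-referential "
          f"({stats['vs_baseline']:+.0%} vs. baseline)")
```

Nothing here is authoritative: the labels remain a judgment call line by line, and the baseline is only as meaningful as the study it comes from.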

This kind of charting is, to some degree, subjective: it depends on the author’s intent behind the individual lines. And it is tempered by an author’s control in generating subtext that runs counter to the surface expression of the lines; any given snippet of dialogue may seem outwardly directed but actually be intended to carry a subtext that is self-referencing, or conversely, seem selfish but actually be meant as selfless. To be as accurate as possible, a chart of a character’s self-referential dialogue should track what a character means — what an author intends, and the meanings she is carefully placing underneath the character’s words — more so than the literal surface of what a character says. As a further caveat, even if a character talks about himself only 15% of the time, that is no ironclad guarantee he is less self-centered; actions not reflected anywhere in a character’s dialogue can outweigh everything that character says, of course. Even so, while there is clearly a good deal of latitude for interpretation in tracking how self-centered any character’s dialogue is, charting how often a character talks about himself is potentially a valuable tool.

Methodical or Muse?

Brain Pickings, a wonderful blog written by Maria Popova, is often a valuable resource for writers studying the craft. A recent Brain Pickings post, Malcolm Cowley on the Four Stages of Writing: Lessons from the First Five Years of The Paris Review, is a great re-introduction to an influential book that’s been around for a long time. Popova deftly and succinctly lays out the four stages of writing as defined by Cowley. And, just as I was 20 years ago while studying for my MFA, I’m ambivalent about the premise that there are four stages of writing: from ideation (the germ of a story), to incubation (a process of meditation), to first draft (get black on white), and ending with revision-revision-revision. It seemed (and seems) so… Sterile. But Cowley’s rubrics make perfect sense and, at the end of the day, they’re broad enough to allow for my own process(es) to fit within them, so why bother arguing?

Teaching in an MFA program over the last nine years, I find graduate creative writers want to push back against Cowley’s four stages as much as I did. They enjoy “interrogating the notions.” And in doing so, they better articulate what their own processes look like, which is valuable. Yet, besides this pushing-back against Cowley for the purposes of education, why argue with what he culled from a careful analysis of some of the best writers of his age, many of whom rank among the best of the 20th century? Cowley was methodical in his approach; that’s science, or the core of it anyway. Cowley sifted through what the top performers in the Paris Review said about their processes and arrived at these four stages. Other than as a rhetorical exercise, how can trying to refute Cowley (or at least prove significant outliers exist) help our understanding of narrative?

Well, Cowley’s four-stage breakdown from the 1950s is reminiscent of another, only slightly younger (and far more famous) stage-identification system: the “five stages of grief” that Elisabeth Kübler-Ross identified in the 1960s. For more than a generation our culture didn’t seem terribly interested in arguing with those stages, either. However, all sorts of formerly foundational ideas in the arts and sciences are under assault through more recent behavioral and cognitive research. For around forty years, Kübler-Ross’s stages of grief remained some of the most stalwart, oft-quoted (and often misconstrued) research-turned-conventional-wisdom offered to those suffering personal tragedy. That is, until 2004, when Professor George Bonanno began to publish results which showed grief to be somewhat simpler and more monochromatic than the five-stage approach suggests. Bonanno’s work, now commonly accepted as accurate and well-documented, shows that most widows and widowers are resilient and bounce back from grief faster (and with fewer distinct “stages”) than our cultural narrative is willing to admit. His clinical studies are strongly supported science that partially contradicts the narrative that Kübler-Ross presented, and further erodes any remaining residue of old-world practices such as wearing black for a year in honor of the dead, or staying single for a “respectable” amount of time after a spouse’s passing. Of course it should be noted that Kübler-Ross herself never represented the five stages of grief as all-encompassing. She did not insist that everyone experiences all five stages, or that anyone experiences them in a specific order; that was pop culture oversimplifying her work.
And personally, I still find the five stages of grief to be useful as a narrative tool for understanding possible character responses to plot machinations when I am writing a story, even if more precise studies of human behavior now say the five stages are likely less common than a simpler, relatively quick resilience in the face of great loss… But there it is: recent science says our cultural narrative about grief, which for some time has largely been based on distortions of Kübler-Ross’s work and the faded ghosts of old-world traditions, is not accurate.

So. Back to Cowley and the four stages of writing: How would Cowley’s four stages hold up if we applied rigorous science in the same manner Bonanno applied it to grief?

Short answer: I don’t know. No one has applied the science to prove Cowley’s findings to be inaccurate.* But there is at least one very compelling retort to Cowley that anyone can witness in Chicago and New York with regular frequency: TJ and Dave. They are — for want of a better description — improv artists. However, they don’t perform short, comedic sketches. Instead, TJ and Dave seem to defy the four stages by pulling mature, sophisticated stories from thin air. They create hour-long, compelling narratives that look very much like standard stage plays. They generate a story with a complete beginning, middle and end. They do it live, in front of an audience, and do far more than simply make an audience laugh. Each beat/movement pays off and applies to the whole as if the story were crafted in advance by a seasoned, professional playwright. In fact, people have often accused them of writing one-hour plays beforehand, memorizing them, and then performing them as if they were improvised. This is simply not the case. These two narrative artists are verifiably capable of inventing deeply affecting, long-form stories in the same moment they are performed. They somehow write well-crafted drama before our very eyes, and seemingly do so in one step rather than Cowley’s four. But don’t take my word for it. Radiolab has done a recent short episode on them. From Radiolab Presents: TJ and Dave (15 mins):

Krulwich: The way they deal with it, they tell themselves this story: this thing that they’re creating? They don’t actually create it. They don’t make it happen.
TJ/Dave: It’s already happened. It’s all already going on. It’s not our job to make it.

TJ and Dave believe that the stage is already swirling with “billions of stories, and the moment the lights go up, one of those stories gets frozen in place.”

Is that a scientific observation? Not even close. At best, it is anecdotal and based on what the performer/storyteller feels rather than what can be proved. Yet, as we require of laboratory experimentation, it is replicable (well, replicable by the original researchers/artists themselves) — TJ and Dave tell new stories several times a month using their method, even if the process is still mostly conceptual and emotional rather than scientifically codifiable.

Since I’m not likely to credit a supernatural explanation, TJ and Dave lead me to ask: what if there are as yet unidentified antennae in our brains that literally (not figuratively) pull stories out of the ether? Does that sound so nuts? A little nuts, probably, though it isn’t any crazier than saying there’s some wraith dressed in a gauzy gown called The Muse who gives us our stories capriciously, and entirely beyond our conscious control.

Ultimately, TJ and Dave’s work may not totally invalidate any of the four stages Cowley suggests. Their work might simply give credit for the first one or two of Cowley’s stages (“germ” and possibly “incubation”) to an entity/process that is external to the ego — a part of consciousness not yet codified by psychologists much further than calling it “sub-conscious,” and as of now, no more than a strange orange smudge on an fMRI.

And yeah, maybe I’m being a little too facile. Maybe all four stages are present in the process for TJ and Dave, but they’re so polished and condensed through refinement of technique and repetition that it only seems to the naked eye to be instantaneous, spontaneous story-eruption. For improv artists such as TJ and Dave there is a rigorous training process, and therefore there’s likely a specifically trained brain process that allows TJ and Dave to do what they do. It looks like magic but really it’s the result of years of hard work and discipline. And yet… And yet: they say (and our eyes agree) that their process seems to have one step instead of four: come together on stage and pull a story from the air. Ideation, incubation, first draft and revision all in the same moment.

– – –

* Please email me through the about page if you know of such findings, but I can’t find them. Previously, I would’ve said “leave a comment in the comments section” instead of asking you to email me, but all comments had to be turned off — too much spam and no time to fight it.

Scientific Proof of the Importance of Ritual?

“It seems there’s something about the process of going through a multi-stepped procedure that provokes in people feelings of control, above and beyond the role played by any associated religious or mystical beliefs.”

See the short review of recent scientific research on the power of ritual quoted above at: BPS Research Digest: Rituals bring comfort even for non-believers.

I won’t argue here how theater likely arose from ritual; google “theater+ritual” and you’ll see the arguments for the emergence of theater from the rituals of the ancient Greeks, etc. Nor is there any need to belabor how Beckett and many other modern theater artists can be viewed through a ritualistic lens; google “Beckett+ritual” and you get plenty. Theater theorists and scholars in the 20th century mined this vein well, and made strong arguments. It’s worth looking into, but not my main point here, and I think most would stipulate that between the Greeks and Beckett are likely many thousands of examples of ritual in theater, ritual as theater, theater as ritual, and a mountain of evidence that theater and ritual remain bound together. For narrative artists working in front of an audience, there is plenty from ancient times through present day to tell us the value of ritual.

However, science is working to prove empirically the effect ritual has on us, and such proof may well expand our understanding of how best to employ it as narrative artists.

Over at BPS Research Digest (one of my favorite places to sift through advances in the behavioral sciences for how they might affect narrative science), Christian Jarrett sums up some interesting new research on how ritual gives one a sense of control over the future — or more precisely, over things one cannot control, such as grief or loss. The ramifications are, again, pretty obvious for theater artists. It’s nice to see science proving something we practice in theater: catharsis. But can ritual prove to be a powerful tool for all kinds of narrative? My shoot-from-the-hip answer would be to point to Joseph Campbell’s Hero’s Journey argument and say: it’s hard to argue with Campbell’s extensive work showing that story itself is derived from ritual. From that point of view, it really doesn’t matter how story is conveyed to you; it could be a live performance in the theater, or words on a page, or even a haiku story that is tweeted and viewed on your iPhone, and it would still be a descendant of ritual as Campbell explains it. Again, Google will tell us there is plenty of contemporary theory about ritual in all types of storytelling — novels and prose and poetry from all traditions, East and West, ancient and present. But what I’m tracking most here in the science is the why and the how. I already know ritual is important, but scientific proofs may lead us to more concrete answers for why ritual is important to us and how best to use it as narrative artists.

In the Brain: Reading Is Separate from Writing

At some point, if you are someone who has gone through a training program for creative writing, you are likely to have been introduced to some form of “automatic writing” exercise. When I was in training, I usually felt molested by in-class writing exercises that were about “freeing the muse” or “tapping into the visceral” as opposed to concrete exercises that explicitly demonstrated the mechanics or tactics of narrative craft. Automatic writing exercises never worked for me. More than once I was forced to sit and listen to Mozart and/or Van Morrison and/or Van Halen, or stare at a Picasso nude, and then told to write whatever comes out of my pen without controlling it. And furthermore, I was told the resultant mishmash might have real 24 carat gold hidden there… Yet nothing but pyrite or coprolites has ever appeared in my automatic writing, unfortunately. However, my own experience is simply anecdotal — not a very good way to judge the possibilities or mechanisms in the writer’s brain that might be at work in automatic writing. I know some (reasonable) people who find automatic writing-esque exercises (see this wiki, which gets the underlying theory mostly right) to be aces.

Don’t misunderstand: as a playwright, poet and fiction writer, I’m not without my own mysterious processes. I do have characters sometimes talk to me out of the blue, and I do sometimes sit down and let them talk on the page with no pre-ordained idea of plot or story. But this is not the same as automatic writing. At least, I don’t think it is, and it doesn’t match the definitions one can find for automatic writing.

So that’s my prejudice: I’m a skeptic when it comes to automatic writing, but willing to allow there might be something to it even if I cannot experience it myself. Then, a few months ago, I stumbled onto the video below (3 min, 5 sec) and asked myself: have I been missing something? If a man can lose his ability to read, but still have the ability to write — if the processes are that separate in the brain — perhaps there is something to it? It’s possible I’m conflating automatic writing with brain injury in an inappropriate way: email me and tell me off, if you think so. But when you watch the video, ask yourself this: might your hand (or the part of the brain that moves it, really) know something you don’t? Yes, the example in the video is about a writer who lost his reading ability, but he still knew his plot; he’s not really doing a version of automatic writing as typically defined. But still: the hand knows… What else might the hand know? Can I be taught to talk to the hand?

The Writer Who Couldn’t Read from NPR on Vimeo.

A Spoonful of Humor Makes the Puzzle Brain Turn On

Scientists have discovered that we humans are much better at solving puzzles if we are entertained or in a good mood:

In a just completed study, researchers at Northwestern University found that people were more likely to solve word puzzles with sudden insight when they were amused, having just seen a short comedy routine.

“What we think is happening,” said Mark Beeman, a neuroscientist who conducted the study with Karuna Subramaniam, a graduate student, “is that the humor, this positive mood, is lowering the brain’s threshold for detecting weaker or more remote connections” to solve puzzles.

(Full text at the source article.)

So: what does this mean for narrative artists? Maybe it means we should leaven our leaden loaves of dirty laundry with a commedia routine.

My work with contemporary narrative artists in all fields leads me to conclude that we are commonly eschewing the most traditional plot: time and space moving forward as they do in real life toward a climactic moment. Today’s storytellers are often creating story arcs that ask an audience to put together the pieces of a fragmented or shattered narrative — something like a puzzle. And if the science is correct, we may have better luck persuading our audiences to track the puzzle — to integrate the pieces or shards into a whole that is greater than the sum of its parts — if they are able to laugh while they do it, or at least hold a positive mood for a moment before plunging back in.