HULK Smash Digital Analysis


Film Crit Hulk slams digital modeling of narrative, and with good reason:

SO THE NEW YORK TIMES WROTE A PIECE ABOUT A FORMER STATISTICS PROFESSOR, MR. BRUZZESE, WHO GIVES HIGH-PRICED SCRIPT ANALYSIS TO STUDIOS (AND ANYONE WHO WILL HIRE HIM) BASED ON WELL-MINED DATA WHICH HE USES TO PROGNOSTICATE THE ECONOMIC SOUNDNESS OF YOUR  STORYTELLING.

SHORT VERSION: THIS IS COMPLETE HORSEPOOP.

SLIGHTLY LONGER VERSION: THEY ARE ADVERTISING YOU A DIAGNOSTIC TOOL, BUT IT’S REALLY A POORLY-AIMED, POORLY-CONCEIVED DIAGNOSTIC TOOL THAT IS AN AFFRONT TO DIAGNOSTICS. SO LET’S TALK ABOUT WHY.

via Film Crit Hulk Smash: HULK VS. STATISTICAL SCRIPT ANALYSIS | Badass Digest (Note: Hulk always write in all-caps. Because Hulk angry, I think. But this Hulk very smart.)

Hulk is right — the kind of analysis described in the Times article is “horsepoop.” (Seriously. Give both the Times article and Hulk’s reply a read.) But data-driven analysis is only as good as the intent and methodology of the analyzer, and only as deep as the data collected. Analyzing stories with computers is a tool of the future, one we will likely come to rely on someday, but it isn’t very effective just yet. (Disclaimer: I am currently developing such a tool, though not for predicting box-office returns. My tool is intended simply to identify patterns in narrative so that the writer can see for herself things she may not consciously know she is doing.)

We don’t yet have tools to prove beyond the shadow of a doubt where the best psychological placement of a climax should occur in a story, let alone enough data to say that including bowling scenes means your story will be a dud, as the gentleman profiled in the Times suggests. For Hollywood, where formula reigns, correlation seems good enough: if most movies with bowling scenes turn out to be failures at the box office, then studios will avoid bowling scenes. That practice may even serve their goal of a bigger box office, though by accident rather than design. The problem is not the data analysis that finds such correlations; the problem is the failure to practice good science: correlation is not necessarily causation. The fact that most movies with bowling scenes correlate with reduced financial viability does not mean that bowling scenes cause reduced financial viability.
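To make the correlation-versus-causation point concrete, here is a minimal sketch in Python using entirely synthetic, made-up numbers (it has nothing to do with Mr. Bruzzese’s actual data or methods). In this toy model, bowling scenes have zero effect on earnings; low-budget films are simply more likely to contain them, so a naive comparison still shows a “bowling penalty” that disappears once you compare films of similar budgets.

```python
# A minimal sketch with synthetic, made-up numbers: a naive correlation
# between "has a bowling scene" and box office appears even though
# bowling scenes have zero causal effect. The confounder is budget.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Budget in millions; log-normal so most films are modest, a few are huge.
budget = rng.lognormal(mean=3.0, sigma=1.0, size=n)

# Bowling scenes are (arbitrarily) more common in cheaper films.
p_bowling = np.clip(0.6 - 0.01 * budget, 0.05, 0.6)
has_bowling = rng.random(n) < p_bowling

# Box office depends on budget and noise -- and NOT on bowling scenes.
box_office = 2.5 * budget * rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Naive comparison: films with bowling scenes earn less on average...
print("mean gross, bowling:   ", box_office[has_bowling].mean())
print("mean gross, no bowling:", box_office[~has_bowling].mean())

# ...but within a single budget band, the "bowling penalty" vanishes.
band = (budget > 15) & (budget < 25)
print("mid-budget, bowling:   ", box_office[band & has_bowling].mean())
print("mid-budget, no bowling:", box_office[band & ~has_bowling].mean())
```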

It is rarely wise to analyze subject matter — via computer or otherwise — except insofar as that subject matter organically reflects or affects narrative structure. I typically find analysis of subject matter to be of little use because audiences routinely think they don’t want to experience a movie/play/novel about X, yet if the story of X is devised in a compelling way, they will flock to it. For instance, on the surface, where subject-matter analysis resides, I can say that the last thing I care to see is a movie where toys come to life; yet the Toy Story franchise has consistently surprised and delighted me with its narrative finesse. Analysis of subject matter cannot be as important as analysis of narrative structure; it is the structural mechanisms of the storytelling that engage and retain an audience, not the subject matter itself. I won’t deny that people are often attracted to a story because of its subject matter, but it is the prowess of the storytelling technique that keeps them in their seats.

Methodical or Muse?

Brain Pickings, a wonderful blog written by Maria Popova, is often a valuable resource for writers studying the craft. A recent Brain Pickings post, Malcolm Cowley on the Four Stages of Writing: Lessons from the First Five Years of The Paris Review, is a great re-introduction to an influential book that’s been around for a long time. Popova deftly and succinctly lays out the four stages of writing as defined by Cowley. And, just as I was 20 years ago while studying for my MFA, I’m ambivalent about the premise that there are four stages of writing: from ideation (the germ of a story), to incubation (a process of meditation), to first draft (get black on white), and ending with revision-revision-revision. It seemed (and seems) so… sterile. But Cowley’s rubrics make perfect sense and, at the end of the day, they’re broad enough to allow my own process(es) to fit within them, so why bother arguing?

Teaching in an MFA program over the last nine years, I find that graduate creative writers want to push back against Cowley’s four stages as much as I did. They enjoy “interrogating the notions.” And in doing so, they better articulate what their own processes look like, which is valuable. Yet, beyond this pushing back against Cowley for the purposes of education, why argue with what he culled from a careful analysis of some of the best writers of his age, many of whom rank among the best of the 20th century? Cowley was methodical in his approach; that’s science, or the core of it anyway. He sifted through what the top performers in The Paris Review said about their processes and arrived at these four stages. Other than as a rhetorical exercise, how can trying to refute Cowley (or at least prove significant outliers exist) help our understanding of narrative?

Well, Cowley’s four-stage breakdown from the 1950s is reminiscent of another, only slightly younger (and far more famous) stage-identification system: the “five stages of grief” that Elisabeth Kübler-Ross identified in the 1960s. For more than a generation our culture didn’t seem terribly interested in arguing with those stages, either. However, all sorts of formerly foundational ideas in the arts and sciences are under assault from more recent behavioral and cognitive research. For around forty years, Kübler-Ross’s stages of grief remained some of the most stalwart, oft-quoted (and often misconstrued) research-turned-conventional-wisdom offered to those suffering personal tragedy. That is, until 2004, when Professor George Bonanno began to publish results showing grief to be somewhat simpler and more monochromatic than the five-stages approach suggests. Bonanno’s work, now commonly accepted as accurate and well documented, shows that most widows and widowers are resilient and bounce back from grief faster (and with fewer distinct “stages”) than our cultural narrative is willing to admit. His clinical studies are strongly supported science that partially contradicts the narrative Kübler-Ross presented, and they further erode any remaining residue of old-world practices such as wearing black for a year in honor of the dead, or staying single for a “respectable” amount of time after a spouse’s passing.

Of course, it should be noted that Kübler-Ross herself never represented the five stages of grief as all-encompassing. She did not insist that everyone experiences all five stages, or that anyone experiences them in a specific order; that was pop culture over-simplifying her work. And personally, I still find the five stages of grief useful as a narrative tool for understanding possible character responses to plot machinations when I am writing a story, even if more precise studies of human behavior now say the five stages are likely less common than a simpler, relatively quick resilience in the face of great loss… But there it is: recent science says our cultural narrative about grief, which for some time has largely been based on distortions of Kübler-Ross’s work and the faded ghosts of old-world traditions, is not accurate.

So. Back to Cowley and the four stages of writing: How would Cowley’s four stages hold up if we applied rigorous science in the same manner Bonanno applied it to grief?

Short answer: I don’t know. No one has applied the science to prove Cowley’s findings inaccurate.* But there is at least one very compelling retort to Cowley that anyone can witness in Chicago and New York with regular frequency: TJ and Dave. They are — for want of a better description — improv artists. However, they don’t perform short, comedic sketches. Instead, TJ and Dave seem to defy the four stages by pulling mature, sophisticated stories from thin air. They create hour-long, compelling narratives that look very much like standard stage plays: stories with a complete beginning, middle, and end. They do it live, in front of an audience, and they do far more than simply make an audience laugh. Each beat/movement pays off and applies to the whole as if the story were crafted in advance by a seasoned, professional playwright. In fact, people have often accused them of writing one-hour plays beforehand, memorizing them, and then performing them as if they were improvised. This is simply not the case. These two narrative artists are verifiably capable of inventing deeply affecting, long-form stories in the same moment they perform them. They somehow write well-crafted drama before our very eyes, and seemingly do so in one step rather than Cowley’s four. But don’t take my word for it. Radiolab recently did a short episode on them. From Radiolab Presents: TJ and Dave (15 mins):

Krulwich: The way they deal with it, they tell themselves this story: this thing that they’re creating? They don’t actually create it. They don’t make it happen.
TJ/Dave: It’s already happened. It’s all already going on. It’s not our job to make it.

TJ and Dave believe that the stage is already swirling with “billions of stories, and the moment the lights go up, one of those stories gets frozen in place.”

Is that a scientific observation? Not even close. At best it is anecdotal, based on what the performer/storyteller feels rather than on what can be proved. Yet, as we require of laboratory experimentation, it is replicable (well, replicable by the original researchers/artists themselves) — TJ and Dave tell new stories several times a month using their method, even if the process remains mostly conceptual and emotional rather than scientifically codifiable.

Since I’m not likely to credit a supernatural explanation, TJ and Dave lead me to ask: what if there are as yet unidentified antennae in our brains that literally (not figuratively) pull stories out of the ether? Does that sound so nuts? A little nuts, probably, though it isn’t any crazier than saying there’s some wraith dressed in a gauzy gown called The Muse who gives us our stories capriciously, and entirely beyond our conscious control.

Ultimately, TJ and Dave’s work may not totally invalidate any of the four stages Cowley suggests. Their work might simply give credit for the first one or two of Cowley’s stages (“germ” and possibly “incubation”) to an entity/process that is external to the ego — a part of consciousness not yet codified by psychologists much further than calling it “sub-conscious,” and as of now, no more than a strange orange smudge on an fMRI.

And yeah, maybe I’m being a little too facile. Maybe all four stages are present in the process for TJ and Dave, but they’re so polished and condensed through refinement of technique and repetition that it only seems to the naked eye to be instantaneous, spontaneous story-eruption. Improv artists such as TJ and Dave undergo rigorous training, so there is likely a specifically trained brain process that allows them to do what they do. It looks like magic, but really it’s the result of years of hard work and discipline. And yet… And yet: they say (and our eyes agree) that their process seems to have one step instead of four: come together on stage and pull a story from the air. Ideation, incubation, first draft, and revision all in the same moment.

- – -

* Please email me through the about page if you know of such findings; I can’t find them. Previously, I would’ve said “leave a comment in the comments section” instead of asking you to email me, but all comments had to be turned off — too much spam and no time to fight it.

The Neuroscience of Fiction

Here’s another interesting compilation of findings about what neuroscientists are seeing in the brain when we experience story. It’s worth the read, especially for how meta-analysis of fMRI studies has been confirming that our neurons fire in parallel ways: when chewing a raisin, or just reading about munching raisins, the same areas of the brain light up. And, as a nice, lemony glaze on this carrot cake of info, the author even goes so far as to remind us that we’re attracted to this neurological research because it confirms a bias we already have:

These findings will affirm the experience of readers who have felt illuminated and instructed by a novel, who have found themselves comparing a plucky young woman to Elizabeth Bennet or a tiresome pedant to Edward Casaubon. Reading great literature, it has long been averred, enlarges and improves us as human beings. Brain science shows this claim is truer than we imagined. The Neuroscience of Your Brain on Fiction – NYTimes.com

Tennessee Williams & cake; the 20th anniversary of “The Glass Menagerie”

There’s nothing wrong with confirming a bias like this, right? I mean, lemon frosting is just the thing: a tang to cut the sweet, yet decadent enough to make me feel swell… Still, I guess there is one small question: assuming the scientists are getting the science right — not an entirely safe assumption, actually, since correlation is not always causation — isn’t this just telling us that story is important to humans? Don’t I already know this? Doesn’t everyone already know that story is important?

Well, some folks might argue, but not many. I can’t find anyone to argue about whether story is important; everyone seems to agree it is, many scientists now included. I can find people to argue about the neuroscience, but not the bias itself. Who is out there saying story or metaphor aren’t important to us as a species?

So, for the science of narrative, what does neurology really have to tell us? For our tribe (authors), crowing about research such as this might seem like an exercise in self-congratulation, assuming the research proves true. Sometimes, frankly, that’s what is going on. Sometimes we just want to be told we’re doing something valuable for the species. But that’s not always the case. In fact, from the same article I quote above comes the following little revelation that might be of tremendous, rubber-meets-the-road value to story engineers everywhere:

…a team of researchers from Emory University reported in Brain & Language that when subjects in their laboratory read a metaphor involving texture, the sensory cortex, responsible for perceiving texture through touch, became active. Metaphors like “The singer had a velvet voice” and “He had leathery hands” roused the sensory cortex, while phrases matched for meaning, like “The singer had a pleasing voice” and “He had strong hands,” did not. The Neuroscience of Your Brain on Fiction – NYTimes.com

Does that get your attention? (Or should I say: does this velvet news prickle your leathery nape the way it does mine?) Again, assuming the science is correct, we narrative artists activate the brain more acutely when we use language that evokes texture or touch. It’s true that creative writing teachers have been saying things like this for a century or six, but now scientists are kinda sorta suggesting it too! Perhaps there’s still room to argue about whether activating these regions with sensation-centric language might be counterproductive to conveying certain ideas in certain contexts. And again, the proof is neither complete nor very deep.

Still, bias confirmed? Yep. Is it likely a good idea to employ sensory-conscious language in every story you tell, perhaps even in a blog post like this, where the only literal, physical reality is the slick glide of my thumb on a sleek glass touchpad, and the clack of square, black keys? You bet your lemon frosting it is… Now, where’s the neuroscience on the effect of mixing metaphors? That would be extremely valuable.

Power of Metaphor Is Being Mapped by Neurology

The power of metaphor is being mapped by brain scientists…?

I first encountered neuroscientist Robert Sapolsky on Radiolab, a podcast I follow closely to pick up tidbits that apply to my field of practice. Upon learning about him, I went back and looked up anything I could find that he’d written. (Well, anything he’d written for laymen, that is, since much of his academic writing is likely beyond me.) Below is something Sapolsky wrote for civilians, and it applies directly to the science underpinning narrative devices such as metaphor. The whole article is a great read and worth the time for anyone interested in the physiological reasons why metaphor is so powerful.

Snip:

Consider an animal (including a human) that has started eating some rotten, fetid, disgusting food. As a result, neurons in an area of the brain called the insula will activate. Gustatory disgust. Smell the same awful food, and the insula activates as well. Think about what might count as a disgusting food (say, taking a bite out of a struggling cockroach). Same thing.

Now read in the newspaper about a saintly old widow who had her home foreclosed by a sleazy mortgage company, her medical insurance canceled on flimsy grounds, and got a lousy, exploitative offer at the pawn shop where she tried to hock her kidney dialysis machine. You sit there thinking, those bastards, those people are scum, they’re worse than maggots, they make me want to puke … and your insula activates. Think about something shameful and rotten that you once did … same thing. Not only does the insula “do” sensory disgust; it does moral disgust as well. Because the two are so viscerally similar. When we evolved the capacity to be disgusted by moral failures, we didn’t evolve a new brain region to handle it. Instead, the insula expanded its portfolio. [Bold emphasis is mine.]

Sapolsky’s entire essay can be found at This Is Your Brain on Metaphors – NYTimes.com.

Note: Although there is growing dissent and push-back in the scientific community concerning how the media has misrepresented recent advances in neurology, blowing them out of proportion or taking them out of context, Sapolsky is one guy I tend to trust in this regard. Sapolsky himself is making arguments based on well-regarded evidence from his own findings and from the research of those he trusts. Information coming directly from him is not quite the same as some layman — me, or some journalist, or the evening news — making leaps or taking license that the research itself doesn’t support.

Scientific Proof of the Importance of Ritual?

It seems there’s something about the process of going through a multi-stepped procedure that provokes in people feelings of control, above and beyond the role played by any associated religious or mystical beliefs.

See the short review of recent scientific research on the power of ritual quoted above at: BPS Research Digest: Rituals bring comfort even for non-believers.

I won’t argue here about how theater likely arose from ritual; google “theater+ritual” and you’ll see the arguments for the emergence of theater from the rituals of the ancient Greeks, etc. Nor is there any need to belabor how Beckett and many other modern theater artists can be viewed through a ritualistic lens; google “Beckett+ritual” and you get plenty. Theater theorists and scholars in the 20th century mined this vein well and made strong arguments. It’s worth looking into, but it’s not my main point here, and I think most would stipulate that between the Greeks and Beckett lie many thousands of examples of ritual in theater, ritual as theater, theater as ritual, and a mountain of evidence that theater and ritual remain bound together. For narrative artists working in front of an audience, there is plenty, from ancient times through the present day, to tell us the value of ritual.

However, science is working to prove empirically the effect ritual has on us, and such proof may well expand our understanding of how best to employ it as narrative artists.

Over at the BPS Research Digest (one of my favorite places to sift through advances in the behavioral sciences for how they might affect narrative science), Christian Jarrett sums up some interesting new research on how ritual gives one a sense of control over the future — or, more precisely, over things one cannot control, such as grief or loss. The ramifications are, again, pretty obvious for theater artists. It’s nice to see science proving something we practice in theater: catharsis. But can ritual prove to be a powerful tool for all kinds of narrative? My shoot-from-the-hip answer would be to point to Joseph Campbell’s Hero’s Journey argument and say: it’s hard to argue with Campbell’s extensive work showing that story itself is derived from ritual. From that point of view, it doesn’t really matter how a story is conveyed to you; it could be a live performance in the theater, or words on a page, or even a haiku story tweeted and viewed on your iPhone, and it would still be a descendant of ritual as Campbell explains it. Again, google will tell us there is plenty of contemporary theory about ritual in all types of storytelling — novels and prose and poetry from all traditions, East and West, ancient and present. But what I’m tracking most here in the science is the why and the how. I already know ritual is important, but scientific proofs may lead us to more concrete answers for why ritual is important to us and how best to use it as narrative artists.

In the Brain: Reading Is Separate from Writing

At some point, if you have gone through a training program for creative writing, you are likely to have been introduced to some form of “automatic writing” exercise. When I was in training, I usually felt put upon by in-class writing exercises that were about “freeing the muse” or “tapping into the visceral,” as opposed to concrete exercises that explicitly demonstrated the mechanics or tactics of narrative craft. Automatic writing exercises never worked for me. More than once I was made to sit and listen to Mozart and/or Van Morrison and/or Van Halen, or stare at a Picasso nude, and then told to write whatever came out of my pen without controlling it. And furthermore, I was told the resultant mishmash might have real 24-carat gold hidden in it… Yet nothing but pyrite or coprolites has ever appeared in my automatic writing, unfortunately. However, my own experience is simply anecdotal — not a very good way to judge the possibilities or the mechanisms in the writer’s brain that might be at work in automatic writing. I know some (reasonable) people who find automatic-writing-esque exercises (see this wiki, which gets the underlying theory mostly right) to be aces.

Don’t misunderstand: as a playwright, poet and fiction writer, I’m not without my own mysterious processes. I do have characters sometimes talk to me out of the blue, and I do sometimes sit down and let them talk on the page with no pre-ordained idea of plot or story. But this is not the same as automatic writing. At least, I don’t think it is, and it doesn’t match the definitions one can find for automatic writing.

So that’s my prejudice: I’m a skeptic when it comes to automatic writing, but willing to allow there might be something to it even if I cannot experience it myself. Then, a few months ago, I stumbled onto the video below (courtesy of NPR.org, 3 min, 5 sec) and asked myself: have I been missing something? If a man can lose his ability to read but still retain the ability to write — if the processes are that separate in the brain — perhaps there is something to it? It’s possible I’m conflating automatic writing with brain injury in an inappropriate way; email me if you think so. But when you watch the video, ask yourself this: might your hand (or, really, the part of the brain that moves it) know something you don’t? Yes, the example in the video is about a writer who lost his reading ability but still knew his plot; he’s not really doing a version of automatic writing as typically defined. But still: the hand knows… What else might the hand know? Can I be taught to talk to the hand?

The Writer Who Couldn’t Read from NPR on Vimeo.

A Spoonful of Humor Makes the Puzzle Brain Turn On

Scientists have discovered that we humans are much better at solving puzzles if we are entertained or in a good mood:

In a just completed study, researchers at Northwestern University found that people were more likely to solve word puzzles with sudden insight when they were amused, having just seen a short comedy routine.

“What we think is happening,” said Mark Beeman, a neuroscientist who conducted the study with Karuna Subramaniam, a graduate student, “is that the humor, this positive mood, is lowering the brain’s threshold for detecting weaker or more remote connections” to solve puzzles.

(Full text: http://www.nytimes.com/2010/12/07/science/07brain.html)

So: what does this mean for narrative artists? Maybe it means we should leaven our leaden loaves of dirty laundry with a commedia routine.

My work with contemporary narrative artists in all fields leads me to conclude that we are commonly eschewing the most traditional plot: time and space moving forward, as they do in real life, toward a climactic moment. Today’s storytellers often create story arcs that ask an audience to put together the pieces of a fragmented or shattered narrative — something like a puzzle. And if the science is correct, is it perhaps suggesting we may have better luck persuading our audiences to track the puzzle — to integrate the pieces or shards into a whole that is greater than the sum of its parts — if they are able to laugh while they do it, or at least be in a positive mood for a moment before plunging back into the puzzle?