Digital Humanities and Your Brain
[this post first appeared on the NINES blog, 10/12/2011]
Here at the University of Virginia a group of scholars from various disciplines meets periodically to discuss digital scholarship. We began meeting a few years ago, and we call ourselves EELS (it stands for “electronically enabled literary scholarship”). At one of our earliest meetings we read articles by Mark Bauerlein and Nicholas Carr, readings we termed “anti-EELS.” Their arguments are rather played out by this time, but a few years ago they were gaining steam: Carr’s article “Is Google Making Us Stupid?” had just appeared in the Atlantic, and Bauerlein asked in the Chronicle whether online literacy is of a lesser kind (both have since expanded their arguments into books). Forget about challenging the place of digital scholarship in the academy: Carr and Bauerlein challenged the very idea that the internet and the digital age have left us any ability to think at all.
Even in 2008, of course, Carr and Bauerlein did not speak for everyone. Bauerlein cites Leah Price, for example, who argues that we need to expand our notions of what “reading” means. The response that most intrigues me, though, is Steven Pinker’s. In a New York Times op-ed, Pinker (a cognitive scientist at Harvard) takes issue with Carr’s argument that the internet affects our brains: “cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.” Pinker goes on to compare Carr to “primitive peoples who believe that eating fierce animals will make them fierce.” By writing “as if the brain takes on the qualities of whatever it consumes,” Carr and Bauerlein seem to “assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.”
Digital humanists have of course found academic uses for Twitter, and might take exception to this particular example. But Pinker’s larger point is an accusation that I think we must take seriously. His response to Carr is, essentially, “you just don’t understand the brain”: he draws a boundary between the cognitive neuroscientists rolling their eyes and the poor humanist who can’t tell the brain from the pancreas. So while I applaud Pinker’s argument, his tone has me a bit worried.
Many humanists are no strangers to Steven Pinker, who ranks high on a short list of scientists whose work forms the basis of another emerging area of humanistic inquiry: cognitive literary studies. The 1990s was declared “the decade of the brain,” and during the same period in which the digital humanities have become so prominent, a parallel movement has connected humanistic research to brain science. But as Jonathan Kramnick has argued, this scholarship has its risks. While hoping to add a “scientific” basis to humanistic questions, proponents of cognitive approaches sometimes wind up, like Carr, drawing on a field without really understanding it. Literary Darwinism, says Kramnick, might not “bring us any closer to science. At the very least, the substance of the claim fails to represent debates within the sciences themselves.” Some scholars, says Kramnick, have failed to sufficiently immerse themselves in the discipline from which they hope to draw, and in an attempt to become “more scientific” have in fact become less so.
Like cognitive literary studies, digital humanities must draw on other disciplines, using methods and tools that many humanities scholars aren’t comfortable with. And digital humanities has witnessed similar debates about the extent to which we must immerse ourselves in these other disciplines. Do we, as Stephen Ramsay suggests, have to know how to code and build things? Do we have to be trained statisticians so that our text-mining results are “statistically significant”? Are we more or less rigorous than the proponents of culturomics, about whose work many humanities scholars seem skeptical? These are questions about method, and interdisciplinarity, and collaboration. And they’re not particularly new questions. But I do think the comparison between digital humanities and cognitive literary studies is a useful one: how can tools and methods from other disciplines help us answer questions in our own?
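To make the statistics question concrete, here is a minimal sketch, in Python with invented counts and not drawn from any particular project, of one test corpus linguists routinely apply to text-mining claims: Dunning’s log-likelihood, which asks whether a word is significantly more frequent in one corpus than in a reference corpus.

```python
# A minimal sketch of Dunning's log-likelihood ("keyness") test,
# a standard corpus-linguistics measure of whether a word's frequency
# difference between two corpora is statistically significant.
# The word and all counts below are hypothetical.
from math import log

def log_likelihood(count_a: int, total_a: int,
                   count_b: int, total_b: int) -> float:
    """G-squared statistic for a word's frequency in corpus A vs. corpus B."""
    # Expected counts if the word were spread evenly across both corpora.
    expected_a = total_a * (count_a + count_b) / (total_a + total_b)
    expected_b = total_b * (count_a + count_b) / (total_a + total_b)
    g2 = 0.0
    if count_a:
        g2 += count_a * log(count_a / expected_a)
    if count_b:
        g2 += count_b * log(count_b / expected_b)
    return 2 * g2

# Hypothetical counts: "whale" appears 1,200 times in a 250,000-word corpus
# and 400 times in a 300,000-word reference corpus.
g2 = log_likelihood(1200, 250_000, 400, 300_000)

# With one degree of freedom, G2 > 3.84 corresponds to p < 0.05,
# and G2 > 10.83 to p < 0.001.
verdict = "significant" if g2 > 3.84 else "not significant"
print(f"G2 = {g2:.1f} ({verdict} at p < 0.05)")
```

Of course, the test only tells us that a frequency difference is unlikely to be chance; deciding whether the difference means anything for interpretation is exactly the kind of disciplinary judgment Kramnick warns we cannot borrow secondhand.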
As a parting note, I’ll point to Cathy Davidson’s upcoming course, which looks to be a model for interdisciplinary teaching. Perhaps this approach will connect the humanities with the brain and the internet.