Wikipedia, Open Access, and the Sciences
This week (October 21-27) is Open Access Week, when universities, colleges, libraries, funding agencies, and other interested parties come together to ask questions about who owns the research they produce and/or pay for. If you’re reading this blog, you probably already know what’s at stake in these debates: the dominance of companies like Elsevier, which make enormous profits by keeping research behind paywalls, or the well-publicized prosecution and suicide of Aaron Swartz.
I have a passing interest in these topics, and a general belief in Open Access, so I’ve been following OA week — mostly by reading tweets with the hashtag #oaweek, and nodding in agreement. Yesterday, though, Amanda French’s live tweets of a presentation at George Mason’s Roy Rosenzweig Center for History and New Media caught my attention. I’ve written a bit about Wikipedia on this blog, and Jake Orlowitz was talking about the Wikipedia Library project. As Amanda points out, Wikipedia is the most-visited non-profit website in the world, making it a kind of poster-child for Open Access. And as students in my “Writing about Wikipedia” class will tell you, everything on Wikipedia should be verifiable; in other words, it has to be sourced. The Wikipedia Library defines itself as “a place for active Wikipedia editors to gain access to the vital reliable sources that they need to do their work,” connecting Wikipedians to “libraries, open access resources, paywalled databases, and research experts.” When access to information is limited, the Wikipedia Library opens the door.
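(Sourcing is concrete enough to inspect programmatically, if you’re curious. Here’s a minimal sketch, assuming Python and the `requests` library, that asks the public MediaWiki API for every external link an article cites; the article title is just an example.)

```python
# Minimal sketch: list the external links (citations, further reading)
# on a Wikipedia article via the public MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def external_links(title):
    """Return the external URLs cited on a Wikipedia page."""
    params = {
        "action": "parse",
        "page": title,
        "prop": "externallinks",
        "format": "json",
        "formatversion": "2",
    }
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["parse"]["externallinks"]

# Example: the first ten external links on the "Open access" article.
for url in external_links("Open access")[:10]:
    print(url)
```

How many of those links a reader can actually open is, of course, exactly the gap the Wikipedia Library is trying to close.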
These tweets reminded me of two articles I’d come across recently, both relating Wikipedia to the sciences. The first is about UC San Francisco’s medical school, which is offering course credit for participating in WikiProject Medicine, one of the online encyclopedia’s many topic-focused collaborative endeavors. Julie Beck, covering the UCSF program in The Atlantic, quotes the course’s instructor Dr. Amin Azzam, who notes that people turn to Wikipedia for health advice “more than any other website. More than the National Institutes of Health, more than WebMD, more than Mayo Clinic. It’s more than many of those combined.” This is troubling, given that “the fraction of high-quality information on Wikipedia in the medicine-related topics is significantly lower than other domains of Wikipedia.” Azzam’s course is motivated by the sense that this deficiency stems from the medical community’s unwillingness to participate. Ideally, the course and the resulting media coverage (Noam Cohen wrote about it in the New York Times as well) will increase participation.
These medical students will turn to peer-reviewed science journals, the kind of thing the Wikipedia Library provides access to. That’s the good news. But can we trust the information from those journals?
Writing for Science Magazine, John Bohannon describes sending an obviously terrible paper to hundreds of (purportedly) peer-reviewed science journals. The results were shocking (or exactly what you’d expect, depending on your feelings about such things):
Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless … More than half of the journals accepted the paper, failing to notice its fatal flaws.
Humanists who remember the Sokal hoax might feel vindicated: the sciences can’t spot a fake paper, either. But that’s not Bohannon’s takeaway. There are very real problems in scientific publishing, and only some of them are related to open access. (As a side note, the fact that Bohannon targeted OA journals seems ancillary to me; would he have gotten the same results with paywalled journals? Maybe. And many of the journals he stung are hosted by services like Elsevier.) Professional pressure to publish drives scientists to extremes, with journals’ rejection rates sometimes reaching 90% or higher. Fake scientific journals see this as a market: many charge authors for publication, and one of the invoices Bohannon received with his acceptance letter was for a whopping $3,100.
Bohannon’s article is well worth reading for its own sake, but when I read it I thought immediately of Wikipedia, especially when he gets to the reactions from the journal editors. I wrote a few weeks ago about a short assignment in which my students messed with Wikipedia. Responses ranged from supportive to downright threatening. Bohannon, in a much more serious context and with a much more serious subject, got similar feedback. Some journals appreciated his test (especially those that rightly rejected the paper). Others didn’t. Here is how Malcolm Lader, editor-in-chief of one of the journals, responded to Bohannon: “An element of trust must necessarily exist in research … Your activities here detract from that trust.”
Bohannon’s whole point, of course, is that readers also trust the journals to subject articles to peer review, and his activities revealed a good reason to detract from that trust.
Science journals obviously matter: peer review is the best system we have, and we should be worried by evidence that it isn’t working. But Wikipedia, the sixth most-visited website in the world, matters too. Endeavors like WikiProject Medicine and TooFEW hope to increase participation among groups that historically haven’t contributed to the encyclopedia. But one thing Bohannon’s article emphasizes is that Wikipedia is in the real world: it’s only as good as the sources on which it’s based.
Of course, right now Wikipedia is worse than the sources on which it’s based: hence the need for more diverse editors, whether in terms of gender, nationality, or expertise. Unfortunately the trend seems to be running the other way: Wikipedia is getting more and more popular, but its pool of editors is getting smaller and more insular. Might the poster child for Open Access actually reveal its limitations, rather than highlight its strengths?
Although it occurs to me that Wikipedia *is* a poster child for OA in the sense that “if you make stuff freely available online you’re more likely to get a million readers.” 🙂
Indeed, that’s the less-technical meaning of the term I was going for 🙂. I am, in turn, glad someone else’s students are still testing Wikipedia’s defenses. Even just seeing (and then moving past) the harsh responses from Wikipedians is a good lesson.
Your point about the speed of changes is critical. From my friends in the sciences, the process seems really informal: I have a chemist friend who, in graduate school, showed a paper to her advisor before publishing. He circled one of her citations and said, “oh, everyone knows this paper’s been retracted.” But the site she used to access the journal didn’t seem to know. As you say, a reader might easily miss such a retraction. And a system relying on “everyone knows” is scarily insular and exclusive — which is just the problem OA publishing is trying to combat.
I’ll be interested to see if there are any changes in the near future, w/r/t retracting “bad” articles. Strengthening the gatekeepers brings its own problems, but I’ve not yet seen an alternative that works all that well (though the DH field has certainly been trying). Wikipedia is fascinating, but seems to be getting worse in certain ways, rather than better.
Great post, and I’m so glad to “meet” another person who’s having their students learn about Wikipedia! I too had my students edit Wikipedia as a means of testing its defenses, although I didn’t ask them to put in fake info, as I see you did. Some did anyway, but the things they chose to write when not guided were the kinds of things that were super-easily detected (e.g., “Mumford and Sons is Amanda’s favorite band”) and didn’t last for more than a few minutes or even seconds.
Boy, do I remember the Sokal hoax. So, so well. 🙂
One of the things I take away from your interesting comparison of deliberately submitting fake info to Wikipedia versus deliberately submitting fake info to a journal is this: the fake information on Wikipedia *can* (though might not always) be quickly and easily corrected. Of course, fake info can then be quickly and easily inserted again, which in its turn can be quickly and easily corrected.
I’m sure that journals sometimes discover errors after publication that go uncorrected, or corrected only in a subsequent edition that a reader might easily not see. Even online-only journals tend to freeze their content, I think, so that when false information is published it’s a more serious and less revocable thing. Wikipedia’s very fluidity is one of the things that makes me trust it. I’d rather consult something that oscillates at a fairly high average level of reliability than something that spikes and valleys between high and low reliability. (Cf. David Weinberger’s book _Too Big to Know_, which argues that knowledge is now “a property of the network.”)
This is a slightly picky point, but I’d disagree that Wikipedia is “a kind of poster-child for Open Access.” Open Access is (as of course you know) a movement to make the products of original research freely available, and Wikipedia articles are very explicitly *not* original research. And Jake pointed out another thing I knew but hadn’t explicitly thought about: Wikipedia doesn’t skew toward citing OA research. Editors are encouraged to cite the *best* sources at the end of each article, not the most easily available ones, although Wikipedia is planning to implement “Open Access signaling” by putting little icons next to those sources to indicate which are OA and which are not.
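(If you’re curious what that signaling could look like under the hood, here’s a hedged sketch, not Wikipedia’s actual implementation: it checks a single citation’s OA status by DOI using the Unpaywall REST API, a free service that reports whether an open copy of a paper exists anywhere. The DOI and email below are placeholders.)

```python
# Hedged sketch (not Wikipedia's implementation): ask the Unpaywall
# API whether an open-access copy of a given DOI exists.
import requests

def is_open_access(doi, email):
    """Return True if Unpaywall knows of an OA copy of this DOI."""
    url = f"https://api.unpaywall.org/v2/{doi}"
    # Unpaywall asks callers to identify themselves with an email.
    resp = requests.get(url, params={"email": email}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("is_oa", False)

# Placeholder DOI and email; substitute real values to try it out.
print(is_open_access("10.1234/example.doi", "you@example.org"))
```

An OA icon next to a citation would amount to running a check like this against every source in the reference list.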
Here’s the link to Jake Orlowitz’s slides on “The Future of Libraries and Wikipedia”: http://t.co/GETxWwMM3n