Nobody is in charge of all this. That’s the number one thing we want to tell people with this site.
Considered as a whole community, by academic discipline, or even down to a single university department, the world of modern science has no absolute authority deciding what’s good and what’s bad. The rule that “everybody’s got to serve somebody” still holds, but there are many somebodys to ask for money and many publishers hungry for content. In this messy world, researchers looking for continued employment must find a way to stand out. With no central body to recognize and promote journeyman academics, and a broad, flowing river of published papers to wade through, citation statistics have become a proxy for informed judgement.
A piece of journalism from a few years ago came up for us recently, and the story is worth revisiting: the issues it covers are still current, and truer than ever.
Why Are Gamers So Much Better Than Scientists at Catching Fraud?
A pair of cheating scandals—one in the “speedrunning” community of gamers, and one in medical research—call attention to an alarming contrast.
The article contrasts events in two unmanaged communities. In the speedrunning community, a cheater was caught and publicly exposed.
Whether they’re employing audio-spectrum analysis, picking through every keypress to make sure that the run is legit, or simply using their long experience to spot a questionable performance, members of this community of technical experts have put in strenuous work to make life harder for those who break the rules.
Scientists should pay attention.
The contrasting story is, by scientific publishing standards, actually ‘not that bad’. The fraudster was outed after an investigation, and some retractions followed.
That Ueshima’s university made such an extensive investigation of his work and published it for all to see is unusual. Skeptics and whistleblowers who spot potential fraud in researchers’ work are routinely ignored, stonewalled, or sometimes attacked by universities or journal editors who don’t have the time or inclination to dig into potentially forged (and potentially dangerous) studies.
Author Stuart Ritchie goes on to summarize the motivations and counter-motivations at work in policing science publishing. The policing is important, but when done right it is ruinous for a respected researcher’s career. Amateur gamers, by contrast, face few repercussions beyond lost kudos or video views.
In the scientific fraud example discussed, the fraudster was an established and trusted researcher. But how hard is it for a relative nobody to slip shoddy research into the mix while racking up positive reputation stats?
How easy is it to fudge your scientific rank? Meet Larry, the world’s most cited cat
In about an hour he created 12 fake papers authored by Larry and 12 others that cited each of Larry’s works. That would amount to 12 papers with 12 citations each, for a total citation count of 144 and an h-index of 12. Richardson uploaded the manuscripts to a ResearchGate profile he created for the feline. Then, he and Wise waited for Google Scholar to automatically scrape the fake data.
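The arithmetic there is easy to check: the h-index is the largest h such that h of your papers have at least h citations each. Here’s a minimal sketch in Python (the function and toy data are our own illustration, not Richardson’s actual method):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still "covers" its rank
        else:
            break
    return h

# Larry's fabricated record, per the article: 12 papers,
# each cited by 12 of the other fakes.
larry = [12] * 12
print(sum(larry))      # 144 total citations
print(h_index(larry))  # h-index of 12
```

As it happens, 12 papers with 12 citations apiece is the configuration that maximizes h for 144 total citations (with n citations, the best achievable h is ⌊√n⌋), so the fake record wrings the most rank out of the fewest fabrications.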
Of course this is an insult to all the young PhDs who’ve slogged through the meat grinder of a serious publisher’s workflow (a flow which, notably, includes no steps that ensure the quality of the research). But such gaming is a byproduct, and a self-reinforcing artifact, of the increasingly bloated and fractured sphere of knowledge publishing. When we start talking about things as statistics, it can seem like some serious, math-y process has delivered an immutable result.
Many researchers would like to see less emphasis on h-index and other metrics that have “the undue glow of quantification,” as Lange puts it.
In this tiny corner of the information space, we’ve just bought Ritchie’s book (Science Fictions: How Fraud, Bias, Negligence and Hype Undermine the Search for Truth) and expect to have plenty more to say in future articles.