Scientific datasets are riddled with copy-paste errors
sciencedetective.org | 82 points by jruohonen 11 hours ago
What should give people pause is how uncomplicated (I'd hesitate to say easy) it would be to write a Python script that generates fake data that is all but impossible to distinguish from real data. You just need to model the measuring device and the hypothesis you want to support, then sample away.
The people who get caught red-handed like this are lazy, incompetent, and stupid. It makes you wonder about the ones who aren't getting caught.
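To make the parent's point concrete, here is a minimal sketch of that sampling approach, assuming a simple linear hypothesis and Gaussian instrument noise; every name and parameter here is illustrative, not taken from any real study:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Model the measuring device and the hypothesis, then sample."
# Suppose the desired finding is a linear dose-response with slope 2.0,
# and the instrument is assumed to add Gaussian noise (sd = 0.5).
dose = np.linspace(0.0, 10.0, 50)              # the "experimental" conditions
true_slope, intercept, noise_sd = 2.0, 1.0, 0.5
response = intercept + true_slope * dose + rng.normal(0.0, noise_sd, dose.size)

# The fabricated data now "supports" the hypothesis with realistic scatter.
fit_slope = np.polyfit(dose, response, 1)[0]
print(fit_slope)
```

A few lines of NumPy produce scatter that is statistically indistinguishable from honest measurement error, which is exactly why detection tools focus on artifacts like duplicated runs rather than on the noise itself.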
> It could be either a fat-finger mistake when editing the Excel file or deliberate tampering to cover up real data that didn't tell the right story.
I can easily imagine that, after spending years or decades devoted to chasing a scientific breakthrough, some could be tempted to slightly alter the data. I believe there was a scandal about this a few years back with climate data. Fixing this, however, is something AI would do fairly well.
I don’t believe fixing this is something AI would do well.
Identifying it is something AI could do well, though. It’s very good at finding patterns - that’s kind of essential to how it works.
> AI would do fairly well
But AI can also hallucinate data. I am not sure this is an area where "AI is better than humans" automatically holds. Honesty is very important in science. Fake articles have even been generated:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6...
And some other article I forgot, about arsenic or some other ion being used in/for DNA or so. It turned out to be totally fabricated. I don't remember the name of the article right now; it was from some years ago.
This is legitimately hard to avoid, because many scientific processes are, to one degree or another, bespoke and difficult to fully streamline or to wrap in efficient, well-structured, comprehensive QA.
A LOT of labour goes into making it work. Most scientists I know and work with are very diligent people who care a lot about the outputs being as correct as possible, but wow, their workflows aren't great.
My job is to try and address this in whatever ways are practical for the data and the people doing the science, and it's kind of like SaaS in that you think it should be easy enough to spot problems, solve them, and carry on (or become a billionaire), but the world is much more complicated than that, and it's easier to fail at this endeavour than it is to break even.
The classic "DropBox is just rsync" or "I could build Airbnb in a weekend" sentiments have their commonalities and counterparts in science, and the reality is similarly defeating and punishing on both sides. Making science go faster while maintaining correctness is exceedingly difficult. There are so many moving parts. So many disparate participants who are wildly technical and capable, or brilliant at studying bacteria in starfish yet terrified to run a command in a terminal. Your user base has virtually nothing in common in terms of ability and willingness to do anything other than get their own work done. It's brutal.
So, I sympathize with the authors of these papers and I hope readers don't assume they're bad at what they do or that it's done in bad faith. It's genuinely difficult.
An anecdote: I created a tool for validating biodiversity data against a specification called Darwin Core. Initially our published data failed validation so often that I thought I'd built the tool wrong. In fact, the spec is so complex and vast that the people I work with couldn't manage to get valid data into the public repositories. And yet they were able to publish, because the public repositories' own validation is... invalid. That's the state of things.
Granted, the data is still correct enough to be useful, and the errors don't cause the results to indicate anything that they shouldn't. It's more like minor metadata issues, failures to maintain referential integrity across different datasets, etc. But it's a very real, very difficult problem.
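For readers unfamiliar with Darwin Core: records are flat term/value maps, and validation largely means checking required terms and controlled vocabularies. A toy sketch of that kind of check, with a deliberately simplified required-term list and vocabulary (the real spec has far more of both, which is the parent's point):

```python
# Illustrative only: a tiny Darwin Core-style record check, not the
# parent's actual tool. Term names are real DwC terms, but the
# required set and vocabulary here are pared-down assumptions.
REQUIRED_TERMS = {"occurrenceID", "basisOfRecord", "scientificName"}
BASIS_VOCAB = {"HumanObservation", "PreservedSpecimen", "MachineObservation"}

def validate_record(record: dict) -> list:
    """Return a list of validation problems for one occurrence record."""
    problems = []
    for term in REQUIRED_TERMS - record.keys():
        problems.append("missing required term: " + term)
    basis = record.get("basisOfRecord")
    if basis is not None and basis not in BASIS_VOCAB:
        # Vocabularies are case-sensitive, a classic source of failures.
        problems.append("basisOfRecord not in vocabulary: " + repr(basis))
    return problems

record = {"occurrenceID": "urn:uuid:1234", "basisOfRecord": "humanobservation"}
print(validate_record(record))
```

Even this toy version flags two problems in a plausible-looking record (a missing term and a lowercase vocabulary value), which hints at how real datasets fail en masse against the full spec.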
Science isn't easy at all. So many hoops to jump through, so much rigor, so much data. Mistakes are inevitable.
One-offs. A lot of research results in one-off code. You may never go back to this dataset or these ideas again. When you do, sometimes years later, you go: oh shit, this is hard to work with. So then you begin to build better structures and do the extra work it takes to make things easy to apply to new purposes, or to accept new (but slightly different) datasets. That takes time, effort, and money, and that is where it all breaks down. Most scientists have to be jacks of many trades to get by.
It's hard to avoid, but there are steps we can take towards fixing it. I spent years in academia building open-source data processing pipelines for neuroscience data and helping other researchers do the same. Most quantitative research goes through "lossy" steps between raw data and final results: Excel spreadsheets, one-off MATLAB commands, copy-pasting the results, etc.
In a lot of cases (where data is being collected by humans with a tape measure, say) there is room for error. But one of the things that's getting traction in some fields is open-source publication of both raw datasets and the evaluation/processing methods (in a Jupyter Notebook, say) in a way that lets other people run their analysis on your data, your analysis on their data, or at least re-run your start-to-finish pipeline and look for errors!
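The core idea above is that the whole raw-data-to-result path should be one rerunnable artifact rather than a chain of manual steps. A minimal sketch, using stdlib only and a made-up inline dataset standing in for a published raw CSV (all file and column names are illustrative):

```python
import csv
import io
import statistics

# Stand-in for a published raw dataset; in practice this would be a CSV
# archived alongside the paper.
raw_csv = """subject,condition,score
1,control,4.1
2,control,3.9
3,treatment,5.2
4,treatment,5.0
"""

def analyze(raw: str) -> dict:
    """The entire raw-data-to-result path in one function: anyone can
    rerun it on this data, or point it at their own, and there are no
    Excel edits or copy-pasted intermediates to audit by hand."""
    rows = list(csv.DictReader(io.StringIO(raw)))
    groups = {}
    for row in rows:
        groups.setdefault(row["condition"], []).append(float(row["score"]))
    return {cond: statistics.mean(vals) for cond, vals in groups.items()}

print(analyze(raw_csv))
```

Publishing something like this next to the dataset is what lets reviewers run your analysis on their data, or theirs on yours, as described above.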
As is often the case, the holdups are mostly political: methods papers are less prestigious than the "real science" ones, and it takes journals / funders to mandate these things and provide funding/hosting for datasets for 10+ years, etc - researchers are a time-poor bunch and often won't do things unless there's an incentive to!
Taking notebooks to a production environment isn't fun either. With AI there's no more excuse for leaning on that coding crutch.
Yes…mistakes are inevitable, and I get not expecting or demanding perfection. But the subtext here is that this is unlikely to be a mistake, and much more likely to be fraud.
There are incentives for these spreadsheets to contain the values they do; there is no conceivable way the values are correct; and on top of that, the most likely way to arrive at these values is to copy and paste large blocks of numbers and even perturb some of them manually.
If you see this in accounting (where there are also mistakes), it's definitely fraud. (Awww man - we accidentally inflated our revenue and profit to meet expectations by accidentally duplicating numerous revenue lines and no one internally caught it! Dang interns!) If you see it in science, you ask the authors about it and they shrug and mumble a semi-plausible explanation if you're lucky? I can totally imagine a lab tech or grad student making a large copy-paste mistake. I can't imagine them making a series of them in such a way that it bolsters or proves the author's claim AND goes completely undetected by everyone involved.
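Duplicated runs of the kind described above are also the easiest tell to screen for mechanically. A rough sketch of one such screen: flag any window of k consecutive values that recurs verbatim elsewhere in a column (window length and data are made up for illustration; real tools are more sophisticated):

```python
from collections import defaultdict

def repeated_windows(values, k=4):
    """Return (i, j) index pairs where values[i:i+k] == values[j:j+k], i < j.

    Honest measurements almost never repeat k values in a row exactly;
    copy-pasted blocks do.
    """
    seen = defaultdict(list)
    for i in range(len(values) - k + 1):
        seen[tuple(values[i:i + k])].append(i)
    pairs = []
    for positions in seen.values():
        for a in range(len(positions)):
            for b in range(a + 1, len(positions)):
                pairs.append((positions[a], positions[b]))
    return pairs

data = [3.1, 4.7, 2.9, 5.5, 8.2, 3.1, 4.7, 2.9, 5.5, 6.0]
print(repeated_windows(data))  # -> [(0, 5)]
```

Manually perturbing a few values defeats this exact check, which is why the perturbation mentioned above is the stronger indicator of intent when it is found.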
> I can’t imagine them making a series of them in such a way that it bolsters or proves the author’s claim AND goes completely undetected by everyone involved.
The small minority of cases that do fit this pattern get selected for the front page of HN, so we aren't drawing from a random sample of mistakes. The selection effects all work against the more common categories of mistakes showing up here: author disinterest, reader disinterest, rejection by the journal, lack of publicity if the null result is published. The more reliable tell of fraud is that the authors didn't respond when the errors were discovered.
A lot of the work I did for scientists as a contractor (and a bit while working for bespoke software consultancies) was quite literally turning Excel sheets into programmatic applications.
In one case, we used mdftools to use the original Excel spreadsheet itself as our logic engine.
> their workflows aren't great
Sounds like a startup idea.
Spend a few years working in the target environment. It will disabuse you of the idea that science research can be regularized with technology.
If you want to make no money, sure.
The solutions these scientists need are bespoke and share little in common. They also have fixed grant funding.
In 2009 I made $15/hr working with some PhDs and grad students in a couple different labs to automate their workflows - I was the highest paid person in the room most of the time.
You'll want to sit down when I tell you the budget these folks have for workflow solutions. Ain't gonna take long but might be shocking if you've got big startup hopes. ;)
For sure. These are often people who want better equipment to do their research, not software subscriptions that promise to force them to work in unfamiliar and uncompelling ways. You'd need a fantastic, game-changing idea to get meaningful traction.
One example of these might be systems like S3 and distributed computing in AWS. Like, huge ideas that take massive initiatives to implement, but make science meaningfully easier. I can't think of many other modern technologies we use that the team doesn't mostly resent (like Slack or Google Drive). They're largely interested in just doing the science, the rest eats into funding (which is increasingly sparse these days).
Just imagine if you scanned private industry. This is a generic problem that LLMs won't solve with generative capabilities.
Not only that, but universities sometimes use AI to generate course descriptions.
Recent example I found (semi-accidentally, I was only looking for microscopy related courses):
https://ufind.univie.ac.at/de/course.html?lv=301053&semester...
At the end of the description it has:
"Übersetzt mit DeepL.com (kostenlose Version)"
In English this means "translated with DeepL.com (free version)", i.e. the unpaid tier. What I found baffling is that, even for a single paragraph, some people are too lazy to write it themselves, or at the very least to remove that disclaimer. Others have pointed out seeing this in auto-generated brochures and booklets, in the USA for instance; I think I saw one about three months ago but forget which booklet it was. The whole booklet was AI-generated. To me this is all spam. I can't be bothered to read AI "content" when it is really just glorified slop-spam.