New arXiv policy: 1-year ban for hallucinated references

twitter.com

320 points by gjuggler 6 hours ago


btown - 6 hours ago

> The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue.

This is incredibly good for science. arXiv is free, but it's a privilege not a right!

I'm not seeing this clearly listed on https://info.arxiv.org/help/policies/index.html so it's possible this is planned but not live yet - or perhaps I'm not digging deeply enough?

As a certain doctor once said: the whole point of the doomsday machine is lost if you keep it a secret!

mks_shuffle - 39 minutes ago

While this is certainly a welcome step, I hope more work is done on the underlying problem: it is genuinely hard to create correct BibTeX entries for cited papers. Citations in any given paper can come from a wide range of journals, publishers, conferences, and preprint servers, and the same paper can be available from multiple sources with differing details, e.g. arXiv and the conference website.

Tools like Zotero have certainly made it significantly easier to extract citations from publication webpages, but I still find issues with the extracted BibTeX. Author names and titles are usually correct, but I still have to manually verify that details like publication venue, year, volume number, page numbers, and URL are extracted correctly and also render correctly in LaTeX. Different publications can use different citation styles.

The lack of an easy, unified way to get consistent citation data can unfortunately encourage shortcuts like AI-generated citations. I'm not sure whether hallucinated citations tend to be generated in the main manuscript or in a separate BibTeX file, so I may be a bit off in my understanding.
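One cheap safeguard on the author side is to lint every entry before submitting. A minimal sketch in Python (the required-field set and the sample entry below are illustrative assumptions, not a BibTeX standard):

```python
import re

# Fields most citation styles need; illustrative, not a BibTeX standard.
REQUIRED = {"author", "title", "year"}

def bibtex_fields(entry: str) -> dict:
    """Extract flat `field = {value}` pairs from one BibTeX entry."""
    return {m.group(1).lower(): m.group(2).strip()
            for m in re.finditer(r"(\w+)\s*=\s*\{([^{}]*)\}", entry)}

def missing_fields(entry: str) -> set:
    """Return required fields absent from the entry."""
    return REQUIRED - bibtex_fields(entry).keys()

# Hypothetical entry for demonstration only.
entry = """@inproceedings{doe2024,
  author = {Doe, Jane},
  title = {An Example Paper},
  booktitle = {Proc. of an Example Conference},
}"""

print(missing_fields(entry))  # → {'year'}
```

This only catches structural gaps, not hallucinated-but-well-formed entries, but it is the kind of check that costs nothing to run on every draft.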

imenani - 6 hours ago

https://xcancel.com/tdietterich/status/2055000956144935055

rgmerk - 3 hours ago

Good.

If it’s not worth your time to check the output of your LLM carefully, it’s not worth my time to read it.

MinimalAction - 4 hours ago

There needs to be careful vetting before such adverse actions. If somebody includes a co-author's name and pushes the submission without their express permission, does every listed author get the ban? I agree that, implemented the right way, this is good.

noobermin - 3 hours ago

Seeing the usual LLM hypers angrily replying to this on twitter is such a tell. Just like the comments on the LLM poisoning articles, some people just can't accept that some people don't like LLMs, and they get upset when you put any hindrance in the way of rapid acceptance.

bigfishrunning - 6 hours ago

Good; academic literature is in crisis because of all of the slop. Forcing some consequences for easily detectable hallucinations can only be a good thing.

nullc - 2 hours ago

It's been pretty eye-opening watching Craig Wright (of bitcoin fakery fame) flooding out LLM-generated 'academic' papers and even having some of them accepted.

He'd be toast if SSRN were to adopt a similar policy.

squirrelon - 4 hours ago

Had a colleague submit a paper with literal AI slop left in the text, got hit with a nasty revision request. Check your drafts before you submit, people. The reviewers will find it.

jszymborski - 3 hours ago

Should be harsher, in my opinion.

random3 - 5 hours ago

It seems like a good idea to ban cheating, but how hard is it to validate references, especially in new reasoning/agent contexts?

The deeper question is whether legitimate AI-generated results are allowed at all. As an extreme test case: would an autonomously generated, end-to-end formally verified proof of the Riemann Hypothesis be allowed or not?
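At least for arXiv-to-arXiv citations, validation is mechanical: the public export API (`http://export.arxiv.org/api/query?id_list=...`) returns the real metadata for an ID, and the cited title can be compared after normalization. A hedged sketch — the function names are mine, and real validation would also have to handle DOIs, versions, and non-arXiv venues:

```python
import re
import urllib.request
import xml.etree.ElementTree as ET

def normalize(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace so minor formatting
    differences don't count as mismatches."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def arxiv_title(arxiv_id: str) -> str:
    """Fetch the real title for an arXiv ID from the public export API.
    (Requires network access; not called in the offline demo below.)"""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    ns = {"a": "http://www.w3.org/2005/Atom"}
    return feed.find("a:entry/a:title", ns).text

def titles_match(cited: str, actual: str) -> bool:
    """True when the cited title agrees with the actual one, up to formatting."""
    return normalize(cited) == normalize(actual)

# Offline demo: a formatting-only difference should not be flagged.
print(titles_match("Attention is all you need.", "Attention Is All You Need"))  # → True
```

A hallucinated reference fails this check immediately (the ID doesn't resolve, or the title doesn't match), which is presumably what makes these cases "easily detectable" for arXiv moderators too.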