
Trusting this would be shameful for the scientific community. Such papers will exist, no doubt, but papers reviewed this way should be disregarded until they are reviewed by human peers. Just because there are bad journals and unethical reviewers doesn’t mean the process is flawed; quite the opposite: the fact that we’re aware such cases exist is what makes the system trustworthy.
I still think human peer review is necessary. But there are tasks that can realistically only be done by automated tools, at least to the point of flagging issues. A few of them were highlighted in the snippet I pasted. Humans should still assess how trustworthy the results are, as with anything spawned by current AI models, which are, by definition, stupid.
reply
Even so, every filter is a bad filter. You can’t trust these AIs, so letting them review papers and prioritize some based on whatever criteria they use is bad. I know that the papers that stand out the most today might not be the most important ones, but academia is still largely guided by good science and successful experiments. Nature isn’t a respected journal for nothing.
reply
Yeah, but none of the examples that I cited have anything to do with actually reviewing papers on their scientific merits.
It's about flagging tortured sentences, negative citations, figure duplication, data manipulation, algebraic inconsistencies, etc.
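To make it concrete, here's a toy sketch of what the simplest of those checks, tortured-phrase flagging, might look like. The function name and the phrase list are my own illustrations; real screeners rely on large curated databases of known tortured phrases, not a hard-coded dict.

```python
import re

# Hypothetical mini-dictionary for illustration: known "tortured phrases"
# (awkward rewordings of standard terms that often betray disguised
# paraphrasing) mapped to the expected standard term.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular woodland": "random forest",
    "flag to commotion": "signal to noise",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase, expected in TORTURED_PHRASES.items():
        # Whole-phrase match with word boundaries, case-insensitive
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, expected))
    return hits

if __name__ == "__main__":
    sample = "Our counterfeit consciousness model relies on profound learning."
    for phrase, expected in flag_tortured_phrases(sample):
        print(f"flag: '{phrase}' (likely rewording of '{expected}')")
```

Nothing here judges scientific merit; it just surfaces suspicious passages for a human editor to look at, which is exactly the point.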
reply
I saw the post. Even though that’s the goal, the applications go beyond mere compliance review. Even these things should be reviewed during the journal’s editorial process and by human peers.
reply
For sure, human reviewers are still a crucial part of the process.
Autonomous AI agents are doing more harm than good, from what I've seen.
reply
Yes, and yet people keep feeding them and putting their faith in them.
Where before the most common thing to hear was “Google it,” now there’s this air of certainty: “Ask ChatGPT then.”