
Even so, every filter is a bad filter. You can't trust these AIs, so letting them review papers and prioritize some of them based on whatever criteria they use is a bad idea. I know the papers that stand out most today might not be the most important ones, but academia is still largely guided by good science and successful experiments. Nature isn't a respected journal for nothing.
Yeah, but none of the examples I cited have anything to do with actually reviewing papers on their scientific merits.
They're about flagging tortured phrases, negative citations, figure duplication, data manipulation, algebraic inconsistencies, and so on.
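To make concrete how shallow (and automatable) that kind of screening is, here's a minimal sketch of a tortured-phrase flagger. It is not any particular tool's implementation; the fingerprint dictionary below is a tiny illustrative sample, whereas real screeners match against curated databases of thousands of known phrases.

```python
# Minimal sketch of tortured-phrase flagging: "tortured phrases" are awkward
# synonym substitutions that often betray paraphrased (plagiarized) text,
# e.g. "counterfeit consciousness" in place of "artificial intelligence".
# The fingerprint list here is illustrative, not a real screener's database.

TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "bosom peril": "breast cancer",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [
        (phrase, original)
        for phrase, original in TORTURED_PHRASES.items()
        if phrase in lowered
    ]

if __name__ == "__main__":
    abstract = "We apply profound learning to detect bosom peril in scans."
    for phrase, original in flag_tortured_phrases(abstract):
        print(f"Suspicious: '{phrase}' (likely '{original}')")
```

Note that a check like this says nothing about whether the paper's science is sound; it only flags surface-level red flags for a human to look at, which is exactly the point.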
I saw the post. Even if that's the goal, the applications go beyond just compliance review. Even those things should be checked during the journal's editorial process and by human peers.
For sure, human reviewers are still a crucial part of the process.
Autonomous AI agents are doing more harm than good, from what I've seen.
Yes, and yet people keep feeding them and putting their faith in them.
Where before the most common thing to hear was "Google it," now there's this air of certainty: "Ask ChatGPT, then."
Yeah, I hate it when, in a chat where people are asking for advice, some random person with no knowledge of the topic jumps in with "according to ChatGPT..." and states it as definite truth.
If I wanted ChatGPT's input, I'd have asked it myself. If I'm asking fellow humans, it's because I believe that, for this specific thing, I need a human with specialized knowledge.