Tbf it doesn’t read like AI to ME.
ZeroGPT isn't maintained or accurate.
Yeah, didn't strike me as AI either.
The language I (have to) use in scientific articles often matches quite well with what ChatGPT outputs. Say I write my abstract and ask ChatGPT to improve it: it uses quite similar expressions and vocabulary. It just improves my flow, which isn't always perfect as a non-native speaker.
reply
I've been trained like a Pavlovian dog to use neutral, hedged language, because if you use stronger language in an academic article referees will usually attack you.
Like if you say, "This evidence proves..." you will get attacked endlessly. If you say, "This evidence is consistent with... " then you get a pass.
(For economics, where evidence is often suggestive at best.)
reply
Even in physics, where evidence is supposed to be much less suggestive and be more of the absolute type, we also very much use hedged language. ChatGPT excels at hedging its statements ;)
reply
The initial two paragraphs are 100% AI, and the rest of the article was well edited after being generated through AI.
reply
10 sats \ 2 replies \ @k00b 5h
We can't know with 100% accuracy. Look at SimpleStacker's other content. Unless all their other content is AI, they just write well and in a rather normal/neutral tone.
reply
Maybe you're right, and he's right as well. #715974
Maybe I'm wrong. Maybe AI detectors are failing. Whatever it is.
The first two paragraphs sounded like AI to me, so I just checked.
reply
To be fair, I can understand why the first two paragraphs might strike one as AI. Interesting conundrum we're in... but I suppose that means AI is getting much, much closer to passing the Turing test.
reply
Ran it through Quillbot, got 0%. Not sure what the standard is nowadays for AI detection though.
reply