No, I don't think less of them. They're early adopters (yay!), but I do think less of those who trust AI with the final edit. Instead I hope they pay attention to the following as they put on the finishing touches.
Mental load. We're familiar with the mistakes people make, but we're still figuring out how AI gets things wrong. This detracts from the message because you start looking for mangled trees instead of trying to assemble the mental forest. => Writers should make sure AI's use is not detectable to the audience.
The other issue is not knowing whether AI was used for the first draft or just for polish: you're left wondering how much of the piece you're reading is human vs. AI. Logical beings wouldn't care; it would just be about how well the piece executes its purpose. => Writers should make sure AI's use is not detectable to the audience.
We're not 100% logical, though; we're also social. Maybe this is a third aspect: we don't know who we're hearing from. Is it a social being or a technological process? Does it care if I get it? => Writers should make sure AI's use is not detectable to the audience.
I guess it's pretty simple.
I agree with you about not trusting AI with the final edit... and I think that's part of where I lose respect: when they can't be bothered to edit the final piece, or maybe they didn't even realize the AI got something wrong.