I use it extensively for both coding and troubleshooting. I use Copilot, ChatGPT (also Claude and Gemini), and Cursor, depending on the task.
I've even done consulting projects where 95% or more of the code was written by AI. But you need to treat it like a junior developer or a code monkey. I (almost) always know what I want to accomplish and how to approach it, so the AI just takes care of converting my description into correct syntax, which would take me significantly longer by hand.
As mentioned earlier in this thread, it's pretty bad with lesser-known or newer libraries.
Personally, it's a superpower because it makes me a lot more productive. I have years of experience in systems architecture and engineering, but most of it has been at a higher level, not actually writing the code. So having an LLM spit out the code is a force multiplier for me.
But as with any tool, you need to figure out how to use it. It's easy to fall into the trap of spending more time arguing with the LLM than it would take to figure things out yourself (much like spending more time explaining a task to a junior developer than it would take to just do it).
It's still not great with large codebases, but ever-expanding context windows will hopefully solve that soon.