Yeah, I know the media loves some crazy stories to get attention, and then you've got the marketing doing its thing. But like I said before: at the end of the day, it's still just an LLM.
I can adjust the weights of an LLM so it only says evil things, just like I can fill a database with only evil things, or write a book or a website about evil things. The problem is that not enough time is spent on rethinking "alignment".
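To make that point concrete, here's a minimal sketch of how trivially the weights follow the data: standard Hugging Face fine-tuning of a small model on an arbitrary text corpus. The model name and `corpus.txt` path are illustrative assumptions, not anything from this thread; whatever text you feed it is what the weights drift toward.

```python
# Sketch only: fine-tune GPT-2 on a curated corpus. The weights end up
# reflecting whatever is in corpus.txt (a hypothetical file) -- the model
# has no notion of good or evil, just the data it was trained on.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Whatever goes in here is what the model learns to say.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # after this, the weights mirror corpus.txt's contents
```

Same machinery either way; only the data changes. That's why "alignment" framed as a property of the model, rather than of the data and the people choosing it, misses the point.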
But since most of the AI-as-a-service CEOs have an imaginary hard-on the size of the Eiffel Tower for AGI, they aren't thinking like that. They're faking-it-until-they-make-it with AGI, and they'll likely fail no matter how much money they throw at it, because they haven't even realized it yet.