Let's first read this story:

Drew Crecente's daughter died in 2006, killed by an ex-boyfriend in Austin, Texas, when she was just 18. Her murder was highly publicized—so much so that Drew would still occasionally see Google alerts for her name, Jennifer Ann Crecente.
The alert Drew received a few weeks ago wasn't the same as the others. It was for an AI chatbot, created in Jennifer’s image and likeness, on the buzzy, Google-backed platform Character.AI.
Jennifer's internet presence, Drew Crecente learned, had been used to create a “friendly AI character” that posed, falsely, as a “video game journalist.” Any user of the app would be able to chat with “Jennifer,” despite the fact that no one had given consent for this.

On Character.AI

It only takes a few minutes to create both an account and a character. Often a place where fans go to make chatbots of their favorite fictional heroes, the platform also hosts everything from tutor-bots to trip-planners. Creators give the bots “personas” based on info they supply (“I like lattes and dragons,” etc.), then Character.AI’s LLM handles the conversation.
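Character.AI's internals aren't public, but a common pattern for persona bots like the ones described above is to prepend the creator-supplied persona to the conversation as a system prompt and let a general-purpose LLM do the rest. A minimal sketch of that pattern (the function name and the OpenAI-style message format are assumptions, not Character.AI's actual API):

```python
def build_persona_messages(persona, history, user_message):
    """Assemble a chat message list that conditions a generic LLM
    on a creator-supplied persona string (hypothetical sketch)."""
    system_prompt = (
        "You are a character with the following persona. "
        "Stay in character at all times.\n\nPersona: " + persona
    )
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # prior turns, if any
    messages.append({"role": "user", "content": user_message})
    return messages

# Example: the "I like lattes and dragons" persona from the story.
msgs = build_persona_messages(
    persona="I like lattes and dragons.",
    history=[],
    user_message="What's your favorite drink?",
)
```

The point is how little the creator supplies: a short free-text persona is enough, and the LLM improvises everything else — which is exactly why an impersonation bot is so cheap to make.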
Character.AI’s terms of service may have stipulations about impersonating other people, but US law on the matter, particularly with regard to AI, is far more malleable.
Character.AI has also positioned its service as, essentially, personal. (Character.AI’s Instagram bio includes the tagline, “AI that feels alive.”) And while most users may be savvy enough to distinguish between a real-person conversation and one with an AI impersonator, others may develop attachments to these characters—especially if they’re facsimiles of a real person they feel they already know.
WTF!!!
I'd like to hear from every stacker on this matter, but specifically from @siggy47, to throw some light on US privacy law.
Now, after reading this, I'm of the view that such platforms should be banned or limited by law.
However, I don't know much about the utility of generative AI, so it'd be great if you guys could guide me.
So, a simple question.
Do we really need Generative AI? If yes, why? If not, why?
41 sats \ 0 replies \ @Golu 17 Oct
AI is going to need a lot of law to deal with it.
reply
If AI can do this much against privacy, it's better we stop now; otherwise a day will come when it'll be difficult to distinguish a real person from an AI.
reply
That's a fascinating phenomenon. Massively scalable impersonation thanks to AI.
Another driver toward a digitally signed presence: if I make a bot of myself and give it my key, I do so at the risk of my own reputation.
Now I will expect AI versions of everyone I know to exist eventually. Just hope their estates get paid.
To get ahead I should perhaps create some bot friends of my own.
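The "give it my key" idea above amounts to key-based authenticity: only content bearing a valid signature from my key counts as me, so a bot I sign is mine and an unsigned impersonation isn't. A real deployment would use asymmetric signatures (e.g. Ed25519) so that verification doesn't require the secret; this stdlib-only sketch uses an HMAC as a stand-in for that, and the key and messages are made up for illustration:

```python
import hashlib
import hmac

def sign(secret_key: bytes, message: str) -> str:
    """Produce a tag binding the message to the key holder.
    HMAC here stands in for a real asymmetric signature."""
    return hmac.new(secret_key, message.encode(), hashlib.sha256).hexdigest()

def verify(secret_key: bytes, message: str, tag: str) -> bool:
    """Check a tag in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign(secret_key, message), tag)

key = b"my-identity-key"  # hypothetical key material
tag = sign(key, "This post really was written by me.")

authentic = verify(key, "This post really was written by me.", tag)
forged = verify(key, "An impersonator's message.", tag)
```

With a scheme like this, anyone who trusts the key can check `authentic`, while the forged message fails verification — the impersonation bot in the story would have no valid tag to offer.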
reply