pull down to refresh

This piggy backs off of the article that @Coinsreporter posted yesterday (#1080311) about a damning report over Meta’s AI chat bots. Sen. Hawley is a huge privacy hawk and is not someone you want investigating your business….
“We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,” Hawley wrote.
A Meta spokesperson told Reuters that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” the Meta spokesperson told Reuters.
Meta has done a ton of rebranding since Trump won last November and this is a huge issue he faces now.
I'm not really a fan of government intervention as a solution to children misusing AI chatbots. I think parents need to just have a better pulse on what their kids are doing and protect them at home. Otherwise we get into draconian age verification laws and other things like that
reply
110 sats \ 0 replies \ @optimism 11h
I think the most offensive thing that they built into the chatbots is emulation of persona. This is probably what is making people:
  • believe that LLMs are singular and an actual entity
  • believe that LLMs are smarter than a human being
  • fall in love with a chat template (or manipulate it sexually w/ a custom system prompt, because Elmo did it)
  • follow bad advice from a chat template
  • completely lose all cognitive ability because it is like a mentor to them
Not too long ago one of my security mailing lists got a mail from a person that:
  • Spoke about "their AI" as their partner
  • Followed its advice to the letter but didn't have it write the email (kudos for non-slop, thats the one feather I'd put on their cap)
  • Told it everything I replied
  • Then came up with even more nonsense, including invented nomenclature
  • Apologized for their partner's mistakes
... and so on. All in all I felt extremely sorry for this person's delusion (and kind of for my lost time too) because that would have never happened if chatbots weren't role-playing.
So if you want child safety, you have to teach children that LLMs are not entities, but software that people can tune to trick you. Like the man in the raincoat, and like you don't take candy from a stranger.
reply
Cool
reply