pull down to refresh

I've been thinking a lot about when I start letting the kids use LLM's. Before reading this article I thought it would be better to have a screenless device that you could ask questions. Kinda like an interactive children's dictionary. I might wait a little longer as this article brought about some issues that I missed.
“He was not done telling the story that he wanted to tell, and I needed to do my chores, so I let him have the phone,” recalled Josh, who lives in north-west Ohio. “I thought he would finish the story and the phone would turn off.”
But when Josh returned to the living room two hours later, he found his child still happily chatting away with ChatGPT in voice mode. “The transcript is over 10k words long,” he confessed
Letting a kid just burn energy into something like an LLM that doesn't stop would be a parents paradise. What better way to learn when there's always an answer to the infinite questions kids come up with.
“I literally just said something like, ‘I’m going to do a voice call with my son and I want you to pretend that you’re an astronaut on the ISS,’” Kaushik said. He also instructed the program to tell the boy that it had sent him a special treat.
“[ChatGPT] told him that he had sent his dad some ice-cream to try from space, and I pulled it out,” Kaushik recalled. “He was really excited to talk to the astronaut. He was asking questions about how they sleep. He was beaming, he was so happy.”
Childhood is a time of magic and wonder, and dwelling in the world of make-believe is not just normal but encouraged by experts in early childhood development, who have long emphasized the importance of imaginative play. For some parents, generative AI can help promote that sense of creativity and wonder.
All I know that once I start, it will be very difficult to take it away.
“The more that it became part of everyday life and the more I was reading about it, the more I realized there’s a lot I don’t know about what this is doing to their brains,” Kreiter said. “Maybe I should not have my own kids be the guinea pigs.”
This might be my main concern. What does it take away from how we're living and learning at the moment? It feels like it could become another additional "screen time" period.
McStay is particularly concerned with the way in which LLMs can create the illusion of care or empathy, prompting a child to share emotions – especially negative emotions. “An LLM cannot [empathize] because it’s a predictive piece of software,” he said. “When they’re latching on to negative emotion, they’re extending engagement for profit-based reasons. There is no good outcome for a child there"
There might be some other negative consequences that haven't been thought through enough. I notice that gen Zers really struggle with verbal communication. I think it's likely due to most social interaction done through phones and the internet in general. I imagine something similar could happen if kids become too dependent on their beloved LLM's.
The pitch for toys like Curio’s Grok is that they can “learn” your child’s personality and serve as a kind of fun and educational companion while reducing screen time. It is a classically Silicon Valley niche – exploiting legitimate concerns about the last generation of tech to sell the next. Company leaders have also referred to the plushy as something “between a little brother and a pet” or “like a playmate” – language that implies the kind of animate agency that LLMs do not actually have.
So it seems the new toys are coming fast. Once your kids friend has one, yours will want one too!
Interested to hear from how stackers deal (or plan to deal) with these issues with young kids.
50 sats \ 3 replies \ @optimism 9h
I personally don't think that it's safe right now, for the simple reason that there is too much hype and the model and chatbot releases are done so rapidly that it is impossible for there to be real quality control. There are adults that are literally driven over the edge against self-preservation with their chatbot usage and if it isn't safe for adults, it definitely isn't safe for children.
Unless something is provably and fully independently tested to be safe, I'd advice caution.
reply
122 sats \ 2 replies \ @OT OP 9h
Yes, probably better to be on the safe side.
Couldn't you simplify or refine a model to only do math? Or to only be a dictionary?
reply
That’s interesting, I actually tried getting ChatGPT to just translate everything I said into English, but it didn’t really work. Sometimes it would stop translating and start treating stuff like commands. It’s probably on me though, I just told it to translate whatever I typed in.
reply
50 sats \ 0 replies \ @optimism 9h
Yes, but I don't see that being done, likely because safety is not what rakes in the $$$, so there aren't any best practices. If OpenAI cannot prevent people from taking their own lives, then we can be sure that the technology for safety is severely underdeveloped at the moment.
I think that one could train a model only on, say, children's books, and actually curate the inputs, which is the opposite approach of what the current models have been doing. If I were to make a database of knowledge I would want to teach my children, would I let it ingest porn, violence and adult fantasy stuff?
reply
I have an 11 year old who has really taken to making up stories with ChatGPT. I think for the most part it's harmless as long as we are able to monitor it and he does it in moderation. We treat it like video game time. But I did force him to turn of memory between chats, because i think the memory function is one of the most dangerous.
reply
Using LLMs with kids can be valuable but only as a guided tool within limits and with plenty of real life interaction to balance the experience...
reply
0 sats \ 0 replies \ @nichro 4h
"you're absolutely right! a limit on candy and a bedtime are not reasonable"
reply
deleted by author