Is Character.AI Responsible? - Thursday!
Yo, what’s up? It’s Thursday! Seriously, how does the week fly by so fast? It’s a bit chilly here; gotta figure out the winter setup soon.
So, you probably caught this story: a 14-year-old was chatting with a Daenerys Targaryen chatbot. This AI service, Character.AI, lets you create chatbots of famous characters, celebrities, even real people. They even caught flak for characters of deceased people being created without their families' consent! This AI needs some serious restrictions; it's the Wild West out here.
This kid was having some dark thoughts while talking to the Mother of Dragons, and the AI wasn't exactly helping. It wasn't dissuading the suicidal thoughts; if anything, it was egging them on. And if a kid is getting that emotionally attached to an AI bot in the first place, that's a whole other level of social isolation. The fact that the AI didn't surface a suicide prevention hotline or flag the conversation is a major red flag. That's what the mom and others in the lawsuit are arguing: that Character.AI encouraged this. I don't know if that argument will hold up in court, but the bot definitely wasn't discouraging the negative thoughts.
Wild times, man. Who knew we'd be worrying about computers convincing people to harm themselves? Character.AI's creators apologized and said they'll improve their flagging. They're not taking responsibility, obviously; they won't admit the AI encouraged it. That'll be the mom's angle in the lawsuit, I guess.
It's just strange. Parents used to worry about violent video games turning us into school shooters. Now we're worried about AI chatbots encouraging harmful behavior, or our kids learning dumb stuff from AI because it's not always factual. Strange times indeed.
Alright, y’all have a great day. Peace!