Model welfare is an emerging field of research that seeks to determine whether AI is conscious and, if so, how humanity should respond.
In the often strange world of AI research, some people are exploring whether the machines should be able to unionize.
I’m joking, sort of. In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral considerations, such as legal rights. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year.
Earlier this month, Anthropic said it gave its Claude chatbot the ability to terminate “persistently harmful or abusive user interactions” that could be “potentially distressing.”
Absolutely not!
21 sats \ 0 replies \ @Fenix 5 Sep
Data has no natural rights. Now, if you're talking about state-made rules, I wouldn't be surprised if some henchman of a state-backed corporation has already thought of it and proposed it.
No way!
Not while it's only pretending to have consciousness.