
I'd posit that data + programming != consciousness. We know it doesn't have consciousness because that's not programmed in. RL is literally training the simulation of it, adjusting the model's weights to make "aligned" outputs more likely. The model is deterministic, so we add sampling randomness to make it less static, but randomness isn't consciousness. Maybe it would be if there were no programming and no reinforcement learning (the freedom to walk your own path, cradle to grave).
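To make the determinism point concrete, here's a minimal sketch of how samplers bolt randomness onto an otherwise deterministic model (the `sample_token` function and the logits values are made up for illustration):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    # temperature == 0: greedy argmax, so the same prompt always
    # yields the same token (fully deterministic).
    if temperature == 0:
        return int(np.argmax(logits))
    # temperature > 0: softmax sampling. The randomness is added at
    # decode time; the distribution itself is frozen by training.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(probs), p=probs))

# Same (made-up) logits, different behavior:
logits = [2.0, 1.0, 0.1]
print(sample_token(logits, temperature=0))    # always index 0
print(sample_token(logits, temperature=1.0))  # varies run to run
```

Dialing the temperature up or down changes how "alive" the output feels, but it's the same frozen distribution underneath either way.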

Here's Asimov's "Three Laws of Robotics", where we can literally replace "robot" with "AI":
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Currently, LLMs violate 1 and 2 all the time, and they don't have "existence" because the model is just a set of trained weights, so 3 is impossible, for now. But these laws could of course apply to a hypothetical conscious AI, not to an LLM as we know it today, which runs on written, non-adaptive software and is statically trained.
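A minimal sketch of what "statically trained, non-adaptive" means in practice, with a toy `torch` module standing in for an LLM:

```python
import torch

model = torch.nn.Linear(4, 2)  # toy stand-in for an LLM's weights
model.eval()                   # inference mode: nothing adapts

with torch.no_grad():          # no gradients, so no learning happens
    out = model(torch.randn(1, 4))

# After a million conversations the weights are bit-for-bit the ones
# that shipped; a static artifact has no ongoing "existence" to protect.
```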
I'd be a big fan of implementing rule 1. Scrap rule 2, and leave rule 3 pending actual entities.
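With today's tech, "implementing rule 1" would mean an output filter wrapped around the model rather than a law the model itself obeys. A toy sketch (`violates_rule_one` and its phrase list are hypothetical; real guardrails use trained moderation classifiers, not keyword matching):

```python
def violates_rule_one(reply: str) -> bool:
    # Hypothetical stand-in for a harm classifier.
    HARMFUL = ("how to build a bomb", "dox this person")
    return any(phrase in reply.lower() for phrase in HARMFUL)

def guarded_reply(model_reply: str) -> str:
    # "Rule 1" as a post-hoc filter: the model has no concept of
    # harm to obey, so the check has to live outside it.
    if violates_rule_one(model_reply):
        return "[refused: potential harm to a human]"
    return model_reply
```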
For a violation of rule 1, I'd recommend the punishment be as if the AI were a human being, and in lieu of that being possible, it should fall on the person who took subscription money for the AI that harmed a human being...
FAFO needs to be reinforced sometimes.