
Interesting, I had never thought about the ethical considerations behind developing LLMs. It’s rather alarming, though. Whose moral yardstick are we using in the making of these LLMs? And how can we ensure that the developers involved are working for the betterment of mankind instead of subtly manipulating us towards some insidious goal? This is the kind of article that will turn @Bitman off Gen-AI haha
Actually, I found the article really interesting and thought-provoking.
There have been so many conversations about ethics in robotics that I must admit I've been blindsided: I'd never considered that AI services are the first real-life application of this in our lives.
Within robotics, there's been the theoretical scaffold of the Three Laws of Robotics, formulated by sci-fi author Isaac Asimov. These laws have been central to the conversation since 1942, when Asimov used them as a plot device in the short story "Runaround", later collected in I, Robot.
As a refresher, the laws are:
The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In fact, Asimov considered the Laws to be subconsciously in people's minds anyway:
"The Laws apply, as a matter of course, to every tool that human beings use", and "analogues of the Laws are implicit in the design of almost all tools, robotic or not":
Law 1: A tool must not be unsafe to use. Hammers have handles and screwdrivers have hilts to help increase grip. It is of course possible for a person to injure himself with one of these tools, but that injury would only be due to his incompetence, not the design of the tool.
Law 2: A tool must perform its function efficiently unless this would harm the user. This is the entire reason ground-fault circuit interrupters exist. Any running tool will have its power cut if a circuit senses that some current is not returning to the neutral wire, and hence might be flowing through the user. The safety of the user is paramount.
Law 3: A tool must remain intact during its use unless its destruction is required for its use or for safety. For example, Dremel disks are designed to be as tough as possible without breaking unless the job requires it to be spent. Furthermore, they are designed to break at a point before the shrapnel velocity could seriously injure someone (other than the eyes, though safety glasses should be worn at all times anyway).
The Wikipedia article that expands on this is linked above. Other sci-fi authors have since built on the three laws and explored their loopholes.
It might be worthwhile exploring the subject further.
Thanks @cryotosensei for bringing this to my attention.
Incidentally, although I've not yet seriously used AI text services, I do think they're worth exploring and using.
Great points raised. I tend to think of Gen-AI as so revolutionary that I forget that this advancement has been many decades in the making.
Seems like your resistance towards Gen-AI has softened. Come, join me on the dark side