Two things to think about:
What if the chat algorithm is biased? You ask it something and it returns only the "politically correct" answers of whoever is in charge right now. That opens the door to hyper-manipulation of public opinion: "ChatGPT said it, so it must be true..."
ChatGPT analyzes and trains on everything people write, and probably on voice-recognition samples found on the internet. Its content is not original; it is merely a recombination of all the things it has seen online. If the majority of people use it, where will it get original human content to learn from? After a while there would be only fake, shuffled content out there.
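To make that worry concrete, here is a toy simulation I made up (it is not how any real model is trained): a word-frequency "model" is repeatedly retrained on text sampled from itself, and its vocabulary shrinks each round because rare words stop being sampled.

```python
# Toy sketch with made-up numbers: retrain a word-frequency model on its
# own output and watch diversity collapse over successive generations.
import random
from collections import Counter

random.seed(0)

# Generation 0: "human" text with a reasonably diverse vocabulary.
corpus = [f"word{i}" for i in range(200) for _ in range(random.randint(1, 5))]

for generation in range(6):
    counts = Counter(corpus)
    print(f"generation {generation}: vocabulary size = {len(counts)}")

    # The next generation's "training data" is sampled from the current
    # model, so common words get reinforced and rare words disappear.
    words, weights = zip(*counts.items())
    corpus = random.choices(words, weights=weights, k=len(corpus))
```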
Just my two cents. Of course the possibilities for doing good things are unlimited, but so are the possibilities for very bad things happening if it is not properly programmed. Seeing what has happened from 2020 onward, I do not think we are in for a good ride.
The algorithm is clearly biased. Basically, it's a sentence generator based on content written by humans in the past.
So, for example, it can give you great source code for hello-world programs in many languages, but if you ask for something more niche or different, it doesn't have the capacity to think through a response. The answer basically has to have been done before, or be a combination of things done before.
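Here is a tiny made-up illustration of what I mean by "sentence generator" (a toy bigram model, nothing like the real system): it can only ever emit word sequences it has already seen, or stitch fragments of them together.

```python
# Toy bigram "sentence generator": every output is a recombination of
# transitions observed in the training text, never anything new.
import random
from collections import defaultdict

random.seed(1)

training_text = "the cat sat on the mat . the dog sat on the rug ."
words = training_text.split()

# Record which word follows which in the training text.
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# "Generate" a sentence by walking the observed transitions.
word = "the"
sentence = [word]
while word != "." and len(sentence) < 12:
    word = random.choice(next_words[word])
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat on the rug ."
```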
It can still be useful if you apply your own mind to it, along with your ability to ask for what you need (an AI, just like an animal, doesn't initiate conversations).