30 sats \ 4 replies \ @optimism 15h \ on: To Share Weights from Neural Network Training Is Dangerous AI
Alright. So closed weights are better? Then all we have to do is steal the GPT model, modify it, do the same thing, and no one will ever know we did it, nor be able to detect it?
Wasn't the problem with the mad scientists that they were lying?
> Wasn't the problem with the mad scientists that they were lying?
Well, yes, they are lying all the time, but the main problem is that they never think of consequences beyond the test tube they are looking at or the device they have just constructed. Unfortunately for the rest of us, the test tube product escapes into the population, killing innocent people, and the devices are good for 100,000 people a pop!
Now, do we want the next MAD SCIENTIST™, MAD PROGRAMMER, or whatever kind of idiot to release a vengeful AI on the population? Didn't we get enough of that with COVID-19? Aren't we going to get enough of that with the next Gates Special?
reply
For COVID, the research was done in an unsafe lab (generally considered proven by Congress) and funded by the DoD, a fact that took some people 3 years of their lives to uncover through the most awful FOIA procedures imaginable (RTK.) So this was the USG sponsoring research that was then covered up, while it was done in unsafe places to save money. That is the status quo as I read Congress's conclusion. EVERY action here was taken by unsavory human beings: the USG, EcoHealth Alliance, the scientists that covered it up, Fauci... you name it. All scammers.
Now you're saying that someone like you or me training an open model and publishing the weights is worse than a proven scammer like Sam Altman (never forget WorldCoin and biometric data harvesting) getting billions off a closed model and open sourcing only crap? What makes you think ChatGPT (closed source) isn't lying to you already?
reply
> Now you're saying that someone like you or me training an open model and publishing the weights is worse than a proven scammer like Sam Altman (never forget WorldCoin and biometric data harvesting) getting billions off a closed model and open sourcing only crap? What makes you think ChatGPT (closed source) isn't lying to you already?
It is lying already. Giving out the power to do these extra little modifications to AI through open training and programming may not be the best decision, for all the reasons you mentioned about COVID. Letting the USG or any other villain do their operations on the training may not be too wise either, although you would suspect them of doing it anyway. You know, villains will villain.
reply
Would you ban speech because some people will use it to trick others, leaving only the state to be able to trick others?
Would you ban guns because some people will use them to shoot others, leaving only the state to be able to shoot others?
... if villains will villain, then the open model gives a chance for defense, especially if there's an imminent AI threat (which I don't believe.) Without open models I myself would not have learned anything about them. I'd have just hated Sam Altman while maybe finding myself obsolete a few years from now (which I also don't believe, but they definitely do, and they are actively working towards that goal - they can't shut up about this aspect of the lie.)
With open models, I have a defense. I can locally 5x my productivity without gatekeepers. This means that if a villain gets a 5x, I get a 5x too, and they have no asymmetric benefit. I can also modify these open weights to be more productive; for example, Salesforce took Llama 2, fine-tuned it, and made smaller models as effective at tool calling as the huge closed-weight ones.
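To make "locally, without gatekeepers" concrete, here's a minimal sketch using the Hugging Face transformers library. The model path is a placeholder, not a specific recommendation; point it at whatever open-weights checkpoint you have downloaded.

```python
# Minimal sketch: run an open-weights model entirely on your own hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local directory (assumption); any open-weights checkpoint works.
MODEL = "path/to/open-weights-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

prompt = "Draft a polite reply declining this meeting invite:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

# No API key, no usage log, no gatekeeper: nothing leaves your machine.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same checkpoint can then be fine-tuned on your own data, which is exactly the kind of modification the Salesforce example above relied on.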
No open weights, no counterbalance to the villains.
reply