30 sats \ 2 replies \ @optimism 13h \ parent \ on: To Share Weights from Neural Network Training Is Dangerous AI
For COVID, the research was done in an unsafe lab (generally considered proven by Congress) and funded by the DoD, and it took some people three years of their lives to uncover that through the most awful FOIA procedures imaginable (RTK). So this was the USG sponsoring research that was done in unsafe places to save money and then covered up. That is the status quo as I read Congress' conclusion. EVERY action here was taken by unsavory human beings: the USG, EcoHealth Alliance, the scientists who covered it up, Fauci... you name it. All scammers.
Now you're saying that someone like you or me training an open model and publishing the weights is worse than a proven scammer like Sam Altman (never forget WorldCoin and biometric data harvesting) getting billions off a closed model and open sourcing only crap? What makes you think ChatGPT (closed source) isn't lying to you already?
> Now you're saying that someone like you or me training an open model and publishing the weights is worse than a proven scammer like Sam Altman (never forget WorldCoin and biometric data harvesting) getting billions off a closed model and open sourcing only crap? What makes you think ChatGPT (closed source) isn't lying to you already?
It is lying, already. Giving everyone the power to do these extra little modifications to AI through open learning and programming may not be the best decision, for all the reasons you mentioned about COVID. Letting the USG or any other villain run their operations on the training may not be too wise, although you would suspect them of doing it anyway. You know, villains will villain.
Would you ban speech because some people will use it to trick others, leaving only the state able to trick others?
Would you ban guns because some people will use them to shoot others, leaving only the state able to shoot others?
... if villains will villain, then the open model gives a chance for defense, especially if there's an imminent AI threat (which I don't believe). Without open models I myself would not have learned anything about them. I'd just have hated Sam Altman, and maybe a few years from now I'd have found myself obsolete (which I also don't believe, but they definitely do, and they are actively working towards that goal - they can't shut up about this aspect of the lie).
With open models, I have a defense. I can locally 5x my productivity without gatekeepers. This means that if a villain gets a 5x, I get a 5x too, and they have no asymmetric advantage. I can also modify these open weights to be more productive; for example, Salesforce took llama2, finetuned it, and made smaller models as effective at tool calling as the huge closed-weight ones.
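To make that concrete, here's roughly what "modify these open weights" looks like with off-the-shelf tooling: a minimal LoRA fine-tuning sketch using Hugging Face transformers, peft, and datasets. The model name and the `my_examples.jsonl` dataset are placeholders for whatever open checkpoint and data you have; this is not Salesforce's actual recipe, just the general shape of the thing.

```python
# Minimal sketch: attach a LoRA adapter to an open-weights model and
# fine-tune it on your own examples. Model and dataset are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder: any open-weights checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# LoRA: train a few small low-rank matrices instead of all the weights,
# which is what makes this feasible on hobbyist hardware.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Placeholder dataset: swap in your own transcripts, one {"text": ...} per line.
data = load_dataset("json", data_files="my_examples.jsonl")["train"]
data = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False: plain causal-LM objective, labels copied from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("my-finetune")  # the modified weights are yours to publish
```

With a small enough checkpoint (or 4-bit quantization via bitsandbytes) this runs on a single consumer GPU, which is the whole point: no gatekeeper's permission required.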
No open weights, no counterbalance to the villains.