Now you're saying that someone like you or me training an open model and publishing the weights is worse than a proven scammer like Sam Altman (never forget WorldCoin and biometric data harvesting) getting billions off a closed model and open sourcing only crap? What makes you think ChatGPT (closed source) isn't lying to you already?
It is lying already. Giving everyone the power to make these extra little modifications to AI through open training and programming may not be the best decision, for all the reasons you mentioned about COVID. But letting the USG or any other villain run their own operations on the training may not be too wise either, and you'd suspect them of doing it anyway. You know, villains will villain.
Would you ban speech because some people will use it to trick others, leaving only the state to be able to trick others?
Would you ban guns because some people will use them to shoot others, leaving only the state able to shoot others?
... if villains will villain, then open models give us a chance at defense, especially if there's an imminent AI threat (which I don't believe in). Without open models I would never have learned anything about them myself. I'd just have hated Sam Altman while, a few years from now, maybe finding myself obsolete (which I also don't believe, but they definitely do, and they're actively working towards that goal - they can't shut up about this part of the lie).
With open models, I have a defense. I can locally 5x my productivity without gatekeepers, which means that if a villain gets a 5x, I get a 5x too, and they have no asymmetric advantage. I can also modify the open weights to be more productive: Salesforce, for example, took llama2, fine-tuned it, and made smaller models as effective at tool calling as the huge closed-weight ones.
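To make the no-gatekeepers point concrete, here's a minimal sketch of running an open-weight model entirely on your own hardware. It assumes the Hugging Face `transformers` library (plus `accelerate` for device placement); the model ID below is just an illustrative open-weight checkpoint, substitute any one you've downloaded:

```python
# Minimal sketch: local inference on open weights, no API gatekeeper.
# Assumes `pip install transformers accelerate` and a downloaded checkpoint;
# the model ID is illustrative, not a specific recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # any open-weight model works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the argument for open model weights in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation runs entirely on local hardware; no external service can
# filter, log, or revoke access to the inference.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing in that loop touches a remote API: no rate limits, no content filter, no account that can be revoked. That's the defense.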
No open weights, no counterbalance to the villains.