As if fake news wasn't getting bad enough, AI is proving that it's about to get a lot worse.
I understand the immediate red flags raised about AI because of stuff like this, but I think those fearing AI are missing the bigger picture.
I suspect that many (rightfully) live with the PTSD of how little privacy there is in the world today. In the short term (~5-10 years) things may get even more dystopian, but IMO we're nearing the worst of it, and in the long run AI will actually do much more to improve the privacy situation than harm it. Much to the dismay of those in power.
Humans simply won't be able to determine what's real and what's fake on the internet (in its current state) anymore. Images, videos, and online content in general will become far too easy for AI to replicate and craft narratives around in the media.
What's going to be necessary are DIDs, as LeClair puts it.
This also raises a question for Stacker News! @k00b, what do you think about how SN is currently positioned for AI in general? Security measures taken? Any plans for implementation? Anything really, I am curious to know!
Preston Pysh and Jeff Booth recently had a conversation on the same subject. AI will become so good at finding every little detail about you that there's now a huge wave of demand coming for cryptographically secured, decentralized custody of both money and information at large.
AI not only creates more of that demand, but does so at a much faster rate now that it's starting to get very real. Long term, that's good for bitcoin and good for people starting to live in a more secure online world. Short term, well uhhhhhh ehhhhhhhh I can't be so sure. Take self custody, people.
We'll see how many come out of this environment orange pilled. A LOT, I am guessing.