The novelty of AI makes right now the most dangerous time for it. Criminals and malicious actors of all kinds (various states included) are actively looking to weaponize AI and exploit the rest of us. While everyone dicks around using ChatGPT to write horrid blog posts, or having AI write a "Happy Birthday" 🥳 email to grandma, black hats are finding out how to use those tools to wreck them, and very little is being done as a countermeasure.
That's exactly why I shared this here. Not using these commercial tools is probably the best we can do to avoid diving into idiocracy. On the other hand, it's good to know how they work and understand why to avoid them. Apart from that, I honestly struggle to find countermeasures to better prepare for the worst, at least psychologically.
Keeping my values in mind is probably the most useful: Don't trust, verify. Always!
AI is in its infancy. Let it evolve. I believe these issues will be sorted out in time. Deepfakes are serious, and it shouldn't be up to individuals to recognise them; it should be the responsibility of the big players to identify these frauds and stop them from making use of someone else's identity.
I like your optimism. I don't believe that relying on third parties or big players is a good thing, but I get your point, and they can definitely do something.
But when it comes to identity fraud, the big players are governments, and I don't see them doing much apart from providing digital IDs and other artifacts that enhance control over citizens.
I wonder why the numbers are higher in Asia. What is it there that makes this easier than in other countries?
From the link above, you can check the source of the data from which the infographic was generated. The authors work for an ID verification service provider, which obviously has an interest in selling its products and promoting information about their "value".
Could these same services be the main source of security fraud?