Every day, a new report details how AI can be used for evil when all we really want is the “Star Trek” replicator. It’s a damn shame, but we cover what we must.
Like this: Joseph Cox of 404 Media used a site called OnlyFake to create realistic images of bogus IDs, which he then used to open an account on a crypto exchange.
How it works
The user enters a name and biographical data, uploads a photo (or chooses one from OnlyFake’s archives), and selects an AI-generated signature.
In minutes, OnlyFake generates images of the fake ID’s front and back, which users can attempt to upload to any website that requires ID verification.
- It can whip up fake passports and driver's licenses for myriad countries and states.
- OnlyFake claims it can generate up to 20k documents per day and batch-produce hundreds at once from data in an Excel table.
Barriers do exist for sites that require video proof or photos of users holding up their own IDs, but naturally, people are working to bypass these, too.
So much yikes
This certainly has scary implications for any site that requires ID verification, from banking to Airbnb. It's also another reason why legislators, companies, and, well, everyone needs to be hypervigilant about AI and how fraudsters may use it.
Already, bad actors have used AI to scam victims out of money by impersonating their loved ones, and to churn out fake books and articles attributed to fake authors.
One report even suggested that driverless vehicles could be hacked and used as weapons, which, OK, please stop.
And for companies, it's shaping up to be a very annoying game of Whac-A-Mole, routinely combating new scams as the tech evolves, something Microsoft recently learned after its tools were used to create nonconsensual deepfakes of superstar Taylor Swift. Sigh.