We’re All Cold War Spies in the AI Era
Cultivate an attitude of distrust and settle in for the cyberpunk future literally none of us wanted.
That call from your mother? It’s fake. This isn’t sci-fi: people are being robbed right now.
Let’s get something straight: what we currently call “AI” isn’t the artificial intelligence of Star Trek. OpenAI’s ChatGPT and all the other implementations may be extremely early groundwork for Data (or the Borg), but any hint of “intelligence” is just a clever language trick. That’s why these “AI” services are called Large Language Models (LLMs): they’re statistical models trained on massive pools of data to respond in predictable ways (with sometimes hilariously unpredictable results).
The danger of LLMs is not that they are going to rise up, Terminator-style, and demand Sarah Connor’s life. The danger is in how we human beings use this new technology. Long term? The corporations will happily see us all plugged in like the human batteries in The Matrix — short term? Well, in the short term, criminals are going to make your already difficult life even worse.
A few years back, headlines were made when an AI-generated voice was used to steal $220,000 from a UK energy company. The Washington Post reported that the “software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent…” of the company’s boss, and this fake was used to convince a chief executive of the company to wire the money.
We’ve also had some true hilarity from the world of fake AI conversations, like the time a telemarketer was kept on the line for fifteen minutes by a bot that screamed at its “children” for bringing an ant farm into the house and complained about its ruined figure-skating career.
But humorous hijinks are a shiny golden veneer on a terrifying industry that is taking cons to the next level. And most of these scams aren’t attacking wealthy energy companies. Most scams are attacking our parents, grandparents, and children — the most vulnerable among us.
“The person calling Ruth Card sounded just like her grandson Brandon. So when he said he was in jail, with no wallet or cellphone, and needed cash for bail, Card scrambled to do whatever she could to help.
“It was definitely … a sense of dread,” she said. “We have to help him now.”
Card, 73, and her husband, Greg Grace, 75, dashed into their bank in Regina, Saskatchewan, and withdrew 3,000 Canadian dollars ($2,207 in US currency), the daily maximum. They went to another branch for more money. But a bank manager pulled them into his office: another patron had received a similar call and discovered the terrifyingly accurate voice was a fake, Card recalled the banker telling them. The person on the phone was probably not their grandson.”
In 2022, reports The Washington Post, “impostor scams were the second most popular racket in the US, with over 36,000 reported victims.” That number is only rising.
These days, most of us leave a vast digital trail behind us as we surf the net. Beyond our data trail, however, are deadly breadcrumbs that scammers can snarf up and use for their own ends: including our voices and our faces. Every time we upload a video, appear on a podcast, or allow ourselves to be recorded in any way, we add to the likelihood that someone can create convincing voice and video fakes of us.
What can we do to defend ourselves against AI scams?
The oldest method of basic security is the password.
Tracing the etymology leads us to “watchword,” the Middle English “wacchworde,” and the Old English wæċċan (“to be awake; to keep watch”). These days, we think of passwords as complex alphanumeric strings that reside inside our password managers. But we’ve always used passwords to keep ourselves safe from the unknown.
No matter how good a fake the AI can produce, it won’t be able to replicate a secret watchword that you and your loved ones know (as long as it remains secret). If “Brandon” had been asked, “What’s the weather in Oklahoma this time of year, dear?” poor Ruth Card and her husband wouldn’t have been scammed. Why? Because the scammer wouldn’t have known to say “Dour, with a chance of spring.”
This method is a pretty simple one, as long as you can trust your fellow watchword-holders not to post the word or phrase online, tell it to people who shouldn’t know, or just plain forget it.
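For the technically inclined, the watchword idea is just a shared-secret check, and it can be sketched in a few lines of Python. The function name and the normalization rules here are illustrative assumptions, not a standard; the one real detail is using a constant-time comparison, the same habit software uses when checking secrets.

```python
import hmac

def verify_watchword(expected: str, given: str) -> bool:
    """Check a spoken watchword against the agreed secret.

    Normalizes case and surrounding whitespace (people don't speak
    in exact strings), then compares in constant time so the check
    doesn't leak hints about how close a guess was.
    """
    a = expected.strip().lower().encode()
    b = given.strip().lower().encode()
    return hmac.compare_digest(a, b)

# A caller who knows the secret passes; a scammer guessing does not.
# verify_watchword("Dour, with a chance of spring",
#                  "dour, with a chance of spring")  -> True
# verify_watchword("Dour, with a chance of spring",
#                  "sunny and mild")                 -> False
```

Of course, with your family the “implementation” is just a question and an answer over the phone; the code only makes the logic explicit.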
Throughout the ages, we’ve come up with a variety of spectacularly complex and ingeniously simple methods for securely getting information to someone who needs it. And, these days, the same methods used by militaries and spies of old can save us from the AI.
Other best practices include:
- If you get a call from someone at an “official” institution, such as your bank, school, or place of employment, hang up and call back using the official number you have listed in your personal contacts.
- If a relative or friend calls you desperately asking for help (especially in the form of money), hang up and call them back right away (using the number from your contacts) to confirm that it’s really them.
- Keep a list of your close family’s public payment links, such as PayPal and Venmo, so you can make sure that any money you send is really being sent to the right person.
- Enable two-factor authentication on all your services, bank accounts, and health portals (even if it’s a pain in the behind).
- Use a dedicated, secure password manager (not the one built into your web browser), and never reuse a password you’ve made up yourself, or variations of it, across accounts.
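If you’re curious why those two-factor codes are trustworthy: most authenticator apps implement TOTP (RFC 6238), which is just HOTP (RFC 4226) with the counter derived from the clock. This is a minimal sketch to show how little magic is involved, not something to roll yourself in production:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """One-time code per RFC 4226: HMAC-SHA1 over a big-endian counter."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the last nibble of the digest picks an offset,
    # and four bytes starting there become a 31-bit integer.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """RFC 6238: HOTP with the counter set to the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period)

# RFC 4226's published test key yields its published first code:
# hotp(b"12345678901234567890", 0)  -> "755224"
```

Because the code changes every thirty seconds and is derived from a secret only you and the service share, a scammer who steals your password still can’t log in without it.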
Remember, you probably won’t be protected from financial loss if you’ve willingly sent money to a scammer. Worse, if the scammer can get access to your information and trick the bank into believing they’re really you, it might be an uphill battle to prove the bank wrong.
So, stay safe, cultivate an attitude of distrust toward surprises and alarm, and settle in for the cyberpunk future literally none of us wanted.
Hi there! I’m Odin Halvorson, an independent scholar, film fanatic, fiction author, and tech enthusiast. If you like my work and want to support me, please consider subscribing to a paid tier for as low as $2.50 per month!