This year the big topic in tech is artificial intelligence.

It’s a powerful tool, so powerful that some experts worry it really could undo human civilization.

But in a lot of ways, it’s like every new technology that’s come along: if we use it to harm others, it’s harmful, and if we use it for good, it’s helpful.

For example: there’s a guy who’s using AI phone bots to keep phone scammers from scamming people.

By now, anyone with a phone has gotten calls from people claiming to be computer tech support who need our username and password to clear viruses off our machines.

Or we have an outstanding debt that can only be cleared if we buy a bunch of Apple gift cards.

Or they’ve been trying so hard to reach us about our extended warranty!

Sometimes our phones block these calls before we get them, or, if we recognize that the calls are scams, we hang up.

But then the scammers go on to the next potential victim, and they keep calling until they’ve gotten some unsuspecting person to part with some money or some personal information.

The Wall Street Journal reported this month on Roger Anderson's system, Jolly Roger.

Anderson essentially told AI to come up with scripts that could keep a scammer on the line for long periods of time without ever giving them anything useful.

He and his colleagues built digital characters, complete with voices, that hem and haw and get distracted and ask off-topic questions that throw the scammers off their game.

Anderson posts some of these calls online to show people how they work.

There’s one where the scammer asks a character known as Whitey Whitebeard for information from his credit card statements.

Jolly Roger has Whitebeard respond by saying he can do that but first he has to go get his reading glasses, or he won’t be able to read anything.

Three minutes later, the voice returns… and then starts stalling all over again.

After six minutes the would-be scammer gives up.

And that’s the idea: if phishing scammers are tied up talking to robot voices, they run out of time to scam actual people.
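The stalling idea described above can be sketched in a few lines of code. To be clear, this is not Anderson's actual Jolly Roger system — the stall lines, function names, and turn-by-turn structure here are all made up for illustration — but it shows the basic trick: whatever the scammer asks, respond with something time-wasting and never anything useful.

```python
import random

# Hypothetical stall lines in the spirit of the "Whitey Whitebeard" character.
# (These are invented examples, not quotes from the real calls.)
STALLS = [
    "Hold on, I need to go find my reading glasses.",
    "What was that? My hearing aid is acting up again.",
    "Did I tell you about my grandson's baseball game?",
    "Let me write that down... now where did my pen go?",
]

def stall_response(rng=None):
    """Return a time-wasting reply instead of any useful information."""
    rng = rng or random.Random()
    return rng.choice(STALLS)

def simulate_call(turns=4, seed=1):
    """Print a few turns of a scammer getting nothing but stalls."""
    rng = random.Random(seed)
    for turn in range(1, turns + 1):
        print(f"Scammer question {turn} -> bot: {stall_response(rng)}")

simulate_call()
```

The real system layers AI-generated scripts and synthesized voices on top of this basic loop, but the goal is the same: every turn of the conversation burns the scammer's time and yields nothing.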

I mean, we knew AI was going to disrupt what humans do.

It’s just that some humans do not-so-nice things.

In 2020, a soccer team in Scotland started livestreaming its matches with help from automated cameras.

These devices used AI to find the ball on the pitch and track it as it moved.

Except what they ended up tracking was not the actual ball but the bald head of a linesman.

Who among us hasn’t done this?

People Hire Phone Bots to Torture Telemarketers (Wall Street Journal)

AI Camera Ruins Soccer Game For Fans After Mistaking Referee’s Bald Head For Ball (IFL Science)

Keep the humans who make this podcast busy by backing them on Patreon

Photo by Mikey via Flickr/Creative Commons