
Good ‘actors,’ bad actors in AI


Jamie Biesiada

On a recent World Travel & Tourism Council (WTTC) webinar, two speakers, Aimee and Aidan, summarized some of the group’s recent research on AI in travel, conducted in conjunction with Microsoft.

“AI is no longer a futuristic concept,” Aimee said. “It is a reality today that can transform our industry in exciting and remarkable ways. Imagine being able to optimize your business operations and revolutionize the way you market, sell and promote your tourism destinations.”

Aidan talked about recent innovations in generative AI.

“The future for travel and tourism is bright, and AI is key to unlocking a new world of possibilities,” Aidan said.

A few paragraphs ago, I called Aimee and Aidan “speakers.” That was a deliberate word choice: neither is human. They were products of the generative AI “they” were speaking about, created by the WTTC’s director of travel transformation, James McDonald. He dubbed them “AI-mee” and “AI-dan.”

To create them, McDonald uploaded WTTC AI reports into an AI assistant. He asked it to create a summary and a two-minute script with its key points. McDonald then asked the AI to create image prompts related to the script, which he input into an AI image generator. Then, he fed both the script and images into a speaking generator, and AI-mee and AI-dan were born.
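For readers who think in code, the workflow McDonald describes boils down to a four-step pipeline. The Python sketch below is purely illustrative: every callable (summarize, propose_prompts, render_image, render_video) is a hypothetical placeholder for whichever AI assistant, image generator and speaking generator you wire in, not any real vendor’s API.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch of the pipeline described above. All four callables
# are hypothetical stand-ins for external AI services; none of these names
# refer to a real vendor API.

@dataclass
class PresenterAssets:
    script: str               # two-minute narration drawn from the reports
    image_prompts: list[str]  # prompts derived from the script
    images: list[bytes]       # rendered stills, one per prompt
    video: bytes              # final talking-presenter video

def build_presenter(
    report_texts: list[str],
    summarize: Callable[[str], str],              # AI assistant: reports -> script
    propose_prompts: Callable[[str], list[str]],  # AI assistant: script -> image prompts
    render_image: Callable[[str], bytes],         # image generator: prompt -> image
    render_video: Callable[[str, list[bytes]], bytes],  # speaking generator
) -> PresenterAssets:
    # Step 1: condense the uploaded reports into a short spoken script.
    script = summarize("\n\n".join(report_texts))
    # Step 2: ask the assistant for image prompts matching the script.
    prompts = propose_prompts(script)
    # Step 3: render one image per prompt.
    images = [render_image(p) for p in prompts]
    # Step 4: feed script and images to the speaking generator.
    video = render_video(script, images)
    return PresenterAssets(script, prompts, images, video)
```

The point of the sketch is how little orchestration is involved: each stage just hands its output to the next off-the-shelf service.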

It was a cool video. I thought it was pretty obvious that AI-mee and AI-dan were AI creations; the technology for replicating human mannerisms in a digital environment isn’t perfect. Yet.

As generative AI gets better and better at things like mimicking voices or even videos, the opportunities — and the threat — it poses loom large.

To be clear, McDonald wasn’t trying to fool anyone with AI-mee and AI-dan; he told us exactly how he created them before playing their video presentation.

Yet the idea that a bad actor could use generative AI to, for instance, perpetrate fraud via a deepfake is a scary proposition. (A deepfake is a piece of content, such as audio, video or images, generated to impersonate a real person.)

I haven’t heard of any AI-enhanced attacks on agencies yet, but it’s likely only a matter of time. In a report issued last year, the Bank of America Institute called deepfakes “one of the most effective and dangerous tools of disinformation” and noted deepfakes imitating executives are already being used to target some organizations.

Travel agencies are frequent targets for fraudsters. ARC keeps an updated page of the most recent schemes, such as a fraudster purporting to be Sabre emailing advisors and asking them to click a link to log in to the GDS. Doing so hands the agent’s Sabre login credentials directly to the fraudster.

For deepfakes, specifically, the Bank of America Institute recommends education first. It also recommends using cybersecurity best practices and strengthening validation and verification protocols.

The report also offers some practical tips for identifying deepfakes, at least for now; the fakes will only get better over time. Deepfake audio can include pauses between words or sentences that sound longer than natural, and voices can sound flat (“if it sounds off, it likely is”). For video, look for poor lip-syncing, long stretches without blinking, blurriness around the jawline and patchy skin tones.

Stay vigilant. The WTTC’s AI-mee and AI-dan were friendly presenters and a great use of technology. But it might not be long before a nefarious deepfake comes knocking. 
