80% Of Surveyed Businesses Don’t Have Plans For An AI-Related Crisis
A fundamental best practice of crisis management is preparing for both obvious and not-so-obvious risks. Business leaders who ignore potential threats could create a self-inflicted crisis for their companies.
A case in point is the set of hidden dangers posed by AI, an evolving technology that can also deliver important advantages and benefits for companies and organizations that use it with appropriate safeguards.
Despite news coverage and warnings about the threats from this technology, 80% of surveyed organizations still don’t have a dedicated plan to address generative AI risks, including AI-driven fraud attacks.
That’s according to the 2024 New Generation of Risk Report, released last month by Riskonnect, a risk management software company.
Awareness Of Growing Risks And Threats
Of the 218 risk, compliance, and resilience professionals around the world who responded to the survey:
- 24% said AI-powered cybersecurity threats—such as ransomware, phishing, and deepfakes—will have the biggest impact on businesses over the next 12 months.
- 72% said cybersecurity risks are having a significant or severe impact on their organization, an increase over last year’s 47%.
- 65% of companies don’t have a policy in place to govern the use of generative AI by partners and suppliers, even though third parties are a common entry point for fraudsters, according to Riskonnect.
Mounting Concerns
“Concerns over AI ethics, privacy, and security continue to mount,” according to Riskonnect’s report.
“AI also tentacles into cybersecurity, geopolitics, and other areas, supercharging the risks of everything in its path. Hackers, for instance, are getting smarter, more sophisticated, and dangerous by the minute as they leverage the latest AI advancements to infiltrate organizations,” it observed.
Despite growing concern about the crisis situations AI could cause, efforts to address those concerns appear to be lagging behind.
The report points out that “while companies’ top concerns [about AI] have shifted over the past year, risk management approaches largely haven’t evolved fast enough, and key gaps remain. The data also suggests that risk management is increasingly seen as a strategic business function, but continued investment is necessary to keep up with the changing risk landscape.”
Internal Threats
Internal threats can be just as damaging to companies as external ones. One example is the use of generative AI by companies to create marketing-related content.
“While well-prompted AI is an excellent starting point for written text, marketers need to ensure that ad copy, emails, and text messages are carefully proofread by human editors and not merely resubmitted to the same or a different AI program for proofing. This is because generative AI is focused on writing for clarity, but not necessarily for persuasion, which should be a primary communication goal for marketers,” Anthony Miyazaki, a professor of marketing at Florida International University, recommended in an email interview.
There’s another way in which reliance on generative AI can backfire for companies.
“More concerning is using AI to generate website content. Google has already warned web developers that AI content will be deprioritized if it is used to try to game the search process, and this would severely damage organic and even paid SEO,” Miyazaki pointed out.
Internal Safeguards
“A lot of organizational AI policies are heavily focused on protecting the organization from internal use of AI,” Andrew Gamino-Chong, chief technology officer and co-founder of Trustible, an AI governance software company, observed via email.
But organizations need to make sure their policies cover all the bases.
Companies “want to ensure confidential data isn’t leaked, that AI chatbots are secure, and comply with relevant regulations. However, those policies sometimes omit setting clear standards for the AI systems they are building for customers; many regulations specifically want organizations to consider the downstream effects of their AI systems on individuals, groups, and communities,” he noted.
Dell Technologies
Prior to the generative AI boom, Dell Technologies “created a set of principles to guide our development and use of AI applications, ensuring they’re beneficial, equitable, transparent, responsible and accountable,” John Scimone, president and chief security officer of Dell Technologies, said in an email interview.
Chief AI Officer And Governance Structure
“We appointed a chief AI officer and established an AI governance structure that includes leaders from every major function within our company. Our AI use case review board carefully evaluates proposed projects and ensures they adhere to our principles and align to our business priorities,” he noted.
Security And Best Practices
In addition, the company “instituted AI security requirements and best practices to help our teams take appropriate security measures to safeguard data and systems at the point of design,” he concluded.
Empathy First Media’s AI Procedures
“The risks are very real, and we’ve taken deliberate steps to mitigate them,” Ryan Doser, vice president of inbound marketing at Empathy First Media, a digital marketing agency, commented via email.
He said the company has implemented the following policies and procedures to help ensure the responsible use of AI by employees:
Privacy
- It prohibits entering a client’s proprietary or sensitive data into generative AI tools.
Quality Control
- It does not allow generative AI responses to be copied and pasted, and requires the responses to be reviewed and polished by humans to help guarantee their accuracy and alignment with clients.
Regulatory Compliance
- The company avoids using the technology when it could create conflicts in complying with the standards of different industries.
Transparency
- It tells clients when generative AI has been used to create content.
“Transparency builds trust and helps educate our clients on how these tools are being used to enhance their campaigns,” Doser concluded.
Why Wait?
As I noted in a story about Riskonnect’s 2023 report on AI safeguards, “The longer companies wait to prepare themselves for the risks and dangers associated with AI, the longer they will be unprotected from this potential crisis.
“Why should business leaders wait any longer to do the right thing?”
From a crisis management and prevention perspective, given the growing sophistication of AI—and its hidden threats—there’s an even more urgent need today for business leaders to protect their organizations from the hazards posed by this technology.