
In the rush to adopt AI, ethics and responsibility are taking a backseat at many companies


  • Companies are rapidly integrating generative AI technology to boost productivity.
  • Experts, however, are concerned that efforts to manage the risks of AI are lagging.
  • Responsible AI efforts are moving “nowhere near as fast as they should be,” a BCG senior partner said.

Companies have been racing to deploy generative AI technology into their work since the launch of ChatGPT in 2022. 

Executives say they’re excited about how AI boosts productivity, analyzes data, and cuts down on busy work.

According to Microsoft and LinkedIn’s 2024 Work Trend Index report, which surveyed 31,000 full-time workers between February and March, close to four in five business leaders believe their company needs to adopt the technology to stay competitive.

But adopting AI in the workplace also presents risks, including reputational, financial, and legal harm. The challenge in combating these risks is that they’re ambiguous: many companies are still trying to figure out how to identify and measure them.

A responsibly run AI program should include strategies for governance, data privacy, ethics, and trust and safety, but experts who study risk say these programs haven’t kept pace with innovation.

Efforts to use AI responsibly in the workplace are moving “nowhere near as fast as they should be,” Tad Roselund, a managing director and senior partner at Boston Consulting Group, told Business Insider. These programs often require a considerable amount of investment and a minimum of two years to implement, according to BCG.

That’s a significant commitment of money and time, and company leaders seem focused instead on allocating resources to developing AI quickly in ways that boost productivity.

“Establishing good risk management capabilities requires significant resources and expertise, which not all companies can afford or have available to them today,” researcher and policy analyst Nanjira Sam told MIT Sloan Management Review. She added that the “demand for AI governance and risk experts is outpacing the supply.” 

Investors need to play a more critical role in funding the tools and resources for these programs, according to Navrina Singh, the founder of Credo AI, a governance platform that helps companies comply with AI regulations. Funding for generative AI startups hit $25.2 billion in 2023, according to a report from Stanford’s Institute for Human-Centered Artificial Intelligence, but it’s unclear how much went to companies that focus on responsible AI.

“The venture capital environment also reflects a disproportionate focus on AI innovation over AI governance,” Singh told Business Insider by email. “To adopt AI at scale and speed responsibly, equal emphasis must be placed on ethical frameworks, infrastructure, and tooling to ensure sustainable and responsible AI integration across all sectors.”

Legislative efforts have been underway to fill that gap. In March, the EU approved the Artificial Intelligence Act, which sorts AI applications into three risk categories and bans those posing unacceptable risks. Meanwhile, President Joe Biden signed a sweeping executive order in October demanding greater transparency from major tech companies developing artificial intelligence models.

But given the pace of innovation in AI, government regulation alone may not be enough to ensure companies are protecting themselves.

“We risk a substantial responsibility deficit that could halt AI initiatives before they reach production, or worse, lead to failures that result in unintended societal risks, reputational damage, and regulatory complications if made into production,” Singh said.
