
An unchecked AI could usher in a new dark age

The dangers of generative artificial intelligence have already begun to reveal themselves — and now is the time to start creating new laws and regulations around the rapidly advancing technology, tech law experts told Business Insider.

One legal expert even warned that AI could usher in a modern-day “dark age,” a period of societal decline, if the relatively new industry goes largely unregulated.

“If this thing is allowed to sort of run away, without regulation and without compensation to those whose work it’s using, it basically is a new dark age,” said Frank Pasquale, a law professor at Cornell Tech and Cornell Law School.

“It pre-stages a new dark age or a new sort of like just a complete evisceration of incentives to create knowledge in many fields — and that’s very troubling,” he added.

Given the growing popularity of AI tools like OpenAI’s ChatGPT and Google’s Gemini, the experts said that social media, which remains largely unregulated nearly three decades after its emergence, should serve as a cautionary tale for AI.

A main issue that has arisen is the use of copyrighted work to train the technology.

Authors, visual artists, news outlets, and computer coders have already filed lawsuits against AI companies like Microsoft-backed ChatGPT-maker OpenAI, arguing that their original works have been used to train AI tools without their permission.

And while there is no federal uniform law on the books to address the use of AI in the United States, some states have already passed their own legislation on the use of AI. Congress has also been exploring ways to regulate the technology.

AI regulation, Pasquale said, could prevent many of the problems that would otherwise pave the way for this so-called new dark age.

“If uncompensated and uncontrolled expropriation of copyrighted works continues, many creatives are likely to be further demoralized and eventually defunded as AI unfairly outcompetes them or effectively drowns them out,” Pasquale said.

Many will perceive low-cost automated content as a “cornucopian gift,” Pasquale said, “until it becomes clear that AI itself is dependent on ongoing input of human-generated works in order to improve and remain relevant in a changing world.”

“At that point, it may be too late to reinvigorate creative industries left moribund by neglect,” he said.

Mark Bartholomew, a University at Buffalo law professor, said that he, too, is concerned about AI in the future “generating so much content — from artworks to advertising copy to TikTok videos — that it overwhelms contributions from real human beings,” but for now he’s more worried about AI being used to distribute misinformation, create political and pornographic deepfakes, and scam people.

‘The dangers are enough now’ to put regulations in place

Without comprehensive AI regulation soon, Bartholomew warned, the consequences could include misinformation infecting elections, the spread of deepfakes, and people being defrauded by scammers using AI to simulate the voices of others.

“It would be dangerous to say we know now in 2024 exactly how to handle AI,” Bartholomew said, noting that too many regulations too soon could stifle “a promising new technology like AI.”

However, he added, “My personal opinion is that the dangers are enough now that we need to come in and at least have some specific regulations to deal with things that I think we’re already realizing are real problems.”

“It’s not like AI will shrivel up and die if we put real teeth into laws saying you can’t use AI for political deepfakes,” Bartholomew said.

US intellectual property law on copyright infringement and state-level publicity rights are among the main legal frameworks currently being used to regulate AI in the country.

Harry Surden, a professor of law at the University of Colorado Law School, agreed that new federal laws should be created to specifically govern AI, but he warned against doing so too hastily.

“We’re really bad at predicting how these technologies come out and the problems that arise,” said Surden, also the associate director of Stanford University’s CodeX Center for Legal Informatics. “You don’t want to do this quickly or politically or haphazardly.”

“You might wind up hurting all the good along with the bad,” he said.

Both Bartholomew and Pasquale contended that the lack of regulation around social media, and the light touch lawmakers have largely taken since its inception, should serve as a lesson when it comes to dealing with AI.

“It is a cautionary tale,” said Bartholomew, explaining, “We’ve waited too long to get our hands on social media, and it’s caused some real problems.”

And still, he said, “We just haven’t been able to find the political will to do much of anything about it.”

Pasquale added that when social media first came about, people did not really anticipate “how badly it could be misused and weaponized by bad actors.”

“There’s really a precedent in social media for regulation, and doing it sooner rather than later,” said Pasquale.

Surden argued that the early discussions regarding the regulation of social media “largely failed to predict other main issues about social media that we are worried about today that many today consider to be more significant.”

That includes social media’s effects on young people’s mental health and the propagation of disinformation and misinformation, he said.

He noted that the ability to regulate social media today exists, but that it’s not clear what the effective legal solutions are for the societal problems that have arisen.

“We, as a society, are not often as accurate at predicting issues ahead of time as we like to think,” said Surden.

“So there is a similar lesson about AI — we can certainly see issues of today that we need to be careful about, including privacy, bias, accuracy,” Surden said. “But we should be humble about our ability to predict and preemptively regulate AI technology problems ahead of time as we are often quite bad about predicting the details or the societal impacts.”