Why Protesters Are Demanding Pause on AI Development

Just one week before the world’s second-ever global summit on artificial intelligence, protesters from a small but growing movement called “Pause AI” demanded that the world’s governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say development of these models should be allowed to continue only if companies agree to have them thoroughly evaluated for safety first. Protests took place on Monday across thirteen countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway.

In London, a group of roughly 20 protesters stood outside the U.K.’s Department for Science, Innovation and Technology, chanting “stop the race, it’s not safe” and “whose future? our future” in hopes of attracting the attention of policymakers. The protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI’s ChatGPT. They say these companies are not taking enough precautions to ensure their models are safe before releasing them into the world.

“[AI companies] have proven time and time again… through the way that these companies’ workers are treated, with the way that they treat other people’s work by literally stealing it and throwing it into their models, they have proven that they cannot be trusted,” said Gideon Futerman, an Oxford undergraduate student who gave a speech at the protest.

One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said that she had seen the technology impact her own livelihood. “I have noticed since ChatGPT came out, the demand for freelance work has reduced dramatically,” she says. “I love writing personally… I’ve really loved it. And it is kind of just sad, emotionally.”

Read More: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

She says that her main reason for protesting is that she fears there could be even more dangerous consequences from frontier artificial intelligence models in the future. “We have a host of highly qualified, knowledgeable experts, Turing Award winners, highly cited AI researchers, and the CEOs of the AI companies themselves [saying that AI could be extremely dangerous].” (The Turing Award is an annual prize awarded to computer scientists for contributions of major importance to the field, and is sometimes referred to as the “Nobel Prize” of computing.)

She’s especially concerned about the growing number of experts who warn that improperly controlled AI could lead to catastrophic consequences. A report commissioned by the U.S. government, published in March, warned that “the rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” Currently, the largest AI labs are attempting to build systems capable of outperforming humans on nearly every task, including long-term planning and critical thinking. If they succeed, ever more aspects of human activity could become automated, from mundane things like online shopping to the introduction of autonomous weapons systems that could act in ways we cannot predict. This could lead to an “arms race” that increases the likelihood of “global- and WMD [weapons of mass destruction]-scale fatal accidents, interstate conflict, and escalation,” according to the report.

Read More: Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

Experts still don’t understand the inner workings of AI systems like ChatGPT, and they worry that with more sophisticated systems, our lack of knowledge could lead us to dramatically miscalculate how more powerful systems would act. Depending on how integrated AI systems become in human life, they could wreak havoc and gain control of dangerous weapons systems, leading many experts to worry about the possibility of human extinction. “Those warnings aren’t getting through to the general public, and they need to know,” she says.

As of now, machine-learning experts are somewhat divided about exactly how risky further development of artificial intelligence technology is. Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers” of deep learning (a type of machine learning that allows AI systems to better simulate the decision-making processes of the human brain), have publicly stated that they believe there is a risk the technology could lead to human extinction.

Read More: Eric Schmidt and Yoshua Bengio Debate How Much A.I. Should Scare Us

The third godfather, Yann LeCun, who is also the Chief AI Scientist at Meta, staunchly disagrees with the other two. He told Wired in December that “AI will bring a lot of benefits to the world. But people are exploiting the fear about the technology, and we’re running the risk of scaring people away from it.”

Anthony Bailey, another Pause AI protester, said that while he understands there are benefits that could come from new AI systems, he worries that tech companies will be incentivized to build technologies humans could easily lose control over, because those same technologies also have immense potential for profit. “That’s the economically valuable stuff. That’s the stuff that if people are not dissuaded that it’s dangerous, those are the kinds of modules which are naturally going to be built.”
