In the AI arms race, Big Tech keeps shooting itself in the foot and having to roll back flashy releases

In its race to do more with AI, Big Tech has been moving fast — and then rolling back big moves.

Microsoft became the latest to scale back an artificial intelligence feature within one month of announcing it, following backlash.

On Thursday, Microsoft said it will pull an AI tool from its new line of Copilot+ PCs. The feature will now be available only to a small group of testers in its Windows Insider Program, rather than launching broadly to Copilot+ PC users on June 18 as planned.

The AI feature, called Recall, acts like a "photographic memory" for the computer: it takes screenshots of everything the user views on their PC and lets them quickly find something again using a conversational prompt.

But privacy campaigners raised the alarm about Recall almost immediately, put off by the idea that the device could capture screenshots of their activity every few seconds.

Microsoft, for its part, has said that users can turn the feature off and that the images are stored only on the device itself.

“We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security,” the company wrote in a blog post on Thursday.

Microsoft did not respond to a request for comment from Business Insider sent outside regular business hours.

Big Tech firms, though, seem to be rushing headlong into AI feature rollouts, then making U-turns when things get messy.

Take, for instance, recent events at Google, Adobe, and OpenAI. To be sure, each company has given its own reasons for the walkbacks, but all of them have had to re-examine rollouts after release.

In May, Google scaled back its use of AI-generated answers in search results, called AI Overviews, after the feature made some jarring errors, including advising users to put glue in their pizza sauce. Google also pulled the plug on Gemini's AI-generated images of people in February after the tool produced images rife with historical inaccuracies.

“We have already made more than a dozen technical updates to our systems, and we’re committed to continuing to improve when and how we show AI Overviews,” a representative for Google told BI.

Also in May, OpenAI launched a voice option, Sky, that sounded “eerily similar” to Scarlett Johansson, angering the actress. The ChatGPT-maker said it wasn’t Johansson’s voice, apologized, then removed the voice from its platform.

Earlier this week, Adobe joined the club. It asked users to re-accept its "Terms of Use," which led some people to believe the company's AI would scrape their art and content. Some Adobe employees questioned how the change was communicated, and the company has since delayed the rollout of the updated terms.

“This has caused us to reflect on the language we use in our Terms, and the opportunity we have to be clearer and address the concerns raised by the community,” Adobe wrote in a blog post on Monday.

Representatives for Adobe and OpenAI did not respond to BI’s requests for comment sent outside normal business hours.
