OpenAI has addressed safety issues following recent ethical and regulatory backlash.
The statement, published on Thursday, was a rebuttal-apology hybrid that simultaneously aimed to assure the public its products are safe and to admit there's room for improvement. OpenAI's safety pledge reads like a whack-a-mole response to multiple controversies that have popped up. In the span of a week, AI experts and industry leaders including Steve Wozniak and Elon Musk published an open letter calling for a six-month pause on developing models like GPT-4, ChatGPT was flat-out banned in Italy, and a complaint was filed with the Federal Trade Commission accusing OpenAI of posing dangerous misinformation risks, particularly to children. Oh yeah, there was also that bug that exposed users' chat messages and personal information.
OpenAI asserted that it works "to ensure safety is built into our system at all levels." The company spent over six months on "rigorous testing" before releasing GPT-4 and said it is looking into verification options to enforce its over-18 age requirement (or 13 with parental approval). The company stressed that it doesn't sell personal data and only uses it to improve its AI models. It also asserted its willingness to collaborate with policymakers and its continued collaborations with AI stakeholders "to create a safe AI ecosystem."
Toward the middle of the safety pledge, OpenAI acknowledged that developing a safe LLM relies on real-world input. It argued that learning from public use makes the models safer and allows OpenAI to monitor misuse: "Real-world use has also led us to develop increasingly nuanced policies against behavior that represents a genuine risk to people while still allowing for the many beneficial uses of our technology."
OpenAI promised "details about [its] approach to safety," but beyond its assurance to explore age verification, most of the announcement read like boilerplate platitudes. There was not much detail about how it plans to mitigate risk, enforce its policies, or work with regulators.
OpenAI prides itself on developing AI products with transparency, but the announcement provides little clarification about what it plans to do now that its AI is out in the wild.