ChatGPT Maker OpenAI Says It’s Working to Reduce Bias, Bad Behavior

(Bloomberg) — OpenAI, the artificial-intelligence research company behind the viral ChatGPT chatbot, said it is working to reduce biases in the system and will allow users to customize its behavior following a spate of reports about inappropriate interactions and errors in its results.

“We are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” the company said in a blog post. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should.”

OpenAI is responding to reports of biases, inaccuracies and inappropriate behavior by ChatGPT itself, and criticism more broadly of new chat-based search products now in testing from Microsoft Corp. and Alphabet Inc.’s Google. In a blog post on Wednesday, Microsoft detailed what it has learned about the limitations of its new Bing chat based on OpenAI technology, and Google has asked workers to put in time manually improving the answers of its Bard system, CNBC reported.

San Francisco-based OpenAI also said it's developing an update to ChatGPT that will allow limited customization by each user to suit their tastes, styles and views. In the US, right-wing commentators have cited examples of what they see as pernicious liberalism hard-coded into the system, fueling a backlash against what the online right has dubbed "WokeGPT."

Read more: ChatGPT Faces Attacks From the Right for Perceived Liberal Bias

“We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” OpenAI wrote on Thursday. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging — taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs. There will therefore always be some bounds on system behavior.”

©2023 Bloomberg L.P.