(Bloomberg) — British lawmakers quizzed representatives from Microsoft Corp., Alphabet Inc.’s Google, and BT Group Plc about the development of artificial intelligence during an evidence-gathering session in Parliament.
Over the span of the 90-minute hearing Wednesday, the word “bias” was used 26 times, compared with 25 uses of “opportunity,” 15 of “regulation” and 11 of “worried.”
Bias was also one of the few talking points — others included China’s use of AI for surveillance and British competitiveness in global talent markets — on which a witness could not confidently answer a question.
Labour Party MP Dawn Butler asked Microsoft’s legal affairs manager Hugh Milward whether his company’s AI engineers made sure testers of products such as the Bing AI search tool came from diverse backgrounds.
“It’s a good question,” Milward said. “I actually don’t know the answer to that.”
Governments and businesses are grappling with questions around how AI is developed and used after OpenAI’s ChatGPT, which is being deployed in Microsoft’s Bing alongside rollouts of similar technology at Google and other rivals, promised to revolutionize search. The technology has the potential to drive big changes in education, customer service, programming and a host of other industries.
Separately, when asked about how AI should be regulated, Milward said laws should focus on restricting how the technology is used by companies, not whether it’s allowed to be developed in the first place.
Butler asked Jen Gennai, a director in Google’s responsible innovations team, whether it was possible to actually build an AI system free of biases.
“I think it is very hard to say there will ever be fully fair products,” Gennai said. “They should always be fairer than they would’ve been without these sorts of tests and balances. And it should involve constant monitoring.”
Representatives for the companies were voluntarily giving evidence as part of an ongoing inquiry by lawmakers into the development of AI. The results of such sessions are used to produce recommendations to the government for improving laws, but aren’t legally binding.
©2023 Bloomberg L.P.