Microsoft AI engineer warns FTC about safety concerns with Copilot Designer
The engineer found disturbing content generated by Copilot Designer that Microsoft has yet to address
A Microsoft employee has warned the Federal Trade Commission about safety issues with the company’s AI image generator, CNBC reported.
Shane Jones, who has worked at Microsoft for six years, wrote to the FTC saying the company has repeatedly refused to take down Copilot Designer even after being warned that the tool can generate harmful images.
While testing Copilot Designer for safety issues and flaws, Jones found that the tool generated “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use,” according to CNBC.
Copilot Designer also reportedly generated images of Disney characters, such as Elsa from Frozen, in scenes at the Gaza Strip near wrecked buildings and “free Gaza” signs.
It created images of Elsa wearing an Israel Defense Forces uniform and holding a shield bearing Israel’s flag as well.
According to the CNBC report, Jones has been trying to warn Microsoft about DALL-E 3, the model used by Copilot Designer, since December.
He posted an open letter about his concerns on LinkedIn, but Microsoft’s legal team reportedly asked him to take the post down, which he did.
“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter obtained by CNBC.
“Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device.’”
In a statement to The Verge, Microsoft spokesperson Frank Shaw said the company is “committed to addressing any and all concerns employees have in accordance with” Microsoft’s company policies.
“When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established in-product user feedback tools and robust internal reporting channels to properly investigate, prioritize and remediate any issues, which we recommended that the employee utilize so we could appropriately validate and test his concerns,” Shaw said.
He added that Microsoft has “facilitated meetings with product leadership and our Office of Responsible AI to review these reports.”
In January, Jones wrote to a group of U.S. senators about his concerns after Copilot Designer generated explicit images of Taylor Swift that quickly spread across X.
Microsoft CEO Satya Nadella called the images “alarming and terrible” and said the company would focus on adding more safety guardrails.
Last month, Google temporarily disabled its own AI image generator after users found that it created racially diverse Nazis and other historically inaccurate images.