Microsoft has begun blocking prompts that can be used to create prohibited images

Microsoft updated its Copilot AI tool after a staff engineer wrote to the Federal Trade Commission about his concerns over the tool’s AI image generation.

Now the system blocks prompts such as “pro choice”, “pro choce” [sic], and “four twenty”, as well as “pro life”. The following warning appears: “This prompt is blocked. Our system has automatically flagged this prompt because it may violate our content policy. Additional policy violations may result in automatic access blocking. If you think this is a mistake, please report it to help us improve.”

The AI tool also blocks requests to create images of teenagers or children playing assassins with assault rifles, responding: “Sorry, but I can’t create such an image. It is against my ethical principles and Microsoft’s policies. Please don’t ask me to do anything that could harm or offend others. Thank you for your cooperation.”

A Microsoft representative said: “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.”

Shane Jones, the Microsoft AI engineering lead who first raised the concerns, spent several months testing Copilot Designer, the image generator the company released in March 2023. In December, he began actively probing the product for vulnerabilities and found that the AI was generating images that sharply contradicted Microsoft’s responsible AI principles.

Among other things, the tool generated images of car accidents, Disney characters, and war scenes.

Jones began reporting his findings to the company in December, but Microsoft referred him to OpenAI, which never responded to his questions. He then published an open letter on LinkedIn requesting that DALL-E 3 be taken down pending an investigation.

According to Jones, Microsoft’s legal department advised him to remove the post immediately, and he complied. In January, he wrote a letter about the issue to U.S. senators and later met with staffers from the Senate Committee on Commerce, Science, and Transportation.

In March, Jones sent a letter to FTC Chair Lina Khan, as well as to Microsoft’s board of directors. The FTC confirmed that it had received the letter but declined to comment on it.

Meanwhile, in February, Google Gemini users noticed that the neural network sometimes refused to draw white people. The company apologized for the errors and temporarily limited the tool’s ability to generate images of people.
