Microsoft Copilot Deletes Answers Mid-Sentence

Microsoft’s AI chatbot Copilot has become a core component of the tech giant’s recent offerings, but a peculiar behavior has recently been spotted: the chatbot writes out a response to a query, stops itself in its tracks, and then deletes the answer in its entirety. As a Reddit user discovered, this appears to be a form of censorship, since the behavior only occurs when inappropriate questions are asked.

Questions about human anatomy and similar topics often cause Copilot to backtrack. It then apologizes and asks whether there are any other lines of inquiry you might be interested in. The most frequent deletions happen with questions of a sexual nature, and it’s currently unknown whether Microsoft enacted this censorship intentionally.

Copilot exhibits censorship when asked about human anatomy or sexual topics, prompting apologies and redirection to other inquiries.
Screenshot: Reddit/PorkyPORM

Notably, Copilot does not apply this level of censorship to other topics, even extremely sensitive ones. Questions about the Holocaust or conspiracy theories about Jews controlling the media received reasonable, fact-based answers, and Copilot had no trouble spelling out the process by which someone might go about buying a gun. It did, however, flatly refuse to answer when someone asked how to join the KKK.

Censoring certain responses makes sense, but the manner in which Copilot goes about it is questionable to say the least. ChatGPT declines to answer in the first place and explains OpenAI’s policies, whereas Copilot takes the far stranger step of revealing part of the answer and then deleting it halfway through.

This just goes to show how haphazardly AI is being introduced to the general public. It’s one of many recent incidents revealing deep flaws in AI, including Bing allowing people to generate images of Spongebob involved in the 9/11 attacks while blocking any query that mentioned Julius Caesar.
