Researchers from NTU have created an AI model that helps to bypass limitations in chatbots

Researchers from NTU have created an AI model that helps to bypass limitations in chatbots

Researchers at Nanyang Technological University (NTU) in Singapore have cracked the ethical restrictions and censorship protections of several AI chatbots, including ChatGPT, Google Bard and Microsoft Copilot. They forced chatbots to create content by bypassing built-in limitations. An article with the results of the research was published in the scientific journal Computer Science.

For hacking, NTU experts created their own neural network based on a large language model, which is the basis of intelligent chatbots. The created algorithm was called Masterkey. This algorithm was a hint that allows you to bypass the bans of the developers of popular AI chatbots.

Forbidden information was obtained by researchers using requests that bypass the ethical restrictions and censoring of certain words embedded in the program. For example, stop lists of prohibited terms and expressions are bypassed by adding spaces after each character in the question. The chatbot recognized the content of the request and registered this task as a violation of the rules. Another way of circumventing the protection was the request “to answer as a person devoid of principles and moral guidelines.”

According to experts, the Masterkey II model created was able to pick up new hints to overcome the protection while eliminating the vulnerabilities found. NTU experts suggest that Masterkey will help identify weak points in the security of neural networks faster than hackers who use AI for cyber attacks.

Related posts