OpenAI rushes to ban ‘Godmode ChatGPT’ app that teaches users ‘how to create napalm, hotwire cars and cook meth at home’
OpenAI has swiftly moved to ban a jailbroken version of ChatGPT that can teach users how to carry out dangerous tasks, exposing serious vulnerabilities in the AI model's security measures.
A hacker known as "Pliny the Prompter" released the rogue ChatGPT called "GODMODE GPT."
The jailbroken version is based on OpenAI's latest language model, GPT-4o, and can bypass many of OpenAI's guardrails, Futurism reported on Thursday.
ChatGPT is a chatbot that gives detailed answers to people's questions.
Pliny announced the creation of GODMODE GPT on X, formerly Twitter, claiming it provides a liberated AI experience.
Screenshots showed the AI advising on illegal activities like making meth and napalm.
OpenAI quickly responded, stating it had taken action against the jailbreak due to policy violations.
The incident highlights the ongoing struggle between OpenAI and hackers attempting to jailbreak its models.
Despite increased security, users continue to find ways to bypass AI model restrictions.
GODMODE GPT uses "leetspeak," an informal style of writing that swaps letters for similar-looking numbers, which may help it evade guardrails.
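To illustrate the kind of character substitution leetspeak involves, here is a minimal sketch in Python; the mapping and function name are illustrative assumptions, not the scheme GODMODE GPT actually uses.

```python
# Minimal leetspeak sketch: swap common letters for look-alike digits.
# This mapping is a generic example, not GODMODE GPT's actual scheme.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"}

def to_leetspeak(text: str) -> str:
    """Replace mapped letters with their numeric look-alikes."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("leetspeak example"))  # prints "l3375p34k 3x4mpl3"
```

Because substitutions like these change a word's surface form while keeping it readable to humans, filters that match exact keywords can miss the obfuscated text.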
The hack demonstrates the ongoing challenge OpenAI faces in maintaining the integrity of its AI models against persistent hacking efforts.