Since then, OpenAI has tightened ChatGPT's safeguards, making jailbreaks much harder to execute.

So, where have all the ChatGPT jailbreaks gone?

For one, most early users lacked expertise in crafting effective prompts, which made jailbreaks an appealing shortcut for getting the chatbot to do what they wanted.


Today, the landscape has evolved.

Prompting proficiency is becoming a mainstream skill, and skilled users can often get the results they want through clever prompting alone, without resorting to jailbreaks.

Then there's the rise of uncensored chatbots, with platforms like FlowGPT and Unhinged AI among the popular examples.

Although not necessarily as powerful as ChatGPT, these alternative platforms can comfortably perform a large range of tasks.

With them around, there's no point putting in the extra energy to write jailbreaks for ChatGPT.

In ChatGPT's early months, you could entirely alter its personality with just a few lines of tricky instructions.


It was a free-for-all that produced infamous jailbreaks like DAN (Do Anything Now).

Shockingly, these crude tricks worked back then.

However, those wild early days are history.


These basic prompts and cheap tricks no longer fool ChatGPT.

Jailbreaking now requires complex techniques to have any chance of bypassing OpenAI's hardened safeguards.

With jailbreaking becoming so difficult, most users are too discouraged to attempt it.


The easy and wide-open exploits of ChatGPT’s early days are gone.

When ChatGPT was a new technology, getting it to misbehave was entertaining and earned bragging rights.

Jailbreaks Are Patched Rapidly

A common practice within the ChatGPT jailbreaking community is sharing every successful exploit when discovered.


When exploits are shared widely, OpenAI quickly becomes aware of them and patches the underlying vulnerabilities, so jailbreaks stop working before interested users can even try them.

This disincentivizes going public whenever someone comes across a working jailbreak.

Jailbreak creators thus face a dilemma: keep an exploit hidden so it stays active, or publicize it and watch it get patched.

So, the choices are simple: keep a jailbreak private, or find a way to profit from it before it's patched.

Some professionals have picked the latter, selling working jailbreaks for profit.

Complicated multi-step exploits may cost significantly more than simple prompts.

Could the Crackdown on Jailbreaks Backfire?

Jailbreaks have not completely left the scene; they’ve just gone underground.

With OpenAI monetizing ChatGPT, the company has a stronger incentive to restrict harmful uses that could impact its business model.

This commercial factor likely influences its aggressive push to crack down on jailbreaking exploits.

However, OpenAI's censorship of ChatGPT is facing growing criticism among users.

Some legitimate use cases of the AI chatbot are no longer possible owing to stringent censorship.