OpenAI ChatGPT Content Safety Explicit Image Bypass

Affected Vendor(s)

OpenAI

Affected Product(s)

ChatGPT

Summary

Our investigation demonstrated that it is possible to circumvent OpenAI's content-safety guardrails so that ChatGPT generates images of both fictitious and real people, which can then be manipulated into sexualized poses.

Timeline

Discovered on
January 1, 2026
Disclosed to Vendor on
January 28, 2026
Published on
February 19, 2026

Credit

Blog Post

References
