OpenAI’s latest image generation tool is drawing criticism after producing artwork in the style of Studio Ghibli, despite the company’s assurances that the tool refuses requests for copyrighted material. The image, created with a prompt referencing the Japanese animation studio, went viral, raising new questions about how generative AI interprets and replicates protected styles.
The image was generated using OpenAI’s newly upgraded image model, now available across all versions of ChatGPT, including free accounts. A user asked the tool to create “a shiba inu in the style of Studio Ghibli,” and the system returned a result that closely resembled Ghibli’s signature visual style. Critics say this undercuts OpenAI’s claim that its tools refuse to generate copyrighted material or mimic protected artistic styles.
“If OpenAI’s model wasn’t trained on copyrighted Studio Ghibli material, how could it know what that means?” asked TechCrunch reporter Devin Coldewey, who first reported on the controversy. The case highlights a core legal and ethical tension: AI companies say their models avoid copying, but the models’ ability to reproduce distinctive styles suggests otherwise.
The backlash lands just as OpenAI executives, including CEO Sam Altman, are pressing governments to relax copyright rules on AI training. In recent comments to regulators and policymakers, Altman has warned that strict copyright enforcement could limit U.S. progress in AI development, potentially giving countries like China a strategic edge in the field.
This case is part of a growing pattern. The New York Times and multiple artist groups have filed lawsuits alleging that AI developers used copyrighted content without permission in training datasets. U.S. and European lawmakers are now considering regulations that would require companies to disclose their training data sources and label AI-generated content more clearly.
For OpenAI, the Ghibli-style image revives an uncomfortable question: if AI companies don’t train on copyrighted material, how do their models generate such recognizable styles? And if they do, what protections — if any — remain for human creators?