"Open-washing" in generative AI refers to the practice of companies like Meta and Google claiming their models are "open" or "open source" without providing meaningful insight into source code, training data, fine-tuning data, or architecture of systems3. This strategy allows them to evade scientific and regulatory scrutiny while benefiting from the positive associations of openness and transparency.
The EU AI Act regulates generative AI by classifying systems according to their risk level and imposing obligations on providers and users. Models released under free and open-source licences are exempt from some of these obligations, which creates an incentive for open-washing. The Act aims to protect users while supporting innovation, with obligations that include transparency requirements and compliance with EU copyright law. Models posing systemic risk must additionally undergo thorough evaluations and report serious incidents to the European Commission.
Liesenfeld and Dingemanse's study found that many large corporations, including Meta and Google, describe their generative AI systems as "open" while providing only limited access to source code, training data, fine-tuning data, or system architecture [16], a clear instance of open-washing. The study underscores the need for precise definitions of openness in AI systems, especially now that the EU AI Act ties regulatory treatment to whether a model counts as open source [6].