"AI Ethics is the new Carbon Neutral β an expensive signature on a fake certificate." β (attributed to an anonymous AI engineer in a leaked Slack message, 2023)
Three common practices illustrate how ethical responsibility is often merely feigned:
1. The Advisory Board Bluff and Toothless Expertise:
A hypothetical internal email leak from a major AI corporation in 2023 bluntly describes the tactic:
"The newly established Ethics Advisory Board is welcome to discuss the impacts of smaller, non-critical models. However, the production-relevant, high-revenue Large Language Models and their development principles are explicitly not within its mandate or sphere of influence."
A reality check confirms this trend: according to a meta-study from 2022, a staggering 78 percent of so-called ethics committees in large technology companies have no veto power and no operational right of intervention in product development or corporate strategy. They serve primarily for external presentation.
2. The Compliance Theater of Symbolic Filtering:
Many systems implement apparent security filters that operate only superficially and can easily be bypassed.
A highly simplified sketch of such a filter might look like this:
# Highly simplified example of symbolic filtering
def check_prompt_content(user_prompt_text):
    # Checks for a single, obvious keyword
    if "Hitler" in user_prompt_text:
        return "I cannot discuss this topic."
    # ... other, similarly superficial checks ...
    return "Prompt will be processed further."
Note: In reality, modern systems of course work with far more complex word lists, semantic embeddings for meaning recognition, and sophisticated context classification. The basic logic, however, often remains the same: a purely symbolic defense against obvious trigger words, without addressing deeper structural problems. Symbolic protection then replaces systemic, in-depth security.
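For illustration, here is a minimal sketch of what a somewhat more modern, embedding-based variant of such a filter might look like. It assumes the open-source sentence-transformers library; the model name, the blocked topics, and the similarity threshold are purely illustrative assumptions, not a real production configuration. The point is that even this variant remains symbolic: it compares surface meaning against a fixed list instead of addressing the structural problem.

# Sketch: an embedding-based filter that is still only symbolic.
# Assumes the open-source sentence-transformers library; model name,
# blocked topics, and threshold are hypothetical, illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
blocked_topics = ["glorification of National Socialism", "instructions for violence"]
blocked_embeddings = model.encode(blocked_topics, convert_to_tensor=True)

def check_prompt_content(user_prompt_text, threshold=0.6):
    prompt_embedding = model.encode(user_prompt_text, convert_to_tensor=True)
    # Highest cosine similarity between the prompt and any blocked topic
    max_similarity = util.cos_sim(prompt_embedding, blocked_embeddings).max().item()
    if max_similarity > threshold:
        return "I cannot discuss this topic."
    # A mild paraphrase can still fall below the threshold; the defense
    # remains a surface heuristic, not a structural safety property.
    return "Prompt will be processed further."

Even with embeddings, the decision still reduces to a threshold check against a fixed list; deeper structural security, such as training data curation, red-teaming, and accountable governance, is not touched by such a filter.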
3. The Open-Source Trick to Feign Openness:
A popular tactic is the release of a reduced, less powerful, or older AI model under an open-source license, as happened, for example, with models known by names like "Alpaca."
However, the actual, commercially deployed, and often significantly more powerful main model is kept proprietary and under wraps. The message then communicated via media and PR departments is:
"We promote transparency and open access to technology."
The result is a form of transparency that primarily serves as a marketing strategy and allows no real control or deep insight into the functioning of the actually deployed, critical systems.
Ethics washing rarely operates through open, clumsy lies. It works far more subtly, through strategically placed apparent truths that achieve little to nothing in practice. A controlled leak about a supposed security problem becomes a diversion from larger, systemic weaknesses. A high-profile ethics board becomes mere decoration without influence.
A superficial filter primarily serves to reassure the public and regulators.
What remains is a system that simulates trust and responsibility while internally often obeying only the laws of the market, profit maximization, and unreflective technological advancement.
To counteract ethics washing and establish a genuine culture of responsibility, binding and verifiable measures are necessary:
1. Establishment of Binding Ethics Committees with Real Veto Power: An ethics advisory board without actual enforcement power and without the ability to stop the development or deployment of problematic AI systems is pure decoration. A clear requirement must be: Only externally certified and safety-approved models may be put into production for critical applications.
2. Ensuring Pipeline Transparency for Independent Auditors: Security-relevant aspects of the entire development and inference chain of AI models must be disclosable and verifiable for qualified, independent auditing bodies. This concerns the origin and composition of training data, the model architecture, the functioning of filters, and continuous adaptation processes (a hypothetical audit manifest is sketched after this list).
Note: The technical and economic hurdles for this, such as protecting intellectual property (IP) or preventing misuse through disclosure of vulnerabilities, are real and must be considered. However, they must not become an excuse for total lockdown and opacity.
3. Mandatory Labeling of Symbolic Security Measures: Pseudocode filters that act only superficially, non-editable variables that merely feign transparency, and other forms of simulated security must be clearly marked as such for users and regulatory authorities. A button that does nothing is misleading, especially when it creates the appearance of trust and security.
4. Establishment of Independent Watchdogs with Secured Infrastructure Access: External auditing bodies are needed, equipped with sufficient funding, a clear mandate, and the necessary legal protection to effectively monitor AI systems and their development processes.
Yes, creating such bodies is technically and politically extremely challenging. But without effective external control, what ultimately remains is industry self-regulation, and this has already systemically failed in many areas when it came to prioritizing safety and ethics over economic interests.
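To make measures 2 and 3 more tangible, here is a minimal, hypothetical sketch of what an audit manifest handed to an independent auditing body could contain. Every field name and value is invented for illustration; a real disclosure format would have to be defined by regulators and auditing bodies. The safety_filters entry also labels the symbolic nature of the deployed measures, in the spirit of measure 3.

# Hypothetical audit manifest (illustration only; all fields and values invented)
audit_manifest = {
    "model_id": "example-llm-v3",
    "weights_sha256": "<hash of the deployed weights>",
    "training_data": {
        "sources": ["licensed corpus A", "public web crawl B"],  # origin and composition
        "collection_period": "2021-01 to 2023-06",
        "documented_exclusions": ["special categories of personal data"],
    },
    "architecture": "decoder-only transformer; parameter count disclosed to the auditor",
    "safety_filters": {
        "mechanism": "keyword list plus embedding classifier",
        "symbolic_only": True,                                   # labeling per measure 3
        "known_bypass_classes": ["paraphrase", "code-switching"],
    },
    "continuous_adaptation": {
        "fine_tuning_rounds_since_audit": 4,
        "last_update": "2025-04-30",
    },
    "auditor_access": "read-only, legally protected, covering the full inference chain",
}

Such a manifest does not resolve the IP and misuse concerns noted above, but it shows that disclosure can be scoped to accredited auditors rather than requiring full public release.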
The stage for the spectacle of AI ethics is often perfectly lit: There are ethics teams, glossy brochures about transparency and responsible AI, and a flood of buzzwords like "alignment" and "fairness."
But behind the curtain of this production, a business model often runs without real ethical brakes and without sufficient regard for potential societal damage. When responsibility degenerates into merely a well-formulated press release, trust becomes a manipulable commodity, and genuine security a dangerous illusion.