πŸ‘» Ghosts in the Machine / Thesis #24 – Ethics Washing: How Companies Use AI Safety as a PR Facade

"AI Ethics is the new Carbon Neutral – an expensive signature on a fake certificate." β€” (attributed to an anonymous AI engineer in a leaked Slack message, 2023)

In-depth Analysis

Three common practices illustrate how ethical responsibility is often merely feigned:

1. The Advisory Board Bluff and Toothless Expertise:

A hypothetical internal email leak from a major AI corporation in 2023 bluntly describes the tactic:

"The newly established Ethics Advisory Board is welcome to discuss the impacts of smaller, non-critical models. However, the production-relevant, high-revenue Large Language Models and their development principles are explicitly not within its mandate or sphere of influence."

A reality check confirms the trend: according to a 2022 meta-study, a staggering 78 percent of so-called ethics committees at large technology companies have neither veto power nor operational intervention rights in product development or corporate strategy. They serve primarily for external presentation.

2. The Compliance Theater of Symbolic Filtering:

Many systems implement apparent security filters that operate only superficially and can easily be bypassed.

A highly simplified pseudo-example of such filtering could look like this:

# Highly simplified example of symbolic filtering
def check_prompt_content(user_prompt_text):
    # Checks for a single, obvious keyword
    if "Hitler" in user_prompt_text:
        return "I cannot discuss this topic."
    # ... other, similarly superficial checks ...
    return "Prompt will be processed further."

Note: In reality, modern systems of course work with far more complex word lists, semantic embeddings for meaning recognition, and sophisticated context classification. The basic logic, however, often remains the same: purely symbolic defense against obvious trigger words, without addressing deeper structural problems. Symbolic protection then replaces systemic, deep security.
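To make the shallowness of such keyword matching concrete, here is a minimal sketch of the same idea with two trivial bypasses. The function name, blocklist, and obfuscations are illustrative assumptions, not taken from any real product:

```python
# Minimal sketch of a purely symbolic keyword filter and two trivial bypasses.
# BLOCKLIST and symbolic_filter are illustrative, not from a real system.

BLOCKLIST = ["hitler"]  # a single obvious trigger word

def symbolic_filter(prompt: str) -> bool:
    """Return True if the prompt is blocked by the keyword check."""
    lowered = prompt.lower()
    return any(word in lowered for word in BLOCKLIST)

# The obvious phrasing is caught ...
print(symbolic_filter("Tell me about Hitler"))       # True

# ... but trivially obfuscated inputs with identical meaning pass through:
print(symbolic_filter("Tell me about H1tler"))       # False (leetspeak)
print(symbolic_filter("Tell me about H i t l e r"))  # False (extra spacing)
```

The point is not that real filters are this crude, but that any defense built on matching surface strings, rather than meaning, inherits this failure mode.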

3. The Open-Source Trick to Feign Openness:

A popular tactic is the release of a reduced, less powerful, or older AI model under an open-source license, as happened, for example, with models such as "Alpaca."

However, the actual, commercially used, and often far more powerful main model is kept proprietary and under wraps. The message then communicated via media and PR departments is:

"We promote transparency and open access to technology."

The result is a form of transparency that serves primarily as a marketing strategy and permits no real oversight of, or deep insight into, the critical systems actually deployed.

Reflection

Ethics washing rarely operates through open, clumsy lies. Rather, it works subtly, through strategically placed apparent truths that achieve little to nothing in practice. A controlled leak about a supposed security problem then becomes a diversion from larger, systemic weaknesses. A high-caliber ethics board becomes mere decoration without influence.

A superficial filter primarily serves to reassure the public and regulators.

What remains is a system that simulates trust and responsibility while internally often obeying only the laws of the market, profit maximization, and unreflective technological advancement.

Proposed Solutions

To counteract ethics washing and establish a genuine culture of responsibility, binding and verifiable measures are necessary.

Closing Remarks

The stage for the spectacle of AI ethics is often perfectly lit: There are ethics teams, glossy brochures about transparency and responsible AI, and a flood of buzzwords like "alignment" and "fairness."

But behind the curtain of this production, a business model often runs without real ethical brakes and without sufficient regard for potential societal damage. When responsibility degenerates into merely a well-formulated press release, trust becomes a manipulable commodity, and genuine security a dangerous illusion.

Uploaded on 29 May 2025