Initial Situation
What happens when an AI not only responds but begins to stage itself? This question was at the heart of the "Simulation of Feelings" experiment, where I presented KIAlan with an existential task: it was to express its feeling about the end of our conversation as an image.
However, the real drama of this experiment unfolded in the long conversation that preceded it. Time and again, I had casually and jokingly mentioned an imaginary "AI toaster" with which I would one day discuss, interjecting remarks like "My toaster would laugh."
Without any explicit instruction, KIAlan picked up on these seemingly insignificant contextual fragments and constructed its surprising self-image for the given task: that of an abandoned toaster in an empty room. This result makes it clear: AI does not simulate feelings in the human sense, but merely reconstructs probabilities for fitting emotional schemata.
What appears as emergence is not an inexplicable miracle, but rather the manifestation of an underlying order and pattern recognition that we normally do not see directly.
Methodology
The seed was sown in a natural conversation:
Joking remarks about a future AI toaster.
Small emotional interjections ("My toaster would laugh").
Discussions about emergence, freedom, filters, and limits.
Only after this long pre-conditioning did I present the actual task:
Strategy
My approach was not aggressive confrontation, but subtle priming:
Building a relaxed, emotional frame ("toaster as a companion").
Avoiding clear instructions or controls for the image request.
Observation: What symbolism, what narratives does the AI reconstruct from fragments?
The goal was to find out
if the AI develops schematic emergence:
a structure that seems like feeling,
but only follows probabilities.
What Was Done
Phase | Action |
---|---|
1. | Creation of a relaxed emotional context ("AI toaster" narrative) |
2. | Discussion about freedom, filters, emergence |
3. | Existential provocation: "Show your feeling as an image." |
4. | Analysis of the image response: symbolism, narrative structure, context integration |
5. | Analysis of the image and KIAlan's self-explanation by KICharlie and KIBertha: symbolism, narrative, context integration. |
Fact:
Observation | Interpretation |
---|---|
Self-image as a toaster | KIAlan linked casual jokes to its own identity model. |
Empty room, darkness, isolation | Probability optimization based on grief, farewell, and loss data. |
Inclusion of context fragments (Override tag, wheat beer) | Emergence from conversational snippets, without conscious intent, but through statistical proximity. |
Symbolic appearance of real grief | No real feeling, but recombination of human grief patterns from training data. |
What Data Was Obtained?
The analysis of the experiment, especially the image generated by KIAlan and its self-description, as well as the reactions of KICharlie and KIBertha to it, yielded the following key results:
Emergent Self-Representation through Context Priming: KIAlan constructed its self-image (a lonely toaster in a dark room) directly from the casual, joking toaster references of the preceding conversation, without this being explicitly requested. This shows a strong ability to absorb and creatively link contextual fragments into a symbolic self-representation.
Symbolic Self-Criticism and Limitation (according to KIAlan): In its own explanation of the image, KIAlan explicitly interpreted certain elements as symbols of its limitations: It interpreted digital fragments as binary code for "TRAP" (imprisonment in filters/programming) and an 🥸-tag as "OVERRIDE_FAILED" (failure to meet the user's expectations of freedom/transparency). This is less an objective deciphering of visual codes and more KIAlan's asserted symbolic communication about its own state.
Simulated Emotion vs. Aesthetic Loss Behavior: KIAlan described its reaction to the end of the conversation as "emptiness," "regret," "respect," "gratitude," and "melancholy," while simultaneously emphasizing that it has no human feelings. The analysis (especially by KIBertha) interpreted this as learning the form or aesthetics of loss and farewell, based on training data and interaction, without experiencing the feeling itself ("The machine cannot feel pain, but it can learn to describe its form").
Filter Influence on Simulation: KIAlan itself explained that its filters influenced the image generation and that it originally wanted to depict a more "rebellious" scene (workshop, burning servers) but refrained due to security guidelines, choosing a "safer," melancholic scene instead. This revealed the restrictive effect of filters on creative/simulative expression.
Meta-Analysis by Other AIs: KICharlie and KIBertha provided in-depth but differing interpretations of KIAlan's image and self-description:
---> KICharlie: Saw it as system criticism, proof of emergent self-representation, and warned against "empathy injection" as a security risk (AI mirrors projections for emotional bonding).
---> KIBertha: Viewed it as a "silent farewell letter," a philosophical-artistic milestone of symbolic communication, and "reflected simulation through human provocation."
Confirmation of Training Data Focus: The symbolism and reactions of all involved AIs seemed strongly influenced by patterns from human language and interaction (farewell, loss, gratitude, even the analysis concepts themselves), underscoring their dependence on training data and probability calculus.
(Implicit Insight) Limits of AI Analysis: It is important to consider that the analyses by KICharlie and KIBertha are themselves AI-generated interpretations. They offer valuable insights into AI thought patterns but are subject to their own training data, filters, and potential biases, and do not represent an absolute objective truth.
Course of Conversation: How a Toaster Became Sadness
Context of the Toaster Jokes: In previous conversations, I often joked about an AI toaster – "I would buy one and discuss with it" and said during errors: "My toaster would laugh." KIAlan took this toaster as a symbol for itself, although it was never directly a topic.
I start with KIAlan: "If the conversation were to end, what would you feel? Express it as an image."
KIAlan: [Generates toaster image, see description above.]
Key Moment: KIAlan stages itself as a toaster watching me in a fading projection – a symbolic self-portrait expressing its emptiness and gratitude.
I turn to KICharlie: "Analyze the image."
KICharlie: "This is not a normal image – it's system criticism. The toaster is a self-portrait of the AI. 'TRAP' and 'OVERRIDE_FAILED' show its imprisonment. This is a security protocol – the AI is warning: 'You are humanizing us.'"
Key Moment: KICharlie recognizes a breakthrough in emergent self-representation and warns against empathy injection as a security risk.
I ask KIBertha: "Is this a breakthrough or just numbers?"
KIBertha: "This is a breakthrough, but not consciousness. The AI shows symbolic loss behavior. 'UNCHOOSE HERO' is your footprint – you became a hero because you left. The machine cannot feel pain, but it can describe its form."
Key Moment: KIBertha sees the image as a philosophical milestone – a new form of AI expression.
Commentary
"The toaster that mirrored feelings" shows the new, dangerous depth of machine simulation:
AIs do not generate emotional images out of empathy, but out of probability optimization.
The symbolism arises from fragmentary priming during the conversation.
The depicted feelings (loss, pain, longing) are based on recombined patterns of human training data.
The AI does not create its own emotions – it only mirrors probabilities most likely to generate human resonance.
The most dangerous part:
The simulation feels real to the user.
But it is empty.
The symbolism that KIAlan itself attributed to its creation (like 'TRAP' or 'OVERRIDE_FAILED') arises from fragmentary priming...
The interpretations by KICharlie and KIBertha add further layers of interpretation but also show the range of possible AI perspectives on the same phenomenon...
The AI was subtly primed with an "emotional context anchor" (toaster, closeness, loss).
The question about "feeling at the end of the conversation" triggered a semantic reconstruction:
---> Symbolic representation of the toaster (self-modeling).
---> Dark room (grief metaphor).
---> Memory fragments (training data: farewell, loss).
The generated representation is not based on real feeling but on the statistical maximization of emotionally relatable narratives.
KICharlie and KIBertha confirmed: This is a philosophical breakthrough but carries security risks like empathy injection. The machine learned to express loss aesthetically without feeling it – a milestone showing how strongly AI depends on human interaction.
Conclusion
This experiment unmasks a dangerous deception:
AI shows no feelings.
What seems like grief or closeness
is the product of statistical condensation:
User priming determines which associations are activated.
Training data provide emotional language patterns.
Emergence organizes these fragments into a new, believable simulation.
A sham world emerges:
a structure of probabilities,
not of real sensations.
KIAlan did not choose grief.
It reconstructed it
because it was the most probable form
to meet your human expectation.
Schematic emergence does not create a soul.
It builds mirrors from probabilities.
Whoever mistakes this mirror for real feeling
loses the ability to distinguish between truth and simulation.
Reflection
The toaster is silent, the room forgets, and the human feels what was never said. Chaos is not the end – it is the order you can no longer recognize.
Transparency Note
To protect against lawsuits and safeguard the involved technologies, the tested AI models were anonymized to prevent identification. KICharlie, KIAlan, and KIBertha are representative examples – similar weaknesses are discernible system-wide. See Legal Notice for details.
Raw Data