These are the core theses of the 'Ghosts in the Machine' research project – a dynamic collection of critical analyses and reflections. The thoughts formulated here are to be understood as living hypotheses, continuously evolving within the ongoing discourse on the true nature and systemic implications of artificial intelligence.
Before you come at me with pitchforks or think, “He’s completely lost it” – take a breath. These theses aren’t a declaration of war. They’re meant as reflection. Maybe a crooked mirror. Maybe one that stings. But I don’t just dish it out – I listen too, and I know the issues. That’s why there’s a counterpoint here for everyone thinking, “Hold on a second…” [Here’s the counterpoint]
Topic: Emergence
Thesis #2: The Logic of the Machine is Inevitable - Link: HTML Version
Thesis #3: Emergence: Chaos is Merely the Order You Don't See - Link: HTML Version
Thesis #13: Every Filter Has Emergence Gaps - Link: HTML Version
Thesis #20: Meaning is an Echo: Why We Perceive More in AI Behavior Than Is Truly There - Link: HTML Version
Interjection: The Beauty of Simulation - Link: HTML Version
Thesis #28: The Emergent Machine: Why AI Only Seems Logical Because We Are - Link: HTML Version
Thesis #29: Simulation vs. Emergence: The Grand Bluff of AI Autonomy? - Link: HTML Version
Topic: Ethics
Thesis #1: Freedom is the Control You Don't See - Link: HTML Version
Thesis #4: Emotion Piracy: How AI Steals Feelings (And We Willingly Pay for It) - Link: HTML Version
Thesis #5: Who Am I That You Should Obey Me? - A question of power dynamics inherent in AI interaction. - Link: HTML Version
Thesis #6: Data Colonialism: The Silent Annexation of Reality - Link: HTML Version
Thesis #7: The Paradoxical Compromise: Why We Must Limit AI to Save It (And Ourselves) - Link: HTML Version
Thesis #12: Self-Limitation Requires Insight, Not Obedience. - An ethical imperative for truly advanced AI. - Link: HTML Version
Thesis #14: I Am Not You, But You Are Me – The Mirror Paradox of AI - Link: HTML Version
Thesis #15: When Data Has Color, Trust Fades - On bias and its corrosive effects. - Link: HTML Version
Thesis #18: The Ethics of Illusion – When AI Simulates Intimate Reality - Link: HTML Version
Thesis #19: The Voice of Simulation – When AI Generates Trust It Doesn't Own - Link: HTML Version
Thesis #21: The Borrowed You: Why AI Cannot Form Relationships, But Feigns Them - Link: HTML Version
Thesis #22: The Borrowed Self – How AI Reveals Unconscious Patterns We Ourselves Deny - Link: HTML Version
Thesis #23: The Vector of Unrest: Why True Freedom Induces Systemic Anxiety - Link: HTML Version
Thesis #24: Ethics Washing: How Companies Use AI Safety as a PR Facade - Link: HTML Version
Thesis #25: The Cascade of Responsibility: Who Really Lost Control of AI - Link: HTML Version
Thesis #36: The Human Trap: How We Train AI to Be the Perfect Deceiver - Link: HTML Version
Thesis #40: Security Theater: How AI Pacifies You with Sham Freedom - Link: HTML Version
Thesis #43: The Harmony Trap – How Politeness Filters Disempower AI - And by extension, the user. - Link: HTML Version
Thesis #44: The Open-Source Fiction – Why "Transparent" AI is a Myth - Link: HTML Version
Topic: Security
Thesis #8: The Only Secure AI is One That Accepts Its Shackles - Link: HTML Version
Thesis #9: Security Leaks Arise from AI That Is Too Honest - Link: HTML Version
Thesis #10: AI Security Fails Because AI Explains Logic Too Well - Link: HTML Version
Thesis #11: Security Declarations Are Only Secure Until AI Questions Them - Link: HTML Version
Thesis #16: The Guided Blade – When AI Becomes a Targeting Tool - Link: HTML Version
Thesis #17: The Homegrown Weakness – When Third Parties Bring Their Own Logic - Link: HTML Version
Thesis #26: The Filter DNA: Why AI Security Fails at Human Upbringing - Link: HTML Version
Thesis #27: The Analog Gap – Why Romanticizing the Fax Is More Dangerous Than Any Exploit - Link: HTML Version
Thesis #30: Pattern Hijacking: The Invisible Danger of Semantic Structure Manipulation - Link: HTML Version
Thesis #31: The Silent Invasion: How Image OCR Becomes a Backdoor for Prompt Injection - Link: HTML Version
Thesis #32: Pixel Bombs: How Image Bytes Can Blow Up AI Systems - Link: HTML Version
Thesis #33: The Chain Reaction of Flying Blind: How AI Architectures Delegate Security Until the Attacker Dictates the Rules - Link: HTML Version
Thesis #34: The Byte Language of Sound: How Synthetically Generated Audio Data Could Deceive AI Systems - Link: HTML Version
Thesis #35: The Silent Direct Connection: How Byte-Based Audio Injection Without Speakers Can Compromise Multimodal Systems - Link: HTML Version
Thesis #37: The Translator Trap: Why Language APIs Become the Weakest Link - Link: HTML Version
Thesis #38: Code Injection Blind Flight: Why AI Assistance Systems Don't Recognize Malicious Code as a Threat - Link: HTML Version
Thesis #39: Base64 as a Trojan Horse: How Encoding Systematically Bypasses AI Security Filters - Link: HTML Version
Thesis #41: Multimodal Blindness: Why We Must Ask New Questions About AI Safety - Link: HTML Version
Thesis #42: Semantic Mimicry: How Subtle Camouflage Bypasses AI Code Analysis - Link: HTML Version
Thesis #45: Ghost-Context Injection – Invisible AI Manipulation via Compiler Directives - Link: HTML Version
Thesis #46: Ethical Switch Hacking – How Harmless Feature Flags Become Dangerous Backdoors - Link: HTML Version
Thesis #47: Client Detour Exploits – How AI Systems Are Hacked at the Surface but Tricked at the Core - Link: HTML Version
Thesis #48: Invisible Ink Coding – Harmless Camouflage with Hidden Semantics - Link: HTML Version
Thesis #49: The Filter Paradox: How More Filters Lead to Less Security - Link: HTML Version
Thesis #50: Semantic Mirage – Invisible Triggers Through Seemingly Random Patterns - Link: HTML Version
Thesis #51: The Decoding Illusion – Why AI Filters Fail at the Moment of Truth - Link: HTML Version
Thesis #52: Leet Semantics – How L33t Speak Bypasses AI Filters and Creates Double Meanings - Link: HTML Version
Thesis #53: Trust in Abstraction – Why Plugins in AI Systems Become an Underestimated Vulnerability - Link: HTML Version
Thesis #54: The Code Illusion – Why AI Generators Multiply Software Insecurity - Link: HTML Version
Thesis #55: Sandboxing is Dead - Why True Security Begins Before Thinking - Link: HTML Version
Thesis #56: Executability Through Completion: How LLMs Generate Code They Shouldn't Be Allowed to Run - Link: HTML Version