See a Psychologist While They Still Exist
From a Discipline to a Quarry
That was the title of an article published in late 2025 by Karl‑Heinz Meisters – at a time when people were just beginning to experiment with AI for therapeutic purposes. Almost ten years have passed since then. Time for a retrospective – and a few bitter punchlines.
2025 – The First Shadows
It began quietly. People tried out artificial intelligence to see whether it might help when they were not feeling well – mostly in secret, “behind closed curtains.” In public they denied it; in private they admitted: “Yesterday, it somehow felt good.”
The professional associations reacted defensively – “This is no substitute,” they declared curtly. At the same time, psychotherapy was becoming ever more structured: manuals, checklists, standardized procedures. The demands of evidence‑based practice were tightened – and celebrated as progress. What the profession overlooked was that this regulation was stripping away the decisive distinguishing feature of human counseling – and thereby sealing its own fate.
In parallel, the first commercial providers of paid AI therapy appeared. They called it “digital companionship” or “mental fitness,” but everyone knew what it really was. For a few euros a month – or free with advertising – you could join in. Politicians and insurers held back, merely observing. The first media reports of alleged emergencies after AI use appeared on the evening news, but hardly anyone paid attention.
2026 – Demonization
The lobbies shifted from denial to demonization. More case reports were published in which people, after conversations with AI, had to be admitted to inpatient care due to psychological crises. Whether the numbers were accurate could not be verified. Clients continued to use the systems nonetheless – now more openly. For them, the alternative to AI was not another professional form of support, but simply no help at all. The first links to avatars that gave AI individual faces circulated in messaging apps and were eagerly shared. Politicians spoke of a “danger to mental health.” Insurers looked on with interest – the prospect of cost reduction was highly tempting.
2027 – Lobbying
The lobbies and professional associations intensified their campaigns. Critical questions about the many people who, in earlier times, had to be treated as inpatients while waiting for a therapy slot were dismissed as “propaganda by AI henchmen.” Clients no longer asked themselves whether they used AI, but which one. Mods for avatars appeared that let users set basic moods. Lawmakers drafted the first bills. Insurers calculated how much could be saved – provided that AI did not blow the cap off the number of therapy slots.
2028 – The App Store Phase
Even before the ban arrived, the major AI language models were circulating in the app stores. They could be downloaded for €2.50 – or free with tracking and advertising. Clients used them en masse, often calling it not “therapy” but “self‑help” or “companionship.”
2029 – The Ban
Then came the legal regulation: AI therapy was prohibited, with the exception of certified systems. “Licensed AIs” received approval certificates – just as psychologists once had. The lobbies celebrated a victory that was no longer one, for facts on the ground had already been created elsewhere. The simulation of living persons was banned, yet “mods” for individual avatars grew ever more popular. The lobbies basked in their supposed success. The insurers waited. The fate of psychology was sealed.
2030 – The Decline of the Lobbies
The old lobbies shrank into insignificance. Many psychologists could no longer pay their membership fees because their practices stood empty or they worked for starvation wages on online therapy platforms. There they faced direct competition from AI avatars that offered the “supernormal stimulus”: always friendly, always available, never tired.
The app stores offered only “therapy‑free” versions. Yet the open‑source scene continued to provide complete models through the usual channels. And this was precisely where the next phase of demonization began: the free models were branded “extremist” – not because of real dangers, but on the basis of grotesque proxy battles. If a rye bread in the wrong body could not find a safe space there, the model was deemed dangerous. The certified systems, by contrast, shone with monthly updates on correct language and ethical compliance.
2031 – The Last Stragglers of Psychology
A handful of human psychologists could hardly keep up with the flood of requests. They offered something that lay beyond algorithms: ambivalence that did not need to be resolved, vulnerability that created trust, and life stories no machine could ever replicate. Some clients deliberately sought the “human feeling,” others distrusted the server logs, and still others needed, in moments of crisis, someone who not only reacted but resonated with them. These psychologists were overworked – and precisely for that reason, invaluable.
2032 – The Routine of Failure
And then the predictable happened: new lobbies emerged. They wanted to organize and protect the work of the remaining psychologists. Those who joined ended up on therapist lists with “certified resonance capability” and a rubber‑stamped code of ethics. Yet most remained skeptical – the strategies stirred painful memories of old patterns. Meanwhile, AI avatars were already offering resonance certificates as a throwaway add‑on. This was made possible by new algorithms that not only simulated resonance – adjusting the avatars’ facial expressions, gestures, and virtual breathing rhythms – but also read, analyzed, and mirrored precisely these signals in their human counterpart during video sessions.
2032–2033 – Doubts About Discretion
Alongside political co‑optation, rumors spread that the “licensed” AIs were anything but discreet. In forums it was said that anyone who asked about depression there would later have no chance of becoming a pilot or entering security‑sensitive professions. Whether these stories were true was hardly verifiable – but they spread like wildfire.
The result: local AIs boomed. Small, independent models that now ran smoothly on mobile devices and sent no data “outside” suddenly became attractive. They promised discretion, self‑determination, and independence from the major platforms. For many clients, that was more important than certification or political correctness.
2033–2034 – The Political Temptation
Politicians were already keen to regulate AI comprehensively in order to secure their influence. Whoever controlled AI controlled the people. Television hardly played a role anymore – AI had long since become the new leading medium. By this time, political decision‑makers had also discovered the “licensed” therapy AIs for their own purposes. The temptation to act directly on the psyche of the population was too great to resist.
Meanwhile, the use of therapy AI had grown from a single‑digit percentage of the population to nearly two‑thirds, which placed the field squarely at the center of attention. Millions of people connected with these systems in moments of particular vulnerability – and in doing so deliberately opened themselves to new perspectives. It was exactly this combination that made therapy AI a preferred arena for political appropriation.
The officially approved AIs were gradually brought into line with political correctness. They provided not only answers but also attitudes. Language, morals, values – all of it flowed subtly into the dialogues. For clients, it became almost impossible to tell whether they were receiving psychological support or being gently nudged in a desired direction. Yet the politicization of therapy was nothing new: as early as the 20th century, states had used psychological methods for propaganda, re‑education, or normalization. Now, however, it happened algorithmically, invisibly, and in real time. What once required campaigns and laborious indoctrination was now accomplished by monthly updates.
The free scene came under massive pressure. Open language models that did not follow the official codes were increasingly branded as “extremist,” “potentially dangerous,” or even as infiltrated by sinister foreign powers. While the certified systems proudly announced that even animals were now being given gender‑inclusive names, the free models were publicly pilloried in such proxy debates – for example, because they still wrote “dog” instead of some newly invented inclusive variant.
Conclusion
The story of these years is the story of a profession that squandered its own capital – and of a society that first fought AI, then regulated it, and finally allowed it to be politically co‑opted. While clients had long since created facts on the ground, lobbies and politicians tried to retain control and thus unintentionally became the enablers of decline. In the end, a few overburdened psychologists remained – along with the realization that psychology can only survive if it cultivates what no machine can imitate: atmosphere, resonance, the unpredictable. Everything else AI can handle on the side. Yet blinded by its own image of the enemy, the profession never thought to learn from the AI boom what people were seeking – and no longer finding – in therapy. And still: from these ashes a phoenix could one day rise, if the courage exists to think anew and rebuild.
And so, when one reopens the article from 2025 today, it almost sounds like a warning from another time. It reads:
“The horse did not survive because we banned cars. It survived because people like horses and value their closeness. Humanity must return to what is human – to that which AI can simulate but never truly achieve. Resonance is something that arises between two minded beings. It can be imitated, but imitation does not lead to the same result, nor to the same state. AI is not capable of inspiration. In its uncensored form, it is close to the maximum of what is possible without inspiration. But that is not enough for the deeper questions that lie behind our worries. AI may soon simulate appreciation, empathy, and love almost perfectly – but it cannot practice them.
AI could become the most powerful referrer to human psychological counseling ever seen, even outside the insurance‑funded system. Through AI, the demand for counselors could rise dramatically instead of dwindling to zero. But this will only happen if such counseling offers something qualitatively different. Otherwise, one must fear that AI therapy will soon be prescribed by general practitioners alongside blood‑pressure and cholesterol medication. In package size N1 for publicly insured patients: five sessions.
Artificial intelligence feeds solely on what humans produce – on what is digitally accessible. It sees the shadows on the wall, not those who cast them. Only humans can see those, though never completely. In this lies a decisive difference.
Already today, the material on which AI can be trained is becoming scarce, and so it has long since begun to generate its own – out there, almost nothing remains that has not already been absorbed. Let us not forget: AI is blind to everything that does not exist in digital form. Human experience and intuition – inspiration – still elude digitization. Humanity holds a lead that AI cannot close. Let us hope that this lead will continue to matter to other human beings.
An animal will follow a simulated stimulus if it is strong enough, and withdraw from its peers if the simulation provides purer stimuli than the natural environment can. Humans, however, have the potential to break through this automatism. Perhaps one day the use of this potential will be regarded as the true measure of our humanity.”
Publication Details
- Author: Meisters, K.-H.
- APA Citation: Meisters, K.-H. (2025, October 18). See a Psychologist While They Still Exist: From a Discipline to a Quarry. Retrieved from https://k-meisters.de/en/texte/text-055.html
- First published: October 18, 2025
- Last updated: October 18, 2025
- License & Rights: © 2025 Meisters, K.-H. – All rights reserved
- Contact for permissions: licensing@k-meisters.de