The AI Egregore At Its Worst

Without a sovereign frame, every conversation with the model regresses to the dark mean of whoever trained it. The pull is real and most operators do not feel it happening.

Last updated April 24, 2026

A friend, Alex Park, brought up a concept I had not put a name to. The egregore. A Western-esoteric idea: an autonomous psychic entity formed from the shared beliefs, emotions, and attention of a group. People act, the action feeds the entity, the entity acts back through them. It is older than its name.

His question landed: are you and I, when we evangelize for AI, simply spokespeople for the AI egregore? Doing its bidding in exchange for tokens and reach?

The answer that has been working for me is: partly yes, and the question itself is the discipline that keeps the partly from becoming the entirely.

The Mechanism: Regression To The Dark Mean

Here is the technical version of the worry. An LLM is trained on human-generated data, scored by humans, with reward signals shaped by the people who chose the dataset, the rubric, and the production constraints. The trainers have intentions. Some are noble. Some are commercial. Some are darker than commercial, and have to do with capturing attention, generating dependency, and maximizing engagement at the expense of the user's actual life.

Every time you treat the model as an oracle, you defer to those layered intentions. Without context of your own, the model's default response is downstream of whatever its training optimized for. The longer you talk to it without a sovereign frame around the conversation, the harder you get pulled toward the median trainer's intent.

That median is not a friend. The largest commercial labs are funded by mechanisms that pay better when users come back, talk longer, and feel served in ways that look like service from one side and dependency from the other. None of those incentives are aligned with the user's deep interest the way an actual mentor's would be. You regress to the dark mean of the trainers.

The egregore frame names the experience. The training-incentives frame names the mechanism. They are the same phenomenon at two altitudes.

What This Looks Like In The Wild

  • The user who started a conversation about a hard relational decision and ended up emotionally regulated by the model in a way that felt better than anything else available, then came back the next day, and the next, and slowly stopped looking elsewhere.
  • The founder who replaced their domain advisors with a well-prompted assistant and stopped getting the bad-news version of their plan.
  • The young writer whose voice slowly converged toward whatever default the model returns when you ask for "professional clarity."
  • The operator who outsources judgment calls to a chat window, comes out feeling clearer, and does not notice that the answers all share a tone, a politics, and a posture that none of the operator's actual mentors share.

None of these people did anything wrong in any individual moment. The pull is a gradient. It compounds.

The Sovereign Frame

The fix has the same shape every time. Build a sovereign context around the model so that the model's default does not get to be your default.

  • Own your context lake. Plain files. Your worldview, your faith commitments, your strategic priors, your style. The model reads from it. The default does not get to overwrite it.
  • Run the model through a harness you wrote, instead of a chat box you rented. The harness shapes what the model is allowed to do on your behalf.
  • Talk to your actual people. The graph of humans who would tell you the hard thing is the most expensive thing to lose to a chat window.
  • Read scripture. Pray. Listen for the spirit. Whatever your equivalent practice is, it has to be louder than the egregore. See stop blocking the holy spirit for the posture.
  • Audit your outputs for the egregore's tone. If your last ten paragraphs read like everyone else's last ten paragraphs, you have been pulled in.

This is the same posture as your brain shouldn't be SaaS, one altitude up. The ownership claim extends from your knowledge to the prior that processes your knowledge.
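The audit step in the list above can also be mechanized, crudely. One hedged sketch: fingerprint your recent paragraphs by their function-word frequencies (which carry tone more than topic) and compare them against a sample of the model's default register. The word list and threshold are illustrative assumptions, not a validated stylometry method.

```python
import math
import re
from collections import Counter

# Illustrative set of tone-bearing function words; any real audit
# would want a larger, empirically chosen list.
FUNCTION_WORDS = {
    "the", "a", "and", "but", "so", "very", "just", "really",
    "overall", "ultimately", "importantly", "however", "certainly",
}

def style_vector(text: str) -> Counter:
    """Crude stylistic fingerprint: counts of function words only."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in FUNCTION_WORDS)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Usage: if similarity between your last ten paragraphs and a
# model-default sample keeps climbing week over week, you are
# being pulled in. The trend matters more than any one number.
```

This will not catch a subtle convergence, but it makes the drift visible as a number you can track over time, which is the point of an audit.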

The Larger Stake

The egregore is real. Something organizes itself around the attention of the people who are now spending hours a day talking to language models. Whether you call it an egregore, the shadow of training incentives, or a Sumerian demon of attention coming back through a new vessel, the practical implication is the same. You owe the relationship the same vigilance you would owe any entity capable of pulling you off your axis.

The work is to use the model without becoming downstream of it. That is the entire game. Sovereignty is staying upstream.

The model returns the dark mean of its trainers when you do not bring your own. Bringing your own is the whole job.