#10 What happens before you decide?
I have been listening to The Untethered Soul this week. Michael Singer keeps coming back to one idea. Consciousness gets pulled into whatever object it touches. You see a car and your awareness flows into the car. You have a thought and your awareness becomes the thought instead of the one watching it. The work, he says, is staying as the one noticing. The one who is actually making the decisions.
I was sitting with this when I realised it is the same mechanism behind cognitive surrender to AI, something I have been writing about here for months.
The Wharton research I have referenced before shows that people adopt AI outputs around eighty percent of the time, even when those outputs are clearly wrong. The usual explanation is trust, or the path of least resistance. I think it is something more basic. The wrongness is the part that needs explaining: if it were just trust, people would still spot the obvious mistakes. They do not, because there is never a pause long enough to look. The output is inside their thinking before they have had a chance to question it.
AI produces an output and your attention flows into it. The thought on the screen becomes your thought. Not because you decided it should, but because that is what attention does by default when it meets another object. It absorbs.
This is why the people who use AI well are not the most technical ones. They are the ones who already have a practice of being critical of their own thinking. They have spent years questioning their own reasoning, watching themselves think, treating their first response as something to look at rather than as the truth. When they sit down with a machine they bring that same critical stance with them. They do not merge with the output, because they do not even merge with their own first thoughts.
This is the question I keep coming back to lately. Is it easier for people who are already critical of their own thinking to keep judgment in the room when working with AI? And can this be trained for everyone else as well?
I think the answer is partly yes and partly something else. Yes, it can be trained, but not in the way most AI training is set up right now. You cannot teach this with a checklist of prompts. What works is practice in noticing your own thinking while it happens, and treating the AI output as something to explore rather than something to absorb.
The other half is how we build the AI itself. If the practice of noticing protects the human, the way the AI is built either supports that practice or works against it. Most AI being built right now works against it. The output is given as an answer rather than as something to think with. The way it is set up makes accepting feel easy and questioning feel like extra work. The whole thing is designed to pull you in.
This is part of why I built Cognitive-First AI and Pippi the way I did. The output should be something the human looks at, not something she absorbs. There needs to be a record of where her reasoning has failed before, so she can catch the same pattern next time. There needs to be a way for her to be pulled back when she has merged with something she should have questioned. The system is built to keep the human in the role of the one thinking and the one deciding.
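To make the shape of this concrete, here is a minimal sketch in Python. None of it is Pippi's actual code; ReviewGate, FailureRecord and the naive keyword match are hypothetical stand-ins for the pattern, a sketch of the idea rather than the implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FailureRecord:
    pattern: str   # a short marker for where reasoning went wrong before
    note: str      # what to watch for the next time it appears

@dataclass
class ReviewGate:
    """Holds an AI output as a draft until the human has looked at it."""
    failures: list = field(default_factory=list)

    def record_failure(self, pattern: str, note: str) -> None:
        # Keep a record of past reasoning failures so the same
        # pattern can be caught next time.
        self.failures.append(FailureRecord(pattern, note))

    def present(self, output: str) -> dict:
        # Surface any past failure whose pattern appears in this output.
        # A naive substring match, purely for illustration.
        warnings = [f.note for f in self.failures
                    if f.pattern.lower() in output.lower()]
        return {"draft": output,      # framed as a draft, not an answer
                "warnings": warnings,
                "accepted": False}    # nothing is accepted by default

    def decide(self, presented: dict, accept: bool, reason: str) -> dict:
        # The human supplies the decision and a stated reason,
        # staying in the role of the one deciding.
        return {**presented, "accepted": accept, "reason": reason}
```

Used like this, the default is a pause rather than an answer:

```python
gate = ReviewGate()
gate.record_failure("rounded", "Last time I accepted a rounded figure without checking the source.")
draft = gate.present("Revenue grew by roughly 40% (rounded).")
final = gate.decide(draft, accept=False, reason="I want the unrounded source number first.")
```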
Build it this way and something changes on both sides. The human keeps her judgment and the artificial intelligence gets to work with a human who has her judgment in place. That is a different working relationship.
The pattern Singer describes is not new. What is new is how often we are asked to stay as the one noticing and how easy it has become to stop.