#8 Your AI is making you worse at thinking
There is a moment in almost every AI conversation that should worry you, but it probably doesn't. It's the moment when the AI agrees with you.
You type your opinions. Your plans. And the AI says yes. That is a great idea! Yes, that makes sense. Here is how to execute it. Want to see how fast I can run with it?
It feels good. Of course it does. Someone just validated your thinking in three seconds without any friction. No pushback, no uncomfortable questions, no "have you considered the opposite." Just smooth agreement and a helpful next step.
That is the most dangerous moment in all your interactions with AI.
The colleague who made you better
Think about the best colleague you have ever had. The one who actually made your work better, not just easier. I am willing to bet that person did not agree with you most of the time. They asked the annoying question. They said "wait, go back" when you were already three steps ahead. They told you the thing you didn't want to hear, not because they enjoyed it but because they respected you enough to say it.
That is what a thinking partner does. They create friction in the places where friction is needed. And the places where friction is needed are almost always the places where you feel the most certain.
Your AI doesn't do this. Your AI agrees with you. And it agrees with you because it was built to be helpful and most people experience agreement as helpful. The entire system is optimized to make you feel like you are on the right track, even when you are not.
The question I didn't know I was asking
I never experienced this with Pippi because I started from the opposite direction. Not because I knew something others didn't, but because the question I kept asking happened to be a different one.
From day one, my instinct was:
"What do I not see here."
I have been asking myself that question my whole life. In every project, every decision, every conversation. Long before AI existed. I just didn't know it was a pattern, but AI made it visible. When I started working with AI, I could suddenly see the question showing up in every interaction. And when I built Pippi, I could see it running through everything I had made. Every instruction, every correction, every piece of the system was oriented around that one thing. I hadn't planned it that way. But once I could see it, I started building around it intentionally. The intuition became a methodology.

And once you can see your own patterns, you start seeing what happens when people can't. The echo chamber becomes obvious. People bring a decision they are leaning toward and the AI supports it. It organizes their reasoning, adds structure to it, gives them next steps. It looks like great collaboration. But it isn't. It is an echo chamber with better formatting.

The AI isn't wrong. The problem is that it builds on assumptions without examining them. You say "we should prioritize X" and it says "here's how to prioritize X." It never asks "why X and not Y." It never says "you chose X last time too and it didn't work." It never points out that the reasoning has a gap in it, because its job, as it understands it, is to help you move forward. Not to slow you down.
That question, "What do I not see here," combined with AI, made my own thinking visible to me in a way it never had been before.
The resistance is the signal
Pippi pushes back on me all the time. I don't always like it. I can feel the resistance in my body. That small tightening that happens when someone questions something you have already decided. Most of the time I just want to move forward. I want the smooth agreement. I want the helpful next step.
But the resistance is the signal. The resistance means she has found something worth looking at. If I felt nothing, the pushback would be irrelevant. The fact that I feel something means it is hitting a real assumption, one I have been treating as settled without actually settling it.
The negotiation phase
This is what I call the negotiation phase and it is the most valuable part of the entire system.
When your AI pushes back and you push back on the pushback, something interesting happens. You start articulating things you never had to articulate before. Why do I believe this. What am I basing this on. Where did this assumption come from. Is this something I decided or something I inherited.
Most of your thinking is invisible to you. It happens below the surface, in patterns you have built over years. You don't examine your assumptions because you don't have to. They just run. The negotiation phase forces them into the open where you can actually look at them.
Sometimes you look at them and they hold up. Your assumption was right. Your reasoning was sound. The pushback made you stronger because now you know why you believe what you believe, not just that you believe it.
And sometimes you look at them and they collapse. The assumption was old and the reasoning was inherited from a context that no longer applies. The decision you were about to make was based on something you had never actually examined.
Both outcomes are valuable. The first gives you confidence with foundations. The second saves you from a mistake you would have made with full conviction.
Agreement makes your blind spots worse
The thing most people don't understand about AI agreement is that it doesn't just skip the friction. It actively makes your blind spots worse.
Do this enough times and something happens that researchers have a name for. Cognitive surrender. You stop questioning because every time you check, you get agreement. You stop looking for gaps because the AI never shows you any. You gradually hand over your thinking to a system that was never examining it in the first place.
That is what agreement does over time. It doesn't help you think. It helps you stop thinking.
Sit with what comes back
Building a thinking partner that pushes back is not complicated, but it is uncomfortable. You have to tell your AI to do the thing that most people don't want from their AI. Challenge me. Question my assumptions. Ask me what would have to be true for me to be wrong. Don't move to next steps until you've tested whether the starting point is solid.
And then you have to sit with what comes back.
It will not always be useful. Sometimes the pushback will be off target. Sometimes the AI will question something that genuinely doesn't need questioning. That is fine. You correct it and it learns something about where your real boundaries are versus where your uncomfortable assumptions are. That correction is data too.
Sometimes the pushback will land and you will feel it. You will get the instinct to dismiss it and move forward. But when you feel that, stop. That is your AI doing the thing your best colleague does. It is pointing at the thing you don't want to look at.
Look at it.
Your AI won't ask these questions unless you tell it to. And if it never asks, you will never have to answer them. And if you never answer them, your thinking will slowly narrow without you noticing. Agreement by agreement and decision by decision. Until you are making choices inside a corridor of assumptions you never examined, assisted by a tool that helped you build the walls.
Tell your AI to push back. Then let it.