
#9 What happens to the mind when it meets the machines?

I love this technology. I have been an early adopter of every tool I could get my hands on. I spend my evenings and weekends building with AI, and I have seen it change what is possible in ways that genuinely excite me. I am the one who keeps pushing to get people on board, to help them see the possibilities and lower the threshold. I am deep inside it, using it and building on it every day. And it is because I am that deep in it that I started noticing something I had not planned for: how do we get everything AI makes possible without losing what makes us good at what we do? It is not about choosing one or the other. It is about putting a cognitive layer on top of how you use AI so that you use it more intentionally.

The questions behind the building

I have been building on my own time for months and when I finally zoomed out I could see the question that my own thinking had been circling underneath everything.

What happens to the human mind in interactions with artificial intelligence? And how can we make sure that we are the ones getting the most out of this interaction?

Not what happens to the machines. My question is about the human side. What does meeting AI again and again every day actually do to a mind? AI is here to stay, so why not use it more consciously to sharpen our human edges even more? How can we take what is already working with AI and put a cognitive layer on top of it that makes us smarter as well?

I would not have seen these questions if I had kept building with speed and never taken the time to stop and write about what I was actually making.

The defences I built before I had a name for them

Wharton researchers gave a name to one piece of the answer this year. They found that people adopt AI outputs about eighty percent of the time even when those outputs are deliberately wrong. They called it cognitive surrender. I read that paper and recognised something I had built defences against in my own practice before it had a name, because I had already felt the pull in my own mind and already started to resist it.

That is not an argument against using AI. That is an argument for using it better. If eighty percent of people are accepting output without thinking, that means eighty percent of the value of human judgement is being left on the floor. The technology is extraordinary. The opportunity is in how we meet it.

That resistance became Pippi. It became the methodology I developed and am calling Cognitive-First AI. And I believe this is something we need to take seriously if we are going to live side by side with AI.

Using my own brain as a lab

I work in tech, and I have a lifelong habit of watching my own thinking closely enough to notice when it shifts. That makes my brain a useful place to study this from. Not because it is typical, but because I have been paying attention to how my own mind works for a very long time and I am in constant contact with the thing I am trying to understand.

It started with one case and that was my own mind. Now I have tested it with others and so far it is holding up.

Intact is the word that matters

When I say I want my brain to stay intact I do not mean unchanged. A brain in contact with something new is going to change. That is how minds work. Our minds change in contact with everything. With nature, with other people and with the tools we use. Every interaction shapes the mind in some direction. The question has never been whether the mind changes. The question is whether we are aware enough of the change while it is happening, and whether we take the lead on which direction it goes.

I mean that the parts of my thinking that make me good at what I do keep working. The way I can sense something before I can even explain what it is, the way I can weigh a decision from angles others might not see, the taste and judgement that tell me when something is right and when it is only almost right. Those are the things I want to stay sharp, not despite using AI but through how I use it. Not because I hold AI at a distance, but because I have built the interaction in a way that keeps those muscles working.

What I am learning through my own practice with Pippi and through testing the methodology with other people is that you can build the interaction with AI in a way that makes your thinking sharper. And that this is not about using AI less or being more careful. It is about whether the AI is shaped around how you actually think or whether you are quietly adjusting how you think to match what the AI expects from you.

Right now almost every AI tool on the market is built around possibility, and speed and automation seem to be the most important things. But what if we also, on top of all of that, added a layer that not only takes care of things for us but actually makes us better at how we think?

Pippi is bigger than Pippi

When I started building Pippi I did not know what it would become. It started as a way to make AI work better for my mind, but I also see now that it is only one part of a bigger picture.

Pippi is one expression of something my mind has been working through for a long time. The thing that actually matters is not the system itself. The thing that matters is the cognitive layer underneath it. Can you take something that is already making you more productive and add something on top that also makes you a better thinker? And if you can, what does that layer need to look like?

Pippi is how I have been answering that question with my own mind. The first time I tested my methodology with someone else was the first time I could see the answer hold in a different mind. And that is the part that excites me the most.

What I am calling Cognitive-First AI is the answer as I have it today. AI should be built around how a specific mind actually works, not the other way around. When you build it that way, the person gets sharper. The human brings more to the table, not less. And the AI gets better input to work with because the person using it is actively thinking.

It can be tested. And it runs against the direction almost everything else in the industry is moving, which is exactly why it needs to be said.

What the lab has produced

I started with an intuition: my own mind noticing something that did not have a name yet. I built a system around it on my own time, and it turned into a method that could work for others, not just me. Then I zoomed out and saw the question that had been underneath all of it from the beginning. What happens to the mind when it meets the machines, and how do we make sure that we are the ones coming out stronger?

This is what the lab has been for. I am going to keep running it. Because now I know that we can use AI in a way where the human stays sharp and the AI gets a sharper human to work with. That is the future I believe is worth building.