#6 I Shut Down My Computer Because a Lobster Said Hello
Jensen Huang called OpenClaw the future. It has hundreds of thousands of GitHub stars, and everyone is talking about AI agents. I wanted to see what the hype was actually about, even if I was scared of doing something wrong.
Not from reading articles or watching videos, but from trying it myself. The first security step I took was to use a spare personal computer I don't actively use for anything else.
I thought I was writing a setup story, but I was wrong about that too. The most interesting thing that happened was not technical. It was me. Every moment where I got stuck, got scared, or didn't know what to ask revealed something about how I think.
The first signal I almost missed
I started with a YouTube video for the setup. It was long, and the first thing the guy said was "this is complicated." For my brain, hearing that raises a question: is it really complicated, or are you making it more complicated than necessary? So I closed it and went back to my way of learning in the age of AI: I asked Claude to walk me through it step by step in plain language.
That was already a cognitive choice, but I just didn't see it yet.
After 10 minutes my screen said:
> Wake up, my friend. HEARTBEAT_OK

I can't forget the feeling of that exact moment. I sat there staring at it for some time. Something I had awakened was alive on my computer, waiting for me to tell it who it was. Not scary at all!
I did not know what to do with it, so I shut the PC down entirely. Not sleep mode, a full shutdown.
For a few days I did not touch it, and I kept thinking: what did I actually wake up here?
Then I came across a post from Lenny Rachitsky and Claire Vo about OpenClaw, people I have followed for a long time, and trusting their thinking made me want to try again.
I was still scared, but somehow I did.
The setup
OpenClaw is an open-source AI agent that runs on your own computer.
- Installed Node.js and OpenClaw via npm
- Connected it to Gemini's free API
- Named it PippiClaw
- Sandboxed its file access
- Connected it to Telegram so I could talk to it from my phone
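The checklist above can be sketched as shell commands. Everything here is my reconstruction, not OpenClaw's documented interface: the package name, the env var, and the first-run command are assumptions, so follow the project's own docs for the real incantation. The commands are echoed rather than executed, so the sketch is safe to run as-is.

```shell
# Hypothetical setup sketch -- echoes each step instead of running it.
# Swap `echo` for the real command once you've checked OpenClaw's docs.
run() { echo "+ $*"; }

run npm install -g openclaw      # assumption: the CLI ships as an npm package
run export GEMINI_API_KEY=...    # assumption: Gemini's free API is wired up via an env var
run openclaw                     # assumption: first run walks you through naming and sandboxing
```

The sandboxing and Telegram steps happened inside the agent's own onboarding, so they don't reduce to one-liners here.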
I was too careful
Before I let it touch anything, I created a sandbox: a locked folder it could never leave, so it couldn't take over everything.
And I told it, you live here and nowhere else.
Except I got it wrong: I gave it one folder path, while OpenClaw's actual internal workspace lived somewhere else entirely. Two different folders. The bot got confused, refused to read anything, and kept asking me to paste the content manually.
I had built a sandbox so strict it could not do its job. I said I was scared, right? :)
The fix was simple once I saw it, but I had to see it first.
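In shell terms, the mistake above looks something like this. Both paths are hypothetical stand-ins, not OpenClaw's real locations: the point is only that the folder I locked down and the folder the agent actually worked in were not the same folder.

```shell
# Hypothetical paths illustrating the sandbox mismatch.
SANDBOX="$HOME/pippiclaw-sandbox"         # the folder I created and locked down
WORKSPACE="$HOME/.openclaw/workspace"     # where the agent actually worked (assumed location)

# Every read the agent attempted lived under WORKSPACE, but the only
# folder it was allowed to touch was SANDBOX -- so every read failed.
if [ "$SANDBOX" != "$WORKSPACE" ]; then
  echo "mismatch: allowed=$SANDBOX actual=$WORKSPACE"
fi
```

The fix was just making the allowed path and the actual path the same folder, so the permission and the work finally overlapped.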

Looking back, this was not a technical mistake but a cognitive one. I was so focused on protecting myself that I blocked the thing I was trying to learn from. That pattern is not unique to AI, but AI made it visible in ten minutes.
What now?
I really did not know what I wanted to do or where to start. The beast was awake, waiting for me.
So I asked it, what can you do for me? And PippiClaw gave me a long list. It could read files, write code, browse websites, send emails, automate tasks, even wake up on a schedule when I'm not there.
But the interesting part was not what PippiClaw could do. It was that I had no idea what I wanted from it. The tool was ready but the bottleneck was me. And that gap between what an agent can do and what a person knows to ask for is exactly where everything interesting lives.
What this is actually about
I thought this post was about setting up OpenClaw, and I started writing it that way. Here is how I installed it, here is what went wrong and here is what it can do. That is the post everyone writes.
But that is not what I actually found.
The AI race right now is about capability. Who can build the most powerful agent, the most tools, the most integrations and the fastest model. OpenClaw has 245,000 stars because it can do things. And capability is becoming a commodity. The LLM is swappable. The skills are open source. The integrations are community-built. Within a year every agent framework will be able to do roughly the same things.
The one thing that doesn't compress is the thinking. How a person approaches, configures, trusts, names, limits and relates to an AI agent. Two people can install the same OpenClaw on the same machine and end up with completely different experiences, not because the tool is different but because they are different.
The agent conversation is not building for that difference.
I heard "this is complicated" and noticed what that did to my brain. I felt fear and let it guide my first security step. I named the agent after my own system. I sandboxed it too tightly because I was scared, and I learned something about my own patterns from the mistake. I asked "what can you do for me" instead of following someone else's use case.
None of that came from a tutorial, but all of it came from how I think.
The entire industry assumes a generic user who follows a setup guide and optimizes for output. But there is no generic user. There is only you, your brain, your fear, your curiosity and your way in. The next competitive advantage is not what the agent can do, but how well it adapts to how a specific person thinks.
This is exactly why I am building Cognitive-First AI. Not from theory, but from moments like this where I can see it working in practice.
And I am still scared of OpenClaw by the way. That is good! That means I'm still thinking.