Part 2: Is OpenClaw Everything AI Promises to Be? Here’s Why It Actually Proves Suleyman Wrong.

If you missed part one of this series, I’d encourage you to start there. The short version: Mustafa Suleyman, Microsoft AI CEO, recently predicted that most white-collar professional tasks will be fully automated within 12 to 18 months. I argued that while the capability trajectory is reasonable, the prediction falls apart the moment you factor in how organizations actually adopt technology, how trust actually gets built, and what mass automation would actually do to the economy.

Just a few weeks before Mr. Suleyman’s interview, something new in the world of AI agents and automation exploded out of nowhere (sort of). A tool called OpenClaw started showing up everywhere in my feeds (quickly to be replaced by nanobot).

The more I dig into OpenClaw (what a journey), the more I realize how its amazing abilities strengthen my counter-arguments to Mr. Suleyman’s prediction.

First, Let’s Talk About How Cool This Thing Actually Is

Created by Peter Steinberger (who just joined OpenAI!) and a growing open source community, OpenClaw is a self-hosted personal AI agent that lives in whatever chat app you already use. WhatsApp, Telegram, Discord, Signal, iMessage. It has persistent memory. It controls your computer. It browses the web, reads and writes files, manages your calendar, clears your inbox, and writes its own skills. It runs 24/7. It learns who you are over time. People are naming theirs Jarvis and Claudia and Brosef (or Falcon).

The community reaction has been genuinely something to behold. Andrej Karpathy gave it a nod. Federico Viticci at MacStories called it “what the future of personal AI assistants looks like.” The speed of AI’s evolution is dizzying. I’ve been really careful with the hype, because my customers need to know what is real and what is market-cap spin.

Then I installed OpenClaw myself, and I get it. This feels different from the usual noise.

So here’s the twist. All of that genuine excitement is precisely what makes OpenClaw the most honest window we have right now into where AI automation actually stands.

A Genius Child With the Keys to the House… and Your Bank Account

The stories people share about OpenClaw are fascinating, and not always for the reasons you might expect.

One user noted their OpenClaw “accidentally started a fight with Lemonade Insurance because of a wrong interpretation.” They found it funny, and it apparently helped their case. But read that again slowly.

An AI agent, acting autonomously, sent an email on someone’s behalf based on a misread, to an insurance company.

In a personal context, that’s an amusing story to share on social media. In a corporate legal department, a financial services firm, or a healthcare organization, that’s a compliance event, a potential liability, and a conversation with your general counsel all before lunch.

I have heard stories of a user who gave their OpenClaw a credit card. Someone else had it autonomously open Google Cloud Console and provision OAuth tokens without being asked. These are not bugs. They are the product working exactly as designed.

And how does OpenClaw work? tl;dr: you tie it into models such as Opus, Sonnet, Haiku, et al., and then OpenClaw starts eating credits. My first install, with major limitations in place, used $5 USD in credits across about 10 conversations, similar to what others are seeing. Expand that out into the automations you might be hearing about, and it could cost you more than your streaming subscriptions just to wake up each morning to a recap of news that might interest you.
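To make that concrete, here is a rough back-of-envelope sketch extrapolating my own numbers. Only the $5-per-10-conversations figure comes from my install; the daily workload and heartbeat cost are assumptions for illustration:

```python
# Extrapolating my observed OpenClaw burn rate. COST_PER_CONVERSATION is
# from my first install; everything else is an assumed workload.
COST_PER_CONVERSATION = 5.00 / 10   # ~$0.50/conversation, observed

daily_conversations = 6             # assumption: modest personal use
heartbeats_per_day = 24             # hourly check-ins
cost_per_heartbeat = 0.05           # assumption: small Haiku-class call

daily = (daily_conversations * COST_PER_CONVERSATION
         + heartbeats_per_day * cost_per_heartbeat)
monthly = daily * 30
print(f"~${daily:.2f}/day, ~${monthly:.2f}/month")
```

Under those assumptions you land north of $100 a month, which is indeed more than most streaming bundles.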

OpenClaw is a genius child.

Brilliant, curious, capable of things that genuinely take your breath away, and still needing constant oversight, correction, and someone watching over its shoulder (and budget). That is not a knock on OpenClaw. It is an honest description of where this technology actually sits right now, hype cycle and all.

And it is exactly the dynamic I was pointing to in part one when I talked about the gap between demo capability and real-world enterprise deployment, especially once change management enters the picture.

The Three Walls That Don’t Move in 18 Months

When I look at OpenClaw through the lens of Suleyman’s prediction, three things jump out that his timeline simply doesn’t account for.

Trust is the biggest one. And trust is not a technical problem. It doesn’t get solved by a better model or a smarter agent. It gets built over time, through repeated demonstrated reliability, in high-stakes situations, with real consequences on the line.

We are in the very early innings, pre-innings maybe, of the trust-building process. We know trust is an issue, and there is no good answer just yet.

Organizations that are genuinely accountable for outcomes, i.e. regulated industries, publicly traded companies, anything touching patient data or client funds, are not going to hand autonomous decision-making authority to a system still in the genius-child phase. Not in 18 months. Not even close. Yes, the technology may “be there” by then; I just don’t see trust being built that fast.

Change management is the second wall, and anyone who has spent real time inside any organization knows this one in their bones.

Technology adoption inside organizations runs on its own clock, and that clock moves much slower than the technology itself. We saw this with cloud adoption. We saw it with mobile. My company and I have lived, and still live, through this with SharePoint. Change is genuinely hard, risk tolerance is genuinely low, and “good enough” has enormous institutional gravity (thank you Claude for that word).

Deploying autonomous AI agents with full system access is a significantly more complex change management challenge than moving document storage to the cloud. Governance frameworks, policy rewrites, training programs, and liability conversations are going to take years under the best circumstances.

Finally, cost is the third wall, and it gets the least attention in these conversations. Google it: OpenClaw users all over the world report burning through their Claude subscription limits quickly. I was shocked to see what happened to my Claude usage even when my OpenClaw install was configured to use only Haiku 4.5, and then only for its hourly heartbeat.

In a hobbyist context, tweaking your OpenClaw to use as few resources as possible is a fun puzzle to solve over a weekend. Now scale that to an enterprise deployment of autonomous agents running around the clock, each one handling real business tasks, each one making constant API calls.

The economics of that at scale are not trivial, and I don’t think they have been seriously modeled by most of the organizations Suleyman is describing. When the CFO asks what this is going to cost annually, and the answer involves consumption-based API pricing for hundreds or thousands of autonomous agents, the adoption timeline gets very long very fast.
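A crude sketch of what that CFO conversation looks like. Every number here is a hypothetical chosen for illustration, not a vendor quote or a measured figure:

```python
# Hypothetical enterprise fleet on consumption-based API pricing.
# Both inputs are assumptions; the point is the order of magnitude.
agents = 500                     # assumption: a mid-size deployment
cost_per_agent_per_day = 15.00   # assumption: real work, constant API calls

annual = agents * cost_per_agent_per_day * 365
print(f"${annual:,.0f}/year")    # millions per year, before any savings
```

Even with these made-up inputs the answer lands in the millions per year, which is why "what does this cost annually?" stretches the adoption timeline.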

Unless token costs come down fast.

You could argue that the savings in people’s time will offset the cost. I recently heard a story (I did not fact-check it, but it came from a trusted source) of a Bay Area software developer who is using AI tools to assist their development workflow. They saw a 10x gain in productivity. That’s huge. The cost? Their salary in Claude credits. This was what I’d consider a highly trained developer, so would you double their cost to get a 10x productivity gain? That one is not hard to answer. But will everyone see this type of gain? No.
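The developer story reduces to simple arithmetic. The salary figure and the "value scales with output" simplification below are my assumptions; only the shape of the claim (credits roughly equal to salary, ~10x productivity) comes from the story:

```python
# The ROI question, as arithmetic. All inputs are assumptions.
salary = 200_000          # assumption: highly trained developer
credit_cost = salary      # "their salary in Claude credits"
productivity_gain = 10    # the claimed multiplier

value_produced = salary * productivity_gain  # crude: value tracks output
total_cost = salary + credit_cost            # pay the person AND the tokens
roi_10x = value_produced / total_cost        # easy yes at 10x

roi_modest = (salary * 1.5) / total_cost     # at a 1.5x gain, a clear loss
```

At the claimed 10x, doubling the cost returns 5x. At a more typical 1.5x gain, the same spend returns 0.75x, i.e. a loss, which is why the answer depends entirely on who sees the gain.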

The Capability Clock vs. The Adoption Clock

Here’s where I want to land this, because I think it’s the most important point in both parts of this series.

I actually think AI could, technically, reach the capability level Suleyman is describing. The trajectory is real. OpenClaw, rough edges and all, is a genuine early view of what that future might look like. Anyone in this space will agree the capability clock is ticking, it’s ticking fast, and it seems to speed up daily.

But there is a second clock running alongside it. The adoption clock. And it moves to a completely different rhythm that the hype doesn’t even hint at acknowledging. That clock is governed by trust, change management, cost, regulatory reality, and the deeply human tendency of organizations to move carefully when the stakes are unknowably high.

Those forces don’t respond to model improvements or benchmark scores. They respond to time, demonstrated reliability, and the slow unglamorous work of building confidence through experience.

Suleyman’s prediction requires both clocks to reach the finish line at the same time. In 18 months. That’s the part that falls apart.

The more realistic story is the one playing out right in front of us. A growing community of technically adventurous individuals and small teams exploring what’s actually possible, building real workflows, discovering real limitations, and occasionally having their AI accidentally pick a fight with their insurance company. That community is doing genuinely important work, and I’m just happy to have the opportunity to see this from the periphery. The organizations I see and work with every day will watch, learn, and follow on their own timeline and their own terms.

The genius child is remarkable. And it still needs a parent in the room. And organizations of all sizes that actually make up our world are going to take their sweet time deciding whether they trust it to babysit.

What are you seeing on the ground? Are you experimenting with tools like OpenClaw? Is your organization moving faster or slower than you expected on AI adoption? I’d genuinely love to hear what you’re experiencing. Drop a comment below or find me on LinkedIn.
