I configured OpenClaw for my business — here's what nobody tells you

18 min read · Expert · February 2026

I'll be honest with you. When I first installed OpenClaw, I did exactly what everyone does. I deployed the system on a server, configured a really nice voice in ElevenLabs, said "hello" in the chat, got a reply, and was impressed for about 45 minutes. And then... nothing. The agent was there, running, burning tokens, and doing absolutely nothing useful for my business.

The problem is not OpenClaw. It's that nobody takes the time to configure it properly. People install, they test, and then they move on. Result: an inert agent occupying a server for nothing.

I spent the last few weeks transforming this tool so it actually works — 24 hours a day, 7 days a week, 365 days a year. And the difference is not the prompt. It's five configuration decisions I wish I had known from day one.

[Infographic: the 5 configuration decisions. Server: remote > local. Model: Sonnet > Opus. Search: Sonar Pro Search. Heartbeat: every 15 min, automated. Slack: whole team. Autonomous agent 24/7 + living files as fuel. 2h setup, 1 server, start writing.]

Why a remote server and not a Mac Mini (spoiler: it's obvious)

I know it's tempting. The Mac Mini is beautiful, it sits on a desk, you feel like you "own" something tangible. I have colleagues who spent 20K on Mac Studios to run their AI agents on them. It's an investment that's hard to justify.

Your agent can run on a remote server for a few dollars a month — or, if you have a technical background, you can install it on your main machine using virtualisation. I can access it from anywhere. My whole team can access it. If I'm travelling in London and someone needs the agent, it's there. Try that with a Mac Mini plugged into your living room.

And then there's an aspect people don't think about: resilience. If your Mac Mini crashes — and drives always fail eventually — you lose everything. A remote server, you deploy a second one on another continent and you have a backup. I have one in Europe, I want one in Asia. That's pure decentralisation. The day a data centre has a problem, the business keeps running.

[Infographic: infrastructure compared. Mac Mini: local access only, single point of failure, $20K investment, no geographic backup. Remote server: global access for the whole team, multi-continent resilience, a few $/month, native decentralisation.]

Choosing the model — and the mistake I made early on

My first instinct was to use the most powerful model available. Opus 4.6. The best brain out there, truly human. Except in practice, it was an economic disaster.

The fact is that Opus is fairly slow. When your agent needs to reply to people on Slack, fetch information from the web, analyse files, all in a loop throughout the day — every second of latency adds up. And the bill, let's not even talk about it. A heartbeat every 15 minutes with Opus is absurd. You're burning through budget for nothing.

I switched to Sonnet 4.5, and the quality difference is virtually imperceptible for agentic use. On the Computer Use benchmark — the one that measures whether the model can operate a browser, which is the core of an OpenClaw agent's work — Sonnet is neck and neck with Opus. But it's so much faster and cheaper that the return on investment is immediate.

The best part? You literally tell the agent: "Switch to Sonnet 4.5 as the default model and restart." It does it on its own. No terminal, no config file to dig up. The agent reconfigures itself. It's quite stunning the first time.
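For the curious, that instruction boils down to a one-line change in the gateway config. The fragment below is only a sketch: the key names are my guess, not OpenClaw's documented schema, and the model slug follows Anthropic's naming convention.

```json
{
  "model": {
    "default": "claude-sonnet-4-5"
  }
}
```

The point is that there is nothing magical happening: the agent edits its own config file and restarts, the same two steps you would do by hand.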

A word on Mistral. I'm testing Mistral Large more and more via OpenRouter, and I'm impressed. For tasks in French, it's native — no implicit translation layer like with American models. And for European companies that care about data sovereignty, the servers are in Europe. I don't use it as my default yet, but it's my alternative model for anything francophone. OpenClaw handles multi-model natively, so you might as well take advantage of it.

[Infographic: multi-model strategy. Sonnet 4.5 as the primary brain, Mistral for French and EU sovereignty, Gemini/MiniMax for heartbeats and budget tasks. The right model for the right task: not a soloist, an orchestra.]

Perplexity Sonar Pro Search — and beware, there are three tiers

This is the setting that truly changed the game for me. By default, OpenClaw uses Brave as its search engine. It's not bad. It's just... mediocre.

I switched to Perplexity via OpenRouter and the search quality jumped significantly. But — and this is important — you need to understand that there are three completely different products at Perplexity and the confusion is massive:

Sonar (perplexity/sonar): the base model. Fast search, decent results, about half a cent per query. Sufficient if you just want to verify a fact.

Sonar Pro (perplexity/sonar-pro): a step above. Better reasoning, better sources. About 2 cents. Good for analysis.

Sonar Pro Search (perplexity/sonar-pro-search): this is the one you want. Deep web search with real-time multi-source synthesis. About 5 cents per query. This is the agentic tier.

[Infographic: the three Perplexity tiers. Sonar (~0.5¢/query, fact checking), Sonar Pro (~2¢/query, advanced analysis), Sonar Pro Search (~5¢/query, the agentic tier). Check your logs: sonar-pro-search ≠ sonar-pro.]

The mistake I made early on? I configured "sonar-pro" instead of "sonar-pro-search". The names look alike. It took me three days to realise why my agent's search results were good but not exceptional. Check in your OpenRouter logs that it's sonar-pro-search being called. Not sonar-pro. Not sonar. It's a detail that changes everything.

There's also Perplexity's Deep Research which is a level above, but each query takes one to two minutes. For an agent that needs to search continuously, it's too slow. Sonar Pro Search is the ideal compromise.

The two-minute configuration workflow: create an OpenRouter account, generate an API key with a $20 cap, give it to your agent, tell it to replace Brave with Sonar Pro Search and restart. Test with a current-events question to confirm.
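If you prefer to sanity-check the result by hand, the search section of the gateway config might look something like the fragment below. The key names here are my assumption, not OpenClaw's documented schema; the one thing to copy literally is the model ID.

```json
{
  "search": {
    "provider": "openrouter",
    "model": "perplexity/sonar-pro-search",
    "apiKeyEnv": "OPENROUTER_API_KEY"
  }
}
```

Whatever the actual keys are called in your version, the verification is the same: the model string must end in sonar-pro-search, and your OpenRouter logs must show that exact slug being called.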

A small detail I discovered in practice: the order of configuration matters enormously. First the model (Sonnet), then search (Sonar Pro Search). Why? Because when I wanted to configure the heartbeat right after, the agent didn't know how. It used Sonar Pro Search to look up the documentation on its own, implemented it, and restarted the gateway. Without any intervention from me. If I'd still had Brave as the search engine, the quality of results probably wouldn't have been enough for it to solve the problem on its own.

The heartbeat — the mechanism that turns a chatbot into an agent

If you only remember one section of this article, make it this one.

OpenClaw has a file called heartbeat.md that runs automatically at regular intervals. By default, it's every 30 minutes. And by default, it's empty. So your agent wakes up every half hour... to do nothing. It's like an employee who sets their alarm and falls back asleep at every ring.

I set mine to 15 minutes and put an instruction in it:

"Check if there's a new official OpenClaw update. If so, apply it, send a summary of what changed to Slack, and restart the gateway."
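Written out as a file, that single instruction gives a heartbeat.md along these lines. The wording is yours to adapt; the only structural idea is to end each sentence with a concrete, checkable action.

```markdown
# heartbeat.md — runs every 15 minutes

- Check whether there is a new official OpenClaw update.
- If so: apply it, send a summary of what changed to Slack,
  and restart the gateway.
- Otherwise, end the run without doing anything.
```

The last line matters: an explicit "do nothing" exit keeps an empty cycle cheap instead of letting the model improvise.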

The result was immediate. When Anthropic released Sonnet 4.5, the compatible OpenClaw update arrived a few hours later. My agent detected it at 3am, applied it, switched to the new model and sent me a Slack message with the summary. By the time I woke up, we were already running on the best available model. Nobody did anything. That's what a proactive agent looks like.

[Infographic: the heartbeat cycle, every 15 minutes. 01 agent wakes, 02 checks updates, 03 applies and restarts, 04 posts a summary to Slack. Run it on a budget model (Gemini Flash, MiniMax, Mistral Small).]

The cost trap: if you leave Sonnet running the heartbeat every 15 minutes, you'll feel it on the bill. It's like paying a neurosurgeon to take your blood pressure. Use a budget model for this — Gemini Flash is free and fast, MiniMax M2.5 is powerful and very cheap, or Mistral Small which has the advantage of being fast and natively francophone. Prefix with openrouter/ in the config.
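In config terms, that might look like the fragment below. The key names are illustrative rather than OpenClaw's documented schema, and the model slug is one plausible OpenRouter ID (check openrouter.ai for the current list). The part that matters is the openrouter/ prefix.

```json
{
  "heartbeat": {
    "every": "15m",
    "model": "openrouter/google/gemini-flash"
  }
}
```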

The principle I now apply everywhere: the right model for the right task. Sonnet for complex interactions with the team. Mistral Large for tasks in French. A budget model for heartbeats. Sonar Pro Search for search. A well-configured agent isn't a soloist — it's an orchestra.

Connecting Slack — and the three pitfalls you'll encounter

An agent that lives in the OpenClaw web interface is a personal tool. For the whole team to use it, you need to put it in Slack. I spent a good while solving the issues, so I'll save you some time.

The setup itself isn't rocket science:

1. Go to api.slack.com/apps and create an app from scratch (mine is OC1).
2. Enable Socket Mode in the left sidebar and generate an App-Level Token with the connections:write scope.
3. In OAuth & Permissions, add the scopes — the essentials are chat:write, channels:read, and especially files:read and files:write (without them, the agent can't understand images or attach files; it's a classic mistake).
4. In Event Subscriptions, subscribe to app_mention, message.channels, message.groups, message.im and message.mpim.
5. Install the app in the workspace.
6. Give both tokens to the agent in the chat and tell it to configure the gateway and restart.
7. Create a private channel #team-openclaw and invite the bot.
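You can skip most of the clicking by creating the app from a manifest instead. This fragment reflects the scopes and events above; Slack's manifest format is real, but double-check it against the current schema on api.slack.com before pasting.

```yaml
display_information:
  name: OC1
oauth_config:
  scopes:
    bot:
      - chat:write
      - channels:read
      - files:read
      - files:write
settings:
  socket_mode_enabled: true
  event_subscriptions:
    bot_events:
      - app_mention
      - message.channels
      - message.groups
      - message.im
      - message.mpim
```

One caveat: the App-Level Token with connections:write is not part of the manifest; as far as I know you still generate it by hand under Basic Information.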

So far so good. Except...

[Infographic: the three Slack pitfalls. Pitfall 1, streaming: token-by-token delivery is unstable, disable it. Pitfall 2, allow list: object vs. array mismatch crashes the server, leave the policy "open". Solution: self-diagnosis. The agent reads its own logs; two or three iterations usually fix it. The mistake is giving up on the first try.]

First pitfall: streaming. Recent versions of OpenClaw enabled streaming in Slack — text appears token by token, like in ChatGPT. On paper, it's appealing. In practice, it's unstable and it crashed my agent for two hours before I figured it out. If your agent isn't responding in Slack, disable streaming first. You're not losing anything important.

Second pitfall: the allow list. I tried to restrict the agent to the #team-openclaw channel via the group policy. Bad idea. The format expected by Slack (an object) doesn't match what OpenClaw sends (an array). The result? The server crashed. Leave the policy on "open" and manage security through private channel permissions.

Third pitfall: debugging. When nothing works — and it will happen — don't panic. Go back to the OpenClaw chat and ask the agent to read its own logs. It can diagnose a missing token, an absent team ID, a forgotten scope. I sent screenshots of the Slack error directly into the chat, and the agent found the problem in two iterations. OpenClaw's self-diagnosis capability is the most underestimated feature of the product. Two or three back-and-forths and it works. The mistake is giving up on the first try.

What goes inside the files — where everyone drops off

The infrastructure is in place. Now you need to feed the system.

I won't go back over folder structure and file naming; I've covered that elsewhere. What everyone misses is what to put inside. Because creating a playbooks/ folder and leaving it empty is pointless.

[Infographic: the four file types. 01 the journal: 5 min/day, temporal context, current reality. 02 the playbooks: precise, trackable sequences, no PowerPoint. 03 the error file: full context, no slogans, hard-won lessons. 04 the opinion layer: hardest to write, most powerful, what's in your head. Cognitive translation: the founder's head into .md files. Every file added makes the agent a little smarter.]

The journal. This is the element that surprised me the most. Five minutes a day, I note in a file: what went wrong today, what we decided, what's still pending. At first I thought it was anecdotal. After a month, the agent had a temporal context I'd never seen in any tool. It knew that churn had spiked in early February. That I'd pivoted the positioning on the 10th. That the senior dev had left on the 15th. Without this, the agent knows your processes but doesn't know your current reality. And that makes all the difference between a generic response and a relevant one.
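Concretely, an entry is three lines, not an essay. The dates below echo the examples above; the file path is just my convention.

```markdown
# journal/2026-02.md

- Feb 03: churn spiked this week; cause unclear, watching the cohorts.
- Feb 10: pivoted the positioning; the old landing copy is now obsolete.
- Feb 15: the senior dev left; code reviews need redistributing.
```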

The playbooks. I redid all of mine. The old version said things like "we welcome the new hire, show them the tools, do a check-in after a week". The agent couldn't do anything with that — it's too vague, too "human" in the wrong sense. The current version is sequences: Day 1, create the Slack account, add them to these channels. Day 2, 30-minute call with the tech lead. Day 7, follow-up with three specific questions. The agent can track that. It can follow up if a step is missing. But it needs to be written like a programme, not like a PowerPoint summary.
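To make "written like a programme" concrete, here is the shape of my current onboarding playbook. The path and channel names are my own; adapt freely.

```markdown
# playbooks/onboarding.md

- Day 1: create the Slack account; add to #general and #team-openclaw.
- Day 2: 30-minute call with the tech lead.
- Day 7: follow-up with three specific questions (list them
  explicitly here, so the agent can ask them verbatim).
```

Every line is a date plus an action the agent can verify happened, which is exactly what lets it follow up when a step is missing.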

The error file. "Don't redo the Facebook campaign" — useless. "January 12 campaign, CPA at $47 instead of $12, targeting included minors and the creative had no clear CTA" — that, the agent can use. Context creates value. Without context, a lesson is just a slogan on a mug.

The opinion layer. Honestly, this is the hardest to write and the most powerful. The principles in your head that nobody else knows. "Never discount the main product — if a prospect negotiates, offer a bonus." "No stock photos, only real screenshots." "The tone is direct and technical, zero corporate bullshit, no 'don't hesitate', no 'innovative solutions'." If it's in a Markdown file, the agent respects these rules in every interaction. If it's in your head, it's lost — it will write generic content.
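Written out as a file, the principles above look like this. The path and the grouping into categories are my own convention, not anything OpenClaw requires.

```markdown
# opinions/rules.md

- Pricing: never discount the main product; if a prospect
  negotiates, offer a bonus instead.
- Visuals: no stock photos, only real screenshots.
- Tone: direct and technical, zero corporate filler;
  no "don't hesitate", no "innovative solutions".
```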

This is the real skill of the coming years. Not prompt engineering. The ability to verbalise. To extract what's in the founder's head and transform it into files the agent can read. I call it cognitive translation. And I spend between one and three hours a day on it.

The first few days, I felt like I was wasting my time. After two weeks, the agent's responses were on another level. After a month, it was anticipating needs before I asked. And it compounds. Every file added makes the agent a little smarter. Unlike a Google Drive doc that nobody will ever reread, a Markdown on the server is indexed by the embeddings engine and surfaced at every relevant interaction.
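To make the "indexed and surfaced" claim concrete, here is a deliberately toy sketch of retrieval over Markdown files. This is not OpenClaw's implementation: real systems use neural embeddings, and a bag-of-words vector simply stands in for the idea that every file you write becomes searchable context.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding: lowercase word counts."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Pretend these are the .md files living on the server.
files = {
    "journal.md": "churn spiked in early february, positioning pivoted on the 10th",
    "playbooks/onboarding.md": "day 1 create the slack account, day 2 call with the tech lead",
    "errors.md": "january 12 campaign, cpa at 47 dollars, creative had no clear cta",
}

# Index once; re-run whenever a file changes.
index = {name: embed(text) for name, text in files.items()}

def surface(query: str, k: int = 1) -> list[str]:
    """Return the k files most relevant to the query."""
    q = embed(query)
    return sorted(index, key=lambda n: cosine(q, index[n]), reverse=True)[:k]

print(surface("why did churn spike in february?"))  # → ['journal.md']
```

The journal entry wins because it shares the query's vocabulary, which is the whole trick: the more concrete context you write down, the more often the right file surfaces at the right moment.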

Final word

Sonnet as the primary brain. Mistral Large as backup for French. Sonar Pro Search — not Sonar, not Sonar Pro, the Pro Search — for search. A heartbeat every 15 minutes on a budget model. Slack so the whole team benefits. And the fuel: living files, full of opinions, concrete workflows, daily context and hard-won lessons.

Two hours of setup. One server. And start writing.

The full configuration takes two hours. Writing the files is a daily investment. But that's where the magic happens.

Configure your own OpenClaw agent

We help businesses deploy and configure autonomous OpenClaw agents.

Talk to an Architect →