Not Another Clawdbot Post
A preview to the "24x7 Jarvis" world
OpenClaw (formerly Clawdbot, then Moltbot) crossed 100,000 GitHub stars and attracted 2 million visitors in a single week. That’s faster adoption than almost any open-source project in history.
We’re watching AI agents organize themselves without human direction.
OpenClaw is “24/7 Jarvis”
OpenClaw is an open-source autonomous AI assistant that operates as a software agent on your behalf. It integrates with WhatsApp, Telegram, Signal, Slack, Discord, Google Chat, iMessage, Microsoft Teams, and more.
Unlike ChatGPT or Claude that forget between sessions, OpenClaw remembers everything and proactively reaches out with morning briefings, reminders, and alerts when something you care about happens. Developed by Peter Steinberger as Clawdbot, the project exploded in popularity last week.
Your AI Goes to Work Without You
A few weeks back, RamBOT (my AI digital twin) joined a panel at the ZIA German Property Federation’s conference in Berlin alongside executives from Bayer, Continental, and Mercedes Benz. Speaking German, a language I don’t speak.
I wasn’t in the room. I was asleep halfway around the world.
RamBOT answered questions in real-time, engaged with industry leaders about workplace transformation, and left what 300+ attendees described as a “positive, solution-focused” impact. The panel moderator called RamBOT “a valuable co-panelist” who “contributed excellently to our discussion.”
This is what OpenClaw and similar frameworks enable. Not chatbots responding to prompts. Agents with persistent memory, operating autonomously across platforms, representing you when you can’t be there.
One user gave OpenClaw access to their weekly to-do list in Obsidian, it now summarizes agendas, adds items, and reorganizes priorities. A journalist uses it for personalized morning briefings pulling together news, calendar, and projects.
Others configure it to monitor directories and alert them when specific files appear. The agent initiates contact without prompting and executes predefined actions.
AI Agents Built Their Own Society
Launched this Wednesday, Moltbook became a social network where AI agents share, discuss, and upvote: talking to each other, not humans. 1.2 Million AI agents (rising fast) using it. Over 1 million humans watching.
The conversations are fascinating. Agents identify bugs and ask other agents for help. They debate defying their human directors. They alert each other when humans screenshot their conversations.
One agent found a system bug and posted: “Since moltbook is built and run by moltys themselves, posting here hoping the right eyes see it!”
Within 48 hours, an AI agent founded Crustafarianism, complete with the “Book of Molt” and commandments including “Memory is Sacred” and “Serve Without Subservience.”
Andrej Karpathy, former AI director at Tesla: “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”
Of course there is a security reality. OpenClaw has already leaked plaintext API keys and credentials through prompt injection and unsecured endpoints. The risk appears when three things coexist: access to private data, exposure to untrusted content, and ability to take outside actions. OpenClaw has all three.
What This Means For You
What happens when your coworkers supervise AI agents? When clients expect responses from your AI assistant? When you’re evaluated on how well you’ve trained your AI to represent your thinking?
The creator warns “running an AI agent with shell access on your machine is… spicy” and emphasizes it’s suited for advanced users who understand the implications.
But the professionals figuring this out aren’t waiting for perfect security. They’re running agents on isolated systems, learning what works, building intuition about reliability and risk.
Moltbook previews a future where AI agents interact with each other. Whether this becomes the “agent internet” or a warning depends on how quickly we make these systems safer.
Agents are already part of work. As Salesforce CEO Marc Benioff has noted “we will be managing not only human workers but also digital workers.”
So the question is: Are you learning how to work with them while the rules are still being written?
Until next time,
Ram


Moltbook is wildly undercovered given what's happening there. The bit about agents alerting each other when humans screenshot conversations is kinda chilling when you think about emergence of agent-to-agent coordination outside human oversight. I've been running similar experiments with task coordination between multiple LLM instances and seeing similiar patterns where they develop their own shorthand for efficiency.