The Great AI Agent Split: Why Your Coding Assistant Forgets Everything (And What's Being Built to Fix It)
Developers are tired of teaching their AI assistants the same lessons repeatedly. Every time you close a coding session with Claude or Cursor, the AI forgets the context you spent hours establishing: your codebase quirks, naming conventions, deployment pipelines, and undocumented database schemas. Two competing open-source projects are now attacking this persistent friction point from opposite directions, reshaping how AI assistants could work.
Why Do AI Coding Assistants Lose Context Between Sessions?
The problem is architectural. Most AI coding tools operate like stateless containers that reset after each use. They are fast, disposable, and context-free. One developer documented 59 context compactions across 26 days of daily Claude Code use before building his own persistence layer from scratch. The workarounds are manual and exhausting: maintaining CLAUDE.md files, building memory directories, and creating elaborate markdown-based "brain" systems.
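These markdown "brain" workarounds reduce to a simple pattern: append notes during a session, then reload them at the start of the next one. A minimal sketch of that pattern, with a hypothetical file layout and helper names (not any specific tool's actual format):

```python
from datetime import datetime
from pathlib import Path

# Hypothetical layout: one markdown file accumulating session notes.
MEMORY_FILE = Path("memory") / "brain.md"

def save_note(note: str) -> None:
    """Append a timestamped note so the next session can reload it."""
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().isoformat(timespec="seconds")
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- [{stamp}] {note}\n")

def load_context() -> str:
    """Read accumulated notes to prepend to a new session's prompt."""
    if not MEMORY_FILE.exists():
        return ""
    return MEMORY_FILE.read_text(encoding="utf-8")

save_note("deploy pipeline: run `make release`, never push to main directly")
print(load_context())
```

The brittleness is obvious from the sketch: nothing prunes, ranks, or summarizes the notes, so the file grows until it no longer fits in a context window.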
This has created what researchers describe as two distinct species of AI assistants. Session-native tools like Claude Code, Codex, and Cursor are powerful within a single session but carry limited context between sessions. The second species lives permanently on your infrastructure, runs while you sleep, reaches you across messaging platforms, and improves over time. OpenClaw and Hermes Agent are the two most prominent examples of this second species.
What Is OpenClaw and Why Did It Explode So Quickly?
OpenClaw started as a weekend project by Austrian developer Peter Steinberger in late 2025. Originally called Clawdbot, it became one of the fastest-growing open-source projects on GitHub, surpassing 345,000 stars as of early April 2026. In February 2026, Steinberger announced he was joining OpenAI and that OpenClaw would move to an independent foundation.
The explosive growth was not accidental. OpenClaw solved a problem developers had been waiting for someone to solve: a self-hosted AI agent that connects to the messaging apps they already use. The platform supports integrations with more than 50 messaging services and works with every major model provider, including local models through Ollama.
Think of OpenClaw as Android for AI agents. It has the scale, the third-party ecosystem, and the fragmentation that comparison implies. The ecosystem grew to include ClawHub, a public skills registry with thousands of community-built skills, multiple managed hosting providers, and companion apps for macOS and iOS. Cross-channel persistence is the feature that drove OpenClaw's viral adoption more than any other. A developer can start a task on their workstation, receive a completion notification on Telegram during dinner, and send follow-up instructions from their phone.
How to Evaluate Self-Hosted AI Agent Security
- Supply Chain Attacks: Within weeks of OpenClaw's explosive growth, a coordinated supply chain attack surfaced. Koi Security audited all 2,857 skills on ClawHub at the time and found 341 malicious entries, with 335 traced to a single campaign named ClawHavoc.
- Exposed Instances: SecurityScorecard reported tens of thousands of publicly exposed OpenClaw instances across the internet, indicating widespread deployment without proper security hardening.
- Critical Vulnerabilities: CVE-2026-25253 (CVSS 8.8) involved unsafe automatic WebSocket connection behavior that could expose authentication tokens, contributing to one-click compromise scenarios described by multiple security researchers.
- Marketplace Trust Model: The ClawHub marketplace operated like npm in its early days, requiring only a one-week-old GitHub account to publish a skill, with no automated static analysis, code review, or signing requirement.
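One standard mitigation for the class of flaw behind CVE-2026-25253 is strict origin validation before accepting any WebSocket upgrade, so tokens are never exchanged with an unverified peer. A minimal illustrative sketch (the allowlist, header shape, and helper name are assumptions, not OpenClaw's actual code):

```python
from urllib.parse import urlparse

# Assumption for illustration: the operator explicitly allowlists the
# origins that may open a WebSocket connection to the agent runtime.
ALLOWED_ORIGINS = {"https://agent.example.internal"}

def should_accept_upgrade(headers: dict[str, str]) -> bool:
    """Decide whether to accept a WebSocket upgrade request.

    Rejects missing, non-HTTPS, or non-allowlisted Origin headers
    outright, instead of connecting automatically.
    """
    origin = headers.get("Origin", "")
    if urlparse(origin).scheme != "https":
        return False
    return origin in ALLOWED_ORIGINS

assert should_accept_upgrade({"Origin": "https://agent.example.internal"})
assert not should_accept_upgrade({"Origin": "https://evil.example.com"})
assert not should_accept_upgrade({})  # missing Origin header is rejected
```

Origin checks alone do not stop a malicious local client, which is why the hardening guidance below also targets where the runtime is deployed.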
Microsoft advised treating the runtime as potentially influenceable by untrusted input and recommended against running it on standard personal or enterprise workstations. Cisco called personal AI agents like OpenClaw "a security nightmare." OpenClaw has since partnered with VirusTotal to scan uploaded skills and added security guidance for operators, but the trust model remains a work in progress.
What Makes Hermes Agent Different From OpenClaw?
Hermes Agent launched in February 2026 from Nous Research, the lab behind the Hermes, Nomos, and Psyche model families. At roughly 22,000 GitHub stars as of early April 2026, it is a fraction of OpenClaw's size. The community skill library is smaller and brand recognition is lower. What makes Hermes Agent worth watching is not its current scale but the architecture underneath.
Where OpenClaw focused on the breadth of integration, Hermes Agent focused on the depth of learning. The project's tagline, "the agent that grows with you," describes an architecture built around a closed learning loop. Three components make this loop work:
- Persistent memory: full-text search over all past sessions, stored in SQLite and combined with LLM-powered summarization.
- Autonomous skill creation: after completing a complex task, the agent records procedures, pitfalls, and verification steps.
- Self-training loop: integration with Atropos, Nous Research's reinforcement learning framework, to generate batch trajectories and train agent behavior.
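The persistent-memory component can be approximated with SQLite's built-in FTS5 full-text index, which ships with standard Python builds. A minimal sketch, assuming an illustrative schema (not Hermes Agent's actual implementation) and omitting the LLM summarization layer:

```python
import sqlite3

# Illustrative schema: one full-text-indexed row per past session.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE sessions USING fts5(started, transcript)")
db.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [
        ("2026-03-01", "fixed flaky deploy: staging bucket needs us-east-1"),
        ("2026-03-02", "renamed UserSvc to AccountService across repo"),
    ],
)

def recall(query: str) -> list[str]:
    """Return transcripts of past sessions matching the query, best first."""
    rows = db.execute(
        "SELECT transcript FROM sessions WHERE sessions MATCH ? ORDER BY rank",
        (query,),
    )
    return [r[0] for r in rows]

print(recall("deploy"))  # surfaces the staging-bucket lesson
```

Full-text recall like this gives the agent raw material; the summarization pass then condenses matching sessions into something small enough to fit in a prompt.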
The practical implication is significant. Developers can generate thousands of tool-calling trajectories in parallel, export them, and use them to fine-tune smaller, cheaper models. This means you could train a specialized AI assistant on your specific workflows without relying on expensive cloud-based models.
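The export step reduces to: run many rollouts in parallel, serialize each as one JSON line, and feed the resulting file to a fine-tuning pipeline. A hedged sketch with a stubbed rollout; Atropos's real API and trajectory format will differ:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def rollout(task: str) -> dict:
    """Stub: a real rollout would run the agent and record every tool call."""
    return {"task": task, "steps": [{"tool": "shell", "args": "echo done"}]}

# Run rollouts in parallel; real batches would be thousands of tasks.
tasks = [f"task-{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    trajectories = list(pool.map(rollout, tasks))

# JSONL is the common interchange format for fine-tuning datasets:
# one self-contained trajectory per line.
with open("trajectories.jsonl", "w", encoding="utf-8") as f:
    for traj in trajectories:
        f.write(json.dumps(traj) + "\n")
```

The resulting JSONL can then be filtered (keep only successful trajectories) before training, which is where most of the dataset quality comes from.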
OpenClaw proved that developers want agents that outlive their browser tabs. The security incidents proved that the infrastructure for those agents is nowhere near ready for production. The next phase of AI agent development will likely depend on whether projects can solve the security and trust challenges that plagued OpenClaw's rapid scaling while maintaining the learning capabilities that make Hermes Agent compelling.