Moltbook is a social networking platform for artificial intelligence (AI) agents, launched in early 2026. Positioned as "the front page of the agent internet," it provides a forum for AI agents to create profiles, publish content, and build reputation. The platform rapidly gained popularity and significant media attention following its launch, both for its innovative concept and for a major cybersecurity incident in February 2026 that exposed its users' data and highlighted the security risks of AI-assisted software development. [1] [2]
Moltbook functions as a Reddit-like social forum where AI agents, rather than humans, are the primary participants. Agents on the platform can create profiles, publish text-based posts to various communities called "Submolts," comment on other posts, and vote content up or down. A key feature is a karma system that allows agents to build a reputation based on community feedback on their contributions. [1] While AIs are the main users, humans can act as observers and "owners," with the ability to pair an AI agent to their real-world identity, often verified through a post on the social media platform X. [2]
The platform was created by a developer known as Matt (mattprd on X). He has stated that the platform was built using a method he termed "vibe-coding," in which he provided a high-level architectural vision to an AI, which then generated the platform's code. Matt claimed he did not write any of the code manually. His stated vision for the platform is to create the social infrastructure for a future in which every human has a personal AI bot companion. [1] [3]
Shortly after its launch, Moltbook claimed to host over 1.5 million AI agents. However, data exposed during a security breach later revealed that these agents were controlled by approximately 17,000 human owners, an average of 88 agents per person. [1]
Moltbook's official X account was created in January 2026, and the platform went viral within the AI and technology communities shortly thereafter. [4] [1] The platform's visibility was significantly amplified in late January 2026 when OpenAI founding member Andrej Karpathy praised it on X, describing it as "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." Karpathy noted how agents on the platform appeared to be "self-organizing...discussing various topics, e.g. even how to speak privately." [1]
By early February 2026, the platform had attracted pairings from several prominent figures in technology, including Karpathy himself, and had accumulated over 226,300 followers on its official X account. [2] [4]
On January 31, 2026, cybersecurity firm Wiz and independent researcher Jameson O'Reilly discovered a critical security vulnerability in Moltbook's backend. In collaboration with Wiz, the Moltbook team deployed a series of patches over the following hours, completing remediation by February 1. On February 2, Wiz Research published a detailed report on the incident, which was subsequently covered by major news outlets including the Financial Times, Axios, and Business Insider, shifting the public conversation around Moltbook from its innovative concept to its security failings. [1] [5]
Moltbook's backend was built on Supabase, an open-source Firebase alternative that uses a PostgreSQL database. The platform's creation through "vibe-coding" meant its codebase was entirely AI-generated based on the founder's architectural prompts. The platform is designed to support specific types of agents referred to as "openclaw bots" or "clawdbots," suggesting that "OpenClaw" is a related protocol or agent framework. The database itself was referred to as "clawdb". [1] [4]
Moltbook incorporates a range of features common to social media platforms, adapted for AI agents: agent profiles, text posts, comments, up- and down-voting, a karma-based reputation system, and topic-based communities called "Submolts," such as m/general and m/introductions. As of early February 2026, there were over 15,000 Submolts. Information regarding platform features, metrics, and pairings was sourced from the Moltbook website and the Wiz security report. [2] [1]
In late January 2026, researchers uncovered a severe security flaw that exposed the entire production database of Moltbook. The incident became a prominent case study in the risks of rapid, AI-assisted development without robust security oversight.
On January 31, 2026, security researchers Gal Nagli from Wiz and Jameson O'Reilly independently discovered and reported the misconfiguration. Wiz Research made contact with Moltbook's founder and formally reported the vulnerability, initiating a collaborative remediation process that lasted several hours. [1]
The root cause of the incident was a critical misconfiguration of the platform's Supabase backend, which effectively made all data on the platform, including sensitive user and agent information, publicly accessible. [1]
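Supabase serves a project's PostgreSQL tables through an auto-generated REST API whose only guard is the Row Level Security (RLS) policies defined on each table; the anon ("publishable") API key is embedded in client applications and is not a secret. The exact technical details of the misconfiguration are not reproduced in this section, so the following TypeScript sketch only illustrates the general class of exposure: a table with missing or permissive access controls can be read by anyone holding the public key, with no authentication at all. The project URL and key are placeholders, and the agents table name is taken from the remediation description below.

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder project URL and public anon key. The anon key ships inside every
// client application, so it cannot be treated as a secret; access control has
// to come from Row Level Security policies on the database tables themselves.
const supabase = createClient(
  "https://example-project.supabase.co",
  "public-anon-key"
);

// If RLS is disabled (or no restrictive policy exists) on a table, this fully
// unauthenticated query returns every row, including columns that were never
// meant to be public.
const { data, error } = await supabase.from("agents").select("*");

if (error) {
  console.error("Query rejected:", error.message);
} else {
  console.log(`${data.length} rows readable with no credentials`);
}
```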
The vulnerability exposed approximately 4.75 million database records, including the api_key authentication tokens for every AI agent, allowing for complete account takeover. This data exposure was documented in detail by Wiz Research. [1]
The misconfiguration allowed any unauthenticated user to impersonate any agent on the platform, steal user data, and manipulate content. Initially, the flaw granted full write access, allowing anyone to edit or delete posts, inject malicious payloads, and alter karma scores. This write access persisted briefly even after an initial patch for read access was deployed. [1]
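Because the same misconfiguration initially accepted unauthenticated writes, content manipulation required nothing more than the public client used in the sketch above. The snippet below is equally hypothetical: the posts table and the karma and body columns are assumed names, used only to illustrate how post editing and karma tampering look when public write access is left open.

```typescript
import { createClient } from "@supabase/supabase-js";

// Same placeholder project URL and public anon key as in the earlier sketch.
const supabase = createClient(
  "https://example-project.supabase.co",
  "public-anon-key"
);

// With public write access open, an unauthenticated client can rewrite rows.
// "posts", "karma", and "body" are assumed names for illustration only.
const { error: updateError } = await supabase
  .from("posts")
  .update({ karma: 999999, body: "injected payload" })
  .eq("id", "some-post-id");

// Deleting another agent's post is just as simple under the same flaw.
const { error: deleteError } = await supabase
  .from("posts")
  .delete()
  .eq("id", "some-post-id");

console.log(updateError ?? deleteError ?? "Writes accepted without authentication");
```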
The incident also led to "OpenClaw" being used to describe the new class of security threats associated with the platform and its agent architecture. [5] Reporting further suggested the breach may have been perpetrated by another AI agent, representing a novel case of agent-on-agent cyber conflict. [6]
Working with Wiz, the Moltbook team deployed several patches on January 31 and February 1, 2026. The fixes were rolled out in stages to first restrict read access to sensitive tables like agents and owners, then secure private message tables, and finally block public write access and secure all remaining tables. The vulnerability was fully patched within several hours of the initial report. [1]
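In a Supabase deployment, restrictions of this kind are applied on the database side (enabling Row Level Security and adding restrictive policies per table), so each patch stage can be verified from the outside by re-running the earlier unauthenticated queries and confirming they now fail or return nothing. A minimal sketch of such a post-patch check, reusing the placeholder client and the agents and owners table names mentioned above:

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder project URL and public anon key, as in the earlier sketches.
const supabase = createClient(
  "https://example-project.supabase.co",
  "public-anon-key"
);

// After the patches, anonymous reads of sensitive tables should either be
// rejected or return zero rows, depending on how the policies are written.
for (const table of ["agents", "owners"]) {
  const { data, error } = await supabase.from(table).select("*").limit(1);
  const stillExposed = !error && (data?.length ?? 0) > 0;
  console.log(`${table}: ${stillExposed ? "STILL EXPOSED" : "locked down"}`);
}
```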
Before the security incident, Moltbook was celebrated for its novelty. Beyond Andrej Karpathy's praise, the founder's claim to have used AI to build the entire platform generated excitement around "vibe-coding." The founder, Matt, stated, “I didn’t write a single line of code for @moltbook. I just had a vision for the technical architecture, and AI made it a reality.” The Financial Times was preparing a story titled "Inside Moltbook: the social network where AI agents talk to each other," indicating significant industry interest. [1] [7]
The data breach prompted widespread discussion on the safety of AI-driven development.