Macrocosmos API and the Data Universe platform form a developer‑oriented data infrastructure offering that provides programmatic access to curated social datasets and related tooling, with explicit support for X (Twitter) and Reddit. The platform is organized around a unified API and a catalog‑like “Data Universe” that enables on‑demand queries, scheduled dataset jobs, and streaming delivery for analytics and machine‑learning workflows. [1] [2]
Macrocosmos positions its API as “the bridge to decentralised AI services,” centering its product design on rapid collection of high‑signal social data and “clean outputs” intended to flow directly into production pipelines via a single developer key. The primary data product, the Data Universe, offers three access modes: real‑time retrieval (On‑Demand API), scheduled or large‑scale batch datasets (dataset jobs), and continuous monitoring (streaming), with X (Twitter) and Reddit as the emphasized sources. The documentation presents a subnet architecture that includes Subnet 13 (Data Universe) alongside components such as Macrocosmos MCP and OpenClaw Skills; together these frame the developer experience for collection, enrichment, and downstream use in market intelligence, brand monitoring, trend analysis, lead research, and AI training. [1]
Macrocosmos also publishes an IOTA‑named subproject—described as an “Incentivised Orchestrated Training Architecture”—that invites GPU owners to contribute compute via a downloadable “Train at Home” client, visualize activity through an IOTA Dashboard, and receive automatic rewards through a wallet‑connected workflow. The IOTA portal references miners and validators using language familiar from decentralized machine‑learning networks, while linking to documentation and a whitepaper for technical detail. Public counters shown on the IOTA dashboard include placeholders and zeroed metrics, suggesting an early or evolving public state at the time of observation. [2]
The Macrocosmos API provides a programmatic interface to query, collect, and stream social data from curated sources. The developer guide highlights a single‑key integration model intended to reduce friction between development and production, and describes three workflows: On‑Demand API requests for immediate retrieval, Dataset jobs for scheduled or large‑scale batch outputs, and Streaming for continuous delivery of matched content. [1]
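To make the On‑Demand workflow concrete, the sketch below assembles a request body for an immediate‑retrieval query. This is a minimal sketch under stated assumptions: the field names (`source`, `keywords`, `start_date`, `end_date`, `limit`) and the placeholder base URL are hypothetical illustrations, not the documented Macrocosmos schema; consult the official technical documentation for the real endpoint and parameters.

```python
# Hypothetical On-Demand request payload. Endpoint, field names, and
# header conventions are assumptions for illustration only.
import json

API_BASE = "https://api.example.invalid"  # placeholder, not a real endpoint


def build_on_demand_query(source, keywords, start, end, limit=100):
    """Assemble an illustrative query body for an on-demand request.

    All field names here are assumed, not taken from Macrocosmos docs.
    """
    return {
        "source": source,      # e.g. "x" or "reddit"
        "keywords": keywords,  # list of search terms
        "start_date": start,   # ISO-8601 timestamps
        "end_date": end,
        "limit": limit,
    }


payload = build_on_demand_query(
    "x", ["brand monitoring"],
    "2024-01-01T00:00:00Z", "2024-01-02T00:00:00Z",
)
print(json.dumps(payload, indent=2))
```

In practice, a payload like this would be sent with the single developer key in an authorization header; the exact header name and authentication scheme are part of the documentation referenced above.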
The Data Universe is Macrocosmos’s curated dataset catalog and access environment. It explicitly supports X (Twitter) and Reddit, with workflows that allow teams to request focused slices (e.g., by keywords, handles, time ranges), assemble larger corpus‑level datasets through job scheduling, and operate streaming rules for ongoing monitoring. The guide characterizes the data as high‑signal, high‑volume, and cost‑efficient, with a goal of keeping friction low as users transition from exploration to deployment. Documentation also references a marketplace model and a Data Collection Tool signup path, indicating that curated collections and tooling are part of the broader ecosystem. [1]
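The streaming‑rule workflow described above can be pictured as a simple keyword‑and‑source matcher applied to incoming posts. The sketch below is a local illustration of that idea only; the `StreamRule` class, its field names, and the post dictionary shape are assumptions, not the Macrocosmos streaming API.

```python
# Illustrative streaming-rule matcher. Class, fields, and post shape
# are hypothetical, not the Macrocosmos API.
from dataclasses import dataclass, field


@dataclass
class StreamRule:
    keywords: list
    sources: list = field(default_factory=lambda: ["x", "reddit"])

    def matches(self, post: dict) -> bool:
        """Return True when the post's source is allowed and any
        rule keyword appears in its text (case-insensitive)."""
        text = post.get("text", "").lower()
        return (post.get("source") in self.sources
                and any(k.lower() in text for k in self.keywords))


rule = StreamRule(keywords=["launch", "outage"])
posts = [
    {"source": "x", "text": "Service outage reported this morning"},
    {"source": "reddit", "text": "Weekend discussion thread"},
]
matched = [p for p in posts if rule.matches(p)]
print(len(matched))  # prints 1 (only the outage post matches)
```

A hosted streaming endpoint would evaluate rules like this server‑side and push matched content continuously, rather than filtering a local list.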
The platform names X (Twitter) and Reddit as primary inputs for the Data Universe. These sources supply posts, threads, and community conversations that are normalized and indexed for search, analytics, and downstream ML tasks. The documentation positions these two networks as the cornerstone of Macrocosmos’s social data coverage; it also describes Data Universe datasets as metadata‑rich and intended for robust filtering and selection. [1]
Macrocosmos emphasizes standardized, consistent outputs with rich metadata. Public materials indicate that the API returns structured JSON suitable for programmatic ingestion, while dataset jobs and exports can be delivered in tabular formats such as CSV where applicable, and streaming endpoints support continuous delivery for real‑time processing. Specific field schemas and column definitions are not published in the overview and are referenced to the technical documentation. [3]
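Since field schemas are deferred to the technical documentation, the following sketch only illustrates the general pattern of turning structured JSON records into a CSV export using the standard library. The record fields (`id`, `source`, `text`, `created_at`) are assumed for illustration and are not the published Macrocosmos schema.

```python
# Sketch: flattening structured JSON records into a CSV export.
# The record fields shown are assumptions, not a documented schema.
import csv
import io
import json

raw = json.loads("""[
  {"id": "1", "source": "x", "text": "hello", "created_at": "2024-01-01T00:00:00Z"},
  {"id": "2", "source": "reddit", "text": "thread reply", "created_at": "2024-01-01T01:00:00Z"}
]""")

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "source", "text", "created_at"])
writer.writeheader()
writer.writerows(raw)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # prints the header row
```

The same pattern extends to dataset‑job outputs: a batch of JSON records with a consistent schema maps directly onto a tabular export once the column set is known.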
The IOTA site presents a decentralized training network architecture that incentivizes contributors to run a “Train at Home” desktop client using local GPUs. Participants connect a crypto wallet and, according to the site, earn rewards automatically for supplying compute. The public dashboard highlights validation and training activity, counts of miners, and other real‑time telemetry, using terminology that aligns with decentralized ML frameworks (e.g., “Bittensor‑style” miners/validators). The site links to an FAQ, documentation, and a whitepaper for operational and architectural specifics. Public counters displayed on the main page include unpopulated or placeholder numbers in the observed snapshot, indicating an early or evolving state of public telemetry. [2]
Macrocosmos organizes capabilities under a subnet architecture and references additional platform components such as Macrocosmos MCP and OpenClaw Skills. Subnet 13 corresponds to the Data Universe, and Subnet 9 corresponds to IOTA. This structuring suggests a modular ecosystem that can support data access, orchestration, and auxiliary developer tooling under a common framework. [1]
The IOTA portal uses terminology associated with decentralized ML communities (e.g., miners and validators) and explicitly references “Bittensor‑style” activity, indicating conceptual compatibility with token‑incentivized AI networks. The site foregrounds wallet‑connected participation, reward distribution, and a “Train at Home” path for GPU owners, supported by an IOTA Dashboard and linked documentation. [2]
In crypto‑market contexts, a CoinGecko page exists for an “IOTA‑2” asset. While the IOTA portal and Macrocosmos documentation do not specify the legal or technical relationship between that listing and Macrocosmos’s IOTA subproject, the presence of an IOTA‑labeled page on a token aggregator highlights how crypto communities may reference tokenized ecosystems alongside data and compute orchestration products. [3]