ROBOTICS
Over 1PB produced as of January 2026

Data for embodied intelligence

The complete package for robotics foundation models. Video, trajectories, and rich multimodal annotations — across every configuration, from pre-training to post-training to evals.

Full-stack robotics data

We cover every stage of the robotics ML pipeline.

Pre-training

Diverse environments, objects, and manipulation tasks at scale for building foundational robotics models.

Post-training

Expert teleoperation with precise action labels optimized for imitation learning.

Evals

Standardized tasks across manipulation and navigation for measuring real-world performance.

Camera Perspectives

Data across configurations

Multiple camera perspectives and collection modalities.

01

Egocentric

First-person camera mounted on the robot head or body. Captures the robot's perspective during task execution.

Head-mounted · Body cam

02

Overhead

External cameras positioned above the workspace. Bird's-eye view of the full scene and robot motion.

Stereo · Mono

03

Egocentric + Wrist

Combined head- and wrist-mounted cameras. Full scene context plus a close-up manipulation view for detailed grasping tasks.

Dual view · Synchronized

04

Teleoperation

Human-controlled demonstrations with expert operators. High-quality trajectories optimized for imitation learning.

Expert demos · Action labels
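
To make the shape of a delivery concrete, here is a minimal sketch of how a single episode might bundle synchronized camera streams, action trajectories, and annotations. Every class and field name here (EpisodeRecord, CameraStream, and so on) is an illustrative assumption, not our actual delivery schema.

from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative sketch only: class and field names are assumptions,
# not the actual delivery schema.

@dataclass
class CameraStream:
    view: str                     # e.g. "ego_head", "wrist_right", "overhead"
    video_path: str               # synchronized video file for this view
    intrinsics: list[float] = field(default_factory=list)

@dataclass
class EpisodeRecord:
    episode_id: str
    task: str                          # e.g. "stack_plates"
    environment: str                   # e.g. "kitchen"
    cameras: list[CameraStream]        # one or more synchronized views
    actions: list[list[float]]         # per-timestep action labels from teleop
    proprioception: list[list[float]]  # joint states aligned to actions
    annotations: dict[str, str]        # free-form tags: objects, operator, ...

# Example: an egocentric + wrist episode (configuration 03 above).
episode = EpisodeRecord(
    episode_id="ep_000123",
    task="stack_plates",
    environment="kitchen",
    cameras=[
        CameraStream(view="ego_head", video_path="ep_000123/head.mp4"),
        CameraStream(view="wrist_right", video_path="ep_000123/wrist.mp4"),
    ],
    actions=[[0.0] * 7],           # placeholder 7-DoF action vector
    proprioception=[[0.0] * 7],
    annotations={"operator": "op_17", "objects": "plates,ceramic"},
)
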
HOW IT WORKS

AI-powered diversity steering

Our robotics data engine automatically categorizes, validates, and steers collection to continuously improve diversity across every dimension that matters.

Ingest

Raw data streams in from collection sites worldwide

Categorize

AI auto-tags environment, objects, tasks, and operator style

QA + Validate

Automated quality checks and human review for edge cases

Steer

System identifies gaps and redirects collection to fill them
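
As a rough illustration of how these four stages fit together, here is a minimal sketch of the loop. The function names (categorize, passes_qa, steer), record fields, and the per-task target are hypothetical stand-ins, not our production data engine.

from __future__ import annotations
from collections import Counter

# Minimal sketch of the ingest -> categorize -> QA -> steer loop described
# above. Function names and thresholds are hypothetical, for illustration only.

def categorize(episode: dict) -> dict:
    """Auto-tag environment, objects, task, and operator style (stubbed)."""
    episode.setdefault("tags", {})   # a real system would call perception models here
    return episode

def passes_qa(episode: dict) -> bool:
    """Automated quality checks; borderline cases would go to human review."""
    return bool(episode.get("cameras")) and bool(episode.get("actions"))

def steer(accepted: list[dict], target_per_task: int) -> list[str]:
    """Identify under-represented tasks and emit collection directives."""
    counts = Counter(ep["task"] for ep in accepted)
    return [
        f"collect more '{task}' episodes ({count}/{target_per_task})"
        for task, count in counts.items()
        if count < target_per_task
    ]

def run_pipeline(raw_stream: list[dict], target_per_task: int = 500):
    accepted = []
    for episode in raw_stream:             # Ingest
        episode = categorize(episode)      # Categorize
        if passes_qa(episode):             # QA + Validate
            accepted.append(episode)
    return accepted, steer(accepted, target_per_task)   # Steer

# Example: one valid episode and one that fails QA.
raw = [
    {"task": "stacking", "cameras": ["ego_head"], "actions": [[0.0] * 7]},
    {"task": "insertion", "cameras": []},   # rejected: no camera streams
]
accepted, directives = run_pipeline(raw, target_per_task=3)
print(directives)   # ["collect more 'stacking' episodes (1/3)"]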

Environment coverage

Kitchens, offices, warehouses, retail spaces, and homes. Captured across varied lighting, clutter, and indoor–outdoor conditions.

Object diversity

Thousands of unique object instances across materials (rigid, deformable, transparent), sizes, shapes, and weights.

Task variety

Pick and place, stacking, insertion, tool use, navigation, and multi-step sequential tasks.

Operator diversity

Multiple demonstrators per task with varied skill levels, styles, handedness, and demonstration speeds.
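
To make the steering target concrete, below is a small sketch of how coverage across these four axes might be summarized and gaps flagged. The dimension keys and the min_share threshold are illustrative assumptions, not how our engine actually scores diversity.

from __future__ import annotations
from collections import Counter

# Sketch of a per-dimension coverage report over the four axes above.
# Dimension keys and the min_share threshold are illustrative assumptions.

DIMENSIONS = ("environment", "object_material", "task", "operator")

def coverage_report(episodes: list[dict], min_share: float = 0.05) -> dict:
    """Report each value's share of episodes per dimension and flag
    values below min_share as gaps for collection to steer toward."""
    report = {}
    total = len(episodes) or 1
    for dim in DIMENSIONS:
        counts = Counter(ep.get(dim, "unknown") for ep in episodes)
        shares = {value: n / total for value, n in counts.items()}
        report[dim] = {
            "shares": shares,
            "gaps": sorted(v for v, s in shares.items() if s < min_share),
        }
    return report

# Toy example: a kitchen-heavy batch flags the under-represented values.
batch = [
    {"environment": "kitchen", "object_material": "rigid",
     "task": "stacking", "operator": "op_01"},
    {"environment": "kitchen", "object_material": "deformable",
     "task": "folding", "operator": "op_02"},
    {"environment": "warehouse", "object_material": "rigid",
     "task": "pick_place", "operator": "op_01"},
]
print(coverage_report(batch, min_share=0.4))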

Examples

Robotics sample data

Explore sample robotics footage captured across tasks, sensors, and environments.

Human egocentric: Mono

Washing dishes

Human egocentric: Stereo

Groceries

Human egocentric: Trio

Folding clothes

Bimanual stationary

Stacking plates

Quad + Gripper

Sorting

NEXT-GEN ROBOTICS

During his visit to our SF robotics lab, Dwarkesh got a firsthand look at how we’re redefining robotics development with cutting-edge data collection.

Ready to train robots that generalize?

Let's talk about your robotics data needs.