MOXI MoCap Suit V100-R

High-Performance Robotic Motion Training & VLA Collaboration Solution

Precise kinematics to drive training, control, and performance optimization, trusted by innovators worldwide.

V100-R is MOXI’s state-of-the-art IMU-based motion capture system, designed specifically for remote robot control and VLA (Vision-Language-Action) model training. It integrates proprietary nine-axis IMU pose-fusion technology, a patented dynamic pose-correction algorithm, and a ROS2 SDK. The V100-R is engineered for seamless integration with NVIDIA Jetson platforms and the ROS2 ecosystem, delivering high-precision, low-latency full-body motion capture with real-time data streaming. By simply wearing the V100-R suit along with motion capture gloves, developers can intuitively generate robot training datasets for instant motion mimicry, simulation-based training, and data-driven motion learning, accelerating the development pipeline from prototype to industrial deployment. As a top-tier IMU motion capture system, the V100-R streamlines robotic training and is ideal for smart manufacturing, human-robot collaboration, and edge-AI applications.

Why Choose MOXI’s MoCap Suit V100-R for Humanoid Robot Training?

Scientifically Validated High-Precision Data

  • Equipped with 15 nine-axis IMU sensors plus high-precision finger modules for full-body 360° motion tracking.
  • Data accuracy within ±2°, combined with a 100Hz refresh rate, ensures continuous and reliable datasets for AI/ML training.
  • No line-of-sight constraints: unlike traditional RGB-D or LiDAR systems, tracking is unaffected by lighting conditions, occlusion, or metallic interference.

Easy Integration into Your Development Workflow

  • Dedicated ROS2 SDK: Supports ROS2 integration with the NVIDIA Isaac ROS platform, accelerating robotic training and development.
  • NVIDIA Ecosystem Compatibility: Fully compatible with Omniverse and Isaac Sim, enabling digital twin modeling, virtual training scenario generation, and simulation testing.
  • Edge AI Inference: Optimized for NVIDIA Jetson platforms with TensorRT, delivering real-time motion analysis and control. Supports fully local IMU-based motion capture processing in offline environments.
  • Standard Data Outputs: Provides TF, Joint State, and IMU data, with full support for URDF models and visualization tools such as RViz and TF Viewer.
  • Modular, scalable architecture with a high-precision motion data library—ideal for AI training and biomimetic robotic control.
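To make the standard data outputs above concrete, here is a minimal sketch, in plain Python, of turning per-segment IMU orientation quaternions into a ROS 2 JointState-style structure. This is an illustrative assumption, not the actual MOXI SDK API; the function and segment names are invented for the example.

```python
import math

def quat_to_euler_deg(w, x, y, z):
    """Convert a unit quaternion (as fused by a 9-axis IMU) to
    roll/pitch/yaw in degrees (ZYX convention)."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # Clamp to avoid math-domain errors from floating-point drift.
    sinp = max(-1.0, min(1.0, 2 * (w * y - z * x)))
    pitch = math.asin(sinp)
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (roll, pitch, yaw))

def to_joint_state(segment_quats):
    """Flatten per-segment orientations into a JointState-like dict
    (names + positions), the shape a ROS 2 publisher would stream."""
    names, positions = [], []
    for segment, q in segment_quats.items():
        for axis, angle in zip(("roll", "pitch", "yaw"),
                               quat_to_euler_deg(*q)):
            names.append(f"{segment}_{axis}")
            # Spec lists angles reported to two decimal places.
            positions.append(round(angle, 2))
    return {"name": names, "position": positions}
```

In a real ROS 2 node these values would populate a `sensor_msgs/msg/JointState` message; the dict stands in for that message here so the sketch stays dependency-free.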

Real-World Application Value

  • Smart Manufacturing: Transform manual workflows into IMU datasets that serve as the foundation for robotic learning.
  • Human-Robot Collaboration Training: Supports imitation learning, posture classification, and fall prediction.
  • Hazardous Environment Replacement: Data-driven robotics enable safe execution of high-risk or specialized operations.
  • AI & XR Research: Comprehensive support for digital twin modeling, simulation testing, and interactive applications.
  • Motion-Driven Control: Convert MoCap posture data into control signals to drive biomimetic robots and robotic arms—seamlessly enabling simulation → training → deployment, with full digital-twin and control integration.
  • AI-Enhanced Perception: Provides real-time human joint data for training and inference, such as fall prediction and posture classification, giving AI models more timely and accurate motion insights—even on low-latency edge devices.
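As a rough illustration of the motion-driven control idea above (converting MoCap posture data into control signals), the following sketch retargets captured human joint angles onto robot joints with a linear calibration and joint-limit clamping. All names, limits, and the mapping scheme are assumptions for the example, not MOXI's actual implementation.

```python
def retarget_angle(human_deg, robot_min_deg, robot_max_deg,
                   scale=1.0, offset=0.0):
    """Map one captured human joint angle onto a robot joint:
    apply a linear calibration, then clamp to the robot's limits."""
    cmd = human_deg * scale + offset
    return max(robot_min_deg, min(robot_max_deg, cmd))

def mocap_frame_to_commands(frame, joint_map):
    """frame: {human_joint: angle_deg}.
    joint_map: {human_joint: (robot_joint, min_deg, max_deg, scale, offset)}.
    Returns {robot_joint: command_deg}; unmapped joints are ignored."""
    cmds = {}
    for human_joint, angle in frame.items():
        if human_joint in joint_map:
            robot_joint, lo, hi, scale, offset = joint_map[human_joint]
            cmds[robot_joint] = retarget_angle(angle, lo, hi, scale, offset)
    return cmds
```

Clamping matters in practice: a human shoulder can reach poses a robot arm cannot, and sending out-of-range targets would trip the robot's own joint-limit faults.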

The V100-R ROS2 SDK is currently in development, with priority support for ROS2 standards and strong compatibility with the NVIDIA Jetson and Isaac Sim ecosystem. We are actively enhancing ROS2 capabilities while exploring limited ROS1 compatibility to meet diverse robotics development needs. For the latest development updates, please contact us directly.

How to Use MOXI’s V100-R for Robot Training

1. A human operator wearing the V100-R suit controls the robot's motion in real time while the suit captures precise motion data.
2. During remote operation, the robot collects its own sensor data, deepening its understanding of movement and surroundings.
3. The merged motion and sensor data feed AI-driven deep learning that refines the robot's movement patterns.
4. The robot autonomously learns and performs actions under diverse conditions, guided by these AI insights.
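The steps above can be sketched as a simple recorder that pairs each mocap frame with the robot's own sensor readings, producing the kind of synchronized dataset imitation learning consumes. This is an illustrative Python sketch, not the vendor's tooling; the field names are assumptions.

```python
import json
import time

class TrainingRecorder:
    """Accumulate synchronized (mocap, robot-sensor) pairs for
    imitation learning: one record per frame (nominally 100 Hz)."""

    def __init__(self):
        self.records = []

    def log_frame(self, mocap_frame, robot_frame, t=None):
        """mocap_frame: e.g. joint angles from the suit;
        robot_frame: e.g. encoder/force readings captured at the same tick."""
        self.records.append({
            "t": time.time() if t is None else t,
            "mocap": mocap_frame,
            "robot": robot_frame,
        })

    def save(self, path):
        """Persist the episode as JSON for a downstream training pipeline."""
        with open(path, "w") as f:
            json.dump(self.records, f)
```

Keeping a shared timestamp per record is the key design point: the learning step needs the human demonstration and the robot's observation aligned frame-for-frame.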

Key Use Cases

Smart Manufacturing

Convert operator workflows into IMU action datasets that robots can learn from.

Human-Robot Collaboration Training

Use fused nine-axis IMU data for better posture information and enhanced imitation learning.
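For intuition on what fused nine-axis IMU data means, here is a textbook single-axis complementary filter, a common way to blend gyroscope and accelerometer readings into a drift-corrected tilt angle. This is a generic illustration, not MOXI's proprietary fusion algorithm.

```python
import math

def complementary_tilt(prev_angle_deg, gyro_dps, accel_xz, dt, alpha=0.98):
    """One step of a complementary filter for a single tilt axis.

    The gyro (deg/s) is integrated for short-term accuracy; the
    accelerometer's gravity direction is blended in at weight
    (1 - alpha) to cancel the gyro's long-term drift."""
    ax, az = accel_xz
    accel_angle = math.degrees(math.atan2(ax, az))
    return alpha * (prev_angle_deg + gyro_dps * dt) + (1 - alpha) * accel_angle
```

A full nine-axis fusion additionally uses the magnetometer to stabilize heading (yaw), which gravity alone cannot observe; the principle of weighting a fast-but-drifting source against a slow-but-absolute one is the same.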

Hazardous or Specialized Environments

Utilize motion data to drive robots in executing high-risk or complex tasks.

AI & XR Research

Support applications in digital twin modeling, simulation testing, and immersive interactive scenarios.

Business Strategy & Real-World Applications

Industry Role Positioning
  • Strategic Positioning: A “Modular Human Motion Input Interface” providing IMU MoCap outputs that integrate seamlessly into control and learning workflows
  • Target Industries / Applications: Humanoid robot development, AI training platforms
  • Industry Advantage & Value Proposition: A universal, wearable, and cost-effective physical motion input solution with strong cross-platform integration (Jetson / ROS2 / Isaac)

Technology Licensing & SDK Services
  • Strategic Positioning: Offering a ROS2 SDK that integrates with Isaac Sim / Jetson
  • Target Industries / Applications: Robotics hardware & software developers, OEM/ODM partners
  • Industry Advantage & Value Proposition: Empowers third-party products to rapidly integrate IMU-based control features, creating new OEM/ODM business opportunities

Training Data Collaboration
  • Strategic Positioning: Supplying high-precision human motion datasets for AI model training and biomimetic learning (e.g., imitation learning)
  • Target Industries / Applications: AI research institutes, robotics training platforms
  • Industry Advantage & Value Proposition: Provides reliable, labeled human motion data, ideal for building digital twins, autonomous control models, and realistic biomimetic learning

Standard Solution Advancement
  • Strategic Positioning: Delivering the “MOXI + Jetson/ROS2” standard development kit
  • Target Industries / Applications: Developer communities, industry prototyping & validation
  • Industry Advantage & Value Proposition: Lowers development barriers, enables rapid prototyping and deployment, and boosts brand visibility and community adoption

Specifications

MOXI V100-R sensor

Sensor hub (x2): Weight 8 g; Dimensions 1.65 in (42 mm) × 0.35 in (9.12 mm)
Sensor unit (x8): Weight 3.8 g; Dimensions 1.26 in (32 mm) × 0.26 in (6.64 mm)
Operating temperature range: 32°F–122°F (0°C–50°C)
Tracker: 15 calibrated 9-axis IMU-based sensors
Dynamic accuracy: ±2°
Angular resolution: reported to two decimal places (0.01°)
Data output rate: 100 Hz
Streaming data rate: 100 Hz

MOTi sensor

Sensor: 5 calibrated 9-axis IMU-based sensors
Battery capacity: 420 mAh, > 8 hrs
Water resistance: IPX4
Operating temperature: 32°F–122°F (0°C–50°C)
Wireless communication: Bluetooth
Dynamic accuracy: ±2°
Dimensions & weight: 46.3 × 38.8 × 14.3 mm, 19.2 g
Button: Power switch
Data output & streaming rate: 100 Hz

MOXI Transponder 1.0

Battery: Li-polymer, 2,000 mAh
Battery run time: > 4 hrs
Water resistance: IPX5
Wi-Fi: Dual band; Wi-Fi 4, Wi-Fi 5, or Wi-Fi 6
Connector: Waterproof
Bluetooth: Low Energy, v4.1 or above
Dimensions & weight: 60.2 × 76.8 × 19.2 mm, 75 g
Button: Power switch
Indicators: Battery capacity, PC/network connection, top connection, pants connection
App: TransEZ (Android, iOS) for transponder configuration and management

Software

MOXI Player (for Robot)
Platform: Windows 10, 11
  • MOXI SDK
  • URDF Importer
  • Bone Mapping / Retargeting

MOXI Receiver (for Robot)
Platform: Linux
  • Supports NVIDIA Isaac Sim / Lab
  • Compatible with the ROS 2 Humble and Jazzy distributions

Windows System Requirements

CPU: Intel® Core™ i7
Memory (RAM): 16 GB or above
Graphics (GPU): NVIDIA® GeForce RTX™ 4060
(Recommended configuration for optimal performance)

Join the next frontier of human-robot collaboration
