ROVR Docs

1. Overview
1.1 Introduction
There is no doubt that Advanced Driver Assistance Systems (ADAS) and autonomous driving will be among the most transformative revolutions in transportation technology over the next two decades. Meanwhile, in the field of consumer electronics, virtual reality (VR) and augmented reality (AR) technologies are poised to profoundly impact human history—on par with the revolutionary influence of smartphones over the past decade. At the heart of all this progress lies the rapid evolution of artificial intelligence (AI).
However, behind these exciting developments lies a fundamental and unresolved challenge: do we truly have the capability to acquire large-scale, real-world, highly accurate 3D spatial and 4D spatiotemporal data? Unlike the abundance of text, images, and video that the internet has made readily accessible, the collection of 3D data is far more difficult and constrained.
Take ADAS, the most urgent application field, as a case in point. Its reliance on high-definition (HD) 3D maps is well recognized. However, the scarcity and cost of such maps have become critical bottlenecks, slowing progress across the industry. In response, some have turned to stopgap solutions—sometimes even downplaying the importance of HD maps for the sake of pragmatism—relying instead on low-precision, outdated electronic navigation maps, combined with multi-sensor fusion to compensate for limitations in accuracy and operational reliability. But such compromises often result in significant issues with safety and dependability, sometimes even leading to heartbreaking tragedies.
What’s more, current HD map data collection remains monopolized by a few centralized corporations or organizations. The process is costly, complex, and extremely inefficient. Many of the maps being used today were collected over a decade ago. Although a number of admirable DePIN projects have attempted to crowdsource map creation using smartphones or dashcams in a decentralized manner, such methods simply cannot meet the stringent data precision and quality requirements of the automotive industry. As a result, these projects struggle to attract real customers and cannot achieve true commercial scalability. The core issue is straightforward: centimeter-level HD maps serve as the ground-truth system for ADAS and autonomous driving, and are the most critical infrastructure underlying these technologies. 3D maps generated solely from 2D video streams recorded by smartphones or dashcams are neither accurate nor stable enough to fulfill this role.
VR/AR applications represent another promising frontier. Accurate reconstruction of 3D environments is clearly a foundational requirement. While social and “killer” apps are key to ecosystem growth, a harsh reality remains: the lack of high-quality 3D training data for AI severely limits the ability of VR/AR systems to reconstruct scenes across multiple users or seamlessly integrate with other sensor systems. Therefore, for the foreseeable future, the shortage of precise, real-world 3D data will continue to impede technological breakthroughs and ecosystem maturity in VR/AR.
The same holds true in the field of robotics, where deep learning-based autonomous control systems are becoming increasingly data-hungry. Traditionally, robots relied on rule-based engines and hand-engineered perception pipelines. Today, however, the trend is shifting toward end-to-end large models that aim to provide better generalization, spatial understanding, and human-robot interaction capabilities. To achieve this, models require not only structured laboratory data but, more importantly, massive, real-world, high-resolution 3D and 4D datasets to accurately model and reason about complex environments. This applies across the spectrum—from logistics and household service robots to industrial automation and the next generation of humanoid robots.
In short, whether it’s autonomous driving systems navigating real-world roads, immersive VR/AR environments simulating reality, or intelligent robots making complex decisions in dynamic physical spaces, they all fundamentally depend on the same underlying capability: large-scale, high-precision, real-world 3D and 4D data acquisition and modeling. This represents one of the most critical bottlenecks facing the AI industry today.
1.2 ROVR Technological Philosophy
Over the past 20 years, it has become increasingly clear that traditional 3D data acquisition methods have reached their limits and cannot support the upcoming wave of the artificial intelligence revolution. ROVR adopts a fundamentally new approach to data collection and map creation, pioneering the deep integration of the traditional automotive industry, consumer electronics, Web2, and Web3, to build massive datasets for the forthcoming era of comprehensive AI and four-dimensional spatio-temporal computing.
1.2.1 High-Precision Software and Hardware Systems Conforming to Automotive Industry Standards
ROVR fully adopts onboard RTK (Real-Time Kinematic) and PPK (Post-Processed Kinematic) positioning systems, complemented by AGNSS and integrated navigation technologies, to continuously achieve centimeter-level positioning accuracy.
ROVR utilizes automotive-grade, mass-produced multi-channel solid-state 3D LiDAR combined with survey-grade IMU systems, generating approximately 1.5 million laser reflection points per second, each with accuracy within 2 centimeters, enabling high-fidelity reconstruction of the surrounding 3D environment.
Traditional dashcams and smartphone image sensors use rolling shutters, which cause motion-induced distortions that degrade measurement accuracy. ROVR employs high-definition fixed-focus lenses combined with global shutter sensors to effectively eliminate the jelly effect and image distortion commonly seen in rolling shutter devices, ensuring precise and reliable optical measurements.

1.2.2 Open and Collaborative Ecosystem Governance
1.2.2.1 ROS2-Based Data Protocol
ROVR is dedicated to establishing an open, transparent, and interoperable data ecosystem that maximizes the value and accessibility of spatial data. Central to this vision is the adoption of a ROS2 (Robot Operating System 2)-based data protocol framework, which provides standardized, modular, and scalable capabilities for data communication and processing.
Leveraging ROS2 enables seamless integration with a wide array of robotic platforms, autonomous vehicles, and consumer electronics, facilitating real-time data exchange and fostering cross-industry collaboration. This approach significantly enhances data interoperability and scalability, supporting diverse applications including autonomous driving, VR/AR, and robotic perception.
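In practice, leveraging ROS2 means expressing sensor output as standard ROS2 interface types. As an illustration only (ROVR's actual interface definitions are not published in this document), a fused LiDAR/pose frame could be described by a hypothetical custom message composed of standard ROS2 types:

```
# Hypothetical RovrFrame.msg — illustrative sketch, not an official ROVR interface
std_msgs/Header header                  # GPS-disciplined timestamp + frame id
sensor_msgs/PointCloud2 cloud           # LiDAR returns for this frame
geometry_msgs/PoseWithCovariance pose   # RTK/IMU fused pose with uncertainty
```

Building on standard types such as sensor_msgs/PointCloud2 is what makes the data immediately consumable by existing ROS2 tooling (e.g., RViz, rosbag2) and third-party robotics stacks.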
1.2.2.2 Open and Autonomous Ecosystem
ROVR operates as a highly transparent and autonomous ecosystem, where any individual or organization can design, manufacture, or customize hardware devices within the project’s open-source framework. Participants can freely engage in data collection, storage, maintenance, and processing, with the flexibility to join or leave the ROVR network at their discretion.
1.2.2.3 Community and Governance
The strength of the ROVR network lies in its vibrant community, composed of data contributors, consumers, developers, DIY enthusiasts, hardware manufacturers, and computing platform providers. Together, they drive innovation and ensure the ecosystem’s sustainable growth.
While decentralized and open, ROVR is governed by the ROVR Foundation Ltd., which oversees software and hardware development, data quality assurance, network operations, and community governance. The Foundation facilitates community engagement through R2FCs (ROVR Requests for Comments), enabling transparent decision-making and continuous improvement.
1.2.2.4 Privacy and Data Security
Privacy and data security are fundamental to ROVR’s ethos. Data is typically structured and anonymized to safeguard individual privacy—for example, labeling objects as "pedestrians" with attributes such as speed and direction, without capturing personally identifiable details. This balance ensures data utility while protecting user confidentiality.
1.3 Community Contributor Participation
The ROVR network offers diverse and accessible participation pathways for community contributors, fostering an open and efficient ecosystem.
1.3.1 Data Collection Using “Smartphone + TarantulaX”

The TarantulaX Mini is a highly integrated, compact sensor system that includes an enhanced RTK (Real-Time Kinematic) positioning module, an Inertial Measurement Unit (IMU), and a Web3 identity and data validation module. It connects to smartphones via Bluetooth and leverages the phone’s camera to capture road imagery, process and classify data, and upload it to earn $ROVR tokens. ROVR supports most mainstream smartphones and continuously optimizes compatibility across different models. However, due to variations in camera optics, AI processing power, and encryption capabilities, some devices may face compatibility limitations.
1.3.2 Data Collection Using the Professional 3D LiDAR Device “LightCone”

The LightCone device features a multi-channel LiDAR system (120+ channels, 200-meter range), enabling real-time, precise 3D environmental sensing. It also integrates automotive-grade navigation systems and large-aperture (>130 mm) high-precision measurement antennas. LightCone connects to smartphones via Bluetooth or Wi-Fi, with the phone serving as a control terminal that must remain online for device management.
Whether using TarantulaX or LightCone, vehicle speed during data collection must be maintained between 10 km/h and 120 km/h; data collected outside this range will not qualify for rewards. Rewards are calculated based on the kilometers of quality data collected. Quality data is defined as coverage of paved roads with clearly marked lanes and traffic elements, meeting road network classification Level 8 or above, including highways, urban expressways, ramps, and city streets.
1.3.3 Deploying R-Nodes and Staking $ROVR Tokens (Planned, Coming Soon)
R-Nodes serve as data storage and processing points within the ROVR network. Community contributors can deploy nodes that meet the necessary hardware, software, and network specifications, and stake $ROVR tokens to participate in the ecosystem, earning base rewards as well as additional workload-based incentives. The R-Node program is scheduled to launch in the project’s second phase; initial stages will rely primarily on traditional cloud servers and object storage services (OSS) for data management.
1.4 Data Storage, Processing, and Utilization
ROVR Inc. serves as the primary data processing entity within the ROVR network, responsible for cleansing, structuring, and transforming raw sensor data into high-definition maps and other premium data products for commercial use and distribution.
For clients such as OEMs, Tier 1 suppliers, or other industry partners, ROVR Inc. also offers tailored mapping solutions and customized data services to meet specific technical or business needs.
In the near future, ROVR will gradually open data reprocessing capabilities to the broader ecosystem. Qualified third-party organizations or individual developers will be able to perform secondary processing on ROVR’s open data and commercialize the results. This decentralized, permissionless model will become a key pillar of the ROVR ecosystem, unlocking new layers of data value and fostering a vibrant, scalable data economy.
1.5 Project Governance
The ROVR project is jointly operated by two main entities: ROVR Foundation Ltd. and Rovr Labs, Inc.
Responsibilities of ROVR Foundation Ltd.:
Maintain the integrity of the ROVR tokenomics and ensure a sustainable project ecosystem
Provide open-source hardware reference designs and software frameworks
Oversee global data collection and decentralized storage infrastructure
Lead the technical roadmap and ensure consistent innovation and development principles
Responsibilities of Rovr Labs, Inc.:
Transform raw data into high-definition maps or other valuable data products and deliver them to commercial clients
In the near future, the role of productizing and monetizing data will be gradually opened to third parties within the ecosystem, promoting a more decentralized and scalable development model
1.6 Compliance and Data Localization Principles
ROVR places paramount importance on data privacy and regulatory compliance. While the collection and utilization of public road map data are generally lawful in most countries and regions, certain jurisdictions impose strict restrictions due to concerns over privacy, national security, and commercial competition. Notably, in some countries and regions—such as North Korea—unauthorized surveying activities are considered serious offenses. Consequently, the use of ROVR hardware devices is strictly prohibited in these locations; any data generated therein will not be accepted by the ROVR network, and contributors will not receive any rewards.
In addition to legal compliance, ROVR adheres to stringent data localization principles, respecting the legal frameworks governing data storage and transmission in each jurisdiction. Key aspects include:
Local Storage Preference: Data should preferably be stored on compliant servers located within the country or region where the data is generated or where the user resides, minimizing legal risks associated with cross-border data transfers.
Secure Cross-Border Transfers: When cross-border transmission of data is unavoidable, ROVR implements robust encryption and security protocols to ensure the confidentiality and integrity of data during transit.
Sensitive Data Protection: For sensitive datasets, ROVR applies rigorous anonymization and de-identification techniques in accordance with local laws to prevent disclosure of personal information or proprietary business details.
Global Compliance Collaboration: ROVR actively collaborates with partners worldwide to establish a compliant, secure, and efficient data storage and processing ecosystem that meets diverse regulatory requirements and industry standards.
Given ROVR’s decentralized nature, it is imperative that community contributors proactively familiarize themselves with applicable local laws and regulations. Contributors are responsible for ensuring their data collection and usage practices comply with their jurisdiction’s legal framework to mitigate legal risks and avoid unnecessary complications.
ROVR is committed to fostering an open, compliant, and responsible global data ecosystem that balances innovation with legal adherence, thereby supporting the sustainable advancement of intelligent transportation and artificial intelligence technologies.
2. Data Production and Hardware Technologies
2.1 TarantulaX
TarantulaX primarily functions as a high-precision RTK (Real-Time Kinematic) receiver, enhancing positioning accuracy while ensuring data integrity and security through cryptographic measures.
Key Functions and Workflow:
RTK Positioning: TarantulaX connects via a smartphone to NTRIP (Networked Transport of RTCM via Internet Protocol) services provided by RTK base station operators (e.g., GEODNET) to acquire real-time correction data and satellite ephemeris information from nearby RTK base stations. It performs real-time position corrections at one-second intervals, typically improving smartphone positioning accuracy to within 1 centimeter.
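NTRIP itself is a thin layer over HTTP: the client requests a caster "mountpoint" and receives a continuous RTCM correction stream in response. A minimal sketch of building such a request in Python (the mountpoint and credentials are illustrative placeholders, not real GEODNET endpoints):

```python
import base64

def build_ntrip_request(mountpoint: str, user: str, password: str) -> str:
    """Build an NTRIP v1 GET request for a caster mountpoint.

    The caster answers with an RTCM correction stream that the RTK
    receiver applies to reach centimeter-level position fixes.
    """
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        "User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {auth}\r\n"
        "\r\n"
    )

req = build_ntrip_request("MOUNTPOINT1", "user", "secret")
```

In a real client, this request is written to a TCP socket on the caster’s port, and the resulting byte stream is forwarded to the receiver once per correction epoch.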

Data Authentication and Security: To prevent malicious replay attacks or forgery, TarantulaX employs an unreadable hardware private key to digitally sign the high-precision location data it generates, ensuring the authenticity and validity of the data.

Future Upgrades: The technical details and functionalities described here are subject to continuous improvement and may evolve with future product versions.
2.2 LightCone
LightCone is a significantly more advanced and comprehensive system than the TarantulaX Mini. Designed for professional-grade 3D spatial data collection, LightCone integrates multiple high-performance components to deliver industry-leading precision, reliability, and processing power.

System Overview
LightCone combines multi-channel LiDAR, high-resolution cameras, integrated navigation systems, and AI computing modules into a robust, field-ready device. It is capable of real-time 3D perception of complex environments and is built to meet the demanding requirements of automotive and mapping-grade applications.
Key Specifications
(Note: Specifications are subject to change in future hardware revisions. This document may not reflect the most up-to-date information.)
Hardware Specifications

| Item | Details |
| --- | --- |
| LiDAR Type | Automotive-grade solid-state LiDAR |
| Number of Beams | 126 |
| Ranging Accuracy | ±5 cm |
| Eye Safety | IEC 60825-1 Class 1 (eye-safe) |
| Measurement Range | Up to 200 m (180 m @ 10% NIST reflectivity) |
| Blind Spot | ≤ 0.5 m |
| Field of View (FOV) | 120° (horizontal) × 25° (vertical) |
| Angular Resolution | 0.2° × 0.2° |
| Camera | Automotive-grade HD camera |
| Positioning Accuracy | ±3 cm (global), powered by GEODNET RTK |
| Data Storage | Onboard 1 TB SSD |
IMU & Positioning

| Item | Details |
| --- | --- |
| IMU Bias Instability (Allan) | Gyro: XY < 2.6°/hour, Z < 2.1°/hour; Accelerometer: < 45 µg |
| IMU Random Walk (XYZ) | 0.13°/√h, 0.050 m/√h |
Processing Power

| Item | Details |
| --- | --- |
| Edge Processor | High-performance SoC |
| NPU | 6 TOPS @ INT8 |
| GPU | ARM Mali-G610 MC4 |
Environmental Specs

| Item | Details |
| --- | --- |
| Operating Temperature | -45°C to 85°C |
| Storage Temperature | -65°C to 105°C |
| Protection Rating | IP67 (waterproof and dustproof) |
Physical Dimensions

| Item | Details |
| --- | --- |
| Weight | 4.6 kg (device only); 7.4 kg (with packaging) |
| Size | 251.6 mm × 204.3 mm × 96.2 mm |
3. High-definition Mapping and 3D Data-related Technologies
3.1 HD Maps
HD maps refer to maps with high spatial and temporal resolution. They are primarily used in scenarios that require precise positioning, such as autonomous driving.
These maps include not only basic geographical elements (e.g., road and building locations), but also detailed semantic information such as lane markings, traffic signals, signs, road curvature, elevation, and more.
To produce HD maps, high-precision sensors like LiDAR, cameras, and GNSS systems are used to collect data. The raw data is then processed using mapping algorithms to generate accurate and reliable map products.
In terms of data content, HD maps typically consist of three main components:
3.1.1 Road Model
This includes information at the road level, such as road geometry, road attributes (such as passability), lane information within the road, and information about objects on the road.
3.1.2 Lane Model
This contains information at the lane level, such as lane geometry, lane connectivity, lane attributes (such as direction of travel, whether it can be crossed), and the relationship with the object model.
3.1.3 Object Model
This encompasses the geometry, orientation, category, and corresponding relationships between objects and lanes.

3.2 HD Map Mapping
For HD mapping, we use professional equipment including solid-state LiDAR, 4K cameras, and high-precision positioning devices. Parameter calibration is the first step: internal sensor parameters are calibrated during manufacturing, while external parameters are automatically calibrated through an integrated program, requiring no user intervention.
After data collection, local mapping results are aggregated in the cloud. HD maps typically require an absolute accuracy of less than 50 cm and a relative accuracy within 20 cm over 100 meters.
All collected data is used to automatically generate HD maps. Point cloud data provides centimeter-level ranging, image data offers rich semantic information, and precise pose data enhances overall map consistency and positioning accuracy. Due to the high precision standards demanded by OEMs, manual editing and review are still necessary after automatic generation.

Starting in Q3 2024, we will use the mini device TX for HD map mapping, with online mapping supported by our app. Currently, the data transmission rate is 100 KB/km and is expected to decrease to 50 KB/km by Q2 2025.
After Q1 2025, we will release our professional device LC, which will support online high-precision map mapping.

3.3 HD Map Updating
HD map updating begins with 3D reconstruction of consecutive images using stereo vision techniques.
However, due to latency during camera exposure, the visual 3D reconstruction results are not rigidly aligned with the existing HD map point cloud. To address this, we simulate the imaging process to project the HD map point cloud onto the image coordinate system and match it with real-time captured images—primarily focusing on edge features—to optimize the camera’s intrinsic and extrinsic parameters.
Optimizing the camera parameters is necessary because the smartphone mount cannot maintain a rigid connection over extended periods, causing changes in the smartphone’s extrinsic parameters. Additionally, the presence of the windshield may alter the relative position between the smartphone lens and the windshield, affecting the intrinsic parameters of the entire measurement system.
Once the optimized camera parameters are obtained, depth is estimated for each pixel. Using monocular depth estimation, accurate depth information is generated and compared against the existing HD map data to complete the map updating process.
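The projection step described above can be illustrated with a plain pinhole camera model. The sketch below (intrinsic values are illustrative, not calibrated ROVR parameters) maps 3D points in the camera frame to pixel coordinates — the operation used to overlay the HD-map point cloud on a captured image for edge matching:

```python
def project_points(points, fx, fy, cx, cy):
    """Project 3D camera-frame points (X right, Y down, Z forward)
    onto the image plane with an undistorted pinhole model."""
    pixels = []
    for X, Y, Z in points:
        if Z <= 0:                # behind the camera: not visible
            continue
        u = fx * X / Z + cx       # horizontal pixel coordinate
        v = fy * Y / Z + cy       # vertical pixel coordinate
        pixels.append((u, v))
    return pixels

# A map point 10 m ahead and 1 m to the right, 1000 px focal length,
# principal point at the center of a 1920x1080 image
px = project_points([(1.0, 0.0, 10.0)], 1000, 1000, 960, 540)
```

In the full pipeline, the residual between these projected map points and detected image edges is what drives the optimization of the camera's intrinsic and extrinsic parameters.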

3.4 4D Data Generation
3.4.1 Controllable Scene Editing
In the era of large models, data is a precious resource, and its importance is particularly prominent in the field of vision. Unlike Sora, our 4D data generation model is grounded in the real world and allows for precise scene editing (such as defining accurate three-dimensional positions, dimensions, orientations, and velocities for a vehicle), which is crucial for training large models for tasks like autonomous driving. Below is a demo video based on the KITTI dataset. In the video, we replaced all the different types of vehicles with a single vehicle model.
Furthermore, we have obtained a letter of intent (LOI) from a Tier 2 supplier. However, for commercial reasons, the video we are showcasing was generated using KITTI, an open-source autonomous driving dataset.
3.4.2 Monocular RGBD to Multi-View Generation
Using monocular RGBD data captured by LightCone’s monocular camera combined with LiDAR (with overlapping fields of view), we leverage deep learning techniques to generate multi-view (multi-camera) images. This process synthesizes and enhances visual information from multiple perspectives, enabling richer scene understanding and more accurate 3D perception.

“Thank you very much for your strong support for the binocular data generation project based on AIGC technology. After evaluation by the R&D team, the generated data and annotation results provided by your company meet the training requirements of the binocular perception model. I am sending this email to confirm our cooperation. Please start the formal development work from now on. The specific amount and deliverables shall be subject to the commercial contract”
— LOI email sent to us by a Tier 2 company
3.4.3 Data Usability Validation and Real-World Deployment
The data we generated has completed usability validation, and the client is already using it in a real production environment.
4. Anti-Cheating and Trustless Network
Cheating and spoofing are common issues in many DePIN projects. ROVR has learned from extensive experience and employs multiple anti-cheating and anti-spoofing technologies, eliminating the need for a centralized denylist. In the ROVR network, all nodes are designed as trustless models, and the system only accepts verified data, ensuring authenticity and reliability.
4.1 Digital Signature System
All ROVR hardware devices integrate a specialized encryption chip that securely stores a private key; the key cannot be accessed externally and is used to digitally sign the device's output data.
When outputting data, devices including TarantulaX and LightCone use the encryption chip to sign their data. The transmitted data package contains:
The data collected by the device
The digital signature fingerprint
The public key of the encryption chip
The backend system verifies the digital signature using the public key to ensure data authenticity. Data failing signature verification will be automatically discarded to prevent forged data from entering the network.
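The verify-then-accept flow can be sketched as follows. Note the stand-in: ROVR devices sign with an asymmetric private key sealed in the encryption chip, but to keep this example runnable with only the standard library it uses HMAC-SHA256 as a symmetric placeholder for the signature primitive:

```python
import hashlib
import hmac
import json

# Stand-in for the chip's sealed key; in hardware this is unreadable.
DEVICE_KEY = b"sealed-device-key"

def sign_packet(payload: dict) -> dict:
    """Device side: sign the canonicalized payload and attach the tag."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_packet(packet: dict) -> bool:
    """Backend side: recompute the tag and compare in constant time."""
    body = json.dumps(packet["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["signature"])

pkt = sign_packet({"lat": 48.8584, "lon": 2.2945, "t": 1720000000})
ok = verify_packet(pkt)          # authentic packet passes
pkt["payload"]["lat"] = 0.0      # any tampering breaks verification
bad = verify_packet(pkt)
```

With the chip's asymmetric scheme, the backend needs only the public key shipped in the data package, so no secret ever leaves the device.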

4.2 Satellite Observation Files
Spoofing device GPS latitude and longitude information is a common attack in DePIN projects, often executed by forging NMEA messages to perform replay attacks.
ROVR hardware devices provide not only GPS position data but also raw satellite observation files and precise ephemeris files. These files include:
GPS timestamps
Number and positions of observed satellites
Satellite velocities and headings
Satellite signal quality indicators (DOP)
This data is cross-checked with parameters from nearby RTK base stations (currently provided by GEODNET), greatly increasing spoofing complexity.
ROVR supports a quad-constellation GNSS system—GPS, GLONASS, Galileo, and BeiDou—with precise ephemeris files for all constellations. Forging such complex and cross-verified data is virtually impossible.
Easily spoofed NMEA messages

Imagine forging this 250,000-line file:

4.3 Sensor Fusion
ROVR hardware integrates multiple sensors whose data synergistically validate each other.
For example, when a vehicle passes over a speed bump, the image sensor, IMU, LiDAR, and RTK positioning system simultaneously detect vibrations and exhibit consistent dynamic responses.
Malicious actors attempting to forge synchronized data across multiple sensors to mimic realistic motion would face extremely high technical challenges.
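One simple form of this cross-validation is checking that an event such as a speed bump registers at the same moment in independent sensor streams. A minimal sketch with made-up sample values and thresholds (the production pipeline uses far richer consistency checks):

```python
def spike_index(samples, threshold):
    """Index of the first sample deviating from the stream mean by
    more than `threshold`, or None if the stream is quiet."""
    mean = sum(samples) / len(samples)
    for i, s in enumerate(samples):
        if abs(s - mean) > threshold:
            return i
    return None

def sensors_consistent(imu_z, rtk_alt, max_offset=1):
    """A real bump should appear in both streams within
    `max_offset` samples of each other."""
    i = spike_index(imu_z, 0.5)     # accel threshold (illustrative)
    j = spike_index(rtk_alt, 0.02)  # altitude threshold (illustrative)
    if i is None or j is None:
        return False
    return abs(i - j) <= max_offset

imu = [0.0, 0.01, -0.02, 0.9, 0.02]        # vertical accel, spike at sample 3
alt = [10.00, 10.00, 10.01, 10.06, 10.00]  # RTK altitude, bump at sample 3
consistent = sensors_consistent(imu, alt)
```

Forged data that shows a bump in one stream but not the others fails this kind of check, which is why fabricating convincing multi-sensor data is so difficult.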
4.4 Stricter Penalties for Cheating
To maintain fairness in the network, ROVR enforces strict penalties for cheating:
| Offense | Penalty |
| --- | --- |
| 1st | Account and device frozen for 3 months |
| 2nd | Account and device frozen for 999 years (permanent ban) |
ROVR takes cheating very seriously and is committed to preserving a fair and secure network environment.
5. $ROVR Allocation
Total fixed supply: 10 billion
Issuance Network: Solana
Allocation Structure:
51% as rewards for community contributors who participate in building the ROVR Network
20% for founding team members and future global core contributors, supporting project R&D and system construction
20% for project investors
9% for the ROVR ecosystem, including liquidity, market operations, promotions, and other activities essential for sustaining ecosystem development
| Category | % of Supply | Token Amount |
| --- | --- | --- |
| Community Contributors | 51.00% | 5,100,000,000 |
| Team | 20.00% | 2,000,000,000 |
| Investors | 20.00% | 2,000,000,000 |
| Ecosystem | 9.00% | 900,000,000 |
Vesting Schedule:
51% Mining Rewards: Released progressively based on users’ mapping mileage and data quality.
20% Team Allocation: 9-month cliff, followed by 36-month linear vesting.
20% Investors: 0.733% unlocked at TGE; remaining tokens locked for 6 months, then linearly vested over 24 months.
9% Ecosystem: 0.018% unlocked at TGE; remaining tokens vested linearly over 12 months.
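As a worked example of the schedule above, the investor unlock curve (0.733% at TGE, 6-month lock, then 24-month linear vesting) can be computed directly. This is a sketch of the published numbers, not the on-chain vesting contract:

```python
def investor_unlocked(total_tokens: float, months_since_tge: int) -> float:
    """Tokens unlocked from the investor allocation at a given month.

    Per the stated schedule: 0.733% unlocks at TGE, the remainder is
    locked for 6 months and then vests linearly over 24 months.
    (Assumes the 0.733% is a share of the investor allocation.)"""
    tge_unlock = 0.00733 * total_tokens
    if months_since_tge < 6:
        return tge_unlock
    vested_months = min(months_since_tge - 6, 24)
    return tge_unlock + (total_tokens - tge_unlock) * vested_months / 24

u_tge = investor_unlocked(2_000_000_000, 0)   # roughly 14.66M at TGE
u_end = investor_unlocked(2_000_000_000, 30)  # fully vested
```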

5.1 $ROVR Calculation
Base Rewards
TarantulaX (TX): 1.6 $ROVR per km
LightCone (LC): 16 $ROVR per km
Starting from TGE, base $ROVR rewards will be halved every year:

| Year | TX (per km) | LC (per km) |
| --- | --- | --- |
| 1 | 1.6 $ROVR | 16 $ROVR |
| 2 | 0.8 $ROVR | 8 $ROVR |
| 3 | 0.4 $ROVR | 4 $ROVR |
| 4 | …… | …… |
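The halving schedule reduces to a simple closed form: the year-N rate is the year-1 rate divided by 2^(N-1). A sketch:

```python
def base_reward_per_km(device: str, year: int) -> float:
    """Base $ROVR per km in a given year since TGE (year 1 = first year).

    Per the published schedule, rewards halve every year from the
    starting rates of 1.6 (TarantulaX) and 16 (LightCone) $ROVR/km."""
    start = {"TX": 1.6, "LC": 16.0}[device]
    return start / (2 ** (year - 1))

r = base_reward_per_km("LC", 3)  # 16 / 2**2 = 4 $ROVR per km
```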
Data Quality Tiers & Reward Ratio
The quality of the collected data (e.g., clarity, brightness, RTK accuracy) affects the amount of $ROVR received. The final assessment falls into five tiers:

| Grade | Reward Ratio |
| --- | --- |
| A – Excellent | 100% |
| B – Good | 75% |
| C – Average | 50% |
| D – Below Average | 25% |
| F – Failing | 0% |
*”F” will be assigned to any driver using multiple TX or LC on a single vehicle.
*Use of 1 TX and 1 LC device simultaneously on the same vehicle is permitted.
Weekly Road Revisit Reward Decay
Rewards for each road are halved after every two additional collections:

| Collections | Reward |
| --- | --- |
| 1st–2nd | 100% |
| 3rd–4th | 50% |
| 5th–6th | 25% |
| 7th–8th | 12.5% |
| …… | …… |
Road revisits are counted globally.
Revisit records reset weekly, on Monday at 00:00 GMT.
$ROVR rewards are transferred within one week after data upload; data older than two weeks will not be accepted.
This mechanism takes effect on July 1, 2025 at 00:00 GMT.
*Tip: Before driving, check the ROVR App map and follow the guidance to maximize your rewards.
*Rewards Multiplier based on Customer Requirement
The additional $ROVR reward multiplier will be set based on the requirements of future data customers, using time and spatial range (NDS level-13 grid) as criteria.
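Putting the pieces of Section 5.1 together, a contributor's payout for one collection run is the base rate times distance, scaled by the quality tier and the weekly revisit multiplier. A sketch using the published figures (the actual reward pipeline may apply additional factors, such as the customer-requirement multiplier):

```python
QUALITY_RATIO = {"A": 1.0, "B": 0.75, "C": 0.5, "D": 0.25, "F": 0.0}

def revisit_multiplier(visit_number: int) -> float:
    """Rewards halve after every two collections of the same road in a
    week: visits 1-2 pay 100%, 3-4 pay 50%, 5-6 pay 25%, and so on."""
    return 0.5 ** ((visit_number - 1) // 2)

def run_reward(base_per_km: float, km: float, grade: str, visit_number: int) -> float:
    """Payout for one run: base rate x distance x quality x revisit decay."""
    return base_per_km * km * QUALITY_RATIO[grade] * revisit_multiplier(visit_number)

# 10 km with LightCone (16 $ROVR/km), grade B, third collection this week
payout = run_reward(16.0, 10.0, "B", 3)  # 16 * 10 * 0.75 * 0.5 = 60 $ROVR
```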

5.2 $ROVR Burn Mechanism
$ROVR operates on a permanent burn mechanism, with buyback funds sourced from data consumers purchasing ROVR's data products:
60% is used to buy back circulating $ROVR and permanently burn it.
20% covers RTK service fees—used to buy back and permanently burn $GEOD from GEODNET.
20% supports ROVR's operational expenses.
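The revenue split above is straightforward arithmetic; a sketch (amounts are in whatever settlement currency funds the buybacks):

```python
def split_data_revenue(amount: float) -> dict:
    """Split data-product revenue per the stated policy:
    60% -> buy back and permanently burn $ROVR,
    20% -> RTK service fees (buy back and burn $GEOD),
    20% -> ROVR operational expenses."""
    return {
        "rovr_buyback_burn": amount * 60 / 100,
        "geod_buyback_burn": amount * 20 / 100,
        "operations": amount * 20 / 100,
    }

shares = split_data_revenue(100_000)
```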