# ROVR Docs

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FOvE2r23X6hBzlihQYVKm%2Fimage.png?alt=media&#x26;token=365303f6-9c26-439f-9c66-ab0090c53c23" alt=""><figcaption></figcaption></figure>

## 1. Overview

### 1.1 Introduction

***

There is no doubt that autonomous mobility—spanning Advanced Driver Assistance Systems (ADAS) and self-driving vehicles—will be one of the most transformative revolutions in transportation technology over the next two decades. At the same time, **Spatial AI** is emerging as a foundational capability far beyond cars: from consumer electronics like virtual reality (VR) and augmented reality (AR), to industrial and household robotics, to the next generation of immersive digital worlds. Together, these domains will redefine how humans and intelligent machines perceive, navigate, and interact with reality.

Yet behind these exciting developments lies a critical and unresolved challenge: **do we truly have the ability to capture large-scale, high-precision, real-world 3D and 4D data?** Unlike the abundance of text, images, and video readily available on the internet, spatial data is far scarcer and much harder to acquire.

Consider ADAS and autonomous driving. These systems fundamentally rely on centimeter-level high-definition (HD) 3D maps as ground truth. However, the scarcity and cost of such maps have become severe bottlenecks. Existing approaches—whether centralized map providers or stopgap solutions using outdated navigation maps fused with sensors—are inadequate, often leading to issues of safety and reliability. Meanwhile, decentralized experiments using smartphones or dashcams have shown promise but ultimately fall short of the precision and stability demanded by the automotive industry.

Now zoom out: the same bottleneck applies across other frontiers.

* **VR/AR**: Rich 3D reconstruction is the foundation for immersive experiences, yet the lack of high-quality spatial training data limits realism, cross-user consistency, and seamless sensor integration.
* **Robotics**: As robots move from rule-based engines toward end-to-end large models, they require massive, diverse, real-world 3D/4D datasets to achieve robust spatial understanding, generalization, and safe human-robot interaction. From logistics and service robots to industrial automation and humanoid robotics, the need for reliable spatial intelligence is universal.

In short, whether it is a self-driving car navigating city streets, a VR headset reconstructing a shared virtual space, or a humanoid robot making sense of a factory floor, **all of these breakthroughs depend on the same underlying capability: the scalable acquisition and modeling of high-precision real-world 3D and 4D data**. This is the critical bottleneck of Spatial AI—and solving it unlocks progress not only in transportation, but across robotics, immersive computing, and beyond.

### 1.2 **ROVR Technological Philosophy**

***

Over the past 20 years, it has become increasingly clear that traditional 3D data acquisition methods have reached their limits and cannot support the upcoming wave of the artificial intelligence revolution. ROVR adopts a fundamentally new approach to data collection and map creation, pioneering the deep integration of the traditional automotive industry, consumer electronics, Web2, and Web3, to build massive datasets for the forthcoming era of comprehensive AI and four-dimensional spatio-temporal computing.

**1.2.1 High-Precision Software and Hardware Systems Conforming to Automotive Industry Standards**

ROVR fully adopts onboard **RTK (Real-Time Kinematic)** and **PPK (Post-Processed Kinematic)** positioning systems, complemented by **AGNSS** and integrated navigation technologies, to continuously achieve centimeter-level positioning accuracy.

ROVR utilizes automotive-grade, mass-produced multi-channel solid-state 3D LiDAR combined with survey-grade IMU systems, generating approximately **1.5 million laser reflection points per second**, each with accuracy within 2 centimeters, enabling high-fidelity reconstruction of the surrounding 3D environment.
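
For scale, the raw bandwidth implied by that point rate can be estimated with a quick back-of-envelope calculation. The 16-byte point layout below is an illustrative assumption, not a ROVR specification:

```python
# Back-of-envelope raw data rate for the LiDAR stream described above.
# The 16-byte point layout (x, y, z as float32 + float32 intensity)
# is an assumed example format, not ROVR's actual wire format.
POINTS_PER_SECOND = 1_500_000
BYTES_PER_POINT = 16  # 3x float32 coordinates + float32 intensity (assumed)

def raw_data_rate_mb_per_s(points_per_s: int = POINTS_PER_SECOND,
                           bytes_per_point: int = BYTES_PER_POINT) -> float:
    """Return the uncompressed point-cloud bandwidth in MB/s."""
    return points_per_s * bytes_per_point / 1_000_000

print(raw_data_rate_mb_per_s())  # 24.0 MB/s before compression
```

At roughly 24 MB/s of raw points, onboard storage and compression matter, which is one reason the hardware ships with a large local SSD.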

{% embed url="https://youtu.be/YpYR1KRL97w" %}

Traditional dashcams and smartphone image sensors use rolling shutters, which cause motion-induced distortions that degrade measurement accuracy. ROVR employs **high-definition fixed-focus lenses** combined with **global shutter sensors** to effectively eliminate the jelly effect and image distortion commonly seen in rolling shutter devices, ensuring precise and reliable optical measurements.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FHekHP3TcamJguQHQVxCT%2Fimg003.png?alt=media&#x26;token=3fae7682-2e03-44db-b4bf-b65c7a2968ce" alt=""><figcaption></figcaption></figure>

#### 1.2.2 Open and Collaborative Ecosystem Governance

**1.2.2.1 ROS2-Based Data Protocol**\
ROVR is dedicated to establishing an open, transparent, and interoperable data ecosystem that maximizes the value and accessibility of spatial data. Central to this vision is the adoption of a **ROS2 (Robot Operating System 2)**-based data protocol framework, which provides standardized, modular, and scalable capabilities for data communication and processing.

Leveraging ROS2 enables seamless integration with a wide array of robotic platforms, autonomous vehicles, and consumer electronics, facilitating real-time data exchange and fostering cross-industry collaboration. This approach significantly enhances data interoperability and scalability, supporting diverse applications including autonomous driving, VR/AR, and robotic perception.
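
As an illustration of what a ROS2-based protocol standardizes, a hypothetical message definition is sketched below. The message name and fields are assumptions for illustration, not ROVR's published schema:

```
# Hypothetical ROS2 message definition (e.g. rovr_msgs/msg/GeoPointCloud.msg)
# showing the kind of schema a ROS2-based data protocol standardizes.
std_msgs/Header header          # timestamp + coordinate frame id
sensor_msgs/PointCloud2 cloud   # LiDAR point cloud
sensor_msgs/NavSatFix fix       # RTK-corrected global position
float64[36] pose_covariance     # 6x6 pose covariance, row-major
```

Because the fields reuse standard `sensor_msgs` types, any ROS2-compatible robot, vehicle, or tool can consume the data without custom adapters.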

**1.2.2.2 Open and Autonomous Ecosystem**\
ROVR operates as a highly transparent and autonomous ecosystem, where any individual or organization can design, manufacture, or customize hardware devices within the project’s open-source framework. Participants can freely engage in data collection, storage, maintenance, and processing, with the flexibility to join or leave the ROVR network at their discretion.

**1.2.2.3 Community and Governance**\
The strength of the ROVR network lies in its vibrant community, composed of data contributors, consumers, developers, DIY enthusiasts, hardware manufacturers, and computing platform providers. Together, they drive innovation and ensure the ecosystem’s sustainable growth.

While decentralized and open, ROVR is governed by the **ROVR Foundation Ltd.**, which oversees software and hardware development, data quality assurance, network operations, and community governance. The Foundation facilitates community engagement through **R2FCs (ROVR Requests for Comments)**, enabling transparent decision-making and continuous improvement.

**1.2.2.4 Privacy and Data Security**\
Privacy and data security are fundamental to ROVR’s ethos. Data is typically structured and anonymized to safeguard individual privacy—for example, labeling objects as "pedestrians" with attributes such as speed and direction, without capturing personally identifiable details. This balance ensures data utility while protecting user confidentiality.

### 1.3 Community Contributor Participation

***

The ROVR network offers diverse and accessible participation pathways for community contributors, fostering an open and efficient ecosystem.

**1.3.1 Data Collection Using “Smartphone + TarantulaX”**

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FECkBFABMqj0GfgqMbXt9%2Fimage.png?alt=media&#x26;token=8a28aad0-b78a-465f-8b72-eff847334b87" alt="" width="375"><figcaption></figcaption></figure>

The TarantulaX Mini is a highly integrated, compact sensor system that includes an enhanced RTK (Real-Time Kinematic) positioning module, an Inertial Measurement Unit (IMU), and a Web3 identity and data validation module. It connects to smartphones via Bluetooth and leverages the phone’s camera to capture road imagery, process and classify data, and upload it to earn $ROVR tokens.\
ROVR supports most mainstream smartphones and continuously optimizes compatibility across different models. However, due to variations in camera optics, AI processing power, and encryption capabilities, some devices may face compatibility limitations.

{% embed url="https://youtu.be/g-6CIWREDmQ" %}

{% embed url="https://youtu.be/RaayNuy6xsE" %}

**1.3.2 Data Collection Using the Professional 3D LiDAR Device “LightCone”**

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FZLpovdxYthIzfoHdMGvd%2Fimage.png?alt=media&#x26;token=f794bede-adb7-4160-9b29-0d3a1d20851d" alt="" width="375"><figcaption></figcaption></figure>

The LightCone device features a multi-channel LiDAR system (120+ channels, 200-meter range), enabling real-time, precise 3D environmental sensing. It also integrates automotive-grade navigation systems and large-aperture (>130 mm) high-precision measurement antennas.\
LightCone connects to smartphones via Bluetooth or Wi-Fi, with the phone serving as a control terminal that must remain online for device management.

{% embed url="https://youtu.be/OwrU5Do2BuU" %}

Whether using TarantulaX or LightCone, vehicle speed during data collection must be maintained between 10 km/h and 120 km/h; data collected outside this range will not qualify for rewards. Rewards are calculated based on the kilometers of quality data collected.\
Quality data is defined as coverage of paved roads with clearly marked lanes and traffic elements, meeting road network classification Level 8 or above, including highways, urban expressways, ramps, and city streets.
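
The eligibility rule above can be sketched as a small filter over trip segments. The segment format and function name are illustrative, not ROVR's actual API:

```python
# Minimal sketch of the reward-eligibility rule described above: only
# kilometers driven between 10 and 120 km/h count toward quality data.
# The (distance_km, avg_speed_kmh) segment format is an assumption.

MIN_SPEED_KMH, MAX_SPEED_KMH = 10.0, 120.0

def qualifying_km(segments):
    """segments: iterable of (distance_km, avg_speed_kmh) tuples.
    Returns total kilometers eligible for rewards."""
    return sum(d for d, v in segments if MIN_SPEED_KMH <= v <= MAX_SPEED_KMH)

trip = [(1.25, 45.0), (0.5, 5.0), (3.0, 130.0), (2.25, 95.0)]
print(qualifying_km(trip))  # 3.5 -> only the 45 km/h and 95 km/h segments count
```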

**1.3.3 Deploying R-Node Nodes and Staking $ROVR Tokens (Planned, Coming Soon)**

R-Node nodes serve as data storage and processing points within the ROVR network. Community contributors can deploy nodes that meet the necessary hardware, software, and network specifications, and stake $ROVR tokens to participate in the ecosystem, earning base rewards as well as additional workload-based incentives.\
The R-Node program is scheduled to launch in the project’s second phase, with initial stages relying primarily on traditional cloud servers and object storage services (OSS) for data management.

### 1.4 Data **Storage, Processing, and Utilization**

***

Rovr Labs, Inc. serves as the primary data processing entity within the ROVR network, responsible for cleansing, structuring, and transforming raw sensor data into high-definition maps and other premium data products for commercial use and distribution.

For clients such as OEMs, Tier 1 suppliers, or other industry partners, Rovr Labs, Inc. also offers tailored mapping solutions and customized data services to meet specific technical or business needs.

In the near future, ROVR will gradually open data reprocessing capabilities to the broader ecosystem. Qualified third-party organizations or individual developers will be able to perform secondary processing on ROVR’s open data and commercialize the results. This decentralized, permissionless model will become a key pillar of the ROVR ecosystem, unlocking new layers of data value and fostering a vibrant, scalable data economy.

### 1.5 **Project Governance**

***

The ROVR project is jointly operated by two main entities: **ROVR Foundation Ltd.** and **Rovr Labs, Inc.**

**Responsibilities of ROVR Foundation Ltd.:**

* Maintain the integrity of the ROVR tokenomics and ensure a sustainable project ecosystem
* Provide open-source hardware reference designs and software frameworks
* Oversee global data collection and decentralized storage infrastructure
* Lead the technical roadmap and ensure consistent innovation and development principles

**Responsibilities of Rovr Labs, Inc.:**

* Transform raw data into high-definition maps or other valuable data products and deliver them to commercial clients
* In the near future, the role of productizing and monetizing data will be gradually opened to third parties within the ecosystem, promoting a more decentralized and scalable development model

### 1.6 Compliance and Data Localization Principles

***

ROVR places paramount importance on data privacy and regulatory compliance. While the collection and utilization of public road map data are generally lawful in most countries and regions, certain jurisdictions impose strict restrictions due to concerns over privacy, national security, and commercial competition. Notably, in some countries and regions (Mainland China, North Korea, Cuba, Iran, Venezuela, Russia, Belarus, Syria, and Ukraine), unauthorized surveying activities are considered serious offenses. Consequently, the use of ROVR hardware devices is strictly prohibited in these locations; any data generated therein will not be accepted by the ROVR network, and contributors will not receive any rewards.

In addition to legal compliance, ROVR adheres to stringent **data localization principles**, respecting the legal frameworks governing data storage and transmission in each jurisdiction. Key aspects include:

* **Local Storage Preference:** Data should preferably be stored on compliant servers located within the country or region where the data is generated or where the user resides, minimizing legal risks associated with cross-border data transfers.
* **Secure Cross-Border Transfers:** When cross-border transmission of data is unavoidable, ROVR implements robust encryption and security protocols to ensure the confidentiality and integrity of data during transit.
* **Sensitive Data Protection:** For sensitive datasets, ROVR applies rigorous anonymization and de-identification techniques in accordance with local laws to prevent disclosure of personal information or proprietary business details.
* **Global Compliance Collaboration:** ROVR actively collaborates with partners worldwide to establish a compliant, secure, and efficient data storage and processing ecosystem that meets diverse regulatory requirements and industry standards.

Given ROVR’s decentralized nature, it is imperative that community contributors proactively familiarize themselves with applicable local laws and regulations. Contributors are responsible for ensuring their data collection and usage practices comply with their jurisdiction’s legal framework to mitigate legal risks and avoid unnecessary complications.

ROVR is committed to fostering an open, compliant, and responsible global data ecosystem that balances innovation with legal adherence, thereby supporting the sustainable advancement of intelligent transportation and artificial intelligence technologies.

## 2. Data Production and Hardware Technologies

### 2.1 TarantulaX

***

**TarantulaX** primarily functions as a high-precision RTK (Real-Time Kinematic) receiver, enhancing positioning accuracy while ensuring data integrity and security through cryptographic measures.

**Key Functions and Workflow:**

* **RTK Positioning:**\
  TarantulaX connects via a smartphone to NTRIP (Networked Transport of RTCM via Internet Protocol) services provided by RTK base station operators (e.g., GEODNET) to acquire real-time correction data and satellite ephemeris information from nearby RTK base stations. It performs real-time position corrections at one-second intervals, typically improving smartphone positioning accuracy to within 1 centimeter.
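
NTRIP is essentially HTTP: the client opens a connection to a caster and requests a correction stream from a mountpoint. A minimal sketch of the request an NTRIP client sends is below; the host, mountpoint, and credentials are placeholders, with real values supplied by the RTK service:

```python
# Sketch of the HTTP-style request an NTRIP client sends to a caster to
# start receiving an RTCM correction stream. Host, mountpoint, and
# credentials are placeholders, not real GEODNET endpoints.
import base64

def build_ntrip_request(host: str, mountpoint: str, user: str, password: str) -> str:
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Ntrip-Version: Ntrip/2.0\r\n"
        "User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    )

req = build_ntrip_request("caster.example.com", "MOUNTPOINT", "user", "pass")
```

After the caster accepts the request, it streams binary RTCM messages that the receiver applies to correct its position fix.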

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FEbd38qveeghNpCKayFjP%2Fimage.png?alt=media&#x26;token=f8cdf0b4-021b-48db-bdfc-58129c967a86" alt="" width="375"><figcaption></figcaption></figure>

* **Data Authentication and Security:**\
  To prevent malicious replay attacks or forgery, TarantulaX employs an unreadable hardware private key to digitally sign the high-precision location data it generates, ensuring the authenticity and validity of the data.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FmMMQgzZweZaEMBovkEl5%2Fimg006.jpg?alt=media&#x26;token=040cd1cf-86ee-4ebe-9847-9a1892b7dc8c" alt="" width="375"><figcaption></figcaption></figure>

* **Future Upgrades:**\
  The technical details and functionalities described here are subject to continuous improvement and may evolve with future product versions.

### 2.2 LightCone

***

**LightCone** is a significantly more advanced and comprehensive system than the TarantulaX Mini. Designed for professional-grade 3D spatial data collection, LightCone integrates multiple high-performance components to deliver industry-leading precision, reliability, and processing power.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FpNUknLHeDqRePEsf9LuG%2Fimage.png?alt=media&#x26;token=baf41727-1493-4931-9a23-556869b60346" alt=""><figcaption></figcaption></figure>

#### System Overview

LightCone combines multi-channel LiDAR, high-resolution cameras, integrated navigation systems, and AI computing modules into a robust, field-ready device. It is capable of real-time 3D perception of complex environments and is built to meet the demanding requirements of automotive and mapping-grade applications.

#### Key Specifications

*(Note: Specifications are subject to change in future hardware revisions. This document may not reflect the most up-to-date information.)*

#### Hardware Specifications

| **Item**                 | **Details**                                |
| ------------------------ | ------------------------------------------ |
| **LiDAR Type**           | Automotive-grade solid-state LiDAR         |
| **Number of Beams**      | 126 beams                                  |
| **Ranging Accuracy**     | ±5 cm                                      |
| **Eye Safety**           | IEC 60825-1 Class 1 (Eye-Safe)             |
| **Measurement Range**    | Up to 200 m (180 m @10% NIST reflectivity) |
| **Blind Spot**           | ≤ 0.5 m                                    |
| **Field of View (FOV)**  | 120° (horizontal) × 25° (vertical)         |
| **Angular Resolution**   | 0.2° × 0.2°                                |
| **Camera**               | Automotive-grade HD camera                 |
| **Positioning Accuracy** | ±3 cm (global), powered by GEODNET RTK     |
| **Data Storage**         | Onboard 1 TB SSD                           |

***

#### IMU & Positioning

| **Item**                         | **Details**                                                          |
| -------------------------------- | -------------------------------------------------------------------- |
| **IMU Bias Instability (Allan)** | <p>Gyro: XY < 2.6°/hour, Z < 2.1°/hour<br>Accelerometer: < 45 µg</p> |
| **Random Walk (Angle / Velocity)** | 0.13°/√h, 0.050 m/√h                                                 |

***

#### Processing Power

| **Item**           | **Details**          |
| ------------------ | -------------------- |
| **Edge Processor** | High-performance SoC |
| **NPU**            | 6 TOPS @ int8        |
| **GPU**            | ARM Mali-G610MC4     |

***

#### Environmental Specs

| **Item**                  | **Details**                     |
| ------------------------- | ------------------------------- |
| **Operating Temperature** | -45°C to 85°C                   |
| **Storage Temperature**   | -65°C to 105°C                  |
| **Protection Rating**     | IP67 (Waterproof and Dustproof) |

***

#### Physical Dimensions

| **Item**   | **Details**                                            |
| ---------- | ------------------------------------------------------ |
| **Weight** | <p>4.6 kg (device only)<br>7.4 kg (with packaging)</p> |
| **Size**   | 251.6 mm × 204.3 mm × 96.2 mm                          |

## 3. High-definition Mapping and 3D Data-related Technologies

### 3.1 HD Maps

***

HD maps refer to maps with high spatial and temporal resolution. They are primarily used in scenarios that require precise positioning, such as autonomous driving.

These maps include not only basic geographical elements (e.g., road and building locations), but also detailed semantic information such as lane markings, traffic signals, signs, road curvature, elevation, and more.

To produce HD maps, high-precision sensors like LiDAR, cameras, and GNSS systems are used to collect data. The raw data is then processed using mapping algorithms to generate accurate and reliable map products.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2Fs4ewSYny0iF9Keic3zfJ%2Fimage.png?alt=media&#x26;token=50371bec-dda9-4238-be27-ee11a29a8e27" alt=""><figcaption><p><a href="https://rovr.network/map/">https://rovr.network/map/</a></p></figcaption></figure>

In terms of data content, HD maps typically consist of three main components:

**3.1.1 Road Model**

This includes information at the road level, such as road geometry, road attributes (such as passability), lane information within the road, and information about objects on the road.

**3.1.2 Lane Model**

This contains information at the lane level, such as lane geometry, lane connectivity, lane attributes (such as direction of travel, whether it can be crossed), and the relationship with the object model.

**3.1.3 Object Model**

This encompasses the geometry, orientation, category, and corresponding relationships between objects and lanes.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2Ffcp9hEHThDPtQQMfLY7E%2Fimg010.png?alt=media&#x26;token=881c4bd7-6535-41dc-a73d-c22d45e1ff9f" alt=""><figcaption></figcaption></figure>

### 3.2 HD Map Mapping

***

For HD mapping, we use professional equipment including solid-state LiDAR, 4K cameras, and high-precision positioning devices. Parameter calibration is the first step: internal sensor parameters are calibrated during manufacturing, while external parameters are automatically calibrated through an integrated program, requiring no user intervention.

After data collection, local mapping results are aggregated in the cloud. HD maps typically require an absolute accuracy of less than 50 cm and a relative accuracy within 20 cm over 100 meters.

All collected data is used to automatically generate HD maps. Point cloud data provides centimeter-level ranging, image data offers rich semantic information, and precise pose data enhances overall map consistency and positioning accuracy. Due to the high precision standards demanded by OEMs, manual editing and review are still necessary after automatic generation.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FoND25ytUvmEOybYcTweP%2Fimg011.png?alt=media&#x26;token=3cdf90b0-1b55-4092-a305-83d55b896248" alt=""><figcaption></figcaption></figure>

Starting in Q3 2024, we will use the mini device TX for HD map mapping, with online mapping supported by our app. Currently, the data transmission rate is 100 KB/km and is expected to decrease to 50 KB/km by Q2 2025.

After Q1 2025, we will release our professional device LC, which will support online high-precision map mapping.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FRf6YqBFLyxEbMNTZuME3%2Fimage%203.png?alt=media&#x26;token=f393d611-83e8-49c7-95a8-a7bea0c42e25" alt=""><figcaption></figcaption></figure>

### 3.3 HD Map Updating

***

HD map updating begins with 3D reconstruction of consecutive images using stereo vision techniques.

However, due to latency during camera exposure, the visual 3D reconstruction results are not rigidly aligned with the existing HD map point cloud. To address this, we simulate the imaging process to project the HD map point cloud onto the image coordinate system and match it with real-time captured images—primarily focusing on edge features—to optimize the camera’s intrinsic and extrinsic parameters.

Optimizing the camera parameters is necessary because the smartphone mount cannot maintain a rigid connection over extended periods, causing changes in the smartphone’s extrinsic parameters. Additionally, the presence of the windshield may alter the relative position between the smartphone lens and the windshield, affecting the intrinsic parameters of the entire measurement system.
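
The projection step described above reduces, at its core, to a pinhole camera model mapping 3D points in the camera frame to pixel coordinates. A minimal sketch follows; the intrinsic values used in the example are illustrative placeholders:

```python
# Pinhole-camera sketch of projecting an HD-map point (expressed in the
# camera frame) to image coordinates, the core of the matching step
# described above. The fx, fy, cx, cy intrinsics are assumed examples.

def project_point(x: float, y: float, z: float,
                  fx: float, fy: float, cx: float, cy: float):
    """Project a 3D point in the camera frame to a pixel (u, v).
    Returns None for points behind the camera (z <= 0)."""
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A point 10 m ahead and 1 m to the right, with assumed intrinsics:
print(project_point(1.0, 0.0, 10.0, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0))
# (1060.0, 540.0)
```

In the real pipeline, the residual between such projected map points and detected image edges is what drives the optimization of the intrinsic and extrinsic parameters.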

Once the optimized camera parameters are obtained, depth is estimated for each pixel. Using monocular depth estimation, accurate depth information is generated and compared against the existing HD map data to complete the map updating process.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2Fg2MViKOeLdEuNwiwGuPv%2Fimg013.png?alt=media&#x26;token=f9cae99e-110b-454d-80bb-5566549e78cd" alt=""><figcaption></figcaption></figure>

### 3.4 **4D Data Generation**

***

#### 3.4.1 Controllable Scene Editing

In the era of LLMs, data is a precious resource, and its importance is particularly prominent in the field of vision. Unlike Sora, our 4D data generation model is grounded in the real world and allows for precise scene editing (such as defining accurate three-dimensional positions, dimensions, orientations, and velocities for a vehicle), which is crucial for training large models for tasks like autonomous driving. Below is a demo video based on the KITTI dataset, in which we replaced all vehicles of different types with the same vehicle model.

{% embed url="https://youtu.be/SVX2RH4Ca7s" %}

Furthermore, we have obtained an LOI from a Tier 2 supplier. However, for commercial reasons, the video we are showcasing is generated using the KITTI dataset, an open-source autonomous driving dataset.

#### 3.4.2 **Monocular RGBD to Multi-View Generation**

Using RGBD data captured by LightCone’s monocular camera combined with LiDAR (with overlapping fields of view), we leverage deep learning techniques to generate multi-view (multi-camera) images. This process synthesizes and enhances visual information from multiple perspectives, enabling richer scene understanding and more accurate 3D perception.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FwzsJXIbn5leaNGivii3W%2Freconstruction.gif?alt=media&#x26;token=29e78fc5-de1a-4054-a1b9-a7c98b92f11f" alt=""><figcaption></figcaption></figure>

> “Thank you very much for your strong support for the binocular data generation project based on AIGC technology. After evaluation by the R\&D team, the generated data and annotation results provided by your company meet the training requirements of the binocular perception model. I am sending this email to confirm our cooperation. Please start the formal development work from now on. The specific amount and deliverables shall be subject to the commercial contract.”
>
> — LOI email sent to us by a Tier 2 company

#### 3.4.3 **Data Usability Validation and Real-World Deployment**

The data we generated has completed usability validation, and the client is already using it in a real production environment.

{% embed url="https://youtu.be/JbVvQp_2V9w" %}

## 4. Anti-Cheating and Trustless Network

***

Cheating and spoofing are common issues in many DePIN projects. ROVR has learned from extensive experience and employs multiple anti-cheating and anti-spoofing technologies, eliminating the need for a centralized denylist. In the ROVR network, all nodes are designed as trustless models, and the system only accepts verified data, ensuring authenticity and reliability.

### 4.1 Digital Signature System

All ROVR hardware devices integrate a specialized encryption chip that securely stores a private key. The key cannot be accessed externally and is used to digitally sign the device's output data.

When outputting data, devices including TarantulaX and LightCone use the encryption chip to sign their data. The transmitted data package contains:

* The data collected by the device
* The digital signature fingerprint
* The public key of the encryption chip

The backend system verifies the digital signature using the public key to ensure data authenticity. Data failing signature verification will be automatically discarded to prevent forged data from entering the network.
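
The sign-then-verify flow can be sketched as follows. Real ROVR devices use an asymmetric key held inside a secure element; to keep this example self-contained with only the standard library, HMAC-SHA256 stands in for the signature, so the "verification key" below is a symmetric stand-in rather than a true public key:

```python
# Structural sketch of the sign/verify flow described above. HMAC-SHA256
# is used here purely as a self-contained stand-in for the asymmetric
# signature produced by the device's encryption chip.
import hashlib
import hmac

def sign_package(data: bytes, device_key: bytes) -> dict:
    """Package data with its signature, as the device would before upload."""
    return {
        "data": data,
        "signature": hmac.new(device_key, data, hashlib.sha256).hexdigest(),
    }

def verify_package(pkg: dict, verification_key: bytes) -> bool:
    """Backend-side check; packages failing verification are discarded."""
    expected = hmac.new(verification_key, pkg["data"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, pkg["signature"])

key = b"device-secret"
pkg = sign_package(b"lat=48.85,lon=2.35,t=1718000000", key)
print(verify_package(pkg, key))  # True
pkg["data"] = b"forged"
print(verify_package(pkg, key))  # False -> discarded by the backend
```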

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FpHWGk20GuXkIY57EKnvH%2Fimage%204.png?alt=media&#x26;token=99b506f7-4b67-428d-9602-6c1d7396c913" alt=""><figcaption></figcaption></figure>

### 4.2 Satellite Observation Files

Spoofing device GPS latitude and longitude information is a common attack in DePIN projects, often executed by forging NMEA messages to perform replay attacks.
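
Part of why NMEA is so easy to forge is that each sentence is protected only by a one-byte XOR checksum over the characters between `$` and `*`, which authenticates nothing:

```python
# NMEA sentences carry only a one-byte XOR checksum, which any attacker
# can recompute after editing the payload -- it provides integrity
# against transmission errors, not authenticity.

def nmea_checksum(payload: str) -> str:
    """XOR of all payload bytes, as a two-digit uppercase hex string."""
    value = 0
    for ch in payload:
        value ^= ord(ch)
    return f"{value:02X}"

print(nmea_checksum("AB"))  # "03"  (0x41 ^ 0x42)
```

By contrast, raw observation and ephemeris files encode satellite geometry that must stay consistent with independent base-station data, which is what makes them hard to fabricate.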

ROVR hardware devices provide not only GPS position data but also raw satellite observation files and precise ephemeris files. These files include:

* GPS timestamps
* Number and positions of observed satellites
* Satellite velocities and headings
* Signal quality and geometry indicators (e.g., DOP values)

This data is cross-checked with parameters from nearby RTK base stations (currently provided by GEODNET), greatly increasing spoofing complexity.

ROVR supports a quad-constellation GNSS system—GPS, GLONASS, Galileo, and BeiDou—with precise ephemeris files for all constellations. Forging such complex and cross-verified data is virtually impossible.

* Easily spoofed NMEA messages

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FQS5EYjsfaIqucWi6GaCl%2Fimage%205.png?alt=media&#x26;token=506a6871-0d5a-4830-b43d-6b3dc439ec04" alt=""><figcaption></figcaption></figure>

* Imagine forging this 250,000-line file:

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FAfNbOFenxVRtZ8tGDoG6%2Fimage%206.png?alt=media&#x26;token=4b8b3599-4ba2-40b2-82e4-83e5d9bc1d0d" alt=""><figcaption></figcaption></figure>

### 4.3 Sensor Fusion

ROVR hardware integrates multiple sensors whose data synergistically validate each other.

For example, when a vehicle passes over a speed bump, the image sensor, IMU, LiDAR, and RTK positioning system simultaneously detect vibrations and exhibit consistent dynamic responses.

Malicious actors attempting to forge synchronized data across multiple sensors to mimic realistic motion would face extremely high technical challenges.

### 4.4 Stricter Penalties for Cheating

To maintain fairness in the network, ROVR enforces strict penalties for cheating:

| Offense Number | Penalty                                                 |
| -------------- | ------------------------------------------------------- |
| 1st Offense    | Account and device frozen for 3 months                  |
| 2nd Offense    | Account and device frozen for 999 years (permanent ban) |

ROVR takes cheating very seriously and is committed to preserving a fair and secure network environment.

## 5. $ROVR **Allocation**

***

* Total fixed supply: **10 billion**
* Issuance Network: Solana
* Allocation Structure:
  * 51% as rewards for contributors who participate in building the ROVR Network
  * 20% for project founding team members and future global core contributors, supporting project R\&D and system construction
  * 20% for project investors
  * 9% for the ROVR ecosystem, including liquidity, market operations, promotions, and other aspects essential for sustaining ecosystem development

| **Category**               | **% of Supply** | **Token Amount** |
| -------------------------- | --------------- | ---------------- |
| **Community Contributors** | 51.00%          | 5,100,000,000    |
| **Team**                   | 20.00%          | 2,000,000,000    |
| **Investors**              | 20.00%          | 2,000,000,000    |
| **Ecosystem**              | 9.00%           | 900,000,000      |

* Vesting Schedule:
  * **51% Mining Rewards**: Released progressively based on users’ **mapping mileage and data quality**.
  * **20% Team Allocation**: 9-month cliff, followed by 36-month linear vesting.
  * **20% Investors**: 0.733% unlocked at TGE; remaining tokens locked for 6 months, then linearly vested over 24 months.
  * **9% Ecosystem**: 0.018% unlocked at TGE; remaining tokens vested linearly over 12 months.
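
The cliff-plus-linear pattern used for the team allocation can be sketched as a small function. The function name and whole-month granularity are illustrative choices, not part of the official tokenomics specification:

```python
# Sketch of the team vesting schedule stated above: a 9-month cliff,
# then 36 months of linear vesting over the 2B-token team allocation.
# Month granularity and function name are illustrative assumptions.

def team_unlocked(months_since_tge: int, total: float = 2_000_000_000) -> float:
    """Tokens unlocked a given number of whole months after TGE."""
    CLIFF, LINEAR = 9, 36
    if months_since_tge < CLIFF:
        return 0.0
    vested_months = min(months_since_tge - CLIFF, LINEAR)
    return total * vested_months / LINEAR

print(team_unlocked(8))   # 0.0 (still inside the cliff)
print(team_unlocked(27))  # 1000000000.0 (halfway through linear vesting)
print(team_unlocked(45))  # 2000000000.0 (fully vested)
```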

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FKv4TImCXdcqdBiJSWePF%2F4-1.png?alt=media&#x26;token=1946ab43-0290-4cfa-8262-0e8c62aa9f14" alt=""><figcaption></figcaption></figure>

### 5.1 $ROVR Calculation

***

#### **Base Rewards**

* TarantulaX (TX): **1.6 $ROVR per km**
* LightCone (LC): **16 $ROVR per km**
* **Starting from TGE, April 26, 2025, the base rewards for $ROVR will be halved every year**

| Year | TarantulaX (per km) | LightCone (per km) |
| ---- | ------------------- | ------------------ |
| 1    | 1.6 $ROVR           | 16 $ROVR           |
| 2    | 0.8 $ROVR           | 8 $ROVR            |
| 3    | 0.4 $ROVR           | 4 $ROVR            |
| 4    | ……                  | ……                 |
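
The halving table above follows a simple formula, with year 1 being the first year after TGE (April 26, 2025):

```python
# Yearly halving of base rewards, as described above.
BASE_REWARD_PER_KM = {"TX": 1.6, "LC": 16.0}

def base_reward(device: str, year: int) -> float:
    """Base $ROVR per km for a device ("TX" or "LC") in a post-TGE year."""
    return BASE_REWARD_PER_KM[device] / (2 ** (year - 1))

print(base_reward("TX", 1))  # 1.6
print(base_reward("LC", 3))  # 4.0
```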

#### Data Quality Tiers & Reward Ratio

* The quality of the collected data (e.g., clarity, brightness, RTK accuracy) will affect the amount of $ROVR received. The final assessment is categorized into five tiers:

| Data Quality Tiers | Reward Ratio |
| ------------------ | ------------ |
| A - Excellent      | 100%         |
| B - Good           | 75%          |
| C - Average        | 50%          |
| D - Below Average  | 25%          |
| F - Failing        | 0            |

\*"F" will be assigned to any driver using **multiple TX or multiple LC devices** on a single vehicle.

\*Use of **1 TX and 1 LC device simultaneously on the same vehicle** is permitted.
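
The tier table and the "F" override for multiple same-type devices can be sketched as a lookup:

```python
# The quality-tier table above as a lookup. The multiple-device override
# follows the note on same-type devices on a single vehicle.
TIER_RATIO = {"A": 1.00, "B": 0.75, "C": 0.50, "D": 0.25, "F": 0.0}

def reward_ratio(tier: str, multiple_same_type_devices: bool = False) -> float:
    """Fraction of the base reward paid for a given quality tier."""
    if multiple_same_type_devices:  # e.g. two TX units on one vehicle
        return TIER_RATIO["F"]
    return TIER_RATIO[tier]

print(reward_ratio("B"))                                   # 0.75
print(reward_ratio("A", multiple_same_type_devices=True))  # 0.0
```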

#### Weekly Road Revisit Rewards Decay

* Each road's rewards are **reduced by 50%** for every **2 additional collections**.

| Road Visits | Reward Ratio |
| ----------- | ------------ |
| 1st - 2nd   | 100%         |
| 3rd - 4th   | 50%          |
| 5th - 6th   | 25%          |
| 7th - 8th   | 12.5%        |
| ……          | ……           |

* Road Revisits are calculated globally.
* **Road Revisits Records Reset Weekly:** Revisit records will reset on the Monday of each week at 00:00 GMT.
* $ROVR will be transferred within **1 week** after data upload; **data older than 2 weeks won't be accepted**.
* This mechanism will take effect on **July 1st, 2025 at 00:00 GMT**

\*Tip: Before driving, check the ROVR App map and follow the guidance to maximize your rewards.
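
The revisit decay above reduces to a single formula, with the visit counter resetting each Monday at 00:00 GMT:

```python
# Weekly revisit decay as described above: rewards halve for every two
# additional collections of the same road within one week.

def revisit_ratio(visit_number: int) -> float:
    """Reward ratio for the Nth collection of a road within one week."""
    return 0.5 ** ((visit_number - 1) // 2)

print([revisit_ratio(n) for n in range(1, 9)])
# [1.0, 1.0, 0.5, 0.5, 0.25, 0.25, 0.125, 0.125]
```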

#### Rewards Multiplier Based on Customer Requirements

* The additional **$ROVR reward multiplier** will be set based on the requirements of future data customers, using time and spatial range (NDS level-13 grid) as criteria.

<figure><img src="https://1967514283-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FCrCaZAKtrNMj1kPxghZg%2Fuploads%2FvbldKoI1k7qVDpmjlKcp%2F4-2.png?alt=media&#x26;token=d6f98d3d-ffd7-42aa-98a7-b33969284828" alt=""><figcaption></figcaption></figure>

### 5.2 $ROVR Burn Mechanism

***

* $ROVR operates on a **permanent burn mechanism**, with buyback funds sourced from data consumers purchasing ROVR's data products:
  * 60% is used to buy back circulating $ROVR and permanently burn it.
  * 20% covers RTK service fees—used to buy back and permanently burn $GEOD from GEODNET.
  * 20% supports ROVR's operational expenses.
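
The revenue split above can be expressed directly; the function name is illustrative:

```python
# The buyback split described above: for each unit of data revenue,
# 60% buys back and burns $ROVR, 20% buys back and burns $GEOD
# (covering RTK service fees), and 20% funds operations.

def split_revenue(revenue: float) -> dict:
    return {
        "rovr_buyback_burn": revenue * 0.60,
        "geod_buyback_burn": revenue * 0.20,
        "operations": revenue * 0.20,
    }

print(split_revenue(100_000.0))
# {'rovr_buyback_burn': 60000.0, 'geod_buyback_burn': 20000.0, 'operations': 20000.0}
```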
