Journey into ROVR - Why is Light Cone needed?
Hello everyone, welcome to the third article in our Journey into ROVR series—Why is Light Cone needed?
After the release of the previous article, we received a great deal of feedback from community members, investors, and other partners, covering many different aspects.
This article will provide a detailed overview of the design purpose and data application areas of ROVR's professional-grade hardware product—LightCone, and explain why a specialized hardware product like LightCone is necessary.
LightCone (LC) is ROVR's second hardware product, featuring a tri-band GNSS antenna that supports RTK services, automotive-grade LiDAR, a 4K HD camera, a high-precision IMU, and the RK3588 AI chip.
Here are the specifications for LightCone:
LiDAR Detection Range: 200 m @ 10% reflectivity (NIST)
LiDAR Resolution: 0.2° × 0.2°
Accuracy: < 5 cm
Camera: 4K HD
IMU bias (XYZ): 0.13°/√h, 0.050 m/√h
Operating Temperature: -45℃ ~ 85℃
Storage Temperature: -65℃ ~ 105℃
Waterproof Rating: IP67
High-precision positioning: The LC features a high-precision IMU and RTK service (provided by our strategic partner GEODNET), enabling centimeter-level positioning accuracy.
Automotive-grade LiDAR: Using automotive-grade sensors, LC ensures system stability and reliability while providing centimeter-level ranging accuracy of 2 cm @ 1σ, far surpassing the ranging capabilities of pure vision systems and thus greatly expanding the potential use cases of LC data.
High-definition camera: While the LiDAR provides high-precision ranging and geometric information, the pre-calibrated high-definition camera enriches the LiDAR point cloud with additional semantic information.
This is one of the most frequently asked questions from community members and potential investors. What are the fields in which LC data can be applied, who are the customers, and why do they need LiDAR? I will answer these questions one by one.
Part 1: In which fields can LC data be applied, and who are the customers?
LC data is divided into raw data and processed data:
Raw data and its derivatives (i.e., secondary data products not processed by ROVR Inc.) can be used for perception and end-to-end model training in autonomous driving and robotics.
Processed data (i.e., HD map data produced through processing by ROVR Inc.) can be used in the autonomous driving field.
The corresponding customers include global automakers, autonomous driving technology companies, robotics companies, simulation platform companies, and data labeling service providers. Currently, we have received 3 LOIs, 1 POC contract, and several potential partnership letters of intent (which will be disclosed to users after obtaining customer consent).
Part 2: Why is LiDAR needed?
This is one of the most frequently asked questions by potential investors. LiDAR, as an additional sensor, increases the cost of the device. Tesla does not use LiDAR, so why does ROVR believe LiDAR is essential?
In fact, the question above conflates two distinct processes. Let me explain why ROVR's target customers need LiDAR data.
For deep learning systems widely used in robotics and autonomous driving, training and inference are the two core processes of neural networks. They correspond to the model's learning phase and actual application phase, respectively. While both processes involve neural network operations, their goals, procedures, and calculations are different.
Training is the core process of a neural network's "learning." During training, the neural network adjusts its internal parameters (weights and biases) using a set of known inputs and target outputs (labels), allowing the model to better fit the data and ultimately make accurate predictions for unseen data.
Forward Propagation: Input data passes through the layers of the neural network, where each layer computes weighted sums and processes them through activation functions, ultimately producing the prediction result. This step computes the predicted value of the neural network with its current weight settings.
Loss Calculation: The difference between the network's output and the true labels (target values) is measured using a loss function. For regression problems, Mean Squared Error (MSE) is commonly used, while for classification problems, Cross-Entropy Loss is common. The loss function's output represents the difference between the network's prediction and the actual labels, and the goal is to minimize this loss.
Back-propagation: Based on the output from the loss function, the gradient of each network parameter (weight and bias) with respect to the loss is calculated using the chain rule. Back-propagation propagates these gradient values backward through the network, from the output layer to the input layer.
Parameter Update: Using gradient descent or other optimization algorithms (such as Adam, RMSProp, etc.), the network parameters are updated. Each update aims to reduce the value of the loss function, bringing the model's prediction closer to the true labels.
Iteration (Epochs): The training process usually runs through multiple cycles (epochs). In each cycle, the network sees all of the training data and adjusts its weights to improve prediction accuracy. In each epoch, the loss is recalculated, gradients are back-propagated, and the weights are updated until the predefined stopping conditions are met (such as loss convergence or reaching the maximum number of iterations).
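The training loop described above can be sketched in a few lines of plain NumPy. The toy task (fitting y = 2x + 1 with a single linear layer, MSE loss, and vanilla gradient descent) is an illustrative assumption for this article, not ROVR's or any automaker's actual pipeline:

```python
import numpy as np

# Toy regression task: learn y = 2x + 1 with a single linear "layer"
# (weight w, bias b). All names and values here are illustrative.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(64, 1))        # training inputs
y = 2.0 * x + 1.0                                # target labels (ground truth)

w, b = 0.0, 0.0                                  # parameters (weight and bias)
lr = 0.1                                         # learning rate

def mse(pred, target):
    # Loss calculation: Mean Squared Error
    return float(np.mean((pred - target) ** 2))

losses = []
for epoch in range(100):                         # iteration over epochs
    pred = w * x + b                             # forward propagation
    loss = mse(pred, y)
    losses.append(loss)
    grad_w = float(np.mean(2 * (pred - y) * x))  # back-propagation (chain rule)
    grad_b = float(np.mean(2 * (pred - y)))
    w -= lr * grad_w                             # parameter update (gradient descent)
    b -= lr * grad_b
```

After the loop, the loss has shrunk and (w, b) approach the true values (2, 1), which is exactly the "fit the data, minimize the loss" cycle described above.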
Inference is the process by which a neural network applies its learned parameters (weights and biases) to make predictions on new, unseen data after the training phase is complete. This is also known as the prediction phase. The inference process is "static," meaning it does not involve updates to the weights, but instead computes based on the trained model.
Input Data: The new, unseen input data is provided to the trained neural network.
Forward Propagation: The data passes through the layers of the neural network, where each layer applies weighted sums and activation functions, ultimately producing the prediction. During inference, the network does not perform back-propagation or parameter updates but instead uses the weights obtained during training to perform calculations.
Output Result: The network outputs the prediction, such as a label for image classification, a predicted value for a regression problem, or any other relevant output depending on the model's task.
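The inference steps can likewise be sketched as a single forward pass with frozen weights. The two-layer classifier and its weight values below are made up for illustration; the point is that only forward propagation runs, with no gradients and no parameter updates:

```python
import numpy as np

# Inference sketch: a frozen two-layer classifier applies trained weights to
# new input. The weight values here are invented for illustration; in practice
# they come from the training phase and are never modified at inference time.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])   # learned weights (frozen)
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, 0.0], [0.0, 1.0]])
b2 = np.array([0.0, 0.0])

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def infer(x):
    h = relu(x @ W1 + b1)                   # forward propagation only:
    logits = h @ W2 + b2                    # no back-propagation, no updates
    return int(np.argmax(softmax(logits))) # output: predicted class label
```

Calling `infer(np.array([1.0, 0.5]))` simply runs the data through the layers and returns a class label, the "static" computation described above.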
Above, we have provided a brief introduction to the training and inference of deep learning models. The data from LC is used in the training of deep learning models, with the high-precision distance measurement information generated by the LiDAR serving as ground truth to optimize the model's parameters.
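As a rough sketch of how LiDAR ranging can serve as ground truth, the snippet below computes a supervision loss between a camera model's predicted depth map and LiDAR depth measurements. The array shapes, variable names, and the choice of an L1 loss are illustrative assumptions, not ROVR's actual training code:

```python
import numpy as np

def depth_loss(pred_depth, lidar_depth, valid_mask):
    """L1 loss between predicted depth and LiDAR ground-truth depth,
    computed only where the LiDAR returned a valid measurement."""
    diff = np.abs(pred_depth - lidar_depth)
    return float(diff[valid_mask].mean())

# Hypothetical 2x2 depth maps in meters (real maps would be much larger).
pred = np.array([[10.2, 5.1], [30.0, 7.9]])   # model's predicted depth
lidar = np.array([[10.0, 5.0], [0.0, 8.0]])   # LiDAR ground truth; 0 = no return
mask = lidar > 0                              # supervise only valid LiDAR pixels

loss = depth_loss(pred, lidar, mask)
```

Minimizing such a loss pulls the camera model's depth predictions toward the LiDAR's centimeter-accurate measurements, which is why LiDAR data is valuable for training even vision-only systems.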
The claim that "Tesla doesn't use LiDAR" refers to the inference phase, not the training phase. There are numerous images and videos online showing Tesla vehicles equipped with LiDAR for collecting training data, which indicates that Tesla does use, and must use, LiDAR data during the model training phase.
Therefore, we can conclude the following: Tesla uses LiDAR data during the training phase (which appears to be installed in positions very similar to LC), but does not use LiDAR during the inference phase.
Let's extend the discussion a bit. Do all Driver Support/Automated Driving vehicles avoid using LiDAR during the deep learning model inference phase?
First, a brief introduction to automated driving. According to the SAE J3016 definition, driving automation is divided into six levels, from Level 0 to Level 5. Based on whether the human driver must assume safety responsibility, these can be grouped into two categories: SAE Levels 0-2 correspond to Driver Support, where the user is responsible for safety; SAE Levels 3-5 correspond to Automated Driving, where the automated driving system, not the user, assumes safety responsibility.
Actually, we can make a simple distinction (though not rigorous):
Production vehicles with an "autonomous driving" system are considered Driver Support: in the event of an accident, the human driver assumes safety responsibility and the car manufacturer is not liable (excluding the L3 functionality offered in certain countries and regions by the Audi A8 and Mercedes-Benz S-Class, which is essentially unusable due to ODD restrictions).
All Robotaxis fall under Automated Driving, because in the event of an accident the Robotaxi company takes responsibility. Examples include Waymo, Cruise, and Baidu Apollo Go.
Whether LiDAR is used during the inference phase can be determined by whether the vehicle is equipped with LiDAR. Here are images of some production vehicles and Robotaxis. Production vehicles: Tesla follows a pure vision approach and does not equip LiDAR, while some brands, such as NIO, do equip LiDAR.
Robotaxi: Both Waymo and Baidu Apollo Go are equipped with multiple LiDARs.
Therefore, based on the current industry situation, Robotaxis widely use LiDAR, and it is also possible to use LiDAR during the inference phase of autonomous driving.
I do not intend to join the debate over technical approaches; the following is only my personal opinion. From the perspective of functional safety and of safety of the intended functionality (SOTIF), system redundancy is necessary, which in turn requires sensor redundancy. A pure vision system alone is therefore unlikely to meet the sensor redundancy requirement (i.e., if the camera fails, additional sensors are needed to compensate). Of course, for systems below L3, car manufacturers do not assume safety responsibility; they often use dozens of pages of electronic agreements to restrict the ODD of the driver-assistance system to the point of near non-use in order to avoid liability.
Thank you very much for taking the time to read our article.
Since we officially started collecting data at 00:00 GMT on Monday, September 2, 2024, our testing users have accumulated over 2.8 million km of data. We greatly appreciate everyone's help.
Meanwhile, the TarantulaX device is now publicly available, and we hope more friends can join us.
Regarding the production, testing, and sales plan for LC (which is also of great interest to everyone):
LC Testing: We plan to release 50 testing units to users in Q1 2025, with a testing period of 2 months.
LC Production: After completing global outdoor testing, we will make adjustments based on any issues identified. We expect to begin small-batch production (up to 1,000 units) of LC by the end of Q2 2025.
LC Sales: During the LC testing phase, we will open pre-orders for the LC devices, with a $500 deposit to gauge potential order volume. The pricing will be based on the collected order data. Once we reach 1,000 orders, the LC will be priced at approximately $2,000.
We look forward to your participation in accelerating ROVR's growth and achieving greatness together.
🔸 ROVR Network (https://rovrlabs.io/)
💬 X: ROVR (https://x.com/ROVR71776)
🗯 Discord (https://discord.com/invite/RjV3E3u4F2)
Thank you!
ROVR Team