Pipeline of Non-Glare-NeRF: (a) Results rendered by the Instant-NGP (NeRF) model without pre-processing. (b) Results rendered by the Instant-NGP (NeRF) model with pre-processing, which detects glare regions and fills them by inpainting.
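A minimal sketch of this pre-processing step, assuming OpenCV, a simple luminance threshold as the glare detector, and Telea inpainting (the actual detector used in the pipeline may differ):

```python
import cv2
import numpy as np

def remove_glare(image_bgr: np.ndarray, threshold: int = 240) -> np.ndarray:
    """Detect over-exposed (glare) pixels and fill them by inpainting."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Glare mask: pixels whose luminance exceeds the threshold.
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # Slightly dilate the mask so inpainting also covers glare boundaries.
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)
    # Fill the masked region from surrounding pixels (Telea's method).
    return cv2.inpaint(image_bgr, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```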
Data Pipeline for Depth Evaluation: Using the iPhone's LiDAR sensor and RGB camera, a dataset containing accurate depth and color information was constructed. (a) Data collection: Scenes were captured with an iPhone, and RGB images and camera poses were extracted for each frame. (b) Acquiring depth information: As shown in the figure, depth information obtained from the LiDAR sensor was combined with the RGB images to create the Depth Ground Truth. A Depth Completion technique was applied to fill the gaps in the LiDAR data.
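A minimal sketch of such gap filling, assuming missing LiDAR returns are encoded as zeros and using a nearest-valid-pixel fill via SciPy (the actual depth-completion method may be more sophisticated):

```python
import numpy as np
from scipy import ndimage

def complete_depth(depth: np.ndarray) -> np.ndarray:
    """Fill holes (zero entries) in a depth map with the nearest valid depth."""
    invalid = depth <= 0
    if not invalid.any():
        return depth
    # For every invalid pixel, find the indices of the nearest valid pixel.
    _, idx = ndimage.distance_transform_edt(invalid, return_indices=True)
    rows, cols = idx
    return depth[rows, cols]
```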
Dataset Construction:
This diagram illustrates the process of creating a 3D dataset using ARFrame and ARDepthData.
(i) ARFrame represents the per-frame data provided by ARKit, collected in real time from devices such as the iPhone.
(ii) ARDepthData represents depth information collected by the iPhone's LiDAR sensor. ARKit exposes ARDepthData through each frame's sceneDepth property, and it includes two key components:
- depthMap: A map containing a depth value (the z-distance from the camera, in meters) for each pixel.
- confidenceMap: A map representing the confidence level of each depth value, categorized into low, medium, and high levels (see the filtering sketch after this list).
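For illustration, assuming the two maps have been exported from ARKit as NumPy arrays (ARConfidenceLevel raw values are 0 = low, 1 = medium, 2 = high; the file names below are hypothetical), a high-confidence depth map can be built as follows:

```python
import numpy as np

# Hypothetical exported buffers: depth in meters (float32), confidence as uint8.
depth = np.load("frame_0000_depth.npy")       # shape (H, W)
confidence = np.load("frame_0000_conf.npy")   # shape (H, W), values 0 / 1 / 2

HIGH = 2  # ARConfidenceLevel.high raw value
reliable = confidence >= HIGH
filtered_depth = np.where(reliable, depth, 0.0)  # zero out low-confidence pixels
```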
Each point in the depthMap is back-projected to its corresponding 3D position in space using the camera intrinsics, and the confidenceMap can be used to filter out points with low accuracy, ensuring better results.
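A minimal sketch of this back-projection, assuming a pinhole camera model with intrinsics (fx, fy, cx, cy) already rescaled from the capturedImage resolution (where ARCamera.intrinsics is defined) to the depthMap resolution:

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map to an (H*W, 3) camera-space point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Passing the filtered_depth from the previous sketch yields a point cloud that already excludes low-confidence pixels (their depth is zero, so they collapse to the origin and can be dropped).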
Dataset Construction: The final dataset was constructed by pairing each RGB image with the aligned depth map obtained from the iPhone's LiDAR sensor.
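A minimal sketch of storing one aligned RGB-D sample per frame (the directory layout and file names are illustrative, not the dataset's actual format):

```python
import os
import numpy as np

def save_rgbd_sample(idx: int, rgb: np.ndarray, depth: np.ndarray,
                     pose: np.ndarray, out_dir: str = "dataset") -> None:
    """Store one RGB image, its aligned depth map, and the camera pose."""
    os.makedirs(out_dir, exist_ok=True)
    np.savez(f"{out_dir}/frame_{idx:04d}.npz", rgb=rgb, depth=depth, pose=pose)
```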