FKFS Events

2024 Stuttgart International Symposium
on Automotive and Engine Technology

July 2-3, 2024

Session: Poster

Automated AI-based Annotation Framework for 3D Object Detection from LIDAR Data in Industrial Areas

Kosmas Kritikos, Neural Concept SA
Gina Abdelhalim, Karlsruhe Institute of Technology

Autonomous driving is being adopted in a variety of settings, including indoor environments such as industrial halls. LIDAR sensors are currently popular because they offer superior spatial resolution and accuracy compared to RADAR and greater robustness to varying lighting conditions than cameras, enabling precise, real-time perception of the surrounding environment. Several datasets for on-road scenarios, such as KITTI or Waymo, are publicly available; however, there is a notable lack of open-source datasets specifically designed for industrial hall scenarios, particularly for 3D LIDAR data. Furthermore, in industrial areas, where vehicle platforms with omnidirectional drive are often used, LIDAR sensors with a 360° field of view are necessary to monitor all critical objects. Although high-resolution sensors would be optimal, the price of mechanical 360° LIDAR sensors rises sharply with increasing resolution, and most existing AI models for 3D object detection in point clouds are designed for high-resolution LIDAR with many channels. This work addresses these gaps by developing an automated AI-based labeling tool that generates 3D ground-truth annotations for object detection from low-resolution LIDAR datasets captured in industrial hall scenarios. Point cloud data inside an industrial area at the KIT Campus Ost is recorded using a 16-channel LIDAR; the recorded objects include, for example, a forklift and box pallets. A LIDAR super-resolution approach upsamples the recorded data to 64-channel point clouds, which are then used to fine-tune a 3D object detection model (Part-A2 net). Test results on a restricted dataset are highly promising, achieving a mean Average Precision of 95% at an IoU threshold of 0.75. The labeling tool is fully automated and uses the trained model for object detection; manual corrections remain possible. This research is part of the project FLOOW.
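The reported mean Average Precision is measured against an IoU threshold of 0.75, i.e. a predicted box counts as a true positive only if it overlaps a ground-truth box by at least 75%. As a minimal sketch of that matching criterion (not the authors' evaluation code; detection benchmarks typically use rotated boxes, while this simplified version assumes axis-aligned boxes in `(xmin, ymin, zmin, xmax, ymax, zmax)` format):

```python
def iou_3d(box_a, box_b):
    """Intersection-over-Union of two axis-aligned 3D boxes.

    Boxes are tuples (xmin, ymin, zmin, xmax, ymax, zmax). This is a
    simplified illustration; real benchmarks evaluate rotated boxes.
    """
    # Overlap extent along each axis (zero if the boxes are disjoint)
    dx = max(0.0, min(box_a[3], box_b[3]) - max(box_a[0], box_b[0]))
    dy = max(0.0, min(box_a[4], box_b[4]) - max(box_a[1], box_b[1]))
    dz = max(0.0, min(box_a[5], box_b[5]) - max(box_a[2], box_b[2]))
    inter = dx * dy * dz

    def vol(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

    union = vol(box_a) + vol(box_b) - inter
    return inter / union if union > 0 else 0.0


# A prediction matches ground truth at the 0.75 threshold only if
# iou_3d(prediction, ground_truth) >= 0.75.
```

A detection with, say, one-third volume overlap would be rejected at this threshold, which makes the 95% mAP figure a fairly strict localization result.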