Date of Award

12-2022

Degree Name

Doctor of Philosophy

Department

Electrical and Computer Engineering

First Advisor

Ikhlas Abdel-Qader, Ph.D.

Second Advisor

Bradley Bazuin, Ph.D.

Third Advisor

Osama Abudayyeh, Ph.D.

Fourth Advisor

Rakan Chabaan, Ph.D.

Keywords

Clustering, DBSCAN, depth completion, instance segmentation, LiDAR, sensor fusion

Abstract

Depth sensing is critical for safe and accurate maneuvering in robotics and self-driving car (SDC) applications. Modern LiDAR sensors, such as those from Ouster and Velodyne, offer 360-degree scanning at rates of about ten frames per second, making them well suited to autonomous driving. However, LiDAR point cloud data have significant shortcomings, especially their sparsity and unassigned (unlabeled) nature, which make them challenging to use in applications such as perception, 3D object detection, 3D scene reconstruction, and simultaneous localization and mapping.

In this study, a novel framework that combines instance image segmentation with raw LiDAR data for the goal of depth completion is developed. The framework uses a custom-trained, two-stage instance segmentation architecture to focus on target objects (e.g., cars, pedestrians, and cyclists) and a fusion-based, two-branch guided depth completion encoder-decoder deep neural network to generate accurate dense depth information. Results from extensive experiments on the KITTI depth completion dataset indicate that the proposed method outperforms the baseline model. Moreover, to address the unassigned nature of raw LiDAR point cloud data, an adaptive estimation method for the tuning parameters of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm in SDC applications is proposed. This method utilizes a field-of-view division scheme and local insights into the LiDAR point cloud to automate the estimation of the tuning parameters epsilon and min_points; a sketch of this idea appears below. Experimental simulations on the KITTI object detection dataset achieved excellent clustering performance while waiving the need for brute-force tuning of the parameter values.
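The abstract does not specify the exact estimators used for epsilon and min_points, so the following Python sketch only illustrates the general idea under stated assumptions: the field of view is divided into range-based rings, and within each ring epsilon is estimated from local k-nearest-neighbor distance statistics before running DBSCAN. The function names, the quantile-based ring boundaries, and the k-distance heuristic are all hypothetical, not the dissertation's method.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def estimate_dbscan_params(points, k=4):
    """Hypothetical heuristic: epsilon is the mean k-th nearest-neighbor
    distance within a field-of-view region; min_points is tied to k."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, _ = nn.kneighbors(points)      # column 0 is the point itself
    eps = float(dists[:, -1].mean())      # mean k-th NN distance
    return eps, k + 1                     # (epsilon, min_points)

def cluster_by_fov_regions(points, n_regions=4):
    """Divide the LiDAR field of view into range-based rings and cluster
    each ring with locally estimated DBSCAN parameters (a sketch of the
    FOV-division idea; ring edges here are arbitrary range quantiles)."""
    ranges = np.linalg.norm(points[:, :2], axis=1)
    edges = np.quantile(ranges, np.linspace(0.0, 1.0, n_regions + 1))
    labels = np.full(len(points), -1, dtype=int)
    next_id = 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (ranges >= lo) & (ranges <= hi)
        region = points[mask]
        if len(region) < 5:
            continue
        eps, min_pts = estimate_dbscan_params(region)
        local = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(region)
        local[local >= 0] += next_id      # keep cluster ids globally unique
        labels[mask] = local
        next_id = labels.max() + 1
    return labels

Because each ring estimates its own epsilon, clusters of near, dense points and far, sparse points can both be recovered without a single hand-tuned global parameter, which is the motivation the abstract describes.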

Aiming to address the challenges posed by the sparse and unassigned nature of LiDAR depth data, the key contributions of this dissertation include the development of a depth completion framework that utilizes image instance segmentation features, the integration of object type into the depth completion deep neural networks, the development of an adaptive DBSCAN parameter-estimation technique, and the implementation of the instance segmentation-based depth completion framework using sensor fusion. The overarching contribution, however, is the introduction of a fundamental sensor fusion framework that fuses features and information from image instance segmentation with data from critical SDC sensors such as LiDAR, RADAR, and cameras, resulting in better perception and scene understanding.
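The abstract names a fusion-based, two-branch guided encoder-decoder but gives no architectural details, so the following is only a minimal PyTorch sketch of the two-branch fusion idea: one branch encodes the RGB image concatenated with an instance segmentation mask, the other encodes the sparse LiDAR depth, and a shared decoder produces a dense depth map. All layer choices and sizes are illustrative assumptions, not the dissertation's architecture.

import torch
import torch.nn as nn

class TwoBranchDepthCompletion(nn.Module):
    """Sketch of a two-branch guided depth completion network: an image
    branch (RGB + instance mask) and a sparse-depth branch, fused by
    concatenation and decoded into dense depth."""

    def __init__(self, feat=32):
        super().__init__()
        # Image branch: RGB (3 channels) + instance mask (1 channel)
        self.img_enc = nn.Sequential(
            nn.Conv2d(4, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth branch: sparse LiDAR depth (1 channel)
        self.depth_enc = nn.Sequential(
            nn.Conv2d(1, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder over the concatenated (fused) branch features
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb, instance_mask, sparse_depth):
        f_img = self.img_enc(torch.cat([rgb, instance_mask], dim=1))
        f_depth = self.depth_enc(sparse_depth)
        return self.dec(torch.cat([f_img, f_depth], dim=1))

# Example: one guided pass over a 64x64 crop with random inputs
net = TwoBranchDepthCompletion()
dense = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64),
            torch.rand(1, 1, 64, 64))   # -> (1, 1, 64, 64) dense depth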

Access Setting

Dissertation-Open Access
