Date of Award

12-2024

Degree Name

Doctor of Philosophy

Department

Mechanical and Aerospace Engineering

First Advisor

Richard Meyer, Ph.D.

Second Advisor

Damon Miller, Ph.D.

Third Advisor

Zachary Asher, Ph.D.

Fourth Advisor

Lee Wells, Ph.D.

Keywords

Autonomous vehicles, calibration, camera, fault tolerant perception, LIDAR, sensor misalignment

Abstract

Driving is one of the most popular modes of transportation in the world. The United States Department of Transportation’s (USDOT) Federal Highway Administration (FHWA) reported 2.8 trillion vehicle-miles traveled (VMT) in 2020, and the National Highway Traffic Safety Administration (NHTSA) recorded 3.2 trillion VMT in 2019. The NHTSA report also recorded 39,096 fatalities and 2.7 million injuries due to traffic accidents, costing the economy an estimated $242 billion. Most of these accidents can be attributed to human error or misjudgment. For this reason, governments and the automotive industry are looking to autonomous vehicle (AV) technologies to drive down the number of vehicular accidents and reduce their economic impact.

An AV’s chain of operations can be concisely described by three actions: SENSE-PLAN-ACT. The perception system is the first module in the PLAN section and is fed information from the sensors that make up the SENSE section. It is responsible for interpreting the world and providing the results to the path planning and other decision systems. Perception performance depends on the operating state of the sensors (e.g., whether a sensor is in fault or is adversely affected by weather or environmental conditions) and on the approach to sensor measurement interpretation. Ensuring accurate performance of the perception system is therefore imperative for safe navigation of the AV.

This work presents three papers that propose methods to mitigate faults due to sensor misalignment. The first paper presents findings from simulations of the THSS observer in a static environment. The simulations show that the THSS observer can both detect the error and correct the perception of a moving ego vehicle.

The second paper proposes an on-line algorithm that corrects erroneous measurements from a LiDAR on the ego vehicle by determining a set of transformations needed to align a cluster of measurements of a target vehicle with a corresponding image detection captured by a synchronized, forward-facing camera on the ego vehicle. The correction algorithm is first tested assuming ground truth information is available to correct the LiDAR, and then tested with camera images used to determine ground truth; the results show that it is possible to determine the misalignment error.

The third and final paper proposes a method for extrinsically calibrating a LiDAR and a forward-facing monocular camera using 3D and 2D bounding boxes. The proposed algorithm was tested using the KITTI dataset and experimental data. The rotation matrix is evaluated by computing its Euler angles and comparing them to the ideal Euler angles that describe the ideal orientation of the LiDAR with respect to the camera. The results show that the rotation matrix from the calibration algorithm is close to both the ideal and the KITTI dataset rotation matrices. The corresponding translation vector is likewise close to expected values.
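The dissertation itself details the calibration algorithm; as a generic illustration of the evaluation step described above, the sketch below shows one common way to recover yaw-pitch-roll (ZYX) Euler angles from a 3x3 rotation matrix so they can be compared against ideal angles. The function name, the ZYX convention, and the example misalignment angle are assumptions for illustration, not taken from the dissertation.

```python
import numpy as np

def euler_zyx_from_rotation(R):
    """Recover (yaw, pitch, roll) in radians from a 3x3 rotation matrix,
    assuming the ZYX (yaw-pitch-roll) convention."""
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.array([yaw, pitch, roll])

# Hypothetical 2-degree misalignment about the z-axis (yaw only)
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

angles_deg = np.rad2deg(euler_zyx_from_rotation(R))  # recovered yaw, pitch, roll
```

Comparing `angles_deg` to the ideal angles (here, all zeros for a perfectly aligned sensor pair) gives a per-axis measure of residual misalignment, which mirrors the evaluation approach described in the abstract.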

Access Setting

Dissertation-Open Access
