Vol. 1 No. 1 (2021): Journal of AI-Assisted Scientific Discovery
Articles

Deep Learning for Autonomous Vehicle Surrounding Object Classification and Tracking

Dr. Ana Castaño Muñoz
Professor of Human-Computer Interaction, Universidad Politécnica de Madrid (UPM)

Published 30-06-2021

Keywords

  • Vehicle classification
  • Vehicle tracking approach

How to Cite

[1] Dr. Ana Castaño Muñoz, “Deep Learning for Autonomous Vehicle Surrounding Object Classification and Tracking”, Journal of AI-Assisted Scientific Discovery, vol. 1, no. 1, pp. 1–22, Jun. 2021. Accessed: Sep. 18, 2024. [Online]. Available: https://scienceacadpress.com/index.php/jaasd/article/view/37

Abstract

The safe operation of autonomous vehicles (AVs) relies on accurately interpreting the surrounding environment, including the detection, classification, and tracking of other moving objects such as pedestrians and vehicles. The capacity of an AV to distinguish and track these objects in real time is critical for making decisions that lead to safe maneuvering strategies in complex mixed-traffic scenarios. If an AV cannot be trusted to make these decisions reliably in dynamic scenarios, the confidence of passengers, pedestrians, and other road users will remain limited. One way to address this problem is through deep-learning-based approaches [2]. Although deep learning (DL) has delivered considerable improvements over traditional algorithms in recent years, it remains limited by challenges such as the need for large, balanced datasets, extensive parameter optimization, and the risk of overfitting to the specific scenario on which the model was trained. DL models may also generalize poorly when they encounter different data distributions, and when some categories lack sufficient data, the trained model becomes strongly biased towards the more frequent object classes. Existing long-term target tracking methods face further critical challenges, such as bounding-box drift, occlusions, and the need to establish reliable correspondences between epochs. In the proposed system, the vehicle tracking module supports lane assignment by bridging duplicate vehicle detections across consecutive images, while the vehicle classification model assigns labels and is trained on small datasets with reduced bias towards the classes that dominate the data. The resulting system demonstrates a responsive vehicle tracking approach, with the augmented classification strengthening end-to-end capabilities in dense traffic scenarios.
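
Illustrative note: the tracking step described above (bridging duplicate vehicle detections across consecutive images) is typically posed as a detection-to-track association problem. The sketch below is a minimal Python illustration of one common association scheme, greedy matching by intersection-over-union (IoU); the function names, the greedy strategy, and the 0.3 threshold are assumptions chosen for clarity and are not the specific method described in this article.

    # Illustrative sketch only (hypothetical, not the article's method):
    # greedy IoU-based association of vehicle detections across two
    # consecutive frames -- the kind of step used to bridge duplicate
    # detections of the same vehicle into a continuous track.
    from typing import List, Tuple

    Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

    def iou(a: Box, b: Box) -> float:
        """Intersection-over-union of two axis-aligned bounding boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0.0 else 0.0

    def associate(prev_boxes: List[Box], curr_boxes: List[Box],
                  iou_threshold: float = 0.3) -> List[Tuple[int, int]]:
        """Greedily match each current detection to at most one previous track.

        Returns (prev_index, curr_index) pairs; unmatched current detections
        would start new tracks, and unmatched previous tracks would be
        flagged as occluded or lost.
        """
        pairs = [(iou(p, c), i, j)
                 for i, p in enumerate(prev_boxes)
                 for j, c in enumerate(curr_boxes)]
        pairs.sort(reverse=True)  # consider the highest-overlap pairs first

        matches, used_prev, used_curr = [], set(), set()
        for score, i, j in pairs:
            if score < iou_threshold:
                break
            if i in used_prev or j in used_curr:
                continue
            matches.append((i, j))
            used_prev.add(i)
            used_curr.add(j)
        return matches

    if __name__ == "__main__":
        frame_t  = [(100, 100, 150, 160), (300, 120, 360, 180)]
        frame_t1 = [(105, 102, 156, 162), (400, 200, 450, 250)]
        print(associate(frame_t, frame_t1))  # [(0, 0)]: the first vehicle persists

The class-imbalance issue raised in the abstract is a separate concern; a common (and here assumed, not article-specific) mitigation is to weight the classification loss by inverse class frequency so that over-represented vehicle classes do not dominate training on small datasets.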

References

  1. J. Dequaire, D. Rao, P. Ondruska, D. Wang et al., "Deep Tracking on the Move: Learning to Track the World from a Moving Vehicle using Recurrent Neural Networks," 2016. [PDF]
  2. A. Yousef, J. Flora, and K. Iftekharuddin, "Monocular Camera Viewpoint-Invariant Vehicular Traffic Segmentation and Classification Utilizing Small Datasets," 2022. ncbi.nlm.nih.gov
  3. H. Gao, Q. Qiu, W. Hua, X. Zhang et al., "CVR-LSE: Compact Vectorization Representation of Local Static Environments for Unmanned Ground Vehicles," 2022. [PDF]
  4. M. Shaik et al., "Envisioning Secure and Scalable Network Access Control: A Framework for Mitigating Device Heterogeneity and Network Complexity in Large-Scale Internet-of-Things (IoT) Deployments," Distributed Learning and Broad Applications in Scientific Research, vol. 3, pp. 1-24, June 2017. https://dlabi.org/index.php/journal/article/view/1
  5. S. Tatineni, "Beyond Accuracy: Understanding Model Performance on SQuAD 2.0 Challenges," International Journal of Advanced Research in Engineering and Technology (IJARET), vol. 10, no. 1, pp. 566-581, 2019.
  6. V. Vemoori, "Towards Secure and Trustworthy Autonomous Vehicles: Leveraging Distributed Ledger Technology for Secure Communication and Exploring Explainable Artificial Intelligence for Robust Decision-Making and Comprehensive Testing," Journal of Science & Technology, vol. 1, no. 1, pp. 130-137, Nov. 2020. https://thesciencebrigade.com/jst/article/view/224
  7. G. A. Salazar-Gomez, M. A. Saavedra-Ruiz, and V. A. Romero-Cano, "High-level camera-LiDAR fusion for 3D object detection with machine learning," 2021. [PDF]
  8. S. Hecker, D. Dai, and L. Van Gool, "End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners," 2018. [PDF]
  9. S. Mohapatra, M. Hodaei, S. Yogamani, S. Milz et al., "LiMoSeg: Real-time Bird's Eye View based LiDAR Motion Segmentation," 2021. [PDF]
  10. S. Garg, N. Sünderhauf, F. Dayoub, D. Morrison et al., "Semantics for Robotic Mapping, Perception and Interaction: A Survey," 2021. [PDF]
  11. S. Kuutti, R. Bowden, Y. Jin, P. Barber et al., "A Survey of Deep Learning Applications to Autonomous Vehicle Control," 2019. [PDF]
  12. Q. Liu, Z. Li, S. Yuan, Y. Zhu et al., "Review on Vehicle Detection Technology for Unmanned Ground Vehicles," 2021. ncbi.nlm.nih.gov
  13. A. Singh and V. Bankiti, "Surround-View Vision-based 3D Detection for Autonomous Driving: A Survey," 2023. [PDF]
  14. D. Katare, D. Perino, J. Nurmi, M. Warnier et al., "A Survey on Approximate Edge AI for Energy Efficient Autonomous Driving Services," 2023. [PDF]
  15. J. Park, M. Wen, Y. Sung, and K. Cho, "Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices," 2019. ncbi.nlm.nih.gov
  16. A. Mahyar, H. Motamednia, and D. Rahmati, "Deep Perspective Transformation Based Vehicle Localization on Bird's Eye View," 2023. [PDF]
  17. T. Suleymanov, L. Kunze, and P. Newman, "Online Inference and Detection of Curbs in Partially Occluded Scenes with Sparse LIDAR," 2019. [PDF]
  18. J. Beltran, C. Guindel, F. Miguel Moreno, D. Cruzado et al., "BirdNet: a 3D Object Detection Framework from LiDAR information," 2018. [PDF]
  19. N. Ding, "An Efficient Convex Hull-based Vehicle Pose Estimation Method for 3D LiDAR," 2023. [PDF]
  20. J. Philion, A. Kar, and S. Fidler, "Learning to Evaluate Perception Models Using Planner-Centric Metrics," 2020. [PDF]
  21. S. Ribouh, R. Sadli, Y. Elhillali, A. Rivenq et al., "Vehicular Environment Identification Based on Channel State Information and Deep Learning," 2022. ncbi.nlm.nih.gov
  22. F. Lu, Z. Liu, H. Miao, P. Wang et al., "Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation," 2020. [PDF]
  23. K. Li, K. Chen, H. Wang, L. Hong et al., "CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving," 2022. [PDF]
  24. M. Ahmed Ezzat, M. A. Abd El Ghany, S. Almotairi, and M. A.-M. Salem, "Horizontal Review on Video Surveillance for Smart Cities: Edge Devices, Applications, Datasets, and Future Trends," 2021. ncbi.nlm.nih.gov
  25. D. Yu, H. Lee, T. Kim, and S. H. Hwang, "Vehicle Trajectory Prediction with Lane Stream Attention-Based LSTMs and Road Geometry Linearization," 2021. ncbi.nlm.nih.gov
  26. A. Asgharpoor Golroudbari and M. Hossein Sabour, "Recent Advancements in Deep Learning Applications and Methods for Autonomous Navigation: A Comprehensive Review," 2023. [PDF]
  27. F. Islam, M. M. Nabi, and J. E. Ball, "Off-Road Detection Analysis for Autonomous Ground Vehicles: A Review," 2022. ncbi.nlm.nih.gov
  28. E. Khatab, A. Onsy, and A. Abouelfarag, "Evaluation of 3D Vulnerable Objects’ Detection Using a Multi-Sensors System for Autonomous Vehicles," 2022. ncbi.nlm.nih.gov
  29. Z. Wei, F. Zhang, S. Chang, Y. Liu et al., "MmWave Radar and Vision Fusion for Object Detection in Autonomous Driving: A Review," 2022. ncbi.nlm.nih.gov