Vol. 3 No. 2 (2023): Journal of AI-Assisted Scientific Discovery

Explainable AI for Transparent Risk Assessment in Cybersecurity for Autonomous Vehicles

Dr. Alexandre Vieira
Professor of Informatics, University of Porto, Portugal

Published 30-12-2023

How to Cite

[1] Dr. Alexandre Vieira, “Explainable AI for Transparent Risk Assessment in Cybersecurity for Autonomous Vehicles”, Journal of AI-Assisted Scientific Discovery, vol. 3, no. 2, pp. 130–152, Dec. 2023, Accessed: Aug. 10, 2024. [Online]. Available: https://scienceacadpress.com/index.php/jaasd/article/view/101

Abstract

The deployment of autonomous vehicle systems holds great promise for society, but it also raises concerns about their proper operation and validation. We argue that the assurance and integrity of these systems cannot rely solely on verification, particularly for components produced by machine learning (ML) and deep learning, because verification alone struggles to handle the complexity of ML-based systems. In the cybersecurity of autonomous vehicles, one general requirement is to make cyber risk understandable and predictable through explainable AI (XAI) models. From the perspective of autonomous vehicle cybersecurity risk management, this article proposes joining the flourishing area of XAI with the well-established practice of security risk assessment. We regard trustworthiness and reliability in the relationship between data and predictions, together with the validity of the data used to train the ML models, as the most important factors to balance in any autonomous vehicle XAI solution for cybersecurity risk assessment.
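
To make the idea of an explainable, model-agnostic risk assessment concrete, the sketch below trains a toy classifier on synthetic in-vehicle telemetry and reports which inputs drive its risk predictions via permutation importance. It is a minimal illustration only: the feature names (e.g., msg_rate, unknown_id_ratio), the synthetic labelling rule, and the choice of scikit-learn permutation importance are assumptions for demonstration, not the method described in the article.

```python
# Hypothetical sketch: explaining an ML-based cyber-risk classifier for
# autonomous-vehicle network traffic. Data, feature names, and labelling
# rule are illustrative stand-ins, not taken from the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for in-vehicle network telemetry (e.g., CAN-bus statistics).
feature_names = ["msg_rate", "payload_entropy", "inter_arrival_jitter", "unknown_id_ratio"]
X = rng.normal(size=(2000, len(feature_names)))
# Toy rule: label "high risk" when an elevated message rate and unknown IDs co-occur.
y = ((X[:, 0] > 0.5) & (X[:, 3] > 0.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance gives a global, model-agnostic view of which inputs
# drive the predicted risk, supporting a transparent risk-assessment report.
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In practice, such attributions would be computed on the actual risk model and surfaced to security analysts alongside the raw risk score, so that each prediction can be traced back to the evidence supporting it.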

