Abstract: Autonomous Vehicles (AVs) are revolutionizing transportation, but their reliance on interconnected cyber-physical systems exposes them to unprecedented cybersecurity risks. This study addresses the critical challenge of detecting cyber intrusions in self-driving vehicles in real time, leveraging a dataset from the Udacity self-driving car project. We simulate four high-impact attack vectors, namely Denial of Service (DoS), spoofing, replay, and fuzzy attacks, by injecting noise into spatial features (e.g., bounding box coordinates) to replicate adversarial scenarios. We develop and evaluate two lightweight neural network architectures (NN-1 and NN-2) alongside a logistic regression baseline (LG-1) for intrusion detection. The models achieve strong performance, with NN-2 attaining both an AUC and an accuracy of 93.15%, demonstrating their suitability for edge deployment in AV environments. Through explainable AI techniques, we uncover distinct forensic fingerprints for each attack type, such as spatial corruption in fuzzy attacks and temporal anomalies in replay attacks, yielding actionable insights for feature engineering and proactive defense. Visual analytics, including confusion matrices, ROC curves, and feature importance plots, support the models' robustness and interpretability. This research sets a benchmark for AV cybersecurity, delivering a scalable, field-ready toolkit for Original Equipment Manufacturers (OEMs) and policymakers. By aligning intrusion fingerprints with the SAE J3061 automotive security standard, we provide a pathway for integrating machine learning into safety-critical AV systems. Our findings underscore the urgent need for security-by-design AI, ensuring that AVs not only drive autonomously but also defend autonomously. This work bridges the gap between theoretical cybersecurity and life-preserving engineering, marking a step toward safer, more secure autonomous transportation.
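To make the attack-simulation step concrete, the following is a minimal sketch of how noise injection into bounding box coordinates might emulate a fuzzy attack. All names, the column layout, and the Gaussian noise scale are illustrative assumptions for exposition, not the paper's exact procedure.

```python
import numpy as np

# Illustrative sketch only: perturb bounding-box coordinates to emulate a
# fuzzy attack. Noise distribution and scale are assumptions, not the
# paper's exact configuration.
rng = np.random.default_rng(seed=42)

def inject_fuzzy_noise(boxes: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Add Gaussian noise to (xmin, ymin, xmax, ymax) rows, corrupting spatial features."""
    noisy = boxes.astype(float).copy()
    noisy += rng.normal(loc=0.0, scale=scale, size=noisy.shape)
    return noisy

# Example: three clean bounding boxes, each (xmin, ymin, xmax, ymax)
clean = np.array([[100,  50, 180, 120],
                  [300, 200, 360, 260],
                  [ 10,  10,  40,  90]])
attacked = inject_fuzzy_noise(clean)

# Stack benign and attacked samples with binary labels (0 = benign, 1 = attack)
# to form a supervised training set for a detector such as NN-1/NN-2 or LG-1.
X = np.vstack([clean, attacked])
y = np.concatenate([np.zeros(len(clean)), np.ones(len(attacked))])
```

Under this framing, replay or spoofing variants would be produced by transforming the same spatial features differently (e.g., repeating past frames or substituting plausible but false coordinates), which is what gives each attack type its distinct forensic fingerprint.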