Roberts, Shannon; Gershon, Pnina; Capan, Muge; Jensen, David; Wang, Meng
2025-08-18; 2025-08-18; 2025-05
https://hdl.handle.net/20.500.14394/55876

Abstract: In the domain of traffic safety, the Human-Machine Interface (HMI) plays a critical role in modern vehicles, enhancing the driving experience, promoting safety, and facilitating communication between drivers and their vehicles. The objective of this dissertation is to propose a unified framework for designing adaptive HMIs that leverage multimodal data, including semantic, contextual, and vehicle kinematic features, to improve drivers' takeover experience and support driver awareness. The first component of this work investigates the role of HMI design, specifically its modality, specificity, and timing, in driver performance during takeover situations. A driving simulator study revealed that HMIs with multimodal warnings and high specificity significantly improved driver takeover performance, reducing off-road glances and enhancing speed control. The second component addresses the variability in driver behavior during Level 2 (L2) automation. Using clustering algorithms on vehicle kinematic data (e.g., speed, acceleration), two distinct driver groups were identified: variable drivers and stable drivers. Variable drivers exhibited higher crash rates and longer response times to auditory warnings, whereas stable drivers maintained more consistent speed control. These insights emphasize the need for personalized adaptive HMIs that accommodate different driver characteristics. The third component introduces a two-stage latent complexity framework to capture and model roadway complexity for crash density prediction. By integrating semantic, contextual, and vehicle kinematic features, the framework generates hidden contextual representations that improve predictive performance.
Analysis shows that incorporating these latent complexity features improves crash density prediction, raising predictive accuracy from 87.98% to 90.46%. Together, these components form a holistic approach to data-driven, adaptive HMIs. By leveraging multimodal data from semantic, contextual, and vehicle kinematic features, the framework enables personalized, predictive, and context-aware interfaces that support control transitions and adapt to complex roadway environments, ultimately enhancing safety and system intelligence in automated driving.

Language: en-US
License: Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0/)
Keywords: Roadway complexity; Scene perception evaluation; Human-machine interface; Large language models; Multimodal modeling
Title: DATA-DRIVEN ADAPTIVE HMI DESIGN: LEVERAGING MULTIMODAL DATA FOR SAFER DRIVING
Type: Dissertation (Open Access)
ORCID: https://orcid.org/0000-0002-3304-0610