This study proposes a multi-layered XR system architecture for highly automated driving environments, in which XR serves as the core interface delivering real-time information and immersive experiences to users. The architecture consists of five layers: environmental sensing (LiDAR, RADAR, cameras, V2X), edge computing with AI-based situation analysis, real-time XR content rendering using Unity/Unreal, user devices (AR HUDs, HMDs, MR displays), and security and management (encryption, access control, OTA updates). Case studies from Audi, Hyundai Mobis, and NVIDIA support the technical feasibility of the proposed system. Key operational factors include real-time responsiveness, driver attention management, communication efficiency, and data privacy. The study also examines system integration based on the OpenXR, glTF, and ISO/SAE 21434 standards. Application scenarios such as AR navigation while driving, MR content consumption during stops, and emotionally responsive UX demonstrate the scalability of XR in autonomous mobility.
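The five-layer data flow described above (sensing, edge analysis, XR rendering, device delivery) can be sketched as a minimal pipeline. This is an illustrative sketch only; all class, function, and field names here are assumptions for exposition, not identifiers from the proposed system.

```python
from dataclasses import dataclass


@dataclass
class SensorFrame:
    """Layer 1 (environmental sensing): fused raw inputs from one cycle."""
    lidar_points: int          # point count from LiDAR sweep
    radar_tracks: int          # tracked objects from RADAR
    camera_frames: int         # synchronized camera images
    v2x_messages: list         # V2X messages, e.g. {"type": "hazard", ...}


def analyze_situation(frame: SensorFrame) -> dict:
    """Layer 2 (edge computing / AI): distill sensor data into a scene model."""
    hazards = [m for m in frame.v2x_messages if m.get("type") == "hazard"]
    return {
        "hazards": hazards,
        "tracked_objects": frame.radar_tracks,
    }


def render_xr_overlay(scene: dict, device: str) -> dict:
    """Layer 3 (XR rendering): build an overlay description for Layer 4,
    the target user device (AR HUD, HMD, or MR display)."""
    return {
        "device": device,
        "warning_count": len(scene["hazards"]),
        "assets": ["nav_arrow.gltf"],  # glTF assets, per the standards noted
    }


# Pipeline run: sensing -> analysis -> rendering -> device
frame = SensorFrame(
    lidar_points=1200,
    radar_tracks=8,
    camera_frames=4,
    v2x_messages=[{"type": "hazard", "pos": (12.0, 3.5)}],
)
overlay = render_xr_overlay(analyze_situation(frame), device="AR_HUD")
print(overlay)
```

The security/management layer (encryption, access control, OTA updates) is omitted here, since it wraps the pipeline rather than sitting inline in the per-frame data path.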