Advancements in mixed reality (MxR) technology have significantly enhanced neurosurgical navigation by improving visualization, spatial accuracy, and surgical outcomes. MxR navigation (MRN) provides a cost-effective and intuitive alternative to conventional navigation systems, enabling stable spatial mapping and independent navigation. Despite this potential, implementing MRN is complex, creating a need for structured workflows and rigorous accuracy assessments. To address this gap, this article presents a structured, modular, and reproducible laboratory protocol based on the open-source platform 3D Slicer, enabling effective implementation and evaluation of MRN systems in neurosurgical contexts. The protocol clearly defines sequential steps, required inputs, and expected outputs for both visualization-oriented and navigation-oriented workflows. It encompasses preprocessing (anonymization, quality checks, multimodal image fusion), semi-automatic segmentation of anatomical structures (e.g., lesions, vessels, fiber tracts), and the generation of precise 3D surface models (STL or OBJ format). For navigation scenarios, the protocol includes parameterization of fiducial landmarks, anatomical surfaces, and laser projections for precise virtual-to-physical registration. Accuracy and performance are rigorously validated using virtual and physical static digital twins, yielding quantitative displacement analyses and intuitive visualizations directly interpretable in clinical and research environments. The modular design and exclusive reliance on open-source software ensure reproducibility, flexibility, and broad interdisciplinary accessibility, benefiting users from diverse backgrounds with basic knowledge of anatomy, neurosurgery, and computer graphics.
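As an illustration of the virtual-to-physical registration and displacement analysis mentioned above, the sketch below implements point-based rigid registration of fiducial landmarks with the standard Kabsch (SVD) method in plain NumPy, and reports the resulting root-mean-square fiducial registration error. This is a minimal, generic example, not the protocol's actual implementation; the function names and the assumption of paired, ordered fiducial lists are ours.

```python
import numpy as np

def rigid_register(virtual_pts, physical_pts):
    """Least-squares rigid transform (R, t) mapping paired virtual
    fiducials onto physical ones, via the Kabsch/SVD method.
    Both inputs are (N, 3) arrays with corresponding rows."""
    v_c = virtual_pts.mean(axis=0)   # centroid of virtual fiducials
    p_c = physical_pts.mean(axis=0)  # centroid of physical fiducials
    # Cross-covariance of the centered point sets
    H = (virtual_pts - v_c).T @ (physical_pts - p_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = p_c - R @ v_c
    return R, t

def fiducial_registration_error(virtual_pts, physical_pts, R, t):
    """RMS displacement (same units as the inputs, e.g. mm) between
    registered virtual fiducials and their physical counterparts."""
    mapped = virtual_pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((mapped - physical_pts) ** 2, axis=1))))
```

For synthetic data generated by applying a known rotation and translation to a fiducial set, the recovered transform reproduces that motion and the error is near zero; with real digitized fiducials the error quantifies the virtual-to-physical displacement the protocol evaluates.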