Abstract

Recent efforts to build comprehensive tissue and tumor atlases leverage diverse spatial technologies to measure transcriptomic, proteomic, epigenetic, and other modalities with hundreds to thousands of features at thousands to millions of spatially resolved locations in a tissue slice. Integrating such data across spatial technologies that differ in molecular features, spatial resolution, and tissue morphology remains a major challenge. We introduce MultimOdal Spatial Alignment and Integration with Coordinate neural Field (MOSAICField), a unified framework for aligning spatial slices across arbitrary combinations of experimental modalities. MOSAICField computes two types of spatial alignments across multiple slices from the same tissue: physical alignment, which reconstructs a contiguous 3D model of the original tissue, and morphological alignment, which maps distinct morphological or anatomical structures, such as ducts, veins, or neurons, that may traverse the tissue at different angles relative to the direction of slicing. MOSAICField computes both alignments using a deep neural network that estimates a nonlinear deformation field with a multimodal feature loss. We evaluate MOSAICField on simulated data and a prostate cancer sample from the Human Tumor Atlas Network (HTAN), containing more than a dozen spatial slices with multimodal profiling data. MOSAICField constructs an accurate 3D tumor model, tracks the architecture of the prostatic ductal system, and improves analysis of features within and across modalities, outperforming existing methods.
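To give a rough sense of what "estimating a nonlinear deformation field" means here, the sketch below evaluates a toy coordinate network that maps a 2D spot location to a displacement and applies it to the spots of a moving slice. Everything in it (layer sizes, function names, the untrained random weights) is illustrative and hypothetical; it is not MOSAICField's architecture, and it omits training and the multimodal feature loss entirely.

```python
import math
import random

random.seed(0)

def init_layer(n_in, n_out):
    """Small random weights and zero biases for one dense layer (untrained toy)."""
    return ([[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def dense(x, weights, bias, activation=None):
    """One fully connected layer: y = W x + b, with optional tanh nonlinearity."""
    y = [sum(w * v for w, v in zip(row, x)) + b for row, b in zip(weights, bias)]
    return [math.tanh(v) for v in y] if activation == "tanh" else y

# Hypothetical two-layer coordinate field f: (x, y) -> (dx, dy).
W1, b1 = init_layer(2, 16)
W2, b2 = init_layer(16, 2)

def deformation_field(coord):
    """Evaluate the displacement the network predicts at a 2D location."""
    h = dense(coord, W1, b1, activation="tanh")
    return dense(h, W2, b2)

def align(spots):
    """Map each spot of a moving slice into the fixed slice's coordinate frame."""
    return [(x + dx, y + dy)
            for (x, y), (dx, dy) in ((s, deformation_field(list(s))) for s in spots)]

moving = [(0.1, 0.2), (0.5, 0.5), (0.9, 0.8)]
aligned = align(moving)
print(aligned)  # each spot shifted by a small, spatially smooth displacement
```

In an actual method of this kind, the network weights would be fit by minimizing a loss comparing features of aligned locations across slices, so that the field warps one slice smoothly onto another rather than applying random displacements as this toy does.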