Understanding how motor cortex generates movement is a foundational challenge in neuroscience [50]. Unsupervised dimensionality reduction techniques, such as principal component analysis (PCA), are widely used to transform high-dimensional neural data into a more interpretable, low-dimensional space [12]. It is broadly assumed that the dimensionality of a particular motor task (that is, the number of principal components needed to explain a fixed fraction of variance) is an intrinsic property of the underlying neural dynamics, potentially modulated by task complexity [21]. Here, we show that unsupervised estimates of dimensionality do not converge but instead scale with the number of recorded neurons. Across four animals, we found that while low-dimensional structure reflects changes in behavioral context, traditional metrics of dimensionality are relatively insensitive to increasingly complex movements. Rather, dimensionality increases with electrode count, exhibiting non-saturating growth as recordings scale to 1000 electrodes. While decoders trained on unsupervised subspaces showed only modest gains with scale, supervised methods leveraged additional electrodes to separate neural states more effectively. Surprisingly, the percentage of variance required for accurate supervised reach decoding became vanishingly small at high electrode counts (< 0.1% at 1000 electrodes). Our results challenge current views on cortical dimensionality and highlight the divergent properties of supervised versus unsupervised learning, motivating careful consideration of appropriate computational methods as experimental data volumes scale.
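The variance-threshold notion of dimensionality used above can be illustrated with a minimal synthetic sketch (not the authors' analysis pipeline): neural activity is simulated as a fixed set of shared latent signals plus per-neuron noise, and the number of principal components needed to reach a 90% variance cutoff is computed at several simulated electrode counts. All parameter values (20 latents, 500 time points, noise scale) are illustrative assumptions.

```python
import numpy as np

def pca_dimensionality(X, var_frac=0.9):
    """Number of principal components needed to explain var_frac of the variance.

    X is a (time points x neurons) data matrix; components come from an SVD
    of the mean-centered data, whose squared singular values are proportional
    to the per-component variances.
    """
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    # Index of the first component at which cumulative variance >= var_frac.
    return int(np.searchsorted(cum, var_frac) + 1)

rng = np.random.default_rng(0)
T = 500                                      # time samples (illustrative)
latents = rng.standard_normal((T, 20))       # 20 shared latent signals

dims = {}
for n_neurons in (50, 200, 800):
    W = rng.standard_normal((20, n_neurons))       # random readout weights
    noise = 2.0 * rng.standard_normal((T, n_neurons))
    X = latents @ W + noise                        # observed "recordings"
    dims[n_neurons] = pca_dimensionality(X, var_frac=0.9)
    print(n_neurons, "neurons ->", dims[n_neurons], "components at 90% variance")
```

Even though the generative latent space is fixed at 20 dimensions, the 90%-variance estimate grows with the number of simulated neurons, because independent noise variance is spread over ever more components, which is consistent with the non-saturating scaling the abstract reports.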