A wide array of clustering algorithms has emerged, yet many of them still rely on K-means to detect clusters. However, K-means is highly sensitive to the choice of initial cluster centers, which is a significant obstacle to reaching optimal clustering results, and it handles nonlinearly separable data poorly. To overcome these limitations of traditional K-means, we draw inspiration from manifold learning and reformulate K-means into a new clustering method based on manifold structures. This method eliminates the centroid computation required by traditional approaches while preserving the consistency between manifold structures and clustering labels. Furthermore, we introduce the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\ell _{2,1}$</tex-math></inline-formula>-norm to naturally maintain class balance during the clustering process. Additionally, we develop a versatile K-means variant framework that accommodates various types of distance functions, enabling the efficient processing of nonlinearly separable data. Experimental results on several datasets confirm the superiority of the proposed model.
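The sensitivity to initial centers mentioned above can be seen in a minimal sketch (an illustrative toy example with synthetic blob data, not the paper's proposed method): plain Lloyd's-algorithm K-means started from two different initializations on the same data converges to local minima of very different quality.

```python
import numpy as np

def kmeans(X, init_centers, n_iter=50):
    """Plain Lloyd's algorithm; returns final centers and inertia
    (sum of squared distances of points to their nearest center)."""
    centers = init_centers.copy()
    for _ in range(n_iter):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update each center to the mean of its cluster
        # (keep the old center if the cluster is empty)
        for k in range(len(centers)):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    inertia = d.min(1).sum()
    return centers, inertia

rng = np.random.default_rng(0)
# three well-separated Gaussian blobs (hypothetical data for illustration)
X = np.concatenate([rng.normal(m, 0.2, size=(50, 2))
                    for m in [(0, 0), (5, 0), (2.5, 4)]])

# good init: one center near each blob
good = np.array([[0.0, 0.0], [5.0, 0.0], [2.5, 4.0]])
# bad init: two centers inside one blob, one between the other two;
# Lloyd's updates cannot escape this configuration
bad = np.array([[0.0, 0.0], [0.1, 0.1], [3.75, 2.0]])

_, inertia_good = kmeans(X, good)
_, inertia_bad = kmeans(X, bad)
print(inertia_good, inertia_bad)
```

From the good initialization each center settles on one blob; from the bad one, two centers split a single blob while one center is stuck serving the other two blobs, so the final inertia is far larger even though the data are identical.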