The Hough transform (HT) is widely used in computer vision, tomography, and neural networks. Numerous algorithms for HT computation have been proposed, making their systematic comparison essential. However, existing comparative methodologies are either non-universal and limited to certain HT formulations or task-oriented, relying on application-specific criteria that do not fully capture algorithmic properties. This paper introduces a novel unified methodology for the systematic comparison of HT algorithms. It evaluates key characteristics, including computational complexity, accuracy, and auxiliary space complexity, while explicitly accounting for the property of self-adjointness. The methodology integrates both implementation-level and theoretical considerations related to the interpretation of HT as a discrete approximation of the Radon transform. A set of mathematically justified evaluation functions, not previously described in the literature, is proposed to support our methodology. Importantly, the methodology is universal, applicable across diverse HT paradigms, encompasses pattern-based and Fourier-based fast HT (FHT) algorithms, and offers a comprehensive alternative to existing task-specific methodologies. Its application to several state-of-the-art FHT algorithms (FHT2DT, FHT2SP, ASD2, KHM, and Fast Slant Stack) yields new experimentally confirmed theoretical insights, identifies ASD2 as the most balanced algorithm, and provides practical guidelines for algorithm selection. In particular, the methodology reveals that for image sizes up to 3000, the maximum normalized computational complexity increases as follows: FHT2DT (1.1), ASD2 (15.3), and KHM (30.6), while the remaining algorithms exhibit at least 1.1 times higher values. The maximum orthotropic approximation error equals 0.5 for ASD2, KHM, and Fast Slant Stack; lies between 0.5 and 1.5 for FHT2SP; and reaches 2.1 for FHT2DT. 
In terms of worst-case normalized auxiliary space complexity, the lowest values are achieved by FHT2DT (2.0), Fast Slant Stack (4.0, lower bound), and ASD2 (6.8), with all other algorithms requiring at least 8.2 times more memory.
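To ground the comparison, recall what every algorithm in the study computes: a discrete approximation of the Radon transform, i.e. sums of image intensities along families of digital lines. The fast algorithms above (FHT2DT, ASD2, KHM, etc.) reduce the cost of this computation; as a hedged, minimal sketch (not any of the paper's algorithms), a brute-force O(n³) accumulator-voting baseline might look like:

```python
import numpy as np

def brute_force_hough(img: np.ndarray, n_angles: int = 180) -> np.ndarray:
    """Brute-force discrete Radon/Hough transform: each nonzero pixel
    votes for every line (rho, theta) passing through it.
    This is the O(n^3) baseline that fast HT (FHT) algorithms improve on;
    the function name and parameters here are illustrative, not from the paper."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max |rho| over the image
    accum = np.zeros((2 * diag, n_angles))       # rows: rho + diag, cols: theta
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(img)                     # only nonzero pixels vote
    for j, theta in enumerate(thetas):
        # signed distance from the origin to the line through (x, y) at angle theta
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int)
        np.add.at(accum[:, j], rhos + diag, img[ys, xs])
    return accum
```

A straight segment in the image produces a peak in the accumulator at that segment's (rho, theta); the accuracy metrics in the abstract (e.g. the orthotropic approximation error) quantify how well a fast algorithm's digital line patterns approximate such ideal continuous lines.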