Abstract

3D building models play a critical role in smart cities, strongly supporting applications in urban planning, augmented reality and urban event simulation. Driven by their significant role, urban-scale City Geography Markup Language (CityGML) LOD2 building models have been constructed in several developed cities despite their relatively high cost. However, existing single-building reconstruction methods for LOD2 models are unsatisfactory at preserving roof details, and large-scale 3D building reconstruction still requires extensive manual editing. This paper proposes a fully automated framework for generating CityGML LOD2 building models with well-preserved roof details from photogrammetric point clouds derived from aerial oblique images, addressing two key challenges: (1) difficulties in LOD2 building model generation caused by missing facade points in photogrammetric point clouds, and (2) insufficient fidelity of building roof details. Based on the observation that buildings typically exhibit a “roof-vertical walls-ground” structure, the framework infers facade areas from height maps generated from roof point clouds. In addition, the Hypothesis-Selection-Based (HSB) polygonal surface reconstruction framework is extended with a novel voxel depth index that measures the importance of each candidate planar unit in preserving roof details. Experimental comparisons with existing HSB methods and deep-learning-based methods show that the proposed method achieves the best geometric accuracy, with Root Mean Squared Error (RMSE) ranging from 0.157 m to 0.660 m, and the best model coverage, ranging from 75.14% to 93.15%. Reconstruction experiments on two typical datasets, containing 288 and 106 buildings respectively, indicate that our method is competent for large-scale 3D building reconstruction and can thus support various urban computing applications that rely on fine-scale 3D building models.
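To make the "roof-vertical walls-ground" observation concrete, the following is a minimal sketch of inferring facade locations from a roof height map: roof points are rasterized into a max-height grid, and cells where the height drops sharply toward a neighbour (or toward the ground at the building outline) imply a vertical wall below. All names, the cell size, and the height-jump threshold are illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import numpy as np

def roof_height_map(points, cell=0.5):
    """Rasterize roof points (N, 3) into a max-height grid; NaN marks empty cells.
    `cell` is an assumed grid resolution in metres."""
    xy_min = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    nx, ny = ij.max(axis=0) + 1
    hmap = np.full((nx, ny), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        # keep the highest point per cell (a simple roof-surface proxy)
        if np.isnan(hmap[i, j]) or z > hmap[i, j]:
            hmap[i, j] = z
    return hmap

def facade_cells(hmap, ground=0.0, jump=2.0):
    """Flag roof cells whose height exceeds a 4-neighbour by more than `jump`,
    treating empty cells as ground level; such drops imply a facade below.
    Note: np.roll wraps at the grid border, which is acceptable for a sketch
    when the grid is padded with ground cells."""
    mask = np.zeros(hmap.shape, dtype=bool)
    h = np.where(np.isnan(hmap), ground, hmap)
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(h, (di, dj), axis=(0, 1))
        mask |= (h - shifted) > jump
    # only actual roof cells can sit on top of a facade
    mask &= ~np.isnan(hmap)
    return mask
```

Under this sketch, a flat roof bordered by ground-level cells yields a facade mask along the roof outline, which is where wall polygons would be extruded when building the LOD2 solid.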