
Multi-Sensor LiDAR Integration and Interactive Web Visualization of 3D Point Cloud Data

Calvin Wijaya
Ruli Andaru
Harintaka
Bambang Kun Cahyono
Abstract
Point cloud data has become central to 3D modeling and digital twin development in architecture, engineering, and construction. While Light Detection and Ranging (LiDAR) offers high accuracy, the integration of multiple sensor types for building-scale visualization—particularly for interactive 3D web applications—remains limited. This study addresses this gap by combining data from three LiDAR systems—the Leica RTC360 Terrestrial Laser Scanner (TLS), the Topcon GLS2000 TLS, and the DJI Zenmuse L2 Unmanned Laser Scanner (ULS)—to generate a comprehensive 3D representation of the Engineering Research and Innovation Centre (ERIC) at Universitas Gadjah Mada. The instruments were assigned distinct roles: indoor scanning with the RTC360, façade mapping with the GLS2000, and roof capture with the UAV-mounted LiDAR. Data were acquired following geodetic workflows, georeferenced using total stations and GNSS corrections, pre-processed with dedicated software, and aligned via cloud-to-cloud registration in CloudCompare, achieving a Root Mean Square Error (RMSE) of 0.0149 m. Indoor point clouds were classified into eight semantic categories using PointNet and refined manually. The integrated dataset was published in Potree format, enabling interactive web-based exploration with visualization, measurement, clipping, and annotation tools. Results show that multi-sensor LiDAR integration enhances the completeness of building-scale models, while web deployment improves accessibility for diverse stakeholders.
Keywords: LiDAR, Terrestrial Laser Scanning (TLS), Unmanned Laser Scanning (ULS), Multi-Sensor Integration, Point Cloud, 3D Visualization, Web-Based Visualization, Potree, Building Information, Digital Twin
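The abstract reports a cloud-to-cloud registration quality of RMSE = 0.0149 m. As an illustration of what such a figure measures, the sketch below computes a nearest-neighbor RMSE between an aligned cloud and a reference cloud using NumPy and SciPy. This is a generic approximation of the cloud-to-cloud distance idea, not the exact CloudCompare implementation; the function name, the toy data, and the 0.01 m noise level are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(reference: np.ndarray, aligned: np.ndarray) -> float:
    """RMSE of nearest-neighbor distances from each aligned point to the reference cloud.

    A simplified stand-in for a cloud-to-cloud (C2C) quality check:
    for every point in `aligned`, find its closest point in `reference`
    and aggregate the distances into a single RMSE value.
    """
    tree = cKDTree(reference)          # spatial index over the reference cloud
    dists, _ = tree.query(aligned)     # nearest-neighbor distance per point
    return float(np.sqrt(np.mean(dists ** 2)))

# Toy example: a synthetic cloud and a slightly perturbed copy of it
rng = np.random.default_rng(0)
reference = rng.random((1000, 3))                                # points in a unit cube
aligned = reference + rng.normal(scale=0.01, size=reference.shape)  # ~1 cm noise
print(f"C2C RMSE: {cloud_to_cloud_rmse(reference, aligned):.4f} m")
```

In practice, tools such as CloudCompare refine this idea with local surface modeling rather than raw point-to-point distances, so the simple nearest-neighbor RMSE above should be read as an upper-level intuition for the reported 0.0149 m figure, not a reproduction of it.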