Figure 3. (a) Original drone image with two YOLO bounding boxes. (b) Masked image. The first bounding box lies in the dark (masked) area, so its pixels are not mapped to the TP; only the small bounding box on the tower of interest is mapped.
A method for mapping the paint damages found in the 2D images onto a large-scale structure, in this case a transition piece, using a combined AI and color threshold approach is explained in reference [12]. This method has also been applied in this study. The method combines the information found in the metadata of the images with the position of the known 3D model of the large-scale structure. This 3D model can be either a photogrammetry reconstruction of the large-scale structure, see [22] and [23], or a CAD model that has been moved to the correct position in the georeferenced coordinate system used in the images.
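As a minimal sketch of the metadata step, the snippet below reads the camera position recorded by the drone from standard EXIF GPS tags using the exifread library. This is an illustrative assumption, not the exact procedure of [12]; vendor-specific fields such as gimbal orientation are often stored in XMP and are not covered here.

```python
import exifread

def read_drone_position(image_path):
    """Read GPS latitude, longitude and altitude from drone image EXIF metadata."""
    with open(image_path, "rb") as f:
        tags = exifread.process_file(f, details=False)

    def dms_to_deg(values, ref):
        # EXIF stores coordinates as degree/minute/second rationals
        d, m, s = (float(v.num) / float(v.den) for v in values)
        deg = d + m / 60.0 + s / 3600.0
        return -deg if ref in ("S", "W") else deg

    lat = dms_to_deg(tags["GPS GPSLatitude"].values,
                     tags["GPS GPSLatitudeRef"].values)
    lon = dms_to_deg(tags["GPS GPSLongitude"].values,
                     tags["GPS GPSLongitudeRef"].values)
    alt_tag = tags["GPS GPSAltitude"].values[0]
    alt = float(alt_tag.num) / float(alt_tag.den)
    return lat, lon, alt
```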
The purpose of mapping the damages onto the large-scale structure is to create a detailed visual Digital Twin that gives an overview of the defects/damages in 3D, making it possible to identify any systematics in the positions of the damages. This information can be used in the optimization of production. The YOLO algorithm, together with the color threshold algorithm, identifies the pixels in an image that correspond to paint defects/damages; this 2D information is mapped to the 3D CAD or reconstructed model using the equation below.
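For illustration, one standard way to implement such a pixel-to-model mapping is to back-project each damage pixel through a pinhole camera model and intersect the resulting ray with the 3D model by ray casting. The sketch below (using the trimesh library) assumes an intrinsic matrix K, a world-to-camera rotation R and a camera position cam_pos obtained from the image metadata; it is given only as an example of the technique and is not the exact equation of reference [12].

```python
import numpy as np
import trimesh

def map_pixel_to_model(mesh, pixel, K, R, cam_pos):
    """Cast a ray from the camera through an image pixel and return the
    first intersection with the georeferenced 3D model (triangle mesh)."""
    u, v = pixel
    # Back-project the pixel to a viewing direction in camera coordinates
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the direction into the world (georeferenced) frame
    ray_world = R.T @ ray_cam
    ray_world /= np.linalg.norm(ray_world)

    locations, _, _ = mesh.ray.intersects_location(
        ray_origins=[cam_pos], ray_directions=[ray_world])
    if len(locations) == 0:
        return None  # pixel does not hit the structure
    # Keep the intersection closest to the camera (the visible surface point)
    dists = np.linalg.norm(locations - np.asarray(cam_pos), axis=1)
    return locations[np.argmin(dists)]
```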