Figure 1. The flowchart shows the different steps needed to
build a visual Digital Twin of a large-scale structure and subsequently
update the physical structure.
Methodology
The proposed Digital Twin presents the observed damage on a transition
piece. Several steps have been taken to create a highly detailed visual
Digital Twin. The damage is detected with artificial intelligence in
images collected during drone flights. The You Only Look Once (YOLO)
algorithm, see [17] and [18], was selected for its high classification
accuracy and real-time throughput in our specific case. Each image only
needs to pass through the network one time, unlike with other object
detection algorithms, hence the name. YOLO reasons at the level of the
overall image instead of successively examining many regions.
More than 2000 images, labeled with 10 different damage categories,
were used to train the convolutional neural network. Additional training
images were generated from these by varying the contrast, brightness,
and similar properties. This augmentation was done to improve the
resilience of the network to the changes in light conditions caused by
weather and the seasons; a minimal sketch of such an augmentation step
is shown below.
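As an illustration only, the snippet below sketches a brightness and
contrast augmentation step in Python using OpenCV; the library choice,
parameter ranges, and file names are assumptions, not the pipeline used
in this work.

    import cv2
    import numpy as np

    def augment_brightness_contrast(image, alpha_range=(0.7, 1.3), beta_range=(-40, 40)):
        """Return a copy of `image` with randomly perturbed contrast (alpha)
        and brightness (beta): out = clip(alpha * image + beta)."""
        alpha = np.random.uniform(*alpha_range)  # contrast gain
        beta = np.random.uniform(*beta_range)    # brightness offset
        return cv2.convertScaleAbs(image, alpha=alpha, beta=beta)

    # Generate several augmented variants of one labeled drone image
    # (file names are hypothetical).
    image = cv2.imread("drone_image.jpg")
    for i in range(5):
        cv2.imwrite(f"drone_image_aug_{i}.jpg", augment_brightness_contrast(image))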
The images were labeled with a purpose-built semiautomatic tool. The
output from the YOLO algorithm is a bounding box containing
the damage together with confidence levels, see reference [19].
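As a minimal sketch of this single-pass inference, the snippet below
uses the open-source ultralytics package; the package, weights file,
and confidence threshold are assumptions for illustration, since the
exact implementation is not specified here.

    from ultralytics import YOLO

    # Load a detector; in practice this would be the network trained on
    # the ten damage categories (weights file name is hypothetical).
    model = YOLO("damage_detector.pt")

    # A single forward pass per image yields all bounding boxes at once.
    results = model("drone_image.jpg", conf=0.25)

    for result in results:
        for box in result.boxes:
            category = result.names[int(box.cls)]  # damage category
            confidence = float(box.conf)           # confidence level
            x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners
            print(f"{category}: {confidence:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")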
Figure 2 below shows the original drone image together with the content
of the bounding box for four cases where the YOLO algorithm has
identified some kind of defect or damage. The location of the bounding
box is also shown in the drone image. The AI algorithm currently divides
the defects into 10 different categories. These categories include rust,
scuffs, and several paint damage types. Only paint damage examples are
presented. A color threshold algorithm, see [20] and [21], is applied
to the image sections in all the bounding boxes; a minimal sketch of
the thresholding step is given below. The black color in Figure 2 (a)
to (c) shows the segmented pixels, which represent the paint damage in
the image.
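As an illustration, the following sketch segments a bounding-box crop
by thresholding in HSV space with OpenCV; the color space, threshold
values, and file name are assumptions, not the settings of [20] and
[21].

    import cv2
    import numpy as np

    # Image section cut out of one detected bounding box (hypothetical file).
    crop = cv2.imread("bounding_box_crop.jpg")

    # Threshold in HSV space, which separates color from illumination better
    # than raw RGB; the ranges below are placeholders that would be tuned to
    # the paint and substrate colors at hand.
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 0, 0])
    upper = np.array([180, 255, 90])
    mask = cv2.inRange(hsv, lower, upper)  # 255 where a pixel falls in range

    # Coordinates of the segmented (damage) pixels, shown black in Figure 2.
    damage_pixels = np.column_stack(np.nonzero(mask))
    print(f"{len(damage_pixels)} pixels segmented as paint damage")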
The segmented pixels are mapped onto the reconstructed model or onto a
3D CAD model that has been placed in the same georeferenced coordinate
system as the drone images; a sketch of this mapping is given after
this paragraph. The segmentation algorithm can distinguish between
diffuse reflection and paint damage; this can be observed in
Figure 2 (a), where diffuse reflection appears in the upper half of the
image. Shadows in the images are also not a problem for the
segmentation algorithm, as seen in Figure 2 (c), where the upper half
of the bounding box image lies in shadow.
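The mapping step is not detailed above; as a rough sketch under
standard pinhole-camera assumptions, each segmented pixel can be
back-projected along a ray from the georeferenced camera pose and
intersected with the model surface. All intrinsics, pose values, and
the planar surface patch below are hypothetical.

    import numpy as np

    # Hypothetical pinhole intrinsics and a georeferenced, downward-looking
    # drone camera pose (placeholder values).
    K = np.array([[2400.0, 0.0, 960.0],
                  [0.0, 2400.0, 540.0],
                  [0.0, 0.0, 1.0]])
    R = np.array([[1.0, 0.0, 0.0],           # world-to-camera rotation:
                  [0.0, -1.0, 0.0],          # the camera looks straight
                  [0.0, 0.0, -1.0]])         # down the world z-axis
    cam_center = np.array([0.0, 0.0, 50.0])  # drone position, world frame

    def pixel_to_model_point(u, v, plane_point, plane_normal):
        """Back-project pixel (u, v) to a world-frame ray and intersect it
        with a locally planar patch of the structure's surface model."""
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
        ray_world = R.T @ ray_cam
        ray_world /= np.linalg.norm(ray_world)
        # Solve (cam_center + s * ray_world - plane_point) . plane_normal = 0.
        s = np.dot(plane_point - cam_center, plane_normal) / np.dot(ray_world, plane_normal)
        return cam_center + s * ray_world

    # Map one segmented damage pixel onto a horizontal patch at z = 0.
    point_3d = pixel_to_model_point(1012, 640,
                                    plane_point=np.zeros(3),
                                    plane_normal=np.array([0.0, 0.0, 1.0]))
    print("3D location of damage pixel:", point_3d)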