Figure 3
(Left) Typical process for ground-truth labeling and subsequent training of a CNN model for DL segmentation of Zernike-nanoCT-imaged osteocytic bone. (Right) Flow chart for generating progressively larger amounts of ground-truth data. In stage 1, 20 images from the input 3D dataset were manually segmented to produce labeled (ground-truth) data, which were used to train and validate the Sensor3D and U-Net models. In stage 2, the trained Sensor3D model was used to segment 50 images from the 3D data; these roughly segmented images were corrected for mislabeled pixels and used as new data to train and validate new Sensor3D and U-Net models. In stage 3, the trained Sensor3D model was used to segment 70 images from the 3D data; these labeled images were corrected and selected as ground-truth images, which were used to train and validate new models. The final labeled results, comprising the bone, shade-off and LCN classes, were filtered using the 'remove islands' and 'closing' morphological operations.
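The final filtering step described above can be illustrated with a minimal sketch in Python, assuming the multi-class segmentation output is stored as a 3D NumPy integer label volume. The class label values, the minimum-island size, and the structuring-element radius below are illustrative assumptions, and scikit-image's `remove_small_objects` and `binary_closing` stand in for whatever 'remove islands' and 'closing' implementation was actually used.

```python
import numpy as np
from skimage.morphology import remove_small_objects, binary_closing, ball

# Hypothetical label encoding for the three segmented classes; the actual
# values depend on how the Sensor3D/U-Net output was exported.
BONE, SHADE_OFF, LCN = 1, 2, 3


def clean_labels(labels, min_island_voxels=64, closing_radius=1):
    """Apply 'remove islands' and 'closing' to each class of a 3D label volume.

    labels            : integer array of shape (z, y, x) with class labels
    min_island_voxels : connected components smaller than this are removed
    closing_radius    : radius of the spherical structuring element for closing
    """
    cleaned = np.zeros_like(labels)
    footprint = ball(closing_radius)
    for cls in (BONE, SHADE_OFF, LCN):
        mask = labels == cls
        # Remove isolated speckles ("islands") below the size threshold.
        mask = remove_small_objects(mask, min_size=min_island_voxels)
        # Morphological closing fills small gaps and smooths class boundaries.
        mask = binary_closing(mask, footprint)
        # Note: later classes overwrite earlier ones where closing overlaps.
        cleaned[mask] = cls
    return cleaned


if __name__ == "__main__":
    # Toy random volume standing in for a Zernike-nanoCT label stack.
    rng = np.random.default_rng(0)
    toy = rng.integers(0, 4, size=(32, 64, 64))
    print(np.unique(clean_labels(toy)))
```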