Figure 6
Example images and classifications for the shade-off (brown), bone (green), and LCN (blue) classes on an ANATOMIX dataset, comparing the results of Otsu thresholding, the Sensor3D model (batch size 32, training set size 70), and the U-Net model (batch size 64, training set size 70) against manual segmentation (ground truth). Standard Otsu thresholding performs worst, mislabeling the bone class as background because gray-value segmentation yields ambiguous results. Both the Sensor3D and U-Net models correctly segment the voids, bone, and shade-off regions, which Otsu thresholding and other simple segmentation methods cannot.