
Figure 2
(a) Schematic structure of the U-Net. Concatenations between symmetric layers are the key feature distinguishing U-Nets from standard convolutional neural networks and autoencoders. In our model we also applied dropout layers in the encoder for regularization; dropout additionally acts as implicit data augmentation. (b) Examples of loss functions obtained during training for denoising at an intensity factor of 10. The minibatch loss quickly drops and then oscillates within the range of the validation loss. Each sudden change in the minibatch loss after 294 steps (one epoch) is due to the effect of the dropout layers. The minibatch loss is reported for better visualization of the loss decay. The validation loss decays slowly and mostly overlaps the training loss.
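The defining U-Net feature named in the caption, channel-wise concatenation between symmetric encoder and decoder levels, can be illustrated with a minimal, framework-free shape-level sketch. The depth, layer widths and input size below are hypothetical, not taken from the article; convolutions are elided so only the shape bookkeeping of the skip connections is shown.

```python
# Shape-level sketch of a 3-level U-Net forward pass.
# Feature maps are represented as (channels, height, width) tuples;
# we only track how concatenation between symmetric layers doubles
# the number of channels entering each decoder stage.

def down(shape, out_ch):
    """Conv + pooling stage: change channels, halve spatial dims."""
    _, h, w = shape
    return (out_ch, h // 2, w // 2)

def up(shape, out_ch):
    """Upsampling stage: change channels, double spatial dims."""
    _, h, w = shape
    return (out_ch, h * 2, w * 2)

def concat(a, b):
    """Channel-wise concatenation of two feature maps (the U-Net skip)."""
    assert a[1:] == b[1:], "spatial dims must match to concatenate"
    return (a[0] + b[0], a[1], a[2])

x = (1, 64, 64)                  # single-channel input image (hypothetical size)
e1 = down(x, 32)                 # encoder level 1: (32, 32, 32)
e2 = down(e1, 64)                # encoder level 2: (64, 16, 16)
bottleneck = down(e2, 128)       # bottleneck:      (128, 8, 8)

d2 = concat(up(bottleneck, 64), e2)   # skip from e2 -> (128, 16, 16)
d1 = concat(up(d2, 32), e1)           # skip from e1 -> (64, 32, 32)
out = up(d1, 1)                       # output matches input: (1, 64, 64)

print(out)  # -> (1, 64, 64)
```

The assertion in `concat` makes the structural constraint explicit: a skip connection is only possible between layers at the same spatial resolution, which is why the concatenations pair symmetric levels of the encoder and decoder.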

Journal of Applied Crystallography, ISSN: 1600-5767