Figure 3
Architecture of the CNN, composed of 22 layers. The feature-extraction section consists of three convolutional blocks, each formed by a Conv1D layer followed by activation, dropout and average pooling layers. The number of Conv1D filters is 80 in the first block and increases by the same amount in each subsequent block, reaching 240 in the last one. The kernel size starts at 200 and is divided by 2 in the second block and by 4 in the third (i.e. 200, 100 and 50). Other parameters include sub-sample length = 2, padding = 'same' and activation function = 'relu'. The dropout rate is 0.3 in each block, and the average pooling 1D layers use a pool size of 3. A flatten layer is followed by the classification section, consisting of four densely connected blocks, each formed by a dense layer followed by a batch normalization layer. The numbers of neurons in the dense layers are 2800, 1400, 700 and 70. Each dense layer uses an l2 kernel regularizer and the 'relu' activation function, except for the last one, which uses 'tanh'. The last block is followed by the output layer of seven units (one for each crystal class), with the 'softmax' activation function, to ensure that the sum of the seven output values is always equal to 1.
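To make the layer-by-layer description concrete, the following is a minimal Keras sketch of the architecture as described in the caption, not the authors' original code. The filter counts, kernel sizes, dropout rate, pooling size, dense-layer widths, regularizer and activations follow the caption; the 1D input length and the l2 regularization strength are not given there, so the values below (INPUT_LENGTH and Keras' default l2 factor) are placeholders, and 'sub-sample length = 2' is interpreted as a stride of 2.

```python
from tensorflow.keras import layers, models, regularizers

INPUT_LENGTH = 10000   # placeholder: the caption does not specify the 1D input size
N_CLASSES = 7          # one output unit per crystal class


def build_cnn(input_length=INPUT_LENGTH, n_classes=N_CLASSES):
    """Sketch of the 22-layer CNN described in the Figure 3 caption."""
    model = models.Sequential()
    model.add(layers.Input(shape=(input_length, 1)))

    # Feature-extraction section: three convolutional blocks.
    # Filters 80 -> 160 -> 240; kernel sizes 200 -> 100 -> 50;
    # 'sub-sample length = 2' taken as a stride of 2, padding 'same'.
    for filters, kernel_size in [(80, 200), (160, 100), (240, 50)]:
        model.add(layers.Conv1D(filters, kernel_size, strides=2, padding='same'))
        model.add(layers.Activation('relu'))
        model.add(layers.Dropout(0.3))
        model.add(layers.AveragePooling1D(pool_size=3))

    model.add(layers.Flatten())

    # Classification section: four dense blocks, each Dense + BatchNormalization.
    # 'relu' in the first three dense layers, 'tanh' in the last one; the l2
    # strength is unspecified in the caption, so Keras' default value is used.
    for units, activation in [(2800, 'relu'), (1400, 'relu'),
                              (700, 'relu'), (70, 'tanh')]:
        model.add(layers.Dense(units, activation=activation,
                               kernel_regularizer=regularizers.l2()))
        model.add(layers.BatchNormalization())

    # Output layer: seven units with softmax, so the seven outputs sum to 1.
    model.add(layers.Dense(n_classes, activation='softmax'))
    return model


if __name__ == '__main__':
    build_cnn().summary()
```

With the activation layers kept separate from the Conv1D layers, as in the caption, the block structure reproduces the stated 22-layer count: 3 x 4 convolutional-block layers, one flatten layer, 4 x 2 dense-block layers and the output layer.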
