A semi-supervised deep-learning approach for automatic crystal structure classification

aPoolesville High School, Poolesville, MD 20837, USA, bNIST Center for Neutron Research, NIST, Gaithersburg, MD 20899, USA, cDepartment of Materials Science and Engineering, University of Maryland, College Park, MD 20742, USA, dMaterials Measurement Laboratory, NIST, Gaithersburg, MD 20899, USA, and eMaryland Quantum Materials Center, College Park, MD 20742, USA
*Correspondence e-mail: william.ratcliff@nist.gov
The structural solution problem can be a daunting and time-consuming task. Especially in the presence of impurity phases, current methods, such as indexing, become more unstable. In this work, the novel approach of semi-supervised learning is applied towards the problem of identifying the Bravais lattice and the space group of inorganic crystals. The reported semi-supervised generative deep-learning model can train on both labeled data, i.e. diffraction patterns with the associated Bravais lattice/space group, and unlabeled data, i.e. diffraction patterns that lack this information. This approach allows the models to take advantage of the troves of unlabeled data that current supervised learning approaches cannot, which should result in models that generalize more accurately to real data. In this work, powder diffraction patterns are classified into all 14 Bravais lattices and 144 space groups (the number is limited due to sparse coverage in databases), which covers more crystal classes than other studies. The reported models also outperform current deep-learning approaches for both Bravais lattice and space-group classification while using fewer training data.

Keywords: machine learning; powder neutron diffraction; semi-supervised; indexing.
1. Introduction
The first step towards understanding the properties of a crystalline material at a microscopic level is identifying its crystal structure. However, this is nontrivial. The first part of structure determination is indexing. There are several programs which can be used, such as DICVOL06 (Boultif & Louër, 1991), TOPAS (Coelho, 2018), GSAS-II (Toby & Von Dreele, 2013) or N-TREOR (Werner et al., 1985; Altomare et al., 2000). These programs output a set of space groups and lattice parameters that could represent the crystal. Using Le Bail (Le Bail et al., 1988) and Pawley (Pawley, 1981) refinements, the unit cell that fits the diffraction pattern the best can be identified. Rietveld refinement (Rietveld, 1967, 1969) can then be applied to refine the lattice parameters and check the space group. In the presence of impurity phases, this approach becomes more expensive, as peaks must be selected manually or tolerance levels must be tuned to discard a certain number of peaks.

One of the approaches for identifying the positions of atoms in a crystal is the charge-flipping algorithm (CFA) (Oszlányi & Sütő, 2008; Palatinus, 2013; Baerlocher et al., 2007). CFA is an iterative approach that relies on fast Fourier transforms to determine the electron density of a material (Nussbaumer, 1981; Palatinus & Chapuis, 2007). For CFA, the unit cell and space group have to be known, that is, we must already have been successful with some degree of indexing. CFA also cannot handle impurity phases, which are prevalent in many real-world samples.
Data science methods are being used increasingly in materials development (Balachandran, 2020; Vandermause et al., 2020; Reyes & Maruyama, 2019; Karigerasi et al., 2018). An example of this is the use of supervised neural networks (NNs) to analyze diffraction patterns. Supervised learning is an approach that seeks to learn a functional mapping between data and their labels. The benefit of NNs is that they, unlike CFA, do not require additional parameters, such as the space group or lattice parameters. Although some approaches use NNs to aid in the structure solution (Ozaki et al., 2020; Chang et al., 2020; Schmidt et al., 2019; Schleder et al., 2019), others use NNs to classify diffraction patterns on the basis of their crystal symmetry. These classifiers can be trained to identify impurity phases and can be tailored towards specific detectors or parameters. For example, Ryu et al. (2019) trained an NN to classify the diffraction patterns of crystals that had defects. Liu et al. (2019) used the pair-density function with powder neutron diffraction data for space-group classification. Ziletti et al. (2018) used a convolutional neural network to classify simulated single-crystal X-ray diffraction image data into eight space groups.
A number of studies represent powder diffraction patterns as 2D images. However, the information is inherently one dimensional. Previous groups probably used the image approach to take easy advantage of pretrained models developed by the machine-learning community. Unfortunately, this could introduce more complexity to the model. Garcia-Cardona et al. (2019) carried out one of the few studies to examine neutron scattering data, using a 1D approach with simulated powder diffraction data both to differentiate perovskites into five crystal systems and to tune the lattice parameters using regression. That study only looked at a small subset of crystals.
A significant challenge with NNs is that they struggle to generalize to new data sets. Most models that predict the space group of a material use fewer than 100 space groups in their training data set, which limits their application to new diffraction patterns. Moreover, large labeled diffraction data sets are rare, as labeling them is an expensive task. For this reason, we use a semi-supervised model, which takes advantage of both labeled and unlabeled data during training (Odena, 2016; Zhu & Goldberg, 2009; Kingma et al., 2014; Kipf & Welling, 2016). We employ a generative network that can extract features from the unlabeled data distribution and match these features with the corresponding labels. This allows semi-supervised learning to be used on more data sets, especially ones where labels are not available. In this work, we apply this semi-supervised approach to Bravais lattice and space-group classification using powder neutron diffraction data. Our NNs are trained with data spanning 144 space groups and 14 Bravais lattices. The models used in this study are freely available and can be downloaded (Lolla & Liang, 2021).

2. Methods
2.1. Data
To test our approach under conditions where we know the correct answer, we worked with simulated data sets. Our data were taken from the Inorganic Crystal Structure Database (ICSD; https://www.psds.ac.uk/icsd), which contains structural information about more than 210 000 crystals (Bergerhoff et al., 1983). A total of 138 362 diffraction patterns were simulated using TOPAS (Coelho, 2018). For the Bravais lattices, we combine the rhombohedral and tetragonal classes for a total of 14 classes, with `F', `I', `P' and `C' representing the face-centered, body-centered, primitive and base-centered lattices, respectively. We note that there is an inherent class imbalance in the ICSD, as shown in Fig. 1. The most prevalent classes in this data set were the primitive hexagonal, the face-centered cubic and the primitive orthorhombic lattices. The least represented lattices were the face-centered orthorhombic and body-centered orthorhombic lattices.

For the space groups, we used 136 454 of the simulated diffraction patterns. We used only the space groups that had more than 50 diffraction patterns, leaving us with 144 out of the 225 space groups present in the ICSD. The most frequent space group is No. 62 (Pnma), which is orthorhombic, and accounts for the disproportionately large number of orthorhombic (P) diffraction patterns in the ICSD data set. A complete list of the 144 space groups used is shown in Table 1. A complete list of the ICSD IDs used in this study can be found in the GitHub repository at the URL https://github.com/usnistgov/semi-supervised-neutron (Lolla & Liang, 2021).
In this study, we use a 1D approach rather than the traditional 2D image approach. Our data set consists of diffraction patterns of powders. Examples of the 1D diffraction patterns are shown in Fig. 2. To normalize these diffraction patterns, we divided all intensities in each diffraction pattern by the maximum intensity. This ensures that the new maximum intensity is equal to 1 and the minimum is equal to 0.
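As a minimal sketch of this normalization step (the function and variable names are illustrative and not taken from the released code):

```python
import numpy as np

def normalize(pattern):
    """Scale a 1D diffraction pattern so that its maximum intensity equals 1."""
    return pattern / pattern.max()

raw = np.array([120.0, 450.0, 3000.0, 800.0, 60.0])  # toy raw intensities
print(normalize(raw))  # maximum becomes 1.0; other values scale proportionally
```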
2.2. Models
We use two approaches to classify the diffraction patterns: a supervised approach using convolutional neural networks (CNNs) and a semi-supervised approach using a semi-supervised generative adversarial network (SGAN).
2.3. Supervised model
We used a 1D ResNet-18 residual network model (He et al., 2016) to identify the Bravais lattice or space group of the diffraction patterns. ResNets are examples of CNNs, which are commonly used for image classification. CNNs consist of convolutional layers, which are responsible for extracting high-level features, such as edges and colors, from images. These layers are used to create a feature map consisting of the most relevant characteristics of the input. To create this map using 1D data, a filter of size n is applied to a larger sequence of size m, and the dot product of every n consecutive values and the filter is computed. This generates a smaller matrix that only includes the relevant features (LeCun & Bengio, 1995).
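The following toy example illustrates the 1D convolution described above; the helper function and the filter values are hypothetical and are not part of the released model code.

```python
import numpy as np

def conv1d_valid(sequence, kernel):
    """Slide a length-n filter along a length-m sequence and take dot products,
    producing one output value per window of n consecutive points (m - n + 1 in total)."""
    m, n = len(sequence), len(kernel)
    return np.array([np.dot(sequence[i:i + n], kernel) for i in range(m - n + 1)])

pattern = np.array([0.0, 0.1, 0.9, 1.0, 0.4, 0.1, 0.0])  # toy normalized intensities
edge_filter = np.array([-1.0, 0.0, 1.0])                  # responds to rising/falling edges
print(conv1d_valid(pattern, edge_filter))
```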
A ResNet was used in this study to overcome the degradation problem, which occurs when neural networks become too deep: the accuracy saturates and then quickly degrades (He et al., 2016). ResNets are characterized by their residual blocks, which contain convolutional layers combined with an identity (shortcut) mapping. Fig. 3 shows the model architecture used for the ResNet-18, and includes an example of a ResNet block used in this model. During training, we randomly selected 90% of the data to use as the training set and the remaining 10% of the data were used to test the model. This testing data set was distinct from the training one, so the model did not learn from the testing data. These models and the associated training scripts are available on GitHub (Lolla & Liang, 2021).
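For illustration, a minimal 1D residual block in PyTorch is sketched below, assuming the basic-block layout of He et al. (2016); the layer sizes are arbitrary and do not reproduce the exact configuration of Fig. 3.

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """Basic 1D residual block: two conv layers plus an identity (or projected) skip."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm1d(out_channels)
        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # Project the skip connection when the shape changes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv1d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm1d(out_channels))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))  # residual addition
```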
2.4. Semi-supervised model
We also used an SGAN (Odena, 2016; Goodfellow et al., 2020; Salimans et al., 2016). The SGAN consists of two models: a Generator and a Discriminator. The Generator tries to fool the Discriminator with fake diffraction patterns, while the Discriminator aims to differentiate between real and fake diffraction patterns. The Discriminator also classifies the real labeled data into the corresponding class.
2.4.1. Generator
The purpose of the Generator is to sample the latent space, a high-dimensional feature space, to generate realistic diffraction patterns. The inputs to the Generator were sampled from a random normal distribution with a mean of 0 and a standard deviation of 1. The Generator consists of 1D Convolutional Transpose layers, 1D Batch Normalization layers and a Leaky rectified linear unit (ReLU) activation function. The Convolutional Transpose layers are used to upsample the data (Radford et al., 2015; Dumoulin & Visin, 2016). The Batch Normalization layers standardize the output of each layer, which reduces error when the model tries to generalize to new inputs (Ioffe & Szegedy, 2015) and has also been shown to reduce mode collapse, a major problem in GANs (Radford et al., 2015). Mode collapse occurs when the Generator only produces a few distinct diffraction patterns despite the latent space input. The Leaky ReLU (Xu et al., 2015) activation with α = 0.2 is used rather than ReLU to reduce the vanishing gradients problem (Radford et al., 2015). Graphs of the ReLU and the Leaky ReLU activation functions are shown in Fig. 4. For negative values, the derivative for the Leaky ReLU function is equal to α, but for the ReLU function, it is equal to 0. By having a nonzero derivative for all values, the Leaky ReLU is used to combat the sparse gradient problem that occurs while training GANs. Due to our normalization method, which was dividing all values in a diffraction pattern by the maximum intensity, the Discriminator's inputs were in the range from 0 to 1. For this reason, a sigmoid activation was applied to the last layer of our Generator, rather than the hyperbolic tangent function recommended by Salimans et al. (2016). Fig. 5 shows the model architecture of the Generator.
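A minimal PyTorch sketch of such a Generator is given below; the latent dimension, channel counts and output length are placeholders rather than the values of the architecture in Fig. 5.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the Generator: latent vector -> 1D 'diffraction pattern' in [0, 1]."""
    def __init__(self, latent_dim=100, out_length=1024):
        super().__init__()
        self.out_length = out_length
        self.fc = nn.Linear(latent_dim, 64 * out_length // 8)
        self.net = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=4, stride=2, padding=1),  # upsample x2
            nn.BatchNorm1d(32),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1),  # upsample x2
            nn.BatchNorm1d(16),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(16, 1, kernel_size=4, stride=2, padding=1),   # upsample x2
            nn.Sigmoid(),  # match the 0-to-1 intensity normalization
        )

    def forward(self, z):
        x = self.fc(z).view(z.size(0), 64, self.out_length // 8)
        return self.net(x)

z = torch.randn(8, 100)          # latent vectors drawn from N(0, 1)
fake_patterns = Generator()(z)   # shape (8, 1, 1024)
```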
2.4.2. Discriminator
The Discriminator has two objectives: to differentiate between real and generated data, and to classify the real data into the correct class. To do this, we used the same 1D ResNet-18 model described in Section 2.3, but applied an activation function to the last fully connected layer, as proposed by Salimans et al. (2016). This activation function is shown in equation (1) and is a version of the softmax activation:

\[ D(x) = \frac{Z(x)}{Z(x) + 1}, \qquad Z(x) = \sum_{k=1}^{C} \exp\!\left[l_k(x)\right]. \tag{1} \]
In this equation, l_k(x) represents the logit for class k with data x, and C is the number of classes. By doing this, we eliminate the need for a second output layer and instead use only the logits from the classification layer. By applying this activation function, diffraction patterns with larger logits, which signify more confident predictions, will be classified as `real', whereas diffraction patterns with smaller logits will be classified as `fake'. This encourages the Discriminator to be more confident in its predictions, which sharpens the decision boundary between classes. The Discriminator's architecture is shown in Fig. 6.
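In code, this activation can be evaluated directly from the classification logits; the sketch below is an illustration (computed via logsumexp for numerical stability) rather than the released implementation.

```python
import torch

def real_probability(logits):
    """Salimans et al. (2016) activation: D(x) = Z(x) / (Z(x) + 1), Z(x) = sum_k exp(l_k(x)).

    Since exp(logsumexp(logits)) = Z(x), this equals sigmoid(logsumexp(logits)).
    """
    return torch.sigmoid(torch.logsumexp(logits, dim=1))

logits = torch.tensor([[4.0, -1.0, 0.5],     # confident logits -> probability near 1 ('real')
                       [-2.0, -1.5, -3.0]])  # small logits -> probability near 0 ('fake')
print(real_probability(logits))
```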
While training the Discriminator, there are two modes: supervised and unsupervised. During unsupervised training, the Discriminator acts the same way it would in a regular GAN as it tries to determine that the generated diffraction patterns are fake and the data drawn from the unsupervised set is real. In the supervised mode, the Discriminator is trained to predict the class label for real samples. Training in the unsupervised mode can help the Discriminator extract features from the data, and training on the supervised data will allow the Discriminator to use those extracted features for classification.
2.4.3. Loss functions and objective functions
The modified min–max loss proposed by Goodfellow et al. (2020) was used for the adversarial loss between the networks. The objective function that the Generator tries to maximize is shown in equation (2):

\[ J^{(G)}(\theta_g) = \frac{1}{m} \sum_{i=1}^{m} \log D\!\left(G\!\left(z^{(i)}\right)\right). \tag{2} \]

Here, \theta_g represents the parameters in the Generator and z^{(i)} represents the random values in the latent space. G(z^{(i)}) is the generated diffraction pattern from the Generator and D(G(z^{(i)})) is the probability that the Discriminator predicts that the generated pattern is real.
For the Discriminator, the objective function to be maximized is shown in equation (3):

\[ J^{(D)}(\theta_d) = \frac{1}{m} \sum_{i=1}^{m} \left\{ \log D\!\left(x^{(i)}\right) + \log\!\left[1 - D\!\left(G\!\left(z^{(i)}\right)\right)\right] \right\} - \sum_{i=1}^{C} t_i \log(s_i). \tag{3} \]

Here, \theta_d represents the parameters in the Discriminator, x^{(i)} is the unsupervised real data, and maximizing D(x^{(i)}) implies that the model can identify real data. As in the Generator's objective function, z^{(i)} represents the random values in the latent space and G(z^{(i)}) is the generated diffraction pattern from the latent space. Increasing the value of log[1 − D(G(z^{(i)}))] shows that the Discriminator can determine that the generated patterns are fake. Equation (3) also includes the categorical cross-entropy loss, which is shown in the term −Σ_{i=1}^{C} t_i log(s_i). Here, C represents the number of classes, t_i indicates whether the ith class is the label of the diffraction pattern and s_i is the Discriminator's predicted probability for that class.
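A compact PyTorch sketch of these losses, written as quantities to be minimized (i.e. the negated objectives) and reusing the logit-based activation above, is shown below; the function names and the small epsilon added for numerical stability are our own additions.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits_real_unlab, logits_fake, logits_labeled, labels):
    """Discriminator losses for one step: unsupervised real/fake terms plus
    the categorical cross-entropy on the labeled batch."""
    d_real = torch.sigmoid(torch.logsumexp(logits_real_unlab, dim=1))
    d_fake = torch.sigmoid(torch.logsumexp(logits_fake, dim=1))
    unsup = -(torch.log(d_real + 1e-8).mean() + torch.log(1 - d_fake + 1e-8).mean())
    sup = F.cross_entropy(logits_labeled, labels)  # categorical cross-entropy term
    return unsup + sup

def generator_loss(logits_fake):
    """Modified (non-saturating) min-max loss: maximize log D(G(z)), so minimize its negative."""
    d_fake = torch.sigmoid(torch.logsumexp(logits_fake, dim=1))
    return -torch.log(d_fake + 1e-8).mean()
```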
2.4.4. Training details
Fig. 7 shows the training pipeline used in the SGAN. During SGAN training, the Discriminator has three inputs: a generated sequence from the Generator, a powder diffraction pattern that is labeled with either the Bravais lattice or the space group, and an unlabeled powder diffraction pattern that does not include this information. We train our SGAN using four different amounts of labeled training data. In all scenarios, we randomly select 10% of the data as testing data, which is distinct from the labeled training data and the unlabeled training data. In the first scenario, we use 5% of the data as labeled training data and 85% as unlabeled training data. In the second, we use 10% of the data as labeled training data and 80% as unlabeled training data. In the third, we use 25% of the data as labeled training data and 65% of the data as unlabeled training data, and finally we use 50% of the data as labeled training data with 40% of the data as unlabeled training data. We also train our supervised ResNet with the same 5, 10, 25 and 50% of the data to compare the accuracy of the SGAN with that of the purely supervised approach.
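The split into testing, labeled and unlabeled subsets can be sketched as follows for the 5%/85% scenario; the `patterns` array and the fixed random seed are placeholders for illustration only.

```python
import numpy as np

patterns = np.zeros((1000, 1024))       # placeholder for the simulated diffraction patterns
rng = np.random.default_rng(0)
idx = rng.permutation(len(patterns))

n_test = int(0.10 * len(patterns))      # 10% held out for testing in every scenario
n_labeled = int(0.05 * len(patterns))   # 5% labeled here; other runs use 10%, 25% or 50%

test_idx = idx[:n_test]
labeled_idx = idx[n_test:n_test + n_labeled]
unlabeled_idx = idx[n_test + n_labeled:]  # remaining 85% used without labels
```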
To train a supervised classifier, we use only the corresponding percentage of labeled training data. The model takes a powder diffraction pattern as input and aims to differentiate between the various Bravais lattice or space-group classes.

Table 2 shows the hyperparameters used in the ResNet and the SGAN.
We used PYTORCH (Paszke et al., 2017) as a deep-learning framework. To accelerate training, each model was trained on eight NVIDIA Tesla V100 Tensor Cores.
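One common way to spread PyTorch training across several GPUs is nn.DataParallel; the snippet below is only a generic sketch (reusing the residual-block sketch above as a stand-in model) and does not reproduce our exact multi-GPU setup.

```python
import torch
import torch.nn as nn

model = ResidualBlock1D(1, 16)      # stand-in for the full ResNet-18 or SGAN models
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicate the model across all visible GPUs
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```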
3. Results and discussion
3.1. Supervised model
Our supervised ResNet trained on 90% of the data set had an accuracy of 88%. The confusion matrix for the Bravais lattice model is shown in Fig. 8. By plotting the predicted Bravais lattice against the actual one, the confusion matrix provides more information about the sets of classes that the network misclassified. If the model had a perfect testing accuracy, the values along the principal diagonal would sum to 100%, as the network would have classified every diffraction pattern correctly. Again, there is a clear imbalance in the sampled ICSD data set, with orthorhombic (F) and orthorhombic (I) having the fewest samples. From the confusion matrix, we can see that, despite the fact that orthorhombic (P) is the most prevalent class, the model misclassifies some of these as monoclinic (P) crystals. The network also has trouble differentiating between triclinic (P) and monoclinic (P) diffraction patterns, as both of these classes have low symmetries, agreeing with previous studies (Garcia-Cardona et al., 2019). Similarly to Suzuki et al. (2020), we believe that this result was caused by undersampling the triclinic crystals.

For the space-group identification, our model had a top-1 accuracy of 80.6% and a top-5 accuracy of 90.27% across all 144 space groups. We also trained our model on all 230 space groups and found that it had a top-1 accuracy of 74% and a top-5 accuracy of 85%. We decided to investigate the model further on only the 144 most prevalent space groups within the data set, due to a major class imbalance: some space groups had fewer than 50 diffraction patterns, less than 0.03% of our data set. Accuracy was measured by dividing the number of correctly classified diffraction patterns in the testing set by the total number of patterns in the testing set. Top-5 accuracy is the percentage of samples for which the actual space group was one of the model's top five predictions. This outperforms most current models of which we are aware. Liu et al. (2019) used machine learning with a pairwise distribution function and achieved a top-1 accuracy of 71% and a top-5 accuracy of 90% across 45 space groups. Tiong et al. (2020) classified X-ray diffraction data into 8, 20, 49 and 72 space groups (Table 3). Their accuracy decreased from 99 to 80% for 8 and 72 space groups, respectively, implying that this accuracy would decrease further if their model were trained on more space groups. Aguiar et al. (2019) had a top-2 accuracy greater than 80% across all space groups, but used a data set consisting of 650 000 diffraction patterns, more than five times the size of the data set used in this study. However, they used a 1D network, suggesting that a 1D approach can lead to more accurate predictions. We note that we did not take advantage of data augmentation.
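Top-1 and top-5 accuracy as defined above can be computed from the classifier logits as in the following sketch; the function and the toy tensors are illustrative only.

```python
import torch

def top_k_accuracy(logits, targets, k=5):
    """Fraction of samples whose true class is among the k highest-scoring predictions."""
    topk = logits.topk(k, dim=1).indices            # (n_samples, k) predicted class indices
    hits = (topk == targets.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()

logits = torch.randn(4, 144)                        # toy scores over 144 space-group classes
targets = torch.tensor([3, 61, 14, 140])
print(top_k_accuracy(logits, targets, k=1), top_k_accuracy(logits, targets, k=5))
```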
3.2. Semi-supervised model
We compare the accuracy of the SGAN with the accuracy of the supervised model in Table 4. The SGAN consistently outperforms the purely supervised model, showing that the semi-supervised approach has the potential to be more applicable in the real world. A graph comparing the accuracy of the supervised and semi-supervised models is shown in Fig. 9. This graph shows that although the accuracy of the SGAN is impacted by a lack of data, the difference between the accuracy of the SGAN and the accuracy of the supervised model is greatest when only 5% of the data are used.
4. Conclusion
In this study, we use both CNNs and a semi-supervised GAN to investigate supervised and semi-supervised approaches for crystal symmetry classification. We demonstrate that SGANs can prove to be more accurate with limited quantities of labeled data for both Bravais lattice and space-group classification. Further, we explore a 1D approach rather than a traditional 2D one. Our 1D model is more accurate than 2D image models, which agrees with previous results in the literature. Our semi-supervised model is also more applicable to real data sets, which will lack large quantities of labeled data.

In the future, we would like to train the SGAN to identify impurity phases and to test the method on real data sets.
Acknowledgements
The authors are grateful to Austin McDannald and Hui Wu from NIST, and Brian Toby from Argonne National Laboratory. We acknowledge Stephan Rühl for the ICSD. The work at the University of Maryland was supported by NIST. Commercial disclaimer: Any mention of commercial products within this article is for information only; it does not imply recommendation or endorsement by NIST.
Funding information
Funding for this research was provided by the National Institute of Standards and Technology (grant No. 60NANB19D027). We also acknowledge support from the Center for High Resolution Neutron Scattering (CHRNS), a national user facility jointly funded by the NCNR and the NSF under agreement No. DMR-2010792.
References
Aguiar, J., Gong, M. L., Unocic, R., Tasdizen, T. & Miller, B. (2019). Sci. Adv. 5, eaaw1949.
Altomare, A., Giacovazzo, C., Guagliardi, A., Moliterni, A. G. G., Rizzi, R. & Werner, P.-E. (2000). J. Appl. Cryst. 33, 1180–1186.
Baerlocher, C., McCusker, L. B. & Palatinus, L. (2007). Z. Kristallogr. 222, 47–53.
Balachandran, P. V. (2020). MRS Bull. 45, 579–586.
Bergerhoff, G., Hundt, R., Sievers, R. & Brown, I. (1983). J. Chem. Inf. Comput. Sci. 23, 66–69.
Boultif, A. & Louër, D. (1991). J. Appl. Cryst. 24, 987–993.
Chang, M.-C., Wei, Y., Chen, W.-R. & Do, C. (2020). MRS Commun. 10, 11–17.
Coelho, A. A. (2018). J. Appl. Cryst. 51, 210–218.
Dumoulin, V. & Visin, F. (2016). arXiv:1603.07285.
Garcia-Cardona, C., Kannan, R., Johnston, T., Proffen, T., Page, K. & Seal, S. K. (2019). IEEE International Conference on Big Data (Big Data), pp. 4490–4497. New York: IEEE.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. (2020). Commun. ACM, 63, 139–144.
He, K., Zhang, X., Ren, S. & Sun, J. (2016). Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Los Alamitos: IEEE Computer Society.
Ioffe, S. & Szegedy, C. (2015). Proceedings of the 32nd International Conference on Machine Learning, pp. 448–456. PMLR.
Karigerasi, M. H., Wagner, L. K. & Shoemaker, D. P. (2018). Phys. Rev. Mater. 2, 094403.
Kingma, D. P., Mohamed, S., Rezende, D. J. & Welling, M. (2014). Advances in Neural Information Processing Systems, edited by Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence & K. Q. Weinberger, pp. 3581–3589. Curran Associates Inc.
Kipf, T. N. & Welling, M. (2016). arXiv:1609.02907.
Le Bail, A., Duroy, H. & Fourquet, J. (1988). Mater. Res. Bull. 23, 447–452.
LeCun, Y. & Bengio, Y. (1995). The Handbook of Brain Theory and Neural Networks. Cambridge: MIT Press.
Liu, C.-H., Tao, Y., Hsu, D., Du, Q. & Billinge, S. J. L. (2019). Acta Cryst. A75, 633–643.
Lolla, S. & Liang, H. (2021). Semi-supervised Neutron, https://github.com/usnistgov/semi-supervised-neutron.
Nussbaumer, H. J. (1981). Fast Fourier Transform and Convolution Algorithms, pp. 80–111. Berlin: Springer.
Odena, A. (2016). arXiv:1606.01583.
Oszlányi, G. & Sütő, A. (2008). Acta Cryst. A64, 123–134.
Ozaki, Y., Suzuki, Y., Hawai, T., Saito, K., Onishi, M. & Ono, K. (2020). NPJ Comput. Mater. 6, 75.
Palatinus, L. (2013). Acta Cryst. B69, 1–16.
Palatinus, L. & Chapuis, G. (2007). J. Appl. Cryst. 40, 786–790.
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L. & Lerer, A. (2017). The Future of Gradient-Based Machine Learning Software and Techniques, NIPS 2017 Autodiff Workshop, Long Beach, California, USA, 9 December 2017. https://openreview.net/forum?id=BJJsrmfCZ.
Pawley, G. S. (1981). J. Appl. Cryst. 14, 357–361.
Radford, A., Metz, L. & Chintala, S. (2015). arXiv:1511.06434.
Reyes, K. G. & Maruyama, B. (2019). MRS Bull. 44, 530–537.
Rietveld, H. M. (1967). Acta Cryst. 22, 151–152.
Rietveld, H. M. (1969). J. Appl. Cryst. 2, 65–71.
Ryu, D., Jo, Y., Yoo, J., Chang, T., Ahn, D., Kim, Y. S., Kim, G., Min, H.-S. & Park, Y. (2019). Sci. Rep. 9, 15239.
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A. & Chen, X. (2016). Adv. Neural Inf. Process. Syst. 29, 2234–2242.
Schleder, G. R., Padilha, A. C., Acosta, C. M., Costa, M. & Fazzio, A. (2019). J. Phys. Mater. 2, 032001.
Schmidt, J., Marques, M. R., Botti, S. & Marques, M. A. (2019). NPJ Comput. Mater. 5, 83.
Suzuki, Y., Hino, H., Hawai, T., Saito, K., Kotsugi, M. & Ono, K. (2020). Sci. Rep. 10, 21790.
Tiong, L. C. O., Kim, J., Han, S. S. & Kim, D. (2020). NPJ Comput. Mater. 6, 196.
Toby, B. H. & Von Dreele, R. B. (2013). J. Appl. Cryst. 46, 544–549.
Vandermause, J., Torrisi, S. B., Batzner, S., Xie, Y., Sun, L., Kolpak, A. M. & Kozinsky, B. (2020). NPJ Comput. Mater. 6, 20.
Werner, P.-E., Eriksson, L. & Westdahl, M. (1985). J. Appl. Cryst. 18, 367–370.
Xu, B., Wang, N., Chen, T. & Li, M. (2015). arXiv:1505.00853.
Zhu, X. & Goldberg, A. B. (2009). Introduction to Semi-Supervised Learning, Synthesis Lectures on Artificial Intelligence and Machine Learning 6. Morgan & Claypool.
Ziletti, A., Kumar, D., Scheffler, M. & Ghiringhelli, L. M. (2018). Nat. Commun. 9, 2775.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.