Committee machine that votes for similarity between materials
^{a}Japan Advanced Institute of Science and Technology, 1-1 Asahidai, Nomi, Ishikawa 923-1292, Japan, ^{b}ESICMM, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047, Japan, ^{c}HPC Systems Inc., 3-9-15 Kaigan, Minato-ku, Tokyo 108-0022, Japan, ^{d}Applied Artificial Intelligence Institute, Deakin University, Geelong, Australia, ^{e}Center for Materials Research by Information Integration, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047, Japan, and ^{f}JST, PRESTO, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan
^{*}Correspondence email: dam@jaist.ac.jp
A method has been developed to measure the similarity between materials, focusing on specific physical properties. The information obtained can be utilized to understand the underlying mechanisms and support the prediction of the physical properties of materials. The method consists of three steps: variable evaluation based on nonlinear regression, regression-based clustering, and similarity measurement with a committee machine constructed from the clustering results. Three data sets of well characterized crystalline materials represented by critical atomic predicting variables are used as test beds. Herein, the focus is on the formation energy, lattice parameter and Curie temperature of the examined materials. Based on the information obtained on the similarities between the materials, a hierarchical clustering technique is applied to learn the cluster structures of the materials that facilitate interpretation of the mechanism, and an improvement in the regression models is introduced to predict the physical properties of the materials. The experiments show that rational and meaningful group structures can be obtained and that the prediction accuracy of the materials' physical properties can be significantly increased, confirming the rationality of the proposed similarity measure.
Keywords: data mining; materials informatics; first-principles calculations; physical properties of materials; machine learning; similarity.
1. Introduction
Computational materials science encompasses a range of methods to model materials and simulate their responses on different length and time scales (Sumpter et al., 2015). The majority of problems addressed by computational materials science are related to methods that focus on two central tasks. The first aims to predict the physical properties of materials, and the second aims to describe and interpret the underlying mechanisms (Liu et al., 2017; Lu et al., 2017; Ulissi et al., 2017). In the first task of predicting physical properties, computer-based quantum mechanics techniques (Jain et al., 2016; Kohn & Sham, 1965; Jones & Gunnarsson, 1989; Jones, 2015) in the form of well established first-principles calculations are generally performed with high accuracy and are applicable to any material, but with high computational cost. Recently, the increase in the use of advanced machine-learning techniques (Murphy, 2012; Hastie et al., 2009; Le et al., 2012) and the volume of computational materials databases (Jain et al., 2013; Saal et al., 2013) have provided new opportunities for researchers to construct prediction models automatically (from a huge amount of precomputed data) that predict specific physical properties with the same level of high accuracy, while dramatically reducing the computational costs (Behler & Parrinello, 2007; Snyder et al., 2012; Pilania et al., 2013; Fernandez et al., 2014; Smith et al., 2017). By contrast, the second task, i.e. describing and interpreting the mechanisms underlying the physical properties of materials, relies mostly on the experience, insight and even luck of the experts involved. In fact, comprehension of multivariate data with nonlinear correlations is typically extremely challenging, even for experts.
Thus, the utilization of data-mining and machine-learning techniques to discover hidden structures and latent semantics in multidimensional data (Lum et al., 2013; Landauer et al., 1998; Blei, 2012) of materials is promising, but only limited work has been reported so far (Kusne et al., 2015; Srinivasan et al., 2015; Goldsmith et al., 2017).
To apply well established machine-learning methods to solve problems in materials science, the primitive representation of materials must usually be converted into vectors, in such a way that the comparison and calculations using the new representation reflect the nature of the materials and the underlying mechanisms of the chemical and physical phenomena. However, real-world applications, especially for solving the second task, often focus on physical properties of which the mechanism is not fully understood (Rajan, 2015; Ghiringhelli et al., 2015). In these cases, it is almost impossible to represent the materials appropriately as vectors of features so that comparisons using well established mathematical calculations can reflect the similarity/dissimilarity between them. Therefore, a true data-driven approach for solving materials science problems still requires much further fundamental development.
In this study, we focus on establishing a data-driven protocol for solving the second task of computational materials science. Focusing on a specific physical property, we aim to develop a method to measure the similarity between materials from the viewpoint of the underlying mechanisms that act in these materials. The method for measuring this similarity consists of three steps: (i) variable evaluation based on nonlinear regression, (ii) regression-based clustering and (iii) similarity measurement with a committee machine (Tresp, 2001; Opitz & Maclin, 1999) constructed based on the clustering results. The variable evaluation (Liu & Yu, 2005; Blum & Langley, 1997) aims to identify and remove irrelevant and redundant variables from the data (Duangsoithong & Windeatt, 2009; Almuallim & Dietterich, 1991; Biesiada & Duch, 2007). We carried out this analysis in an exhaustive manner by testing all combinations of predicting variables to find those variables with the potential to yield good prediction accuracy (PA) for the target variable. The regression-based clustering method is developed from the well known K-means clustering method (Lloyd, 1982; MacQueen, 1967; Kanungo et al., 2002) with major modifications for breaking down a large data set into a set of separate smaller data sets, in each of which the target variables can be predicted by a different linear model. Regression-based clustering models are then constructed for all the selected potential combinations of predicting variables, so as to construct a committee machine that votes for the similarity between the materials.
We evaluated the proposed protocol on three data sets of well characterized crystalline materials represented by appropriate predicting variables, together with their physical properties as determined through first-principles calculations or measured experimentally. Our experiments show that the proposed similarity measure can derive rational and meaningful material groupings and can significantly improve the PA of predictions of the physical properties of the examined materials.
2. Methods
We consider a data set of p materials. Assume that a material with index i is described by an m-dimensional predicting variable vector x_{i} = (x_{i1}, x_{i2}, …, x_{im}). The data set is then represented using a (p × m) matrix. The target physical-property values of the materials are stored as a p-dimensional target vector y = (y_{1}, y_{2}, …, y_{p}). The entire data-analysis flow is shown in Fig. 1.
2.1. Kernel regression-based variable evaluation
To develop a better understanding of the processes that generated the data, we first utilize an exhaustive search to evaluate all variable combinations (Liu & Yu, 2005; Blum & Langley, 1997; Kohavi & John, 1997) to identify and remove irrelevant and redundant variables (Duangsoithong & Windeatt, 2009; Almuallim & Dietterich, 1991; Biesiada & Duch, 2007). We begin by learning nonlinear functions to predict the values of a specific physical property (target quantity) of the materials. We apply the Gaussian kernel ridge regression (GKR) technique (Murphy, 2012), which has recently been applied successfully to several challenges in materials science (Rupp, 2015; Botu & Ramprasad, 2015; Pilania et al., 2013). For GKR, the predicted property y = f(x) at a point x is expressed as the weighted sum of Gaussians:
f(x) = Σ_{i=1}^{p} c_{i} exp(−||x_{i} − x||²/2σ²),   (1)

where p is the number of training data points, σ^{2} is a parameter corresponding to the variance of the Gaussian kernel function, and ||x_{i} − x||² is the squared L^{2} norm of the difference between the two m-dimensional vectors x_{i} and x. The coefficients c_{i} are determined by minimizing

Σ_{i=1}^{p} [f(x_{i}) − y_{i}]² + λ Σ_{i=1}^{p} Σ_{j=1}^{p} c_{i} c_{j} exp(−||x_{i} − x_{j}||²/2σ²),   (2)
where y_{i} is the observed physical property for material i. The hyperparameter σ and the regularization parameter λ are selected with the help of cross-validation, i.e. by excluding some of the materials as a validation set during the training process and measuring the coefficient of determination R^{2}, which is defined (Kvalseth, 1985) as

R^{2} = 1 − [Σ_{i=1}^{p_vld} (y_{i} − f(x_{i}))²] / [Σ_{i=1}^{p_vld} (y_{i} − ȳ)²].   (3)
Here, p_{vld} is the number of validation points and ȳ is the average target value over the validation set; the R^{2} score compares the values predicted for the excluded materials with the known observed values. In this study, we use R^{2} as a measure of PA.
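The GKR predictor and the R^{2} score described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function names and the hyperparameter values are ours.

```python
import numpy as np

def gkr_fit(X, y, sigma=1.0, lam=1e-3):
    """Solve (K + lam*I) c = y for the Gaussian kernel ridge coefficients c_i."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # ||x_i - x_j||^2
    K = np.exp(-sq / (2.0 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def gkr_predict(X_train, c, X_new, sigma=1.0):
    """f(x) = sum_i c_i exp(-||x_i - x||^2 / 2 sigma^2), the weighted Gaussian sum above."""
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) @ c

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2 on a validation set."""
    return 1.0 - ((y_true - y_pred) ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
```

In practice σ and λ would be tuned by the cross-validation procedure described above rather than fixed.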
To estimate the PA accurately, we cross-validate the GKR (Stone, 1974; Picard & Cook, 1984; Kohavi, 1995) repeatedly using the collected data. To obtain a set of proper variable combinations that can accurately predict the target variable, we train the GKR models for all possible combinations of numerical predicting variables. It should be noted that, since we do not yet know the effect of each predicting variable on the target quantity, all the numerical predicting variables are normalized in the same manner in this analysis. With each combination, we search for the regularization parameters to maximize the PA of the corresponding GKR model. Note that each of the selected combinations contributes a perspective on the correlation between the target and the predicting variables. Thus, an ensemble averaging (Tresp, 2001; Dietterich, 2000; Zhang & Ma, 2012) technique can be applied to combine all the prescreened regression models to improve the PA. Further, the similarity between materials regarding the mechanisms of the chemical and physical phenomena associated with the target quantity can be investigated more comprehensively if we consider all the perspectives. Consequently, we need to construct regression-based clustering models for each obtained potential combination to build the committee machine.
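The exhaustive cross-validated screening over variable combinations can be sketched as follows. The regression is passed in as fit/predict callables so that the GKR model (or any other regressor) can be plugged in; the function and parameter names are illustrative, not from the original code.

```python
import itertools
import numpy as np

def screen_combinations(X, y, var_names, fit, predict, score,
                        n_folds=10, threshold=0.9):
    """Cross-validate every non-empty subset of predicting variables and
    return the subsets whose CV score reaches the threshold."""
    p, m = X.shape
    folds = np.arange(p) % n_folds          # simple deterministic fold assignment
    selected = []
    for r in range(1, m + 1):
        for combo in itertools.combinations(range(m), r):
            Xs = X[:, combo]
            preds = np.empty(p)
            for f in range(n_folds):
                tr, va = folds != f, folds == f
                model = fit(Xs[tr], y[tr])
                preds[va] = predict(Xs[tr], model, Xs[va])
            if score(y, preds) >= threshold:
                selected.append(tuple(var_names[i] for i in combo))
    return selected
```

Note that this loop visits 2^m − 1 subsets, which is why the full screening for m = 17 required the compute cluster described in Section 3.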
2.2. Regression-based clustering
In practice, a single linear model is often severely limited for modelling real data, because the data set can be nonlinear or the data themselves can be heterogeneous and contain multiple subsets, each of which fits best to a different linear model. However, in traditional data analysis, linear models are often preferred because of their interpretability. Within a linear model, one can intuitively understand how the predicting variables contribute to the target variable. Therefore, much effort has been devoted to developing subspace segmentation techniques to deconvolute a high-dimensional data set into a set of separate small data sets, each of which can be approximated well by a different linear subspace, by employing principal component analysis (Fukunaga & Olsen, 1971; Vidal et al., 2015; Einbeck et al., 2008).
In this study, our primary interest is the local linearity between the predicting variables and the target variable, which may reflect the nature of the underlying physics around the point of observation. Therefore, we employ a simple strategy, in which the subspace segmentation is an integration of a conventional clustering method and linear regression analysis. It should be noted that the subspaces may have fewer dimensions than the whole space. Hence, we apply sparse linear regression analysis using L1 regularization (Tibshirani, 1996) instead of the original one.
Our proposed regression-based clustering method is based on the well known K-means clustering method with two major modifications. (i) The sparse linear regression model derived from data associated with materials in a particular cluster (group) is considered to be its common characteristic (centre). The dissimilarities in the characteristics of each material in a group relative to the shared (common) nature of that group (the distance to the centre) are measured according to their deviation from the corresponding linear regression model. (ii) The sum of the differences of all materials in a group from the corresponding linear regression model of another group is used to measure the dissimilarity in the characteristics of that group with regard to the other group. The sum of the dissimilarities between one group and another and that determined in the reverse direction are used to assess the divergence between the two groups.
After performing the variable evaluation, we assume we have selected combinations of predicting variables that yield nonlinear regression models of high PA. With one of the selected combinations, m′ numerical variables are selected from the original m numerical variables. A material in the data set is then described by an m′-dimensional predicting variable vector x′_{i} = (x′_{i1}, x′_{i2}, …, x′_{im′}), and the data are represented using a (p × m′) matrix.
Given the set D of p data points represented by m′-dimensional numerical vectors, a natural number k ≤ p represents the number of clusters for a given experiment. We assume that there are k linear regression models and that each data point in D follows one of them. The aim is to determine those k linear regression models accordingly, to divide D into k nonempty disjoint clusters. Our algorithm searches for a partition of D into k nonempty disjoint clusters that minimizes the overall sum of the residuals between the observed and predicted values (using the corresponding models) of the target variable. The problem can be formulated in terms of an optimization problem as follows.
For a given experiment with cluster number k, minimize

P(W, M) = Σ_{i=1}^{k} Σ_{j=1}^{p} w_{ij} (y_{j} − y_{j}^{M_i})²,   (4)

subject to

w_{ij} ∈ {0, 1} and Σ_{i=1}^{k} w_{ij} = 1 for j = 1, 2, …, p,

where y_{j} and y_{j}^{M_i} are, respectively, the observed value and the value predicted by model M_{i} (of the k models) for the target property of the material with index j, W = [w_{ij}]_{p×k} is a partition matrix (w_{ij} takes a value of 1 if object x_{j} belongs to cluster C_{i} and 0 otherwise) and M = {M_{1}, M_{2}, …, M_{k}} is the set of regression models corresponding to the clusters C_{1}, C_{2}, …, C_{k}.
P can be optimized by iteratively solving two smaller problems:
(i) Fix M = {M̂_{1}, …, M̂_{k}} and solve the reduced problem P(W, M̂) to find Ŵ (reassign data points to the cluster of the closest centre); and
(ii) Fix W = Ŵ and solve the reduced problem P(Ŵ, M) to find M̂ (reconstruct the linear model for each cluster).
Our regression-based clustering algorithm comprises three steps and iterates until P(W, M) converges to some local minimum value:
(i) The data set is appropriately partitioned into k subsets, 1 ≤ k ≤ p. Multiple linear regression analyses are performed independently with the L1 regularization method (Tibshirani, 1996) on each subset to learn the set of potential candidates for the sparse linear regression models M^{(0)} = {M_{1}^{(0)}, M_{2}^{(0)}, …, M_{k}^{(0)}}. This represents the initial step t = 0;
(ii) M^{(t)} is retained and problem P(W, M^{(t)}) is solved to obtain W^{(t)}, by assigning data points in D to clusters based upon the models M_{i}^{(t)};
(iii) W^{(t)} is fixed and M^{(t+1)} is generated such that P(W^{(t)}, M^{(t+1)}) is minimized. That is, new regression models are learned according to the current partition from step (ii). If the convergence condition or a given termination condition is fulfilled, the result is output and the iterations are stopped. Otherwise, t is set to t + 1 and the algorithm returns to step (ii).
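The three-step loop above can be sketched as follows. To keep the example self-contained, ordinary least squares stands in for the L1-regularized (Lasso-type) fit used in the paper, and the optional init_labels argument is our addition for reproducibility.

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares linear model with intercept (stand-in for the sparse L1 fit)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

def regression_based_clustering(X, y, k, n_iter=100, seed=0, init_labels=None):
    """Alternate between reassigning each material to the linear model with the
    smallest squared residual (step ii) and refitting one model per cluster (step iii)."""
    rng = np.random.default_rng(seed)
    labels = (np.asarray(init_labels) if init_labels is not None
              else rng.integers(0, k, size=len(X)))            # step (i): initial partition
    models = [fit_linear(X[labels == i], y[labels == i]) for i in range(k)]
    for _ in range(n_iter):
        # step (ii): assign each point to the model that predicts it best
        resid = np.stack([(y - predict_linear(w, X)) ** 2 for w in models], axis=1)
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                                              # P(W, M) converged
        labels = new_labels
        # step (iii): refit the model of every cluster that is still well populated
        for i in range(k):
            if np.sum(labels == i) > X.shape[1]:
                models[i] = fit_linear(X[labels == i], y[labels == i])
    return labels, models
```

Since the loop only reaches a local minimum of P(W, M), many random initial partitions are used in practice, as described in Section 3.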
The group number k is chosen considering two criteria: high linearity between the predicting and target variables for all members of each group, and no single linear model representing two different groups. The first criterion has higher priority and can be quantitatively evaluated using the Pearson correlation scores between the predicted and observed values for the target variable of the data instances in each group, by applying the corresponding linear model. The second criterion is implemented to avoid the case in which one group with high linearity is further divided into two subgroups that can be represented by the same linear model. The determination of k, therefore, can be formulated in terms of an optimization problem as follows:
k* = arg min_{k} [ −Σ_{i=1}^{k} log R^{2}_{i,i} + Σ_{i=1}^{k} Σ_{j≠i} R^{2}_{i,j} ],   (5)

where R^{2}_{i,i} and R^{2}_{i,j} are the Pearson correlation scores between the predicted and observed values for the target variable when we apply the linear model M_{i} to the data instances in clusters i and j, respectively.
The first term in this optimization function decreases monotonically as R^{2}_{i,i} varies from 0 to 1. When R^{2}_{i,i} approaches 1 (the entire cluster exhibits almost perfect linearity between the target and predicting variables), the optimization function drops on a log scale to emphasize the expected region. In contrast, the optimization function increases exponentially when R^{2}_{i,i} approaches 0 (one of the clusters shows no linearity between the target and predicting variables). The second term in this optimization function is introduced to avoid overestimation of k, in which a group with high linearity further divides into two subgroups that can be represented by the same linear model. It should be noted that the criterion for determining k is also the criterion for evaluating a regression-based clustering model. Further, cluster labels can be assigned to a material without knowing the value of the target physical property, using the estimated value obtained from a prediction model, e.g. a nonlinear regression model.
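One plausible reading of this criterion, with a logarithmic reward for within-cluster linearity and a penalty for a model that also fits another cluster, can be scored as follows. The exact functional form here is our assumption for illustration, not the paper's published formula.

```python
import numpy as np

def k_selection_score(r2_matrix):
    """Score a k-cluster model from the matrix of R^2_{i,j} values: lower is better.
    r2_matrix[i][j] is the score of model M_i applied to cluster j."""
    R = np.asarray(r2_matrix, dtype=float)
    k = R.shape[0]
    # reward high within-cluster linearity on a log scale (assumed form)
    within = -np.sum(np.log(np.clip(np.diag(R), 1e-12, 1.0)))
    # penalize one model fitting two different clusters equally well (assumed form)
    cross = sum(R[i, j] for i in range(k) for j in range(k) if i != j)
    return within + cross
```

Under this reading, the k (and clustering result) minimizing the score would be retained.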
2.3. Similarity measure with committee machine
A clustering model, obtained through regression-based clustering for a particular combination of predicting variables, represents a specific partitioning of the data set into groups in which the linear correlations between the predicting and target variables can be observed. Materials belonging to the same group potentially have the same actuating mechanisms for the target physical property. However, materials that actually have the same actuating mechanisms for a specific physical property should be observed similarly in many circumstances. Therefore, the similarity between materials, focusing on a specific physical property, should be measured in a multilateral manner. For this purpose, for each of the prescreened sets of predicting variables that yield nonlinear regression models of high PA (Section 2.1), we construct a regression-based clustering model. A committee machine that votes for the similarity between materials is then constructed from all the obtained clustering models. The similarity between two materials can be measured naïvely using the committee algorithm (Seung et al., 1992; Settles, 2010), by counting the number of clustering models that partition these two materials into the same cluster. The affinity matrix A of all pairs of materials in the data set is then constructed as follows:
A_{ab} = (1/|S_{h}|) Σ_{S∈S_{h}} Σ_{i=1}^{k_{S}} w^{S}_{ia} w^{S}_{ib},   (6)

where S_{h} is the set of all prescreened combinations of predicting variables that yield nonlinear regression models of high PA and k_{S} is the cluster number. Further, W^{S} = [w^{S}_{ij}]_{p×k_{S}} is the partition matrix of the clustering model obtained through regression-based clustering analysis using the combination of predicting variables S (w^{S}_{ia} takes a value of 1 if material a belongs to cluster i and 0 otherwise). Using this affinity matrix, one can easily implement a hierarchical clustering technique (Everitt et al., 2011) to obtain a hierarchical structure of groups of materials that have similar correlations between the predicting and target variables.
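The committee vote behind the affinity matrix reduces to counting cluster co-memberships across the clustering models; a minimal sketch (the function name is ours, and each model is represented simply by its vector of cluster labels):

```python
import numpy as np

def affinity_from_committee(label_sets):
    """A[a, b] = fraction of committee clustering models that place
    materials a and b in the same cluster."""
    L = np.asarray(label_sets)                      # shape (n_models, p)
    p = L.shape[1]
    A = np.zeros((p, p))
    for labels in L:
        A += (labels[:, None] == labels[None, :])   # 1 where a, b share a cluster
    return A / len(L)
```

The complement 1 − A can then be handed to an off-the-shelf agglomerative hierarchical clustering routine as a distance matrix to obtain the group hierarchy.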
3. Results and discussion
We applied the methods described above in a sequential analysis to extract physicochemical information automatically for the materials in three available data sets. For each data set, a brute-force examination of all combinations of numerical predicting variables was conducted using a nonlinear regression technique, to identify combinations of predicting variables that yielded regression models of high PA for the later analysis process. For each of the prescreened combinations, physically meaningful patterns in the form of material groups, as well as the linear relationships between the selected predicting and target variables, could be detected automatically for the materials in each group utilizing the regression-based clustering technique. The committee machine was then constructed from the obtained clustering models. Subsequently, a hierarchical structure of material groups similar to each other could be extracted using the hierarchical clustering technique. We evaluated the obtained results from both qualitative and quantitative perspectives. The qualitative evaluations were based on the rationality and interpretability of the obtained hierarchy with reference to the domain knowledge; the quantitative evaluations were performed based on the PA of the predictive models constructed with reference to the obtained similarity between materials.
The exhaustive search for variable selection based on kernel regression consumes a lot of computing resources, such as memory and CPU time, owing to combinatorial explosion. We performed our experiments using Apache Spark (Zaharia et al., 2016) on a high-performance cluster with 256 processor cores and 1.1 TB of RAM in total. The calculation cost depends on various factors, such as the number of data instances, the number of features and the cross-validation parameters. With our system, the exhaustive search took 36, 41 and 28 h for the first, second and third experiments, respectively.
3.1. Experiment 1: mining the quantum calculated formation energy data for Fm3̄m AB materials
In this experiment, we collected computational data for 239 binary AB materials from the Materials Project database (Jain et al., 2013). The A atoms were all metallic elements: alkali, alkaline earth, transition and post-transition metals, as well as lanthanides. The B elements, by contrast, were mostly metalloids and nonmetallic elements. We set the computed formation energy E_{form} of each AB material as the physical property of interest. To simplify the demonstration of our method, we limited the collected compounds to those possessing the same cubic structure with the Fm3̄m symmetry group (i.e. the NaCl structure).
To represent each material, we used a set of 17 predicting variables divided into three categories, as summarized in Table 1. The first and second categories pertained to the predicting variables of the atomic properties of the element A and element B constituents; these included eight numerical predicting variables: (i) atomic number (Z_{A}, Z_{B}); (ii) atomic radius (r_{A}, r_{B}); (iii) average ionic radius (r_{ionA}, r_{ionB}); (iv) ionization potential (IP_{A}, IP_{B}); (v) electronegativity (χ_{A}, χ_{B}); (vi) number of electrons in the outer shell (n_{eA}, n_{eB}); (vii) boiling temperature (T_{bA}, T_{bB}); and (viii) melting temperature (T_{mA}, T_{mB}) of the corresponding single substances. The boiling and melting temperatures were as measured under standard conditions (0°C, 10^{5} Pa). Information related to the crystal structure is very valuable for understanding the physical properties of materials. Therefore, we designed the third category with structural predicting variables whose values were calculated from the crystal structures of the materials. In this experiment, owing to the similarities in the crystal structures of the collected materials, we utilized only the unit-cell volume (V_{cell}) as the structural predicting variable. The computed E_{form} of each material was set as the target variable.

A kernel regression-based variable evaluation was performed for these data with 3 × 10-fold cross-validation. We first examined how E_{form} can be predicted from the designed predicting variables for all collected materials. We performed a screening over all possible (2^{17} − 1 = 131 071) variable combinations and found a total of 34 468 variable combinations deriving GKR models with R^{2} scores exceeding 0.90 (Fig. 2). Among these, there were 139 variable combinations deriving GKR models with R^{2} scores exceeding 0.96. These predicting variable combinations were then considered as candidates for the next step of the analysis. The highest PA in this experiment is 0.967 [mean absolute error (MAE): 0.122 eV], obtained using the combination {V_{cell}, χ_{A}, n_{eA}, n_{eB}, IP_{A}, T_{bA}, T_{mA}, r_{B}}. Moreover, we could obtain a superior PA with an R^{2} score of 0.972 (MAE: 0.117 eV) by taking ensemble averages (Tresp, 2001; Dietterich, 2000; Zhang & Ma, 2012) of the GKR models constructed using the 139 selected variable combinations.
We performed regression-based clustering analyses for all 139 selected variable combinations with 1000 initial randomized states. Using evaluation criteria similar to those for determining the number of clusters [formula (5)], the 200 best clustering results among these trials were selected to construct a committee machine that voted for the similarity between materials. The obtained affinity matrix for all the AB materials is shown in Fig. 3(a). The similarity between each material pair varies from 0 to 1. A cell of the affinity matrix takes a value of 0 when the corresponding two materials are never included in the same cluster by a regression-based clustering model. In contrast, a cell of the affinity matrix takes a value of 1 when the corresponding two materials always appear in the same cluster according to every regression-based clustering model. Using this similarity, we could roughly divide all the materials into two groups, as represented by the upper left and bottom right of Fig. 3(a).
Fig. 3(b) shows enlarged views of the affinity matrix for two groups of typical materials denoted G1 and G2. We can clearly see that the affinities between materials within each of the two groups, G1 and G2, exceed 0.7, showing high intragroup similarities. In contrast, the affinities between materials in different groups are smaller than 0.2, showing significant dissimilarity between G1 and G2. Further detailed investigation reveals that the materials in G1 are oxides, nitrides and carbides. The maximum common positive oxidation state of the A elements is greater than or equal to the absolute value of the maximum common negative oxidation state of the B elements for the compounds in this group. On the other hand, the materials in G2 are halides of alkali metals, oxides, nitrides and carbides, for which the maximum common positive oxidation state of the A elements is less than or equal to the absolute value of the maximum common negative oxidation state of the B elements. Further investigation shows that only seven among 24 compounds in G1 have computed electronic structures with a band gap. In contrast, half of the compounds in G2 have computed electronic structures with a band gap. The obtained results suggest that the bonding nature of compounds in G1 is different from that of compounds in G2. The linearities between the target variable and the predicting variables for the two groups are summarized in Fig. 3(c). The diagonal plots show the correlations between the observed and predicted values for the target variables obtained using linear models of the predicting variables for the materials in the two groups. The off-diagonal plots show the correlations between the observed and predicted values for the target variables obtained using the linear models of the other groups. We could again confirm the intragroup similarity, and the dissimilarity between different groups, in terms of the linearity between the target and predicting variables for the compounds in the two groups.
To evaluate the validity of the analysis process quantitatively, we embedded the similarity measured by the committee machine into the regression of E_{form} of the AB materials. To predict the value of the target variable for a new material, instead of using the entire available data set, we used only the one third of the available materials with the highest similarity to the new material. It should again be noted that the similarity between the materials in the data set and the new material can be determined without knowing the value of the target physical property, by using the value predicted by ensemble averaging of the nonlinear regression models.
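This similarity-restricted regression can be sketched as follows, assuming the affinity row of the query material is already available from the committee machine; the function name and the fit/predict callables are illustrative, not the authors' code.

```python
import numpy as np

def predict_with_similar(a_row, X, y, x_new, fit, predict, frac=1.0 / 3.0):
    """Predict the target of a query material from a model trained only on
    the fraction of known materials with the highest committee affinity."""
    n_keep = max(1, int(round(len(y) * frac)))
    idx = np.argsort(a_row)[::-1][:n_keep]   # most similar materials first
    model = fit(X[idx], y[idx])
    return predict(model, x_new)
```

Any regressor can be plugged in; in the paper the same ensemble-averaged nonlinear model is retrained on the restricted neighbourhood.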
Table 2 summarizes the PA in predicting E_{form} values of the materials obtained using several regression models with the designed predicting variables. The nonlinear model obtained using ensemble averaging of the best nonlinear regression models, having an R^{2} score of 0.972 (MAE: 0.117 eV), could be improved significantly to an R^{2} score of 0.982 (MAE: 0.101 eV) by considering the information from the similarity measurement (Fig. 4a). Therefore, the obtained results provide significant evidence to support our hypothesis that the similarity measured by the committee machine reflects the similarity in the actuating mechanisms of the target material physical property.

3.2. Experiment 2: mining the quantum calculated lattice parameter for body-centred cubic structure data
In this experiment, a data set of 1541 binary AB body-centred cubic (b.c.c.) crystals with a 1:1 element ratio was collected from Takahashi et al. (2017). We focused on the computed lattice constant value L_{const} of the crystals. The A elements corresponded to 23 elements (Ag, Al, As, Au, Co, Cr, Cu, Fe, Ga, Li, Mg, Na, Ni, Os, Pd, Pt, Rh, Ru, Si, Ti, V, W and Zn) and the B elements corresponded to those with atomic numbers in the ranges of 1–42, 44–57 and 72–83. This data set included unrealistic materials such as the binary material AgHe, which incorporates He, an element that is known to possess a closed-shell structure and is, therefore, unlikely to form a solid.
To describe each material, we used a combination of 17 variables relating to basic physical properties of the A and B constituent elements, as summarized in Table 3. These chosen properties were as follows: (i) atomic radius (r_{A}, r_{B}); (ii) mass (m_{A}, m_{B}); (iii) atomic number (Z_{A}, Z_{B}); (iv) number of electrons in the outermost shell (n_{eA}, n_{eB}); (v) orbital quantum number of the outermost shell (ℓ_{A}, ℓ_{B}); and (vi) electronegativity (χ_{A}, χ_{B}). The ℓ values were converted from the categorical symbols s, p, d, f to the numerical values 0, 1, 2, 3, respectively, representing the orbitals. To embed the structure information, four more properties were included: (vii) the density of atoms per unit volume (ρ_{A}, ρ_{B}); (viii) the unit-cell density ρ; (ix) the difference in electronegativity d_{χ}; and (x) the composite descriptor Sum_{AD} (see Takahashi et al., 2017).

A kernel regression-based variable selection with 3 × 10-fold cross-validation was performed to examine all combinations of the 17 variables. Of the total number of screened variable combinations (2^{17} − 1 = 131 071), we found 60 568 variable combinations deriving regression models with R^{2} scores exceeding 0.90 (Fig. 2). Among these, there were 57 variable combinations yielding regression models with R^{2} scores exceeding 0.9895. The highest PA for this experiment is 0.989 (MAE: 0.014 Å), which was obtained using the combination {ρ, ℓ_{A}, r_{covB}, m_{A}, m_{B}, ρ_{B}, n_{eB}}. We could obtain a better PA with an R^{2} score of 0.991 (MAE: 0.013 Å) by taking the ensemble average of the GKR models derived from the 57 selected variable combinations. This result is a considerable improvement over the maximum PA (R^{2} score: 0.90) of the support vector regression technique with the feature-selection strategy mentioned by Takahashi et al. (2017).
In the regression-based clustering analysis, the 57 selected variable combinations, accompanied by 1000 initial randomized states for each combination, were used to search for the most probable clustering results to construct the committee machine. The affinity matrix obtained for all materials is shown in Fig. 5(a), after rearrangement by a hierarchical clustering algorithm (Everitt et al., 2011). Utilizing this similarity, we could roughly divide all materials in the data set into three groups, G1, G2 and G3. Further investigation revealed that most materials in G1 are constructed from two heavy transition metals. In contrast, the materials in G2 and G3 are constructed from a metal and a nonmetal element, e.g. oxides and nitrides. For a given A element, L_{const} of the materials in G1 increases with the atomic radius of the B element. On the other hand, L_{const} of the materials in G2 remains constant for materials sharing the same A element. Further, L_{const} for the materials in group G3 depends mainly on the difference between the constituent elements A and B. Note that the materials in these three groups are visualized in detail in the supporting information. The linearities between the target and predicting variables for these groups are shown in Fig. 5(b).
To predict the L_{const} of a new material, we applied the same strategy as in the previous experiment. Table 2 summarizes the PA values obtained in our experiments. The nonlinear model obtained by ensemble averaging of the 57 best nonlinear regression models, with an R^{2} score of 0.991 (MAE: 0.013 Å), could be marginally improved to an R^{2} score of 0.992 (MAE: 0.011 Å) by including information from the similarity measurement (Fig. 4b).
3.3. Experiment 3: mining the experimentally observed Curie temperature data of rare earth–transition metal alloys
In this experiment, we collected experimental data related to 101 binary alloys consisting of transition and rare earth metals from the NIMS AtomWork database (Villars et al., 2004; Xu et al., 2011), which included the crystal structures of the alloys and their observed Curie temperatures T_{C}.
To represent the structural and physical properties of each binary alloy, we used a combination of 21 variables divided into three categories, as summarized in Table 4. The first and second categories contained predicting variables describing the atomic properties of the transition metal elements (T) and rare earth elements (R), respectively. The properties were as follows: (i) atomic number (Z_{R}, Z_{T}); (ii) covalent radius (r_{covR}, r_{covT}); (iii) first ionization potential (IP_{R}, IP_{T}); and (iv) electronegativity (χ_{R}, χ_{T}). In addition, predicting variables related to the magnetic properties were included: (v) total spin quantum number (S_{3d}, S_{4f}); (vi) total orbital angular momentum quantum number (L_{3d}, L_{4f}); and (vii) total angular momentum quantum number (J_{3d}, J_{4f}). For the R metallic elements, the additional variables J_{4f}g_{j} and J_{4f}(1 − g_{j}) were added because of the strong spin–orbit coupling effect. As in the two previous experiments, a third category of variables was chosen, containing values calculated from the crystal structures of the alloys reported in the AtomWork database. The designed predicting variables included the transition metal (C_{T}) and rare earth metal (C_{R}) concentrations. Note that if the atomic percentage were used for the concentration, the two quantities would not be independent. Therefore, in this work, we measured the concentrations in units of atoms Å^{−3}; this unit is more informative than the atomic percentage, as it contains information on the constituent atomic sizes. As a consequence, C_{T} and C_{R} were not completely dependent on each other. Other structural variables were also added: the mean distance between two rare earth elements (r_{RR}), between two transition metal elements (r_{TT}), and between a transition metal and a rare earth element (r_{TR}). We set the experimentally observed T_{C} as the target variable.
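The per-volume concentration descriptor can be written down directly. This is a minimal sketch; the function name and the Co_{5}Nd-like cell contents and volume below are hypothetical, not values taken from the AtomWork database.

```python
# Sketch of the concentration descriptor in atoms per cubic angstrom.
# Unlike atomic percent (which forces C_T + C_R = 1), per-volume
# concentrations carry unit-cell (atomic-size) information and are not
# fully dependent on each other.
def concentrations(n_T, n_R, cell_volume_A3):
    """Return (C_T, C_R) in atoms/A^3 for a cell with n_T transition metal
    and n_R rare earth atoms and the given volume."""
    return n_T / cell_volume_A3, n_R / cell_volume_A3

# Hypothetical hexagonal Co5Nd-like cell: 5 Co + 1 Nd atoms in ~85 A^3
c_T, c_R = concentrations(5, 1, 85.0)
# Note c_T + c_R is far from 1, in contrast to atomic percentages
```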

A kernel regression-based variable selection analysis was performed for these data using leave-one-out cross-validation. Among all the examined variable combinations (2^{21} − 1 = 2 097 151), we found 84 870 for which the corresponding GKR models exhibited R^{2} scores exceeding 0.90 (Fig. 2). Among these, 59 variable combinations yielded GKR models with R^{2} scores exceeding 0.95; these predicting variable combinations were selected as candidates for the next analysis step. The highest PA in this experiment was 0.968 (MAE: 42.74 K), obtained using the combination {C_{R}, Z_{R}, Z_{T}, χ_{T}, r_{covT}, L_{3d}, J_{3d}}. A better PA, with an R^{2} score of 0.974 (MAE: 37.87 K), was obtained by applying ensemble averaging to the GKR models derived from the 59 selected variable combinations.
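The leave-one-out evaluation and ensemble averaging over the selected subsets can be sketched as follows, again on synthetic data with scikit-learn's `KernelRidge` assumed as a stand-in for GKR; the three subsets listed are hypothetical placeholders for the paper's 59 selected combinations.

```python
# Sketch of leave-one-out CV per variable subset, followed by ensemble
# averaging of the per-subset predictions (committee average).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 4))
y = X[:, 0] - X[:, 1] + 0.05 * rng.normal(size=40)
subsets = [(0, 1), (0, 1, 2), (0, 1, 3)]   # stand-ins for selected combos

loo = LeaveOneOut()
per_model = [
    cross_val_predict(KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.2),
                      X[:, s], y, cv=loo)
    for s in subsets
]
y_ensemble = np.mean(per_model, axis=0)    # averaged committee prediction
```

Averaging the out-of-sample predictions of the retained models is what lifts the single-model R^{2} to the ensemble value reported in the text.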
In the regression-based clustering analysis, the 59 variable combinations, each with 1000 initial randomized states, were used to search for the most probable clustering results with which to construct the committee machine that votes for the similarity between the alloys. The obtained affinity matrix for all the alloys is shown in Fig. 6(a). An enlarged view of the three groups of alloys having high similarity (denoted G1, G2 and G3) is shown in Fig. 6(b). Further investigation revealed that G1 includes Mn- and Co-based alloys with high T_{C}, e.g. Mn_{23}Pr_{6} (448 K), Mn_{23}Sm_{6} (450 K), Co_{5}Pr (931 K) and Co_{5}Nd (910 K). Other low-T_{C} Co-based alloys, e.g. Co_{2}Pr (45 K) and Co_{2}Nd (108 K), show higher similarity to the Ni-based alloys in G3, e.g. Ni_{5}Nd (7 K) and Ni_{2}Ho (16 K). In contrast, G2 includes all the Fe-based Fe_{17}RE_{2} alloys, where RE represents different rare earth metals. To confirm the value of our similarity measure, Fig. 6(c) shows the linearities between the observed and predicting variables for these groups, as well as the dissimilarities among the groups.
In the next analysis step, we utilized the obtained similarity measure to predict T_{C} for a new material, using the same strategy as in the two previous experiments. The nonlinear model obtained by ensemble averaging of the best nonlinear regression models, with an R^{2} score of 0.974 (MAE: 37.87 K), was improved significantly, attaining an R^{2} score of 0.991 (MAE: 24.16 K), by utilizing the information from the similarity measurement (Fig. 4c and Table 2). The obtained results provide significant evidence to support our hypothesis that the similarity voted for by the committee machine indicates similarity in the actuating mechanisms of the T_{C} of the binary alloys.
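One plausible way to fold the similarity vote into a prediction is sketched below; the paper's exact scheme is not reproduced here, and the blending function, its name and all numerical inputs (apart from the four high-T_{C} values quoted in the text) are assumptions for illustration only.

```python
# Hypothetical sketch: blend a regression prediction with an
# affinity-weighted average of the observed targets of similar materials.
import numpy as np

def similarity_assisted_prediction(y_reg, affinity_row, y_train, blend=0.5):
    """Blend a regression prediction with an affinity-weighted neighbor mean.
    affinity_row holds the committee's vote fractions for the query material
    against each training material."""
    w = affinity_row / affinity_row.sum()      # normalize the votes
    y_sim = float(w @ y_train)                 # similarity-weighted mean
    return blend * y_reg + (1.0 - blend) * y_sim

y_train = np.array([448.0, 450.0, 931.0, 910.0])  # T_C values from the text (K)
affinity_row = np.array([0.9, 0.8, 0.1, 0.1])     # hypothetical vote fractions
pred = similarity_assisted_prediction(620.0, affinity_row, y_train)
```

The design intuition is that materials voted similar by the committee are assumed to share the actuating mechanism, so their observed targets are informative for the query.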
4. Conclusions
In this work, we have proposed a method to measure the similarity between materials, focusing on specific physical properties, in order to describe and interpret the actual mechanism underlying a physical phenomenon in a given problem. The proposed method consists of three steps: variable evaluation based on nonlinear regression, regression-based clustering, and similarity measurement with a committee machine constructed from the clustering results. Three data sets of well characterized crystalline materials, represented by key atomic predicting variables, were used as test beds. The formation energy, lattice parameter and Curie temperature were considered as the target physical properties of the examined materials. Our experiments show that rational and meaningful group structures can be obtained with the help of the proposed approach. The similarity measure helped to increase significantly the prediction accuracy for the material physical properties. Through ensemble averaging of the top kernel ridge prediction models, the R^{2} score increased from 0.972 to 0.982 for the formation energy prediction problem, and from 0.974 to 0.991 for the Curie temperature prediction problem, after utilizing the similarity information. However, no significant improvement in the R^{2} score was observed for the lattice constant prediction problem. Thus, our results indicate that the proposed data analysis flow can systematically facilitate further understanding of a given phenomenon by identifying similarities among the materials in the problem data set.
Supporting information
Additional figures. DOI: https://doi.org/10.1107/S2052252518013519/zx5015sup1.pdf
Funding information
This work was partly supported by PRESTO and by the Materials Research by Information Integration Initiative (MI^{2}I) project of the Support Program for Start-Up Innovation Hub, from the Japan Science and Technology Agency (JST), and by a JSPS KAKENHI Grant-in-Aid for Young Scientists (B) (grant No. JP17K14803), Japan.
References
Almuallim, H. & Dietterich, T. G. (1991). The Ninth National Conference on Artificial Intelligence, pp. 547–552. Menlo Park: AAAI Press.
Behler, J. & Parrinello, M. (2007). Phys. Rev. Lett. 98, 146401.
Biesiada, J. & Duch, W. (2007). Computer Recognition Systems 2. Advances in Soft Computing, Vol. 45. Heidelberg: Springer.
Blei, D. M. (2012). Commun. ACM, 55, 77–84.
Blum, A. L. & Langley, P. (1997). Artif. Intell. 97, 245–271.
Botu, V. & Ramprasad, R. (2015). Int. J. Quantum Chem. 115, 1074–1083.
Dietterich, T. G. (2000). Proceedings of the First International Workshop on Multiple Classifier Systems, 21–23 June 2000, Cagliari, Italy. Lecture Notes in Computer Science, Vol. 1857, edited by J. Kittler & F. Roli, pp. 1–15. Heidelberg: Springer.
Duangsoithong, R. & Windeatt, T. (2009). Machine Learning and Data Mining in Pattern Recognition, edited by P. Perner, pp. 206–220. Heidelberg: Springer.
Einbeck, J., Evers, L. & Bailer-Jones, C. (2008). Principal Manifolds for Data Visualization and Dimension Reduction. Lecture Notes in Computational Science and Engineering, Vol. 58, edited by A. N. Gorban, B. Kégl, D. C. Wunsch & A. Zinovyev, pp. 178–201. Heidelberg: Springer.
Everitt, B. S., Landau, S., Leese, M. & Stahl, D. (2011). Cluster Analysis, 5th ed., ch. 4, Hierarchical Clustering. Wiley Series in Probability and Statistics. Chichester: Wiley.
Fernandez, M., Boyd, P. G., Daff, T. D., Aghaji, M. Z. & Woo, T. K. (2014). J. Phys. Chem. Lett. 5, 3056–3060.
Fukunaga, K. & Olsen, R. (1971). IEEE Trans. Comput. C-20, 1615–1616.
Ghiringhelli, L. M., Vybiral, J., Levchenko, S. V., Draxl, C. & Scheffler, M. (2015). Phys. Rev. Lett. 114, 105503.
Goldsmith, B. R., Boley, M., Vreeken, J., Scheffler, M. & Ghiringhelli, L. M. (2017). New J. Phys. 19, 013031.
Hastie, T., Tibshirani, R. & Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer.
Jain, A., Ong, S. P., Hautier, G., Chen, W., Richards, W. D., Dacek, S., Cholia, S., Gunter, D., Skinner, D., Ceder, G. & Persson, K. A. (2013). APL Mater. 1, 011002.
Jain, A., Shin, Y. & Persson, K. A. (2016). Nat. Rev. Mater. 1, 15004.
Jones, R. O. (2015). Rev. Mod. Phys. 87, 897–923.
Jones, R. O. & Gunnarsson, O. (1989). Rev. Mod. Phys. 61, 689–746.
Kanungo, T., Mount, D. M., Netanyahu, N. S., Piatko, C. D., Silverman, R. & Wu, A. Y. (2002). IEEE Trans. Pattern Anal. Mach. Intell. 24, 881–892.
Kohavi, R. (1995). IJCAI'95 – Proceedings of the 14th International Joint Conference on Artificial Intelligence, 20–25 August 1995, Montreal, Canada, Vol. 2, pp. 1137–1143. San Francisco: Morgan Kaufmann Publishers.
Kohavi, R. & John, G. H. (1997). Artif. Intell. 97, 273–324.
Kohn, W. & Sham, L. J. (1965). Phys. Rev. 140, A1133–A1138.
Kusne, A. G., Keller, D., Anderson, A., Zaban, A. & Takeuchi, I. (2015). Nanotechnology, 26, 444002.
Kvalseth, T. O. (1985). Am. Stat. 39, 279–285.
Landauer, T. K., Foltz, P. W. & Laham, D. (1998). Discourse Process. 25, 259–284.
Le, T. V., Epa, V. C., Burden, F. R. & Winkler, D. A. (2012). Chem. Rev. 112, 2889–2919.
Liu, H. & Yu, L. (2005). IEEE Trans. Knowl. Data Eng. 17, 491–502.
Liu, Y., Zhao, T., Ju, W. & Shi, S. (2017). J. Materiomics, 3, 159–177.
Lloyd, S. P. (1982). IEEE Trans. Inf. Theory, 28, 129–137.
Lu, W., Xiao, R., Yang, J., Li, H. & Zhang, W. (2017). J. Materiomics, 3, 191–201.
Lum, Y., Singh, G., Lehman, A., Ishkanov, T., Vejdemo-Johansson, M., Alagappan, M., Carlsson, J. & Carlsson, G. (2013). Sci. Rep. 3, 1236.
MacQueen, J. (1967). Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, Statistics, pp. 281–297. Berkeley: University of California Press.
Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. Cambridge: MIT Press.
Opitz, D. & Maclin, R. (1999). J. Artif. Intell. Res. 11, 169–198.
Picard, R. R. & Cook, R. D. (1984). J. Am. Stat. Assoc. 79, 575–583.
Pilania, G., Wang, C., Jiang, X., Rajasekaran, S. & Ramprasad, R. (2013). Sci. Rep. 3, 2810.
Rajan, K. (2015). Annu. Rev. Mater. Res. 45, 153–169.
Rupp, M. (2015). Int. J. Quantum Chem. 115, 1058–1073.
Saal, J. E., Kirklin, S., Aykol, M., Meredig, B. & Wolverton, C. (2013). JOM, 65, 1501–1509.
Settles, B. (2010). Computer Sciences Technical Report No. 1648. University of Wisconsin-Madison, USA.
Seung, H. S., Opper, M. & Sompolinsky, H. (1992). Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 27–29 July 1992, Pittsburgh, Pennsylvania, USA, pp. 287–294. New York: ACM.
Smith, J. S., Isayev, O. & Roitberg, A. E. (2017). Chem. Sci. 8, 3192–3203.
Snyder, J. C., Rupp, M., Hansen, K., Müller, K. & Burke, K. (2012). Phys. Rev. Lett. 108, 253002.
Srinivasan, S., Broderick, S. R., Zhang, R., Mishra, A., Sinnott, S. B., Saxena, S. K., LeBeau, J. M. & Rajan, K. (2015). Sci. Rep. 5, 17960.
Stone, M. (1974). J. R. Stat. Soc. Ser. B (Methodological), 36, 111–147.
Sumpter, B. G., Vasudevan, R. K., Potok, T. & Kalinin, S. V. (2015). NPJ Comput. Mater. 1, 15008.
Takahashi, K., Takahashi, L., Baran, J. D. & Tanaka, Y. (2017). J. Chem. Phys. 146, 011002.
Tibshirani, R. (1996). J. R. Stat. Soc. Ser. B (Methodological), 58, 267–288.
Tresp, V. (2000). Neural Comput. 12, 2719–2741.
Ulissi, Z. W., Tang, M. T., Xiao, J., Liu, X., Torelli, D. A., Karamad, M., Cummins, K., Hahn, C., Lewis, N. S., Jaramillo, T. F., Chan, K. & Nørskov, J. K. (2017). ACS Catal. 7, 6600–6608.
Vidal, R., Ma, Y. & Sastry, S. (2005). IEEE Trans. Pattern Anal. Mach. Intell. 27, 1945–1959.
Villars, P., Berndt, M., Brandenburg, K., Cenzual, K., Daams, J., Hulliger, F., Massalski, T., Okamoto, H., Osaki, K., Prince, A., Putz, H. & Iwata, S. (2004). J. Alloys Compd. 367, 293–297.
Xu, Y., Yamazaki, M. & Villars, P. (2011). Jpn. J. Appl. Phys. 50, 11RH02.
Zaharia, M., Xin, R. S., Wendell, P., Das, T., Armbrust, M., Dave, A., Meng, X., Rosen, J., Venkataraman, S., Franklin, M. J., Ghodsi, A., Gonzalez, J., Shenker, S. & Stoica, I. (2016). Commun. ACM, 59, 56–65.
Zhang, C. & Ma, Y. (2012). Ensemble Machine Learning: Methods and Applications. Heidelberg: Springer.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.