Better models by discarding data?
Kay Diederichsa* and P. Andrew Karplusb
aFaculty of Biology, University of Konstanz, M647, 78457 Konstanz, Germany, and bDepartment of Biochemistry and Biophysics, Oregon State University, Corvallis, OR 97331, USA
*Correspondence e-mail: kay.diederichs@uni-konstanz.de
In macromolecular X-ray crystallography, typical data sets have substantial multiplicity. This can be used to calculate the consistency of repeated measurements and thereby assess data quality. Recently, the properties of a correlation coefficient, CC1/2, that can be used for this purpose were characterized and it was shown that CC1/2 has superior properties compared with `merging' R values. A derived quantity, CC*, links data and model quality. Using experimental data sets, the behaviour of CC1/2 and the more conventional indicators were compared in two situations of practical importance: merging data sets from different crystals and selectively rejecting weak observations or (merged) unique reflections from a data set. In these situations controlled `paired-refinement' tests show that even though discarding the weaker data leads to improvements in the merging R values, the refined models based on these data are of lower quality. These results show the folly of such data-filtering practices aimed at improving the merging R values. Interestingly, in all of these tests CC1/2 is the one data-quality indicator for which the behaviour accurately reflects which of the alternative data-handling strategies results in the best-quality refined model. Its properties in the presence of systematic error are documented and discussed.
Keywords: R value; correlation coefficient; data quality; model quality; outlier rejection.
1. Introduction
Since the number of reflections in a crystallographic experiment is high, indicators of aggregated statistical properties are needed. For decades, the `merging' R value,

$$R_{\mathrm{merge}} = \frac{\sum_{hkl}\sum_{i=1}^{n(hkl)} \left|I_{i}(hkl) - \overline{I(hkl)}\right|}{\sum_{hkl}\sum_{i=1}^{n(hkl)} I_{i}(hkl)},$$
has been used almost exclusively for this purpose. This normalized linear residual was defined (Arndt et al., 1968) ad hoc to measure the consistency of measurements made with the first two-dimensional detectors by utilizing the multiplicity (also called redundancy) of observations of the unique reflections. The formula sums the absolute deviations of the intensities of the n_i observations of each unique reflection from their average and normalizes using the sum of the intensities. As a relative measure of deviation, it can be calculated as an overall quantity for a data set, but also as a function of resolution. Later, it was shown (Diederichs & Karplus, 1997) that each term of the numerator has to be modified by a factor of [n_i/(n_i − 1)]^{1/2} to give a result that is independent of the average multiplicity. The resulting quantity is called Rmeas (or the redundancy-independent merging R, Rr.i.m.; Weiss, 2001) and reports on the consistency of the measured observations. It was also realised that a higher multiplicity of observations results in the merged data being of higher quality than the individual measurements, so a distinct statistic, Rmrgd, was introduced to assess the merged data quality (Diederichs & Karplus, 1997). Thereafter, it was shown that the quality of the merged data could also be estimated by introducing an additional factor of 1/n_i^{1/2} into each term of the Rmeas numerator (Weiss, 2001), as this accounts for the expected increase in accuracy associated with averaging n_i measurements. The resulting quantity is called Rp.i.m. (the precision-indicating merging R; Weiss, 2001).
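To make these definitions concrete, the following minimal Python sketch computes Rmerge, Rmeas and Rp.i.m. for a set of multiply measured reflections; the function name and the plain-dict input format are illustrative choices of ours, not part of any established package (real values come from programs such as XDS/XSCALE or AIMLESS).

```python
def merging_r_values(observations):
    """Compute (R_merge, R_meas, R_pim) from a dict mapping a unique
    reflection key (h, k, l) to the list of its measured intensities.
    Illustrative sketch of the formulas discussed in the text."""
    num_merge = num_meas = num_pim = denom = 0.0
    for intensities in observations.values():
        n = len(intensities)
        if n < 2:
            continue  # a deviation term needs multiplicity >= 2
        mean_i = sum(intensities) / n
        dev = sum(abs(i - mean_i) for i in intensities)
        num_merge += dev                         # plain R_merge term
        num_meas += (n / (n - 1)) ** 0.5 * dev   # [n/(n-1)]^(1/2) correction -> R_meas
        num_pim += (1.0 / (n - 1)) ** 0.5 * dev  # extra 1/n^(1/2) factor folded in -> R_pim
        denom += sum(intensities)
    return num_merge / denom, num_meas / denom, num_pim / denom
```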
Recently, we (Karplus & Diederichs, 2012) and Evans (2011) have suggested that the Pearson correlation coefficient of two `half' data sets (i.e. each derived by averaging half of the observations for a given reflection) might be better suited than merging R factors for assessing data quality. In our work, we designated this quantity CC1/2 and conclusively showed that in many ways it has a better statistical foundation than merging R values (Karplus & Diederichs, 2012) and that in particular it provides a direct assessment of the relative proportions to which signal and noise contribute to the variation in the data in a given resolution shell. Furthermore, we introduced a quantity termed CC*, defined as

$$\mathrm{CC}^{*} = \left(\frac{2\,\mathrm{CC}_{1/2}}{1 + \mathrm{CC}_{1/2}}\right)^{1/2}. \eqno(1)$$
CC* (with an associated uncertainty) is an experimental estimate of what could be called CCtrue, the correlation of the final merged data with the underlying true values of these quantities. CC* is thus an upper limit for CCwork and CCfree from a properly refined model, where the latter are correlation coefficients between intensities calculated from the model and those obtained from the experiment. CC* is limiting because if the intensities calculated from a model match the experimental data better than the (unknown) true intensities do, this implies that the model is overfitted, fitting not just the signal but also the noise that is in the data. The relationships between these correlation coefficients are schematically summarized in Fig. 1.
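The computation of CC1/2 and CC* can likewise be sketched in a few lines, assuming the same plain-dict input as above; this illustrates the half-data-set construction, not the actual implementation used in XDS/XSCALE.

```python
import random

def cc(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def cc_half_and_cc_star(observations, seed=0):
    """CC1/2 from one random split of each reflection's observations into
    two half data sets, and CC* derived from it via equation (1).
    Reflections with fewer than two observations are skipped."""
    rng = random.Random(seed)
    i1, i2 = [], []
    for intensities in observations.values():
        if len(intensities) < 2:
            continue
        shuffled = list(intensities)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        i1.append(sum(shuffled[:half]) / half)
        i2.append(sum(shuffled[half:]) / (len(shuffled) - half))
    cc_half = cc(i1, i2)
    # equation (1); CC* is undefined for negative CC1/2
    cc_star = (2 * cc_half / (1 + cc_half)) ** 0.5 if cc_half > 0 else float("nan")
    return cc_half, cc_star
```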
Any scientific experiment entails data processing, which can include outlier detection and rejection. In crystallography, common practices include systematically rejecting certain subsets of data in order to `improve' the resulting merged data set. The kinds of data that are sometimes rejected are whole data sets, high-resolution shells, unique reflections in the final reduced data set and single observations before merging, or combinations thereof. While these practices do serve to improve the merging R-value statistics associated with the data, as far as we are aware little effort has been made to assess how these procedures impact the quality of the model that is produced by refinement against the data, which in our view is the single outcome that matters.
The only such study of which we are aware is our recent work using a novel `paired-refinement' strategy to show that the inclusion of often-rejected weak data in high-resolution shells significantly improves the quality of the refined models (Karplus & Diederichs, 2012). Paired means that a starting model is refined using the same protocol against both a full data set and an (in some way) truncated version of the full data set. The resulting two models are then compared in terms of R values (Rwork, Rfree) to judge which model is better. Importantly, the comparison of R values is only meaningful when the truncated data set is also used to calculate Rwork and Rfree of the model that was refined against the full data set.
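The logic of such a paired comparison can be captured in a short sketch; `refine` and `r_factors` are hypothetical placeholders standing in for scripted runs of an actual refinement program (e.g. phenix.refine), and the key point is that both models are scored against the same data.

```python
def paired_refinement(start_model, full_data, truncated_data, refine, r_factors):
    """Schematic paired refinement: refine the same starting model, with the
    same protocol and free-reflection set, against a full and a truncated
    data set, then compare both models against the truncated data only."""
    model_full = refine(start_model, full_data)
    model_trunc = refine(start_model, truncated_data)
    # Scoring both models against the SAME (truncated) data is essential;
    # R values computed against different data sets are not comparable.
    r_full = r_factors(model_full, truncated_data)    # (Rwork, Rfree)
    r_trunc = r_factors(model_trunc, truncated_data)  # (Rwork, Rfree)
    return r_full, r_trunc  # the lower Rfree identifies the better data choice
```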
For our test cases, paired refinements indicated that including data to the resolution at which CC1/2 was in the range 0.1–0.2 led to an improved model, even though at these resolutions the traditionally used statistics were well beyond the conventional limits (the limiting signal-to-noise ratios were near 0.3–0.6 and Rmeas was in excess of 300%). Interestingly, this CC1/2 range is a reasonable match to the value of 0.143 which was proposed in the field of electron microscopy for a quantity related to CC1/2 (the Fourier shell correlation, FSC) as an appropriate limit for discarding data, because it corresponds to a CC* value of 0.5 (Rosenthal & Henderson, 2003). Although this value indeed seems to be a reasonable cutoff for X-ray data for the test cases we studied, we resist generalizing this point and suggest that the high-resolution cutoff is in general better decided using the `paired-refinement' strategy (Karplus & Diederichs, 2012). The latter approach has two main advantages: firstly, it allows for the possibility that individual structures or data sets may behave differently, and secondly, it allows for the possibility that as programs improve they may be able to more fully extract structural information from weak data.
Given the paired-refinement technique and the novel correlation coefficient-based data-quality indicators, it is now possible to systematically investigate the impacts of other data-selection practices mentioned above. Here, we document further properties of CC1/2 and CC* and also explore how the various debatable `data-filtering' procedures impact these new and the conventional R-value-based statistics, as well as model quality.
2. Materials and methods
2.1. Data sets
Crystals of cysteine dioxygenase (CDO) were grown and soaked as described previously (Simmons et al., 2008). Data frames were collected on beamline 5.0.1 at the Advanced Light Source.
The data frames were processed and scaled with XDS (v. December 6, 2010; Kabsch, 2010a,b) and data sets were merged with XSCALE using default parameters. The space group is P43212, with average unit-cell parameters a = b = 57.5, c = 122.2 Å.
2.2. Quality indicators
In addition to data-quality R values, it is conventional to quantify the signal-to-noise ratio of the merged intensities, given as 〈I/σ〉. We used a custom program HIRESCUT to evaluate Rmeas, 〈I/σ〉, CC1/2 and CC* as a function of resolution, a feature that has since been merged into XDS and XSCALE. Furthermore, we added functionality to HIRESCUT that allows the rejection of single observations or unique reflections according to their I/σ ratio.
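A rough sketch of such a per-shell evaluation, reusing the merging_r_values and cc_half_and_cc_star helpers sketched above, is given below; equal-count shells are used for simplicity, whereas HIRESCUT/XSCALE use their own binning schemes.

```python
def per_shell_statistics(reflections, n_shells=10):
    """Evaluate R_meas, CC1/2 and CC* in resolution shells.
    `reflections` maps (h, k, l) -> (resolution_in_A, [intensities]);
    shells contain equal numbers of unique reflections (a simplification)."""
    ordered = sorted(reflections.items(), key=lambda kv: kv[1][0], reverse=True)
    shell_size = max(1, len(ordered) // n_shells)
    stats = []
    for start in range(0, len(ordered), shell_size):
        shell = {hkl: ints for hkl, (_, ints) in ordered[start:start + shell_size]}
        _, r_meas, _ = merging_r_values(shell)
        cc_half, cc_star = cc_half_and_cc_star(shell)
        stats.append((r_meas, cc_half, cc_star))
    return stats  # one (R_meas, CC1/2, CC*) tuple per shell, low to high resolution
```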
The CCP4 (Winn et al., 2011) program SFTOOLS was used to obtain correlation coefficients between experimental intensities and F²calc values from the model.
2.3. Refinement
To obtain a model suitable for refinement, we used the PDB entry representing a related structure (PDB entry 2b5h; Simmons et al., 2006); the S-cysteine-persulfenate ligand was added guided by a difference Fourier map. H-atom positions were constructed and the resulting PDB file was refined in phenix.refine (v.1.7.1; Adams et al., 2010) using default parameters for the weights and number of macrocycles and updating the solvent model in every cycle.
3. Results and discussion
The main goal of this work is to consider three common questions that arise as part of data reduction and to explore how the new correlation-coefficient-based data-quality measures behave and what the paired-refinement strategy indicates about the choices that will deliver the best refined models. The first of these scenarios is that of having a strong data set and a weaker data set, and we ask whether it is wise to merge the two or to use only the stronger one. The other two scenarios concern practices that experts regard as questionable but that are sometimes used to improve data-reduction statistics by deleting selected weak reflections.
3.1. Can strong data be improved by merging with a weaker data set?
This set of analyses uses three data sets (CDO3, CDO4 and CDO5, each corresponding to 90° of rotation) collected from different crystals that were grown and handled equivalently. According to all data-quality indicators [CC1/2, Rmeas and 〈I/σ(I)〉] CDO3 is the best data set (Table 1), with CDO4 and CDO5 being similar to each other and of lower quality than CDO3. These data sets were merged in all possible combinations, and refinement in phenix.refine, both isotropically and anisotropically, was carried out against the resulting data sets using the same test set of reflections and the same high-resolution limit of 1.57 Å.
Using Table 1, we can investigate the question of how strongly Rmeas, CC1/2 and 〈I/σ(I)〉 are associated with the quality of the resulting model. We find examples of both improvement and deterioration of the merged data set depending on its constituent data sets.
The increased value of Rmeas, if taken as an indicator of data quality, would suggest that merging CDO3 (Rwork/Rfree = 0.186/0.219) with either CDO4 or CDO5 should significantly decrease the quality of the resulting data set, but in fact, based on the overall Rwork/Rfree, the model quality slightly decreases for CDO3+4 (Rwork/Rfree = 0.185/0.221) but slightly improves for CDO3+5 (Rwork/Rfree = 0.185/0.216). However, the comparison of model R values in this way is not really meaningful, since they are calculated against different data sets. This problem is overcome by the paired-refinement technique (Table 2), which unambiguously confirms that the model obtained by refinement against CDO3+5 fits data set CDO3 better than the original model obtained by refinement against CDO3. Conversely, the paired-refinement technique confirms that refinement against a merged CDO3+4 data set does not produce a model that better fits CDO3 than the original model.
In both cases, CC1/2/CC* correctly predict this result: the value of CC* in the highest resolution shell is increased relative to CDO3 for the CDO3+5 data set but not for the CDO3+4 data set.
In contrast, in the case of 〈I/σ(I)〉 the overall value decreases but the value in the highest resolution shell increases in both cases, so the influence of data set merging upon model quality is difficult to anticipate.
Merging of the two weak CDO4 and CDO5 data sets yields a better data set, as revealed by the decreased Rfree (Rwork/Rfree = 0.199/0.237) of the model refined against it. However, this model only fits CDO5 better than the original model refined against CDO5; it does not fit the CDO4 data set better despite the increased CC*. The explanation in this case is most likely that CDO4 is less complete, in particular in the highest resolution shell, than CDO3 and CDO5. Therefore, the caveat mentioned above, namely that comparison of crystallographic indicators should be performed against the same data, applies. In addition, slight non-isomorphism between CDO4 versus CDO3 and CDO5 is conceivable. In this case in particular, it is important to use the paired-refinement technique.
We note that in all cases the isotropic CCwork (Table 1) is significantly worse than the anisotropic value and that the anisotropic Rfree is lower by 0.5–1.3% than the isotropic Rfree (Table 1), which indicates that there is some justification for anisotropic refinement. However, we decided not to show the detailed results of anisotropic refinement, since it is not completely justified: compared with isotropic refinement, anisotropic refinement reduced Rwork by about 3%, thus widening the Rfree–Rwork gap; likewise, a wider CCwork–CCfree gap is observed for anisotropic than for isotropic refinement. This is consistent with the observation that the anisotropic CCwork reaches or slightly exceeds CC*, indicating overfitting.
In summary, Rmeas is not a useful indicator of merged data-set quality, whereas CC1/2, and to a lesser extent 〈I/σ(I)〉, correctly indicate the direction of quality change upon merging. CC* is a meaningful upper limit of CCwork; the CCwork of isotropic models is found to be significantly lower than CC*, whereas the CCwork of anisotropic models is slightly higher than CC*. The latter finding also suggests that sphericity restraints in anisotropic refinement are a desirable feature of a refinement program; currently, only SHELXL (Sheldrick, 2008) and REFMAC (Murshudov et al., 2011) support this.
3.2. Does discarding weak observations to improve conventional data-quality statistics actually lead to better models?
It is obvious that by discarding the weakest data it is possible to create a data set with better conventional statistics at high resolution [i.e. lower Rmeas and higher 〈I/σ(I)〉], so that a higher resolution cutoff appears to be justified. While we did not find publications about these practices, we know from discussions with students and colleagues that they are being applied, in particular to prevent possible criticism by reviewers. To assess the impact of such `data-filtering' practices on the CC1/2 statistic and on model quality, we reprocessed the CDO3 data set to reject either all negative unique reflections (data set CDO3b) or all negative observations (data set CDO3c). The high-resolution statistics for the three data sets (Fig. 2) show a few interesting features. Firstly, as expected, both CDO3b and CDO3c have much lower Rmeas and higher 〈I/σ(I)〉 in the high-resolution bins. Secondly, more reflections are rejected in CDO3b, which makes sense because any reflection having at least a single positive observation will be included in CDO3c, while even reflections with positive observations will be deleted from CDO3b whenever the positive observations are more than offset by negative observations of the same reflection. Thirdly, the high-resolution CC1/2 values of both CDO3b and CDO3c are lower than those of CDO3, with CDO3b showing the greater decrease.
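For concreteness, the two rejection schemes can be written out as follows (a sketch using the same plain-dict representation as above; the function names are ours). It also makes explicit why CDO3b loses more reflections: a reflection survives the CDO3c filter if it has any positive observation, but survives the CDO3b filter only if its observations average to a positive value.

```python
def reject_negative_observations(observations):
    """CDO3c-style filtering: drop individual negative measurements before
    merging (the practice examined, not endorsed, in the text)."""
    return {hkl: [i for i in ints if i > 0]
            for hkl, ints in observations.items()
            if any(i > 0 for i in ints)}

def reject_negative_unique_reflections(observations):
    """CDO3b-style filtering: keep all observations but drop unique
    reflections whose merged (averaged) intensity is negative."""
    return {hkl: ints for hkl, ints in observations.items()
            if sum(ints) / len(ints) > 0}
```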
To assess the relative quality of the models resulting from refinements against these three data sets applying the paired-refinement strategy, the same starting model was refined against each data set and the resulting models were used, without further refinement, to obtain Rwork and Rfree against the other two data sets. These R values (Table 3) show that the model resulting from refinement against all reflections (data set CDO3) is unambiguously the best model, giving both the lowest Rfree and the least overfitting (as judged from the reduced Rfree–Rwork gap) with all three data sets. Furthermore, the model refined against data set CDO3b is consistently better than that resulting from refinement against CDO3c, even though data set CDO3b has fewer unique reflections than data set CDO3c. We conclude that data `massaging' or `filtering' by rejecting negative unique reflections, or – even worse – negative observations, with the purpose of enhancing Rmeas or 〈I/σ(I)〉 values is counterproductive and leads to worse models. This conclusion is entirely consistent with the concept that the inclusion of weak data (even so weak as to be negative), when appropriately weighted, improves the resulting model and that such data should not be discarded.
In contrast to the behaviour of Rmeas and 〈I/σ(I)〉, the high-resolution CC1/2 values of the CDO3b and CDO3c data sets actually decrease, paralleling the changes in model quality, even though CC1/2 might be expected to increase given that the remaining data are stronger. As is illustrated in Fig. 3, the data-filtering practices do not produce a typical data set with a higher signal-to-noise ratio, but instead introduce large systematic errors into the data by skewing the distribution of reflection intensities away from what would be expected for a data set with a given level of signal and random error. In the next section, we present an analysis of how systematic errors such as these can influence the CC1/2 and CC* values.
3.3. Theory of the impact of systematic errors on CC1/2 and CC*
Here, we use the terminology and definitions of the work that introduced CC1/2 (Karplus & Diederichs, 2012). To calculate the intra-data-set CC1/2, the measurements belonging to each unique reflection of the experimental data set are randomly assigned to two half data sets and the observations belonging to each half data set are averaged to give I1 and I2, respectively. We observe that by choosing observations randomly, neither of the two half data sets is preferred in any way; thus, their error variances σ²∊1 and σ²∊2 are the same (σ²∊). We may thus define τ = J − 〈J〉 for the `true' measurements, with mean zero and variance σ²τ, ∊1 as the errors in random half data set 1, with an expectation value of zero and variance σ²∊, and ∊2 as the errors in random half data set 2, with an expectation value of zero and variance σ²∊.
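This error model is easy to probe numerically. The sketch below (reusing cc() from the earlier block) simulates independent Gaussian signal and errors, for which CC1/2 should approach στ²/(στ² + σ∊²); the parameter values are arbitrary.

```python
import random

def simulate_cc_half(n_refl=100000, sigma_tau=1.0, sigma_eps=1.0, seed=1):
    """Monte Carlo check of the half-data-set model: I1 = tau + eps1 and
    I2 = tau + eps2 with mutually independent tau, eps1, eps2. The expected
    CC1/2 is sigma_tau^2 / (sigma_tau^2 + sigma_eps^2), i.e. 0.5 here."""
    rng = random.Random(seed)
    i1, i2 = [], []
    for _ in range(n_refl):
        tau = rng.gauss(0.0, sigma_tau)
        i1.append(tau + rng.gauss(0.0, sigma_eps))
        i2.append(tau + rng.gauss(0.0, sigma_eps))
    return cc(i1, i2)  # cc() as sketched in the earlier block
```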
CC* as given by (1) is an estimate of CCtrue, the correlation between the arithmetic average of the half data set intensities I1 and I2 and the true intensities J. This estimate should be accurate since no approximations are involved in deriving (1). When deriving (1), we assumed that τ, ∊1 and ∊2 are mutually independent. However, when systematic errors in the measurement or data processing are present τ, ∊1 and ∊2 may no longer be independent of each other.
An example of a systematic error that leaves τ, ∊1 and ∊2 mutually independent is the case of an error in a data-processing program that results in a positive offset to all intensities. This is clearly a systematic error, but it does not affect τ, ∊1 and ∊2 (since any offset is subtracted when constructing τ, ∊1 and ∊2, which have an expectation value of zero) and thus has no influence on CC1/2 or CC*. This type of error would, however, lower Rmerge but increase Rwork/Rfree. In this `Gedankenexperiment', data and model R values are thus anticorrelated, whereas a CC-based data-quality indicator is unchanged. A simple way to detect such a problem would be to monitor the scale factors between Iobs and Icalc [note that comparing Iobs and Icalc rather than Fobs and Fcalc avoids artifacts introduced by the French–Wilson (French & Wilson, 1978) procedure for converting intensities to amplitudes].
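This Gedankenexperiment can be checked numerically with the helpers sketched earlier: a constant offset inflates the Rmerge denominator while leaving the absolute deviations, and the correlation of the half data sets, untouched.

```python
def offset_experiment(observations, offset=100.0, seed=0):
    """Apply a constant positive offset to all intensities and compare
    R_merge and CC1/2 before and after. Expectation: R_merge drops
    (same deviations over a larger denominator) while CC1/2 is unchanged
    (means are subtracted when correlating the half data sets)."""
    shifted = {hkl: [i + offset for i in ints]
               for hkl, ints in observations.items()}
    r_before, _, _ = merging_r_values(observations)
    r_after, _, _ = merging_r_values(shifted)
    cc_before, _ = cc_half_and_cc_star(observations, seed=seed)
    cc_after, _ = cc_half_and_cc_star(shifted, seed=seed)  # same random split
    return (r_before, r_after), (cc_before, cc_after)
```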
Three conceptually simple examples of systematic errors that do invalidate one or more of the assumptions of independence can be given: in the first two, the errors ∊1 and ∊2 of the two half data sets are positively correlated, whereas in the third the deviations from the truth occur in opposite directions in the two half data sets.
In all cases of systematic error we can still assume that E(∊1τ) = E(∊2τ), since the random assignment of measurements to half data sets on average prevents either of the two expectation values from being systematically larger than the other.
The difference CC*² − CC²true is zero if the above assumptions about the mutual independence of τ, ∊1 and ∊2 hold. It is interesting that even after dropping these assumptions we can calculate CC*² − CC²true. This offers a way to predict whether the estimate CC* overestimates or underestimates CCtrue. Collecting and rearranging terms as in the Supplementary Material of Karplus & Diederichs (2012), we obtain

$$\mathrm{CC}^{*2} - \mathrm{CC}_{\mathrm{true}}^{2} = \frac{\sigma_{\tau}^{2}\,E(\epsilon_{1}\epsilon_{2}) - E(\epsilon_{1}\tau)\,E(\epsilon_{2}\tau)}{\sigma_{\tau}^{2}\left[\sigma_{\tau}^{2} + 2E(\epsilon_{1}\tau) + \tfrac{1}{2}\left(\sigma_{\epsilon}^{2} + E(\epsilon_{1}\epsilon_{2})\right)\right]}. \eqno(2)$$
The denominator of the right-hand side is positive. It is noteworthy that no matter whether the true signal τ is negatively or positively correlated with ∊1 and ∊2, the second term of the numerator, −E(∊1τ)E(∊2τ), is always negative, since E(∊1τ) = E(∊2τ). Overall, the numerator may be positive or negative, or its two terms could cancel. Cases where ∊1 and ∊2 are positively correlated, as in the first two of the three examples above, seem to have practical relevance; for these, E(∊1∊2) is larger than zero, making the first term of the numerator positive, which favours CC* > CCtrue (i.e. overestimation of CCtrue). However, the second term may cancel the first term, or if it is larger in magnitude than the first term the overall result may be an underestimation of CCtrue.
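This prediction is easy to verify numerically. In the sketch below (again reusing cc() from above), a shared error component of relative weight rho makes ∊1 and ∊2 positively correlated while E(∊τ) stays zero, so by (2) CC* should exceed CCtrue; the parameter values are arbitrary.

```python
import random

def simulate_correlated_errors(rho=0.5, n_refl=100000,
                               sigma_tau=0.5, sigma_eps=1.0, seed=2):
    """Monte Carlo check of equation (2) for E(eps1*eps2) > 0: a common
    error component correlates the half-data-set errors, and CC* then
    overestimates CC_true."""
    rng = random.Random(seed)
    i1, i2, true = [], [], []
    for _ in range(n_refl):
        tau = rng.gauss(0.0, sigma_tau)
        common = rng.gauss(0.0, sigma_eps * rho ** 0.5)   # shared error part
        e1 = common + rng.gauss(0.0, sigma_eps * (1.0 - rho) ** 0.5)
        e2 = common + rng.gauss(0.0, sigma_eps * (1.0 - rho) ** 0.5)
        i1.append(tau + e1)
        i2.append(tau + e2)
        true.append(tau)
    cc_half = cc(i1, i2)
    cc_star = (2 * cc_half / (1 + cc_half)) ** 0.5   # equation (1)
    merged = [(a + b) / 2.0 for a, b in zip(i1, i2)]
    return cc_star, cc(merged, true)  # expect cc_star > cc_true for rho > 0
```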
The third example yields CC*2 − CC2true < 0, which means that CC* is an underestimate of CCtrue. The deviation from the truth occurs in opposite directions in the two half data sets, so that their average is close to the truth (high CCtrue) but their agreement may be poor (low CC1/2) such that CC* < CCtrue.
These examples demonstrate that even if the assumptions under which CC* is an accurate estimate of CCtrue are not completely fulfilled, CC* is often a conservative estimate of CCtrue owing to the safeguarding effect of the second term of the numerator. This is clearly a desirable property of a data-quality indicator.
We also note that in the case of vanishing signal (τ→0) only the first term of the numerator remains. Thus, at high resolution the value of CC*² − CC²true is dominated by the expectation value of ∊1∊2.
3.4. Application of the theory to the specific data-filtering test cases
Applying these theoretical considerations to the two data-filtering practices explored, we can understand that the decreases in CC1/2 do not in this case occur because there is less signal in these data. Rather, the reduction of CC1/2 is due to systematic error being introduced: if negative unique reflections are rejected (data set CDO3b) then in high-resolution shells where the signal vanishes, ∊1 and ∊2 become negatively correlated (Fig. 3b). This, according to (2), is augmented by the correlation between τ and ∊1, ∊2, leading to a substantial reduction of CC* below CCtrue. In other words, the systematic error causes CC1/2 to be an underestimate of the level of signal in the data, meaning that in turn CC* is no longer a valid upper limit for CCwork and CCfree. Indeed, consideration of the results demonstrates that at high resolution CCwork and CCfree are both higher than CC* for data set CDO3b (Table 4).
Considering data set CDO3c, the CC1/2 (and thus the CC*) value is lower than that of CDO3, but to a lesser extent than for CDO3b (Fig. 2). Since rejection of negative observations does not necessarily result in a correlation between ∊1 and ∊2 (Fig. 3c), the reason for the decrease is not the first term of the numerator, as for CDO3b, but rather the second term. Here, the rejection of negative observations increases the intensity of the merged data over that of the true data, which leads to a correlation between τ and ∊1, ∊2. Again, according to (2), this decreases CC* below CCtrue. CCwork and CCfree are much reduced for CDO3c compared with models refined against CDO3 or CDO3b (Table 4) because at high resolution the rejection of negative observations leads to systematically increased intensities for the affected reflections, but not for reflections without negative observations. This is a systematic error that the model cannot fit; i.e. it reduces CCwork and CCfree. This effect does not occur for CDO3b, which simply has its weak reflections discarded; in this latter case, CCwork and CCfree are higher than for the model refined against CDO3, since the model refined against CDO3b fits the subset of stronger, less noisy intensities.
The rejection of negative intensities in this section serves as an example of a broader class of data-massaging practices, namely all those employing a positive σ cutoff. Employing such a cutoff can be expected to result in even worse models than those obtained with a cutoff of zero, as shown here. Negative σ cutoffs of about −3σ and below, on the other hand, may be expected to reject true outliers and to affect very few reflections. In addition, it should be noted that the artificial increase of average intensities at high resolution brought about by the rejection of weak reflections invalidates the French–Wilson procedure for estimating amplitudes from intensities and may result in a model with unrealistically low temperature factors.
Unfortunately, non-isomorphism between data sets cannot be treated using the theory laid out above, since the concept of `truth' is undefined when two data sets from different crystals are merged: if the data sets are best described by different structures, then which is the `true' one? Obviously, further research is needed to obtain meaningful prediction of the model quality resulting from the merging of slightly non-isomorphous data sets (Giordano et al., 2012).
4. Summary
Since the introduction of data R values, decisions based on them have influenced protocols for rejecting complete data sets, resolution shells with weak data and weak (e.g. negative) reflections. Procedures such as those analyzed here that discard, filter or massage data in order to minimize data R values need to be abandoned, since they lead to suboptimal atomic models. They should be replaced by evaluations of data quality using better suited correlation-coefficient-based criteria, together with unambiguous identification of the best models, which can be performed using a paired-refinement strategy.
References
Adams, P. D. et al. (2010). Acta Cryst. D66, 213–221.
Arndt, U. W., Crowther, R. A. & Mallett, J. F. W. (1968). J. Phys. E Sci. Instrum. 1, 510–516.
Diederichs, K. & Karplus, P. A. (1997). Nature Struct. Biol. 4, 269–275.
Diederichs, K. & Karplus, P. A. (2013). In Advancing Methods for Biomolecular Crystallography, edited by R. Read, A. G. Urzhumtsev & V. Y. Lunin. New York: Springer-Verlag.
Evans, P. R. (2011). Acta Cryst. D67, 282–292.
French, S. & Wilson, K. (1978). Acta Cryst. A34, 517–525.
Giordano, R., Leal, R. M. F., Bourenkov, G. P., McSweeney, S. & Popov, A. N. (2012). Acta Cryst. D68, 649–658.
Kabsch, W. (2010a). Acta Cryst. D66, 125–132.
Kabsch, W. (2010b). Acta Cryst. D66, 133–144.
Karplus, P. A. & Diederichs, K. (2012). Science, 336, 1030–1033.
Murshudov, G. N., Skubák, P., Lebedev, A. A., Pannu, N. S., Steiner, R. A., Nicholls, R. A., Winn, M. D., Long, F. & Vagin, A. A. (2011). Acta Cryst. D67, 355–367.
Rosenthal, P. B. & Henderson, R. (2003). J. Mol. Biol. 333, 721–745.
Sheldrick, G. M. (2008). Acta Cryst. A64, 112–122.
Simmons, C. R., Krishnamoorthy, K., Granett, S. L., Schuller, D. J., Dominy, J. E., Begley, T. P., Stipanuk, M. H. & Karplus, P. A. (2008). Biochemistry, 47, 11390–11392.
Simmons, C. R., Liu, Q., Huang, Q., Hao, Q., Begley, T. P., Karplus, P. A. & Stipanuk, M. H. (2006). J. Biol. Chem. 281, 18723–18733.
Weiss, M. S. (2001). J. Appl. Cryst. 34, 130–135.
Winn, M. D. et al. (2011). Acta Cryst. D67, 235–242.