Why direct and post-refinement determinations of absolute structure may give different results

Chemical Crystallography Laboratory, Department of Chemistry, University of Oxford, 12 Mansfield Road, Oxford, Oxfordshire OX1 3TA, England
*Correspondence e-mail: david.watkin@chem.ox.ac.uk
Direct determination of the Flack (x) parameter and its standard uncertainty are usually not much influenced by changes in the weighting schemes, but if they are then there are probably problems with the data or model. Post-refinement analyses give Flack parameters strongly influenced by the choice of weights. Weights derived from those used in the main least squares lead to post-refinement estimates of the Flack parameters and their standard uncertainties very similar to those obtained by direct refinement. Weights derived from the variances of the observed structure amplitudes are more appropriate and often yield post-refinement Flack parameters similar to those from direct refinement, but always with lower standard uncertainties. Substantial disagreement between direct and post-refinement determinations is strongly indicative of problems with the data, which may be difficult to identify. Examples drawn from 28 structure determinations are provided showing a range of different underlying problems. It seems likely that post-refinement methods taking into account the slope of the normal probability plot are currently the most robust estimators of absolute structure and should be reported along with the directly refined values.
Direct determination of the Flack parameter as part of the structure refinement procedure usually gives different, though similar, values to post-refinement methods. The source of this discrepancy has been probed by analysing a range of data sets taken from the recent literature. Most significantly, it was observed that the directly refined Flack (x) parameter is much less sensitive to the choice of weighting scheme than the post-refinement estimates.

Keywords: absolute structure determination; Flack parameter; refinement; software; problem structures.
1. Introduction
The introduction by Rogers (1981) of a new parameter, η, as a refineable multiplier onto f′′ in the least-squares optimization of a crystal structure [equation (1)] was the first attempt to determine absolute structure directly as part of the refinement process (hereafter called direct determination).
Flack (1983) recognized that the η parameter had no physical significance except for values of ±1, and introduced a new formulation of the problem. He proposed that a given sample be regarded as a twin by inversion, and that refining the twin fraction x would reveal the absolute structure. Representing the intensity of a reflection h by I+ and that of the inversion-related reflection by I−, the model intensity becomes

Ic+ = (1 − x)Is+ + xIs−,   (2)

where the subscript `s' indicates a quantity computed from the atomic model with the twin fraction x set to zero (i.e. a non-twinned single crystal), `c' a quantity computed from a twinned model (i.e. not necessarily zero) and `o' an observed quantity. Like the Rogers method, this proposal refined the parameter using all the reflection data as part of the normal structure optimization, but had the advantage that the parameter had a real physical significance throughout the whole range from zero to one. This innovation increased awareness of the existence of twinning by inversion and fears that samples may not have been enantiopure. For convenience we will use the term Flack parameter to imply x determined by an unspecified method, and Flack (x) to imply its determination as part of the main structure refinement.
The 1993 release of SHELXL included a post-refinement method for determining the absolute structure which came to be known as `hole-in-one'. Equation (2) can be rearranged [equation (3)] to give the Flack parameter directly from observed structure factors and structure factors computed from the atomic model and its inverse (Sheldrick, 2014).1
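In the spirit of this rearrangement, the estimate can be written as a one-parameter weighted least-squares problem. The following sketch (Python/NumPy; the array names and the use of simple 1/σ2 weights are illustrative assumptions, not the SHELXL implementation) shows the idea:

```python
import numpy as np

def hole_in_one_x(i_obs, sig_i_obs, i_model, i_inverse):
    """Estimate x from Io ~ (1 - x)*I(model) + x*I(inverted model).

    Rearranged as (Io - I_model) = x * (I_inverse - I_model), this is a
    weighted least-squares fit of a single slope x.  A sketch only.
    """
    i_obs = np.asarray(i_obs, float)
    w = 1.0 / np.asarray(sig_i_obs, float) ** 2           # statistical weights
    d = np.asarray(i_inverse, float) - np.asarray(i_model, float)
    r = i_obs - np.asarray(i_model, float)
    x = np.sum(w * d * r) / np.sum(w * d * d)              # LSQ slope
    su = np.sqrt(1.0 / np.sum(w * d * d))                  # s.u. from the normal equation
    return x, su
```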
In spite of fears that post-refinement determinations of absolute structure might be compromised because of the neglect of potential covariance with the other refineable parameters, Hooft et al. (2008) devised a method based on a Bayesian analysis of Friedel differences (see Müller, 1988, for an interpretation of Friedel pairs). These authors recast equation (3) to treat Friedel pairs of reflections simultaneously,

Do = (1 − 2x)Ds,   (4)

where Ds = (Is+ − Is−) and similarly for Do and Dc.2 For convenience later, we have called x computed from equation (4) the Bijvoet (d) parameter. The advantage of (4) over (3) is that by taking differences, the significance of the real part of the structure factors is reduced, making the computation less dependent on details of the model structure.
Their process, which used weights derived from the variances of the observed intensities modified by information obtained from the normal probability plot (n.p.p.) of the Friedel residuals (Abrahams & Keve, 1971), yielded values of the parameter, Hooft (y), not unlike those from the Flack (x) method. The underlying assumption, as in Dyadkin et al. (2016), was that the error distribution was Gaussian. Hooft et al. (2010) show that this distribution is adequate for good data, but that for poor data dramatically improved results are obtained by the use of the Student t-distribution. The method further enabled one to estimate the probabilities of the correctness of assignments for enantiopure or 50:50 racemically twinned samples.
Parsons et al. (2013) examined the use of equation (4) and its quotient form, equation (5), which we will call Parsons (q), both for post-refinement determination of the Flack parameter and as restraints during the direct refinement of Flack (x). The quotient form is

Qo = (1 − 2x)Qs,   (5)

where Qs = (Is+ − Is−)/2As etc. and As = (Is+ + Is−)/2.
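Both estimators amount to a weighted straight-line fit through the origin, with slope (1 − 2x). A minimal sketch (NumPy; the propagation σ(Qo) ≈ σ(Do)/2Ao and the variable names are simplifying assumptions, not the code of any published program):

```python
import numpy as np

def slope_through_origin(y, t, w):
    """Weighted least-squares slope c and its s.u. for the model y = c*t."""
    c = np.sum(w * t * y) / np.sum(w * t * t)
    return c, np.sqrt(1.0 / np.sum(w * t * t))

def bijvoet_d(do, sig_do, ds):
    """Bijvoet (d): fit Do = (1 - 2x)*Ds, then convert the slope to x."""
    do, ds = np.asarray(do, float), np.asarray(ds, float)
    c, su = slope_through_origin(do, ds, 1.0 / np.asarray(sig_do, float) ** 2)
    return (1.0 - c) / 2.0, su / 2.0

def parsons_q(do, sig_do, ao, ds, as_):
    """Parsons (q): fit Qo = (1 - 2x)*Qs with Q = D/(2A)."""
    qo = np.asarray(do, float) / (2.0 * np.asarray(ao, float))
    qs = np.asarray(ds, float) / (2.0 * np.asarray(as_, float))
    sig_qo = np.asarray(sig_do, float) / (2.0 * np.asarray(ao, float))  # neglects sigma(Ao)
    c, su = slope_through_origin(qo, qs, 1.0 / sig_qo ** 2)
    return (1.0 - c) / 2.0, su / 2.0
```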
These authors showed that the Hooft (y), Parsons (q) and Bijvoet (d) estimates of the parameter were usually similar to direct refinement of the Flack (x), but with significantly lower standard uncertainties. They also observed that using equations (4) or (5) as restraints on the least-squares refinement gave values of the Flack (x) in close agreement with post-refinement estimates of absolute structure.
No explanation was given for the observation that direct refinement of the Flack (x) consistently gave larger standard uncertainties than any of the post-refinement methods, other than to note that the direct refinement was based on all the reflections used in the refinement while the post-refinement analyses used selected subsets of the full data set. In order to investigate the source of the differences between direct and post-refinement estimations of absolute structure, several different approaches were implemented in the CRYSTALS program. Data sets taken from the literature including Escudero-Adán et al. (2014), hereafter EBB, Parsons et al. (2013), hereafter PFW, and Flack (2013), hereafter HDF, were re-examined using these tools.

2. Background
During the period before the common availability of area-detector diffractometers, it was generally regarded as too expensive to collect a highly redundant set of all Friedel pairs of reflections. Some of the need for redundancy could be reduced by making measurements in geometries which minimized the differences in the experimental errors between Friedel pairs (Le Page et al., 1990). Even so, full sets of Friedel pairs were generally not measured, and after a structure was resolved and refined from an asymmetric unit of data in the corresponding Laue group, selected Friedel pairs were remeasured and used for absolute-configuration determination (see, for example, Ealick et al., 1975). The introduction of the Flack parameter has led to attempts to use X-ray crystallography both to determine absolute configuration and to determine enantiopurity, i.e. whether the sample used for the measurements was twinned by inversion.
2.1. Probability methods
Prior to the introduction of Flack's parameter, structure analysts had simply tried to ascertain the probability of the absolute configuration of the crystal being the same as that of the model, so that an enantiomorph was chosen to give a best match between selected observed and calculated structure factors. The Hamilton (1965) R-factor ratio method used all the observed reflections, but was difficult to apply convincingly due to uncertainty about a valid definition of the number of degrees of freedom involved in swapping from one model to its inverted image.

Other methods used reflections carefully selected from the existing data sets, or carefully remeasured. Engel (1972) favoured the `Bijvoet Method', in which a selected set of reflections, the sensitive reflections, were remeasured more carefully. Engel used Bh = (Qh − 1)/½(Qh + 1) as a measure of the Bijvoet sensitivity, with Qh = |Fh|/|F−h|. A comparison of the signs of the measured and calculated Bs from a selected set of reflections yields the absolute configuration. If the intensities of Friedel pairs of reflections, preferably with a B of the opposite sign, could be found and measured in a neighbouring part of reciprocal space for which absorption and other errors will be similar, then a `double quotient' can be estimated which has the effect (as in the Parsons quotient) of reducing the influence of geometry-related experimental errors. Le Page et al. (1990), recognizing that Rogers' η should be ±1 for an enantiopure sample, computed the probability that the absolute configuration of the model and that of the sample were the same on the basis of a remeasured set of selected reflections. Probability methods have been revisited again by Hooft et al. (2008), and using a t-distribution (Hooft et al., 2010). They constructed tests on the basis that the material is enantiopure: the P(2) test giving the probability that the model and the material have the same absolute structure or are possibly twinned, and the P(3) test distinguishing between the correct assignment, a 50:50 inversion twin or an inverted assignment. The appeal of probability methods is that, under strict assumptions, they appear to give a clear-cut result.
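The Bijvoet-sensitivity bookkeeping of the Engel approach is easily expressed in code. A sketch (the data layout – separate arrays for the two members of each selected pair – is an assumption made for illustration):

```python
import numpy as np

def bijvoet_sensitivity(f_plus, f_minus):
    """Engel's B_h = (Q_h - 1) / (0.5*(Q_h + 1)), with Q_h = |F_h|/|F_-h|."""
    q = np.abs(np.asarray(f_plus, float)) / np.abs(np.asarray(f_minus, float))
    return (q - 1.0) / (0.5 * (q + 1.0))

def sign_agreement(fo_plus, fo_minus, fc_plus, fc_minus):
    """Fraction of selected pairs whose observed and calculated B agree in
    sign: near 1 supports the hand of the model, near 0 its inverse."""
    b_obs = bijvoet_sensitivity(fo_plus, fo_minus)
    b_calc = bijvoet_sensitivity(fc_plus, fc_minus)
    return float(np.mean(np.sign(b_obs) == np.sign(b_calc)))
```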
2.2. Direct refinement of the Flack (x) parameter
Direct refinement of the Flack (x) parameter simultaneously with the other structural parameters is now commonplace. Flack et al. (2006) recommend that a full set of Friedel pairs be measured on an area-detector instrument, preferably with high redundancy in order to optimize empirical intensity scaling, and that refinement be started with the Flack parameter set to 0.5 to minimize the risk of convergence to a false minimum. This is particularly important in the case of space groups with floating origins, in which the structure may distort to accommodate an incorrectly assigned absolute structure – the polar dispersion error (Cruickshank & McDonald, 1967). It has been widely observed that although the Flack (x) is rarely in conflict with a known absolute configuration (Thompson & Watkin, 2011), it can refine to a value away from the ideal value for an enantiopure material. There is also evidence that the standard uncertainty computed from the full variance–covariance matrix is often over-estimated. Parsons, Wagner et al. (2012) have proposed using leverage analysis to identify reflections which are particularly influential in the determination of the Flack parameter and which could be re-measured and used as supplementary observations (restraints) in the refinement. An alternative approach (Thompson & Watkin, 2011) re-uses Friedel pairs selected from the existing data set to construct supplementary observations.

2.3. Post-refinement determination of the Flack parameter
The relation between the absolute structure of a crystalline material and the measured Friedel pairs is given in equation (2). The worryingly high standard uncertainties of the Flack (x) parameter determined for many materials of known enantiopurity and absolute configuration have led to a search for methods to determine the absolute structure more robustly than simply including it in the main least-squares refinement, especially in cases where the resonant signal is likely to be weak. Not infrequently, these methods involve the use of selected sub-sets of the original or new data.

Given a reasonably well refined model, the Flack parameter can be estimated by solving equations (2), (3), (4) or (5) for x by conventional least squares. The disagreement sometimes seen between the hole-in-one method [equation (3)] and the Bijvoet difference method [equation (4)] might, in part, be due to the additional information introduced by pairing up reflections for the differences, with the possibility that certain kinds of errors in the model or in the data might be correlated and tend to cancel out.

The denominators in the Parsons (q) expression (5) were based (Parsons et al., 2013) on an extension of the earlier recognition that on a serial four-circle diffractometer setting angles could be chosen so that the absorption effect for the reflections h and −h would be similar (Le Page et al., 1990). On an area-detector diffractometer these conditions are rarely satisfied, and in any case the final intensity of each reflection is usually the average of several measurements made with quite different setting angles.
Equation (5) can be rewritten as

Do/2Ao = (1 − 2x)Ds/2As.   (6)

Here Ao and As seem to be scale factors down-weighting the contribution of strong reflections to the Flack parameter. However, when each reflection pair is weighted by the inverse of the variance of the observed quantities, this down-weighting disappears.
If equation (6) is rewritten as

Do = (1 − 2x)Ds(Ao/As),   (7)

we can see that if Ao can be regarded as As ± error, the ratio Ao:As could take large values when the calculated average intensity As is very small – such reflections must be excluded from any quotient calculation. In fact, if Ao is not very similar to As then there is a reasonable probability that there is something wrong with the model, the data or both. We can also see that the Ao/As terms act as per-reflection scale factors and should be counted as independent variables.
Just as plots of Fo versus Fc can be of diagnostic value in a normal structure refinement, so plots of Do versus Ds and 2Ao versus 2As can give insight into absolute structure determination (Parsons, Pattison & Flack, 2012). The 2Ao − 2As plot should have a unit gradient and might identify outliers in which the quotient in equation (7) lies far from unity. For materials correctly assigned, the Do versus Ds scatterplot should also have a unit gradient, and for materials with a large Friedif (Flack & Bernardinelli, 2008) this is usually clearly evident. For materials with a Friedif less than 100 the linear relationship is always less clear (Cooper et al., 2016).
Fig. 1(a) shows a scatterplot of Do versus Ds and 2Ao versus 2As for structure SL-6418 (Friedif = 498; Smith & Lamb, 2012). The best line through Do and Ds (green points) has a gradient of 1.063 (5) and an intercept of −0.002 (21), the correlation coefficient is 0.960, and the coefficient of determination is 0.929. The value of (1 − 2x) is reliably determined. Fig. 1(b) is a similar plot for structure EBB-5001 (Friedif = 6.5). The best Do − Ds line appears to be independent of the scatter of the observations, yet a least-squares fit gives a gradient of 0.92 (17) [corresponding to a Bijvoet (d) of 0.04 (9)], a correlation coefficient of 0.116 and a coefficient of determination of 0.014.
Except when the data points all lie on an exactly vertical line, it is always possible to fit a regression line. However, if the spread of the observations along the dependent axis is much greater than that along the independent axis, the line will have little or no physical significance. The fitted gradient is independent of the number of observations, but its standard uncertainty is proportional to 1/√n, so that the standard uncertainty can be reduced by including more `vanilla' data – the Emperor of China Syndrome (Parrish, 1960). Rogers (1981) had been worried that in the Hamilton method, some of the resonant differences would be below the observable threshold, so that `Many of the reflections are mere passengers in the calculations of the ΔF' yet contribute to the statistics and falsely improve the apparent reliability of the analysis. Ealick et al. (1975) chose to work with reflections for which the `sensitivity factor', SF = |Do − Ds|/Ao, was the largest [note that, ignoring the effect of scale factors and the Lp correction etc., for Poisson statistics Io is proportional to σ2(Io), so that SF is a measure of signal-to-noise]. Rabinovich & Hope (1980) introduced the idea of `observability', D = (DsAo)/(Asσ(Do)), similar to Ealick's sensitivity factor. The ratio Ao/As in this expression means that it is strongly related to the Parsons quotient.

The importance of a given datum on its own fitted value is measured by its leverage (Prince, 2004). Since the mean values of Do and Ds (and the corresponding quotients) are close to zero, fitting a straight line can be regarded as a one-parameter model, so that the leverage of each data point is given by

Pii = di2 / Σj dj2,   (8)
where di are the values of either Ds or Qs. The data with the greatest leverage are those with large absolute values of Ds or Qs. Remember that although Ds does not depend directly on As, large Ds is only possible for large As. If each observation in the post-refinement determination of the Flack parameter is weighted by the inverse of its variance, Pii is proportional to the square of the signal:noise. To a first approximation, σ2(I) ∝ I (Evans, 2006; but see also §5.2.6), so that the resonant differences originating from strong reflections will have large standard uncertainties, and be down-weighted. The most useful reflections are likely to be those of intermediate intensity and with a large resonant difference. This is in agreement with the leverage analysis for the Flack (x) parameter in the least-squares refinement of all structural parameters (Parsons, Wagner et al., 2012).
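A sketch of that leverage calculation (assuming inverse-variance weights; with unit weights it reduces to di2/Σdj2):

```python
import numpy as np

def leverages(d, sig_d=None):
    """Leverage P_ii of each Friedel pair in a one-parameter fit through
    the origin: P_ii = w_i*d_i**2 / sum_j(w_j*d_j**2), d_i = Ds_i or Qs_i."""
    d = np.asarray(d, float)
    w = np.ones_like(d) if sig_d is None else 1.0 / np.asarray(sig_d, float) ** 2
    return w * d ** 2 / np.sum(w * d ** 2)
```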
Equation (4) can be made to yield values of the Flack parameter on a per-reflection basis,

xi = [1 − (Doi/Dsi)]/2.   (9)
Plotting x from equation (9) against Ds (Fig. 2) should give a horizontal line at the value of the Flack parameter. If |Ds| is very small compared to |Do|, the value of x can take extreme values. For a structure with a low Friedif, individual x can be ill-determined, and even for good data many extreme values can be seen. The massive vertical distribution near the centre of the plot (which includes both positive and negative estimates of x) corresponds to small values of the denominator in equations (4) and (9), and it is only the data lying distant from |Ds| = 0 which contain useful information.
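A per-pair evaluation with a guard against tiny denominators might look like this (a sketch; the cut-off is an illustrative choice, not the CRYSTALS default):

```python
import numpy as np

def per_pair_x(do, ds, sig_do, min_ratio=0.01):
    """x_i = (1 - Do_i/Ds_i)/2 for each Friedel pair [equation (9)].
    Pairs with |Ds_i| < min_ratio*sigma(Do_i) are masked: a near-zero
    calculated difference makes x_i meaningless."""
    do, ds = np.asarray(do, float), np.asarray(ds, float)
    keep = np.abs(ds) > min_ratio * np.asarray(sig_do, float)
    x = np.full(do.shape, np.nan)
    x[keep] = 0.5 * (1.0 - do[keep] / ds[keep])
    return x, keep
```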
3. Data quality
3.1. Friedel completeness
The introduction of the Flack analysis meant that an indication of the absolute structure could be obtained without re-measuring any data. Bernardinelli & Flack (1987) showed that, strictly speaking, it does not even require the measurement of any Friedel pairs, but simply that any Friedel pairs that are measured are not merged together. Flack et al. (2006) discuss at length the need for extensive Friedel coverage in the case where a structure is pseudo-centrosymmetric. Trials in the 1980s using the Enraf–Nonius CAD-4 serial diffractometer showed that in some cases (for example, an organometallic complex spontaneously resolving in P21) a good indication of the absolute structure could be obtained without measuring all Friedel pairs. These results were never published, but an example can be simulated using area-detector data. The model for HDF-gg3255 (Abud et al., 2011), in P212121, Friedif 600 (Flack & Bernardinelli, 2008), was refined using a full data set (only 114 unpaired acentric reflections), the all-positive quadrant of data plus the h = −1 layer, and just the all-positive quadrant. The same model, with Flack (x) set to 0.5, all atomic coordinates slightly perturbed, |F|2 observations and the weighting scheme optimized for the full data set, was used to start all three refinements (see Table 1).
[Table 1]
This simulation is only indicative since Friedel pairs were measured in the original experiment and used to obtain frame scale factors and absorption corrections, but it casts some light on the robustness of the Flack analysis (see also https://www.ccp14.ac.uk/ccp/web-mirrors/hugorietveld/stxnews/stx/discuss/dis-fals.htm).
3.2. Outliers and data quality
Merli & Sciascia (2011) and many others, e.g. PFW-2013 and Le Page et al. (1990), recognized that outliers in the data would degrade the analysis. Hooft et al. (2008) provided a filter to try to ensure that only reliable data were used in the determination of the Hooft (y) parameter. Parsons et al. (2013) give an example in which exclusion of a single reflection changed the Flack parameter from 0.18 (8) to 0.08 (8). The detection of outliers is a vexing problem. Reflections with large residuals can be due to errors in the observed or modelled values, or both quantities. When a model is fully parameterized (all atoms have been found, disorder resolved, twinning dealt with), then there is a good chance that an individual Is is more likely to be `correct' than the corresponding Io, because each computed structure factor is, in effect, a complexly weighted average of all the observed structure factors. Under these conditions, a large residual is usually attributed to error in the observation, and these reflections – the outliers – may be filtered out. In structural refinement an outlier can be identified by comparing the residual with the experimentally determined standard uncertainty. If the fully developed model will not refine so that this residual is reduced, it is usually assumed that the discrepancy is a fault in the observation. Robust/resistant weighting schemes are designed to reduce the influence of these suspect reflections in a smoothly continuous way rather than simply rejecting selected data (Prince, 1994). In the case of determining the Hooft (y) parameter, the observed Friedel difference can be compared with the calculated difference and reflections with improbably large residuals excluded from the computation. In the original implementation in PLATON (Hooft et al., 2008, and now integrated into CRYSTALS), the filtering was via the user-adjustable variable Outlier Crit. In later versions the filter is automated such that reflections for which the observed Friedel difference is more than twice the largest calculated difference, Dsmax, are eliminated (see also PFW-2013). A very small value for the Friedel difference can still occur even when the two contributing reflections are strong and are accepted by the `three sigma' criterion. In the PFW-2013 implementation, reflections for which either or both Io+ and Io− were less than three standard uncertainties were also eliminated, as were reflections with significant deviations from the (Do − Ds) n.p.p. best-line. Whereas in the conventional least-squares refinement of crystal structures some practitioners insist on using all reflections, it is now established practice to filter out some reflections for the post-refinement analysis of absolute structure. Filters are provided in CRYSTALS to exclude reflections which may either introduce instability into the calculations (very small denominators) or are suspected of being in serious error.
3.3. Iterative reweighting
The Le Page algorithm (Le Page et al., 1990) in effect assigns a value of ±1 to the Rogers' η value of the selected reflections on a one-by-one basis, as opposed to direct refinement of η from all the reflections in the main least-squares calculations. It tacitly assumes that the material is enantiopure. Equation (9) enables us to also evaluate the Flack parameter on a reflection-by-reflection basis – the data used in creating Fig. 2. We could in principle evaluate the Flack parameter from each pair of carefully selected and remeasured reflections – or even from just one very carefully selected and very carefully measured pair. Because x is a continuously meaningful parameter in the range 0–1, it is not necessary to assign it an integer value. Now, rather than remeasuring selected reflection pairs to estimate x, we can use all the pairs measured in the original data collection to give individual estimates of x. With the exception of unknown correlations introduced during the measurement process, these estimates of x will be experimentally independent (or at least as independent as the measurements of the original data were). As was seen in Fig. 2, the values of x can take values wildly outside of the 0–1 range – these are physically impossible and correspond to outliers originating either from large experimental errors, or are artefacts of a small denominator in equation (9). Following the arguments of Blessing & Langs (1987) for the merging of equivalent reflections, we can merge these individual x-values, and since each x-value has an associated experimental variance, we can compute both the external variance and the internal variance (Appendix A). The probability of an individual xi can then be estimated from these variances.
Friedel pairs yielding a value of x differing from the average value of x by several variances have a low probability. This probability can be used as a modifier for the weight (wi) used to compute a new weighted average value of x, and the process repeated (Blessing & Langs, 1987). Since the distribution of the computed Flack parameters may be dispersed, skewed or long-tailed, the process is started using the median value of xi as an initial estimate of x. Thus, rather slack values can be set for the various initial filter thresholds used in selecting reflections, and a smoothly varying function can be used to down-weight suspect data. Friedel pairs with a probability pi greater than a user-adjustable threshold (typically 0.001) are counted to provide an indication of the number of `useful' reflections in the data. The process is terminated when the number of `useful' reflections is the same for two successive iterations, or when ten iterations are completed. In this latter case, the process is regarded as being unconverged and unsuccessful. This situation seems to arise when the resonant signal is small compared with the errors in the intensity measurements. The standard uncertainty on the final value of x′ is estimated from the weighted external variance.
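A schematic version of this iteration is sketched below. The Gaussian tail probability, the form of the modified weights and the convergence test are assumptions made for illustration; the published Blessing & Langs (1987) formulae and the CRYSTALS implementation differ in detail.

```python
import numpy as np
from math import erfc, sqrt

def reweighted_flack(x_i, var_i, p_min=0.001, max_iter=10):
    """Iteratively reweighted mean of per-pair Flack estimates x_i with
    experimental variances var_i.  Returns the mean, its s.u. (from the
    weighted external variance) and the final count of 'useful' pairs."""
    x_i, var_i = np.asarray(x_i, float), np.asarray(var_i, float)
    x = np.median(x_i)                         # robust starting value
    n_useful_prev = -1
    for _ in range(max_iter):
        z = np.abs(x_i - x) / np.sqrt(var_i)
        p = np.array([erfc(zz / sqrt(2.0)) for zz in z])   # two-sided tail probability
        w = p / var_i                          # probability-modified weights
        x = np.sum(w * x_i) / np.sum(w)
        n_useful = int(np.sum(p > p_min))
        if n_useful == n_useful_prev:          # stable count of 'useful' pairs
            break
        n_useful_prev = n_useful
    var_ext = np.sum(w * (x_i - x) ** 2) / ((len(x_i) - 1) * np.sum(w))
    return x, np.sqrt(var_ext), n_useful
```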
Iterative reweighting (Prince, 1994) using the Tukey biweight algorithm (Tukey, 1976) gave essentially the same results as the Blessing method.
In order to provide the user with a visual representation of the data, a histogram of the individual values of x can be plotted (Fig. 3). The normalized sum of the weights of the reflections in each bin is also plotted. The number of pairs containing `useful' information and the number of pairs yielding an x value falling in the range −0.5 < x < 1.5 are also output. For convenience, we will denote the value of x′ determined by this histogram method as the Histogram (h) parameter, and σ(h) its s.u. The σ(h) can be further scaled by the gradient of the Friedel residual n.p.p. Note that the weights could also be used for the computation of a Bijvoet (d) or Parsons (q) parameter.
The expected and actual information content of the data can be visualized (Fig. 4) by plotting histograms of Ds/σ(Do) and Do/σ(Do) (Bernardinelli & Flack, 1987). A distribution of Ds/σ(Do) which is very narrow and centred on zero indicates that there is little information in the data. When this is accompanied by a broad Do/σ(Do) distribution we have an indication that the data are very noisy.
3.4. Ratios of averages and averages of ratios
Letting (1 − 2x) in equation (4) be represented by c, then for each Friedel pair we have

Doi = c Dsi.   (15)

An average value of ci can be computed as a least-squares estimate [equation (16); see Appendix A] or as a simple mean [equation (17)],
leading to 〈x〉 and x′. Equation (16) is a ratio of averages (there is a 1/n term in both the numerator and the denominator), while equation (17) is the average of the individual ratios, ci. In general, if all the summations are made over the same number of data points and there are no wildly eccentric outliers, the values of 〈x〉 and x′ are similar. An indication of the presence of outliers can be obtained by computing these coefficients using all the measured Friedel pairs. If they are substantially different, the distribution of the errors in Do may be skewed, there may be outliers, the errors may swamp any signal or there may be contributors to (15) for which the Dsi are tiny. Weighted versions of equations (16) and (17) can be recomputed during the Blessing & Langs (1987) process, where outliers are progressively down-weighted. If convergence is achieved before the maximum number of cycles is reached, 〈x〉 and x′ are usually very similar. Both values are output by CRYSTALS.
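The two averages are cheap to compute side by side. A sketch (the exact published forms of equations (16) and (17) may differ; unit weights are assumed here):

```python
import numpy as np

def x_two_ways(do, ds, min_ds=1e-12):
    """'Ratio of averages' (least-squares c) and 'average of ratios' for
    c = (1 - 2x), returned as the corresponding <x> and x'."""
    do, ds = np.asarray(do, float), np.asarray(ds, float)
    keep = np.abs(ds) > min_ds                 # drop tiny denominators
    c_lsq = np.sum(do[keep] * ds[keep]) / np.sum(ds[keep] ** 2)
    c_mean = np.mean(do[keep] / ds[keep])
    return (1.0 - c_lsq) / 2.0, (1.0 - c_mean) / 2.0
```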
4. Experimental considerations
4.1. Restraints
The result of using selected reflections as restraints, either in the Parsons et al. (2013) method or the Thompson & Watkin (2011) method, seems at first to be reassuring, but a similar result can also be achieved by computing the value and standard uncertainty of the Flack parameter from the data which would otherwise have been used as restraints, and simply using this as one idealized restraint. Using HDF-gg3255 (Friedif = 600) as an example again gave the following results for unrestrained and restrained refinements using various target values of the Flack parameter and a requested standard uncertainty of 0.005 (Table 2). The SHELX-type weights were optimized for each refinement.
[Table 2]
The only impact of imposing the restraint that the Flack (x) should be zero is to reduce the refined value of the parameter from 0.0018 to 0.0004. There is no appreciable change in the R-factors or the other estimates of x. Setting a target of 0.5 with a standard uncertainty of 0.005 leads to a refined Flack (x) close to the target, and causes a small increase in the R-factors. The Hooft and Histogram estimates of x decrease a little, and since these are computed from the refined structural model, indicate that the model has relaxed in some way. Raising the target to 1.0 causes a very significant change in the R-factors, but the refined value of the Flack (x) almost satisfies the restraint. The automatically adjusted SHELX-type weighting parameter a increased as the Flack restraint was increased, progressively down-weighting strong reflections in order to try to achieve a flat analysis of residuals, emphasizing the dangers of modifying the weights until the model is finalized. The n.p.p. for the main refinement became progressively more S-shaped as progressively invalid Flack values were imposed. The resonant-difference n.p.p.s, using pure statistical weights, remained fairly straight throughout. Preserving the atomic coordinates and weights from this last refinement and resetting the Flack parameter to zero gave the R-factors in the row labelled with an asterisk. Refinement with a target Flack of unity can be achieved simply by causing a small distortion of the model which has minimal impact on the conventional R-factor but increases the reweighted R-factor. For HDF-gg3255 the median bond-length distortion with the inverse restraint was 0.01 Å and the maximum 0.03 Å, i.e. similar to Müller's (1988) findings for structures and their inverses. The median change in the arithmetic Uequiv was 0.001 Å2 and the maximum 0.004 Å2. These results can be interpreted (for a reasonable data set) as showing that small changes can be forced on the value of the Flack parameter without having an appreciable effect on the atomic model, and hence on estimates of the absolute structure based on that model. They also show that while an incorrect assignment of absolute structure will affect fine details of the molecular geometry, small errors in the structural model only have a small effect on the post-refinement determination of the absolute structure.

4.2. Correlation between Flack and other parameters
In order to demonstrate that the absolute structure parameters are only weakly correlated with the atomic structure, the x, y and z coordinates of the non-H atoms in the fully refined unrestrained structure of HDF-gg3255 (called `original' in the table) were randomly perturbed from their refined positions with a mean displacement of 0.0 and a standard deviation of 0.1 Å. Just the overall scale and Flack (x) parameters were then refined for five different perturbations of the structure, each of which had a conventional R-factor of ∼ 14% (Table 3). Although the directly refined Flack (x) parameter was less well defined, the table shows why it may be possible to assign a reasonably reliable estimate of the absolute structure quite early on in a structure analysis by the post-refinement methods (Sheldrick, 2015).
[Table 3]
4.3. Influence of weighting schemes
In the discussion so far it has been assumed that the weights for the post-refinement analyses have been derived from the observed variances of the original diffraction data by standard error propagation. However, it has long been established practice to use more complex weighting schemes in the main structure refinement. These weights are computed from empirical formulae with coefficients selected to give a flat distribution of weighted residuals. This process is intended to allow for unidentified errors in the data and shortcomings in the model (Cruickshank, 1961). Weights computed in this way have an influence on the Flack (x) parameter and its s.u. as determined during the main refinement (Bernardinelli & Flack, 1987). In order to see the influence of these weights on the post-refinement determination of absolute structure, they can be converted to observational pseudo-variances by

σ2pseudo(I) = 1/weightlsq,
where weightlsq is the weight assigned to the reflection during refinement.
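For concreteness, the SHELX-type weighting expression and the corresponding pseudo-s.u. conversion can be sketched as follows (the weight formula is the standard SHELX definition; treating 1/√weight as a pseudo-s.u. is the simple inversion described above):

```python
import numpy as np

def shelx_weights(fo2, sig_fo2, fc2, a, b):
    """SHELX-type weights w = 1/[sigma^2(Fo^2) + (a*P)^2 + b*P],
    with P = (max(Fo^2, 0) + 2*Fc^2)/3."""
    p = (np.maximum(np.asarray(fo2, float), 0.0) + 2.0 * np.asarray(fc2, float)) / 3.0
    return 1.0 / (np.asarray(sig_fo2, float) ** 2 + (a * p) ** 2 + b * p)

def pseudo_sigma(w_lsq):
    """Observational pseudo-s.u. implied by a least-squares weight."""
    return 1.0 / np.sqrt(np.asarray(w_lsq, float))
```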
PFW-fyo12e (Parsons et al., 2013) contains only carbon, nitrogen and hydrogen, and Friedif is 11.8. This data set, specifically collected with a view to exploring the differences between direct and post-refinement evaluations of has Flack (x) = 0.17 (38) and Bijvoet (d) = 0.01 (08) but contains no evident source for the discrepancy between the two methods. The data set has an average multiplicity of observation of ∼ 36.
The structural model, including the Flack (x), was refined under three regimes: (a) using pure statistical weights 1/σ2(I), (b) in which the weights were rescaled by a common factor to give a goodness-of-fit (GoF) of 1.0, and (c) using optimized SHELX-type weights, which involves adding terms to σ2(I). For each regime, post-refinement analyses were computed with pure statistical weights, and with ones derived from the least-squares weights. The results are summarized in Table 4.
[Table 4]
For this data set we see that the choice of weighting scheme has little influence on the s.u. of the Flack (x) parameter determined in the main least squares, although it does have an influence on the value of the parameter itself [column headed Flack (x)]. In regime (a), post-refinement analysis gives the same results whether weighted by simple statistical weights or by weights derived from the LSQ weights (since these were also simple statistical). However, all of the post-refinement methods gave standard uncertainties reduced to ∼ 20% of those from the direct refinement. The n.p.p. for the weighted Friedel differences was substantially linear with a unit gradient, although the gradient for the n.p.p. of the main-refinement residuals was 4.5 (Fig. 5a). The histogram of the weighted structure-factor residual w(Fo2 − Fc2)2 as a function of intensity (Fig. 6a) shows an unacceptable upward trend.
The gradient of the n.p.p. can be made unity simply by rescaling all of the reflection variances. This rescaling has no effect on the refined parameter values, and because of the way parameter standard uncertainties are conventionally computed (Cruickshank & Robertson, 1953), it has no effect on their standard uncertainties. Because the structural parameters are unchanged by this scaling, the calculated Friedel differences are unchanged, so that row (b)STAT in Table 4 is identical to rows (a), with the exceptions of the GoF and the n.p.p. for the main refinement, which are both now close to unity (Fig. 5b).

Row (b)LSQ in Table 4 contains some interesting features. Although the n.p.p. for the main refinement now has a unit gradient, the n.p.p. for the Friedel differences has a gradient of 0.2, the inverse of that for the original main-refinement n.p.p. As a consequence, the standard uncertainties in almost all the post-refinement analyses rose to values not dissimilar to those obtained by direct refinement of Flack (x). The exceptions to this increase in the s.u. of the parameters are those computed by the Hooft method and the histogram method rescaled by the gradient of the n.p.p. Simply rescaling the weights to produce a GoF of unity is, however, not a useful procedure because it fails to produce a uniform distribution of weighted residuals as a function of intensity (Fig. 6b). For well behaved weights, the average weighted residual should be approximately unity for all intervals across the intensity range. It is now generally accepted that a good strategy for obtaining a uniform distribution of weighted residuals is not to scale the observed variances, but to augment them with terms depending upon the magnitude of the observed and/or calculated structure factors (see, for example, the SHELX76 instruction manual).

The structure was re-refined using SHELX-type weights, giving rows (c) in Table 4. With these weights, the gradient of the n.p.p. was close to unity and the analysis of variance roughly flat. The s.u. of the directly refined Flack (x) parameter hardly changed with the new weights, but the parameter itself increased by one-half an s.u. The shifts in the structural parameters had no visible effect on the computed Friedel differences, so that row (c)STAT is the same as the other purely statistically weighted post-refinement analyses. The standard uncertainties for post-refinement analyses in (c)LSQ are similar to those in (b)LSQ, but the parameter itself has increased. Fig. 7 shows the relationships between the weights and the standard uncertainties of the observations under the three regimes.
We can see that for the strong reflections (to the left of the plots) the SHELX-type weighting scheme down-weights the observations in much the same way as a simple scale factor, but that the down-weighting becomes progressively less for the weak data.
Similar results are seen for most of the materials reported in Table S1 of the supporting information. The weighting scheme for the main refinement usually must be more complex than simple statistical weighting in order to achieve a flat distribution of residuals. The effect of these weights is to increase the s.u. of the directly refined Flack (x) (Bernardinelli & Flack, 1987). The same effect is seen if the augmented weights are used in the post-refinement determination of the Flack parameter. The n.p.p. computed for Friedel pairs using intensity-statistics weights tends to have a unit gradient, suggesting that the error estimates for the differences are valid. The n.p.p.s for weights based on the LSQ sometimes have a distinctly non-unit gradient, with pronounced curved tails. This seems to suggest that the modifiers added to σ(I) in the weighting scheme to achieve a constant unit χ2 may be reflecting deficiencies in the model as much as in the data. Note that the hole-in-one method usually gives similar results to other post-refinement methods when simple statistical weights are used.
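The n.p.p. gradients quoted throughout this paper can be reproduced with a few lines of code. A sketch (Blom-type plotting positions and an ordinary least-squares line are assumptions; other choices of plotting position give very similar slopes):

```python
import numpy as np
from statistics import NormalDist

def npp_slope(residuals, sigmas):
    """Slope and intercept of a normal probability plot of weighted
    residuals, e.g. (Do - Dc)/sigma(Do).  A unit slope indicates that the
    quoted s.u.s describe the observed scatter well."""
    z = np.sort(np.asarray(residuals, float) / np.asarray(sigmas, float))
    n = len(z)
    q = np.array([NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)])
    slope, intercept = np.polyfit(q, z, 1)
    return slope, intercept
```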
5. Results and examples
5.1. Overview
The above computations were performed on a selection of structures from data collected locally or taken from the literature. The examples were chosen to cover a range of values for Friedif, the Flack parameter or its standard uncertainty, or because they had attracted comments in the body of the paper. When the deposited data included the SHELX-format .res and .hklf data, these were used in preference to the CIF and .fcf format data. This was especially useful when I or σ(I) for weak data in the .fcf file had only one significant figure. Each structure was re-refined in CRYSTALS and the parameters for a SHELX-type weighting scheme optimized. The atomic parameters were first refined in a single matrix together with the overall scale and Flack (x) parameters. Additional refinements were then performed from this atomic model on just the overall scale and the Flack (x) parameter, first using the optimized weights, and then with weights derived directly from the counting statistics.

Table S1 in the supporting information contains the results of the analysis of 28 data sets. In every case the results from the full-matrix refinement (rows A&B) were almost identical to those from the small-block refinement (rows C&D), indicating that for a fully refined structure there is little correlation between the structural parameters and the absolute structure parameters (Fig. 8).
Rows E & F give the results of refining Flack (x) and scale using simple statistical weights. Refining the whole structure with weights derived from unmodified intensity variances would have led to shifts in the atomic parameters.
Table 5 contains sample data for two materials from Table S1. In each case rows E are almost identical to rows F, showing that direct refinement of the Flack (x) parameter using simple intensity-statistics weights gives the same results as post-refinement analysis.
[Table 5]
The most significant differences are between rows C and D – the SHELX-weighted main refinement and post-refinement analyses with either counting-statistics or refinement-derived weights. They show that direct refinement of the Flack (x) is more or less unchanged when using either simple statistical or modified (SHELX-type) weights, providing the atomic model is not allowed to adjust. However, the post-refinement determination of absolute structure is sensitive to the weights used. Post-refinement analysis using weights derived from those used in the main least squares yields results very similar to those found by direct refinement of Flack (x). However, using simple statistical weights almost always leads to significantly lower standard uncertainties (Fig. 9). The influence on the parameter itself is more variable (Fig. 10). We find that the discrepancy often seen between direct and post-refinement values of the Flack parameter is linked to the weights used in the refinement.
In Table S1 we see that the slope of the n.p.p. for the statistically weighted Friedel differences is generally close to unity, but the slope with weights from the main refinement is almost always less than unity (Fig. 11).
Because both the Hooft (y) and scaled Histogram (h) methods take into account the slope of the n.p.p., they give very similar values for both the parameter and its s.u. independently of the weighting scheme used. It would seem, for general work at least, that the Hooft (y) parameter as implemented in PLATON (Hooft et al., 2008) is a widely available suitably robust estimator of absolute structure.
Fig. 12 shows standard uncertainties computed from the main least squares, and by the hole-in-one, Hooft and Bijvoet difference methods, versus the histogram method. The main refinement was done with SHELX-type weights, the post-refinement analyses with simple statistical weights.
5.2. Examples
The various estimators of absolute structure are summarized in Table S2. The refinements for the structures which gave a s.u. for the Flack (x) substantially larger than the s.u. determined by other methods (the clear outliers in Fig. 12) were examined in detail to try to understand the source of the discrepancies.

5.2.1. Motherwell (Watkin, unpublished)
The data for 2-methyl-4-nitroaniline (previously published by Howard et al., 1992; Ferguson et al., 2001), Friedif = 5.94, in Cc, were remeasured, without the intention of determining the absolute structure, using Mo radiation from a conventional source. The data collection strategy yielded data containing little or no resonant signal. From equation (4) one would expect the parameter to be 0.5 with an s.u. simply reflecting the noise in the data. This result is more or less achieved by all of the post-refinement methods except the Hooft (y). For other materials, with a larger value for Friedif, one would expect larger values for Ds and thus a smaller s.u. on the parameter, enabling the absolute structure to be determined.
5.2.2. HDF-tp3005W (Zhang et al., 2012)
The absolute configuration of this material was known from the starting materials. Both the refined Flack (x) parameter and its standard uncertainty are larger than the values obtained by post-refinement analyses. The n.p.p. (Fig. 13a) for the residuals from the post-refinement determination is acceptably linear, but the plot for the residuals in the main refinement shows serious deviations from linearity (Fig. 13b). The weights for the plot illustrated were computed from a SHELXL-type scheme. Weights derived from three-, four- or five-parameter Chebychev polynomials (Carruthers & Watkin, 1979) fared no better. The relatively large value for the second parameter in the SHELX-type weighting scheme is often taken as a sign of twinning, but none could be identified using ROTAX (Cooper et al., 2002). The original authors reported positional disorder in one of the residues, but this was well modelled. They also reported that the crystals were very small and the data collection was difficult, requiring the use of synchrotron radiation. Since the refinement of a conventional model produces unweighted residuals whose distribution cannot be matched by conventional weighting schemes, it seems likely that the model is deficient or the error distribution in this experiment is unusual.

5.2.3. HDF-sf3166 (Seela et al., 2012)
This material, of known absolute configuration, with a Friedif of 5.4 and two molecules in the asymmetric unit, gave a refined Flack (x) parameter of −0.24 (49) and a Histogram (h) parameter of 0.00 (14). The n.p.p.s for the residuals from both the post-refinement determination and the main refinement were good straight lines with a unit gradient. However, of the 2630 Friedel pairs having Ds > 0.01σ(Do), only 7.6% give a Flack parameter in the range −0.5 to 1.5 during the histogram post-refinement analysis (Fig. 14). There are no Friedel differences having a theoretical magnitude of more than 0.5σ(Do).

It is not uncommon to find pseudo-symmetry between the independent molecules in structures with Z′ > 1. The CRYSTALS MATCH procedure identified a pseudo-glide plane parallel to c, Fig. 15. If the terminal 2-(hydroxymethyl)tetrahydrofuran-3-ol is excluded from the matching procedure, the remaining 45 atoms conform to the pseudo-glide (x, 0.97 − y, z − 0.52) with an r.m.s. deviation in equivalent torsion angles of 16°.
5.2.4. PFW cholestane (Parsons et al., 2013)
Cholestane contains only carbon and hydrogen, and has two molecules in the asymmetric unit. Friedif is 9.0. The effect of including Friedel pairs with progressively smaller resonant differences is shown in Table 6.
[Table 6]
Filtering out those reflections with |Ds| < 0.1σ(Do) gives Bijvoet (d), Hooft (y) and Histogram (h) parameters close to the ideal value of zero for this material. Reducing the threshold to 0.01σ(Do) increases the number of reflections used from 737 to 3274, but the Bijvoet (d) and Hooft (y) parameters go slightly negative. When the weaker resonant differences are included, the histogram filtering reduces the percentage of reflections used from 94 to 73%. 24% (163) of these reflections have an individual x in the range −0.5 to 1.5 when |Ds| < 0.1σ(Do), falling to only 10% (236) when the threshold is reduced to 0.01. The n.p.p. for the residuals from the main structural refinement lies on a good straight line with a unit gradient, but unlike the case of tp3005 (and most well determined structures), the n.p.p. for the resonant differences has a gradient of 1.30 and a distinct downwards tail (Fig. 16).
The deviations could be due to errors in Do, Ds or the weights. As demonstrated earlier, Ds is not strongly influenced by fine details of the structure, so one is left suspecting that the problem is with the intensities or their standard uncertainties. Since the structure refined to a conventional R of 0.029, it seems that the s.u.s of the observations may have been underestimated. The weights used in the main refinement are based on the reported intensity standard uncertainties modified to ensure a uniform analysis of variance. Fig. 17 is a plot of √(weight) versus 1/σ(F2).
5.2.5. EBB-threonine (Escudero-Adán et al., 2014)
The paper EBB 2014 is a rich mine of useful data sets collected under a variety of conditions with Mo Kα radiation. Five of the D-threonine data sets were re-refined in CRYSTALS, yielding essentially the same results as obtained by the original authors. Those authors drew attention to data set EBB-5206, which had an anomalously large value for the directly refined Flack (x) parameter (EBB Fig. 3). The Flack parameters determined by post-refinement methods were also anomalously high, yet all methods gave standard uncertainties not unlike those from the other threonine data sets. EBB attribute these anomalous results to the reduced number of reflections (6401, redundancy 3.2) compared with other analyses (e.g. 8324 for EBB-5204, which had a redundancy of 11.6). We were not convinced by this argument because EBB-5205 had a similar number of reflections and redundancy (7710, 3.7), but yielded a quite normal refined Flack parameter. Fortunately these authors had deposited complete reflection data sets (.hklf files) so we were able to examine them in detail.
Data completeness: Data collection EBB-5206 was terminated prematurely to try to reduce the redundancy. As Fig. 18(a) shows, this strategy also had the unfortunate effect of reducing the completeness of the data in the region between the Bragg angles of 40 and 45°, even when Friedel pairs were merged. Most serious is the systematic pattern to some of the missing reflections, including, for example, the row lines (h00) where h is even, (h10) where h is odd and some patches of (hk0) where h is 9–12 etc. There was a small dip in completeness for data set 5204 at about 45° (Fig. 18b).
Signal to noise: Fig. 19 shows some measures of the quality of the data as a function of resolution. This suggests that for EBB-5204 the data collection strategy was not homogeneous, and that the frame exposure time was increased for the high-angle data. There is a hint of a further increase in exposure time at about 45°, a feature more clearly seen in data sets EBB-5213 and EBB-5215. The number of reflections with I > 10σ(I) remains high right across the data set.
Analysis of refinement residuals: Both data sets seem to refine well, with SHELX-type weighting schemes achieving a goodness-of-fit sufficiently close to unity, Fig. 20.
Some insight into the deviations comes from examination of the weighted and unweighted residuals (Fo2 − Fc2)2 as a function of intensity and of resolution (Fig. 21). The very large number of medium-intensity reflections (blue curve) dominates the determination of the parameters for the weighting scheme, which leads to over-weighting of the strong reflections (green bars in the top illustrations). The distribution of residuals as a function of resolution is not good for either data set, with the low-angle (strong) data being over-weighted, and the high-angle data under-weighted. The role of the weighting scheme is to make the binned average value of the weighted residual approximately unity. For conventional data sets it is usually assumed that the principal contributors to (Fo2 − Fc2)2 are errors in Fo, but for these extended data sets it is possible that the usual independent spherical-atom model introduces errors into Fc. A further complication may be that a single weighting scheme may not be appropriate when the data collections are not made under constant conditions.
Analysis of Friedel residuals: In spite of the unusual distribution of the residuals, the n.p.p.s for the Friedel residuals were very linear with gradients close to unity (Fig. 22). Based on these, one would expect to obtain similar outcomes from post-refinement determination of the absolute structure of both EBB-5206 and EBB-5204.
In Table 7 we can see that for data set EBB-5204 the value for the Flack parameter determined directly or by post-refinement is not strongly affected by the weighting scheme. Refinement of EBB-5206 with simple statistical weights has a goodness-of-fit of 4.3, but gives a directly refined Flack (x) of 0.02 (34). Except for the hole-in-one method, other post-refinement procedures lead to larger values of the Flack parameter but with smaller standard uncertainties. Refinement of EBB-5206 with SHELX-type weights gives a Flack (x) of 0.29 (24), in agreement with the original authors. Post-refinement determinations, also using the SHELX-type weights, give much the same value for the parameter, but with the Hooft and scaled-histogram methods (which involve the gradient of the n.p.p.) giving reduced standard uncertainties. Except for hole-in-one, post-refinement methods using statistical weights yield slightly smaller Flack parameters and much reduced standard uncertainties. These results suggest that the anomalous reported value for the directly refined Flack (x) parameter is a consequence of the weights used for the main refinement. The algorithm used to determine the coefficients in the weighting expression was unchanged for all the data sets we examined. We are led to suspect that the failure of this algorithm for data set EBB-5206 is due to the large number of missing reflections in the narrow band between 40 and 45°.
[Table 7]
Leverage analysis: EBB tried to show that the reliability of an absolute structure analysis increases as the resolution of the data included in the analysis increases. We used their data in a slightly different way, with different conclusions. The leverage of an individual reflection in a post-refinement determination is proportional to the square of the signal-to-noise. A histogram of the mean signal-to-noise as a function of resolution should indicate where in the data set the most influential information lies. Fig. 23 is such a plot for EBB-5215 (redundancy = 8.2) and EBB-5204 (redundancy = 11.6).
The atomic and Flack (x) parameters of EBB-5215 were refined using all the data and a SHELX-type weighting scheme [R = 0.023, wR2 = 0.066, Flack (x) = 0.01 (17)]. The post-refinement Flack parameter was then determined using firstly all reflections, and then only the reflections in three non-overlapping resolution ranges, chosen to contain approximately the same number of points. Table 8 shows that for EBB-5215 all methods produce a steady increase in the standard uncertainty as the resolution band increases, even though frame exposure times seem to have been increased.
[Table 8]
5.2.6. PFW-fyo12e (PFW-2013)
This material has previously been referred to in §4.3. Since the data were collected carefully, it was worth further exploring the cause of the difference between the standard uncertainties of the Flack parameter determined by direct and post-refinement methods. The steep gradient of the n.p.p. for the main refinement (4.64) with purely statistical weights suggests that the standard uncertainties of the observations are severely underestimated. A plot of the internal versus the external sample standard uncertainties for the merged data is a rather dispersed straight line (gradient 1.4), showing that the manufacturer's estimates of individual uncertainties reflect reasonably well the dispersion between equivalent measurements (Fig. 24).
It was expected that a plot of the σ(I) of the mean against I [equation (14)] for the merged data would approximate to a square-root curve (Evans, 2006). Instead, it was found to be a rather good straight line (Fig. 25) with a gradient of 0.045.

For Poisson statistics the signal:noise [I/σ(I)] can be increased by accumulating more photons. Diederichs (2010) had observed that I/σ(I) tended to a limiting value for synchrotron data. Plots of I/σmean(I) for PFW-fyo12e show a similar tendency, except that there appear to be two (or possibly three) limiting values (Fig. 26).
This raises the possibility that the data are not homogeneous, in the sense that they are derived from more than one experimental regime. A histogram of the frequency distribution of redundancy is at least bimodal (Fig. 27), with a long low-order tail.
The intensities used during the least squares are usually the (weighted) means of a set of equivalent reflections. The variance of this mean is related to the sample variance [equation (11) or (12)] by 1/(redundancy). Reflections measured 25 times have a variance almost twice as large as reflections measured 45 times. It is possible that this variability of redundancy leads to the various asymptotic limits seen in Fig. 26. Whatever the individual variabilities of the standard uncertainties of the means, on average σ(F2) ≃ 0.05F2 (from Fig. 25). The a and b terms in the SHELX-type weighting scheme are 0.044 and 0.291, so that it is these which dominate the weighting during refinement. Weighting the co-refinement of data measured under different regimes (for example, with widely differing redundancies) may warrant further investigation, a situation alluded to in Bernardinelli & Flack (1987).
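The arithmetic behind the remark about redundancy is simply the 1/n scaling of the variance of a mean; a minimal check (assuming equal sample variances):

```python
def var_of_mean(sample_var, redundancy):
    """Variance of the mean of `redundancy` equivalent measurements."""
    return sample_var / redundancy

print(var_of_mean(1.0, 25) / var_of_mean(1.0, 45))   # 1.8, i.e. 'almost twice'
```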
5.2.7. PFW R-mandelic acid (PFW-2013)
PFW-2013 report that this material (Friedif = 35) crystallized as plates which on cooling to 150 K showed evidence of strain broadening, so that the actual data collection was performed at 220 K. 32 194 reflections were measured, yielding 2860 unique reflections, an average redundancy of 11.3. Rint was 0.04, indicating a fair level of self-consistency amongst the data. The final R factor, 0.0549, was higher than one would expect for this type of material, but might be explained by the strain broadening. There are two molecules in the asymmetric unit, differing by a small rotation about the single bond to the phenyl group, and no evidence for disorder. The σ of the Flack (x), 0.37, greatly exceeds the σ of the Histogram (h), 0.05, in spite of the significant value of Friedif. The n.p.p. for the main-refinement residuals (Fig. 28) is far from ideal, suggesting a problem with the data, the weights or the model itself.
Alternative weighting schemes to a SHELX-type formula did not significantly improve the n.p.p. Although the program DIFABS (Walker & Stuart, 1983), once used as a method for estimating empirical absorption corrections, has long been replaced by the use of multi-scan methods, it still provides a useful diagnostic tool. The program fits a smoothly varying function of azimuth and declination of the incident and emergent beams to the residual between |Fo| and |Fc|, the so-called absorption surface. For merged area detector data there are no `incident' or `emergent' beams, but these can be replaced by the scattering vector to generate a visualization of the residual. If the multi-scan procedure has adequately modelled absorption and illuminated volume effects, variations of this surface from unity will indicate that there are problems with the model, or undetected errors in the data. Fig. 29(a) is the plot for R-mandelic acid. It shows variations between 0.9 and 1.1 with some very sharp gradients, indicating that there is a problem with the analysis.
The Fo-Fc plot (Fig. 30) is a fair straight line with unit gradient, and without any very outstanding outliers. However, although the distribution is bounded by a reasonably well defined lower edge, the upper edge is distinctly ragged. This condition, together with the DIFABS plot, is often symptomatic of twinning.
ROTAX analysis (Cooper et al., 2002) indicated the possibility of twinning by the law [1,0,0; 0,−1,0; −0.8,0,−1]. Including this twin law in the refinement reduced the R factor to 0.0496 and greatly improved the Fo versus Fc plot and the DIFABS surface (Fig. 29b), but did little to improve the n.p.p. If the components of the non-inversion twin are labelled A and B, and the corresponding inversion-related components a and b, then refinement was continued with the constraint A + B + a + b = 1.0 and the restraint 0.000 (1) = b − (aB/A), on the assumption that the inversion ratios (a/A and b/B) are the same. The refined scale factors are A = 0.8 (2), B = 0.13 (4), a = 0.0 (2), b = 0.01 (4).3 Inclusion of inversion twinning had no effect on the R factor. At present the post-refinement analysis in CRYSTALS cannot handle non-inversion twinning.
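The constraint and restraint amount to describing the four scale factors by two quantities: the fraction of the second (non-inversion-related) domain and an inverted fraction assumed common to both domains. A minimal Python sketch of that parameterization (our own illustration, not CRYSTALS code; the parameter names beta and x are ours):

```python
def twin_scales(beta, x):
    """Four twin-component scale factors for a non-inversion twin (domains 1 and 2)
    in which both domains are assumed to contain the same inverted fraction x.

    beta : fraction of domain 2 (0 <= beta <= 1)
    x    : inverted fraction common to both domains (0 <= x <= 1)
    Returns (A, B, a, b) with A + B + a + b = 1 and b - a*B/A = 0 exactly."""
    A = (1.0 - beta) * (1.0 - x)   # domain 1, reference hand
    B = beta * (1.0 - x)           # domain 2, reference hand
    a = (1.0 - beta) * x           # domain 1, inverted
    b = beta * x                   # domain 2, inverted
    return A, B, a, b

print(twin_scales(beta=0.14, x=0.06))   # approximately (0.81, 0.13, 0.05, 0.01)
```

With beta ≈ 0.14 and x ≈ 0.06 this reproduces the refined values quoted above, and the restraint b − aB/A = 0 is satisfied exactly by construction.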
5.2.8. FSTW-YIFZAP (Gowda et al., 2007)
This material, falling at the bottom of Table 1 in FSTW (Flack et al., 2011), caught the interest of those authors because of the small variation of RD (= Σ|Dobs − Dmodel|/Σ|Dobs|, summed over the Friedel pairs) as a function of imposed values for the Flack (x) parameter. They came to the conclusion that the reported uncertainty in the Flack parameter was grossly underestimated. Using the CIF and .fcf files recovered from the IUCr archive, we were unable to reproduce with CRYSTALS some of the results recorded elsewhere in the literature. PLATON was used to convert the file to a SHELXL ins (data) file, and the refinements were repeated with SHELXL-2013/2, using the TWIN/BASF commands or using the hole-in-one and Quotient methods for estimating the Flack (x) parameter (Table 9).
The SHELX and CRYSTALS analyses are reasonably compatible, but in poor agreement with the published values. In the absence of evidence to the contrary, we attribute this conflict to the fact that the original authors were able to use the full precision of the reflection data stored in an .hklf file, whereas for the re-calculations we had to use the limited precision of the .fcf file. Whatever the source of the discrepancy, it remains clear that while direct refinement of Flack (x) in CRYSTALS and the TWIN/BASF refinement in SHELXL-2013/2 lead to very similar results, these are quite different from the values obtained by the post-refinement methods. The usual diagnostic tools were used to try to locate the source of the discrepancy. The gradient of the n.p.p. (Fig. 31a) was 0.91, but with a substantial displacement from the origin of the graph, which usually implies a feature in the data which cannot be matched by the model. The n.p.p. for the post-refinement analysis of the Friedel differences (Fig. 31b) had a least-squares gradient of 4.7. Examination of the plot showed that very many of the reflections in the central region lay on a line of unit gradient, but there were substantial numbers of outliers at the extremes of the plot. From other analyses, we have seen that the calculated Friedel differences are only weakly correlated with the atomic parameters, so we must assume that the non-linearity of the n.p.p. is due either to errors in the observed Friedel differences or in their standard uncertainties.
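For reference, the n.p.p. gradients quoted in this paper are least-squares slopes of the ordered weighted deviates against the expected normal order statistics (Abrahams & Keve, 1971). A minimal Python sketch of such a calculation follows; the assumption that the deviates are already weighted residuals, for example (Fo² − Fc²)√w or weighted Friedel-difference residuals, is ours.

```python
import numpy as np
from scipy.stats import norm

def npp_slope(deviates):
    """Least-squares gradient and intercept of a normal probability plot.

    deviates : weighted residuals, e.g. (Fo^2 - Fc^2)*sqrt(w), or weighted
               Friedel-difference residuals for a post-refinement analysis.
    A gradient near 1 with zero intercept means the deviates behave like a
    standard normal sample; a large gradient suggests underestimated
    uncertainties or an unmodelled systematic feature."""
    d = np.sort(np.asarray(deviates, dtype=float))
    n = d.size
    expected = norm.ppf((np.arange(1, n + 1) - 0.5) / n)   # expected order statistics
    slope, intercept = np.polyfit(expected, d, 1)
    return slope, intercept

# Example: deviates drawn from N(0, 4.7^2) give a slope near 4.7, as seen for the
# Friedel-difference n.p.p. in Fig. 31(b).
rng = np.random.default_rng(0)
print(npp_slope(rng.normal(scale=4.7, size=1000)))
```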
Examination of the DIFABS map, Fig. 32, showed deep hollows and high peaks with a maximum ratio of 1:1.77. This could be indicative of uncorrected absorption. The authors give the crystal size as 0.52 × 0.46 × 0.09 mm – a thin plate – and used an analytical correction by the method of Clark & Reid (1995) giving minimum and maximum corrections of 0.86 and 1.16, a ratio of 1:1.35.
The Fo versus Fc plot was only weakly indicative of twinning, and ROTAX suggested an unconvincing twin law [1,0,0.734; 0,−1,0; 0,0,1]. Refinement with this gave a major component of 0.88 (3). The text of the article made no mention of the absolute structure, but the deposited CIF contained an entry for the Flack parameter and its standard uncertainty. Because of this, PLATON had added the necessary TWIN/BASF instructions to the SHELXL instruction file. Attempts to refine the non-merohedral and inversion twinning together in CRYSTALS failed, the normal matrix becoming singular in spite of the application of appropriate restraints and constraints.
At this point we retrieved the supporting information. From this it was clear that the original authors had detected the same twin law as ROTAX, and had refined this model to a minor twin element of 0.15 using an HKLF5 reflection file. Strangely, in spite of the Flack entry [−0.1 (3)] in the deposited CIF, the supporting information states `Owing to the poor quality of the data, the absolute structure couldn't be reliably defined and any references to the Flack parameter have been omitted'.
Our analysis of the data was repeated using the twinned model, but this showed no great improvement in either the n.p.p. or the DIFABS surface. The data had been collected with an area detector, a standard source and a graphite monochromator, so that unless the authors had used a very fine collimator one might expect the crystal to have been more-or-less fully bathed in the direct beam. 2613 reflections were measured, merging down to 1003 independent observations (Rint = 0.086), of which 621 had I > 2σ(I). Seeing that over 30% of the data could be classed as very weak, the observed and calculated Wilson plots were examined (Fig. 33). The up-turn in the plot of the observed data at about ρ = 0.3 is often characteristic of data being measured to a resolution at which there is little or no signal amongst the noise.
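A crude way to look for such an up-turn, without the form-factor scaling of a full Wilson plot, is simply to bin the mean observed and calculated intensities on resolution and compare the two curves at high angle. The Python sketch below does this under that simplifying assumption (it is not the procedure used to produce Fig. 33); where the mean observed intensity flattens or rises while the calculated mean continues to fall, the data are essentially noise.

```python
import numpy as np

def mean_I_by_resolution(stol2, I_obs, I_calc, nbins=20):
    """Bin mean observed and calculated intensities on (sin(theta)/lambda)^2.

    stol2 : (sin(theta)/lambda)^2 for each reflection
    Returns bin centres and the two binned means."""
    stol2 = np.asarray(stol2, dtype=float)
    I_obs = np.asarray(I_obs, dtype=float)
    I_calc = np.asarray(I_calc, dtype=float)
    edges = np.linspace(stol2.min(), stol2.max(), nbins + 1)
    idx = np.clip(np.digitize(stol2, edges) - 1, 0, nbins - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mean_obs = np.array([I_obs[idx == i].mean() if np.any(idx == i) else np.nan
                         for i in range(nbins)])
    mean_calc = np.array([I_calc[idx == i].mean() if np.any(idx == i) else np.nan
                          for i in range(nbins)])
    return centres, mean_obs, mean_calc
```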
6. Conclusions
X-ray crystallography is unique in that it provides both an estimate of the enantiopurity of a sample and a standard uncertainty for that estimate without special user action. Chiroptical spectroscopies look at a total signed signal and thus require a reference spectrum to compare against in order to judge the proportions of each enantiomer. When this is available then typically the resolution is ca 1%. NMR with shift reagents can give separate signals for each enantiomer, but there are substantial complications about the binding of the shift reagent, equilibria etc. Chiral HPLC has the advantage of actually separating the enantiomers as individual signals that can be directly ratioed, and so can be very deterministic. In many cases one should be able to detect a 1 ma.u. (a.u. = absorbance unit) signal from an enantiomeric impurity alongside a signal of 1 a.u. for the main peak, i.e. 0.1%. These techniques are degraded in the presence of impurities. Except for the case of twinning, crystallography largely avoids the impurity problem, but suffers in that one crystal is taken as representative of the bulk sample. However, for materials known to be enantiopure or to have a large enantiomeric excess, it can be a robust way of assigning the absolute configuration of the major (or only) component.
The results of Thompson & Watkin (2011) showed that even in apparently unsuitable cases there was usually some resonant signal amongst the random noise and systematic errors. Flack used the 2A/D plots to try to visualize the signal. The plots in this paper of Do and Ds versus σ(Do) provide a clear indication of the best possible signal in the data, and of the actual signal in the observed data. We know that the value of the Flack parameter must lie in the interval 0–1, and in favourable cases histograms of the Flack x peak in this interval. The broader the spread about this interval, the less reliable the estimate of σ(x). The ratio Ds/σ(Do) is a measure of the information content of a reflection. Measuring data to high resolution increases Ds/Is, but only improves the leverage if care is taken to minimize σ(Do).
Direct refinement of the Flack (x) parameter usually results in a value with a larger standard uncertainty than that obtained by post-refinement methods using weights derived from the observed variances, making these latter methods more attractive for publication. However, the value of the Flack (x) parameter and its standard uncertainty obtained by free refinement in the main least squares should be compared with the values obtained by a post-refinement method. Substantial differences indicate that there may be a problem with the data or with the proposed model, although other techniques will have to be used to identify the problem.
In the absence of widespread availability of software able to refine structures using both averages and differences of structure amplitudes as observations, the low correlation between the structural parameter values and the Flack x suggests that a post-refinement estimate of the Flack parameter, made once the model is fully parameterized, can be used to guide the final refinement. The Bijvoet difference method is a good diagnostic for problems with the data or model since it contains the minimum number of assumptions; the Hooft and Parsons methods both allow for some problems with the data or model and so may be most suitable for routine work. If the Parsons quotient and the Bijvoet difference methods give substantially different results, this may be indicative of absorption or other problems with the main refinement. If there is doubt about the enantiopurity of the material, the Flack parameter must be included as part of the model. It can either be refined freely, treated as a constant (constraint) with the value taken from the post-refinement analysis, or a single equation of restraint on the Flack (x) parameter can be introduced using the post-refinement estimate of its value and standard uncertainty as target values.
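In least-squares terms the single equation of restraint is just one more weighted observation: a row appended to the design matrix that pulls the Flack (x) parameter towards the post-refinement estimate with weight 1/σ². The Python sketch below illustrates the idea for a linear problem (it is not the CRYSTALS implementation; in a linearized refinement the restraint `observation' would be the difference between the target and the current value of x).

```python
import numpy as np

def solve_with_flack_restraint(A, w, y, flack_col, x_target, sigma_target):
    """Weighted linear least squares with one restraint on the Flack parameter.

    A, w, y      : design matrix, observation weights and observations
    flack_col    : column of A holding the derivative with respect to Flack (x)
    x_target     : post-refinement estimate of x (e.g. a quotient-method value)
    sigma_target : its standard uncertainty, giving the restraint weight 1/sigma^2"""
    restraint_row = np.zeros(A.shape[1])
    restraint_row[flack_col] = 1.0                  # d(restraint)/d(x) = 1
    A_aug = np.vstack([A, restraint_row])
    y_aug = np.append(y, x_target)                  # for a shift calculation this would be
                                                    # x_target - x_current instead
    w_aug = np.append(w, 1.0 / sigma_target ** 2)
    sw = np.sqrt(w_aug)[:, None]
    params, *_ = np.linalg.lstsq(A_aug * sw, y_aug * sw.ravel(), rcond=None)
    return params
```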
APPENDIX A
Ratios of averages and averages of ratios
For an individual Friedel pair we can write equation (4) as
Defining
we obtain
A1. Ratio of averages
where the terms in square brackets are column vectors of the model and observed Friedel differences, the least-squares estimate of c from a set of Friedel pairs is
from which a weighted value for 〈c〉 can be obtained as
with
Equation (24) can be rewritten as a ratio of averages
Letting
gives
from which
and
A2. Average of ratios
Alternatively, we can evaluate individual ci from equation (20) and xi from (21), and form the (weighted) average of these ratios
Following Blessing & Langs (1987) we can form the internal and external estimates of the variance of the sample, and hence the variance of the average
For a list of paired observations, the ratio of averages and the average of ratios will be the same if there is a linear relationship between the observations and the error distributions are similar. A difference between these two statistics indicates a problem that should be investigated.
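To make the comparison concrete, the Python sketch below (our own illustration of the algebra described above, with illustrative weights; the exact weighted forms given in the appendix equations are not reproduced here) computes c both ways from lists of model and observed Friedel differences and converts each estimate to a Flack parameter via x = (1 − c)/2, on the usual assumption that Do ≃ (1 − 2x)Ds. The internal and external variance estimates follow the spirit of Blessing & Langs (1987).

```python
import numpy as np

def compare_c_estimates(D_model, D_obs, sig_obs):
    """'Ratio of averages' versus 'average of ratios' for c in D_obs ~ c * D_model,
    with c = 1 - 2x so that x = (1 - c)/2.  Weights are illustrative only."""
    D_model, D_obs, sig_obs = (np.asarray(a, dtype=float) for a in (D_model, D_obs, sig_obs))
    w = 1.0 / sig_obs ** 2

    # Ratio of averages: weighted least-squares slope through the origin
    c_ratio_of_avg = np.sum(w * D_model * D_obs) / np.sum(w * D_model ** 2)

    # Average of ratios: individual c_i = D_obs/D_model averaged directly
    c_i = D_obs / D_model
    c_avg_of_ratio = c_i.mean()

    # Internal and external estimates of the variance of a weighted mean,
    # in the spirit of Blessing & Langs (1987), with weights w_i = 1/sigma^2(c_i)
    w_i = (D_model / sig_obs) ** 2
    c_w = np.sum(w_i * c_i) / np.sum(w_i)
    n = c_i.size
    var_internal = 1.0 / np.sum(w_i)
    var_external = np.sum(w_i * (c_i - c_w) ** 2) / ((n - 1) * np.sum(w_i))

    x_ratio = (1.0 - c_ratio_of_avg) / 2.0
    x_avg = (1.0 - c_avg_of_ratio) / 2.0
    sigma_x = 0.5 * np.sqrt(max(var_internal, var_external))
    return x_ratio, x_avg, sigma_x
```

A substantial difference between the two x estimates, relative to the spread of the individual ratios, is the warning sign described above.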
Supporting information
Excel spreadsheet for 28 structure determinations. DOI: https://doi.org/10.1107/S2052520616012890/ps5053sup1.xlsx
Reduced Excel spreadsheet abstracted from S1. DOI: https://doi.org/10.1107/S2052520616012890/ps5053sup2.xlsx
Details on supporting information. DOI: https://doi.org/10.1107/S2052520616012890/ps5053sup3.pdf
Footnotes
1. In SHELXL 2014/7 the `hole-in-one' fit has been renamed `classical fit'. This should not be confused with the much older direct refinement, as found, for example, in X-RAY76 (Flack, 1983) or CRYLSQ (Olthof-Hazekamp, 1990).
2. This equation first appears in this form in Thompson & Watkin (2011).
3. The twin scale factors sum to unity if quoted to three decimal places [0.811 (218), 0.133 (036), 0.048 (218), 0.008 (035)].
Acknowledgements
The authors wish to thank Howard Flack for critical advice, Simon Parsons for software which enabled us to verify some calculations, Ton Spek for code from PLATON, George Tranter (Chiralabs Ltd) for advice on non-X-ray techniques, and many colleagues for suggesting additions to the manuscript. Figs. 8–12 and 24–27 were created using Microsoft Excel 2010; all others are lightly retouched screen-dumps from CRYSTALS. SHELXL calculations were made with SHELXL-2013/2, and CRYSTALS calculations with the executable dated 17/12/2015 08:36.
References
Abrahams, S. C. & Keve, E. T. (1971). Acta Cryst. A27, 157–165.
Abud, J. E., Sartoris, R. P., Calvo, R. & Baggio, R. (2011). Acta Cryst. C67, m130–m133.
Bernardinelli, G. & Flack, H. D. (1987). Acta Cryst. A43, 75–78.
Blessing, R. H. & Langs, D. A. (1987). J. Appl. Cryst. 20, 427–428.
Carruthers, J. R. & Watkin, D. J. (1979). Acta Cryst. A35, 698–699.
Clark, R. C. & Reid, J. S. (1995). Acta Cryst. A51, 887–897.
Cooper, R. I., Gould, R. O., Parsons, S. & Watkin, D. J. (2002). J. Appl. Cryst. 35, 168–174.
Cooper, R. I., Watkin, D. J. & Flack, H. D. (2016). Acta Cryst. C72, 261–267.
Cruickshank, D. W. J. (1961). Computing Methods and the Phase Problem, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, paper 6. Oxford: Pergamon Press.
Cruickshank, D. W. J. & McDonald, W. S. (1967). Acta Cryst. 23, 9–11.
Cruickshank, D. W. J. & Robertson, A. P. (1953). Acta Cryst. 6, 698–705.
Diederichs, K. (2010). Acta Cryst. D66, 733–740.
Dyadkin, V., Wright, J., Pattison, P. & Chernyshov, D. (2016). J. Appl. Cryst. 49, 918–922.
Ealick, S. E., Van der Helm, D. & Weinheimer, A. J. (1975). Acta Cryst. B31, 1618–1626.
Engel, D. W. (1972). Acta Cryst. B28, 1496–1509.
Escudero-Adán, E. C., Benet-Buchholz, J. & Ballester, P. (2014). Acta Cryst. B70, 660–668.
Evans, P. (2006). Acta Cryst. D62, 72–82.
Fábry, J., Fridrichová, M., Dušek, M., Fejfarová, K. & Krupková, R. (2012). Acta Cryst. C68, o76–o83.
Ferguson, G., Glidewell, C., Low, J. N., Skakle, J. M. S. & Wardell, J. L. (2001). Acta Cryst. C57, 315–316.
Flack, H. D. (1983). Acta Cryst. A39, 876–881.
Flack, H. D. (2013). Acta Cryst. C69, 803–807.
Flack, H. D. & Bernardinelli, G. (2008). Acta Cryst. A64, 484–493.
Flack, H. D., Bernardinelli, G., Clemente, D. A., Linden, A. & Spek, A. L. (2006). Acta Cryst. B62, 695–701.
Flack, H. D., Sadki, M., Thompson, A. L. & Watkin, D. J. (2011). Acta Cryst. A67, 21–34.
Gowda, B. T., Nayak, R., Kožíšek, J., Tokarčík, M. & Fuess, H. (2007). Acta Cryst. E63, o2967.
Hamilton, W. C. (1965). Acta Cryst. 18, 502–510.
Hooft, R. W. W., Straver, L. H. & Spek, A. L. (2008). J. Appl. Cryst. 41, 96–103.
Hooft, R. W. W., Straver, L. H. & Spek, A. L. (2010). J. Appl. Cryst. 43, 665–668.
Howard, S. T., Hursthouse, M. B., Lehmann, C. W., Mallinson, P. R. & Frampton, C. S. (1992). J. Chem. Phys. 97, 5616–5630.
Le Page, Y., Gabe, E. J. & Gainsford, G. J. (1990). J. Appl. Cryst. 23, 406–411.
Merli, M. & Sciascia, L. (2011). Acta Cryst. A67, 456–468.
Müller, G. (1988). Acta Cryst. B44, 315–318.
Olthof-Hazekamp, R. (1990). Xtal 3.0 Reference Manual, edited by R. S. Hall & J. M. Stewart. University of Western Australia, Perth.
Parrish, W. (1960). Acta Cryst. 13, 838–850.
Parsons, S., Flack, H. D. & Wagner, T. (2013). Acta Cryst. B69, 249–259.
Parsons, S., Pattison, P. & Flack, H. D. (2012). Acta Cryst. A68, 736–749.
Parsons, S., Wagner, T., Presly, O., Wood, P. A. & Cooper, R. I. (2012). J. Appl. Cryst. 45, 417–429.
Prince, E. (1994). Mathematical Techniques in Crystallography and Material Science, pp. 80–82. Berlin: Springer-Verlag.
Prince, E. (2004). Mathematical Techniques in Crystallography and Material Science, 3rd ed., p. 121. Berlin: Springer-Verlag.
Rabinovich, D. & Hope, H. (1980). Acta Cryst. A36, 670–678.
Rogers, D. (1981). Acta Cryst. A37, 734–741.
Seela, F., Xiong, H., Budow, S., Eickmeier, H. & Reuter, H. (2012). Acta Cryst. C68, o174–o178.
Sheldrick, G. M. (2014). Personal communication.
Sheldrick, G. M. (2015). Acta Cryst. A71, 3–8.
Smith, M. & Lamb, A. (2012). Personal communication. Oxford Archive No. 6418, C21 H23 Br N2 O4.
Thompson, A. L. & Watkin, D. J. (2011). J. Appl. Cryst. 44, 1017–1022.
Tukey, P. J. W. (1976). Proceedings of the First ERDA Statistical Symposium, edited by W. L. Nicholson & J. L. Harris. Ohio: Battelle, Pacific Northwest Laboratories.
Walker, N. & Stuart, D. (1983). Acta Cryst. A39, 158–166.
Weiss, M. S. (2001). J. Appl. Cryst. 34, 130–135.
Zhang, W., Oliver, A. G. & Serianni, A. S. (2012). Acta Cryst. C68, o7–o11.