feature articles
Why direct and post-refinement determinations of absolute structure may give different results
^{a}Chemical Crystallography Laboratory, Department of Chemistry, University of Oxford, 12 Mansfield Road, Oxford, Oxfordshire OX1 3TA, England
^{*}Correspondence email: david.watkin@chem.ox.ac.uk
Direct determination of the Flack (x) parameter and its standard uncertainty are usually not much influenced by changes in the weighting schemes, but if they are then there are probably problems with the data or model. Post-refinement analyses give Flack parameters strongly influenced by the choice of weights. Weights derived from those used in the main least squares lead to post-refinement estimates of the Flack parameter and its standard uncertainty very similar to those obtained by direct refinement. Weights derived from the variances of the observed structure amplitudes are more appropriate and often yield post-refinement Flack parameters similar to those from direct refinement, but always with lower standard uncertainties. Substantial disagreement between direct and post-refinement determinations is strongly indicative of problems with the data, which may be difficult to identify. Examples drawn from 28 structure determinations are provided showing a range of different underlying problems. It seems likely that post-refinement methods taking into account the slope of the normal probability plot are currently the most robust estimators of absolute structure and should be reported along with the directly refined values.
Refinement of the Flack parameter as part of the structure refinement procedure usually gives different, though similar, values to post-refinement methods. The source of this discrepancy has been probed by analysing a range of data sets taken from the recent literature. Most significantly, it was observed that the directly refined Flack (x) parameter usually has a larger standard uncertainty than the post-refinement estimates.
Keywords: absolute structure; Flack parameter; refinement; software; problem structures.
1. Introduction
The introduction by Rogers (1981) of a new parameter, η, as a refineable multiplier onto f′′ in the least-squares optimization of a crystal structure [equation (1)] was the first attempt to determine absolute structure directly as part of the refinement process (hereafter called direct determination).
Flack (1983) recognized that the η parameter had no physical significance except for values of ±1, and introduced a new formulation of the problem. He proposed that a given sample be regarded as a twin by inversion, and that refining the twin fraction would reveal the absolute structure. Representing I(hkl) by I^{+} and I(−h−k−l) by I^{−},
where the subscript `s' indicates a quantity computed from the atomic model with the twin fraction x set to zero (i.e. a non-twinned single crystal), `c' a quantity computed from a twinned model (i.e. x not necessarily zero) and `o' an observed quantity. Like the Rogers method, this proposal refined the parameter using all the reflection data as part of the normal structure optimization, but had the advantage that the parameter had a real physical significance throughout the whole range from zero to one. This innovation increased awareness of the existence of inversion twinning and fears that samples may not have been enantiopure. For convenience we will use the term Flack parameter to imply x determined by an unspecified method, and Flack (x) to imply its determination as part of the main structure refinement.
The 1993 release of SHELXL included a post-refinement method for determining the Flack parameter which came to be known as `hole-in-one'. Equation (2) can be rearranged to give the parameter directly from observed structure factors and structure factors computed from the atomic model and its inverse (Sheldrick, 2014).^{1}
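The rearrangement behind the hole-in-one approach can be sketched numerically: equation (2) is linear in x, so given observed intensities and intensities calculated from the model and its inverse, x follows from a weighted linear least-squares solve. This is an illustrative sketch only, not the SHELXL implementation; all function and array names are assumptions.

```python
import numpy as np

def hole_in_one_x(I_obs, I_model, I_model_inv, sigma):
    """Post-refinement 'hole-in-one' estimate of the Flack parameter.

    Solves I_c = (1 - x) * I_model + x * I_model_inv for x by
    weighted linear least squares over all reflections, with weights
    1/sigma^2 taken from the observed intensities.  Covariance with
    the other structural parameters is neglected, as in the text.
    """
    I_obs, I_model, I_model_inv, sigma = map(
        np.asarray, (I_obs, I_model, I_model_inv, sigma))
    w = 1.0 / sigma**2
    d = I_model_inv - I_model      # sensitivity of each reflection to x
    num = np.sum(w * d * (I_obs - I_model))
    den = np.sum(w * d * d)
    x = num / den
    su = np.sqrt(1.0 / den)        # s.u. ignoring model covariance
    return x, su
```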
In spite of fears that post-refinement determinations of the Flack parameter might be compromised because of the neglect of potential covariance with the other refineable parameters, Hooft et al. (2008) devised a method based on a Bayesian analysis of Friedel differences (see Müller, 1988, for an interpretation of Friedel pairs). These authors recast equation (3) to treat Friedel pairs of reflections simultaneously,
where D_{s} = (I_{s}^{+} − I_{s}^{−}) and similarly for D_{o} and D_{c}.^{2} For convenience later, we have called x computed from equation (4) the Bijvoet (d) parameter. The advantage of (4) over (3) is that by taking differences the contribution of the real part of the scattering is reduced, making the computation less dependent on details of the model structure.
Their process, which used weights derived from the variances of the observed intensities modified by information obtained from the normal probability plot (n.p.p.) of the Friedel residuals (Abrahams & Keve, 1971), yielded values of the parameter, Hooft (y), not unlike those from the Flack (x) method. The underlying assumption, as in Dyadkin et al. (2016), was that the error distribution was Gaussian. Hooft et al. (2010) show that this distribution is adequate for good data, but that for poor data dramatically improved results are obtained by use of the Student t-distribution. The method further enabled one to estimate the probabilities of the correctness of absolute-structure assignments for enantiopure or 50:50 racemically twinned samples.
Parsons et al. (2013) examined the use of equation (4) and its quotient form, equation (5), which we will call Parsons (q), both for post-refinement determination of the Flack parameter and as restraints during the direct refinement of Flack (x).
where Q_{s} = (I_{s}^{+} − I_{s}^{−})/2A_{s} etc. and A_{s} = (I_{s}^{+} + I_{s}^{−})/2.
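Numerically, the Parsons (q) estimate amounts to a weighted fit of the slope of observed quotients against calculated quotients, with pairs of very small calculated A_{s} excluded because the quotient is then unstable. The following is a sketch under those assumptions; the names, the small-denominator threshold and the error model for the quotients are illustrative, not the published implementation.

```python
import numpy as np

def parsons_q_x(Ip_o, Im_o, Ip_s, Im_s, sig_q):
    """Parsons(q) post-refinement estimate of x (sketch of equation (5)).

    Q = (I+ - I-)/(I+ + I-); the observed quotients satisfy
    Q_o = (1 - 2x) Q_s, so x follows from a weighted slope fit.
    sig_q holds the s.u. assigned to each observed quotient.
    """
    Ip_o, Im_o, Ip_s, Im_s, sig_q = map(
        np.asarray, (Ip_o, Im_o, Ip_s, Im_s, sig_q))
    A_s = 0.5 * (Ip_s + Im_s)
    keep = A_s > 1e-6 * A_s.max()   # illustrative small-denominator filter
    Q_o = (Ip_o - Im_o) / (Ip_o + Im_o)
    Q_s = (Ip_s - Im_s) / (Ip_s + Im_s)
    w = 1.0 / sig_q**2
    den = np.sum(w[keep] * Q_s[keep]**2)
    slope = np.sum(w[keep] * Q_s[keep] * Q_o[keep]) / den
    su = np.sqrt(1.0 / den)         # s.u. of the slope, hence of 2x
    return 0.5 * (1.0 - slope), su
```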
These authors showed that the Hooft (y), Parsons (q) and Bijvoet (d) estimates of the parameter were usually similar to direct refinements of the Flack (x), but with significantly lower standard uncertainties. They also observed that using equations (4) or (5) as restraints on the least-squares refinement gave values of the Flack (x) in close agreement with post-refinement estimates of absolute structure.
No explanation was given for the observation that direct refinement of the Flack parameter consistently gave larger standard uncertainties than any of the post-refinement methods, other than to note that the direct refinement was based on all the reflections used in the refinement, while the post-refinement analyses used selected subsets of the full data set. In order to investigate the source of the differences between direct and post-refinement estimations of absolute structure, several different approaches were implemented in the CRYSTALS program. Data sets taken from the literature, including Escudero-Adán et al. (2014), hereafter EBB, Parsons et al. (2013), hereafter PFW, and Flack (2013), hereafter HDF, were re-examined using these tools.
2. Background
During the period before the common availability of area-detector diffractometers, it was generally regarded as too expensive to collect a highly redundant set of all Friedel pairs of reflections. Some of the need for redundancy could be reduced by making measurements in geometries which minimized the differences in the experimental errors between Friedel pairs (Le Page et al., 1990). Even so, full sets of Friedel pairs were generally not measured, and after a structure was resolved and refined from a set of data merged in the corresponding Laue group, selected Friedel pairs were remeasured and used for absolute-structure determination (see, for example, Ealick et al., 1975). The introduction of the Flack parameter has led to attempts to use X-ray crystallography both to determine absolute configuration and to determine enantiopurity, i.e. whether the sample used for the measurements was twinned by inversion.
2.1. Probability methods
Prior to the introduction of Flack's parameter, structure analysts had simply tried to ascertain the probability of the absolute structure of the crystal being the same as that of the model, so that an enantiomorph was chosen to give a best match between selected observed and calculated structure factors. The Hamilton (1965) R-factor ratio method used all the observed reflections, but was difficult to apply convincingly due to uncertainty about a valid definition of the number of degrees of freedom involved in swapping from one model to its inverted image.
Other methods used reflections carefully selected from the existing data sets, or carefully remeasured. Engel (1972) favoured the `Bijvoet method', in which a selected set of reflections, the sensitive reflections, were remeasured more carefully. Engel used B_{h} = (Q_{h} − 1)/½(Q_{h} + 1) as a measure of the Bijvoet sensitivity, with Q_{h} = F_{h}/F_{−h}. A comparison of the signs of the measured and calculated Bs from a selected set of reflections yields the absolute structure. If the intensities of Friedel pairs of reflections, preferably with a B of the opposite sign, could be found and measured in a neighbouring part of reciprocal space for which absorption and other errors will be similar, then a `double quotient' can be estimated which has the effect (as in the Parsons quotient) of reducing the influence of geometry-related experimental errors. Le Page et al. (1990), recognizing that Rogers' η should be ±1 for an enantiopure sample, computed the probability that the absolute structure of the model and that of the sample were the same on the basis of a remeasured set of selected reflections. Probability methods have been revisited again by Hooft et al. (2008), and using a t-distribution (Hooft et al., 2010). They constructed tests on the basis that the material is enantiopure: the P(2) test giving the probability that the model and the material have the same absolute structure or that the sample is possibly twinned, and the P(3) test distinguishing between the correct assignment, a 50:50 twin or an inverted assignment. The appeal of probability methods is that, under strict assumptions, they appear to give a clear-cut result.
2.2. Direct refinement of the Flack (x) parameter
Direct refinement of the Flack (x) parameter simultaneously with the other structural parameters is now commonplace. Flack et al. (2006) recommend that a full set of Friedel pairs be measured on an area-detector instrument, preferably with high redundancy in order to optimize empirical intensity scaling, and that refinement be started with the Flack parameter set to 0.5 to minimize the risk of convergence to a false minimum. This is particularly important in the case of space groups with floating origins, in which the structure may distort to accommodate an incorrectly assigned absolute structure – the polar dispersion error (Cruickshank & McDonald, 1967). It has been widely observed that although the Flack (x) is rarely in conflict with a known absolute configuration (Thompson & Watkin, 2011), it can refine to a value away from the ideal value for an enantiopure material. There is also evidence that the standard uncertainty computed from the full variance–covariance matrix is often overestimated. Parsons, Wagner et al. (2012) have proposed using leverage analysis to identify reflections which are particularly influential in the determination of the Flack parameter and which could be remeasured and used as supplementary observations (restraints) in the refinement. An alternative approach (Thompson & Watkin, 2011) reuses Friedel pairs selected from the existing data set to construct supplementary observations.
2.3. Post-refinement determination of the Flack parameter
The relation between the absolute structure of a crystalline material and the measured Friedel pairs is given in equation (2). The worryingly high standard uncertainty of the Flack (x) parameter determined for many materials of known enantiopurity and absolute configuration has led to a search for methods to determine the parameter more robustly than simply including it in the main least-squares refinement, especially in cases where the resonant signal is likely to be weak. Not infrequently, these methods involve the use of selected subsets of the original or new data.
Given a reasonably well refined model, the Flack parameter can be estimated by solving equations (2), (3), (4) or (5) for x by conventional least squares. The disagreement sometimes seen between the hole-in-one method [equation (3)] and the Bijvoet difference method [equation (4)] might, in part, be due to the additional information introduced by pairing up reflections for the differences, with the possibility that certain kinds of errors in the model or in the data might be correlated and tend to cancel out.
The denominators in the Parsons (q) expression (5) were based (Parsons et al., 2013) on an extension of the earlier recognition that on a serial four-circle diffractometer setting angles could be chosen so that the absorption effect for reflections h and −h would be similar (Le Page et al., 1990). On an area-detector diffractometer these conditions are rarely satisfied, and in any case the final intensity of each reflection is usually the average of several measurements made with quite different setting angles.
Equation (5) can be rewritten as
Here A_{o} and A_{s} seem to be scale factors downweighting the contribution of strong reflections to the parameter. However, when each reflection pair is weighted by the inverse of the variance of the observed quantities, this downweighting disappears.
If equation (6) is rewritten as
we can see that if A_{o} can be regarded as A_{s} ± error, the ratio A_{o}:A_{s} could take large values when the calculated mean intensity A_{s} is very small – such reflections must be excluded from any quotient calculation. In fact, if A_{o} is not very similar to A_{s} then there is a reasonable probability that there is something wrong with the model, the data or both. We can also see that the A_{o}/A_{s} terms act as per-reflection scale factors and should be counted as independent variables.
Just as plots of F_{o} versus F_{c} can be of diagnostic value in a normal structure refinement, so plots of D_{o} versus D_{s} and 2A_{o} versus 2A_{s} can give insight into absolute-structure determination (Parsons, Pattison & Flack, 2012). The 2A_{o} versus 2A_{s} plot should have a unit gradient and might identify outliers in which the quotient in equation (7) lies far from unity. For materials correctly assigned, the D_{o} versus D_{s} scatterplot should also have a unit gradient, and for materials with a large Friedif (Flack & Bernardinelli, 2008) this is usually clearly evident. For materials with a Friedif less than 100 the linear relationship is always less clear (Cooper et al., 2016).
Fig. 1(a) shows a scatterplot of D_{o} versus D_{s} and 2A_{o} versus 2A_{s} for structure SL6418 (Friedif = 498; Smith & Lamb, 2012). The best line through D_{o} and D_{s} (green points) has a gradient of 1.063 (5) and an intercept of −0.002 (21); the correlation coefficient is 0.960 and the coefficient of determination is 0.929. The value of (1 − 2x) is reliably determined. Fig. 1(b) is a similar plot for structure EBB5001 (Friedif = 6.5). The best D_{o} − D_{s} line appears to be independent of the scatter of the observations, yet a least-squares fit gives a gradient of 0.92 (17) [corresponding to a Bijvoet (d) of 0.04 (9)], a correlation coefficient of 0.116 and a coefficient of determination of 0.014.
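The weighted straight-line fit behind such D_{o} versus D_{s} plots can be sketched as follows: the gradient estimates (1 − 2x), and the correlation coefficient warns when the fitted line has little physical significance. A minimal sketch under the stated assumptions, not the plotting program's actual code; all names are illustrative.

```python
import numpy as np

def friedel_regression(D_o, D_s, sig_Do):
    """Weighted least-squares line D_o = slope * D_s + intercept.

    slope estimates (1 - 2x); r is the weighted correlation
    coefficient, whose square is the coefficient of determination.
    Weights are 1/sigma^2 of the observed Friedel differences.
    """
    D_o, D_s = np.asarray(D_o, float), np.asarray(D_s, float)
    w = 1.0 / np.asarray(sig_Do, float)**2
    W = w.sum()
    xb, yb = (w * D_s).sum() / W, (w * D_o).sum() / W   # weighted means
    Sxx = (w * (D_s - xb)**2).sum()
    Syy = (w * (D_o - yb)**2).sum()
    Sxy = (w * (D_s - xb) * (D_o - yb)).sum()
    slope = Sxy / Sxx
    intercept = yb - slope * xb
    r = Sxy / np.sqrt(Sxx * Syy)        # correlation coefficient
    return slope, intercept, r
```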
Except when the data points all lie on an exactly vertical line, it is always possible to fit a regression line. However, if the spread of the observations along the dependent axis is much greater than that along the independent axis, the line will have little or no physical significance. The gradient is independent of the number of observations, but its standard uncertainty is proportional to 1/√n, so that the s.u. can be reduced by including more `vanilla' data – the Emperor of China Syndrome (Parrish, 1960). Rogers (1981) had been worried that in the Hamilton method some of the resonant differences would be below the observable threshold, so that `Many of the reflections are mere passengers in the calculations of the ΔF' yet contribute to the statistics and falsely improve the apparent reliability of the analysis. Ealick et al. (1975) chose to work with reflections for which the `sensitivity factor', SF = (D_{o} − D_{s})/A_{o}, was the largest [note that, ignoring the effect of scale factors and the Lp correction etc., for Poisson statistics I_{o} is proportional to σ^{2}(I_{o}), so that SF is a measure of signal-to-noise]. Rabinovich & Hope (1980) introduced the idea of `observability', D = (D_{s}A_{o})/(A_{s}σ(D_{o})), similar to Ealick's sensitivity factor. The ratio A_{o}/A_{s} in this expression means that it is strongly related to the Parsons quotient.
The importance of a given datum on its own fitted value is measured by its leverage (Prince, 2004). Since the mean values of D_{o} and D_{s} (and the corresponding quotients) are close to zero, fitting a straight line can be regarded as a one-parameter model, so that the leverage of each data point is given by
where d_{i} are the values of either D_{s} or Q_{s}. The data with the greatest leverage are those with large absolute values of D_{s} or Q_{s}. Remember that although D_{s} does not depend directly on A_{s}, a large D_{s} is only possible for a large A_{s}. If each observation in the post-refinement determination of the Flack parameter is weighted by the inverse of its variance, P_{ii} is proportional to the square of the signal-to-noise ratio. To a first approximation, σ^{2}(I) ∝ I (Evans, 2006; but see also §5.2.6), so that the resonant differences originating from strong reflections will have large standard uncertainties, and be down-weighted. The most useful reflections are likely to be those of intermediate intensity and with a large resonant difference. This is in agreement with the leverage analysis for the Flack (x) parameter in the least-squares refinement of all structural parameters (Parsons, Wagner et al., 2012).
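For a one-parameter (slope-only) model the hat-matrix diagonal of equation (8) reduces to a simple normalized square, which can be computed directly; this sketch assumes the d_{i} are the calculated D_{s} (or Q_{s}) values.

```python
import numpy as np

def one_parameter_leverage(d):
    """Leverage P_ii of each Friedel pair in a slope-only fit.

    For the model D_o = c * D_s through the origin, the leverage of
    point i is P_ii = d_i^2 / sum_j(d_j^2), so pairs with large
    |D_s| (or |Q_s|) dominate the determination of the parameter.
    """
    d = np.asarray(d, dtype=float)
    return d**2 / np.sum(d**2)
```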
Equation (4) can be made to yield values of the Flack parameter on a per-reflection basis,
Plotting x from equation (9) against D_{s} (Fig. 2) should give a horizontal line at the value of the Flack parameter. If D_{s} is very small compared to D_{o}, the value of x can take extreme values. For a structure with a low Friedif, individual x values can be ill-determined, and even for good data many extreme values can be seen. The massive vertical distribution near the centre of the plot (which includes both positive and negative estimates of x) corresponds to small values of the denominator in equations (4) and (9), and it is only the data lying distant from D_{s} = 0 which contain useful information.
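The per-pair estimates of equation (9) follow from rearranging D_{o} = (1 − 2x)D_{s}. A sketch, in which the small-denominator cutoff `tiny` is an illustrative choice rather than a published threshold:

```python
import numpy as np

def per_reflection_x(D_o, D_s, tiny=1e-8):
    """Per-Friedel-pair Flack estimates, x_i = (D_s - D_o) / (2 D_s).

    Pairs with |D_s| below `tiny` are returned as NaN: a small
    denominator produces the wild estimates seen near D_s = 0
    in Fig. 2 and carries no useful information.
    """
    D_o, D_s = np.asarray(D_o, float), np.asarray(D_s, float)
    x = np.full_like(D_s, np.nan)
    ok = np.abs(D_s) > tiny
    x[ok] = (D_s[ok] - D_o[ok]) / (2.0 * D_s[ok])
    return x
```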
3. Data quality
3.1. Friedel completeness
The introduction of the Flack analysis meant that an indication of the absolute structure could be obtained without remeasuring any data. Bernardinelli & Flack (1987) showed that, strictly speaking, it does not even require the measurement of any Friedel pairs, but simply that any Friedel pairs that are measured are not merged together. Flack et al. (2006) discuss at length the need for extensive Friedel coverage in the case where a structure is pseudo-centrosymmetric. Trials in the 1980s using the Enraf–Nonius CAD4 serial diffractometer showed that in some cases (for example, an organometallic compound spontaneously resolving in P2_{1}) a good indication of the absolute structure could be obtained without measuring all Friedel pairs. These results were never published, but an example can be simulated using area-detector data. The model for HDFgg3255 (Abud et al., 2011), in P2_{1}2_{1}2_{1}, Friedif 600 (Flack & Bernardinelli, 2008), was refined using a full data set (only 114 unpaired acentric reflections), the all-positive quadrant of data plus the h = −1 layer, and just the all-positive quadrant. The same model, with Flack (x) set to 0.5, all atomic coordinates slightly perturbed, F^{2} observations and the weighting scheme optimized for the full data set, was used to start all three refinements (see Table 1).

This simulation is only indicative since Friedel pairs were measured in the original experiment and used to obtain frame scale factors and absorption corrections, but it casts some light on the robustness of the Flack analysis (see also http://www.ccp14.ac.uk/ccp/webmirrors/hugorietveld/stxnews/stx/discuss/disfals.htm).
3.2. Outliers and data quality
Merli & Sciascia (2011) and many others, e.g. PFW2013 and Le Page et al. (1990), recognized that outliers in the data would degrade the analysis. Hooft et al. (2008) provided a filter to try to ensure that only reliable data were used in the determination of the Hooft (y) parameter. Parsons et al. (2013) give an example in which exclusion of a single reflection changed the Flack parameter from 0.18 (8) to 0.08 (8). The detection of outliers is a vexing problem. Reflections with large residuals can be due to errors in the observed or modelled values, or both quantities. When a model is fully parameterized (all atoms have been found, disorder resolved), then there is a good chance that an individual I_{s} is more likely to be `correct' than the corresponding I_{o}, because each computed structure factor is, in effect, a complexly weighted average of all the observed structure factors. Under these conditions, a large residual is usually attributed to error in the observation, and these reflections – the outliers – may be filtered out. In structural refinement, an outlier can be identified by comparing the residual with the experimentally determined standard uncertainty. If the fully developed model will not refine so that this residual is reduced, it is usually assumed that the discrepancy is a fault in the observation. Robust/resistant weighting schemes are designed to reduce the influence of these suspect reflections in a smoothly continuous way rather than simply rejecting selected data (Prince, 1994). In the case of determining the Hooft (y) parameter, the observed Friedel difference can be compared with the calculated difference and reflections with improbably large residuals excluded from the computation. In the original implementation in PLATON (Hooft et al., 2008, and now integrated into CRYSTALS), the filtering was via the user-adjustable variable Outlier Crit.
In later versions the filter is automated such that reflections for which the observed Friedel difference is more than twice the largest calculated difference, D_{s}max, are eliminated (see also PFW2013). A very small value for the Friedel difference can still occur even when the two contributing reflections are strong and are accepted by the `three sigma' criterion. In the PFW2013 implementation, reflections for which either or both I_{o}^{+} and I_{o}^{−} were less than three standard uncertainties were also eliminated, as were reflections with significant deviations from the (D_{o} − D_{s}) n.p.p. best line. Whereas in the conventional least-squares refinement of crystal structures some practitioners insist on using all reflections, it is now established practice to filter out some reflections for the post-refinement analysis of absolute structure. Filters are provided in CRYSTALS to exclude reflections which may either introduce instability into the calculations (very small denominators) or are suspected of being in serious error.
3.3. Iterative reweighting
The Le Page algorithm (Le Page et al., 1990) in effect assigns a value of ±1 to the Rogers' η value of the selected reflections on a one-by-one basis, as opposed to direct refinement of η from all the reflections in the main least-squares calculations. It tacitly assumes that the material is enantiopure. Equation (9) enables us to also evaluate the Flack parameter on a reflection-by-reflection basis – the data used in creating Fig. 2. We could in principle evaluate the parameter from each pair of carefully selected and remeasured reflections – or even from just one very carefully selected and very carefully measured pair. Because x is a continuously meaningful parameter in the range 0–1, it is not necessary to assign it an integer value. Now, rather than remeasuring selected reflection pairs to estimate x, we can use all the pairs measured in the original data collection to give individual estimates of x. With the exception of unknown correlations introduced during the measurement process, these estimates of x will be experimentally independent (or at least as independent as the measurements of the original data were). As was seen in Fig. 2, the values of x can fall wildly outside the 0–1 range – these are physically impossible and correspond to outliers originating either from large experimental errors, or are artefacts of a small denominator in equation (9). Following the arguments of Blessing & Langs (1987) for the merging of equivalent reflections, we can merge these individual x values, and since each x value has an associated experimental variance, we can compute both the external variance
and the internal variance (Appendix A)
The probability of an individual x_{i} can be estimated from
Friedel pairs yielding a value of x differing from the average value of x by several variances have a low probability. This probability can be used as a modifier for the weight (1/σ^{2}) used to compute a new weighted average value of x, and the process repeated (Blessing & Langs, 1987). Since the distribution of the computed Flack parameters may be dispersed, skewed or long-tailed, the process is started using the median value of x_{i} as an initial estimate of x. Thus, rather slack values can be set for the various initial filter thresholds used in selecting reflections, and a smoothly varying function can be used to down-weight suspect data. Friedel pairs with a probability p_{i} greater than a user-adjustable threshold (typically 0.001) are counted to provide an indication of the number of `useful' reflections in the data. The process is terminated when the number of `useful' reflections is the same for two successive iterations, or until ten iterations are completed. In this latter case, the process is regarded as being unconverged and unsuccessful. This situation seems to arise when the resonant signal is small compared with the errors in the intensity measurements. The standard uncertainty on the final value of x′ is estimated from the weighted external variance
Iterative reweighting (Prince, 1994) using the Tukey biweight algorithm (Tukey, 1976) gave essentially the same results as the Blessing method.
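The iterative reweighting just described can be sketched as follows. The loop structure (median start, probability-modified weights, `useful'-count stopping rule) follows the text, but the Gaussian probability modifier and other details are illustrative assumptions rather than the CRYSTALS implementation.

```python
import numpy as np

def iterated_flack_mean(x_i, var_i, p_min=0.001, max_iter=10):
    """Blessing & Langs-style iteratively reweighted mean of the
    per-pair Flack estimates x_i with experimental variances var_i.

    Starts from the median, down-weights pairs far from the current
    estimate via an (illustrative) Gaussian probability modifier, and
    stops when the count of 'useful' pairs (p_i > p_min) is stable
    for two successive iterations, or after max_iter iterations.
    """
    x_i, var_i = np.asarray(x_i, float), np.asarray(var_i, float)
    x = float(np.median(x_i))           # robust starting estimate
    n_useful_prev = -1
    for _ in range(max_iter):
        z2 = (x_i - x)**2 / var_i
        p = np.exp(-0.5 * z2)           # illustrative probability modifier
        w = p / var_i
        x = np.sum(w * x_i) / np.sum(w)
        n_useful = int(np.sum(p > p_min))
        if n_useful == n_useful_prev:
            break
        n_useful_prev = n_useful
    # weighted external variance of the mean (cf. equation (14))
    var_ext = np.sum(w * (x_i - x)**2) / ((len(x_i) - 1) * np.sum(w))
    return x, np.sqrt(var_ext), n_useful
```

With a gross outlier among the per-pair estimates, the outlier's probability collapses to zero within one or two cycles and the mean settles near the consistent values.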
In order to provide the user with a visual representation of the data, a histogram of the individual values of x can be plotted (Fig. 3). The normalized sum of the weights of the reflections in each bin is also plotted. The number of pairs containing `useful' information and the number of pairs yielding an x value falling in the range −0.5 < x < 1.5 are also output. For convenience, we will denote the value of x′ determined by this histogram method as the Histogram (h) parameter, and σ(h) its s.u. The σ(h) can be further scaled by the gradient of the Friedel residual n.p.p. Note that the weights could also be used for the computation of a Bijvoet (d) or Parsons (q) parameter.
The expected and actual information content of the data can be visualized (Fig. 4) by plotting histograms of D_{s}/σ(D_{o}) and D_{o}/σ(D_{o}) (Bernardinelli & Flack, 1987). A distribution of D_{s}/σ(D_{o}) which is very narrow and centred on zero indicates that there is little information in the data. When this is accompanied by a broad D_{o}/σ(D_{o}) distribution, we have an indication that the data are very noisy.
3.4. Ratios of averages and averages of ratios
Letting (1 − 2x) in equation (4) be represented by c, then for each Friedel pair we have
An average value of c_{i} can be computed as a leastsquares estimate (see Appendix A)
or as a simple mean
leading to 〈x〉 and x′. Equation (16) is a ratio of averages (there is a 1/n term in both the numerator and the denominator); equation (17) is the average of the individual ratios, c_{i}. In general, if all the summations are made over the same number of data points and there are no wildly eccentric outliers, the values of 〈x〉 and x′ are similar. An indication of the presence of outliers can be obtained by computing these coefficients using all the measured Friedel pairs. If they are substantially different, the distribution of the errors in D_{o} may be skewed, there may be outliers, the errors may swamp any signal or there may be contributors to (15) where the D_{si} are tiny. Weighted versions of equations (16) and (17) can be recomputed during the Blessing & Langs (1987) process, where outliers are progressively down-weighted. If convergence is achieved before the maximum number of cycles is reached, 〈x〉 and x′ are usually very similar. Both values are output by CRYSTALS.
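The two estimators of c = (1 − 2x) can be sketched in unweighted form (the weighted versions simply carry the Blessing & Langs weights through both sums); a large disagreement between them flags outliers, skewed errors or tiny-D_{s} contributors, as described above. Function names are illustrative.

```python
import numpy as np

def ratio_diagnostic(D_o, D_s):
    """Compare the ratio-of-averages and average-of-ratios estimates
    of c = (1 - 2x), cf. equations (16) and (17), returned as the
    corresponding pair of Flack estimates (<x>, x').
    """
    D_o, D_s = np.asarray(D_o, float), np.asarray(D_s, float)
    c_lsq = np.sum(D_o * D_s) / np.sum(D_s**2)   # ratio of averages
    c_mean = np.mean(D_o / D_s)                  # average of ratios
    return 0.5 * (1.0 - c_lsq), 0.5 * (1.0 - c_mean)
```

For clean data with no outliers the two values coincide; injecting one pair with a huge D_{o}/D_{s} ratio pulls the average-of-ratios far from the ratio-of-averages, which is the diagnostic signal.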
4. Experimental considerations
4.1. Restraints
The result of using selected reflections as restraints either in the Parsons et al. (2013) method or the Thompson & Watkin (2011) method seems at first to be reassuring, but a similar result can also be achieved by computing the value and standard uncertainty of the Flack parameter from the data which would otherwise have been used as restraints, and simply using this as one idealized restraint. Using HDFgg3255 (Friedif = 600) as an example again gave the following results for unrestrained and restrained refinements using various target values of the Flack parameter and a requested standard uncertainty of 0.005 (Table 2). The SHELX-type weights were optimized for each refinement.

The only impact of imposing the restraint that the Flack parameter should be zero is to reduce the refined value of the parameter from 0.0018 to 0.0004. There is no appreciable change in the R factors or the other estimates of x. Setting a target of 0.5 with a standard uncertainty of 0.005 leads to a refined value close to the target, and causes a small increase in the R factors. The Hooft and Histogram estimates of x decrease a little, and since these are computed from the refined structural model, indicate that the model has relaxed in some way. Raising the target to 1.0 causes a very significant change in the R factors, but the refined value of the Flack (x) almost satisfies the restraint. The automatically adjusted SHELX-type weighting parameter a increased as the Flack restraint was increased, progressively down-weighting strong reflections in order to try to achieve a flat analysis of residuals, emphasizing the dangers of modifying the weights until the model is finalized. The n.p.p. for the main refinement became progressively more S-shaped as progressively invalid Flack values were imposed. The resonant-difference n.p.p.s, using pure statistical weights, remained fairly straight throughout. Preserving the atomic coordinates and weights from this last refinement and resetting the Flack parameter to zero gave the R factors in the row labelled with an asterisk. Refinement with a target Flack of unity can be achieved simply by causing a small distortion of the model which has minimal impact on the conventional R factor but increases the reweighted R factor. For HDFgg3255 the median bond-length distortion with the inverse restraint was 0.01 Å and the maximum 0.03 Å, i.e. similar to Müller's (1988) comparison of structures and their inverses. The median change in the arithmetic U_{equiv} was 0.001 Å^{2} and the maximum 0.004 Å^{2}. These results can be interpreted (for a reasonable data set) as showing that small changes can be forced on the value of the Flack parameter without having an appreciable effect on the atomic model, and hence on estimates of the absolute structure based on that model.
They also show that while an incorrect assignment of absolute structure will affect fine details of the molecular geometry, small errors in the structural model have only a small effect on the post-refinement determination of the absolute structure.
4.2. Correlation between Flack and other parameters
In order to demonstrate that the Flack (x) and overall scale parameters are only weakly correlated with the atomic structure, the x, y and z coordinates of the non-H atoms in the fully refined unrestrained structure of HDFgg3255 (called `original' in the table) were randomly perturbed from their refined positions with a mean displacement of 0.0 and a standard deviation of 0.1 Å. Just the overall scale and Flack (x) parameters were then refined for five different perturbations of the structure, each of which had a conventional R factor of ∼14% (Table 3). Although the directly refined Flack (x) parameter was less well defined, the table shows why it may be possible to assign a reasonably reliable estimate of the absolute structure quite early on in a structure analysis by the post-refinement methods (Sheldrick, 2015).

4.3. Influence of weighting schemes
In the discussion so far it has been assumed that the weights for the post-refinement analyses have been derived from the observed variances of the original diffraction data via the preceding equations. However, it has long been established practice to use more complex weighting schemes in the main structure refinement. These weights are computed from empirical formulae with coefficients selected to give a flat distribution of weighted residuals. This process is intended to allow for unidentified errors in the data and shortcomings in the model (Cruickshank, 1961). Weights computed in this way have an influence on the Flack (x) parameter and its s.u. as determined during the main refinement (Bernardinelli & Flack, 1987). In order to see the influence of these weights on the post-refinement determination of the Flack parameter, they can be converted to observational pseudo-variances by
where weight_{lsq} is the weight assigned to the reflection during refinement.
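The conversion, together with a SHELX-type weight of the familiar form w = 1/[σ^{2}(F_{o}^{2}) + (aP)^{2} + bP] with P = (max(F_{o}^{2}, 0) + 2F_{c}^{2})/3, can be sketched as follows; the function names are illustrative, not the CRYSTALS implementation.

```python
import numpy as np

def shelx_weight(Fo2, sig_Fo2, Fc2, a, b):
    """SHELX-type weight w = 1 / (sigma^2(Fo^2) + (a*P)^2 + b*P),
    with P = (max(Fo^2, 0) + 2*Fc^2) / 3.  The a and b terms
    augment the observed variances, as discussed in the text."""
    P = (np.maximum(Fo2, 0.0) + 2.0 * Fc2) / 3.0
    return 1.0 / (sig_Fo2**2 + (a * P)**2 + b * P)

def pseudo_variance(weight_lsq):
    """Observational pseudo-variance implied by a least-squares
    weight, sigma^2 = 1 / weight, so that an empirical weighting
    scheme can be carried into the post-refinement analysis."""
    return 1.0 / np.asarray(weight_lsq, float)
```

With a = b = 0 the weight reduces to the pure statistical 1/σ^{2}(F_{o}^{2}) and the pseudo-variance recovers the observed variance exactly.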
PFWfyo12e (Parsons et al., 2013) contains only carbon, nitrogen and hydrogen, and Friedif is 11.8. This data set, specifically collected with a view to exploring the differences between direct and postrefinement evaluations of absolute structure, has Flack (x) = 0.17 (38) and Bijvoet (d) = 0.01 (8), but contains no evident source for the discrepancy between the two methods. The data set has an average multiplicity of observation of ∼ 36.
The structural model, including the Flack (x), was refined under three regimes: (a) using pure statistical weights 1/σ^{2}(I), (b) in which the weights were rescaled by a common factor to give a goodness-of-fit (GoF) of 1.0, and (c) using optimized SHELX-type weights, which involves adding terms to σ^{2}(I). For each regime, postrefinement analyses were computed with pure statistical weights, and with ones derived from the least-squares weights. The results are summarized in Table 4.

For this data set we see that the choice of weighting scheme has little influence on the s.u. of the Flack (x) parameter determined in the main least squares, although it does have an influence on the value of the parameter itself [column headed Flack (x)]. In regime (a), postrefinement analysis gives the same results whether weighted by simple statistical weights or by weights derived from the LSQ weights (since these were also simple statistical). However, all of the postrefinement methods gave standard uncertainties reduced to ∼ 20% of those from the direct refinement. The n.p.p. for the weighted Friedel differences was substantially linear with a unit gradient, although the gradient for the n.p.p. of refinement residuals was 4.5 (Fig. 5a). The histogram of the weighted structure-factor residual w(F_{o}^{2} − F_{c}^{2})^{2} as a function of intensity (Fig. 6a) shows an unacceptable upward trend as a function of intensity.
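The gradient of an n.p.p. can be estimated in a few lines; this sketch (our own illustration, not the CRYSTALS implementation) sorts the weighted residuals and regresses them against the expected normal order statistics. A unit gradient means the quoted sigmas describe the scatter; a gradient g suggests they are underestimated by roughly a factor g:

```python
from statistics import NormalDist, mean

def npp_gradient(residuals, sigmas):
    """Normal probability plot gradient: sorted weighted residuals
    regressed against expected normal quantiles (Blom-style plotting
    positions)."""
    r = sorted(d / s for d, s in zip(residuals, sigmas))
    n = len(r)
    nd = NormalDist()
    q = [nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    qm, rm = mean(q), mean(r)
    num = sum((qi - qm) * (ri - rm) for qi, ri in zip(q, r))
    den = sum((qi - qm) ** 2 for qi in q)
    return num / den
```

Rescaling all sigmas by a common factor simply divides the gradient by that factor, which is the effect of regime (b).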
The gradient of the n.p.p. can be made unity simply by rescaling all of the reflection variances. This rescaling has no effect on the refined parameter values and, because of the way parameter standard uncertainties are conventionally computed (Cruickshank & Robertson, 1953), it has no effect on their standard uncertainties. Because the structural parameters are unchanged by this scaling, the calculated Friedel differences are unchanged, so that row (b)STAT in Table 4 is identical to rows (a), with the exceptions of the GoF and the n.p.p. for the main refinement, which are both now close to unity (Fig. 5b). Row (b)LSQ in Table 4 contains some interesting features. Although the n.p.p. for the main refinement now has a unit gradient, the n.p.p. for the Friedel differences has a gradient of 0.2, the inverse of that for the original main n.p.p. As a consequence, the standard uncertainties in almost all the postrefinement analyses rose to values not dissimilar to those obtained by direct refinement of Flack (x). The exceptions to this increase in the s.u. of the parameters are those computed by the Hooft method and the histogram method rescaled by the gradient of the n.p.p. Simply rescaling the weights to produce a GoF of unity is, however, not a useful procedure because it fails to produce a uniform distribution of weighted residuals as a function of intensity (Fig. 6b). For well behaved weights, the average should be approximately unity for all intervals across the intensity range. It is now generally accepted that a good strategy for obtaining a uniform distribution of weighted residuals is not to scale the observed variances, but to augment them with terms depending upon the magnitude of the observed and/or calculated structure factors (see, for example, the SHELX76 instruction manual). The structure was re-refined using SHELX-type weights, giving rows (c) in Table 4. With these weights, the gradient of the n.p.p. was close to unity and the analysis of variance roughly flat. The s.u.
of the directly refined Flack (x) parameter hardly changed with the new weights, but the parameter itself increased by one-half an s.u. The shifts in the structural parameters had no visible effect on the computed Friedel differences, so that row (c)STAT is the same as the other purely statistically weighted postrefinement analyses. The standard uncertainties for the postrefinement analyses in (c)LSQ are similar to those in (b)LSQ, but the parameter itself has increased. Fig. 7 shows the relationships between the weights and the standard uncertainties of the observations under the three regimes.
We can see that for the strong reflections (to the left of the plots) the SHELXtype weighting scheme downweights the observations in much the same way as a simple scale factor, but that the downweighting becomes progressively less for the weak data.
Similar results are seen for most of the materials reported in Table S1 of the supporting information. The weighting scheme for the main refinement usually must be more complex than simple statistical weighting in order to achieve a flat distribution of residuals. The effect of these weights is to increase the s.u. of the directly refined Flack (x) (Bernardinelli & Flack, 1987). The same effect is seen if the augmented weights are used in the postrefinement determination of the absolute structure. The n.p.p. computed for Friedel pairs using intensity-statistic weights tends to have a unit gradient, suggesting that the error estimates for the differences are valid. The n.p.p. computed with weights based on the main LSQ refinement sometimes has a distinctly non-unit gradient, with pronounced curved tails. This seems to suggest that the modifiers added to σ(I) in the weighting scheme to achieve a constant unit χ^{2} may be reflecting deficiencies in the model as much as in the data. Note that the hole-in-one method usually gives similar results to other postrefinement methods when simple statistical weights are used.
5. Results and examples
5.1. Overview
The above computations were performed on a selection of structures from data collected locally or taken from the literature. The examples were chosen to cover a range of values for Friedif, or because the structure, its refinement or its absolute structure had attracted comments in the body of the paper. When the deposited data included the SHELX-format .res and .hklf data, these were used in preference to the CIF and .fcf format data. This was especially useful when I or σ(I) for weak data in the .fcf file had only one significant figure. Each structure was re-refined in CRYSTALS and the parameters for a SHELX-type weighting scheme optimized. The atomic parameters were first refined in a single matrix together with the overall scale and Flack (x) parameters. Additional refinements were then performed from this atomic model on just the overall scale and the Flack (x) parameter, first using the optimized weights, and then with weights derived directly from the counting statistics.
Table S1 in the supporting information contains the results of the analysis of 28 data sets. In every case the results from the full matrix (rows A & B) were almost identical to those from the small-block refinement (rows C & D), indicating that for a fully refined structure there is little correlation between the structural parameters and the absolute structure parameters (Fig. 8).
Rows E & F give the results of refining Flack (x) and scale using simple statistical weights. Refining the whole structure with weights derived from unmodified intensity variances would have led to shifts in the atomic parameters.
Table 5 contains sample data for two materials from Table S1. In each case rows E are almost identical to rows F, showing that direct refinement of the Flack (x) parameter using simple intensity statistical weights gives the same results as postrefinement analysis.

The most significant differences are between rows C and D – the SHELX-weighted main refinement and the postrefinement analyses with either counting-statistic or refinement-based weights. They show that the direct refinement of the Flack (x) is more or less unchanged when using either simple statistical or modified (SHELX-type) weights, provided the atomic model is not allowed to adjust. However, the postrefinement determination of absolute structure is sensitive to the weights used. Postrefinement analysis using weights derived from those used in the main least squares yields results very similar to those found by direct refinement of Flack (x). However, using simple statistical weights almost always leads to significantly lower standard uncertainties (Fig. 9). The influence on the parameter itself is more variable (Fig. 10). We find that the discrepancy often seen between direct and postrefinement values of the Flack parameter is linked to the weights used in the refinement.
In Table S1 we see that the slope of the n.p.p. for the statistically weighted Friedel differences is generally close to unity, but the slope with weights from the main refinement is almost always less than unity (Fig. 11).
Because both the Hooft (y) and scaled Histogram (h) methods take into account the slope of the n.p.p., they give very similar values for both the parameter and its s.u. independently of the weighting scheme used. It would seem, for general work at least, that the Hooft (y) parameter as implemented in PLATON (Hooft et al., 2008) is a widely available suitably robust estimator of absolute structure.
Fig. 12 shows standard uncertainties computed from the main least squares, and by the hole-in-one, Hooft and Bijvoet difference methods, versus the histogram method. The main refinement was done with SHELX-type weights, the postrefinement analyses with simple statistical weights.
5.2. Examples
The various estimators of absolute structure are summarized in Table S2. The refinements for the structures which gave an s.u. for the Flack (x) substantially larger than the s.u. determined by other methods (the clear outliers in Fig. 12) were examined in detail to try to understand the source of the discrepancies.
5.2.1. Motherwell (Watkin, unpublished)
The data for 2-methyl-4-nitroaniline (previously published by Howard et al., 1992; Ferguson et al., 2001), Friedif = 5.94, in Cc, were remeasured without the intention of determining the absolute structure, using Mo radiation from a conventional source. The data collection strategy yielded data containing little or no resonant signal. From equation (4) one would expect the Flack parameter to be 0.5, with an s.u. simply reflecting the noise in the data. This result is more or less achieved by all except the Hooft (y) postrefinement methods. For other materials, with a larger value for Friedif, one would expect larger values for D_{s} and thus a smaller s.u. on the parameter, enabling the absolute structure to be determined.
5.2.2. HDFtp3005W (Zhang et al., 2012)
The absolute configuration of this material was known from the starting materials. Both the refined Flack (x) parameter and its standard uncertainty are larger than the values obtained by postrefinement analyses. The n.p.p. (Fig. 13a) for the residuals from the postrefinement determination is acceptably linear, but the plot for the residuals in the main refinement shows serious deviations from linearity (Fig. 13b). The weights for the plot illustrated were computed from a SHELXL-type scheme. Weights derived from three-, four- or five-parameter Chebychev polynomials (Carruthers & Watkin, 1979) fared no better. The relatively large value for the second parameter in the SHELX-type weighting scheme is often taken as a sign of twinning, but none could be identified using ROTAX (Cooper et al., 2002). The original authors reported positional disorder in one of the residues, but this was well modelled. They also reported that the crystals were very small and the data collection was difficult, requiring the use of synchrotron radiation. Since the refinement of a conventional model produces unweighted residuals whose distribution cannot be matched by conventional weighting schemes, it seems likely that the model is deficient or the error distribution in this experiment is unusual.
5.2.3. HDFsf3166 (Seela et al., 2012)
This material, of known absolute configuration, with a Friedif of 5.4 and with two molecules in the asymmetric unit, gave a refined Flack (x) parameter of −0.24 (49) and a Histogram (h) parameter of 0.00 (14). The n.p.p.s for the residuals from both the postrefinement determination and the main refinement were good straight lines with a unit gradient. However, of the 2630 Friedel pairs having D_{s} > 0.01σ(D_{o}), only 7.6% give a Flack parameter in the range −0.5 to 1.5 during the histogram postrefinement analysis (Fig. 14). There are no Friedel differences having a theoretical magnitude of more than 0.5σ(D_{o}).
It is not uncommon to find pseudosymmetry between the independent molecules in structures with Z′ > 1. The CRYSTALS MATCH procedure identified a pseudo-glide plane parallel to c, Fig. 15. If the terminal 2-(hydroxymethyl)tetrahydrofuran-3-ol is excluded from the matching procedure, the remaining 45 atoms conform to the pseudo-glide (x, 0.97 − y, z − 0.52) with an r.m.s. deviation in equivalent torsion angles of 16°.
5.2.4. PFW cholestane (Parsons et al., 2013)
Cholestane contains only carbon and hydrogen, and has two molecules in the asymmetric unit. Friedif is 9.0. The effect of including Friedel pairs with progressively smaller resonant differences is shown in Table 6.

Filtering out those reflections with D_{s} < 0.1σ(D_{o}) gives Bijvoet (d), Hooft (y) and Histogram (h) parameters close to the ideal value of zero for this material. Reducing the threshold to include reflections with D_{s} < 0.01σ(D_{o}) increases the number of reflections used from 737 to 3274, but the Bijvoet (d) and Hooft (y) parameters go slightly negative. When the weaker resonant differences are included, the histogram filtering reduces the percentage of reflections used from 94 to 73%. 24% (163) of these reflections have an individual Flack parameter in the range −0.5 to 1.5 when D_{s} < 0.1σ(D_{o}), falling to only 10% (236) when the threshold is reduced to 0.01. The n.p.p. for the residuals from the main structural refinement lies on a good straight line with a unit gradient but, unlike the case of tp3005 (and most well determined structures), the n.p.p. for the resonant differences has a gradient of 1.30 and a distinct downwards tail (Fig. 16).
The deviations could be due to errors in D_{o}, D_{s} or the weights. As demonstrated earlier, D_{s} is not strongly influenced by fine details of the structure, so one is left suspecting that the problem is with the intensities or their standard uncertainties. Since the structure refined to a conventional R of 0.029, it seems that the s.u.s of the observations may have been underestimated. The weights used in the main refinement are based on the reported intensity standard uncertainties modified to ensure a uniform analysis of variance. Fig. 17 is a plot of SQRT(weight) versus 1/σ(F^{2}).
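The threshold filtering behind Table 6 can be sketched as follows (a hypothetical helper, assuming the relation D_{o} = (1 − 2x)D_{s} of equation (4); names are our own):

```python
def per_pair_flack(d_obs, d_model, sig_obs, threshold=0.1):
    """Keep Friedel pairs whose model difference D_s exceeds
    threshold*sigma(D_o), then form the per-pair Flack estimate
    x_i = (1 - D_o/D_s)/2 implied by D_o = (1 - 2x)*D_s."""
    kept = [(do, ds) for do, ds, so in zip(d_obs, d_model, sig_obs)
            if abs(ds) > threshold * so]
    return [(1.0 - do / ds) / 2.0 for do, ds in kept]
```

Pairs whose individual estimate falls far outside the interval −0.5 to 1.5 carry essentially no configurational signal and are the ones the histogram method discards.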
5.2.5. EBBthreonine (EscuderoAdán et al., 2014)
The paper EBB 2014 is a rich mine of useful data sets collected under a variety of conditions with Mo Kα radiation. Five of the D-threonine data sets were re-refined in CRYSTALS, yielding essentially the same results as obtained by the original authors. Those authors drew attention to data set EBB5206, which had an anomalously large value for the directly refined Flack (x) parameter (EBB Fig. 3). The Flack parameters determined by postrefinement methods were also anomalously high, yet all methods gave standard uncertainties not unlike those from the other threonine data sets. EBB attribute these anomalous results to the reduced number of reflections (6401, redundancy 3.2) compared with other analyses (e.g. 8324 for EBB5204, which had a redundancy of 11.6). We were not convinced by this argument because EBB5205 had a similar number of reflections and redundancy (7710, 3.7), but yielded a quite normal refined Flack parameter. Fortunately these authors had deposited complete reflection data sets (.hklf files), so we were able to examine them in detail.
Data completeness: Data collection EBB5206 was terminated prematurely to try to reduce the redundancy. As Fig. 18(a) shows, this strategy also had the unfortunate effect of reducing the completeness of the data in the region between the Bragg angles of 40 and 45°, even when Friedel pairs were merged. Most serious is the systematic pattern to some of the missing reflections, including, for example, the row lines (h00) where h is even, (h10) where h is odd and some patches of (hk0) where h is 9–12 etc. There was a small dip in completeness for data set 5204 at about 45° (Fig. 13b).
Signal to noise: Fig. 19 shows some measures of the quality of the data as a function of resolution. This suggests that for EBB5204 the data collection strategy was not homogeneous, and that the frame exposure time was increased for the highangle data. There is a hint of a further increase in exposure time at about 45°, a feature more clearly seen in data sets EBB5213 and EBB5215. The number of reflections with I > 10σ(I) remains high right across the data set.
Analysis of refinement residuals: Both data sets seem to refine well, with SHELX-type weighting schemes achieving a goodness-of-fit sufficiently close to unity, Fig. 20.
Some insight into the deviations comes from examination of the weighted and unweighted residuals (F_{o}^{2} − F_{c}^{2})^{2} as a function of intensity and of resolution (Fig. 21). The very large number of medium-intensity reflections (blue curve) dominates the determination of the parameters for the weighting scheme, which leads to overweighting of the strong reflections (green bars in the top illustrations). The distribution of residuals as a function of resolution is not good for either data set, with the low-angle (strong) data being overweighted and the high-angle data underweighted. The role of the weighting scheme is to make the binned average value of the weighted residual approximately unity. For conventional data sets it is usually assumed that the principal contributors to (F_{o}^{2} − F_{c}^{2})^{2} are errors in F_{o}, but for these extended data sets it is possible that the usual independent spherical atom model introduces errors into F_{c}. A further complication may be that a single weighting scheme may not be appropriate when the data collections are not made under constant conditions.
Analysis of Friedel residuals: In spite of the unusual distribution of the residuals, the n.p.p.s for the Friedel residuals were very linear with gradients close to unity (Fig. 22). Based on these, one would expect to obtain similar outcomes from the postrefinement determination of the absolute structure of both EBB5206 and EBB5204.
In Table 7 we can see that for data set EBB5204 the value for the Flack parameter determined directly or by postrefinement is not strongly affected by the weighting scheme. Refinement of EBB5206 with simple statistical weights has a goodness-of-fit of 4.3, but gives a directly refined Flack (x) of 0.02 (34). Except for the hole-in-one method, the other postrefinement procedures lead to larger values of the parameter but with smaller standard uncertainties. Refinement of EBB5206 with SHELX-type weights gives a Flack (x) of 0.29 (24), in agreement with the original authors. Postrefinement determinations, also using the SHELX-type weights, give much the same value for the parameter, but with the Hooft and scaled histogram methods (which involve the gradient of the n.p.p.) giving reduced standard uncertainties. Except for hole-in-one, postrefinement methods using statistical weights yield slightly smaller Flack parameters and much reduced standard uncertainties. These results suggest that the anomalous reported value for the directly refined Flack (x) parameter is a consequence of the weights used for the main refinement. The algorithm used to determine the coefficients in the weighting expression was unchanged for all the data sets we examined. We are led to suspect that the failure of this algorithm for data set EBB5206 is due to the large number of missing reflections in the narrow band between 40 and 45°.

Leverage analysis: EBB tried to show that the reliability of an analysis increases as the resolution of the data included in the analysis increases. We used their data in a slightly different way, with different conclusions. The leverage of an individual reflection in a postrefinement determination is proportional to the square of its signal-to-noise ratio. A histogram of the mean signal-to-noise as a function of resolution should indicate where in the data set the most influential information lies. Fig. 23 is such a plot for EBB5215 (redundancy = 8.2) and EBB5204 (redundancy = 11.6).
The atomic and Flack (x) parameters of EBB5215 were refined using all the data and a SHELX-type weighting scheme [R = 0.023, wR_{2} = 0.066, Flack (x) = 0.01 (17)]. The postrefinement absolute structure was then determined using firstly all reflections, and then only the reflections in three non-overlapping resolution ranges, chosen to contain approximately the same number of points. Table 8 shows that for EBB5215 all methods produce a steady increase in the s.u. as the resolution of the band increases, even though the frame exposure times seem to have been increased.
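The binned signal-to-noise profile underlying Fig. 23 can be sketched as (a hypothetical helper under the leverage argument above; the bin key is simply the rounded resolution value):

```python
from collections import defaultdict

def leverage_profile(intensities, sigmas, resolutions, bin_width=0.1):
    """Mean squared signal-to-noise, (I/sigma)^2, per resolution bin -
    a proxy for where the leverage on a postrefinement Flack
    determination is concentrated."""
    bins = defaultdict(list)
    for i, s, d in zip(intensities, sigmas, resolutions):
        bins[round(d / bin_width) * bin_width].append((i / s) ** 2)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}
```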

5.2.6. PFWfyo12e (PFW2013)
This material has previously been referred to in §4.3. Since the data were collected carefully, it was worth further exploring the cause of the difference between the standard uncertainties of the Flack parameter determined by direct and postrefinement methods. The steep gradient of the n.p.p. for the main refinement (4.64) with purely statistical weights suggests that the standard uncertainties of the observations are severely underestimated. A plot of the internal versus the external sample standard uncertainties for the merged data is a rather dispersed straight line (gradient 1.4), showing that the manufacturer's estimates of individual uncertainties reflect reasonably well the dispersion between equivalent measurements (Fig. 24).
It was expected that a plot of the σ(I) of the mean against I [equation (14)] for the merged data would approximate to a square-root curve (Evans, 2006). Instead, it was found to be a rather good straight line (Fig. 25) with a gradient of 0.045.
For Poisson statistics the signal:noise [I/σ(I)] can be increased by accumulating more photons. Diederichs (2010) had observed that I/σ(I) tended to a limiting value for synchrotron data. Plots of I/σ_{mean}(I) for PFWfyo12e show a similar tendency, except that there appear to be two (or possibly three) limiting values (Fig. 26).
This raises the possibility that the data are not homogeneous, in the sense that they are derived from more than one experimental regime. A histogram of the frequency distribution of redundancy is at least bimodal (Fig. 27), with a long low-order tail.
The intensities used during the least squares are usually the (weighted) means of a set of equivalent reflections. The variance of this mean is related to the sample variance [equation (11) or (12)] by a factor 1/(redundancy). Reflections measured 25 times therefore have a variance of the mean almost twice as large as reflections measured 45 times. It is possible that this variability of redundancy leads to the various asymptotic limits seen in Fig. 26. Whatever the individual variabilities of the standard uncertainties of the means, on average σ(F^{2}) ≃ 0.05F^{2} (from Fig. 25). The a and b terms in the SHELX-type weighting scheme are 0.044 and 0.291, so it is these which dominate the weighting during refinement. Weighting the co-refinement of data measured under different regimes (for example, widely differing redundancies) may warrant further investigation, a situation alluded to in Bernardinelli & Flack (1987).
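The merging step this paragraph relies on can be sketched as follows (our own illustration of the standard weighted mean, not code from CRYSTALS):

```python
def merge_equivalents(intensities, sigmas):
    """Weighted mean of equivalent reflections (weights 1/sigma^2) and
    the s.u. of that mean.  Doubling the redundancy halves the variance
    of the mean, which is why a reflection measured 25 times is noisier
    than one measured 45 times."""
    w = [1.0 / s ** 2 for s in sigmas]
    sw = sum(w)
    mean_i = sum(wi * i for wi, i in zip(w, intensities)) / sw
    return mean_i, (1.0 / sw) ** 0.5
```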
5.2.7. PFW Rmandelic acid (PFW2013)
PFW2013 report that this material (Friedif = 35) crystallized as plates which on cooling to 150 K showed evidence of strain broadening, so the actual data collection was performed at 220 K. 32 194 reflections were measured, yielding 2860 unique reflections, an average redundancy of 11.3. R_{int} was 0.04, indicating a fair level of self-consistency amongst the data. The final R factor, 0.0549, was higher than one would expect for this type of material, but might be explained by the strain broadening. There are two molecules in the asymmetric unit, differing by a small rotation about the single bond to the phenyl group, and no evidence for disorder. The σ(Flack (x)) = 0.37 greatly exceeds the σ(Flack (h)) = 0.05, in spite of the significant value of Friedif. The n.p.p. for the main refinement residuals (Fig. 28) is far from ideal, suggesting a problem with the data, the weights or the model itself.
Alternative weighting schemes to a SHELXtype formula did not significantly improve the n.p.p. Although the program DIFABS (Walker & Stuart, 1983), once used as a method for estimating empirical absorption corrections, has long been replaced by the use of multiscan methods, it still provides a useful diagnostic tool. The program fits a smoothly varying function of azimuth and declination of the incident and emergent beams to the residual between F_{o} and F_{c}, the socalled absorption surface. For merged area detector data there are no `incident' or `emergent' beams, but these can be replaced by the scattering vector to generate a visualization of the residual. If the multiscan procedure has adequately modelled absorption and illuminated volume effects, variations of this surface from unity will indicate that there are problems with the model, or undetected errors in the data. Fig. 29(a) is the plot for Rmandelic acid. It shows variations between 0.9 and 1.1 with some very sharp gradients, indicating that there is a problem with the analysis.
The F_{o} versus F_{c} plot (Fig. 30) is a fair straight line with unit gradient, and without any very outstanding outliers. However, although the distribution is bounded by a reasonably well defined lower edge, the upper edge is distinctly ragged. This condition, together with the DIFABS plot, is often symptomatic of twinning.
ROTAX analysis (Cooper et al., 2002) indicated the possibility of twinning by the twin law [1,0,0; 0,−1,0; −0.8,0,−1]. Refinement including this twin law reduced the R-factor to 0.0496 and greatly improved the F_{o} versus F_{c} plot and the DIFABS surface (Fig. 29b), but did little to improve the n.p.p. If the components of the non-inversion twin are labelled A and B, and the corresponding inverted components a and b, then refinement was continued with the constraint that A + B + a + b = 1.0 and the restraint 0.000 (1) = b − (aB/A), on the assumption that the inversion ratios (a/A and b/B) are the same. The refined scale factors are A = 0.8 (2), B = 0.13 (4), a = 0.0 (2), b = 0.01 (4).^{3} Inclusion of inversion twinning had no effect on the R-factor. For the moment the postrefinement analysis in CRYSTALS will not handle non-inversion twinning.
5.2.8. FSTWYIFZAP (Gowda et al., 2007)
This material, falling at the bottom of Table 1 in FSTW (Flack et al., 2011), caught the interest of those authors because of the small variation of R_{D} (= Σ|D_{obs} − D_{model}|/Σ|D_{obs}|, summed over the Friedel pairs) as a function of imposed values for the Flack (x) parameter. They came to the conclusion that the reported uncertainty in the Flack parameter was very grossly underestimated. Using the CIF and .fcf files recovered from the IUCr archive, we were unable to reproduce with CRYSTALS some of the results recorded elsewhere in the CIF. PLATON was used to convert the CIF to a SHELXL .ins (data) file, and the refinements repeated with SHELXL2013/2, using the TWIN/BASF commands or using the hole-in-one and Quotient methods for estimating the Flack (x) parameter (Table 9).

The SHELX and CRYSTALS analyses are reasonably compatible, but in poor agreement with the published values. In the absence of evidence to the contrary, we attribute this conflict to the fact that the original authors were able to use the full precision of the reflection data stored in an .hklf file, but for the recalculations we had to use the limited precision of the .fcf file. Whatever the source of the discrepancy, it remains clear that while direct refinement of Flack (x) in CRYSTALS and the refinement in SHELXL2013/2 lead to very similar results, these are quite different from the values from the postrefinement methods. The usual diagnostic tools were used to try to locate the source of the discrepancy. The gradient of the n.p.p. (Fig. 31a) was 0.91, but with substantial displacement from the origin of the graph, which usually implies a feature in the data which cannot be matched by the model. The n.p.p. for the postrefinement analysis of the Friedel differences (Fig. 31b) had a least-squares gradient of 4.7. Examination of the plot showed that very many of the reflections in the central region lay on a line of unit gradient, but there were substantial numbers of outliers at the extremes of the plot. From other analyses, we have seen that the calculated Friedel differences are only weakly correlated with the atomic parameters, so we must assume that the non-linearity of the n.p.p. is due either to errors in the observed Friedel differences or in their standard uncertainties.
Examination of the DIFABS map, Fig. 32, showed deep hollows and high peaks with a maximum ratio of 1:1.77. This could be indicative of uncorrected absorption. The authors give the crystal size as 0.52 × 0.46 × 0.09 mm – a thin plate – and used an analytical correction by the method of Clark & Reid (1995) giving minimum and maximum corrections of 0.86 and 1.16, a ratio of 1:1.35.
The F_{o} versus F_{c} plot was only weakly indicative of twinning, and ROTAX suggested an unconvincing twin law [1,0,0.734; 0,−1,0; 0,0,1]. Refinement with this twin law gave a major component of 0.88 (3). The text of the article made no mention of twinning, but the CIF contained an entry for the Flack parameter and its standard uncertainty. Because of this, PLATON had added the necessary TWIN/BASF instructions to the SHELXL instruction file. Attempts to refine the non-merohedral and inversion twinning in CRYSTALS failed, the normal matrix becoming singular in spite of the application of appropriate restraints and constraints.
At this point we retrieved the supporting information. From this it was clear that the original authors had detected the same twin law as ROTAX, and had refined this model to a minor twin element of 0.15 using an HKLF5 reflection file. Strangely, in spite of the Flack entry [−0.1 (3)] in the deposited CIF, the supporting information states `Owing to the poor quality of the data, the absolute structure couldn't be reliably defined and any references to the Flack parameter have been omitted'.
Our analysis of the data was repeated using the twinned model, but showed no great improvement in the n.p.p. nor the DIFABS surface. The data had been collected with an area detector, standard source and graphite monochromator, so that unless the authors had used a very fine collimator, one might expect the crystal to have been more or less fully bathed in the direct beam. 2613 reflections were measured, merging down to 1003 independent observations (R_{int} = 0.086), of which 621 had I > 2σ(I). Seeing that over 30% of the data could be classed as very weak, the observed and calculated Wilson plots were examined, Fig. 33. The upturn in the plot of the observed data at about ρ = 0.3 is often characteristic of data being measured to a resolution at which there is little or no signal amongst the noise.
6. Conclusions
X-ray crystallography is unique in that it provides both an estimate of the enantiopurity of a sample, and a standard uncertainty for that estimate, without special user action. Chiroptical spectroscopies look at a total signed signal and thus require a reference spectrum to compare against in order to judge the proportions of each enantiomer. When this is available then typically the resolution is ca 1%. NMR with shift reagents can give separate signals for each enantiomer, but there are substantial complications about the binding of the shift reagent, equilibria etc. Chiral HPLC has the advantage of actually separating the enantiomers as individual signals that can be directly ratioed, and so can be very deterministic. In many cases one should be able to detect a 1 ma.u. (a.u. = atomic unit) signal from an enantiomeric impurity alongside a signal of 1 a.u. of the main peak, i.e. 0.1%. These techniques are degraded in the presence of impurities. Except for the case of twinning, crystallography largely avoids the impurity problem, but suffers in that one crystal is taken as representative of the bulk sample. However, for materials known to be enantiopure or to have a large enantiomeric excess, it can be a robust way for assigning the absolute configuration of the major (or only) component.
The results of Thompson & Watkin (2011) showed that even in apparently unsuitable cases, there was usually some resonant signal amongst the random noise and systematic errors. Flack used the 2A/D plots to try to visualize the signal. The plots in this paper of D_{o} and D_{s} versus σ(D_{o}) provide a clear indication of the best possible signal in the data, and the actual signal in the observed data. We know that the value of the Flack parameter must lie in the interval 0–1 and in favourable cases histograms of the Flack x peak in this interval. The broader the spread about this interval, the less reliable the estimate of σ(x). The ratio D_{s}/σ(D_{o}) is a measure of the information content of a reflection. Measuring data to high resolution increases D_{s}/I_{s}, but only improves the leverage if care is taken to minimize σ(D_{o}).
Direct refinement of the Flack (x) parameter usually results in a value with a larger standard uncertainty than that obtained by postrefinement methods using weights derived from the observed variances, making these latter methods more attractive for publication. However, the value of the Flack (x) parameter and its standard uncertainty obtained by free refinement in the main least squares should be compared with the values obtained by a postrefinement method. Substantial differences indicate that there may be a problem with the data or with the proposed model, although other techniques will have to be used to identify the problem.

In the absence of widespread availability of software able to refine structures using both averages and differences of structure amplitudes as observations, the low correlation between the structural parameter values and the Flack x suggests that a postrefinement estimate of the absolute structure, made once the model is fully parameterized, can be used to guide the final refinement. The Bijvoet difference method is a good diagnostic for problems with the data or model since it contains the minimum number of assumptions; the Hooft and Parsons methods both allow for some problems with the data or model and so may be most suitable for routine work. If the Parsons quotient and the Bijvoet difference methods give substantially different results, this may be indicative of absorption or other problems with the main refinement. If there is doubt about the enantiopurity of the material, the Flack parameter must be included as part of the model. It can either be refined freely, treated as a constant (constraint) with the value taken from the postrefinement analysis, or a single equation of restraint on the Flack (x) parameter can be introduced using the postrefinement estimates of its value and standard uncertainty as target values.
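The restraint option mentioned above amounts to appending one extra observational equation to the minimization. As a sketch, writing x_post and σ_post for the postrefinement estimate and its standard uncertainty, and assuming a residual on the squared amplitudes (a refinement against F would be analogous):

```latex
\min_{p,\,x}\;\sum_{\mathbf h} w_{\mathbf h}
\Bigl(|F_o(\mathbf h)|^2 - |F_c(\mathbf h; p, x)|^2\Bigr)^2
\;+\;\left(\frac{x - x_{\mathrm{post}}}{\sigma_{\mathrm{post}}}\right)^2
```

The second term pulls x towards the postrefinement value with a strength set by σ_post, while still allowing the main data to move it if they contain contrary evidence.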
APPENDIX A
Ratios of averages and averages of ratios
For an individual Friedel pair we can write equation (4) as
Defining
we obtain
A1. Ratio of averages
Where the terms in square brackets are column vectors of the model and observed Friedel differences, the least-squares estimate of c from a set of Friedel pairs is
from which a weighted value for 〈c〉 can be obtained as
with
Equation (24) can be rewritten as a ratio of averages
Letting
gives
from which
and
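As a sketch of the estimator this subsection describes, with [D_{s}] and [D_{o}] the column vectors of model and observed Friedel differences, W the diagonal weight matrix, and assuming the conventional Friedel-difference relation D_{o} = (1 − 2x)D_{s} so that c = 1 − 2x:

```latex
\hat{c} \;=\; \frac{[D_s]^{\mathsf T} W \,[D_o]}{[D_s]^{\mathsf T} W \,[D_s]}
\;=\; \frac{\sum_i w_i D_{s,i} D_{o,i}}{\sum_i w_i D_{s,i}^2},
\qquad
\hat{x} \;=\; \frac{1-\hat{c}}{2},
\qquad
\sigma^2(\hat{c}) \;=\; \frac{1}{\sum_i w_i D_{s,i}^2}.
```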
A2. Average of ratios
Alternatively, we can evaluate individual c_{i} from equation (20) and x_{i} from (21), and form the (weighted) average of these ratios
Following Blessing & Langs (1987) we can form the internal and external estimates of the variance of the sample, and hence the variance of the average
For a list of paired observations, the ratio of averages and the average of ratios will be the same if there is a linear relationship between the observations and the error distributions are similar. A difference between these two statistics indicates a problem that should be investigated.
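The two statistics, together with the internal and external variance estimates of Blessing & Langs (1987), can be sketched in a few lines of numpy. Taking w_i = 1/σ²(D_{o,i}) and treating σ(D_{o}) as the only error source are assumptions of this sketch:

```python
import numpy as np

def ratio_of_averages(d_obs, d_model, w):
    """Least-squares <c> from all Friedel pairs at once (ratio of averages)."""
    c = np.sum(w * d_obs * d_model) / np.sum(w * d_model ** 2)
    sigma_c = np.sqrt(1.0 / np.sum(w * d_model ** 2))  # internal s.u. of <c>
    return c, sigma_c

def average_of_ratios(d_obs, d_model, w):
    """Weighted mean of the individual ratios c_i = D_o,i / D_s,i, with the
    internal and external variance estimates of Blessing & Langs (1987)."""
    c_i = d_obs / d_model
    w_i = w * d_model ** 2            # weight of each ratio, propagated from D_o
    n = len(c_i)
    mean_c = np.sum(w_i * c_i) / np.sum(w_i)
    var_int = 1.0 / np.sum(w_i)
    var_ext = np.sum(w_i * (c_i - mean_c) ** 2) / ((n - 1) * np.sum(w_i))
    return mean_c, np.sqrt(var_int), np.sqrt(var_ext)

# For perfectly linear data the two statistics agree, as noted above.
d_model = np.array([1.0, -2.0, 3.0, -0.5])
d_obs = 0.8 * d_model                 # i.e. Flack x = 0.1 if c = 1 - 2x
w = np.ones_like(d_obs)               # assumption: unit sigma(D_o) for all pairs
print(ratio_of_averages(d_obs, d_model, w)[0])   # prints a value very close to 0.8
print(average_of_ratios(d_obs, d_model, w)[0])   # likewise
```

In practice a large gap between the internal and external standard uncertainties, or between the two c estimates, is the numerical signature of the problem flagged in the paragraph above.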
Supporting information
Excel spreadsheet for 28 structure determinations. DOI: https://doi.org//10.1107/S2052520616012890/ps5053sup1.xlsx
Reduced Excel spreadsheet abstracted from S1. DOI: https://doi.org//10.1107/S2052520616012890/ps5053sup2.xlsx
Details on supporting information. DOI: https://doi.org//10.1107/S2052520616012890/ps5053sup3.pdf
Footnotes
^{1}In SHELXL 2014/7 the `hole-in-one' fit has been renamed `classical fit'. This should not be confused with the much older direct refinement as found, for example, in XRAY76 (Flack, 1983) or CRYLSQ (Olthof-Hazekamp, 1990).
^{2}This equation first appears in this form in Thompson & Watkin (2011).
^{3}The twin scale factors sum to unity if quoted to three decimal places [0.811 (218), 0.133 (036), 0.048 (218), 0.008 (035)].
Acknowledgements
The authors wish to thank Howard Flack for critical advice, Simon Parsons for software which enabled us to verify some calculations, Ton Spek for code from PLATON, George Tranter (Chiralabs Ltd) for advice on non-X-ray techniques, and many colleagues for suggesting additions to the manuscript. Figs. 8–12 and 24–27 were created using Microsoft Excel 2010. All others are lightly retouched screen dumps from CRYSTALS. SHELXL calculations were made with SHELXL2013/2, and CRYSTALS calculations with the executable dated 17/12/2015 08:36.
References
Abrahams, S. C. & Keve, E. T. (1971). Acta Cryst. A27, 157–165.
Abud, J. E., Sartoris, R. P., Calvo, R. & Baggio, R. (2011). Acta Cryst. C67, m130–m133.
Bernardinelli, G. & Flack, H. D. (1987). Acta Cryst. A43, 75–78.
Blessing, R. H. & Langs, D. A. (1987). J. Appl. Cryst. 20, 427–428.
Carruthers, J. R. & Watkin, D. J. (1979). Acta Cryst. A35, 698–699.
Clark, R. C. & Reid, J. S. (1995). Acta Cryst. A51, 887–897.
Cooper, R. I., Gould, R. O., Parsons, S. & Watkin, D. J. (2002). J. Appl. Cryst. 35, 168–174.
Cooper, R. I., Watkin, D. J. & Flack, H. D. (2016). Acta Cryst. C72, 261–267.
Cruickshank, D. W. J. (1961). Computing Methods and the Phase Problem, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, paper 6. Oxford: Pergamon Press.
Cruickshank, D. W. J. & McDonald, W. S. (1967). Acta Cryst. 23, 9–11.
Cruickshank, D. W. J. & Robertson, A. P. (1953). Acta Cryst. 6, 698–705.
Diederichs, K. (2010). Acta Cryst. D66, 733–740.
Dyadkin, V., Wright, J., Pattison, P. & Chernyshov, D. (2016). J. Appl. Cryst. 49, 918–922.
Ealick, S. E., Van der Helm, D. & Weinheimer, A. J. (1975). Acta Cryst. B31, 1618–1626.
Engel, D. W. (1972). Acta Cryst. B28, 1496–1509.
Escudero-Adán, E. C., Benet-Buchholz, J. & Ballester, P. (2014). Acta Cryst. B70, 660–668.
Evans, P. (2006). Acta Cryst. D62, 72–82.
Fábry, J., Fridrichová, M., Dušek, M., Fejfarová, K. & Krupková, R. (2012). Acta Cryst. C68, o76–o83.
Ferguson, G., Glidewell, C., Low, J. N., Skakle, J. M. S. & Wardell, J. L. (2001). Acta Cryst. C57, 315–316.
Flack, H. D. (1983). Acta Cryst. A39, 876–881.
Flack, H. D. (2013). Acta Cryst. C69, 803–807.
Flack, H. D. & Bernardinelli, G. (2008). Acta Cryst. A64, 484–493.
Flack, H. D., Bernardinelli, G., Clemente, D. A., Linden, A. & Spek, A. L. (2006). Acta Cryst. B62, 695–701.
Flack, H. D., Sadki, M., Thompson, A. L. & Watkin, D. J. (2011). Acta Cryst. A67, 21–34.
Gowda, B. T., Nayak, R., Kožíšek, J., Tokarčík, M. & Fuess, H. (2007). Acta Cryst. E63, o2967.
Hamilton, W. C. (1965). Acta Cryst. 18, 502–510.
Hooft, R. W. W., Straver, L. H. & Spek, A. L. (2008). J. Appl. Cryst. 41, 96–103.
Hooft, R. W. W., Straver, L. H. & Spek, A. L. (2010). J. Appl. Cryst. 43, 665–668.
Howard, S. T., Hursthouse, M. B., Lehmann, C. W., Mallinson, P. R. & Frampton, C. S. (1992). J. Chem. Phys. 97, 5616–5630.
Le Page, Y., Gabe, E. J. & Gainsford, G. J. (1990). J. Appl. Cryst. 23, 406–411.
Merli, M. & Sciascia, L. (2011). Acta Cryst. A67, 456–468.
Müller, G. (1988). Acta Cryst. B44, 315–318.
Olthof-Hazekamp, R. (1990). Xtal 3.0 Reference Manual, edited by S. R. Hall & J. M. Stewart. University of Western Australia, Perth.
Parrish, W. (1960). Acta Cryst. 13, 838–850.
Parsons, S., Flack, H. D. & Wagner, T. (2013). Acta Cryst. B69, 249–259.
Parsons, S., Pattison, P. & Flack, H. D. (2012). Acta Cryst. A68, 736–749.
Parsons, S., Wagner, T., Presly, O., Wood, P. A. & Cooper, R. I. (2012). J. Appl. Cryst. 45, 417–429.
Prince, E. (1994). Mathematical Techniques in Crystallography and Materials Science, pp. 80–82. Berlin: Springer-Verlag.
Prince, E. (2004). Mathematical Techniques in Crystallography and Materials Science, 3rd ed., p. 121. Berlin: Springer-Verlag.
Rabinovich, D. & Hope, H. (1980). Acta Cryst. A36, 670–678.
Rogers, D. (1981). Acta Cryst. A37, 734–741.
Seela, F., Xiong, H., Budow, S., Eickmeier, H. & Reuter, H. (2012). Acta Cryst. C68, o174–o178.
Sheldrick, G. M. (2014). Personal communication.
Sheldrick, G. M. (2015). Acta Cryst. A71, 3–8.
Smith, M. & Lamb, A. (2012). Personal communication. Oxford Archive No. 6418, C21 H23 Br N2 O4.
Thompson, A. L. & Watkin, D. J. (2011). J. Appl. Cryst. 44, 1017–1022.
Tukey, P. J. W. (1976). Proceedings of the First ERDA Statistical Symposium, edited by W. L. Nicholson & J. L. Harris. Ohio: Battelle, Pacific Northwest Laboratories.
Walker, N. & Stuart, D. (1983). Acta Cryst. A39, 158–166.
Weiss, M. S. (2001). J. Appl. Cryst. 34, 130–135.
Zhang, W., Oliver, A. G. & Serianni, A. S. (2012). Acta Cryst. C68, o7–o11.
© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.