Evaluating crystallographic likelihood functions using numerical quadratures
(a) Center for Advanced Mathematics in Energy Research Applications, Computational Research Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA, (b) Molecular Biophysics and Integrative Bioimaging Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA, and (c) The University of Tennessee at Knoxville, Knoxville, TN 37916, USA
*Correspondence e-mail: phzwart@lbl.gov
Intensity-based likelihood functions in crystallographic applications have the potential to enhance the quality of structures derived from marginal diffraction data. Their usage, however, is complicated by the difficulty of computing these target functions efficiently. Here, a numerical quadrature is developed that allows the rapid evaluation of intensity-based likelihood functions in crystallographic applications. By using a sequence of change-of-variable transformations, including a nonlinear domain-compression operation, an accurate, robust and efficient quadrature is constructed. The approach is flexible and can incorporate different noise models with relative ease.
Keywords: maximum likelihood; refinement; numerical integration.
1. Introduction
The estimation of model parameters from experimental observations plays a central role in the natural sciences, and the use of likelihood-based methods has been shown to yield robust estimates of `best guess' values and their associated confidence intervals (Rossi, 2018). Maximum-likelihood estimation goes back to sporadic use in the 1800s by Gauss (Gauss, 1809, 1816, 1823) and Hagen (1867), and was further developed by Fisher (1915), Wilks (1938), and Neyman and Pearson (Neyman & Scott, 1948; Pearson, 1970). In the crystallographic community, Beu et al. (1962) were the first to explicitly use maximum-likelihood estimation, applying it to lattice-parameter refinement in powder diffraction. In a late reaction to this work, Wilson (1980) states that `the use of maximum likelihood is unnecessary, and open to some objection', and subsequently recasts the work of Beu et al. (1962) into a more familiar least-squares framework. It is important to note that least-squares estimation methods are equivalent to a likelihood formalism under the assumption of normality of the random variables. The use of maximum-likelihood-based methods using non-normal distributions in structural sciences took off after making significant impacts in the analysis of macromolecules. For these types of samples, structure solution and refinement were often problematic owing to very incomplete or low-quality starting models, making standard least-squares techniques underperform. In the 1980s and 1990s, likelihood-based methods became mainstream, culminating in the ability to routinely determine and refine structures that were previously thought to be problematic (Lunin & Urzhumtsev, 1984; Read, 1986; Bricogne & Gilmore, 1990; de La Fortelle & Bricogne, 1997; Pannu & Read, 1996; Murshudov et al., 1997). A key ingredient of this success was the development of cross-validation techniques to reduce bias in the estimation of hyper-parameters that govern the behavior of the likelihood functions (Lunin & Skovoroda, 1995; Pannu & Read, 1996). At the beginning of the 21st century, Read and coworkers extended the likelihood formalism to molecular-replacement settings as well, resulting in a significant improvement in the ability to solve structures from marginal starting models (McCoy et al., 2005; Storoni et al., 2004; Read, 2001). The first use of approximate likelihood methods for the detection of heavy atoms from anomalous or derivative data originates from Terwilliger & Eisenberg (1983), who used an origin-removed Patterson correlation function for substructure solution. This approach was shown by Bricogne (1997) to be equivalent to a second-order approximation of a Rice-based likelihood function. A more recent development is the use of a more elaborate likelihood formalism in the location of substructures (Bunkóczi et al., 2015), showing a dramatic improvement in the ability to locate heavy atoms. In density modification, the use of the likelihood formalism has significantly increased its radius of convergence (Terwilliger, 2000; Cowtan, 2000; Skubák et al., 2010).
As the above examples illustrate, impressive progress has been made by the application of likelihood-based methods to a wide variety of crystallographic problems. In all of the described scenarios, key advances were made by deriving problem-specific likelihood functions and applying them to challenging structure-determination problems. In the majority of these cases, a thorough treatment of experimental errors has only a secondary role, resulting in approximations that work well in medium- or low-noise settings. The principal challenge in the handling of random noise in crystallographic likelihood functions is how to efficiently convolve Rice-like distribution functions, which model the distribution of a structure-factor amplitude derived from an incomplete model with errors, with the appropriate distribution that models the experimental noise.
In this manuscript, we develop quadrature approaches to overcome these difficulties. We accomplish this by using a sequence of changes of variables that render the integrand amenable to straightforward numerical integration using standard methods. The approach derived has direct applications in model refinement, while the general methodology can also be extended to other crystallographic scenarios. In the remainder of this paper, we provide a general introduction to likelihood-based methods, give the relevant background on numerical integration techniques, develop an adaptive quadrature approach, apply it to Rice-type likelihood functions and validate its results.
1.1. The maximum-likelihood formalism
The estimation of model parameters θ given some data set X = {x1, …, xj, …, xN} via the likelihood formalism is performed in the following manner. Given the probability density function (PDF) f(xj|θ) of a single observation xj given a model parameter θ, the joint probability of the entire data set is, under the assumption of independence of the observations, equal to the product of the individual PDFs,
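In the notation just introduced this reads

  f(X | θ) = ∏_{j=1}^{N} f(xj | θ).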
The probability of the data given the model parameters θ, viewed as a function of θ, is known as the likelihood of the model parameters given the data, L(θ|X) = f(X|θ).
A natural choice for the best estimate of the model parameters is obtained by finding the θ that maximizes the likelihood function. This choice is called the maximum-likelihood estimate (MLE). The likelihood function itself can be seen as a probability distribution, allowing one to obtain confidence-limit estimates on the MLE (Rossi, 2018). The determination of the MLE is typically performed by optimizing the log-likelihood:
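In the same notation, the log-likelihood and the resulting estimator are

  ℓ(θ | X) = ln L(θ | X) = Σ_{j=1}^{N} ln f(xj | θ),   θ̂_ML = argmax_θ ℓ(θ | X).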
Often, the distribution needed for the likelihood function has to be obtained via a process known as marginalization. During this integration, a so-called nuisance parameter is integrated out,
where, under the assumption of conditional independence,
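Written out, with t denoting the nuisance parameter (a symbol introduced here only for convenience), these two steps read

  f(xj | θ) = ∫ f(xj, t | θ) dt = ∫ f(xj | t) f(t | θ) dt.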
Depending on the mathematical form of the distributions involved, this marginalization can range from a trivial analytical exercise to a numerically challenging problem. In likelihood functions in a crystallographic setting, this marginalization is required to take into account the effects of experimental noise, and its efficient calculation is the focus of this communication.
1.2. Motivation
The most common likelihood function used in crystallographic applications specifies the probability of the true structure-factor amplitude given the value of a calculated structure factor originating from a model with errors (Sim, 1959; Srinivasan & Parthasarathy, 1976; Luzzati, 1952; Woolfson, 1956; Lunin & Urzhumtsev, 1984):
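In one common parameterization (Srinivasan & Parthasarathy, 1976), and up to the exact conventions used for α and β in the original expressions, these distributions read

  fa(F | FC, α, β) = (2F/∊β) exp[−(F² + α²FC²)/(∊β)] I0(2αF FC/∊β),
  fc(F | FC, α, β) = [2/(π∊β)]^(1/2) exp[−(F² + α²FC²)/(2∊β)] cosh(αF FC/∊β),

where I0 is the modified Bessel function of the first kind of order zero.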
fa and fc are the distributions for acentric and centric reflections (the so-called Rice distribution), ∊ is a symmetry-enhancement factor, F is the true structure-factor amplitude and FC is the current model structure-factor amplitude, while α and β are likelihood distribution parameters (Lunin & Urzhumtsev, 1984). For the refinement of atomic models given experimental data, the likelihood of the model-based structure-factor amplitudes given the experimental data is needed and can be obtained from a marginalization over the unknown, error-free structure-factor amplitude. Following Pannu & Read (1996) and assuming conditional independence between the distributions of the experimental intensity Io and amplitude F, we obtain
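Consistent with the description that follows, this marginalization can be written as

  f(Io | FC, α, β, σI) = ∫0∞ f(Io | σI², F) f(F | FC, α, β) dF,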
where f(F|FC, α, β) is given by expressions (6) or (7) and f(Io|σI2, F) is equal to a normal distribution with mean F2 and variance σI2. This integral is equivalent to the MLI target function derived by Pannu & Read (1996). Because there is no fast-converging series approximation or simple closed-form analytical expression for this integral, various approaches have been developed, as excellently summarized by Read & McCoy (2016), including a method-of-moments-type approach to find reasonable analytical approximations to the intensity-based likelihood function.
In this work, we investigate the use of numerical integration methods to obtain high-quality approximations of integral (8) while also taking into account uncertainties in the estimated standard deviation. The approach outlined above, in which a Rice function is convoluted with a Gaussian distribution, essentially assumes that the standard deviation of the mean is known exactly. Given that both the standard deviation and the mean are derived from the same experimental data, this assumption is clearly suboptimal, especially when the redundancy of the data is low. In order to take into account possible errors in the observed standard deviation, we will use a t-distribution instead of a normal distribution, which arises as the distribution of choice when the true standard deviation is approximated by an estimate from experimental data (Student, 1908). The aim of this work is to derive an efficient means of obtaining target functions that can provide an additional performance boost when working with very marginal data, such as those obtained from time-resolved serial crystallography or free-electron laser data, in which the experimental errors are typically larger than those obtained using standard rotation-based methods or have nonstandard error models (Brewster et al., 2019). Furthermore, high-quality data sets are rarely resolution-limited by the diffraction geometry alone, indicating that many more marginal data are readily available that can potentially increase the quality of the final models if appropriate target functions are used. In the remainder of this manuscript, we develop and compare a number of numerical integration schemes aimed at rapidly evaluating intensity-based likelihood functions and their derivatives that take into account the presence of experimental errors, both in the mean intensity and in its associated standard deviation.
2. Methods
In order to evaluate a variety of numerical integration schemes and approximation methods, the equations are first recast in terms of normalized structure-factor amplitudes E and normalized intensities Zo, with the use of the σA formulation of the distributions involved, assuming a space group of P1 such that ∊ = 1 (Read, 1997). The joint probability distribution of the error-free structure-factor amplitude E and the experimental intensity Zo, given the calculated normalized amplitude EC, the model-quality parameter σA, the estimated standard deviation of the observation σZ and the effective number of degrees of freedom ν, is the product of the appropriate Rice distribution for E (acentric or centric) and the noise model for Zo given E. When the distribution of the observed mean intensity Zo is modeled by a t-distribution (Student, 1908) with a location parameter equal to E2, we have
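Consistent with these definitions, the joint distributions factorize into a Rice term for E and a noise term for Zo given E, with the t-based noise model written here in location-scale form (the normalization conventions may differ slightly from the original expressions):

  fa(E, Zo | EC, σA, σZ, ν) = fa(E | EC, σA) ft(Zo | E², σZ, ν),
  fc(E, Zo | EC, σA, σZ, ν) = fc(E | EC, σA) ft(Zo | E², σZ, ν),
  ft(Zo | E², σZ, ν) = Γ[(ν + 1)/2] / [Γ(ν/2) (πν)^(1/2) σZ] · [1 + (Zo − E²)²/(ν σZ²)]^(−(ν+1)/2),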
where ν is the effective number of degrees of freedom of the observation, which is related to the effective sample size Neff via ν = Neff − 1.
The effective sample size can be taken as the redundancy of an observed intensity, or can be estimated during data processing using the Welch–Satterthwaite equation (Welch, 1947) to take into account the weighting protocols implemented in data processing (Brewster et al., 2019). The t-distribution arises as the distribution of choice given a sample mean and sample variance from a set of observations (Student, 1908). The use of a normal distribution essentially assumes no uncertainty in the variance σZ2, but only in the observed mean Zo. The t-distribution is similar to a normal distribution, but has heavier tails and can therefore be expected to result in likelihood functions that are less punitive towards larger deviations between observed and model intensities. When ν tends to infinity, the above distribution converges to a normal distribution with mean E2 and variance σZ2.
The above joint probability distributions need to be marginalized over E in order to obtain the distribution of interest:
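That is, writing f·(⋯) for either the acentric or the centric case,

  f·(Zo | EC, σA, σZ, ν) = ∫0∞ f·(E, Zo | EC, σA, σZ, ν) dE.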
2.1. Variance inflation
A common approach to avoid performing the integration specified above is to inflate the variance of the Rice function (1 − σA2) by the variance of the `observed structure-factor amplitude', yielding (1 − σA2 + σE2) (Green, 1979). This approach circumvents the need to perform an integration, but is suboptimal in a number of ways. Because we do not observe amplitudes, we are required to estimate the amplitude and its variance from observed intensity data. A common way to perform the intensity-to-amplitude conversion is via a Bayesian estimate (French & Wilson, 1978) under the assumption of a uniform distribution of atoms throughout the unit cell. Although this so-called Wilson prior is used in most cases, a slightly different result can be obtained when using a constant, improper prior on the possible values of the structure-factor amplitudes on the positive half-line (Sivia & David, 1994). This results in an intensity-to-amplitude conversion that does not rely on the accurate estimation of the mean intensity, which can be complicated by the effects of pseudo-symmetry, diffraction anisotropy or twinning.
Further details are given in Appendix E. While this procedure allows a straightforward intensity-to-amplitude conversion, even when intensities are negative, and can subsequently be used to inflate the variance of the Rice function, it is no substitute for the full integration. Given the simplicity of the variance-inflation approach and its wide usage in a number of crystallographic applications, we will use this approach as a benchmark, using conversion schemes based both on the Wilson prior (denoted French–Wilson) and on the outlined uniform, non-informative prior (denoted Sivia).
2.2. Approaches to numerical integration
Several conventional numerical integration approximations exist for improper integrals such as expression (8). Standard methods include trapezoidal-based methods with a truncated integration range, the use of Laplace's method, Monte Carlo-based methods or approaches based on orthogonal polynomials (Davis & Rabinowitz, 1984). While a straightforward use of a trapezoidal integration scheme is tempting, the shape of the integrand for certain combinations of distribution parameters results in a fair chance of missing the bulk of the mass of the function unless a very fine sampling grid is used. When using the Laplace approximation, in which the integrand is approximated by an appropriately scaled and translated Gaussian function, the integrand can deviate significantly from a Gaussian, also resulting in poor performance. These challenges are summarized in Fig. 1, where a number of typical integrand shapes are visualized for different parameter choices. A number of numerical integration and approximation methods will be outlined below, including a discussion of how ground truth is established as a basis for comparison. Here, we will limit ourselves to the Laplace approximation, owing to its simplicity, and to trapezoidal rules, because of their excellent convergence properties when applied to analytic functions on the real line and their close relation to classical Gauss quadratures (Trefethen & Weideman, 2014). The use of (quasi) Monte Carlo schemes will not be considered, since these methods are typically used as a `method of last resort' for very high dimensional integrals (Cools, 2002).
2.3. Change of variables and the Laplace approximation
Analytical and numerical integration is often greatly simplified by a change of variables of the integrand (Davis & Rabinowitz, 1984). The change-of-variable theorem relates the integral of some function h(u) under a change of variables u = ψ(x),
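In its standard form the theorem states that

  ∫_{ψ(a)}^{ψ(b)} h(u) du = ∫_a^b h[ψ(x)] ψ′(x) dx.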
The modified shape of the integrand after a change of variables makes the use of the so-called Laplace approximation appealing. In a Laplace approximation, the integrand is approximated by a scaled squared-exponential function with a suitably chosen mean and length scale (Peng, 2018). The Laplace approximation can be derived from a truncated Taylor expansion of the logarithm of the integrand:
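Writing the integrand as f(x) = exp[g(x)] and expanding g(x) around its maximum x0 gives the familiar chain of approximations

  ∫ab f(x) dx = ∫ab exp[g(x)] dx ≈ ∫ab exp[g(x0) + ½g″(x0)(x − x0)²] dx ≈ f(x0) ∫−∞+∞ exp[−½|g″(x0)|(x − x0)²] dx,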
where g(x) = ln[f(x)] and x0 is the location of the maximum of f(x), implying that g′(x0) = 0. Note that in the last step in equation (18) the assumption is made that f(x) goes to 0 sufficiently quickly away from x0 that integrating over [a, b] yields the same result as integrating over (−∞, ∞). Although this approximation does not work for all possible choices of g(x), it has proven to be a successful tool in marginalizing distributions in Bayesian analysis (Kass & Steffey, 1989) and in crystallographic applications (Murshudov et al., 2011).
The above expression thus yields
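Evaluating the remaining Gaussian integral gives the one-point (Laplace) result

  ∫ab f(x) dx ≈ f(x0) [2π/|g″(x0)|]^(1/2).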
The effectiveness of this approximation hinges on the location of x0 (it should be contained within the original integration domain), the magnitude of g′′(x0) and how rapidly higher-order derivatives of g(x) vanish around x0. The change-of-variable strategy outlined above can aid in improving the performance of this approximation when applied to expression (8).
2.4. Quadrature methods
Even though the change-of-variables approach combined with the Laplace approximation has the potential to yield accurate integral approximations, obtaining reasonable estimates of the derivative of the log-likelihood, as needed for difference maps or for first- or higher-order optimization methods, is less straightforward using the Laplace approach. The difficulty arises from the need to obtain the derivative of the location of the maximum of the integrand, as this location is itself a function of the variables with respect to which derivatives are computed. In addition, t-based noise models introduce heavy tails in the distribution, for which Gaussian approximations can perform poorly. For this reason, the use of a quadrature approach is of interest. In a numerical integration with a quadrature, the integral of interest is approximated by a weighted sum of selected function values. The Laplace approximation outlined above can thus be seen as a one-point quadrature, where the function value is taken at the maximum of the integrand and the associated weight is derived from a local Gaussian fit to the integrand. An expanded quadrature approach provides an easy way to increase the precision of the integral by increasing the number of sampling points, but it also circumvents the issues with computing derivatives of the location of the maximum of the integrand that are encountered when using the Laplace approximation. Quadrature approaches have, however, been assumed to need a large number of terms to obtain sufficient precision (Read & McCoy, 2016), possibly making them unattractive for practical crystallographic applications. In order to circumvent, or at least ameliorate, these issues, we design quadratures that combine the benefits of a Laplace approximation and basic numerical quadratures (Appendix A).
A high-level overview of our integration approach is depicted in Fig. 2. By combining a power transform with a subsequent hyperbolic transform of the integrand, we map the integration domain from [0, ∞) onto [0, 1]. The first, power-transform step (Appendix C) gives the integrand a more Gaussian-like character, while the second change-of-variable operation nonlinearly compresses low-mass regions onto relatively small line segments and approximately linearly maps high-mass regions of the integrand to the middle of the new integration domain (Appendix A). This doubly transformed function can subsequently be integrated using an equidistant trapezoidal integration scheme. The second change-of-variable operation requires, just like the Laplace approximation, knowledge of the maximum of the power-transformed integrand, which can be obtained using standard optimization methods (Appendices B and C). In a final step, the resulting quadrature, expressed on the domain of the doubly transformed variable, can be rewritten in the original variables by applying the inverse transforms. A further simplification allows us to recast the whole integration as a sum of weighted Rice functions, where the effects of noisy observations and other errors are absorbed into the sampling points of E and the associated weights (Appendix D),
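Schematically, and consistent with the description in the next sentence and in Appendix D, the approximation takes the form

  f(Zo | EC, σA, σZ, ν) ≈ Σ_{j=1}^{N} wj f·(Ej | EC, σA),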
where Ej are the quadrature sampling points and wj are the associated weights. The sampling points and weights are dependent on EC, Zo, σA, σZ and ν. The quadrature sampling used can either be an N-point power-transformed hyperbolic quadrature or a single-point quadrature on the basis of a (power-transformed) Laplace approximation. Further details are given in Appendices A–D.
2.5. Derivatives
The practical use of a likelihood-based target function requires the calculation of its derivatives so that it can be used in gradient-based optimization methods. From expression (20), derivatives with respect to Y ∈ {EC, σA, ν} can be obtained as follows:
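Treating expression (20) as a weighted sum and applying the chain rule (both the weights wj and the Rice terms may depend on Y; the exact bookkeeping of these dependencies is an implementation choice), this gives

  ∂ ln f(Zo | EC, σA, σZ, ν)/∂Y ≈ { Σ_j [ (∂wj/∂Y) f·(Ej | EC, σA) + wj ∂f·(Ej | EC, σA)/∂Y ] } / Σ_j wj f·(Ej | EC, σA).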
The derivatives of the Rice components f·(Ej|EC, σA) with respect to EC are listed in Appendix B.
3. Results and discussion
The first step in evaluating the proposed integration methods is to establish the ground truth of the integral that we wish to approximate. To this end, an equispaced, non-power-transformed trapezoidal quadrature was constructed, integrating the function from E = 0 to E = 6 using 50 000 sampling points for all combinations of distribution parameters listed in Table 1, under the assumption of Gaussian errors on the intensities. Comparing the results of this integration with those obtained using a hyperbolic quadrature with 1500 points indicates that these two integration methods give similar results. We therefore take as the ground truth the results obtained with a hyperbolic quadrature using 1500 or more sample points. For both the acentric and the centric distributions, setting the power-transform variable γ to 2 provides good results, as shown in Tables 2 and 3, where the mean and standard deviation of the relative error in the log integrand are reported (as percentages). A number of different approximation schemes were tested, comparing results using the mean relative error in the log integrand. Because the variance-inflation approximation does not actually perform a marginalization, but rather applies a more ad hoc correction to incorporate low-fidelity measurements, its relative error against the log-likelihood is not a fair measure of its performance, nor does it provide insights into its strengths and drawbacks. Instead, we compare the gradients of the log-likelihood target function with respect to EC for all approximations, as this measure is independent of the different normalizations that arise when computing the full integral as compared with the variance-inflation approaches. Furthermore, given that the gradients of the log-likelihood function form the Fourier coefficients of the 3D difference or gradient maps used to complete or rebuild structures, comparing the gradients of various approximations with those obtained from the full likelihood function can provide valuable insights into the strengths of the different approximations. The use of gradients is of course predicated on being able to estimate the value of σA, which in this case can be performed using a simple line search in fixed resolution shells. Details of these tests and their results can be found below.
3.1. Comparing integration methods
A comparison of the integration results using a number of different approximations is visualized in Fig. 3 for data sets generated according to the procedure outlined in Appendix F. For the results shown, the value of σA was set to 0.70 and a fixed error ratio was chosen such that 〈Z/σZ〉 = 1.0. The redundancy was set to 4, resulting in ν = 3. For the Zo, EC pairs, a likelihood function computed using a 1500-point hyperbolic quadrature was treated as the ground truth, both for a t-distribution and for an error model assuming a normal distribution. These values were compared with the Laplace approximation (a one-point quadrature) and with seven-point and 49-point quadratures for both error models. While the Laplace approximation behaves relatively well for the normal error model, it fails to deal properly with the elevated tails of the t-distribution, and better results are obtained using a quadrature. Satisfactory results are obtained using quadratures composed of seven or more sampling points. General heuristics can in principle be developed to tailor the accuracy of the quadrature on the basis of the hyperparameters of the error model. As expected, t-distributions with low ν values require a larger quadrature to reach an error comparable to that obtained with a normal distribution, owing to the presence of heavier tails.
3.2. Comparing likelihood functions
In order to obtain a better intuition for the behavior of the target functions, we plot them directly for a few input parameters. Fig. 4 depicts the likelihood function L(EC|Zo) for acentric reflections using just the French–Wilson protocol to estimate the amplitude while not inflating the variance, using the variance-inflation method with both the French–Wilson and the Sivia approaches, and using the full likelihood functions with both a Gaussian error model and a t-distribution variant. All functions shown have been numerically normalized over 0 ≤ EC ≤ 12. When comparing the curves for weak and negative intensities, there is a remarkably large difference between techniques that use a prior-based point estimate of Eo (French–Wilson and Sivia) and those derived by the full integration (Figs. 4a, 4b and 4c). In the case of an observation with a lower associated standard deviation, the differences between the approximations are smaller. The differences between a normal error model and a t-distribution manifest themselves in the tail behavior of the likelihood-function approximations, while the locations of the maxima remain relatively unchanged (Fig. 4d). The practical effects of the mismatch between an assumed normal error model and t-type error models become apparent in the estimation of σA on the basis of the corresponding likelihood approximation. A synthetic data set with errors was constructed using the protocol outlined in Appendix F. The errors were chosen using a fixed error level such that the expected 〈Zo/σZ〉 was 0.5 when ν → ∞ (see Appendix F). The resulting Zo, σZ and EC values were used to determine σA via a golden-section-driven likelihood-maximization procedure (Kiefer, 1953). The resulting estimates of σA and their associated estimated standard deviations for different redundancy values (ν + 1) are shown in Fig. 5. While for large values of ν the estimated values of σA are equivalent for both error models, at lower redundancy values the normal error model systematically underestimates σA. When the French–Wilson protocol is used, the resulting σA estimates are underestimated even further (Fig. 6).
3.3. Comparing log-likelihood gradients
Additional insights into the behavior of the likelihood-function approximations can be obtained by directly comparing their gradients for a selected set of parameter combinations. Numerical tests indicate that gradients computed using a 1500-point hyperbolic quadrature of the power-transformed function (with γ set to 2 for both the acentric and centric distributions) are indistinguishable from finite-difference gradients computed with a 50 000-point trapezoidal approach. In order to investigate the quality of the various approximations under common scenarios, we construct a synthetic data set using random sampling methods as outlined in Appendix F. A redundancy of 4 (ν = 3) was used in these tests. Gradients were computed using a 49-point quadrature, using a value of σA estimated from the corresponding approximation to the likelihood function. The results of these comparisons are shown in Fig. 6 and summarized in Tables 4 and 5. The quality of the gradients is gauged by a correlation coefficient against the true values. The results indicate that for data for which 〈Zo/σZ〉 is large, all gradient-calculation methods converge to those obtained using the full intensity-based likelihood function with experimental errors and a Student's t noise model, but that for high and intermediate noise levels the variance-inflation method significantly underperforms. While differences between normal and t-style noise models seem small on the basis of the correlation coefficients, significant deviations are seen for individual reflections under high-noise and low-redundancy settings. These aberrant gradients can potentially negatively influence the quality of gradient maps used for structure completion.
4. Conclusions
Numerical procedures for the efficient determination of intensity-based likelihood functions and their gradients are developed and compared. Whereas the Laplace approximation behaves reasonably well for the estimation of the likelihood function itself under a normal noise model, our results show that both the likelihood and its associated gradients can be significantly improved upon by using a numerical quadrature. Given that the derivative of the log-likelihood function is the key ingredient in gradient-based refinement methods and is used to compute difference maps for structure completion, the proposed approach could improve the convergence of existing refinement and model-building methods. Although it is unclear what the optimal quadrature order or noise model should be in a practical case, our results suggest that it is likely to be below 15 sampling points for normal noise and below 49 for t-type errors. Algorithmically, the most costly operation is the iterative procedure for finding the maximum of the integrand. The proposed Newton-based method typically converges well within 50 function evaluations, even in the absence of a predetermined good starting point for the line search. The construction of the hyperbolic quadrature does not require any iterative optimization, nor does the subsequent calculation of the associated gradient and function values. Given the large additional overhead in refinement or other applications in crystallography, the use of the presented methodology to compute target functions is likely to have only a minimal impact on the total run time of the workflow, while providing a rapidly converging approximation to a full intensity-based likelihood that takes experimental errors in both the estimate of the mean intensity and its variance into account. Although only a full integration into a crystallographic software package can determine the situations under which a practical benefit can be obtained from using the outlined approach, the tests here indicate that significant improvements are possible. Furthermore, the ease with which the proposed quadrature method can be adapted to a different choice of error model is a large benefit over existing approximation methods, making it for instance possible to use experiment-specific noise models in refinement and phasing targets (Sharma et al., 2017).
APPENDIX A
A hyperbolic quadrature
Given a function g(x), with x ≥ 0 and g(x) ≥ 0, we seek to compute its integral over the positive half-line:
Set
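With G denoting the desired integral and h the logarithm of the integrand (the latter is an assumption, consistent with the curvature term |h″(x0)| used below), these definitions read

  G = ∫0∞ g(x) dx,   h(x) = ln g(x).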
Define the location of the supremum of g(x) as x0, such that h′(x0) = 0. For the class of functions we are interested in, g(0) is equal to 0, for instance owing to the power transform outlined in the main text, and the limit of g(x) as x → ∞ is 0 as well. Define the following change of variables on the basis of a shifted and rescaled logistic function,
where k is a positive constant. Note that t(x = 0) = 0 and t(x → ∞) = 1. The inverse function is
and has a derivative with respect to t equal to
The value x0 determines the approximate `inflection' point of the hyperbolic compression scheme and the constant k controls the slope around the midpoint. An N-point quadrature can now be constructed by uniformly sampling t between 0 and 1,
for 1 ≤ j ≤ N. Given that both g(0) and are zero, the integral G can now be computed via a trapezoidal integration rule,
If k is chosen to be
then the above quadrature for N = 1 yields the Laplace approximation when x0|h′′(x0)|^(1/2) is large, as |x0 − x(1/2)| goes to zero. If a hyperbolic quadrature is constructed on a distribution of power-transformed variables, these derived weights can be multiplied by the Jacobian of that transformation, so that the final numerical evaluation can be carried out in the original set of variables.
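As an illustration of the scheme described in this appendix, the following Python sketch builds a logistic (`hyperbolic') compression of the positive half-line and evaluates the integral with an equidistant trapezoidal sum. The function name, the exact shift and rescaling of the logistic map and the particular choice of k in the example are assumptions made for the sketch; they satisfy t(0) = 0 and t(x → ∞) = 1 but are not necessarily identical to the constants used in the reference implementation.

```python
import numpy as np

def hyperbolic_quadrature(g, x0, k, n=49):
    """Approximate G = int_0^inf g(x) dx for g with g(0) = 0 and g(x) -> 0 as x -> inf.

    A shifted and rescaled logistic map t(x), with t(0) = 0 and t(inf) = 1,
    compresses the positive half-line onto the unit interval; the integral is
    then evaluated with an equidistant trapezoidal sum over n interior nodes.
    The exact form of the map and the choice of k are illustrative assumptions.
    """
    s0 = 1.0 / (1.0 + np.exp(k * x0))            # logistic value at x = 0
    t = np.arange(1, n + 1) / (n + 1.0)          # uniform interior nodes in (0, 1)
    s = s0 + t * (1.0 - s0)                      # undo the shift and rescaling
    x = x0 + np.log(s / (1.0 - s)) / k           # inverse logistic map x(t_j)
    dxdt = (1.0 - s0) / (k * s * (1.0 - s))      # Jacobian dx/dt of the map
    # Trapezoidal rule on [0, 1]; the end-point contributions vanish because
    # g(x(0)) = g(0) = 0 and g(x(t)) dx/dt -> 0 as t -> 1 for decaying g.
    return np.sum(g(x) * dxdt) / (n + 1.0)

# Example: g(x) = x exp(-x^2), whose exact integral over [0, inf) is 0.5.
# The peak of g sits at x0 = 1/sqrt(2); k is tied to the curvature of ln g
# at the peak (|d^2 ln g / dx^2| = 4 there), one plausible scaling choice.
g = lambda x: x * np.exp(-x**2)
print(hyperbolic_quadrature(g, x0=2.0**-0.5, k=(8.0 / np.pi)**0.5, n=49))  # ~0.5
```

In this simple test, a few tens of nodes already reproduce the exact value to well within a fraction of a percent, consistent with the behavior reported in the main text.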
APPENDIX B
Distributions and derivatives
B1. Rice functions, acentrics
The logarithms of the acentric Rice distribution and its derivatives with respect to E and EC are given below.
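In the σA formulation used in the main text (Read, 1997), a standard form of the acentric Rice distribution is

  fa(E | EC, σA) = [2E/(1 − σA²)] exp[−(E² + σA²EC²)/(1 − σA²)] I0[2σA E EC/(1 − σA²)],

so that, writing r = 2σA E EC/(1 − σA²) and I1/I0 for the ratio of modified Bessel functions of the first kind, the logarithm and its derivatives are (up to notational conventions)

  ln fa = ln(2E) − ln(1 − σA²) − (E² + σA²EC²)/(1 − σA²) + ln I0(r),
  ∂ ln fa/∂E = 1/E − 2E/(1 − σA²) + [2σA EC/(1 − σA²)] I1(r)/I0(r),
  ∂ ln fa/∂EC = −2σA²EC/(1 − σA²) + [2σA E/(1 − σA²)] I1(r)/I0(r).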
APPENDIX C
Finding x0
As outlined in the main text, numerical integration via the hyperbolic quadrature is greatly assisted by the change of variables
with Jacobian (see expression 17)
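Explicitly, the transform and its Jacobian are

  E = x^γ,   dE = γx^(γ−1) dx.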
The location of the maximum value of the integrand, x0, is found using a straightforward application of the Newton root-finding algorithm using the first and second derivatives that are outlined below. Set
and
Using the shorthand ht(E, ⋯) to indicate either of these functions, we first apply a power transform (expression 44), as outlined above. Because we need to locate the maximum of this function, we need the derivatives with respect to x as well. The resulting function and its first and second derivatives with respect to x after the change-of-variable operation are given by
The analytic forms for ht(⋯), h′t(⋯) and h′′t(⋯) are given in Appendix B. Decent starting values for the Newton search can be found by performing a single Newton-based update on a set of (say) 15 equispaced values of x sampled between 0 and x = 6^(1/γ). The integrand-weighted mean of the resulting updated sampling points typically refines to the supremum within ten iterations.
APPENDIX D
Likelihood synthesis
Using the above approaches, the full likelihood function can be expressed as a sum of weighted Rice functions (expressions 30 and 34), where E is sampled on the basis of a quadrature derived from a power-transformed variable E = x^γ using the hyperbolic sampling scheme outlined above. Taking into account the combination of the power transform and the hyperbolic quadrature, the sampling nodes of the quadrature are equal to Ej = [x(tj)]^γ,
where tj, x0 and k are defined and computed as outlined in Appendices A and C and 1 ≤ j ≤ N. The quadrature weights can now be set to absorb the hyperbolic sampling, the power transform and the error model acting on the observed intensity and its associated standard deviation:
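A form consistent with this description (up to an overall normalization convention) is

  wj = Δt x′(tj) γ[x(tj)]^(γ−1) ft(Zo | Ej², σZ, ν),

where Δt is the spacing of the uniform grid in t, x′(tj) is the Jacobian of the hyperbolic map of Appendix A, γ[x(tj)]^(γ−1) is the Jacobian of the power transform and ft is the t-based (or, in the limit ν → ∞, normal) noise model of Section 2.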
This yields a sum of weighted Rice functions that approximates the full likelihood function, as in expression (20), where f·(⋯) is the acentric or centric Rice function.
When the likelihood function is approximated using the power-transformed Laplace approximation instead of using the quadrature approach, we obtain a weighted Rice function
with the weight given by
APPENDIX E
Structure-factor amplitude estimation
In order to use an inflated variance modification as an approximation to the full numerical integration, we need to be able to estimate reflection amplitudes and their standard deviations from observed intensities and their standard deviations. While this process is normally performed using a standard French–Wilson estimation procedure, another route can be adopted following an approach developed by Sivia & David (1994). Assume a uniform, improper prior on E, such that
resulting in a conditional distribution
A normal approximation to this distribution can be obtained by the method of moments or, as performed here, by a maximum a posteriori approximation with a mean equal to the mode of the above distribution and a standard deviation estimated on the basis of the second derivative of the log-likelihood at the location of the mode:
An analytic expression is readily obtained, resulting in
The quality of this approximation will critically depend on the values of Zo and σZ2. Note that standard error propagation on Eo = Zo^(1/2) yields
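that is, to first order,

  Eo ≈ Zo^(1/2),   σEo ≈ σZ/(2Zo^(1/2)),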
which can be seen to converge to expression (59) in the case where Zo is significantly larger than σZ.
APPENDIX F
Simulating synthetic data
Data for the numerical tests and benchmarks were obtained by sampling from the underlying distribution using the following procedure. For acentric reflections, `true' complex structure factors and `perturbed' complex structure factors are generated by successive draws from normal distributions with zero mean and specified variance:
where N(μ, σ2) denotes the normal distribution. For centric reflections, the following procedure is used:
Noise is added in the following fashion. Given a target variance σ²target, we generate ν + 1 normal random variates to compute the sample mean and sample variance:
Two error models are adopted in the described tests, namely a fixed error ratio or a fixed error level. In the fixed error ratio method, σtarget is different for every simulated intensity and is chosen to be ZTrue/τ, where τ is equal to the desired signal-to-noise level. For the fixed error level method, σtarget is fixed at 1/τ for all intensities. A complete data set is simulated assuming a 9:1 ratio of acentric to centric reflections.
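The following Python sketch illustrates the simulation procedure for acentric reflections under the fixed error level model. The specific generating variances (unit mean-square amplitude for the true normalized structure factors and a perturbation chosen so that the model amplitudes correlate with the truth through σA), the function name and the convention that σZ is the sample standard deviation of the ν + 1 variates are assumptions of this sketch rather than a transcription of the original expressions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_acentric(n, sigma_a, tau, redundancy):
    """Simulate (Zo, sigma_Z, Ec) triples for acentric reflections.

    Assumptions (see text above): true normalized structure factors have unit
    mean-square amplitude, the model amplitudes Ec correlate with the truth
    through sigma_a, the fixed error level model sets sigma_target = 1/tau for
    every reflection, and sigma_Z is the sample standard deviation of the
    redundancy = nu + 1 repeated observations.
    """
    # 'True' complex structure factors: two normal components of variance 1/2.
    e_true = rng.normal(0.0, np.sqrt(0.5), (n, 2))
    # 'Perturbed' model structure factors, correlated with the truth via sigma_a.
    e_calc = sigma_a * e_true + rng.normal(0.0, np.sqrt((1.0 - sigma_a**2) / 2.0), (n, 2))
    z_true = np.sum(e_true**2, axis=1)            # true normalized intensities
    ec = np.sqrt(np.sum(e_calc**2, axis=1))       # calculated amplitudes |Ec|

    sigma_target = 1.0 / tau                      # fixed error level for all reflections
    obs = rng.normal(z_true[:, None], sigma_target, (n, redundancy))
    z_obs = obs.mean(axis=1)                      # sample mean -> Zo
    sigma_z = obs.std(axis=1, ddof=1)             # sample standard deviation -> sigma_Z
    return z_obs, sigma_z, ec

# Example: 10 000 acentric reflections with sigma_a = 0.7, <Zo/sigma_Z> ~ 0.5
# for large redundancy, and a redundancy of 4 (nu = 3).
z_obs, sigma_z, ec = simulate_acentric(10_000, sigma_a=0.7, tau=0.5, redundancy=4)
```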
Acknowledgements
The above algorithms are implemented in a set of Python3 routines and are available upon request. Some parts of this work were prepared in partial fulfillment of the requirements of the Berkeley Laboratory Undergraduate Research (BLUR) Program, managed by Workforce Development and Education at Berkeley Laboratory. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Funding information
This research was supported in part by the Advanced Scientific Computing Research and the Basic Energy Sciences programs, which are supported by the Office of Science of the US Department of Energy (DOE) under Contract DE-AC02-05CH11231. Further support originates from the National Institute of General Medical Sciences of the National Institutes of Health (NIH) under Award 5R21GM129649-02.
References
Beu, K. E., Musil, F. J. & Whitney, D. R. (1962). Acta Cryst. 15, 1292–1301.
Brewster, A. S., Bhowmick, A., Bolotovsky, R., Mendez, D., Zwart, P. H. & Sauter, N. K. (2019). Acta Cryst. D75, 959–968.
Bricogne, G. (1997). Proceedings of the CCP4 Study Weekend. Recent Advances in Phasing, edited by K. S. Wilson, G. Davies, A. W. Ashton & S. Bailey, pp. 159–178. Warrington: Daresbury Laboratory.
Bricogne, G. & Gilmore, C. J. (1990). Acta Cryst. A46, 284–297.
Bunkóczi, G., McCoy, A. J., Echols, N., Grosse-Kunstleve, R. W., Adams, P. D., Holton, J. M., Read, R. J. & Terwilliger, T. C. (2015). Nat. Methods, 12, 127–130.
Cools, R. (2002). J. Comput. Appl. Math. 149, 1–12.
Cowtan, K. (2000). Acta Cryst. D56, 1612–1621.
Davis, P. J. & Rabinowitz, P. (1984). Methods of Numerical Integration, 2nd ed. New York: Academic Press.
Fisher, R. A. (1915). Biometrika, 10, 507–521.
French, S. & Wilson, K. (1978). Acta Cryst. A34, 517–525.
Gauss, C. F. (1809). Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientium. Hamburg: Perthes & Besser.
Gauss, C. F. (1816). Z. Astronom. Verwandte Wiss. 1, 1816.
Gauss, C. F. (1823). Theoria Combinationis Observationum Erroribus Minimis Obnoxiae. Göttingen: Henricus Dieterich.
Green, E. A. (1979). Acta Cryst. A35, 351–359.
Hagen, G. (1867). Grundzüge der Wahrscheinlichkeits-Rechnung. Berlin: Ernst & Korn.
Kass, R. E. & Steffey, D. (1989). J. Am. Stat. Assoc. 84, 717–726.
Kiefer, J. (1953). Proc. Am. Math. Soc. 4, 502–506.
La Fortelle, E. de & Bricogne, G. (1997). Methods Enzymol. 276, 472–494.
Lunin, V. Y. & Skovoroda, T. P. (1995). Acta Cryst. A51, 880–887.
Lunin, V. Y. & Urzhumtsev, A. G. (1984). Acta Cryst. A40, 269–277.
Luzzati, V. (1952). Acta Cryst. 5, 802–810.
McCoy, A. J., Grosse-Kunstleve, R. W., Storoni, L. C. & Read, R. J. (2005). Acta Cryst. D61, 458–464.
Murshudov, G. N., Skubák, P., Lebedev, A. A., Pannu, N. S., Steiner, R. A., Nicholls, R. A., Winn, M. D., Long, F. & Vagin, A. A. (2011). Acta Cryst. D67, 355–367.
Murshudov, G. N., Vagin, A. A. & Dodson, E. J. (1997). Acta Cryst. D53, 240–255.
Neyman, J. & Scott, E. L. (1948). Econometrica, 16, 1–32.
Pannu, N. S. & Read, R. J. (1996). Acta Cryst. A52, 659–668.
Pearson, E. S. (1970). Studies in the History of Statistics and Probability, edited by E. S. Pearson & M. G. Kendall, pp. 411–413. London: Charles Griffin.
Peng, R. D. (2018). Advanced Statistical Computing. https://leanpub.com/advstatcomp.
Read, R. J. (1986). Acta Cryst. A42, 140–149.
Read, R. J. (1997). Methods Enzymol. 277, 110–128.
Read, R. J. (2001). Acta Cryst. D57, 1373–1382.
Read, R. J. & McCoy, A. J. (2016). Acta Cryst. D72, 375–387.
Rossi, R. J. (2018). Mathematical Statistics: An Introduction to Likelihood Based Inference. Hoboken: John Wiley & Sons.
Sharma, A., Johansson, L., Dunevall, E., Wahlgren, W. Y., Neutze, R. & Katona, G. (2017). Acta Cryst. A73, 93–101.
Sim, G. A. (1959). Acta Cryst. 12, 813–815.
Sivia, D. S. & David, W. I. F. (1994). Acta Cryst. A50, 703–714.
Skubák, P., Waterreus, W.-J. & Pannu, N. S. (2010). Acta Cryst. D66, 783–788.
Srinivasan, R. & Parthasarathy, S. (1976). Some Statistical Applications in X-ray Crystallography, 1st ed. Oxford: Pergamon Press.
Storoni, L. C., McCoy, A. J. & Read, R. J. (2004). Acta Cryst. D60, 432–438.
Student (1908). Biometrika, 6, 1–25.
Terwilliger, T. C. (2000). Acta Cryst. D56, 965–972.
Terwilliger, T. C. & Eisenberg, D. (1983). Acta Cryst. A39, 813–817.
Trefethen, L. N. & Weideman, J. A. C. (2014). SIAM Rev. 56, 385–458.
Welch, B. L. (1947). Biometrika, 34, 28–35.
Wilks, S. S. (1938). Ann. Math. Stat. 9, 60–62.
Wilson, A. J. C. (1980). Acta Cryst. A36, 937–944.
Woolfson, M. M. (1956). Acta Cryst. 9, 804–810.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.