
Volume 21 
Part 1 
Pages 203-208  
January 2014  

Received 28 March 2013
Accepted 3 November 2013
Online 14 December 2013

Open access

SAXS Merge: an automated statistical method to merge SAXS profiles using Gaussian processes

a Unité de Bioinformatique Structurale, Institut Pasteur, 25 rue du Docteur Roux, 75015 Paris, France, b Université Paris Diderot, Paris 7, Paris Rive Gauche, 5 rue Thomas Mann, 75013 Paris, France, and c Department of Bioengineering and Therapeutic Sciences, Department of Pharmaceutical Chemistry, California Institute for Quantitative Biosciences, Byers Hall, 1700 4th Street, Suite 503 B, University of California San Francisco, San Francisco, CA 94158, USA
Correspondence e-mail: nilges@pasteur.fr

Small-angle X-ray scattering (SAXS) is an experimental technique that allows structural information on biomolecules in solution to be gathered. High-quality SAXS profiles have typically been obtained by manual merging of scattering profiles from different concentrations and exposure times. This procedure is very subjective and results vary from user to user. Up to now, no robust automatic procedure has been published to perform this step, preventing the application of SAXS to high-throughput projects. Here, SAXS Merge, a fully automated statistical method for merging SAXS profiles using Gaussian processes, is presented. This method requires only the buffer-subtracted SAXS profiles in a specific order. At the heart of its formulation is non-linear interpolation using Gaussian processes, which provides a statement of the problem that accounts for correlation in the data.

1. Introduction

Small-angle X-ray scattering (SAXS) is a popular experiment that allows low-resolution structural information on biomolecules in solution to be gathered (Jacques & Trewhella, 2010; Rambo & Tainer, 2010; Svergun, 2010). The SAXS experiment allows for a wide variety of solution conditions and a wide range of molecular sizes. Data collection usually takes between seconds and minutes at a synchrotron facility, or up to a few hours with an in-house X-ray source (Hura et al., 2009).

The SAXS profile of a biomolecule is obtained by subtracting the SAXS profile of the matching buffer from that of the biomolecule in solution. SAXS can be used to study a wide variety of biomolecules, such as proteins, RNA or DNA, and their complexes (Lipfert et al., 2009; Rambo & Tainer, 2010), under a variety of experimental conditions. Once this profile is obtained, it can be used for a variety of modeling tasks (Jacques & Trewhella, 2010; Rambo & Tainer, 2010; Svergun, 2010; Schneidman-Duhovny et al., 2012). It is essential to perform the radial averaging and buffer subtraction steps with high accuracy, as an error at that stage would propagate to all subsequent analyses.

The SAXS profile consists of a collection of momentum transfer values (scattering vector) q, mean intensities I(q) and standard deviations s(q). Data collection for a given sample is often repeated a number of times N to reduce the noise (or standard deviation) in the SAXS profile by averaging. We take N as the number of points entering the calculation of I and s, because the variation between repetitions is much greater than that due to radial averaging of a single exposure. Additionally, we suppose that the SAXS profiles were collected at several sample concentrations and X-ray exposure times. Both higher concentration and longer X-ray exposure can provide more information at higher scattering angles. However, both conditions influence the resulting SAXS profile. At higher concentrations, particle-particle interactions can affect the slope of the very low angle part of the SAXS profile (Glatter & Kratky, 1982). At longer exposures, radiation damage can perturb any region of the SAXS profile (Kuwamoto et al., 2004). To remove these unwanted effects it is thus necessary to merge datasets from different experimental conditions. The purpose of this method is to perform this merging automatically, with minimal user intervention.

In this article we present the method behind the SAXS Merge webserver, a tool presented by Spill et al. (2014) which merges SAXS profiles in a robust, completely automated and statistically principled way. While the method was tested on SAXS datasets, it can also be applied to merging small-angle neutron scattering (SANS) datasets, because the basic equations and methods are similar for the two techniques (Svergun, 2010). SAXS Merge consists of five steps: data clean-up, profile fitting using Gaussian processes, rescaling of each profile to a common reference, classification and detection of incompatible regions, and merging of the remaining data points. The resulting object is a probability distribution function describing the merged SAXS profile. The output consists of the experimental points that are compatible with this distribution, a maximum posterior estimate of the SAXS profile across all experiments along with a credible interval, and estimates of a number of parameters such as the radius of gyration and the Porod exponent.

2. Five steps for SAXS merging

SAXS Merge consists of five sequential steps: (i) data clean-up, (ii) profile fitting using Gaussian processes, (iii) rescaling of each profile to a common reference, (iv) classification and detection of incompatible regions, and (v) merging of the remaining data points. The first three steps are performed separately on all input SAXS profiles. We now go through each of these five steps sequentially.

2.1. Data clean-up

In this step, we remove from input SAXS profiles data values for which the expected value is not significantly different from zero. Let [{\cal H}_0] be the null hypothesis of a data point being purely noise-induced. Let [{\cal H}_1] be the alternative that it contains some signal. Then with a type-I error of [\alpha], we can perform a one-sample one-sided t-test. Let I(qi) be a mean intensity at momentum transfer qi, s(qi) the standard deviation and N the number of repetitions of the experiment. Then the t statistic is

[t = {{I(q_i)} \over {s(q_i)/{N^{1/2}}}},\eqno(1)]

and it has a Student t distribution with [\nu] = N - 1 degrees of freedom. Since we are performing a large number of tests, we apply the Bonferroni correction by defining [\alpha \equiv \tilde{\alpha}/M] (M is the total number of points in the SAXS profile) and choose [\tilde\alpha] = 0.05 by default. Normality of the noise is assumed, which is reasonable if no parameter varies across the N replicates of an experiment.

Points with missing or zero standard deviation are discarded. Optionally, points with much larger variances than average are discarded as well. This option is offered because SAXS profiles have almost constant s(qi) values, except at extreme values of qi, where s(qi) diverges. This behaviour is an experimental artefact, and it is reasonable to remove such points. We therefore calculate the median [\bar{s}] and discard points which have s(qi) > [20\bar{s}].
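As a concrete illustration, the clean-up step can be sketched as follows (Python with NumPy/SciPy; the function name, defaults and array layout are our own, not those of the SAXS Merge code):

```python
import numpy as np
from scipy import stats

def clean_profile(q, I, s, N, alpha_tilde=0.05, s_median_factor=20.0):
    """Discard points whose intensity is not significantly above zero
    (one-sample one-sided t-test with Bonferroni correction) and points
    with zero or divergent standard deviations."""
    q, I, s = (np.asarray(a, dtype=float) for a in (q, I, s))
    M = len(q)
    keep = s > 0                                   # missing/zero errors
    s_bar = np.median(s[keep])
    keep &= s <= s_median_factor * s_bar           # optional: divergent errors
    safe_s = np.where(s > 0, s, np.inf)
    t = I / (safe_s / np.sqrt(N))                  # equation (1)
    p = stats.t.sf(t, df=N - 1)                    # one-sided p-value
    keep &= p < alpha_tilde / M                    # Bonferroni: alpha = alpha~/M
    return q[keep], I[keep], s[keep]
```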

2.2. Profile fitting using Gaussian processes

We have a number of measurements for a SAXS profile, summarized by three sufficient statistics: intensity I(qi), standard deviation s(qi) and number of repetitions N, independent of i. The SAXS profile is modelled as the noisy measurement of an unknown smooth function [q\,\,\mapsto\,\,{{\cal I}}(q)] at M different data points. A pointwise treatment of SAXS profiles fails because of the high noise and correlation encountered in the measurements. This pointwise treatment would lead to an inconsistent classification [step (iv), data not shown]. It is crucial to account for the correlation between successive points to be able to detect outlier data points in a robust manner. Thus, we first estimate the most probable SAXS profile underlying the noisy measurements of a given SAXS experiment.

This functional estimation is achieved with the help of the theory of Gaussian processes. Gaussian process interpolation (GPI) is a form of non-parametric fitting which has a straightforward probabilistic interpretation and provides confidence intervals on the functional estimate. Given some data and an automatically adjusting smoothness penalty, GPI provides the most probable function that fits the data. For more information on Gaussian processes, see p. 535 of MacKay (2003), §13.43 of O'Hagan & Forster (2004), Rasmussen & Williams (2006) and http://gaussianprocess.org .

2.2.1. Likelihood

Define

[{\bf I} = \left(\matrix{ I(q_1) \cr \vdots \cr I(q_M)}\right),\quad {\bf S} = {\rm{diag}}\left(\matrix{ s^2(q_1) \cr \vdots \cr s^2(q_M)}\right), \quad {{\cal I}}= \left(\matrix{ {\cal I}(q_1) \cr \vdots \cr {\cal I}(q_M)}\right).\eqno(2)]

[{\bf S}] is the sample covariance matrix, assumed to be diagonal given [{{\cal I}}]. We treat [{\bf I}] as a measurement with noise of the function [{\cal I}] at positions [\{q_i\}_{i\,=\,1,\ldots,M}] so that [{\bf I}] = [{{\cal I}} + \boldvarepsilon], where [\boldvarepsilon] is a vector distributed as a multivariate normal distribution with zero mean and covariance matrix [\boldSigma]. We make the assumption that [\boldSigma \equiv \sigma^2 {\bf S}], where [\sigma] is a proportionality constant that will be estimated in the process. The assumption of a diagonal [{\bf S}] matrix is not entirely correct, as shown by Breidt et al. (2012). However, correlations are expected to be non-zero only between neighbouring annuli (i.e. q values), and the covariance model we introduce next spans much further than that. This assumption leads to the following likelihood,

[\eqalignno{ p({\bf{I}}|{{\cal I}},{\bf{S}},N) \equiv {}& {{1}\over{(2\pi)^{M/2} \left|{\sigma^2{\bf{S}}/N}\right|^{1/2}}} \cr& \times \exp\left[-({1/2}) ({\bf{I}}-{{\cal I}})^\top \left({\sigma^2{\bf{S}}/N}\right)^{-1} ({\bf{I}}-{{\cal I}}) \right]. &(3) }]

2.2.2. Prior

The likelihood alone does not constrain the vector [{{\cal I}}], which is still free to vary. However, we believe that the function [{{\cal I}}] is smooth. This belief is modelled by assuming that the vector [{{\cal I}}] follows a multivariate normal distribution with mean vector [{\bf m}] and covariance matrix [{\bf W}] which have been carefully chosen (see below),

[\eqalignno{ p({{\cal I}}|{\bf{m}},{\bf{W}}) \equiv {} & {{1}\over{\left(2\pi\right)^{M/2} \left|{\bf{W}}\right|^{1/2}}} \cr&\times \exp\left[-({1/2}) ({{\cal I}}-{\bf{m}})^\top {\bf{W}}^{-1} ({{\cal I}}-{\bf{m}}) \right] .&(4)}]

Equivalently, one can say that the function [{{\cal I}}] has a Gaussian process prior distribution with prior covariance function w and prior mean function m.

2.2.3. Choice of w

The covariance function determines the smoothness of the Gaussian process. We choose the commonly used squared exponential form, which yields continuous and infinitely differentiable functions. Therefore, this approach is in principle only usable on smooth SAXS profiles,

[w(q,q^{\,\prime})\equiv\tau^2\exp\left[-{{(q-q^{\,\prime})^2} \over {2\lambda^2}}\right].\eqno(5)]

The covariance function has two parameters: [\tau^2] is the variance that the Gaussian process assigns in regions far from any data point; [\lambda] is the persistence length of the profile, in units of q. With this covariance function, we define

[\eqalign{ {\bf w}(q) & \equiv \left(\matrix{ w(q_1,q)\cr \vdots\cr w(q_M,q) }\right)_{\vphantom{\Bigg|}}, \cr {\bf W} & \equiv \left(\matrix{ w(q_1,q_1) & \cdots & w(q_1,q_M) \cr \vdots & \ddots & \vdots \cr w(q_M,q_1) & \cdots & w(q_M,q_M) }\right).}\eqno(6)]
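To make the construction of equation (6) concrete, here is a minimal sketch of the squared-exponential covariance on a q grid (Python/NumPy; all names are ours, chosen for illustration):

```python
import numpy as np

def sq_exp_cov(q, q_prime, tau, lam):
    """Squared-exponential covariance of equation (5): tau^2 sets the prior
    variance far from data, lam the persistence length in units of q."""
    d = np.asarray(q, float)[:, None] - np.asarray(q_prime, float)[None, :]
    return tau**2 * np.exp(-d**2 / (2.0 * lam**2))

# W of equation (6) is this covariance on the measured grid, and w(q) the
# cross-covariance between the grid and a new point q:
q_grid = np.linspace(0.01, 0.3, 30)
W = sq_exp_cov(q_grid, q_grid, tau=1.0, lam=0.05)
w_new = sq_exp_cov(q_grid, [0.12], tau=1.0, lam=0.05)
```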

2.2.4. Choice of m

Gaussian process interpolation is a non-parametric approach. However, it is possible to incorporate some prior knowledge in the form of a parametric mean function, making the approach semi-parametric. In our case, this way of proceeding has the advantage of providing an estimate of the radius of gyration and other constants. At the same time, the Gaussian process fits the signal in the data that is unexplained by the parametric mean function so that even high deviations from the prior mean function will be followed by the Gaussian process.

At very low angle, the Guinier plot allows for an estimation of the radius of gyration,

[I(q)\,\propto\,\exp\left(-{{R_{\rm G}^{\,2}} \over {3}}\,q^2\right).\eqno(7)]

For the higher-angle portion of the profile, Porod's law is

[I(q)\,\propto\,q^{-4}.\eqno(8)]

Hammouda (2010) constructed a smooth function encompassing both behaviours, which we use as a starting point for m,

[m(q) \equiv A + \left\{\matrix{(G/q^{\,s})\exp\left[-{{q^2R_{\rm G}^{\,2}}/({3-s})}\right]\hfill & {\rm{if}}\,\,q \le q_1, \cr {{D}/{q^{\,d}}}\hfill &{\rm{if}}\,\,q\,\,\gt\,\,q_1,}\right.\eqno(9)]

[q_1 \equiv ({1}/{R_{\rm G}}) \left[(d-s)(3-s)/2\right]^{1/2},\eqno(10)]

[D \equiv Gq_1^{\,d-s} \exp\left[-{{q_1^2R_{\rm G}^{\,2}}/{(3-s)}}\right].\eqno(11)]

This function has five parameters: A, G, R_G, d and s. Some of them can be fixed to certain values, generating a nested family of parametric functions. For example, setting G = 0 reduces m to a constant function. Setting d such that q_1 is larger than any input q value reduces m to the Guinier model with a constant offset. Finally, setting s = 0 reduces m to the simpler Guinier-Porod model described in the first section of Hammouda (2010) (up to a constant offset). Define

[{\bf m} \equiv \left[\matrix{ m(q_1) \cdots m(q_M)} \right]^\top.\eqno(12)]
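Equations (9)-(11) translate directly into code (Python/NumPy; the function name and parameter order are illustrative). By construction, D matches the two branches at q_1, so the function is continuous there:

```python
import numpy as np

def mean_guinier_porod(q, A, G, Rg, d, s):
    """Generalized Guinier-Porod mean function with constant offset A,
    equations (9)-(11)."""
    q = np.asarray(q, dtype=float)
    q1 = (1.0 / Rg) * np.sqrt((d - s) * (3.0 - s) / 2.0)       # equation (10)
    D = G * q1**(d - s) * np.exp(-q1**2 * Rg**2 / (3.0 - s))   # equation (11)
    guinier = (G / q**s) * np.exp(-q**2 * Rg**2 / (3.0 - s))
    porod = D / q**d
    return A + np.where(q <= q1, guinier, porod)
```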

2.2.5. Hyperprior

The parameters arising in the prior mean or covariance functions, as well as [\sigma], are collectively called hyperparameters. In this hierarchical approach we can in turn assign a prior to these hyperparameters. Since our knowledge of their plausible values is rather vague, we give a Jeffreys prior to [\sigma^2] and a uniform prior to the other parameters. However, for the sake of model comparison, parameters are bounded within a finite interval to allow for a normalized prior,

[\eqalign{ p(\sigma^2)& = {{1}\over{\log(\sigma^{\rm{max}}/\sigma^{\rm{min}})}} \, {{1}\over{\sigma^2}}, \cr p(P_i)& = {{1}\over{P_i^{\,\rm{max}}-P_i^{\,\rm{min}}}}\qquad P_i\,\in\,\{G,R_{\rm G},d,s,A,\tau,\lambda\}. }\eqno(13)]
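A direct transcription of equation (13) in log form (Python; the bounds and argument names are ours, chosen only for illustration):

```python
import numpy as np

def log_hyperprior(sigma2, params, bounds, sigma_min, sigma_max):
    """Log of equation (13): Jeffreys prior on sigma^2, independent uniform
    priors on the remaining hyperparameters, all bounded so that the prior
    normalizes."""
    if not (sigma_min**2 <= sigma2 <= sigma_max**2):
        return -np.inf
    lp = -np.log(np.log(sigma_max / sigma_min)) - np.log(sigma2)
    for p, (lo, hi) in zip(params, bounds):
        if not (lo <= p <= hi):
            return -np.inf              # outside the uniform prior's support
        lp -= np.log(hi - lo)
    return lp
```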

2.2.6. Fitting the SAXS profile

In order to find the best fit of the SAXS profile, it is required to optimize the hyperparameters. Defining [\boldTheta \equiv (G,R_{\rm G},d,s,A,\tau,\lambda,\sigma)^\top] and [D \equiv ({\bf I},{\bf S},N)], this optimization can be achieved by maximizing [p(\boldTheta|D)] with respect to [\boldTheta]. With the help of Bayes' rule, we obtain

[p(\boldTheta|D)\,\propto\,p({\bf I}|{\bf S},N,\boldTheta)\,p(\boldTheta),\eqno(14)]

where [p(\boldTheta)] is given in equation (13) and [p({\bf I}|\boldTheta,{\bf S},N)] is called the marginal likelihood,

[p({\bf I}|{\bf S},N,\boldTheta) \equiv \textstyle\int p\left({\bf I}|{{\cal I}},{\bf S},N\right) p({{\cal I}}|\boldTheta) \,{\rm{d}}{{\cal I}}. \eqno(15)]

Since both the likelihood [equation (3)] and the prior [equation (4)] appearing in this integral are multivariate Gaussian distributions, it is possible to give an analytical expression of the marginal likelihood,

[p({\bf I}|{\bf S},N,\boldTheta) = {{1} \over {\left(2\pi\right)^{M/2} \left|\boldOmega\right|^{1/2} }} \,\,\exp\left(-{{1}\over{2}}\boldvarepsilon^\top\boldOmega^{-1}\boldvarepsilon \right),\eqno(16)]

with [\boldvarepsilon \equiv {\bf I}-{\bf m}] and [\boldOmega \equiv \sigma^2{\bf S}/N+{\bf W}].
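The ML-II objective of §2.2.6 is the negative log of equation (16). A sketch using a Cholesky factorization of [\boldOmega] (Python/NumPy; function and argument names are ours):

```python
import numpy as np

def neg_log_marginal_likelihood(I, m, S_diag, N, W, sigma2):
    """Negative log of equation (16), with Omega = sigma^2 diag(S)/N + W and
    eps = I - m.  Minimizing this over the hyperparameters (which enter
    through m, W and sigma2) fits the profile."""
    M = len(I)
    Omega = sigma2 * np.diag(S_diag) / N + W
    eps = np.asarray(I) - np.asarray(m)
    L = np.linalg.cholesky(Omega)          # Omega must be positive definite
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, eps))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * eps @ alpha + 0.5 * logdet + 0.5 * M * np.log(2.0 * np.pi)
```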

2.2.7. Obtaining functional deviates

To make predictions of [{{\cal I}}] at a new point q we average over all possible values for [\boldTheta], weighted by their posterior probability,

[p[{{\cal I}}(q)|D] = \textstyle\int p[{{\cal I}}(q)|\boldTheta,D]\,p(\boldTheta|D)\,{\rm{d}}\boldTheta.\eqno(17)]

Let us examine the two terms appearing in this last integral. The posterior probability density of the hyperparameters [p(\boldTheta|D)] was already encountered in equation (14).

The remaining term, [p({{\cal I}}(q)|\boldTheta,D)], is the posterior predictive probability density of a new noise-free observation given the hyperparameters. It is called posterior predictive because it allows new values of the SAXS profile to be predicted from the noisy observations. Since the function [{{\cal I}}] has a Gaussian process prior and a multivariate normal likelihood, the posterior distribution for [{{\cal I}}] is also a Gaussian process, with mean function [\hat{\cal I}] and covariance function [\hat{\sigma}_{\cal I}^{\,2}] given by

[\forall q\qquad \hat{\cal{I}}(q)= m(q) + {\bf{w}}(q)^\top \boldOmega^{-1} ({\bf{I}}-{\bf{m}}),\eqno(18)]

[\forall q,q^{\,\prime}\qquad\hat{\sigma}_{\cal{I}}^{\,2} (q,q^{\,\prime}) = w(q,q^{\,\prime})-{\bf{w}}(q)^\top \boldOmega^{-1}{\bf{w}}(q^{\,\prime}).\eqno(19)]

These equations arise from the fact that the vector [[{{\cal I}}(q), I(q_1),\ldots,I(q_M)]^\top] has a multivariate normal distribution with mean vector [[m(q),m(q_1),\ldots,m(q_M)]^\top] and covariance matrix

[\left(\matrix{ w(q,q) & {\bf w}(q)^\top \cr {\bf w}(q) & \boldOmega }\right).\eqno(20)]

The distribution for [{{\cal I}}(q)] then results from the conditioning of the multivariate normal distribution on the observed values,

[\eqalignno{ p[{\cal{I}}(q)|\boldTheta,D] = {}& {{1}\over{(2\pi)^{1/2}\, \hat{\sigma}_{\cal{I}}(q,q) }} \exp\left\{ \!-{{ \left[{\cal{I}}(q)-\hat{\cal{I}}(q)\right]^2 }\over{ \hat{\sigma}_{\cal{I}}^{\,2}(q,q) }} \right\}\!.&(21)}]

Note that it is also possible to generate random functional deviates from the posterior predictive distribution. If k points are wanted for each functional estimate, one can draw them from the multivariate normal distribution with mean vector and covariance matrix built, respectively, from the posterior mean function [\hat{\cal I}] and the posterior covariance function [\hat{\sigma}_{\cal I}^{\,2}] at the values [q_1,\ldots,q_k].
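Equations (18)-(20) translate almost literally into code. A minimal sketch (Python/NumPy; `w_fun` stands for the prior covariance function of equation (5), and all names are ours):

```python
import numpy as np

def gp_posterior(q_new, q_data, I, m_data, m_new, w_fun, Omega):
    """Posterior mean (18) and covariance (19) of the Gaussian process at
    new points q_new, given noisy data I at q_data."""
    Wx = w_fun(q_data, q_new)               # M x K cross-covariance w(q)
    Kss = w_fun(q_new, q_new)               # K x K prior covariance
    mean = m_new + Wx.T @ np.linalg.solve(Omega, I - m_data)
    cov = Kss - Wx.T @ np.linalg.solve(Omega, Wx)
    return mean, cov

# Random functional deviates (section 2.2.7) are draws from the resulting
# multivariate normal:
#   rng = np.random.default_rng()
#   sample = rng.multivariate_normal(mean, cov)
```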

Although we could in principle perform the interpolation by numerically integrating equation (17) for every value of q needed, this approach would be costly in terms of computation power. In fact, two integrals would need to be computed numerically: equation (17) and also the normalization constant of equation (14),

[p({\bf I}|{\bf S},N) = \textstyle\int p({\bf I}|\boldTheta,{\bf S},N)\,p(\boldTheta)\,{\rm{d}}\boldTheta.\eqno(22)]

Luckily, as Gibbs & MacKay (1997) have pointed out, a Laplace approximation of this last integral is very accurate, because the posterior of the hyperparameters is usually quite peaked around its most probable value. This approach is known as type-II maximum likelihood (ML-II),

[p({\bf{I}}|{\bf{S}},N) \simeq p({\bf{I}}|\hat{\boldTheta},{\bf{S}},N)\,p(\hat{\boldTheta}) \Delta\hat{\boldTheta},\eqno(23)]

[\Delta\hat{\boldTheta} = (2\pi)^{N_p/2} \left|{{\partial^{\,2}E}\over{\partial\boldTheta^2}} ({\bf{I}},\hat{\boldTheta})\right|^{-1/2},\eqno(24)]

[E({\bf{I}},\boldTheta) = -\log p({\bf{I}}|\boldTheta,{\bf{S}},N) - \log p(\boldTheta).\eqno(25)]

[N_p \equiv \dim(\boldTheta)] is the number of parameters. [\Delta\hat{\boldTheta}] is the phase-space volume in which values of [\boldTheta] are acceptable given D, and is usually small (Rasmussen & Williams, 2006). This procedure has a considerable practical advantage, since optimization of the hyperparameters then does not need to be performed for each new [{\cal I}(q)] but only once per dataset. The optimization itself has been described in §2.2.6.

Once the most probable [\boldTheta] has been found, the Laplace approximation gives the normalization constant of [p(\boldTheta|D)],

[p(\boldTheta|D) \simeq {{ p({\bf I}|\boldTheta,{\bf S},N)\,p(\boldTheta) } \over { p({\bf I}|\hat{\boldTheta},{\bf S},N)\,p(\hat{\boldTheta}) \Delta\hat{\boldTheta} }}.\eqno(26)]

With the additional hypothesis that [p[{\cal I}(q)|\boldTheta,D]\,p(\boldTheta|D)] has the same maximum for [\boldTheta] as [p(\boldTheta|D)] alone, equation (17) becomes

[p[{\cal{I}}(q)|D] \simeq p[{\cal{I}}(q)|\hat{\boldTheta},D] \left|I_n+AB^{-1}\right|^{-1/2}\eqno(27)]

[A = \left\{- {{\partial^{\,2}\log p[{\cal{I}}(q)|\boldTheta,D]}\over{\partial\boldTheta^2}} \right\}_{\boldTheta\,=\,{\hat{\boldTheta}}}\eqno(28)]

[B= {{\partial^{\,2}E({\bf{I}},\hat\boldTheta)}\over{\partial\boldTheta^2}}.\eqno(29)]

It is also possible to compute the posterior mean and covariance functions averaged over all values of [\boldTheta],

[{\cal{I}}^{^{^{\kern-.8ex\circ}}}(q)\simeq \hat{\cal{I}}(q)\left|I_n+A'B^{-1}\right|^{-1/2}\eqno(30)]

[{\sigma^{^{\kern-.8ex\circ}}}_{\cal{I}}{\!\!}^2(q,q)\simeq \hat\sigma_{\cal{I}}^{\,2}(q,q) \left|I_n+A''B^{-1}\right|^{-1/2}\eqno(31)]

[A'= \left[-{{\partial^{\,2}}\over{\partial\boldTheta^2}} \log\hat{\cal I}(q)\right]_{\boldTheta\,=\,\hat\boldTheta}\eqno(32)]

[A''= \left[-{{\partial^{\,2}}\over{\partial\boldTheta^2}} \log\hat\sigma_{\cal{I}}^{\,2}(q,q)\right]_{\boldTheta\,=\,\hat\boldTheta}\eqno(33)]

2.2.8. Choice between different mean functions via model comparison

Sometimes, the information content of a SAXS profile is so low that the number of parameters in the mean function exceeds the number of identifiable parameters of the SAXS profile. In that case, overfitting occurs, and it is preferable to try a simpler mean function.

The previously presented mean function has five parameters. It has been noted that it generates a nested family of parametric functions when some parameters are held fixed. For globular proteins, s can be set to zero, reducing the number of parameters to four. It is also possible to use simpler functions. For example,

[m_G(q)= A + G\exp\left(-{{q^2R_{\rm G}^{\,2}} \over {3}}\right)\eqno(34)]

assumes the SAXS profile only contains the Guinier region; it has three parameters. The flat function m_F(q) = A has a single parameter.

Fitting can be performed using a number of different mean functions. The most plausible one is then selected by model comparison. Suppose M_1 (M_2) represents the model in which the mean and covariance functions total N_p^1 (N_p^2) parameters, summarized in the parameter vector [\boldTheta_1] ([\boldTheta_2]). The best mean function is the one which has the highest Bayes factor,

[\eqalignno{ {{p(M_1|D)}\over{p(M_2|D)}} & = {{p(M_1)}\over{p(M_2)}} {{p(D|M_1)}\over{p(D|M_2)}} &(35)\cr & = 1 \cdot {{\textstyle\int p(D|\boldTheta_1,M_1)\,p(\boldTheta_1|M_1)\,{\rm{d}}\boldTheta_1}\over{\textstyle\int p(D|\boldTheta_2,M_2)\,p(\boldTheta_2|M_2)\,{\rm{d}}\boldTheta_2}}.&(36)}]

The Bayes factor is the ratio of the evidences if both models have equal a priori probability. As discussed above, we approximate each evidence integral by a Laplace expansion around the maximum posterior set of parameters. This expansion yields

[\eqalignno{ {{p(M_1|D)}\over{p(M_2|D)}} \simeq {}& {{p({\bf{I}}|\hat\boldTheta_1,{\bf{S}},N,M_1)\,p(\hat\boldTheta_1|M_1)}\over{p({\bf{I}}|\hat\boldTheta_2,{\bf{S}},N,M_2)\,p(\hat\boldTheta_2|M_2)}} \, \,\,\left[(2\pi)^{1/2}\right]^{N_p^{\,1}-N_p^{\,2}} \cr&\times {{ \left|{{\partial^{\,2}E_1}\over{\partial\boldTheta_1^2}}({\bf{I}},\hat\boldTheta_1)\right|^{-1/2} }\Big/\,{ \left|{{\partial^{\,2}E_2}\over{\partial\boldTheta_2^2}}({\bf{I}},\hat\boldTheta_2)\right|^{-1/2} }}. &(37)}]

Details of the calculation, along with the gradient and Hessian of Hammouda's generalized Guinier-Porod function (Hammouda, 2010), are given in the supporting information.

2.3. Rescaling

Suppose [{\cal I}_0] is given, and we want to find the scaling factor [gamma] between [{\cal I}_0] and [{\cal I}_1], such that the distance between [{\cal I}_0] and [\gamma{\cal I}_1] is minimized under some metric. We propose three similar models to rescale the SAXS profiles: normal model, normal model with constant offset, and lognormal model (using the logs of the intensities rather than the intensities themselves). In this section we assume [{\cal I}_i] are evaluated at M points, and treat [{\cal I}_i] as a vector with M entries.

In the normal model we use the squared error loss

[{\cal L} \equiv ({\cal I}_0 - \gamma {\cal I}_1)^\top {\bf A} ({\cal I}_0 - \gamma {\cal I}_1),\eqno(38)]

where A is a symmetric positive definite matrix. The risk is

[{\cal R} \equiv {\bb{E}}_{{\cal I}_1}\left \{{\bb{E}}_{{\cal I}_0}\left [({\cal I}_0 - \gamma {\cal I}_1)^\top {\bf A} ({\cal I}_0 - \gamma {\cal I}_1) \right] \right\}.\eqno(39)]

It can be put in the form

[{\cal R} = \left(\hat{{\cal I}}_0 - \gamma \hat{{\cal I}}_1\right)^\top {\bf A} \left(\hat{{\cal I}}_0 - \gamma \hat{{\cal I}}_1\right) + {\rm{tr}}\left[{\bf A}\left(\boldSigma_0+\gamma^2\boldSigma_1\right)\right],\eqno(40)]

where [\boldSigma_i] is the covariance matrix of [{\cal I}_i]. We would like to choose [gamma] and A so that the risk is minimal,

[{{\partial{\cal{R}}}\over{\partial\gamma}}= -2\left(\hat{{\cal I}}_0-\gamma\hat{{\cal I}}_1\right)^\top {\bf A}\, \hat{{\cal I}}_1 + 2\gamma\,{\rm{tr}}\left({\bf A}\boldSigma_1\right),\eqno(41)]

[{{\partial{\cal{R}}}\over{\partial {\bf A}}}= \left(\hat{{\cal I}}_0 - \gamma\hat{{\cal I}}_1\right) \left(\hat{{\cal I}}_0 - \gamma\hat{{\cal I}}_1\right)^\top + \,\,\boldSigma_0 + \gamma^2 \boldSigma_1 .\eqno(42)]

The second equation is a sum of positive-semidefinite matrices, and cannot vanish. Therefore there is no choice of A that minimizes the risk. We choose [{\bf A} \equiv \boldSigma_1^{-1}]. Setting the first equation to zero gives the target value for [\gamma],

[\hat{\gamma}\equiv {{ \displaystyle \hat{{\cal I}}_1^\top \boldSigma_1^{-1} \hat{{\cal I}}_0 } \over {\displaystyle \hat{{\cal I}}_1^\top \boldSigma_1^{-1} \hat{{\cal I}}_1 + M }}.\eqno(43)]

The mean vectors are computed from equation (18) or (30); the covariance matrices from equation (19) or (31).

The normal model with offset has loss function

[{\cal L} \equiv \left[{\cal I}_0 - \gamma({\cal I}_1 + c{\bf J})\right]^\top \boldSigma_1^{-1} \left[{\cal I}_0 - \gamma({\cal I}_1 + c{\bf J})\right],\eqno(44)]

where J is a vector of ones. This model leads to the estimates

[\hat{c} \equiv {{ \displaystyle M\,\hat{{\cal I}}_0^\top \boldSigma_1^{-1} {\bf J} + \hat{{\cal I}}_1^\top \boldSigma_1^{-1} \left(\hat{{\cal I}}_1\hat{{\cal I}}_0^\top - \hat{{\cal I}}_0\hat{{\cal I}}_1^\top\right) \boldSigma_1^{-1} {\bf J} }\over{ \displaystyle \hat{{\cal I}}_1^\top \boldSigma_1^{-1} \left(\hat{{\cal I}}_0{\bf J}^\top - {\bf J}\hat{{\cal I}}_0^\top\right) \boldSigma_1^{-1} {\bf J} }},\eqno(45)]

[\hat{\gamma} \equiv {{ \displaystyle \hat{{\cal I}}_1^\top \boldSigma_1^{-1} \hat{{\cal I}}_0 }\over{\displaystyle \hat{{\cal I}}_1^\top \boldSigma_1^{-1} (\hat{{\cal I}}_1+\hat{c}\,{\bf J}) + M }}.\eqno(46)]

Finally the lognormal model has loss function

[{\cal L} \equiv \left[{\bf J} \log \gamma - \log\left({{{\cal I}}_0 \over {{\cal I}}_1}\right)\right]^\top \boldSigma_1^{-1} \left[{\bf J} \log \gamma - \log\left({{{\cal I}}_0 \over {{\cal I}}_1}\right)\right],\eqno(47)]

which is defined because the intensities are expected to be positive. The estimate for [gamma] is then

[\log\hat\gamma \equiv {{\displaystyle \left[\log\left({{\hat{{\cal I}}_0} / {\hat{{\cal I}}_1}}\right)\right]^\top \boldSigma_{1}^{-1} {\bf J}}\over{\displaystyle {\bf J}^\top\boldSigma_{1}^{-1} {\bf J}}}.\eqno(48)]

By default, all profiles are rescaled to the last profile, which usually has the widest q range.
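The closed-form estimates (43) and (48) are cheap to evaluate. A sketch of both (Python/NumPy; function and argument names are ours):

```python
import numpy as np

def gamma_normal(I0_hat, I1_hat, Sigma1):
    """Scaling factor of equation (43), normal model with A = Sigma1^{-1}."""
    M = len(I0_hat)
    num = I1_hat @ np.linalg.solve(Sigma1, I0_hat)
    den = I1_hat @ np.linalg.solve(Sigma1, I1_hat) + M
    return num / den

def gamma_lognormal(I0_hat, I1_hat, Sigma1):
    """Scaling factor of equation (48), lognormal model (intensities > 0)."""
    J = np.ones(len(I0_hat))
    r = np.log(I0_hat / I1_hat)
    return np.exp((r @ np.linalg.solve(Sigma1, J)) / (J @ np.linalg.solve(Sigma1, J)))
```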

2.4. Classification

To classify the SAXS profiles, we first rank them as follows. For each profile i, we compute I_i(0) by fitting the Guinier region, and the median [\bar{s}_i] of the errors. We use the median instead of the mean because it is more robust to outliers. The profiles are then ranked by ascending [I_i(0)/\bar{s}_i]. This quantity is expected to increase with either concentration or X-ray dose.

The first profile has reference status on all intervals that have not been discarded by the first step (i.e. as long as its signal-to-noise ratio is sufficiently high). Let [{\cal I}] be the candidate profile, and [{\cal I}_{\rm{ref}}] the reference profile, for which we have just derived a distribution in the fitting step. Because correlation has been accounted for in the profile fitting step (§2.2.6), pointwise statistical treatment is sufficient. The SAXS profiles are then compared using a two-sample two-sided t-test, and regions of disagreement are determined.

We would like to know which measurements of [{\cal I}(q)] and [{\cal I}_{\rm{ref}}(q)] are compatible. We simply assume that each new observation at scattering angle q is drawn from a normal distribution with mean [\mu(q)] and standard deviation [\sigma_{\rm{exp}}(q)], where

[\mu(q)= \hat{\gamma}{\cal I}^{^{^{\kern-.8ex\circ}}}(q),\eqno(49)]

[\sigma^2_{\rm{exp}}(q)= \hat{\gamma}^2 \sigma^{^{\kern-.8ex\circ}}_{\cal{I}}{}^{\!\!\!2}(q).\eqno(50)]

[{\cal I}^{^{^{\kern-.8ex\circ}}}(q)] and [\sigma^{^{\kern-.8ex\circ}}_{\cal I}(q)] are given by equations (30) and (31), and [\hat{\gamma}] by equation (48), (46) or (43). If no parameter averaging was performed, one can use [{\cal I}] and [\sigma_{\cal I}] instead of [{\cal I}^{^{^{\kern-.8ex\circ}}}] and [\sigma^{^{\kern-.8ex\circ}}_{\cal I}], given by equations (18) and (19), respectively.

We then perform Welch's two-sample two-sided t-test at confidence level [\alpha] (Welch, 1947). Similarly to §2.1, we compute the t statistic

[t= {{ |\mu(q)-\mu_{\rm{ref}}(q)| }\over{ \left[ \sigma_{\rm{exp}}^{\,2}(q)/N+\sigma_{\rm{exp,ref}}^{\,2}(q)/N_{\rm{ref}}\right]^{1/2}}}\eqno(51)]

with N and Nref the number of repetitions of each experiment. The degrees of freedom are given by the Satterthwaite approximation,

[\nu= {{ \left[\sigma_{\rm{exp}}^{\,2}(q)/N+\sigma_{\rm{exp,ref}}^{\,2}(q)/N_{\rm{ref}}\right]^2 }\over{ \left[ \sigma_{\rm{exp}}^{\,2}(q)/N\right]^2/(N-1)+\left[ \sigma_{\rm{exp,ref}}^{\,2}(q)/N_{\rm{ref}}\right]^2/(N_{\rm{ref}}-1)}}.\eqno(52)]

If the p-value of this test is smaller than [\alpha], the two profiles are locally different and [{\cal I}(q)] is discarded.
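Equations (51) and (52) can be computed directly; the following minimal Python sketch evaluates the Welch t statistic and the Satterthwaite degrees of freedom at a single scattering angle q (the function name is hypothetical; obtaining the p-value from t and [\nu] is then a standard table or library lookup):

```python
import math

def welch_statistic(mu, sigma2, mu_ref, sigma2_ref, n, n_ref):
    """Welch two-sample t statistic [equation (51)] and Satterthwaite
    degrees of freedom [equation (52)] for one scattering angle q.

    mu, sigma2       : mean and variance of the candidate profile at q
    mu_ref, sigma2_ref : same quantities for the reference profile
    n, n_ref         : number of repetitions of each experiment
    """
    a = sigma2 / n
    b = sigma2_ref / n_ref
    t = abs(mu - mu_ref) / math.sqrt(a + b)          # equation (51)
    nu = (a + b) ** 2 / (a ** 2 / (n - 1)            # equation (52)
                         + b ** 2 / (n_ref - 1))
    return t, nu
```

With equal variances and sample sizes, [\nu] reduces to the familiar pooled value N + Nref - 2, which provides a quick sanity check.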

Usually, the second profile spans a wider q range, so that comparison with the reference profile cannot be carried out at higher angles. In such a case the remaining portion of the second profile is marked as valid, and becomes the reference. Next, the third profile is compared with the low-angle part of the first profile and with the high-angle part of the second profile. If the third profile spans a wider q range than the second profile, its tail becomes the reference for the remaining q values, and so on until all SAXS profiles have been compared.

2.5. Merging

The merging step simply consists of pooling all compatible data points while keeping track of their origins; Gaussian process interpolation is then performed on this merged dataset. Some datasets may overlap, leading to multiple intensities at the same q values. In that case we discard the points with the largest standard deviations. This behaviour can be disabled.
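The pooling and de-duplication described above can be sketched as follows. This is an illustrative array-based implementation, not the published code; the function name `merge_points` is an assumption, and in this sketch only the single lowest-uncertainty point is kept at each duplicated q value.

```python
import numpy as np

def merge_points(q, intensity, sigma, origin):
    """Pool compatible points; at duplicated q values, keep only the
    point with the smallest standard deviation.  `origin` records which
    input profile each point came from."""
    order = np.lexsort((sigma, q))  # sort by q, then by sigma within each q
    q, intensity, sigma, origin = (np.asarray(a)[order]
                                   for a in (q, intensity, sigma, origin))
    # Keep the first (lowest-sigma) point of each run of equal q values
    keep = np.concatenate(([True], q[1:] != q[:-1]))
    return q[keep], intensity[keep], sigma[keep], origin[keep]
```

Disabling this pruning, as the text allows, would amount to returning the pooled arrays without applying the `keep` mask.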

3. Conclusion

In this article we have developed SAXS Merge, a fully automated method for merging SAXS profiles in a robust and statistically principled way. It has been released as both a software package and a webserver, as described by Spill et al. (2014[Spill, Y. G., Kim, S. J., Schneidman-Duhovny, D., Russel, D., Webb, B., Sali, A. & Nilges, M. (2014). Submitted.]). The required input consists only of the buffer-subtracted profile files in a three-column format (q, intensity, standard deviation).

Acknowledgements

YGS thanks Riccardo Pellarin for discussion about Bayesian scoring. MN acknowledges funding from the European Union (FP7-IDEAS-ERC 294809).

References

Breidt, F. J., Erciulescu, A. & van der Woerd, M. (2012). J. Time Ser. Anal. 33, 704-717.
Gibbs, M. & MacKay, D. J. (1997). Efficient Implementation of Gaussian Processes. Technical Report. Cavendish Laboratory, Department of Physics, Cambridge University, UK.
Glatter, O. & Kratky, O. (1982). Small Angle X-ray Scattering. New York: Academic Press.
Hammouda, B. (2010). J. Appl. Cryst. 43, 716-719.
Hura, G., Menon, A., Hammel, M., Rambo, R., Poole II, F., Tsutakawa, S., Jenney Jr, F., Classen, S., Frankel, K., Hopkins, R., Yang, S., Scott, J., Dillard, B., Adams, M. & Tainer, J. (2009). Nat. Methods, 6, 606-612.
Jacques, D. A. & Trewhella, J. (2010). Protein Sci. 19, 642-657.
Kuwamoto, S., Akiyama, S. & Fujisawa, T. (2004). J. Synchrotron Rad. 11, 462-468.
Lipfert, J., Herschlag, D. & Doniach, S. (2009). Riboswitches, edited by A. Serganov, Methods in Molecular Biology, Vol. 540, pp. 141-159. Totowa: Humana Press.
MacKay, D. J. C. (2003). Information Theory, Inference and Learning Algorithms, 1st ed. Cambridge University Press.
O'Hagan, A. & Forster, J. (2004). Bayesian Inference. London: Arnold.
Rambo, R. P. & Tainer, J. A. (2010). Curr. Opin. Struct. Biol. 20, 128-137.
Rasmussen, C. & Williams, C. (2006). Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). Cambridge: MIT Press.
Schneidman-Duhovny, D., Kim, S. & Sali, A. (2012). BMC Struct. Biol. 12, 17.
Spill, Y. G., Kim, S. J., Schneidman-Duhovny, D., Russel, D., Webb, B., Sali, A. & Nilges, M. (2014). Submitted.
Svergun, D. I. (2010). Biol. Chem. 391, 737-743.
Welch, B. L. (1947). Biometrika, 34, 28-35.


J. Synchrotron Rad. (2014). 21, 203-208   [ doi:10.1107/S1600577513030117 ]

This is an open-access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.