Bulk-solvent and overall scaling revisited: faster calculations, improved results

A fast analytical method for calculating mask-based bulk-solvent scale factors and overall anisotropic correction factors is introduced.

A fast and robust method for determining the parameters for a flat (mask-based) bulk-solvent model and overall scaling in macromolecular crystallographic structure refinement and other related calculations is described. This method uses analytical expressions for the determination of optimal values for various scale factors. The new approach was tested using nearly all entries in the PDB for which experimental structure factors are available. In general, the resulting R factors are improved compared with previously implemented approaches. In addition, the new procedure is two orders of magnitude faster, which has a significant impact on the overall runtime of refinement and other applications. An alternative function is also proposed for scaling the bulk-solvent model and it is shown that it outperforms the conventional exponential function. Similarly, alternative methods are presented for anisotropic scaling and their performance is analyzed. All methods are implemented in the Computational Crystallography Toolbox (cctbx) and are used in PHENIX programs.
In the commonly used approach, the total structure factor is defined as

F_model = k_total (F_calc + k_mask F_mask),   (1)

where k_total is the overall Miller-index-dependent scale factor, F_calc and F_mask are the structure factors computed from the atomic model and the bulk-solvent mask, respectively, and k_mask is a bulk-solvent scale factor. The mask can be computed efficiently using exact asymmetric units as described in Grosse-Kunstleve et al. (2011). The overall scale factor k_total can be thought of as the product

k_total = k_overall k_isotropic k_anisotropic,   (2)

where k_overall is the overall scale factor and k_isotropic and k_anisotropic are the isotropic and anisotropic scale factors, respectively. k_overall is a scalar that can be obtained by minimizing the least-squares residual

LS = Σ_s (F_obs - k_overall F')²,   (3)

where F_obs are the observed structure-factor amplitudes and

F' = k_isotropic k_anisotropic |F_calc + k_mask F_mask|.   (4)

The sum is over all reflections. Solving ∂LS/∂k_overall = 0 leads to

k_overall = Σ_s F_obs F' / Σ_s F'².   (5)

In the exponential model the anisotropic scale factor is defined as

k_anisotropic = exp(-2π^2 s^t U_cryst s),   (6)

where U_cryst is the overall anisotropic scale matrix equivalent to U* defined in Grosse-Kunstleve & Adams (2002); s^t = (h, k, l) is the transpose of the Miller-index column vector s. Usón et al. (1999) define a polynomial anisotropic scaling function that can be rewritten in matrix notation as

k_anisotropic = s^t V_0 s + s² (s^t V_1 s),   (7)

where V_0 and V_1 are symmetric 3 × 3 matrices, s² = s^t G* s and G* is the reciprocal-space metric tensor. Expression (7) is equivalent to the first terms in the Taylor series expansion of the exponential function (6),

exp(-2π^2 s^t U_cryst s) ≈ 1 - 2π^2 s^t U_cryst s + 2π^4 (s^t U_cryst s)(s^t U_cryst s),   (8)

with the constant term omitted. The omission of the constant 1 means that k_anisotropic is equal to zero for the reflection F_000, as follows from (7). Therefore, in this work we modify (7) by adding the constant:

k_anisotropic = 1 + s^t V_0 s + s² (s^t V_1 s).   (9)

The bulk-solvent scale factor is traditionally defined as

k_mask = k_sol exp(-B_sol s²/4),   (10)

where k_sol and B_sol are the flat bulk-solvent model parameters (Phillips, 1980; Jiang & Brünger, 1994; Fokine & Urzhumtsev, 2002b).
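As a concrete sketch of the closed-form overall scale in equation (5), assuming numpy arrays of observed amplitudes and complex model structure factors (the function name and argument layout are illustrative, not the cctbx API):

```python
import numpy as np

def k_overall_analytic(f_obs, f_calc, f_mask, k_mask, k_iso, k_aniso):
    """Closed-form overall scale (equation 5):
    k_overall = sum(F_obs * F') / sum(F'^2), where
    F' = k_iso * k_aniso * |F_calc + k_mask * F_mask|  (equation 4)."""
    f_prime = k_iso * k_aniso * np.abs(f_calc + k_mask * f_mask)
    return np.sum(f_obs * f_prime) / np.sum(f_prime**2)
```

Because the residual (3) is quadratic in k_overall, this single division replaces any iterative refinement of the overall scale.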
Depending on the calculation protocol, k_isotropic may be assumed to be a part of k_anisotropic or it can be assumed to be exponential, k_isotropic = exp(-Bs²/4), where B is a scalar parameter. Alternatively, it may be determined as described in §2.3 below.
The determination of the anisotropic scaling parameters (U cryst or V 0 and V 1 ) and the bulk-solvent parameters k sol and B sol requires the minimization of the target function (3) with respect to these parameters. Despite the apparent simplicity, this task is quite involved owing to a number of numerical issues (Fokine & Urzhumtsev, 2002b;Afonine et al., 2005a). Previously, we have developed a robust and thorough procedure (Afonine et al., 2005a) to address these issues. This procedure is used routinely in PHENIX (Adams et al., 2010). However, owing to its thoroughness the procedure is relatively slow and may account for a significant fraction of the execution time of certain PHENIX applications (for example, phenix.refine).
In this paper, we describe a new procedure which is approximately two orders of magnitude faster than the approach described in Afonine et al. (2005a) and often leads to a better fit of the experimental data. The speed gain is the result of an analytical determination of the optimal bulk-solvent and scaling parameters. The better fit to the experimental data is partially the result of employing a more detailed model for k_mask compared with the exponential model in equation (10) and is partially a consequence of the new analytical optimization method. Analytical optimization eliminates the possibility of becoming trapped in local minima, which exists in all iterative local optimization methods, including the procedure used previously.

Anisotropic scaling: exponential model
To obtain the elements of the anisotropic scaling matrix (6), the minimization of (3) is replaced by the minimization of

LSL = Σ_s [ln(F_obs) - ln(|F_model|)]².   (11)

For this, we assume that F_obs and |F_model| are positive. We also assume that the minima of (3) and (11) are at similar locations. This assumption is not obvious and, as discussed below, may not always hold (see §3.3 and Table 2). Expression (11) can be rewritten as

LSL = 4π^4 Σ_s (s^t U_cryst s + Z)².   (12)

Here, Z = [1/(2π^2)] ln[F_obs (k_overall k_isotropic |F_calc + k_mask F_mask|)^-1]. Defining

V = (h², k², l², 2hk, 2hl, 2kl)^t   (13)

and the vector of matrix coefficients

U = (u_11, u_22, u_33, u_12, u_13, u_23)^t,   (14)

the target function determining the optimal U_cryst is

g_LSL = Σ_s (V^t U + Z)².   (15)

The U_cryst values that minimize (15) are determined from the condition ∇_U g_LSL = 0, which gives a system of six linear equations

M U = b,   (16)

where M = Σ_s V ⊗ V, V = (h², k², l², 2hk, 2hl, 2kl)^t, ⊗ denotes the outer product and b = -Σ_s Z V. The desired U_cryst matrix is determined by solving the system (16):

U = M^-1 b.   (17)

Crystal-system-specific symmetry constraints can be incorporated via a constraint matrix (C), which we derive from first principles by solving the system of linear equations R^t U R = U for all rotation matrices R of the crystal-system point group. Alternatively, symmetry constraints are often derived manually and tabulated (Nye, 1957; Giacovazzo, 1992). For example, the constraint matrix for the tetragonal crystal system is

C = [[1, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]].   (18)

The number of rows in C determines the number of independent coefficients of U_cryst. Let U_ind be the column vector of independent coefficients; the (redundant) set of six coefficients U_cryst is then obtained via

U = C^t U_ind.   (19)

The constraint matrix C is introduced into equations (16) and (17) above as follows:

(C M C^t) U_ind = C b,   (20)

U_ind = (C M C^t)^-1 C b.   (21)

The full U_cryst is then determined via equation (19).
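The linear systems (16) and (20)-(21) can be sketched in a few lines of numpy; the function names, the `miller` array layout and the precomputed per-reflection `z` values are illustrative assumptions, not the cctbx implementation:

```python
import numpy as np

def u_cryst_analytic(miller, z):
    """Solve M u = b (equation 16) for the six coefficients of U_cryst,
    with M = sum(V V^t), V = (h^2, k^2, l^2, 2hk, 2hl, 2kl)^t and
    b = -sum(Z V).  `miller` is an (n, 3) integer array; `z` holds the
    per-reflection Z values defined after equation (12)."""
    h, k, l = miller[:, 0], miller[:, 1], miller[:, 2]
    V = np.stack([h*h, k*k, l*l, 2*h*k, 2*h*l, 2*k*l], axis=1).astype(float)
    M = V.T @ V                        # sum over reflections of V (outer) V
    b = -(z[:, None] * V).sum(axis=0)
    return np.linalg.solve(M, b)       # (u11, u22, u33, u12, u13, u23)

def u_cryst_constrained(miller, z, C):
    """Same system projected by a constraint matrix C (equations 20-21):
    u_ind = (C M C^t)^-1 C b, then U_cryst = C^t u_ind (equation 19)."""
    h, k, l = miller[:, 0], miller[:, 1], miller[:, 2]
    V = np.stack([h*h, k*k, l*l, 2*h*k, 2*h*l, 2*k*l], axis=1).astype(float)
    M = V.T @ V
    b = -(z[:, None] * V).sum(axis=0)
    u_ind = np.linalg.solve(C @ M @ C.T, C @ b)
    return C.T @ u_ind
```

For the tetragonal example (18), C has two rows, so the six-parameter problem collapses to a 2 × 2 solve.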

Anisotropic scaling: polynomial model
The polynomial model (Usón et al., 1999) for anisotropic scaling allows the direct use of the residual (3) to find the optimal coefficients for V_0 and V_1 in equation (9). An advantage of this model is that no assumptions about the similarity of the location of the minima of targets (3) and (11) are required. Conceptually, a disadvantage of equation (9) is that it is only an approximation of equation (6), as was shown above. However, the number of parameters is doubled in equation (9) compared with equation (6), since V_0 and V_1 are treated independently. The increased number of degrees of freedom may therefore compensate for approximation inaccuracies.
Similarly to §2.1, the optimal coefficients for V_0 and V_1 are determined by the condition ∇_V LS = 0 and can be obtained by solving a system of 12 linear equations. We follow the arguments of Usón et al. (1999) for not using symmetry constraints in this case.
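Because the model (9) is linear in the 12 coefficients of V_0 and V_1, the fit reduces to one linear least-squares solve. The sketch below assumes the linear-in-coefficients form 1 + s^t V_0 s + s²(s^t V_1 s) reconstructed above and a precomputed F' = k_overall k_isotropic |F_calc + k_mask F_mask|; names and array layouts are hypothetical:

```python
import numpy as np

def k_aniso_poly_fit(miller, s_sq, f_obs, f_prime):
    """Fit the 12 coefficients of the polynomial model (equation 9) by
    linear least squares on the residual (3):
        F_obs ~ F' * (1 + s^t V0 s + s^2 * s^t V1 s).
    `s_sq` holds s^2 = s^t G* s per reflection; `f_prime` is the fixed
    part of the model amplitude."""
    h, k, l = miller[:, 0], miller[:, 1], miller[:, 2]
    V = np.stack([h*h, k*k, l*l, 2*h*k, 2*h*l, 2*k*l], axis=1).astype(float)
    X = np.hstack([V, s_sq[:, None] * V])            # n x 12 design matrix
    coef, *_ = np.linalg.lstsq(f_prime[:, None] * X, f_obs - f_prime,
                               rcond=None)
    return coef[:6], coef[6:]                        # V0, V1 coefficients
```

The normal equations of this least-squares problem are exactly the "system of 12 linear equations" mentioned above; `lstsq` solves them in a numerically stable way.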

Bulk-solvent parameters and overall isotropic scaling
Defining K = k_total^-2 = (k_overall k_isotropic k_anisotropic)^-2, the determination of the desired scaling parameters k_isotropic and k_mask is reduced to minimizing

LS_s(K, k_mask) = Σ_{s∈bin} [K F_obs² - |F_calc + k_mask F_mask|²]²   (22)

in resolution bins, where k_overall and k_anisotropic are fixed. This minimization problem is generally highly overdetermined because the number of reflections per bin is usually much larger than two. Introducing w = |F_mask|², v = Re(F_calc F̄_mask), u = |F_calc|² and I = F_obs², and substituting into (22), leads to

LS_s(K, k_mask) = Σ_{s∈bin} [K I - (u + 2 k_mask v + k_mask² w)]².   (23)

Minimizing (23) with respect to K and k_mask leads to a system of two equations:

∂LS_s/∂K = 0,  ∂LS_s/∂k_mask = 0.   (24), (25)

Developing these equations with respect to k_mask, and introducing new notations for the coefficients, we obtain

K Y_2 = C_0 + C_1 k_mask + C_2 k_mask²,
K (P + Q k_mask) = Σ (v + k_mask w)(u + 2 k_mask v + k_mask² w),   (26)

where Y_2 = Σ I², C_0 = Σ I u, C_1 = 2 Σ I v, C_2 = Σ I w, P = Σ I v and Q = Σ I w. Multiplying the second equation by Y_2 and substituting K Y_2 from the first equation into the new second equation, we obtain a cubic equation

a_3 k_mask³ + a_2 k_mask² + a_1 k_mask + a_0 = 0.   (27)

The senior coefficient in (27) satisfies the Cauchy-Schwarz inequality:

a_3 = (Σ I²)(Σ w²) - (Σ I w)² ≥ 0.   (28)

Therefore, equation (27) can be rewritten as the monic cubic

k_mask³ + (a_2/a_3) k_mask² + (a_1/a_3) k_mask + (a_0/a_3) = 0   (29)

and solved using a standard procedure.
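The per-bin solution above can be sketched as follows; the coefficient bookkeeping follows the normal equations of target (23), while the function name and argument layout are illustrative (a simplified sketch, not the cctbx routine):

```python
import numpy as np

def k_mask_and_K(i_obs, f_calc, f_mask):
    """Analytical per-bin minimization of target (23), with I = F_obs^2,
    u = |F_calc|^2, v = Re(F_calc * conj(F_mask)) and w = |F_mask|^2.
    Eliminating K from the two normal equations leaves a cubic in k_mask
    whose leading coefficient is non-negative by Cauchy-Schwarz."""
    u = np.abs(f_calc)**2
    v = (f_calc * np.conj(f_mask)).real
    w = np.abs(f_mask)**2
    I = np.asarray(i_obs, dtype=float)
    Y2 = (I*I).sum()
    C0, C1, C2 = (I*u).sum(), 2.0*(I*v).sum(), (I*w).sum()
    P, Q = (I*v).sum(), (I*w).sum()
    # cubic a3 k^3 + a2 k^2 + a1 k + a0 = 0 (equations 27-29)
    a3 = Y2*(w*w).sum() - C2*Q          # >= 0 by Cauchy-Schwarz
    a2 = 3.0*Y2*(v*w).sum() - (C1*Q + C2*P)
    a1 = Y2*(2.0*(v*v).sum() + (u*w).sum()) - (C0*Q + C1*P)
    a0 = Y2*(u*v).sum() - C0*P
    roots = np.roots([a3, a2, a1, a0])
    # candidate k_mask values: 0 (no bulk solvent) plus positive real roots
    candidates = [0.0] + [r.real for r in roots
                          if abs(r.imag) < 1e-8*max(1.0, abs(r.real))
                          and r.real > 0]
    def evaluate(k):
        K = (C0 + C1*k + C2*k*k) / Y2   # first normal equation
        residual = ((K*I - (u + 2.0*k*v + k*k*w))**2).sum()
        return K, residual
    k_best = min(candidates, key=lambda k: evaluate(k)[1])
    return evaluate(k_best)[0], k_best
```

Selecting the residual-minimizing candidate reproduces the root-selection rule described in the next paragraph.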
The corresponding values of K are obtained by substituting the roots of equation (29) into the first equation in (26):

K = (Σ I u + 2 k_mask Σ I v + k_mask² Σ I w) / Σ I²,   (30)

where, as above, u = |F_calc|², v = Re(F_calc F̄_mask), w = |F_mask|² and I = F_obs². If no positive root exists, k_mask is assigned a zero value, which implies the absence of a bulk-solvent contribution. If several roots with k_mask ≥ 0 exist then the one that gives the smallest value of LS_s(K, k_mask) is selected.
If desired, one can fit the right-hand side of expression (10) to the array of k_mask values by minimizing the residual

Σ_s [k_mask(s) - k_sol exp(-B_sol s²/4)]²   (31)

for all k_mask > 0. This can be achieved analytically as described in Appendix A. Similarly, one can fit k_overall exp(-B_overall s²/4) to the array of K values.

Presence of twinning
In the case of twinning with N twin-related domains, the total model intensity is

I_model(s) = k_total² Σ_{j=1}^{N} α_j |F_calc(s_j) + k_mask F_mask(s_j)|²,   (32)

where α_j is the twin fraction of the jth domain, T_j is the corresponding twin operator (a 3 × 3 rotation matrix) and k_total includes all scale factors (overall, isotropic and anisotropic). We make the reasonable assumption that k_total and k_mask are identical for all twin domains. Finding the twin fractions α_j can be achieved by solving the minimization problem

min Σ_s [Σ_{j=1}^{N} α_j I_j(s_j) - I(s)]²   (33)

with the constraint condition

C(α_1, ..., α_N) = Σ_{j=1}^{N} α_j - 1 = 0,   (34)

where I(s) = F_obs² and s_j = T_j s. This constrained minimization problem can be reformulated as an unconstrained minimization problem by the standard technique of introducing a Lagrange multiplier λ:

L(α_1, ..., α_N, λ) = Σ_s [Σ_{j=1}^{N} α_j I_j(s_j) - I(s)]² + λ C(α_1, ..., α_N).   (35)

The values {α_1, ..., α_N, λ} that minimize (35) are the solution of the system of N + 1 linear equations with N + 1 variables

∂L/∂α_i = 0 (i = 1, ..., N),  ∂L/∂λ = 0,   (36)

or

Σ_s [Σ_{j=1}^{N} α_j I_j(s_j) - I(s)] I_i(s_i) + λ/2 = 0 (i = 1, ..., N),  Σ_{j=1}^{N} α_j = 1.   (37)

The solution of this system is

(α_1, ..., α_N, λ)^t = M^-1 b,   (38)

with the (N + 1) × (N + 1) matrix

M = [Σ_s V ⊗ V, 1; 1^t, 0],  V = [I_1(s_1), ..., I_N(s_N)].   (39)

Here, 1 is a row or column containing N unit elements that completes the matrix M, and b = [Σ_s I(s) I_1(s_1), ..., Σ_s I(s) I_N(s_N), 1]^t. The values of α are expected to be between 0 and 1, while λ is proportional to the sum of squared intensities. Therefore, it is numerically beneficial to multiply the C(α_1, ..., α_N) term in (35) by a constant Σ_s I²(s) in order to make the value for λ numerically similar to the values of the twin fractions α.
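The bordered linear system for the twin fractions can be sketched as below; the function takes precomputed per-domain model intensities as a simplifying assumption (the twin operators have already been applied), and the names are illustrative:

```python
import numpy as np

def twin_fractions(i_obs, i_model_per_domain):
    """Constrained least squares for twin fractions: minimize
    sum_s (sum_j a_j I_j(s) - I_obs(s))^2 subject to sum_j a_j = 1,
    via a Lagrange multiplier.  `i_model_per_domain` is an (N, n_refl)
    array of model intensities, one row per twin-related domain."""
    Ij = np.asarray(i_model_per_domain, dtype=float)
    N = Ij.shape[0]
    scale = (np.asarray(i_obs)**2).sum()   # keeps the multiplier O(alpha)
    A = Ij @ Ij.T                          # sum_s of the outer products
    c = Ij @ i_obs
    M = np.zeros((N + 1, N + 1))
    M[:N, :N] = A
    M[:N, N] = scale                       # scaled constraint column
    M[N, :N] = 1.0                         # sum of fractions = 1
    rhs = np.concatenate([c, [1.0]])
    sol = np.linalg.solve(M, rhs)
    return sol[:N]                         # twin fractions; sol[N] ~ multiplier
```

The `scale` factor implements the numerical rescaling of the constraint term recommended above, so the multiplier stays comparable in magnitude to the fractions.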
Once the twin fractions have been found, the procedure described in §2.3 can be used to obtain the overall and bulk-solvent scale factors. Similarly to (23), we can write

LS_s(K, k_mask) = Σ_{s∈bin} [K I(s) - Σ_{j=1}^{N} α_j |F_calc(s_j) + k_mask F_mask(s_j)|²]²,   (40)

where α_j are the known twin fractions and K and k_mask are the scale factors to be determined. Similarly to §2.3, we obtain

Σ_{j=1}^{N} α_j |F_calc(s_j) + k_mask F_mask(s_j)|² = Σ_{j=1}^{N} {α_j |F_calc(s_j)|² + 2 k_mask α_j Re[F_calc(s_j) F̄_mask(s_j)] + k_mask² α_j |F_mask(s_j)|²}.   (41)

Introducing new variables as before for equation (23) leads to equations of the same form as (26)-(29), with u, v and w replaced by their twin-fraction-weighted sums. The determination of the twin fractions and of the scales k_total and k_mask is iterated several times until convergence. The determination of α does not guarantee that the individual twin fractions α_j are in the range 0-1. For any α_j outside this range the corresponding twin operation is ignored for the current iteration and the new, smaller set of twin fractions and scales is redetermined. However, in the next iteration the full set of α is tried again.

Implementation of the new protocol
The scale factors involved in the calculation of F model according to equation (1) are highly correlated. Therefore, the order of their determination is important. Empirically, we found that the determination of k isotropic and k mask followed by the determination of k anisotropic works optimally in most cases. The determination of (k mask , k isotropic ) and k anisotropic is repeated several times until the R factor decreases by less than 0.01% between cycles. The number of cycles required to reach convergence is typically between 1 and 5.
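The alternating protocol above can be expressed as a small driver loop; the three callables are hypothetical stand-ins for the (k_mask, k_isotropic) update, the k_anisotropic update and the R-factor evaluation:

```python
def scale_until_converged(update_kmask_kiso, update_kaniso, r_factor,
                          tol=1e-4, max_cycles=20):
    """Alternate the (k_mask, k_isotropic) update and the k_anisotropic
    update until the R factor improves by less than 0.01% (tol = 1e-4)
    between cycles.  The callables mutate some external scaling state;
    r_factor() returns the current R factor."""
    r_prev = r_factor()
    r_new = r_prev
    n = 0
    for n in range(1, max_cycles + 1):
        update_kmask_kiso()
        update_kaniso()
        r_new = r_factor()
        if r_prev - r_new < tol:
            break                      # converged (or no further progress)
        r_prev = r_new
    return r_new, n
```

With well-behaved updates the loop typically exits within the 1-5 cycles quoted above; `max_cycles` is only a safety bound.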

To determine k_anisotropic, our protocol can make use of three available scaling methods: polynomial (poly; §2.2), exponential with analytical calculation of the optimal parameters (exp_anal; §2.1) and exponential with the optimal parameters obtained via L-BFGS (Liu & Nocedal, 1989) minimization (exp_min; Afonine et al., 2005a). The three methods can be tested independently, in which case the result with the lowest R factor is accepted. However, because exp_min is up to an order of magnitude slower than the other two methods it is not expected to be used routinely.
The calculation of k_isotropic and k_mask requires dividing the data into resolution bins (§3.2). If oscillation of k_mask between bins occurs, smoothening (Savitzky & Golay, 1964) is applied to the bin-wise determined values of k_mask such that it reduces the oscillations without altering the monotonic behavior of k_mask as a function of resolution (see Fig. 1). Finally, the smoothed values are assigned to individual reflections using linear interpolation. The k_isotropic scales are updated using equation (5) in order to account for the changed k_mask.
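A minimal sketch of this smoothing-and-interpolation step is given below. For simplicity it uses a symmetric moving average, which coincides with a Savitzky-Golay filter of polynomial order 1 on a uniformly spaced grid (`scipy.signal.savgol_filter` can be swapped in); names and argument layouts are illustrative:

```python
import numpy as np

def smooth_k_mask(k_mask_bins, d_bins, d_per_reflection, window=5):
    """Smooth bin-wise k_mask values, then assign a value to every
    reflection by linear interpolation over resolution d."""
    k = np.asarray(k_mask_bins, dtype=float)
    half = min(window // 2, (len(k) - 1) // 2)
    # moving average == Savitzky-Golay, polyorder 1, on a uniform grid
    k_smooth = np.array([k[max(0, i - half):i + half + 1].mean()
                         for i in range(len(k))])
    order = np.argsort(d_bins)         # np.interp needs increasing x
    return np.interp(d_per_reflection, np.asarray(d_bins)[order],
                     k_smooth[order])
```

Because the filter is a local average, a monotonic trend in k_mask versus resolution is preserved while bin-to-bin oscillations are damped.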
As illustrated in §3.2, the minimum of the R-factor function and the minimum of the least-squares function (22) can be at significantly different locations in the (k_mask, k_isotropic) parameter space. To ensure that the final (k_mask, k_isotropic) values correspond to the lowest R factor, a fast grid search is performed around the optimal values of the least-squares function.
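Such a grid search around the least-squares optimum can be sketched as follows; the `r_factor(k_mask, k_iso)` callable and the grid width are illustrative assumptions:

```python
import numpy as np

def r_grid_search(k_mask0, k_iso0, r_factor, n_steps=11, width=0.5):
    """Scan a small grid around the least-squares optimum
    (k_mask0, k_iso0) and keep the pair with the lowest R factor."""
    best = (r_factor(k_mask0, k_iso0), k_mask0, k_iso0)
    for km in np.linspace(max(0.0, k_mask0 * (1 - width)),
                          k_mask0 * (1 + width), n_steps):
        for ki in np.linspace(k_iso0 * (1 - width),
                              k_iso0 * (1 + width), n_steps):
            r = r_factor(km, ki)
            if r < best[0]:
                best = (r, km, ki)
    return best[1], best[2]
```

Because the least-squares optimum is already close to the R-factor minimum in most cases, a coarse grid with few steps is usually sufficient.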

Binning
The goal of binning is to group data by common features so that each group can be characterized by a set of common parameters. Here, the key parameter is the resolution d of the reflections. Binning schemes with bins containing an approximately equal number of reflections (i.e. the resolution range is uniformly sampled in d^-3) or a predefined number of bins are typically used. Since the low-resolution region of the data is sparse, such binning schemes tend to produce only one or very few low-resolution bins, which is insufficient to best model the bulk-solvent contribution. Unfortunately, decreasing the number of reflections per bin disproportionally increases the number of bins (N_bins) at higher resolution and may still provide insufficient detail for the low-resolution data (Table 1). An alternative approach, which divides the resolution range uniformly on a logarithmic scale ln(d) (Urzhumtsev et al., 2009), efficiently solves this problem. The flowchart of the algorithm is shown in Fig. 2. This scheme allows the higher resolution bins to contain more reflections than the lower resolution bins and provides more detailed binning at low resolution without increasing the total number of bins. An additional reason for using logarithmic binning is that the dependence of the scales on resolution is approximately exponential (see previous sections), which makes the variation of the scale factors between bins more uniform when a logarithmic binning algorithm is used. Table 1 compares binning performed uniformly in d^-3 and in ln(d) spacing for three data sets (PDB entries 3hay, 1kwn and 3gk8). Note the data completeness of the low-resolution bins.

Figure 1. Examples of smoothening of k_mask. The original k_mask (blue; obtained as the solution of equation 29) and that after smoothening (red) are shown for three PDB entries, with the PDB codes shown on the plots.

Table 1. Comparison of binning schemes performed with d^-3 and ln(d) spacing for three selected PDB data sets: 1kwn, 3hay and 3gk8. All three data sets have very low completeness in the lowest resolution bin, which d^-3 binning obscures while ln(d) binning makes clear even when using approximately half the number of bins. Completeness in the high-resolution region is similar in the two binning schemes. For each binning method three columns of data are presented: resolution range (Å), completeness and number of reflections.
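The logarithmic binning described above can be sketched in a few lines; the function names are illustrative, and the edge handling (closed on the low-resolution side) is a simplifying assumption:

```python
import numpy as np

def log_d_bin_edges(d_max, d_min, n_bins):
    """Resolution bin edges uniform in ln(d): successive edges share a
    constant ratio, so low-resolution bins are narrower in reflection
    count than with d^-3 (equal-count) binning."""
    return np.exp(np.linspace(np.log(d_max), np.log(d_min), n_bins + 1))

def assign_bins(d, edges):
    """Map each reflection resolution d to a bin index 0..n_bins-1.
    `edges` is decreasing (d_max -> d_min), so search on negated values."""
    idx = np.searchsorted(-edges, -np.asarray(d), side="left") - 1
    return np.clip(idx, 0, len(edges) - 2)
```

Because the edges form a geometric progression, the ratio edges[i]/edges[i+1] is the same for every bin, which is the ln(d)-uniform property exploited above.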

Systematic tests
We evaluated the performance of the new scaling protocol by applying it to approximately 40 000 data sets selected from the PDB. The structures were selected by evaluating all PDB entries using phenix.model_vs_data (Afonine et al., 2010) and excluding all entries for which the recalculated R work was greater than the published value by five percentage points.
To score the test results, three crystallographic R factors,

R = Σ_s | F_obs - |F_model| | / Σ_s F_obs,   (42)

were computed: using all reflections, using only low-resolution reflections and using only high-resolution reflections. Low-resolution reflections were selected using the condition d > 8 Å, but taking at least the 500 lowest resolution reflections. High-resolution reflections were taken from the highest resolution bin. Each of the three anisotropic scaling methods (poly, exp_anal and exp_min) was tested independently within each run. Additionally, two other tests were performed: one combining poly and exp_anal as described in §3.1 (referred to as poly+exp_anal) and the other using the protocol of Afonine et al. (2005a) (referred to as old). Fig. 3 shows a comparison of the alternative methods for determining k_anisotropic (see §3.1). Comparing the polynomial model (poly) versus the analytical exponential model (exp_anal), with a few minor exceptions poly results in slightly lower R factors overall and for the low-resolution reflections, while exp_anal results in lower R factors for the high-resolution reflections. Comparing poly versus the original exponential model using minimization (exp_min), the R factors are very similar overall and for the high-resolution reflections, while poly often results in lower R factors for the low-resolution reflections. Comparing the two exponential models, exp_min results in lower R factors overall and nearly identical results for the low-resolution reflections, but exp_anal results in lower R factors for the high-resolution reflections. Fig. 4 compares the new protocol combining poly and exp_anal with the old protocol. With very few exceptions, the new protocol performs better for all three resolution groups.
As described above, occasionally the minima of the R-factor function and the LS function (22) are at significantly different locations in the (k_mask, k_isotropic) parameter space (see Fig. 5). This is also illustrated in Table 2. For this, the best values for U_cryst were determined via a systematic search for the minima of the functions (3), (11) and the R factor for three combinations of structures and high-resolution cutoffs. Note the difference in the optimal U_cryst values and the corresponding R factors.

The parameterization of the total model structure factor (1) does not make any assumption about the shape of k_mask; for example, it does not assume it to be exponential (10). This provides an opportunity to explore the behavior of k_mask as a function of resolution and to compare it with the k_mask obtained via (10). Fig. 6 illustrates the differences between the two methods of determining k_mask for six representative PDB entries selected from approximately 40 000 entries after inspection of the k_mask values. We observe that the plots of the values obtained using our new approach are in general significantly different from the exponential function. This observation is in line with Fig. 1 of Urzhumtsev & Podjarny (1995).

Figure 2. Flowchart of the logarithmic resolution-binning algorithm.

Table 2. Comparison of U_cryst corresponding to the minima of the functions LS (3), LSL (11) and the R factor.
At very low resolution the structure factors computed from the atomic model are approximately anticorrelated with the structure factors computed from the bulk-solvent mask:

F_mask(s) ≈ -p F_calc(s).   (43)

Here, p is a scale factor (Urzhumtsev & Podjarny, 1995).
Relation (43) is the basis for alternative bulk-solvent scaling methods that employ the Babinet principle (Moews & Kretsinger, 1975; Tronrud, 1997). Substitution of relation (43) into equation (1) yields

F_model = k_total (1 - p k_mask) F_calc.   (44)

Obviously, F_model is invariant for any combination of scale factors k_total and k_mask satisfying the condition

k_total (1 - p k_mask) = const.   (45)

Since our new scaling procedure determines k_mask and k_isotropic (which are part of k_total) simultaneously, without imposing constraints on their values, these scale factors may assume unusual values in the low-resolution range. However, we observe that in practice this only happens for a very small number of the test cases.

Figure 4. R versus R factor scatter plots comparing the new scaling protocol, using poly+exp_anal for the anisotropic scale factor, with the old protocol. For each structure the full set of structure factors available from the PDB was used to calculate scale factors and to calculate R factors (left). Using the same scale-factor values, the R factors were calculated separately for the low-resolution reflections (middle) and the high-resolution reflections (right). A large spread of points in the vertical direction above the diagonal (red line) in these latter plots indicates that in many cases the scale factors produced by the old protocol resulted in a poorer fit to the data at low and high resolutions, while the new protocol generates scale factors with a good fit across all resolution ranges. See §3.3 for details.

Figure 5. Plots of R factors (with k_isotropic = 0.0961) and the LS function (with k_isotropic = 0.0863) for PDB entry 1kwn (left), and R factors (with k_isotropic = 0.0131) and the LS function (with k_isotropic = 0.0151) for PDB entry 1hqw (right), illustrating that the minima of the R-factor function and the LS function (22) can be at significantly different locations in parameter space. In such cases, a line search around the value of k_mask obtained by minimization of the LS function is necessary in order to obtain a value that minimizes the R factor. For plotting purposes, the values of the LS function were scaled to be similar to the R factors.

Discussion
A new method for overall anisotropic and bulk-solvent scaling of macromolecular crystallographic diffraction data has been developed which is an improvement over the existing algorithm of flat (mask-based) bulk-solvent modeling and overall anisotropic scaling, versions of which are routinely used in various refinement packages such as CNS (Brunger, 2007), REFMAC (Murshudov et al., 2011) and phenix.refine (Afonine et al., 2012). In the process of developing this method, we concluded that the bulk-solvent scale factor k_mask deviates quite significantly from the exponential model that has traditionally been used. This new method is approximately two orders of magnitude faster than the previous implementation and yields similar or often better R factors. Table 3 compares runtimes for a number of selected cases covering a broad range of resolutions and atomic model sizes. The computational speed of the new method therefore makes it possible to robustly compute bulk-solvent and anisotropic scaling parameters even as part of semi-interactive procedures. An inherent feature of the mask-based bulk-solvent model is that it relies on the existing atomic model to compute the mask. This in turn implies that any parts of the unit cell that are not modeled as atoms are considered to belong to the bulk-solvent region. This may obscure weakly pronounced features in residual maps, such as partially occupied solvent or ligands. This limitation is common to all mask-based bulk-solvent modeling methods and has led to the development of algorithms that account for missing atoms (Roversi et al., 2000). In the future, improved maps may be obtained by combining this latter approach with the new fast overall anisotropic and bulk-solvent scaling method that we have presented.
The new method is implemented in the cctbx project (Grosse-Kunstleve et al., 2002) and has been used in a number of PHENIX applications since v.1.8 of the software, most notably phenix.refine (Afonine et al., 2005b, 2012), phenix.maps and phenix.model_vs_data (Afonine et al., 2010). The cctbx project is available at http://cctbx.sourceforge.net under an open-source license. The PHENIX software is available at http://www.phenix-online.org.
APPENDIX A
Analytical derivation of a one-Gaussian approximation of a one-dimensional discrete data set

Our goal is to approximate a set of data points {Y(x_j)}, j = 1, ..., N, with a Gaussian function

a exp(-b x²).   (46)

For this, we use the standard approach of minimizing a least-squares (LS) function,

LS = Σ_{j=1}^{N} [Y(x_j) - a exp(-b x_j²)]².   (47)

If Y(x_j) > 0 for all x_j, j = 1, ..., N, the minimization of LS can be replaced by the minimization of

LSL = Σ_{j=1}^{N} {ln[Y(x_j)] - ln[a exp(-b x_j²)]}².   (48)

The minimum of this LSL function can be determined analytically,

LSL = Σ_{j=1}^{N} {ln(a) - b x_j² - ln[Y(x_j)]}².   (49)

Defining u = ln(a), v_j = x_j² and d_j = ln[Y(x_j)], we obtain

LSL = Σ_{j=1}^{N} (u - b v_j - d_j)².   (50)

The variables {a, b} minimizing the LSL function are determined by the conditions

∂LSL/∂u = 0,  ∂LSL/∂b = 0.   (51)

This leads to

2 Σ_{j=1}^{N} (u - b v_j - d_j) = 0,  -2 Σ_{j=1}^{N} (u - b v_j - d_j) v_j = 0.   (52)

Defining p = Σ_{j=1}^{N} d_j, q = Σ_{j=1}^{N} v_j, r = Σ_{j=1}^{N} v_j² and s = Σ_{j=1}^{N} v_j d_j, we obtain

N u - b q = p,  u q - b r = s.   (53)

From this, we obtain

u = (p r - q s)/(N r - q²)   (54)

and finally

a = exp(u),  b = (1/r)(u q - s).   (55)
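The closed-form fit derived in this appendix translates directly into a few lines of numpy; the function name is illustrative:

```python
import numpy as np

def fit_gaussian_1d(x, y):
    """Closed-form fit of y ~ a*exp(-b*x^2) via the log-residual LSL
    (Appendix A): u = (pr - qs)/(Nr - q^2), b = (uq - s)/r, a = exp(u).
    Requires y > 0 everywhere."""
    x = np.asarray(x, dtype=float)
    d = np.log(np.asarray(y, dtype=float))   # d_j = ln Y(x_j)
    v = x * x                                # v_j = x_j^2
    n = len(x)
    p, q, r, s = d.sum(), v.sum(), (v * v).sum(), (v * d).sum()
    u = (p * r - q * s) / (n * r - q * q)
    b = (u * q - s) / r
    return np.exp(u), b
```

For data that are exactly Gaussian the fit is exact; for noisy data it minimizes the log-residual, which weights small values of Y more heavily than the plain least-squares target.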