Fast and noise-tolerant determination of the center of rotation in tomography
aApplied Physics Program, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA, bAdvanced Photon Source, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439, USA, cDepartment of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA, and dChemistry of Life Processes Institute, Northwestern University, 2170 Campus Drive, Evanston, IL 60208, USA
*Correspondence e-mail: cjacobsen@anl.gov
High-quality tomographic reconstruction is not possible without the accurate localization of the center of rotation. Poor localization leads to artifacts in the data and can even cause reconstructions to fail. There are many approaches to solving this problem, some of which involve the collection of full sinograms, or even provisional tomographic reconstructions, in order to determine the center of rotation. Here, a simple method based on the expected symmetry of the Fourier transform of summed projections approximately 180° apart is presented; unlike cross-correlation methods, it requires only a single Fourier transform to compute, and uses mainly low spatial frequency information which is less susceptible to noise. This approach is shown to be fast, and robust against poor signal-to-noise as well as to projection images acquired at angles that are not exactly 180° apart. This rapid method can be useful as a first step in the processing of tomographic data.
Keywords: X-ray tomography; center of rotation; tomogram alignment.
1. Introduction
In single-tilt-axis projection tomography, a set of 2D slices of the object are reconstructed, with each slice arising from a set of line projections obtained as the object is rotated as shown in Fig. 1. [Some electron microscopes have double-tilt-axis specimen stages (Penczek et al., 1995); hence our use of the term `single-tilt-axis' tomography, which describes most tomography experiments at synchrotron light source facilities, and most commercial X-ray microtomography systems.] However, if one uses an incorrect value for the position of the actual axis of rotation, objects appear to `wobble' about the correct axis, resulting in `tuning fork' artifacts in the reconstructed slice if projections are acquired over 180° (Shepp et al., 1979), or one type of ring artifact for 360° data. A number of approaches have been developed to find and correct for this center-of-rotation (COR) shift error:
(i) One approach is to compare a 0° image with the mirrored version of an image taken 180° away, and take the COR shift to be half of the phase correlation offset between these two images (Gullberg et al., 1986; Donath et al., 2006). This operation requires a total of three Fourier transforms, as well as identification of the cross-correlation peak. Alternatively, one can use the shift of the center of mass between these two images for the same purpose (Donath et al., 2006; Dong et al., 2013).
(ii) One simple approach is to fit a sine function to the sinogram's angle-by-angle center of mass and use this to calculate the COR shift (Hogan et al., 1993). This approach can also be used on the raw sinogram data (Azevedo et al., 1990; Donath et al., 2006). One can also track the center crossing points of identified features in the sinogram (Li et al., 2010). These feature-tracking approaches carry the added benefit that, for low-noise data sets, individual projections can be shifted such that features follow a sinusoidal path. When taking the Fourier transform of the sinogram, COR errors lead to the presence of aliasing errors at high spatial frequencies (Edholm et al., 1986), which can be minimized in an alignment strategy (Vo et al., 2014).
(iii) Because of the effects on the reconstructed slice image, one can carry out reconstructions with a range of choices for the COR shift and manually select the one with the best appearance (Walls et al., 2005). Automated approaches include maximizing the number of positive, non-zero pixels in the reconstructed slice (Brunetti & De Carlo, 2004), or the contrast of the slice image (Birk et al., 2010), or minimizing the image's total variation (Cheng et al., 2018). One can also train a neural network to recognize ring-like artifacts and use this network to identify the image with the correct COR shift (Yang et al., 2017).
(iv) In an iterative reprojection approach, one first obtains a provisional 3D image and then aligns projection images to this volume, repeating until convergence is achieved (Dengler, 1989; Owen & Landis, 1996; Birk et al., 2010; Parkinson et al., 2012). This approach can also be applied in electron tomography (Pryor et al., 2017), and to a single slice in the reconstructed object. Iterative reprojection can correct for additional errors beyond the COR shift, and speedups can be obtained by alternating between alignment steps and reconstruction steps in iterative reconstruction approaches (Gürsoy et al., 2017).
Of these approaches, the first involves minimal data and computation. The second requires full sinogram data, and somewhat longer computation time. The third and fourth approaches require numerous reconstructions, either of a single slice with varying COR guesses (the third approach) or of the entire 3D data set (the fourth approach) with consequent costs in data storage and compute time.
We present here a rapid approach for determination of the COR shift based on phase symmetry in the frequency domain of two projection images obtained approximately 180° apart. This resembles the first approach listed above with some distinct differences. First, while phase correlation finds the shift between a projection and the mirror of its 180° counterpart, it normally involves three two-dimensional Fourier transforms, whereas the phase symmetry approach requires only a single one-dimensional Fourier transform. Second, we make use of the fact that this phase symmetry is quite insensitive to noise at low spatial frequencies of the projection images, as well as being insensitive as to whether the images were taken exactly 180° apart. This rapid COR shift determination method can be used alone, or to provide a highly accurate starting point for correction of additional errors in the third and fourth approaches described above.
2. Method
For an object rotated about the axis shown in Fig. 1, single point features will produce sinusoids in the sinogram p(t, θ) (using the notation from Fig. 1); therefore the integral ∫ p(t, θ) dθ over all projection angles will, for a single feature, result in an arcsine distribution for emission tomography and an inverted arcsine distribution in the case of transmission tomography. This distribution has even symmetry about the rotation axis as shown in Fig. 2(c). Since we collect a discrete number of projections, the integral becomes a sum over those projections, Σθ p(t, θ), which we will call the θ-sum of a sinogram. We may also consider a reflection pair, which we define as the sum of one projection taken at θ′ and another taken at θ′ + 180°. For single point features, these projection pairs will sample portions of the arcsine distribution on opposite ends of the rotation axis, resulting in the same even symmetry as shown in Fig. 2(d).
A continuous object can be thought of as an ensemble of single point features. This will result in a linear combination of arcsine distributions in the θ-sum which will necessarily retain the same symmetry about the rotation axis. If, however, the actual rotation axis is shifted from the center of the recorded projection due to a COR error (as shown in Fig. 3), the distribution will be offset; one will instead have a non-centrosymmetric distribution.
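To make this symmetry argument concrete, here is a minimal numerical sketch (not part of the paper) of the θ-sum of a single point feature: the accumulated distribution approximates the arcsine density and is even about the rotation-axis column t0, so its center of mass recovers t0.

```python
import numpy as np

# Hypothetical point feature a distance r from a rotation axis at column t0;
# each projection records the feature at t = t0 + r*sin(theta).
nt, n_theta = 513, 3600
t0, r = 256, 100.0
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)

theta_sum = np.zeros(nt)
for t in t0 + r * np.sin(theta):       # the sinusoidal trace in the sinogram
    theta_sum[int(round(t))] += 1.0    # accumulate the theta-sum

# The theta-sum approximates the even-symmetric arcsine distribution, so its
# center of mass falls on the rotation axis position t0.
center_of_mass = (np.arange(nt) * theta_sum).sum() / theta_sum.sum()
print(round(center_of_mass, 2), t0)    # ~256.0  256
```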
A simple way to determine whether a function is non-centrosymmetric is to examine its Fourier transform, since the shift theorem relates an offset center in real space to a phase ramp in Fourier space. If one did this row by row in the reflection pair, one could separate out erroneous contributions to the phase ramp from rows with no features (but only noise), or from rows with other types of errors. If one instead took the Fourier transform of the sum of all rows in the reflection sum, these erroneous contributions would enter the phase ramp on an equal footing with the good rows; whereas, if one takes the Fourier transform of individual rows one by one and then sums these complex vectors, the non-erroneous rows contribute to the phase ramp in a correlated fashion while the noise- or error-corrupted rows presumably contribute in an uncorrelated fashion. Therefore, for Nt pixels across the detector in each of Ny detector rows, one is best off either adding the contributions of Ny Fourier transforms, each of dimension Nt, or taking a single Fourier transform of the (Nt, Ny) data array rearranged as a 1D array of dimension N′ = NtNy; we chose the latter approach.
The fast Fourier transform of this N′ = NtNy reflection sum array will contain information about the phase ramp resulting from a COR error. However, the magnitudes in the Fourier transforms of images tend to decline as a function of increasing spatial frequency (Burton & Moorhead, 1987; van der Schaaf & van Hateren, 1996), while uncorrelated noise due to photon statistics is equally present at all spatial frequencies. In addition, the appearance of different features in different instances of projection pairs will affect higher spatial frequencies in the θ-sum. Therefore one will have the highest fidelity measurement of the presence of any phase ramp at the lowest non-zero spatial frequency fmin in the FFT of the N′ reflection sum array, as shown in Fig. 3(d) and in Fig. 4(b) below. In principle we could improve the speed of this method by calculating only this lowest spatial frequency value, such as by using the Goertzel algorithm (Goertzel, 1958) with only N′ multiplications and 2N′ additions. However, the optimizations coded into standard FFT routines [which require O(N′ log N′) operations] make the full 1D FFT calculation quite fast enough. Thus we calculate the phase ramp ϕ(fmin) from the real part R and imaginary part I of the lowest non-zero spatial frequency of the FFT of the projection pair θ-sum as
ϕ(fmin) = tan⁻¹{sgn[R(fmin)] I(fmin) / |R(fmin)|},   (1)

where sgn[R(fmin)] is the sign of the real component, which accounts for transmission or emission tomography. We convert the phase shift to a position shift t′ away from the center of our sinogram (the COR error) using

t′ = ϕ(fmin) / (2π fmin),   (2)
thus yielding a rapid approach to finding the COR error t′.
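A minimal sketch of the calculation in equations (1) and (2) is given below. It is not the authors' released code (see Section 4.1 for that); it assumes a reflection pair supplied as two normalized 2D projections of shape (Ny, Nt) with the rotation axis parallel to the detector columns, and it uses the per-row Fourier transform variant discussed above, with fmin taken as one cycle across the detector width.

```python
import numpy as np

def phase_symmetry_cor(proj_0, proj_180):
    """Estimate the COR error t' (in pixels) from a reflection pair.

    Sketch only: proj_0 and proj_180 are assumed to be normalized projection
    images of shape (Ny, Nt) acquired approximately 180 degrees apart.
    """
    pair = np.asarray(proj_0, dtype=float) + np.asarray(proj_180, dtype=float)
    ny, nt = pair.shape

    # Fourier transform each detector row and add the complex coefficients,
    # so that rows containing real features add coherently while rows with
    # only noise add incoherently (the Ny transforms of length Nt option).
    row_fft = np.fft.fft(pair, axis=1)
    coeff = row_fft[:, 1].sum()            # lowest non-zero frequency, f_min = 1/Nt

    re, im = coeff.real, coeff.imag
    phi = np.arctan(im * np.sign(re) / abs(re))   # equation (1)

    # Equation (2): the shift theorem converts the phase ramp to pixels.
    # The sign of t' depends on the FFT sign convention and on which
    # direction is defined as positive; check once against known data.
    return phi * nt / (2.0 * np.pi)
```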
One practical consideration is that the method requires the object to be entirely within the frame of the image for all exposures; otherwise the full arcsine distribution is not collected and the center finding will begin to fail. Also, since we use only the lowest spatial frequency, the method is strongly affected by illumination gradients along the t-axis. Normalization to the flat- and dark-field projections, a standard step in any tomographic reconstruction, corrects for most illumination gradients. Further improvement can be had by normalizing the background visible on the left and right sides of the projection images to be equal. Both of these normalizations are available in the TomoPy Python package, as sketched below.
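As a usage illustration, the two normalization steps mentioned above might look as follows in TomoPy; the arrays here are synthetic placeholders standing in for measured projections and flat-/dark-field images.

```python
import numpy as np
import tomopy

# Placeholder data: a stack of raw projections (angles, rows, columns) plus
# flat-field and dark-field image stacks.
rng = np.random.default_rng()
raw = rng.poisson(900.0, size=(181, 64, 256)).astype(np.float32)
flat = rng.poisson(1000.0, size=(10, 64, 256)).astype(np.float32)
dark = rng.poisson(5.0, size=(10, 64, 256)).astype(np.float32)

proj = tomopy.normalize(raw, flat, dark)   # flat- and dark-field correction
proj = tomopy.normalize_bg(proj, air=10)   # equalize left/right background using 10 edge pixels
```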
3. Demonstration
We compared the above phase symmetry method with two other popular center-finding algorithms: phase correlation between the reflection pair of projections at θ′ and θ′ + 180°, and the sinogram FFT method which involves taking the Fourier transform of a complete sinogram. We tested the phase symmetry and phase correlation methods as a function of departures Δθ from a perfect projection pair at angles of θ′ and θ′ + 180° + Δθ. We also compared all three approaches against the signal-to-noise ratio (SNR) of the acquired projection images.
We compared these approaches by using a data set from TomoBank (De Carlo et al., 2018) of phase contrast tomography of duplex stainless steel taken at the European Synchrotron Radiation Facility (ESRF; TomoBank data set tomo_00064). This data set consists of 450 projections acquired over a 360° range, with a high density of features such that one cannot easily see individual features in the sinogram. Because this data set has very high SNR, we generated a version of this data set in which the sinogram was first normalized to its maximum value, then multiplied by an assumed incident fluence in photons per pixel, and each projection image had Poisson noise added according to each pixel's assumed photon count. One can then calculate the projection data SNR from the correlation of two separate images I1 and I2 with two independent instances of Poisson noise (Bershad & Rockmore, 1974; Huang et al., 2009). Because the SNR depends on the contrast of the object, and because of statistical fluctuations due to specific instances of Poisson noise, we show in Table 1 the fluence values used and the resulting SNR in various tests.
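The noise generation and SNR estimation used in these tests can be sketched as follows (our own illustrative helpers, not the paper's code); the SNR estimator SNR = r/(1 − r), with r the correlation coefficient between two independent noisy realizations, is the standard cross-correlation estimator assumed here.

```python
import numpy as np

def add_poisson_noise(sino, fluence, rng=None):
    """Scale a noise-free sinogram so its maximum corresponds to `fluence`
    photons per pixel, then draw Poisson counts."""
    rng = np.random.default_rng() if rng is None else rng
    expected = sino / sino.max() * fluence
    return rng.poisson(expected).astype(float)

def correlation_snr(img1, img2):
    """Estimate SNR from two independent noisy realizations of the same
    image via SNR = r / (1 - r) (cf. Bershad & Rockmore, 1974)."""
    r = np.corrcoef(img1.ravel(), img2.ravel())[0, 1]
    return r / (1.0 - r)

# usage with a hypothetical noise-free sinogram `sino`:
# noisy_a = add_poisson_noise(sino, 1000.0)
# noisy_b = add_poisson_noise(sino, 1000.0)
# print(correlation_snr(noisy_a, noisy_b))
```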
3.1. Phase correlation method
The phase correlation method performs phase correlation in the Fourier domain on two projections separated by 180° (Gullberg et al., 1986; Donath et al., 2006). Since the object has been rotated 180°, the second projection image is mirrored about the rotation axis before processing. The cross-correlation between the two images will then give a peak at a position that is offset from the array center by twice the COR error t′ along the t axis (it will also provide a measure of sample drift along the y axis during data acquisition). One can further increase the accuracy of this method by correlating only on the phase of the two Fourier transforms (Shaw et al., 1989). Sub-pixel accuracy can be obtained by embedding the product of the Fourier transforms of the two images in a larger array before the inverse FFT, either based on the entire array or on a sub-array (Guizar-Sicairos et al., 2008; van der Walt et al., 2014). Conventional sub-pixel phase correlation with an upsampling factor m requires the Nt × Ny array to be embedded in an mNt × mNy array before the inverse FFT. A less memory-intensive approach obtains a coarse estimate of the correlation peak on an Nt × Ny array, then refines it to sub-pixel precision using a matrix-multiplication discrete Fourier transform (DFT) evaluated only on a small sub-array around the coarse estimate. In our case we used a sub-pixel accuracy of a tenth of a pixel (i.e. m = 10). Thus, while the phase symmetry approach requires only a single 1D FFT of length NtNy, the phase correlation method requires either two Nt × Ny FFTs and one mNt × mNy inverse FFT (for the conventional approach), or two Nt × Ny FFTs, one Nt × Ny inverse FFT, and one matrix-multiplication DFT on a small array.
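For reference, the phase correlation baseline can be sketched with scikit-image, whose phase_cross_correlation routine implements the upsampled matrix-multiplication DFT of Guizar-Sicairos et al. (2008); the function name and m = 10 upsampling below follow the description above, while the variable names are placeholders.

```python
from skimage.registration import phase_cross_correlation

def phase_correlation_cor(proj_0, proj_180, upsample=10):
    """Sketch of the phase correlation baseline for two projections of shape
    (Ny, Nt) taken approximately 180 degrees apart."""
    mirrored = proj_180[:, ::-1]            # mirror about the rotation axis
    shift, _, _ = phase_cross_correlation(proj_0, mirrored,
                                          upsample_factor=upsample)
    # shift is (row, column); the column offset is twice the COR error t'.
    return shift[1] / 2.0
```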
3.2. Sinogram FFT method
When one has a complete 360° sinogram over a wide range of projection angles and no COR error, its Fourier transform should only have significant values inside a double wedge region of the transform (Edholm et al., 1986). Therefore the sinogram FFT method (Vo et al., 2014) finds the COR error by taking a 180° sinogram and mirroring it about the rotation axis so as to form a 360° sinogram prior to taking its FFT. By shifting the latter 180° of the sinogram along the t-axis, one can find where the signal outside of the double wedge is lowest, and thus recover the COR error. Since this sinogram FFT method requires repeated sinogram shifting and FFT calculation over a range of test values for the COR error, it tends to be slower than both the phase symmetry and phase correlation methods.
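TomoPy provides an implementation of this sinogram FFT approach as find_center_vo(); a brief usage sketch on projections simulated from the built-in Shepp-Logan phantom follows (a small phantom is used purely so the example runs quickly).

```python
import tomopy

obj = tomopy.shepp3d(size=128)            # 3D Shepp-Logan phantom
ang = tomopy.angles(180)                  # 180 angles over 180 degrees
proj = tomopy.project(obj, ang)           # simulated projections (angles, rows, columns)

center = tomopy.find_center_vo(proj)      # sinogram FFT (double wedge) estimate
print('rotation axis at column', center)
```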
3.3. Test versus projection data SNR
The phase symmetry method determines the COR error using the overall phase ramp in the Fourier transform of a reflection pair, which can be measured from the lowest non-zero spatial frequency. As noted in Section 2, because signal in the FFT tends to decline with spatial frequency while the effects of Poisson noise do not, we first tested the reproducibility of the three methods in the presence of noise while also examining the power spectrum of the sinogram pair FFT. The results given in Fig. 4(a) show that the phase symmetry method is the most robust of the three tested approaches against low SNR. The reason is made clear in Fig. 4(b), which shows the power spectrum from the FFT of the reflection pair of projections used in the phase symmetry and phase correlation methods. In this test, the phase symmetry method calculation was 32 times faster than phase correlation, and 640 times faster than the sinogram FFT approach.
3.4. Test against non-180° projection pairs
The phase symmetry and phase correlation methods require only two projections acquired exactly 180° apart, which we term a `reflection pair'. To examine the robustness of these approaches, we tested them against angular departures Δθ from this reflection pair relationship; that is, we tested the methods using one projection at θ′ against a second projection at θ′ + 180° + Δθ. These rotation errors can occur for multiple reasons:
(i) Pure projections separated by exactly 180° carry redundant information, so they are not always collected. One might instead collect Nθ projections over an angular range of 180(1 − 1/Nθ)°.
(ii) Some non-sequential projection acquisition schemes do not inherently collect 180° projections (Köhler, 2004; Münch, 2011).
(iii) Angular errors in rotation stages can result in unintentional deviations from 180°.
Therefore, for each value of Δθ and of SNR (Table 1), we generated 100 projection pairs with different instances of simulated Poisson noise, and averaged the COR error determined using each method. The results shown in Fig. 5 indicate that the phase symmetry approach outperforms phase correlation, producing more accurate results both for relatively large angular departures of up to Δθ = ±30° and at lower SNR values.
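A sketch of this test procedure, reusing the hypothetical helpers add_poisson_noise and phase_symmetry_cor from the earlier sketches, might look like the following; here `sino` is assumed to be a noise-free sinogram of shape (n_angles, Nt) covering 360°, and `delta` is the angular departure expressed in projection steps.

```python
import numpy as np

def mean_cor_estimate(sino, delta, fluence, n_trials=100):
    """Average the phase symmetry COR estimate over many noisy reflection
    pairs at theta' and theta' + 180 deg + delta."""
    i0 = 0                                  # projection index for theta'
    i1 = sino.shape[0] // 2 + delta         # index for theta' + 180 deg + delta
    estimates = []
    for _ in range(n_trials):
        p0 = add_poisson_noise(sino[i0:i0 + 1, :], fluence)
        p1 = add_poisson_noise(sino[i1:i1 + 1, :], fluence)
        estimates.append(phase_symmetry_cor(p0, p1))
    return float(np.mean(estimates))
```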
3.5. Dose fractionation in full-rotation data sets
The dose fractionation theorem states that the total number of photons required to generate a 3D tomographic reconstruction is the same as the number of photons required to produce a 2D projection image of a given sample at the same SNR; that is, one can divide the required dose among Nθ projection angles (Hegerl & Hoppe, 1976). Success in tomographic dose fractionation requires the alignment of low-dose projections (McEwen et al., 1995). We have previously shown that the phase correlation method using differential phase contrast images provides good alignment for both transmission and tomographic data sets (Hong et al., 2014). Given that Figs. 4 and 5 show that the phase symmetry method works well at low SNR, we also tested its ability to accommodate dose fractionation.
To test this, we generated sinograms from a 3D Shepp–Logan phantom (Shepp & Logan, 1974) available in TomoPy (Gürsoy et al., 2014). With this 512³ voxel data set, we would expect to require Nθ = 512(π/2) ≈ 800 rotation angles to fulfill the Crowther criterion (Crowther et al., 1970) for full sampling of the 3D volume. The created object had a median transmission of 0.989, so that detecting this change in transmission relative to the incident beam would suggest an object contrast parameter Θ = |Imax − Imin|/(Imax + Imin)^(1/2) of Θ = 0.0078, leading to an expectation that the cumulative incident fluence should be about 5²/Θ² = 5²/(0.0078)² ≈ 410000 photons per pixel (the factor of 5² corresponding to a Rose-criterion SNR of 5). Using a slightly lower cumulative fluence of 300000 photons per pixel for a somewhat more challenging test, we distributed this cumulative fluence over Nθ = 360, 720, 1440 and 2880 angles as one would do when collecting a dose-fractionated data set (yielding fluences per projection of 833, 417, 208 and 104 photons per pixel, respectively). The resulting sinograms shown in Fig. 6(a) illustrate the low SNR of the individual projection images at low fluence, with the per-projection SNR shown in Fig. 6(c). The θ-sum plots in Fig. 6(b) and the θ-sum SNR shown in Fig. 6(c) show that the cumulative fluence of 300000 photons per pixel is identical for all the dose-fractionated data sets. As shown in Fig. 6(d), the phase symmetry method finds the center of rotation from a reflection pair of projections quite well when dose fractionation is employed, even for the Nθ = 2880 case with a per-projection fluence of 104 photons per pixel and a per-projection SNR of 0.09, while the phase correlation and sinogram FFT methods perform less well.
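A compact sketch of this dose-fractionation test is shown below; it uses TomoPy's phantom and projector with a reduced phantom size so that it runs quickly, and an emission-like scaling of the projections to the per-projection fluence (an assumption of this sketch rather than the exact procedure used for Fig. 6).

```python
import numpy as np
import tomopy

rng = np.random.default_rng()
obj = tomopy.shepp3d(size=128)                 # reduced-size Shepp-Logan phantom
cumulative_fluence = 300000.0                  # photons per pixel, summed over all angles

for n_theta in (360, 720, 1440, 2880):
    ang = tomopy.angles(n_theta, 0.0, 360.0)
    proj = tomopy.project(obj, ang)            # noise-free projections (angles, rows, columns)
    per_proj = cumulative_fluence / n_theta    # e.g. 833 photons/pixel for 360 angles
    expected = proj / proj.max() * per_proj
    noisy = rng.poisson(expected).astype(float)
    # A reflection pair for the phase symmetry method would be, for example,
    # noisy[0] and noisy[n_theta // 2] (projections ~180 degrees apart).
```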
3.6. Comparison testing with additional data sets
To confirm that the phase symmetry method can be applied more broadly, we examined a number of additional data sets. Since TomoBank (De Carlo et al., 2018) has only a few 360° data sets but several 180° data sets, we chose two projections that were 179° rather than 180° apart, such as 0° and 179° in one test and 1° and 180° in a second test. As shown in Table 2, the near-consistency of the center of rotation calculated for these two independent (near) reflection pairs of projections indicates the repeatability of the approach.
4. Conclusion
Phase symmetry provides a way to find the center of rotation (COR) error t′ of a tomographic data set in a rapid and noise-tolerant fashion. By adding together a reflection pair of projection images (two projections acquired approximately 180° apart, with some tolerance for departures Δθ from exactly 180° as shown in Fig. 5) and examining the lowest non-zero spatial frequency of their Fourier transform, one can obtain the COR error t′ using equation (2). Because signal from the object tends to be much larger than Poisson noise at low spatial frequencies in Fourier transforms of images, the phase symmetry approach is robust at low exposure, which is an important consideration for dose fractionation as demonstrated in Fig. 6. However, the method does require that the object be within the field of view at all projection angles so that the reflection pair arcsine distribution is not truncated, and it also assumes that the rotation axis is aligned with the column (y) axis of the 2D detector used to acquire projection images. Lastly, it is important to note that the method is sensitive to illumination gradients along the t-axis, though such gradients are normally removed as part of flat-field and background normalization.
4.1. Code
The Python code for this method can be found at https://github.com/everettvacek/PhaseSymmetry.
Acknowledgements
We thank Francesco De Carlo and Doğa Gürsoy for helpful conversations on this work, and assistance with TomoPy and TomoBank. This research used resources of the Advanced Photon Source (APS), a US Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
Funding information
Funding for this research was provided by: National Institute of General Medical Sciences (award No. R01-GM104530); Basic Energy Sciences (contract No. DE-AC02-06CH11357).
References
Azevedo, S. G., Schneberk, D. J., Fitch, J. P. & Martz, H. E. (1990). IEEE Trans. Nucl. Sci. 37, 1525–1540.
Bershad, N. J. & Rockmore, A. J. (1974). IEEE Trans. Inf. Theory, 20, 112–113.
Birk, U. J., Rieckher, M., Konstantinides, N., Darrell, A., Sarasa-Renedo, A., Meyer, H., Tavernarakis, N. & Ripoll, J. (2010). Biomed. Opt. Expr. 1, 87–96.
Brunetti, A. & De Carlo, F. (2004). Proc. SPIE, 5535, 652–658.
Burton, G. J. & Moorhead, I. R. (1987). Appl. Opt. 26, 157–170.
Cheng, C. C., Ching, Y. T., Ko, P. H. & Hwu, Y. (2018). Sci. Rep. 8, 9884.
Crowther, R. A., DeRosier, D. J. & Klug, A. (1970). Proc. R. Soc. London A, 317, 319–340.
De Carlo, F., Gürsoy, D., Ching, D. J., Batenburg, K. J., Ludwig, W., Mancini, L., Marone, F., Mokso, R., Pelt, D. M., Sijbers, J. & Rivers, M. (2018). Meas. Sci. Technol. 29, 034004.
Dengler, J. (1989). Ultramicroscopy, 30, 337–348.
Donath, T., Beckmann, F. & Schreyer, A. (2006). J. Opt. Soc. Am. A, 23, 1048–1057.
Dong, D., Zhu, S., Qin, C., Kumar, V., Stein, J. V., Oehler, S., Savakis, C., Tian, J. & Ripoll, J. (2013). IEEE J. Biomed. Health Inf. 17, 198–204.
Edholm, P. R., Lewitt, R. M. & Lindholm, B. (1986). Proc. SPIE, 0671, 8–18.
Goertzel, G. (1958). Am. Math. Mon. 65, 34–35.
Guizar-Sicairos, M., Thurman, S. T. & Fienup, J. R. (2008). Opt. Lett. 33, 156–158.
Gullberg, G. T., Crawford, C. R. & Tsui, B. M. W. (1986). IEEE Trans. Med. Imaging, 5, 23–29.
Gürsoy, D., De Carlo, F., Xiao, X. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 1188–1193.
Gürsoy, D., Hong, Y. P., He, K., Hujsak, K., Yoo, S., Chen, S., Li, Y., Ge, M., Miller, L. M., Chu, Y. S., De Andrade, V., He, K., Cossairt, O., Katsaggelos, A. K. & Jacobsen, C. (2017). Sci. Rep. 7, 11818.
Hegerl, R. & Hoppe, W. (1976). Z. Naturforsch. 31, 1717–1721.
Hogan, J. P., Gonsalves, R. A. & Krieger, A. S. (1993). IEEE Trans. Nucl. Sci. 40, 1238–1241.
Hong, Y. P., Gleber, S.-C., O'Halloran, T. V., Que, E. L., Bleher, R., Vogt, S., Woodruff, T. K. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 229–234.
Huang, X., Miao, H., Steinbrener, J., Nelson, J., Shapiro, D., Stewart, A., Turner, J. & Jacobsen, C. (2009). Opt. Express, 17, 13541–13553.
Köhler, T. (2004). IEEE Nucl. Sci. Symp. Conf. Rec. 6, 3961–3965.
Li, B., Zhang, Y. & Mo, Y. (2010). Proc. SPIE, 7544, 75445O.
McEwen, B. F., Downing, K. H. & Glaeser, R. M. (1995). Ultramicroscopy, 60, 357–373.
Münch, B. (2011). Opt. Eng. 50, 123201.
Owen, C. H. & Landis, W. J. (1996). Ultramicroscopy, 63, 27–38.
Parkinson, D. Y., Knoechel, C., Yang, C., Larabell, C. A. & Le Gros, M. A. (2012). J. Struct. Biol. 177, 259–266.
Penczek, P., Marko, M., Buttle, K. & Frank, J. (1995). Ultramicroscopy, 60, 393–410.
Pryor, A., Yang, Y., Rana, A., Gallagher-Jones, M., Zhou, J., Lo, Y. H., Melinte, G., Chiu, W., Rodríguez, J. A. & Miao, J. (2017). Sci. Rep. 7, 10409.
Schaaf, A. van der & van Hateren, J. H. (1996). Vision Res. 36, 2759–2770.
Shaw, P. J., Agard, D. A., Hiraoka, Y. & Sedat, J. W. (1989). Biophys. J. 55, 101–110.
Shepp, L. A., Hilal, S. K. & Schulz, R. A. (1979). Comput. Graph. Image Process. 10, 246–255.
Shepp, L. A. & Logan, B. F. (1974). IEEE Trans. Nucl. Sci. 21, 21–43.
Vo, N. T., Drakopoulos, M., Atwood, R. C. & Reinhard, C. (2014). Opt. Express, 22, 19078–19086.
Walls, J. R., Sled, J. G., Sharpe, J. & Henkelman, R. M. (2005). Phys. Med. Biol. 50, 4645–4665.
Walt, S. van der, Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., Gouillart, E. & Yu, T. (2014). PeerJ, 2, e453.
Yang, X., De Carlo, F., Phatak, C. & Gürsoy, D. (2017). J. Synchrotron Rad. 24, 469–475.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.