research papers
Fast and noise-tolerant determination of the center of rotation in tomography
^{a}Applied Physics Program, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA, ^{b}Advanced Photon Source, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439, USA, ^{c}Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA, and ^{d}Chemistry of Life Processes Institute, Northwestern University, 2170 Campus Drive, Evanston, IL 60208, USA
^{*}Correspondence email: cjacobsen@anl.gov
High-quality tomographic reconstruction is not possible without the accurate localization of the center of rotation. Poor localization leads to artifacts in the data and can even cause reconstructions to fail. There are many approaches to solving this problem, some of which involve the collection of full sinograms, or even provisional tomographic reconstructions, in order to determine the center of rotation. Here, a simple method based on the expected symmetry of the Fourier transform of summed projections approximately 180° apart is presented; unlike cross-correlation methods, it requires only a single Fourier transform to compute, and uses mainly low spatial frequency information which is less susceptible to noise. This approach is shown to be fast, and robust against poor signal-to-noise as well as to projection images acquired at angles that are not exactly 180° apart. This rapid method can be useful as a first step in the processing of tomographic data.
Keywords: X-ray tomography; center of rotation; tomogram alignment.
1. Introduction
In single-tilt-axis projection tomography, a set of 2D slices of the object is reconstructed, with each slice arising from a set of line projections obtained as the object is rotated as shown in Fig. 1. [Some electron microscopes have double-tilt-axis specimen stages (Penczek et al., 1995); hence our use of the term `single-tilt-axis' tomography, which describes most tomography experiments at synchrotron light source facilities and most commercial X-ray microtomography systems.] However, if one uses an incorrect value for the position of the actual axis of rotation, objects appear to `wobble' about the correct axis, resulting in `tuning fork' artifacts in the reconstructed slice if projections are acquired over 180° (Shepp et al., 1979), or one type of ring artifact for 360° data. A number of approaches have been developed to find and correct for this center-of-rotation (COR) shift error:
(i) One approach is to compare a 0° image with the mirrored version of an image taken 180° apart, and take the COR shift to be half the phase correlation offset between these two images (Gullberg et al., 1986; Donath et al., 2006). This operation requires a total of three Fourier transforms, as well as identification of the cross-correlation peak. Alternatively, one can use the shift of the center of mass between these two images for the same purpose (Donath et al., 2006; Dong et al., 2013).
(ii) One simple approach is to fit a sine function to the sinogram's angle-by-angle center of mass and use this to calculate the COR shift (Hogan et al., 1993). This approach can also be used on the raw sinogram data (Azevedo et al., 1990; Donath et al., 2006). One can also track the center crossing points of identified features in the sinogram (Li et al., 2010). These methods carry the added benefit that, for low-noise data sets, individual projections can be shifted such that features follow a sinusoidal path. When taking the Fourier transform of the sinogram, COR errors lead to the presence of aliasing errors at high spatial frequencies (Edholm et al., 1986), which can be minimized in an alignment strategy (Vo et al., 2014).
(iii) Because of the effects on the reconstructed slice image, one can carry out reconstructions with a range of choices for the COR shift and manually select the one with the best appearance (Walls et al., 2005). Automated approaches include maximizing the number of positive, nonzero pixels in the reconstructed slice (Brunetti & De Carlo, 2004), or the contrast of the slice image (Birk et al., 2010), or minimizing the image's total variation (Cheng et al., 2018). One can also train a neural network to recognize ringlike artifacts and use this network to identify the image with the correct COR shift (Yang et al., 2017).
(iv) In an iterative reprojection approach, one first obtains a provisional 3D image and then aligns projection images to this volume, repeating until convergence is achieved (Dengler, 1989; Owen & Landis, 1996; Birk et al., 2010; Parkinson et al., 2012). This approach can also be applied to a single slice in the reconstructed object (Pryor et al., 2017). Iterative reprojection can correct for additional errors beyond the COR shift, and speedups can be obtained by alternating between alignment steps and reconstruction steps in iterative reconstruction approaches (Gürsoy et al., 2017).
Of these approaches, the first involves minimal data and computation. The second requires full sinogram data, and somewhat longer computation time. The third and fourth approaches require numerous reconstructions, either of a single slice with varying COR guesses (the third approach) or of the entire 3D data set (the fourth approach) with consequent costs in data storage and compute time.
We present here a rapid approach for determination of the COR shift based on phase symmetry in the frequency domain of two projection images obtained approximately 180° apart. This resembles the first approach listed above, with some distinct differences. First, while phase correlation finds the shift between a projection and the mirror of its 180° counterpart, it normally involves three two-dimensional Fourier transforms, whereas the phase symmetry approach requires only a single one-dimensional Fourier transform. Second, we make use of the fact that this phase symmetry is quite insensitive to noise at low spatial frequencies of the projection images, as well as being insensitive to whether the images were taken exactly 180° apart. This rapid COR shift determination method can be used alone, or to provide a highly accurate starting point for correction of additional errors in the third and fourth approaches described above.
2. Method
For an object rotated about the axis shown in Fig. 1, single point features will produce sinusoids in the sinogram p(t, θ) (using the notation from Fig. 1); therefore the integral of p(t, θ) over all angles θ for a single feature will result in an arcsine distribution for emission tomography and an inverted arcsine distribution in the case of transmission tomography. This distribution has even symmetry about the rotation axis as shown in Fig. 2(c). Since we collect a discrete number of projections, the integral becomes a sum over those projections, ∑_{θ} p(t, θ), which we will call the θ-sum of a sinogram. We may also consider a reflection pair, which we define as the sum of one projection taken at θ′ and another taken at θ′ + 180°. For single point features, these projection pairs will sample portions of the arcsine distribution on opposite ends of the rotation axis, resulting in the same even symmetry as shown in Fig. 2(d).
A continuous object can be thought of as an ensemble of single point features. This will result in a linear combination of arcsine distributions in the θ-sum which will necessarily retain the same symmetry about the rotation axis. If, however, the actual rotation axis is shifted from the center of the recorded projection due to a COR error (as shown in Fig. 3), the distribution will be offset; one will instead have a noncentrosymmetric distribution.
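The even symmetry of the θ-sum is easy to verify numerically. The following sketch (our own illustration, with made-up dimensions, not code from this paper) builds the sinogram of a single point feature and compares the θ-sum on the two sides of the rotation-axis column:

```python
import numpy as np

n_t, n_theta = 128, 360       # detector pixels, projection angles
axis = 80                     # rotation-axis column (off-center on purpose)
r = 30                        # radius of the point feature from the axis
thetas = np.deg2rad(np.arange(n_theta))

# Emission-style sinogram of a single point: one sinusoidal trace.
sino = np.zeros((n_theta, n_t))
trace = np.round(axis + r * np.cos(thetas)).astype(int)
sino[np.arange(n_theta), trace] = 1.0

theta_sum = sino.sum(axis=0)  # peaked at the turning points (arcsine-like)

# Even symmetry about the axis column: compare mirrored halves.
left = theta_sum[axis - r: axis]
right = theta_sum[axis + 1: axis + r + 1][::-1]
asymmetry = np.abs(left - right).sum() / theta_sum.sum()
print(asymmetry)              # → 0.0
```

The θ-sum is symmetric about column 80, not about the array center at column 64; it is exactly this offset that the Fourier-domain phase measurement below detects.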
A simple way to determine whether a function is noncentrosymmetric is to examine its Fourier transform, since the shift theorem relates an offset center in real space to a phase ramp in Fourier space. If one did this row by row in the reflection pair, one could separate out erroneous contributions to the phase ramp from rows with no features (but only noise), or rows with other types of errors. When taking the Fourier transform of the sum of all rows in the reflection sum, these erroneous contributions would contribute equally to the phase ramp, whereas, if one were to take the Fourier transform of individual rows one by one and then obtain the summation of these complex vectors, the nonerroneous rows would contribute to the phase ramp in a correlated fashion while the noise or errorcorrupted rows would presumably contribute in a noncorrelated fashion. Therefore for N_{t} pixels across the detector for each of N_{y} detector rows, one is best off either adding the contributions of a total of N_{y} Fourier transforms each of dimension N_{t}, or a single Fourier transform of the (N_{t}, N_{y}) data array rearranged as a 1D array of dimension N′ = N_{t}N_{y}; we chose to use the latter approach.
The fast Fourier transform of this N′ = N_{t}N_{y} reflection sum array will contain information about the phase ramp resulting from a COR error. However, the magnitudes in the Fourier transforms of images tend to decline as a function of increasing spatial frequency (Burton & Moorhead, 1987; van der Schaaf & van Hateren, 1996), while uncorrelated noise due to photon statistics is equally present at all spatial frequencies. In addition, the appearance of different features in different instances of projection pairs will affect higher spatial frequencies in the θ-sum. Therefore one will have the highest-fidelity measurement of the presence of any phase ramp at the lowest nonzero spatial frequency f_{min} in the FFT of the N′ reflection sum array, as shown in Fig. 3(d) and in Fig. 4(b) below. In principle we could improve the speed of this method by calculating this lowest spatial frequency value only, such as by using the Goertzel algorithm (Goertzel, 1958) with only N′ multiplications and 2N′ additions. However, the optimizations coded into standard FFT routines [which require O(N′ log N′) operations] make the full 1D FFT calculation quite fast enough. Thus we calculate the phase ramp ϕ(f_{min}) from the real part R and imaginary part I of the lowest spatial frequency of the FFT of the projection pair θ-sum as

ϕ(f_{min}) = atan2[s I(f_{min}), s R(f_{min})],  (1)
where s = sgn[R(f_{min})] is the sign of the real component, which accounts for transmission or emission tomography. We convert the phase shift to a position shift t′ away from the center of our sinogram (the COR error) using

t′ = −ϕ(f_{min})/(2π f_{min}),  (2)
thus yielding a rapid approach to finding the COR error t′.
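The procedure above can be sketched in a few lines of NumPy. This is our own illustrative implementation under the paper's assumptions, not the authors' code (their code is linked in Section 4.1); the function name and demo geometry are ours, and the sign convention follows NumPy's forward-FFT kernel e^{−2πi f t}:

```python
import numpy as np

def phase_symmetry_cor(proj_0, proj_180):
    """Estimate the COR error t' (pixels, relative to the detector
    mid-column) from a reflection pair of projections ~180 deg apart."""
    pair_sum = np.asarray(proj_0, float) + np.asarray(proj_180, float)
    n_y, n_t = pair_sum.shape
    flat = pair_sum.ravel()            # 1D array of length N' = N_y * N_t
    spectrum = np.fft.fft(flat)        # a single 1D FFT
    # One cycle per detector row (length N_t) is FFT bin N_y of the
    # length-N' array, i.e. the lowest nonzero frequency f_min = 1/N_t.
    c = spectrum[n_y]
    s = np.sign(c.real)                # transmission vs emission sign
    phi = np.arctan2(s * c.imag, s * c.real)
    # Fourier shift theorem: phi = -2*pi*f_min*t', so t' = -phi*N_t/(2*pi)
    return -phi * n_t / (2.0 * np.pi)

# Demo: a point feature 5 pixels from a rotation axis that itself sits
# 3 pixels to the right of the detector mid-column (so t' = 3).
n_y, n_t = 4, 64
t = np.arange(n_t)
gaussian = lambda c: np.exp(-0.5 * ((t - c) / 1.5) ** 2)
proj_0 = np.tile(gaussian(n_t // 2 + 3 + 5), (n_y, 1))
proj_180 = np.tile(gaussian(n_t // 2 + 3 - 5), (n_y, 1))
print(round(phase_symmetry_cor(proj_0, proj_180), 3))   # → 3.0
```

Reading bin N_{y} of the flattened array is equivalent to summing the bin-1 coefficients of N_{y} per-row FFTs, which is the correlated-sum argument made above.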
Some practical considerations apply. The method requires the object to be entirely within the frame of the image for all exposures; otherwise the full arcsine distribution is not collected and the center finding will begin to fail. Also, since we use only the lowest spatial frequency, the method is strongly affected by illumination gradients along the t axis. More often than not, normalization to the flat- and dark-field projections, a standard step in any tomographic reconstruction, corrects for illumination gradients. Further improvement can be had by normalizing the background visible on the left and right sides of the projection images to be equal. Both techniques are available in the TomoPy Python package.
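As a concrete illustration of those two corrections, a flat-/dark-field normalization and a simple linear background equalization might look as follows (a sketch with our own function names; the corresponding TomoPy routines differ in detail):

```python
import numpy as np

def normalize(proj, flat, dark, eps=1e-6):
    """Flat- and dark-field normalization of a projection image."""
    return (proj - dark) / np.maximum(flat - dark, eps)

def equalize_background(proj, margin=10):
    """Divide out a linear illumination ramp along t, fitted to the
    background columns at the left and right edges of the projection."""
    n_t = proj.shape[1]
    cols = np.r_[0:margin, n_t - margin:n_t]       # background columns
    col_means = proj[:, cols].mean(axis=0)
    slope, intercept = np.polyfit(cols, col_means, 1)
    ramp = slope * np.arange(n_t) + intercept
    return proj / ramp
```

After these steps the left and right backgrounds are equal by construction, so the lowest-frequency phase measurement is not biased by an illumination gradient.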
3. Demonstration
We compared the above phase symmetry method with two other popular center-finding algorithms: phase correlation between the reflection pair of projections at θ′ and θ′ + 180°, and the sinogram FFT method, which involves taking the Fourier transform of a complete sinogram. We tested the phase symmetry and phase correlation methods as a function of departures Δθ from a perfect projection pair at angles of θ′ and θ′ + 180° + Δθ. We also compared all three approaches against the signal-to-noise ratio (SNR) of the acquired projection images.
We compared these approaches by using a data set from TomoBank (De Carlo et al., 2018) of phase contrast tomography of duplex stainless steel taken at the European Synchrotron Radiation Facility (ESRF; TomoBank data set tomo_00064). This data set consists of 450 projections acquired over a 360° range, with a high density of features such that one cannot easily see individual features in the sinogram. Because this data set has very high SNR, we generated a version of this data set where the sinogram was first normalized to its maximum value, then multiplied by an assumed fluence of n̄ photons per pixel, and each projection image had Poisson noise added according to each pixel's assumed photon count. One can then calculate the projection data SNR from the correlation of two separate images I_{1} and I_{2} with two independent instances of Poisson noise (Bershad & Rockmore, 1974; Huang et al., 2009). Because the SNR depends on the contrast of the object, and because of statistical fluctuations due to specific instances of Poisson noise, we show in Table 1 the values of n̄ used and the resulting SNR in various tests.
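The noise model and the correlation-based SNR estimate can be sketched as follows. The uniform random phantom and all variable names are our own; SNR = r/(1 − r), with r the correlation coefficient between two independent-noise realizations, is one common form of the cross-correlation SNR estimator cited above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise-free image (uniform random values, our own choice).
truth = rng.uniform(50.0, 150.0, size=(256, 256))

# Two images of the same object with independent Poisson noise.
img_1 = rng.poisson(truth).astype(float)
img_2 = rng.poisson(truth).astype(float)

# Cross-correlation SNR estimate: SNR = r / (1 - r).
r = np.corrcoef(img_1.ravel(), img_2.ravel())[0, 1]
snr = r / (1.0 - r)
print(snr)   # ~ signal variance (100^2/12) over noise variance (~100), i.e. ~8
```

Scaling `truth` up or down before adding the Poisson noise plays the role of the assumed fluence n̄ in the tests above.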
3.1. Phase correlation method
The phase correlation method performs phase correlation in the Fourier domain on two projections separated by 180° (Gullberg et al., 1986; Donath et al., 2006). Since the object has been rotated by 180°, the second projection image is mirrored about the rotation axis before processing. The cross-correlation between the two images will then give a peak at a position that is offset from the array center by twice the COR error t′ along the t axis (it will also provide a measure of sample drift along the rotation axis during data acquisition). One can further increase the accuracy of this method by correlating only on the phase of the two Fourier transforms (Shaw et al., 1989). Subpixel accuracy can be obtained by embedding the product of the Fourier transforms of the two images in a larger array before the inverse FFT, either based on the entire array or on a subarray (Guizar-Sicairos et al., 2008; van der Walt et al., 2014). In our case we used a subpixel accuracy of a tenth of a pixel (i.e. m = 10). Conventional subpixel phase correlation would require the N_{t} × N_{y} array to be embedded in an mN_{t} × mN_{y} array before the inverse FFT. A less memory-intensive approach obtains a coarse estimate of the correlation peak on an N_{t} × N_{y} array, then obtains a subpixel estimate of the correlation peak using a matrix-multiplication discrete Fourier transform (DFT) only on a subarray around the coarse estimate. Thus, while the phase symmetry approach requires only a single 1D FFT of length N_{t}N_{y}, the phase correlation method requires either two N_{t} × N_{y} FFTs and one mN_{t} × mN_{y} inverse FFT (for the conventional approach), or two N_{t} × N_{y} FFTs, one N_{t} × N_{y} inverse FFT, and one matrix-multiplication DFT on a small array.
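For comparison, a pixel-accurate version of this phase correlation estimate can be sketched as follows (our own illustration, omitting the subpixel matrix-multiplication DFT refinement; the function name is ours):

```python
import numpy as np

def phase_correlation_cor(proj_0, proj_180):
    """COR error from the phase correlation of a projection with the
    mirrored view taken 180 deg later; the result is measured relative
    to the mirror center (N_t - 1)/2 of the discrete flip."""
    a = np.asarray(proj_0, float)
    b = np.asarray(proj_180, float)[:, ::-1]     # mirror along the t axis
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.fft.ifft2(cross).real
    _, dt = np.unravel_index(np.argmax(corr), corr.shape)
    if dt > a.shape[1] // 2:                     # wrap-around -> negative shift
        dt -= a.shape[1]
    return dt / 2.0                              # the peak offset is 2 t'
```

Note the three 2D transforms (two forward, one inverse) against the single 1D FFT of the phase symmetry method.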
3.2. Sinogram FFT method
When one has a complete 360° sinogram over a wide range of projection angles and no COR error, its Fourier transform should only have significant values inside a double wedge region of the transform (Edholm et al., 1986). Therefore the sinogram FFT method (Vo et al., 2014) finds the COR error by taking a 180° sinogram and mirroring it along the t axis so as to form a 360° sinogram prior to taking its FFT. By shifting the latter 180° of the sinogram along the t axis, one can find where the signal outside of the double wedge is lowest, and thus recover the COR error. Since this sinogram FFT method requires repeated sinogram shifting and FFT calculation over a range of test values for the COR error, it tends to be slower than both the phase symmetry and phase correlation methods.
3.3. Test versus projection data SNR
The phase symmetry method is able to determine the COR error using the overall phase ramp in the Fourier transform of a reflection pair, which can be measured from the lowest nonzero spatial frequency. As noted in Section 2, because signal in the FFT tends to decline with spatial frequency while the effects of Poisson noise do not, we first tested the reproducibility of the three methods in the presence of noise while also examining the power spectrum of the reflection pair FFT. The results given in Fig. 4(a) show that the phase symmetry method is the most robust of the three tested approaches against low SNR. The reason is made clear in Fig. 4(b), which shows the power spectrum from the FFT of the reflection pair of projections used in the phase symmetry and phase correlation methods. In this test, the phase symmetry method calculation was 32 times faster than phase correlation, and 640 times faster than the sinogram FFT approach.
3.4. Test against non180° projection pairs
The phase symmetry and phase correlation methods require only two projections acquired exactly 180° apart, which we term a `reflection pair'. To examine the robustness of these approaches, we tested them against angular departures Δθ from this reflection pair relationship; that is, we tested the methods using one projection at θ′ against a second projection at θ′ + 180° + Δθ. These rotation errors can occur for multiple reasons:
(i) Pure projections separated by exactly 180° carry redundant information, so they are not always collected. One might instead collect N_{θ} projections over an angular range of 180(1 − 1/N_{θ})°.
(ii) Some nonsequential projection acquisition schemes do not inherently collect 180° projections (Köhler, 2004; Münch, 2011).
(iii) Angular errors in rotation stages can result in unintentional deviations from 180°.
Therefore, for each value of Δθ and of SNR (Table 1), we generated 100 projection pairs with different instances of simulated Poisson noise, and averaged the COR error determined using each method. The results shown in Fig. 5 indicate that the phase symmetry approach outperforms phase correlation. This is true both for producing more accurate results even with relatively large values of Δθ of ±30°, and also at lower SNR values.
3.5. Dose fractionation in fullrotation data sets
The dose fractionation theorem states that the total number of photons required to generate a 3D tomographic reconstruction is the same as the number of photons required to produce a 2D projection image of a given sample at the same SNR; that is, one can divide the required dose among N_{θ} projection angles (Hegerl & Hoppe, 1976). Success in tomographic dose fractionation requires the alignment of low-dose projections (McEwen et al., 1995). We have previously shown that the phase correlation method using differential phase contrast images provides good alignment for both transmission and tomographic data sets (Hong et al., 2014). Given that Figs. 4 and 5 show that the phase symmetry method works well at low SNR, we also tested its ability to accommodate dose fractionation.
To test this, we generated sinograms from a 3D Shepp–Logan phantom (Shepp & Logan, 1974) available in TomoPy (Gürsoy et al., 2014). With this 512^{3} voxel data set, we would expect to require N_{θ} = 512(π/2) ≈ 800 rotation angles to fulfill the Crowther criterion (Crowther et al., 1970) for full sampling of the 3D volume. The created object had a median transmission of 0.989, so that detecting this change in transmission relative to the incident beam would suggest an object contrast parameter Θ = (I_{max} − I_{min})/(I_{max} + I_{min})^{1/2} = 0.0078, leading to an expectation that the cumulative incident fluence should be about n̄ = 5^{2}/Θ^{2} = 5^{2}/(0.0078)^{2} ≈ 410000 photons per pixel. Using a slightly lower cumulative fluence of n̄ = 300000 photons per pixel for a somewhat more challenging test, we distributed this cumulative fluence over N_{θ} = 360, 720, 1440 and 2880 angles as one would do when collecting a dose-fractionated data set (yielding per-projection fluences of 833, 417, 208 and 104 photons per pixel, respectively). The resulting sinograms shown in Fig. 6(a) illustrate the low SNR of the individual projection images at low fluence, with the per-projection SNR shown in Fig. 6(c). The θ-sum plots in Fig. 6(b) and the θ-sum SNR shown in Fig. 6(c) show that the cumulative fluence of n̄ = 300000 photons per pixel is identical for all the dose-fractionated data sets. As shown in Fig. 6(d), the phase symmetry method finds the center of rotation (COR) from a reflection pair of projections quite well when dose fractionation is employed, even for the N_{θ} = 2880 case with a per-projection fluence of 104 photons per pixel and a per-projection SNR of 0.09, while the phase correlation and sinogram FFT methods perform less well.
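The fluence bookkeeping in this paragraph can be reproduced with a few lines of arithmetic (our own check; variable names are hypothetical):

```python
# Contrast parameter and Rose-criterion (SNR = 5) fluence for the phantom above.
i_max, i_min = 1.0, 0.989
theta_c = (i_max - i_min) / (i_max + i_min) ** 0.5   # → 0.0078
n_rose = 5 ** 2 / theta_c ** 2                       # → about 410000 photons/pixel

# Dose-fractionating a cumulative fluence of 300000 photons/pixel.
n_total = 300_000
for n_angles in (360, 720, 1440, 2880):
    print(n_angles, round(n_total / n_angles))       # 833, 417, 208, 104
```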
3.6. Comparison testing with additional data sets
To verify that the phase symmetry method can be more broadly applied, we examined a number of additional data sets. Since TomoBank (De Carlo et al., 2018) has only a few 360° data sets but several 180° data sets, we chose two projections not 180° apart but instead two that were 179° apart, such as 0° and 179° in one test and then 1° and 180° in a second test. As shown in Table 2, near-consistency in the center of rotation calculated for these two independent (near) reflection pairs of projections indicates the repeatability of the approach.

4. Conclusion
Phase symmetry provides a way to find the center-of-rotation (COR) error t′ of a tomographic data set in a rapid and noise-tolerant fashion. By adding together a reflection pair of projection images (two projections acquired approximately 180° apart, with some tolerance for departures Δθ from exactly 180° as shown in Fig. 5) and examining the lowest nonzero spatial frequency of their Fourier transform, one can obtain the COR error t′ using equation (2). Because signal from the object tends to be much larger than Poisson noise at low spatial frequencies in Fourier transforms of images, the phase symmetry approach is robust at low exposure, which is an important consideration for dose fractionation as demonstrated in Fig. 6. However, the method does require that the object be within the field of view at all projection angles so that the reflection pair arcsine distribution is not obscured, and it also assumes that the rotation axis is aligned with an axis of the 2D detector used to acquire the projection images. Lastly, it is important to note that the method is sensitive to illumination gradients along the t axis, though such gradients are normally removed as part of illumination normalization and background normalization.
4.1. Code
The Python code for this method can be found at https://github.com/everettvacek/PhaseSymmetry.
Acknowledgements
We thank Francesco De Carlo and Doğa Gürsoy for helpful conversations on this work, and assistance with TomoPy and TomoBank. This research used resources of the Advanced Photon Source (APS), a US Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DEAC0206CH11357.
Funding information
Funding for this research was provided by: National Institute of General Medical Sciences (award No. R01GM104530); Basic Energy Sciences (contract No. DEAC0206CH11357).
References
Azevedo, S. G., Schneberk, D. J., Fitch, J. P. & Martz, H. E. (1990). IEEE Trans. Nucl. Sci. 37, 1525–1540.
Bershad, N. J. & Rockmore, A. J. (1974). IEEE Trans. Inf. Theory, 20, 112–113.
Birk, U. J., Rieckher, M., Konstantinides, N., Darrell, A., Sarasa-Renedo, A., Meyer, H., Tavernarakis, N. & Ripoll, J. (2010). Biomed. Opt. Express, 1, 87–96.
Brunetti, A. & De Carlo, F. (2004). Proc. SPIE, 5535, 652–658.
Burton, G. J. & Moorhead, I. R. (1987). Appl. Opt. 26, 157–170.
Cheng, C. C., Ching, Y. T., Ko, P. H. & Hwu, Y. (2018). Sci. Rep. 8, 9884.
Crowther, R. A., DeRosier, D. J. & Klug, A. (1970). Proc. R. Soc. London A, 317, 319–340.
De Carlo, F., Gürsoy, D., Ching, D. J., Batenburg, K. J., Ludwig, W., Mancini, L., Marone, F., Mokso, R., Pelt, D. M., Sijbers, J. & Rivers, M. (2018). Meas. Sci. Technol. 29, 034004.
Dengler, J. (1989). Ultramicroscopy, 30, 337–348.
Donath, T., Beckmann, F. & Schreyer, A. (2006). J. Opt. Soc. Am. A, 23, 1048–1057.
Dong, D., Zhu, S., Qin, C., Kumar, V., Stein, J. V., Oehler, S., Savakis, C., Tian, J. & Ripoll, J. (2013). IEEE J. Biomed. Health Inform. 17, 198–204.
Edholm, P. R., Lewitt, R. M. & Lindholm, B. (1986). Proc. SPIE, 0671, 8–18.
Goertzel, G. (1958). Am. Math. Mon. 65, 34–35.
Guizar-Sicairos, M., Thurman, S. T. & Fienup, J. R. (2008). Opt. Lett. 33, 156–158.
Gullberg, G. T., Crawford, C. R. & Tsui, B. M. W. (1986). IEEE Trans. Med. Imaging, 5, 23–29.
Gürsoy, D., De Carlo, F., Xiao, X. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 1188–1193.
Gürsoy, D., Hong, Y. P., He, K., Hujsak, K., Yoo, S., Chen, S., Li, Y., Ge, M., Miller, L. M., Chu, Y. S., De Andrade, V., He, K., Cossairt, O., Katsaggelos, A. K. & Jacobsen, C. (2017). Sci. Rep. 7, 11818.
Hegerl, R. & Hoppe, W. (1976). Z. Naturforsch. 31, 1717–1721.
Hogan, J. P., Gonsalves, R. A. & Krieger, A. S. (1993). IEEE Trans. Nucl. Sci. 40, 1238–1241.
Hong, Y. P., Gleber, S.-C., O'Halloran, T. V., Que, E. L., Bleher, R., Vogt, S., Woodruff, T. K. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 229–234.
Huang, X., Miao, H., Steinbrener, J., Nelson, J., Shapiro, D., Stewart, A., Turner, J. & Jacobsen, C. (2009). Opt. Express, 17, 13541–13553.
Köhler, T. (2004). IEEE Nucl. Sci. Symp. Conf. Rec. 6, 3961–3965.
Li, B., Zhang, Y. & Mo, Y. (2010). Proc. SPIE, 7544, 75445O.
McEwen, B. F., Downing, K. H. & Glaeser, R. M. (1995). Ultramicroscopy, 60, 357–373.
Münch, B. (2011). Opt. Eng. 50, 123201.
Owen, C. H. & Landis, W. J. (1996). Ultramicroscopy, 63, 27–38.
Parkinson, D. Y., Knoechel, C., Yang, C., Larabell, C. A. & Le Gros, M. A. (2012). J. Struct. Biol. 177, 259–266.
Penczek, P., Marko, M., Buttle, K. & Frank, J. (1995). Ultramicroscopy, 60, 393–410.
Pryor, A., Yang, Y., Rana, A., Gallagher-Jones, M., Zhou, J., Lo, Y. H., Melinte, G., Chiu, W., Rodríguez, J. A. & Miao, J. (2017). Sci. Rep. 7, 10409.
Schaaf, A. van der & van Hateren, J. H. (1996). Vision Res. 36, 2759–2770.
Shaw, P. J., Agard, D. A., Hiraoka, Y. & Sedat, J. W. (1989). Biophys. J. 55, 101–110.
Shepp, L. A., Hilal, S. K. & Schulz, R. A. (1979). Comput. Graph. Image Process. 10, 246–255.
Shepp, L. A. & Logan, B. F. (1974). IEEE Trans. Nucl. Sci. 21, 21–43.
Vo, N. T., Drakopoulos, M., Atwood, R. C. & Reinhard, C. (2014). Opt. Express, 22, 19078–19086.
Walls, J. R., Sled, J. G., Sharpe, J. & Henkelman, R. M. (2005). Phys. Med. Biol. 50, 4645–4665.
Walt, S. van der, Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., Gouillart, E. & Yu, T. (2014). PeerJ, 2, e453.
Yang, X., De Carlo, F., Phatak, C. & Gürsoy, D. (2017). J. Synchrotron Rad. 24, 469–475.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.