Registration of the rotation axis in X-ray tomography

aTianjin Yaohua High School, 106 Nanjing Road, Tianjin 300040, People's Republic of China, bNational Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, Anhui 230027, People's Republic of China, cDepartment of Energy Resources Engineering, Stanford University, 473 Via Ortega, Stanford, CA 94305, USA, dAdvanced Photon Source, Argonne National Laboratory, Argonne, IL 60439, USA, eStanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA, and fBeijing Synchrotron Radiation Facility, Institute of High Energy Physics, Beijing 100049, People's Republic of China
*Correspondence e-mail: xhxiao@aps.anl.gov, liuyijin@slac.stanford.edu

(Received 2 October 2014; accepted 12 December 2014; online 4 February 2015)

There is high demand for efficient, robust and automated routines for tomographic data reduction, particularly for synchrotron data. Registration of the rotation axis in data processing is a critical step affecting the quality of the reconstruction and is not easily implemented with automation. Existing methods for calculating the center of rotation have been reviewed and an improved algorithm to register the rotation axis in tomographic data is presented. The performance of the proposed method is evaluated using synchrotron-based microtomography data on geological samples with and without artificial reduction of the signal-to-noise ratio. The proposed method improves the reconstruction quality by correcting both the tilting error and the translational offset of the rotation axis. The limitation of this promising method is also discussed.

1. Introduction

X-ray tomography is a well known technique (Bautz & Kalender, 2005[Bautz, W. & Kalender, W. (2005). Radiologe, 45, 350-355.]) that has achieved tremendous success in applications in fields ranging from clinical use (Cerqueira et al., 2002[Cerqueira, M. D., Weissman, N. J., Dilsizian, V., Jacobs, A. K., Kaul, S., Laskey, W. K., Pennell, D. J., Rumberger, J. A., Ryan, T. & Verani, M. S. (2002). Circulation, 105, 539-542.]) to archaeology (Lam et al., 1998[Lam, Y. M., Chen, X., Marean, C. W. & Frey, C. J. (1998). J. Archaeol. Sci. 25, 559-570.]), biology (Larabell & Nugent, 2010[Larabell, C. A. & Nugent, K. A. (2010). Curr. Opin. Struct. Biol. 20, 623-631.]), geosciences (Zhu et al., 2011[Zhu, W., Gaetani, G. A., Fusseis, F., Montesi, L. G. J. & De Carlo, F. (2011). Science, 332, 88-91.]) and materials science (Fu et al., 2011[Fu, Q., Saiz, E. & Tomsia, A. P. (2011). Adv. Funct. Mater. 21, 1058-1063.]). Especially when combined with advanced X-ray sources (e.g. synchrotrons), more sophisticated experiments involving energy resolution (Kao et al., 2013[Kao, T. L., Shi, C. Y., Wang, J., Mao, W. L., Liu, Y. & Yang, W. (2013). Microsc. Res. Tech. 76, 1112-1117.]; Rau et al., 2003[Rau, C., Somogyi, A. & Simionovici, A. (2003). Nucl. Instrum. Methods Phys. Res. B, 200, 444-450.]) and time resolution (Ebner et al., 2013[Ebner, M., Marone, F., Stampanoni, M. & Wood, V. (2013). Science, 342, 716-720.]) are possible, providing significant additional insight into scientific processes. Because of the large amounts of data produced in these investigations, it is important to establish efficient and robust data reduction routines for synchrotron tomography. More importantly, because of the demand for high-throughput experiments (Wang et al., 2001[Wang, Y., De Carlo, F., Mancini, D., McNulty, I., Tieman, B., Bresnahan, J., Foster, I., Insley, J., Lane, P., von Laszewski, G., Kesselman, C., Su, M.-H. & Thiebaux, M. (2001). Rev. Sci. Instrum. 72, 2062-2068.]), it is very desirable to achieve a good degree of automation that allows data processing with minimal or no human interaction.

In an ideally configured parallel-beam tomography setup (as in most synchrotron-based tomography experiments), the specimen is placed on a rotation stage with the rotation axis perpendicular to the optical path of the incoming X-ray beam. The rotation axis is projected onto the center column of a downstream two-dimensional area detector. Multiple projection images are acquired as the sample is rotated over an angular range (typically 0–180°). Three-dimensional data (x, y, the horizontal and vertical axes on the detector plane, and θ, the sample orientation with respect to the rotation axis) are collected and later reconstructed into a real-space three-dimensional matrix (x, y, z: the width, depth and height axes) using tomographic reconstruction algorithms (Beylkin, 1987[Beylkin, G. (1987). IEEE Trans. Acoust. Speech Signal Process. 35, 162-172.]). Further analysis, quantification and visualization of the internal structure (Rubin et al., 1996[Rubin, G. D., Beaulieu, C. F., Argiro, V., Ringl, H., Norbash, A. M., Feller, J. F., Dake, M. D., Jeffrey, R. B. & Napel, S. (1996). Radiology, 199, 321-330.]) are achieved with three-dimensional rendering, virtual sectioning, etc. It is essential to determine the position of the projection of the rotation axis on the two-dimensional detector sensor for a good quality reconstruction.

There are mainly three types of method for determining the center of rotation. The first type evaluates the output image from the tomographic reconstruction by defining a parameter, which is a measurement of the image quality, and plotting it as a function of the relative offset of the rotation axis (Gürsoy et al., 2014[Gürsoy, D., De Carlo, F., Xiao, X. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 1188-1193.]). This method is widely used, but it can be inefficient when large-scale data sets are evaluated; it may also require human interaction when the reconstruction contains artifacts that could cause image reconstruction to converge on some local minimum. The second category searches for the center-of-mass (Azevedo et al., 1990[Azevedo, S. G., Schneberk, D. J., Fitch, J. P. & Martz, H. E. (1990). IEEE Trans. Nucl. Sci. 37, 1525-1540.]; Donath et al., 2006[Donath, T., Beckmann, F. & Schreyer, A. (2006). J. Opt. Soc. Am. A, 23, 1048-1057.]). However, in order to make it work, the sample must be completely contained within the field of view at all projection angles, which is often not possible. The third category studies projection images taken at reverse projection angles (Pan et al., 2012[Pan, Y., De Carlo, F. & Xiao, X. (2012). Proc. SPIE, 8313, 831328.]). By performing image registration of the image pair taken in the reverse projection angles, the offset of the center of rotation can be calculated. This method is usually quite efficient and is adopted in this study with improvements.
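For the first, `scoring' type of approach, a minimal sketch is given below. It assumes a TomoPy-style reconstruction call (the toolkit described by Gürsoy et al., 2014) and uses image entropy as the quality metric; the function name, data layout and scan range are illustrative, and the exact TomoPy API may differ between versions.

```python
import numpy as np
import tomopy  # reconstruction toolkit (Gürsoy et al., 2014); API may vary between versions

def center_by_scoring(proj, theta, slice_idx, centers):
    """Reconstruct one sinogram slice for each trial center and score the result.

    proj    : (n_angles, n_rows, n_cols) projection stack, already flat/dark corrected
    theta   : projection angles in radians
    centers : iterable of trial rotation-center positions (in pixels)
    Returns the trial center giving the lowest image entropy (sharpest reconstruction).
    """
    sino = proj[:, slice_idx:slice_idx + 1, :]       # keep a single test slice
    scores = []
    for c in centers:
        rec = tomopy.recon(sino, theta, center=c, algorithm='gridrec')[0]
        hist, _ = np.histogram(rec, bins=256)
        p = hist / hist.sum()
        p = p[p > 0]
        scores.append(-np.sum(p * np.log(p)))        # image entropy as the quality metric
    return centers[int(np.argmin(scores))]

# Example (hypothetical): scan +/- 20 pixels around the detector mid-column in 0.5 pixel steps
# centers = np.arange(n_cols / 2 - 20, n_cols / 2 + 20, 0.5)
```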

While the effects of the translation of the rotation axis with respect to the central column of the two-dimensional area detector are well understood and recognized, the effects of the tilting of the rotation axis are often ignored. In a typical synchrotron-based tomographic experiment, a large amount of work is often required prior to data acquisition to calibrate the sample stage. The rotation axis can be kept vertical by installing the rotation stage on top of the multiple-axis kinematic positioner to adjust the tilt. The required precision of the tilting adjustment is quite high and it depends on the spatial resolution and size of the field of view. With the development of large-area detectors and the implementation of mosaic imaging capability (Liu et al., 2012[Liu, Y., Meirer, F., Williams, P. A., Wang, J., Andrews, J. C. & Pianetta, P. (2012). J. Synchrotron Rad. 19, 281-287.]), the requirement for precise tilting adjustment is becoming even more strict.

It is worth mentioning a less common but interesting method, recently proposed by Vo et al. (2014[Vo, N. T., Drakopoulos, M., Atwood, R. C. & Reinhard, C. (2014). Opt. Express, 22, 19078.]), in which an evaluation of the Fourier transform of the sinogram is carried out in order to calculate the center of rotation. However, the issue of stage and detector tilting was, again, not considered.

In this work, we present an improved method for registration of the rotation axis of the tomography data set. Taking both the rotation stage offset and tilting into consideration, our method corrects the projection images and improves the quality of the reconstruction. The performance of the proposed method is evaluated using synchrotron-based microtomography data on geological samples with and without artificial reduction of the signal-to-noise ratio (S/N).

2. Experiment

Synchrotron-based X-ray microtomography was performed at beamline 2-BM-B (Wang et al., 2001[Wang, Y., De Carlo, F., Mancini, D., McNulty, I., Tieman, B., Bresnahan, J., Foster, I., Insley, J., Lane, P., von Laszewski, G., Kesselman, C., Su, M.-H. & Thiebaux, M. (2001). Rev. Sci. Instrum. 72, 2062-2068.]) of the Advanced Photon Source at Argonne National Laboratory. In this experiment, a monochromatic 25 keV beam was used to study sandstone samples collected from the Heletz structure (Niemi et al., 2012[Niemi, A., Bensabat, J., Fagerlund, F., Sauter, M., Ghergut, J., Licha, T., Fierz, T., Wiegand, G., Rasmusson, M., Rasmusson, K., Shtivelman, V. & Gendler, M. (2012). Energy Proc. 23, 504-511.]), in order to evaluate and understand the possibility of CO2 geological sequestration (Sedjo & Sohngen, 2012[Sedjo, R. & Sohngen, B. (2012). Annu. Rev. Resource Econ. 4, 127-144.]) in this type of underground formation. The detector used in this experiment had a pixel size of 1.48 µm × 1.48 µm and a field of view of about 3.8 mm × 3.2 mm. A miscalibration of the tilting of the rotation stage (by about 0.5°) was deliberately introduced in the plane defined by the beam path and the vertical axis and/or the plane perpendicular to the beam path.

3. Method

As mentioned above, we performed a registration of the rotation center by analysing a pair of images collected at reverse projection angles (at 0 and 180°). In parallel-beam tomography, images at reverse viewing angles are simply mirrored. One should obtain identical images by flipping one of these two mirrored images along the rotation axis. Mismatch of the image features occurs when there is an offset and/or tilting of the rotation axis with respect to the central column of the area detector. Corrections for both tilting and translational offset on the projection data are needed before generating the sinogram for further reconstruction.
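The following minimal Python sketch illustrates the mirrored-pair relationship exploited here; the file names and the use of tifffile for reading the projections are assumptions for illustration, not part of the original work.

```python
import numpy as np
import tifffile  # assumed image reader; any TIFF/HDF5 reader would do

# Hypothetical file names; in practice these are the flat/dark-corrected projections.
p0   = tifffile.imread('proj_000deg.tif').astype(float)
p180 = tifffile.imread('proj_180deg.tif').astype(float)

# In parallel-beam geometry the 180 deg projection mirrors the 0 deg one, so flipping
# it about the vertical axis should (ideally) reproduce the 0 deg image.
p180_flipped = p180[:, ::-1]

# Any residual mismatch encodes the translational offset and the tilt of the rotation axis.
mismatch = p0 - p180_flipped
print('RMS mismatch before axis registration:', np.sqrt(np.mean(mismatch**2)))
```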

For correction of axis tilting, image feature detection and recognition are essential. Algorithms including Speeded Up Robust Features (SURF) (Bay et al., 2008[Bay, H., Ess, A., Tuytelaars, T. & Van Gool, L. (2008). Comput. Vis. Image Underst. 110, 346-359.]) and the Scale-Invariant Feature Transform (SIFT) (Lowe, 2004[Lowe, D. G. (2004). Int. J. Comput. Vis. 60, 91-110.]; Vedaldi & Fulkerson, 2010[Vedaldi, A. & Fulkerson, B. (2010). Design, 3, 1-4.]) were implemented to determine the amount of tilting error. SIFT and SURF are both image-feature-identification algorithms, which extract key points and compute the corresponding descriptors. The coordinates of the detected pairing features on the two projection images at reverse viewing angles provide orientation vectors that are used to determine the center of axis tilting in the plane perpendicular to the beam path. The amount of axis tilting is then estimated by statistical analysis of the tilting angles of all the pairing features (one value for each pair of features). It is useful to note that both the SIFT and the SURF calculations are fairly efficient. SIFT takes a few minutes to register two 2k × 2k images on a single CPU; SURF accomplishes the same task in a few seconds, although it is more sensitive to noise (see the discussion below). With a GPU implementation, the speed can be improved further.
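The feature matching in this work was implemented in Matlab; the sketch below illustrates the same idea in Python with OpenCV's SIFT (available in OpenCV 4.4 and later). It estimates the tilt from a single robust similarity fit over all matches rather than the per-pair statistical analysis described above, so it is a simplified stand-in rather than the authors' procedure.

```python
import cv2
import numpy as np

def estimate_tilt_deg(img0, img180_flipped):
    """Estimate the in-plane tilt of the rotation axis from a 0/180 deg image pair.

    Both inputs are 8-bit grayscale arrays; img180_flipped is the horizontally
    flipped 180 deg projection. Returns the relative rotation angle in degrees.
    """
    sift = cv2.SIFT_create()
    kp0, des0 = sift.detectAndCompute(img0, None)
    kp1, des1 = sift.detectAndCompute(img180_flipped, None)

    # Match descriptors and keep only unambiguous matches (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des0, des1, k=2)
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

    src = np.float32([kp0[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust similarity fit (RANSAC rejects outlier pairs); the rotation part of the
    # 2x3 matrix gives the tilt of the rotation axis in the detector plane.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return np.degrees(np.arctan2(M[1, 0], M[0, 0]))
```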

For correction of translation with respect to the central column of the area detector, algorithms including phase correlation (Reddy & Chatterji, 1996[Reddy, B. S. & Chatterji, B. N. (1996). IEEE Trans. Image Process. 5, 1266-1271.]), cross-correlation (Szeliski, 2006[Szeliski, R. (2006). Handbook of Mathematical Models in Computer Vision, pp. 273-292. Berlin: Springer.]), SURF and SIFT were employed. A detailed presentation and evaluation of the results from this method are given in §4[link].
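A minimal sketch of the phase-correlation translation step is shown below, written with plain NumPy FFTs rather than the Matlab implementation used in this work. It relies on the fact that, after flipping the 180° image, the remaining horizontal shift of the pair is twice the offset of the rotation axis from the detector mid-column; the sign convention is illustrative.

```python
import numpy as np

def rotation_center_offset(img0, img180_flipped):
    """Translational offset of the rotation axis from the detector mid-column (pixels).

    Uses FFT-based phase correlation between the 0 deg image and the horizontally
    flipped 180 deg image (both tilt-corrected). If the axis is offset by c pixels,
    the flipped pair is shifted by ~2c, so the offset is half the measured shift.
    """
    F0 = np.fft.fft2(img0)
    F1 = np.fft.fft2(img180_flipped)
    cross_power = F0 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase information only
    corr = np.fft.ifft2(cross_power).real

    row, col = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the image width wrap around to negative values.
    if col > img0.shape[1] // 2:
        col -= img0.shape[1]
    return col / 2.0                                  # sign depends on the flip convention
```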

4. Results

A typical projection image (at 0°) of the sandstone sample is shown in Fig. 1[link](a) with the color legend shown on the right-hand edge. Background subtractions including dark-field correction (Gürsoy et al., 2014[Gürsoy, D., De Carlo, F., Xiao, X. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 1188-1193.]) and bright-field correction (Beer–Lambert law) (Titarenko et al., 2010[Titarenko, V., Titarenko, S., Withers, P. J., De Carlo, F. & Xiao, X. (2010). J. Synchrotron Rad. 17, 689-699.]) were applied to remove features in the raw data that were introduced by defects in the imaging system. The image taken in the opposite viewing angle (180°, the green in Fig. 1[link]b) was flipped horizontally and overlaid on the 0° image (the red in Fig. 1[link]b). The mismatch of the features, as indicated by separation of red and green in Fig. 1[link](b), especially in the magnified view of the highlighted area, indicates that the actual rotation axis is not well aligned (off in both translation and tilting) with the central column of the image. Registration of the projection of the real rotation axis on the tomographic data is critical to ensure the quality of the reconstruction.

[Figure 1]
Figure 1
(a) A typical projection X-ray image (scale bar and color legend in the inset). (b) Overlay of the image at 0° (red) and the horizontally flipped image at 180° (green). A magnified view of the highlighted area (white box) is displayed in the inset, showing mismatch of features due to translation and tilting of the rotation axis with respect to the central column of the image. (c) SIFT-identified matching features between the 0° image and the horizontally flipped 180° image. Each set of two white points connected by a line indicates a pair of matching features.

The SURF and the SIFT methods, both of which detect localized image features and provide information about the transformation based on the relative positions of the matching features, were implemented (in Matlab®) to identify and match sample features in the 0° image with those in the horizontally flipped 180° image (Fig. 1[link]c). In the following discussion, we present the results of the tilting correction using the SIFT algorithm. As illustrated in Fig. 1[link](c), the coordinates of the matching features can be used to estimate the amount of relative rotation between these two images. This tilting correction is performed in real space and is independent of the translation correction under normal experimental conditions. As a result, the tilting correction makes the subsequent translation correction more precise. There are obvious outliers among the point pairs shown in Fig. 1[link](c); these are filtered out by setting a threshold on the pair distance. To further study the degree of independence between the rotational and the translational corrections, we numerically introduce an additional translational offset of the rotation axis by cropping the projection images in the horizontal direction. Fig. 2[link](a) shows the calculated amount of axis tilting as a function of the numerically introduced translational offset. The variation in the calculated axis tilting is within ±0.02° when the translational offset is smaller than about 1000 pixels, which is about half of the field of view and is often satisfied in real experiments. However, when the translational offset is greater than 1000 pixels, the algorithm breaks down because the detection of matching features fails.
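The per-pair statistics and the distance-threshold outlier rejection described above might look roughly like the following sketch; the choice of rotation center and the threshold value are assumptions for illustration only, not the values used in this work.

```python
import numpy as np

def tilt_from_pair_statistics(src_pts, dst_pts, center, max_dev=10.0):
    """Per-pair tilt angles (degrees) with simple outlier rejection, as in Fig. 2(b).

    src_pts, dst_pts : (N, 2) arrays of matched (x, y) feature coordinates from the
                       0 deg image and the horizontally flipped 180 deg image.
    center           : assumed (x, y) center of the in-plane rotation (illustrative).
    max_dev          : pairs whose separation deviates from the median pair distance
                       by more than this many pixels are treated as outliers.
    """
    src_pts, dst_pts = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    d = np.linalg.norm(dst_pts - src_pts, axis=1)
    keep = np.abs(d - np.median(d)) < max_dev         # threshold on the pair distance

    v0 = src_pts[keep] - center
    v1 = dst_pts[keep] - center
    # One tilt value per surviving pair: the signed angle between the two position vectors.
    angles = np.degrees(np.arctan2(v1[:, 1], v1[:, 0]) - np.arctan2(v0[:, 1], v0[:, 0]))
    angles = (angles + 180.0) % 360.0 - 180.0         # wrap to (-180, 180]
    return np.median(angles), angles                  # estimate plus the full distribution
```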

[Figure 2]
Figure 2
(a) The calculated tilting angle as a function of artificially introduced translation offset. (b) The coordinates of each pair of matching features are used to calculate the amount of relative rotation, resulting in a statistical distribution which changes as different amounts of artificial noise are introduced.

As one would expect, the effectiveness of this method depends on the statistical distribution of the values generated from a number of pairing features [illustrated in Fig. 1[link](c)]. In order to study the robustness of this method against imaging noise, we carried out a systematic evaluation, as described below. The histogram of the estimated tilting angles (one for each feature pair) is plotted in Fig. 2[link](b). Poisson noise, different amounts of random noise and different densities of `salt and pepper' noise were added to the raw image to evaluate the effect of noise. In the case of Poisson noise, the input pixel values are interpreted as means of Poisson distributions scaled up by 1 × 10^12; for example, an input pixel with value 5.5 × 10^-12 is replaced by a value drawn from a Poisson distribution with mean 5.5 and then scaled back down by 1 × 10^12 (as described in the Matlab R2014b documentation). The random noise is introduced by adding a random matrix (white noise) scaled by the corresponding percentage of the mean intensity. The salt and pepper noise is added at the densities indicated by the corresponding percentage values. The results plotted in Fig. 2[link](b) show that the proposed tilting correction method works fairly well with the Poisson noise and the random noise. However, the algorithm fails when salt and pepper noise is added at a density of 5%, because feature-identification-based algorithms are very sensitive to this type of noise. Applying noise reduction methods (especially for the reduction of salt and pepper noise) prior to the SIFT/SURF calculation could therefore be beneficial.
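A rough NumPy approximation of this noise-injection test is sketched below. The evaluation in this work used Matlab's imnoise, so the conventions here (the 1 × 10^12 Poisson scaling, the noise distributions and the intensity normalization of the input image) are assumptions rather than an exact reproduction.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson(img):
    """Poisson noise following the imnoise convention quoted above: pixel values are
    treated as Poisson means after scaling by 1e12 (assumes the image is normalized
    accordingly)."""
    return rng.poisson(img * 1e12).astype(float) / 1e12

def add_random(img, fraction):
    """Additive white noise scaled to a given fraction of the mean intensity."""
    return img + fraction * img.mean() * rng.standard_normal(img.shape)

def add_salt_and_pepper(img, density):
    """Set a given fraction of pixels to the image minimum or maximum."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2] = img.min()
    out[mask > 1 - density / 2] = img.max()
    return out
```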

After correction of the tilting of the rotation axis, the translation correction is relatively straightforward. While the SIFT and the SURF methods could still be used for the translation correction (and were implemented in our software package), alternative analytical algorithms, including phase correlation and cross-correlation, are computationally less expensive. In Table 1[link], we list the calculated relative shift of the rotation center using different algorithms and with different amounts of artificial noise. The phase correlation and the cross-correlation methods show good robustness to noise, while the feature-identification-based methods (SIFT and SURF) are more sensitive to noise, especially to salt and pepper noise.

Table 1
The calculated translational offset of the center of rotation (in pixels)

Artificial noise        Phase correlation   Cross-correlation   SIFT     SURF
Raw data                −81.8               −81.9               −81.8    −81.9
Poisson                 −81.8               −82.0               −81.8    −82.6
1% random               −81.8               −82.1               −81.8    −82.4
3% random               −81.8               −82.1               −81.7    −83.6
5% random               −81.8               −82.1               −81.7    −84.2
10% random              −81.8               −82.1               −81.6    −84.4
1% salt and pepper      −81.8               −81.8               −81.7    −84.2
2% salt and pepper      −82.0               −81.5               −80.1    −86.0
5% salt and pepper      −82.2               −78.6               −77.5    Failed

For a better evaluation of the proposed method, we performed a reconstruction of the experimental data in which a tilting error was deliberately introduced in the plane perpendicular to the optical path. The reconstructed central slices with and without translation and/or rotation corrections for the rotation axis are shown in Fig. 3[link]. The improvement in the quality of the reconstruction [comparing the fully corrected Fig. 3[link](d) with the other panels in Fig. 3[link]] is significant when registration of the rotation axis is properly performed.

[Figure 3]
Figure 3
Reconstructed central slices from the experimental data with tilting error in the plane perpendicular to the beam path. (a) Uncorrected reconstruction without rotation-axis registration. (b) Reconstruction with translation correction only. (c) Reconstruction with tilting correction only. (d) Reconstruction with both tilting and translation correction.

However, it is worth mentioning that the proposed tilting correction is only sensitive to tilting error in the plane perpendicular to the optical path. Fig. 4[link] shows the reconstruction results with and without the proposed tilting correction for data with tilting errors in the plane defined by the beam path and the vertical axis and/or the plane perpendicular to the beam path. Figs. 4(a) (no tilting correction) and 4(b) (with tilting correction) show the reconstruction of the data with a tilting error only in the plane perpendicular to the beam; Figs. 4[link](c) (no tilting correction) and 4(d) (with tilting correction) show the reconstruction of data with a tilting error only in the plane defined by the beam path and the vertical axis; Figs. 4[link](e) (no tilting correction) and 4(f) (with tilting correction) show the reconstruction of the data with tilting errors in both planes. While a significant improvement can be seen in Fig. 4[link](b) compared with Fig. 4[link](a), the improvement from Fig. 4[link](e) to Fig. 4[link](f) is visible but the artifacts remain obvious. There is no noticeable difference in the quality of the reconstruction between Figs. 4[link](c) and 4(d).

[Figure 4]
Figure 4
Panels (a) (no tilting correction) and (b) (with tilting correction) show the reconstruction of the data with a tilting error only in the plane perpendicular to the beam; panels (c) (no tilting correction) and (d) (with tilting correction) show the reconstruction of data with a tilting error only in the plane defined by the beam path and the vertical axis; panels (e) (no tilting correction) and (f) (with tilting correction) show the reconstruction of the data with tilting errors in both planes. The second row of panels shows magnified views of the highlighted areas in the corresponding panels above.

5. Discussion and conclusions

We have presented an improved method for registration of the rotation axis in X-ray tomographic data that takes both tilting and translation errors into consideration. By analysing two images acquired at reverse projection angles, the tilting and translation errors of the rotation axis were calculated and corrected prior to tomographic reconstruction, using several different algorithms. The method has demonstrated robustness and efficiency in handling a synchrotron-based tomographic data set on a geological sample. The effect of noise was also studied systematically by numerically introducing artificial noise. This fully automatic functionality has now been implemented in an in-house-developed software package known as TXM-Wizard (Liu et al., 2012[Liu, Y., Meirer, F., Williams, P. A., Wang, J., Andrews, J. C. & Pianetta, P. (2012). J. Synchrotron Rad. 19, 281-287.]).

There are some limitations to the method presented in this work. One concern is that only two image frames are used in this routine, while the rest of the data are not used for registration of the rotation axis. We expect that a hybrid method, integrating the method presented in this work with conventional methods, could be designed to perform more precise registration of the rotation axis in an automatic manner. The method proposed in this work should give a very good starting point for the traditional methods (such as the `scoring' type of processes, which take the information from all of the projection images into consideration) to continue with fine tuning at good (even sub-pixel) resolution.
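Such a hybrid scheme might be organized as in the following sketch, where the coarse estimate from the reverse-projection method seeds a narrow scoring sweep; the window, step size and scoring function are all hypothetical, not part of the present implementation.

```python
import numpy as np

def refine_center(coarse_center, score_fn, window=2.0, step=0.1):
    """Hybrid two-stage registration: refine a coarse rotation-center estimate
    (from the reverse-projection method) with a narrow 'scoring' sweep.

    score_fn(center) is any reconstruction-quality metric where lower is better,
    e.g. the entropy of a reconstructed test slice.
    """
    candidates = np.arange(coarse_center - window, coarse_center + window + step, step)
    scores = [score_fn(c) for c in candidates]
    return candidates[int(np.argmin(scores))]

# Example (hypothetical): center = refine_center(detector_mid + offset, entropy_of_test_slice)
```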

Another limitation of the proposed approach is that tilting of the rotation axis in the plane defined by the rotation axis and the optical path cannot be corrected. This is because rotational correction of the projection data is performed in the plane that is perpendicular to the optical path and is only sensitive to the tilting error in this plane. However, the degree of tilting error, which is not correctable yet, can be estimated by evaluating vertical movements of sample features using the same algorithms (especially SURF and SIFT) implemented. Algorithms for X-ray-computed laminography (Helfen et al., 2013[Helfen, L., Xu, F., Suhonen, H., Urbanelli, L., Cloetens, P. & Baumbach, T. (2013). J. Instrum. 8, C05006.]), in which the inclination of the sample rotation axis is introduced on purpose for dealing with specimens of large aspect ratio, could then be used for further correction. The FDK method (Feldkamp et al., 1984[Feldkamp, L. A., Davis, L. C. & Kress, J. W. (1984). J. Opt. Soc. Am. A, 1, 612-619.]) is another option that could be applied in the case when the amount of axis tilting is known.

There is an additional issue with high-resolution imaging: the jitter of the stage becomes visible in the data. With the development of novel X-ray focusing optics (Chang & Sakdinawat, 2014[Chang, C. & Sakdinawat, A. (2014). Nat. Commun. 5, 4243.]; Yan et al., 2014[Yan, H., Conley, R., Bouet, N. & Chu, Y. S. (2014). J. Phys. D, 47, 263001.]), high spatial resolution imaging is becoming increasingly available and it is starting to play a more important role in many research fields. In these cases, the mismatch of features from the 0° and flipped 180° images could also be caused by other motor errors, and more sophisticated algorithms would be needed to correct it. The hybrid method mentioned above could be a possible solution.

Acknowledgements

YL gratefully thanks Dr Junyue Wang (HPSTAR) and Dr Youli Hong (Zeiss) for valuable discussions. Portions of this research used resources of the Advanced Photon Source, a US Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. Stanford Synchrotron Radiation Lightsource is a Directorate of SLAC National Accelerator Laboratory and an Office of Science User Facility operated for the US Department of Energy Office of Science by Stanford University. FFH and SMB acknowledge support from The Global Climate and Energy Project (GCEP) at Stanford. ZW acknowledges support from the Science Fund for Creative Research Groups, NSFC (Grant No. 11321503), the National Basic Research Program of China (Grant No. 2012CB825801) and the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. KJCX2-YW-N42).

References

Azevedo, S. G., Schneberk, D. J., Fitch, J. P. & Martz, H. E. (1990). IEEE Trans. Nucl. Sci. 37, 1525–1540.
Bautz, W. & Kalender, W. (2005). Radiologe, 45, 350–355.
Bay, H., Ess, A., Tuytelaars, T. & Van Gool, L. (2008). Comput. Vis. Image Underst. 110, 346–359.
Beylkin, G. (1987). IEEE Trans. Acoust. Speech Signal Process. 35, 162–172.
Cerqueira, M. D., Weissman, N. J., Dilsizian, V., Jacobs, A. K., Kaul, S., Laskey, W. K., Pennell, D. J., Rumberger, J. A., Ryan, T. & Verani, M. S. (2002). Circulation, 105, 539–542.
Chang, C. & Sakdinawat, A. (2014). Nat. Commun. 5, 4243.
Donath, T., Beckmann, F. & Schreyer, A. (2006). J. Opt. Soc. Am. A, 23, 1048–1057.
Ebner, M., Marone, F., Stampanoni, M. & Wood, V. (2013). Science, 342, 716–720.
Feldkamp, L. A., Davis, L. C. & Kress, J. W. (1984). J. Opt. Soc. Am. A, 1, 612–619.
Fu, Q., Saiz, E. & Tomsia, A. P. (2011). Adv. Funct. Mater. 21, 1058–1063.
Gürsoy, D., De Carlo, F., Xiao, X. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 1188–1193.
Helfen, L., Xu, F., Suhonen, H., Urbanelli, L., Cloetens, P. & Baumbach, T. (2013). J. Instrum. 8, C05006.
Kao, T. L., Shi, C. Y., Wang, J., Mao, W. L., Liu, Y. & Yang, W. (2013). Microsc. Res. Tech. 76, 1112–1117.
Lam, Y. M., Chen, X., Marean, C. W. & Frey, C. J. (1998). J. Archaeol. Sci. 25, 559–570.
Larabell, C. A. & Nugent, K. A. (2010). Curr. Opin. Struct. Biol. 20, 623–631.
Liu, Y., Meirer, F., Williams, P. A., Wang, J., Andrews, J. C. & Pianetta, P. (2012). J. Synchrotron Rad. 19, 281–287.
Lowe, D. G. (2004). Int. J. Comput. Vis. 60, 91–110.
Niemi, A., Bensabat, J., Fagerlund, F., Sauter, M., Ghergut, J., Licha, T., Fierz, T., Wiegand, G., Rasmusson, M., Rasmusson, K., Shtivelman, V. & Gendler, M. (2012). Energy Proc. 23, 504–511.
Pan, Y., De Carlo, F. & Xiao, X. (2012). Proc. SPIE, 8313, 831328.
Rau, C., Somogyi, A. & Simionovici, A. (2003). Nucl. Instrum. Methods Phys. Res. B, 200, 444–450.
Reddy, B. S. & Chatterji, B. N. (1996). IEEE Trans. Image Process. 5, 1266–1271.
Rubin, G. D., Beaulieu, C. F., Argiro, V., Ringl, H., Norbash, A. M., Feller, J. F., Dake, M. D., Jeffrey, R. B. & Napel, S. (1996). Radiology, 199, 321–330.
Sedjo, R. & Sohngen, B. (2012). Annu. Rev. Resource Econ. 4, 127–144.
Szeliski, R. (2006). Handbook of Mathematical Models in Computer Vision, pp. 273–292. Berlin: Springer.
Titarenko, V., Titarenko, S., Withers, P. J., De Carlo, F. & Xiao, X. (2010). J. Synchrotron Rad. 17, 689–699.
Vedaldi, A. & Fulkerson, B. (2010). Design, 3, 1–4.
Vo, N. T., Drakopoulos, M., Atwood, R. C. & Reinhard, C. (2014). Opt. Express, 22, 19078.
Wang, Y., De Carlo, F., Mancini, D., McNulty, I., Tieman, B., Bresnahan, J., Foster, I., Insley, J., Lane, P., von Laszewski, G., Kesselman, C., Su, M.-H. & Thiebaux, M. (2001). Rev. Sci. Instrum. 72, 2062–2068.
Yan, H., Conley, R., Bouet, N. & Chu, Y. S. (2014). J. Phys. D, 47, 263001.
Zhu, W., Gaetani, G. A., Fusseis, F., Montesi, L. G. J. & De Carlo, F. (2011). Science, 332, 88–91.

© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.
