Responsive alignment for X-ray tomography beamlines
Brazilian Synchrotron Light Laboratory, Rua Giuseppe Maximo Scolfaro 10000, Campinas, Sao Paulo, Brazil, and University of Campinas, Barao Geraldo, Campinas, SP 13083-970, Brazil
*Correspondence e-mail: eduardo.miqueles@lnls.br
X-ray computed tomography (CT) is an imaging technique used to obtain the internal structure of a sample and a three-dimensional representation of it. In general, parallel-beam CT reconstruction algorithms require a precise angular alignment and knowledge of the exact position of the rotation axis. Highly brilliant X-ray sources with ever-increasing data-acquisition rates demand optimized alignment techniques to avoid compromising in situ data analysis. This paper presents a method to automatically align the angular orientation and linear position of the rotation axis in a tomography setup by correlating image features from different X-ray projections.
Keywords: X-ray tomography; tomography setup; automatic alignment; linear misalignment; angular misalignment.
1. Introduction
The Brazilian Synchrotron Light Laboratory is currently engaged in the development and construction of Sirius (Rodrigues et al., 2016), a storage-ring-based fourth-generation light source (Hettel, 2014). Its ultra-low emittance (0.28 nm rad) and high brilliance allow the execution of very competitive experiments, opening new perspectives for research in fields such as materials science, structural biology, nanoscience, physics, earth and environmental science and cultural heritage, among many others. The proposed X-ray tomography beamline at Sirius, named MicrO and NanO Tomography (MOGNO) (Archilha et al., 2016), is being designed as a micro- and nano-imaging beamline focused on multiscale analysis of the internal three-dimensional structures of different materials and objects (Costa et al., 2018). Given the real-time nature of image acquisition provided by MOGNO, owing to the associated energy and high-flux characteristics, a three-dimensional image will be obtained in the order of 1–5 s, requiring automatic methods for a rapid and robust tomography setup (Vasconcelos et al., 2018).
1.1. Computed tomography
Computed tomography (CT) is a process that uses multiple X-ray radiographs acquired at different angles to produce cross-sectional and three-dimensional representative images of specific areas of a scanned object, allowing the user to see inside the object without cutting it. A CT scan usually involves three basic components: an X-ray source, a sample and an X-ray detector (Bonse & Busch, 1996). As the sample rotates on a stage between θ = 0 and θ = π, several projection images (or frames) are collected by an area detector, typically CCD (charge-coupled device) based devices or direct-conversion counting detectors such as Medipix3RX (Gimenez et al., 2011; Rinkel et al., 2015). After a half-rotation (0–180°), a cube of information is generated; each slice of this volume is called a sinogram and needs to be processed by a tomographic reconstruction algorithm to produce the three-dimensional images.
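As an illustration of this data layout, the following minimal sketch (assuming the projections are stacked in a NumPy array ordered as angle × detector row × detector column, an assumed layout rather than the beamline's actual file format) shows that a sinogram is simply one detector row followed across all angles.

```python
import numpy as np

# Assumed projection stack layout: n_angles x n_rows x n_cols
# (illustrative sizes; real beamline data may be stored differently).
n_angles, n_rows, n_cols = 181, 256, 256
projections = np.zeros((n_angles, n_rows, n_cols), dtype=np.float32)

def sinogram(projections: np.ndarray, row: int) -> np.ndarray:
    """Return the sinogram of one detector row: an (angle x column) image."""
    return projections[:, row, :]

# Each such slice is what a reconstruction algorithm consumes.
sino = sinogram(projections, row=128)
```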
Significant progress in spatial resolution and ever-increasing data-acquisition rates enable new tomography techniques, such as four-dimensional tomography (García-Moreno et al., 2018). In this context, in situ data analysis plays an important role in the success of the experiment, requiring new automatic setup methods and faster reconstruction algorithms such as the low-complexity distributed tomographic backprojection with a fast CUDA implementation (Martinez et al., 2017), which is based on an alternative low-cost backprojection operator (Miqueles et al., 2018). This algorithm is currently running at the IMX beamline and was used to generate the three-dimensional images included in this paper. In general, reconstruction algorithms based on backprojection require a precise angular alignment and knowledge of the exact position of the rotation axis to guarantee the Ludwig–Helgason consistency conditions (Helgason, 2011; Willsky & Prince, 1990; Natterer, 1986).
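For reference, the consistency conditions invoked here can be stated compactly. The following is the textbook form of the Helgason–Ludwig moment conditions for the parallel-beam Radon transform (a standard result, not an equation reproduced from this paper):

```latex
% Helgason-Ludwig consistency conditions (textbook form).
% p_\theta(t) is the parallel projection at angle \theta, with t the
% signed distance from the rotation axis.
\begin{equation*}
  m_n(\theta) \;=\; \int_{-\infty}^{\infty} t^{\,n}\, p_\theta(t)\, \mathrm{d}t ,
  \qquad n = 0, 1, 2, \ldots
\end{equation*}
% For consistent data, m_n(\theta) must be a homogeneous polynomial of
% degree n in \cos\theta and \sin\theta; in particular m_0(\theta), the
% total attenuation, is independent of \theta.  A shifted or tilted
% rotation axis violates these constraints, which is why backprojection
% requires the axis position to be known precisely.
```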
Fig. 1 illustrates a typical misaligned parallel-beam tomography setup that will be used as a reference in the next section. The x axis is horizontal, the y axis is vertical and the z axis is aligned with the beam direction. The angles of rotation around the axes x, y, and z are denoted as pitch, yaw (θ, tomography axis) and roll, respectively. The plane of the detector is considered to be positioned normal to z. The linear offset d and the misalignment angles α and γ will be discussed in §1.2 and §1.3.
1.2. Angular misalignment
Angular misalignment is a common problem in high-resolution experiments (micro- and nano-tomography). It occurs when the rotation axis is not aligned with the y axis, making α and γ non-zero. In Fig. 1 these two angles can be seen in the projections of the rotation axis onto the xy plane (the detector plane) and the yz plane (normal to the detector).
This type of error cannot be repaired after the experiment, so it is the most critical error to avoid. Mathematically, any reconstruction from an experiment with angular deviations does not represent the true measured sample. Some beamlines use sensors to perform this alignment, and even conventional levels are used (Sun et al., 2006); however, this type of approach does not offer the speed and robustness required for new state-of-the-art experiments.
Our approach corrects the system automatically, without human intervention, facilitating recalibration of the beamline after the tomography setup is modified, even by non-specialized people who are not familiar with the calibration procedure.
1.3. Linear misalignment
In addition to the correction of the angles α and γ, the Ludwig–Helgason condition (Helgason, 2011) requires that, in a parallel-beam system, the projection of the rotation axis falls on the central column of the detector plane. That is, the distance d illustrated in Fig. 1 should be zero. This is a difficult condition to satisfy in a nano-resolution experimental setup. Methods based on sinogram analysis (Weitkamp & Bleuet, 2004) can also be applied; however, in this manuscript the analysis is carried out on the projections instead of the sinograms.
Most popular reconstruction algorithms (like FBP, used on IMX) rely on prior knowledge of the sinogram ray offset. Three types of methods are mainly used to find this parameter (Jun & Yoon, 2017). The first analyzes the quality of the reconstructed image and plots it as a function of the relative offset. The second uses the calculated center of mass of the image; for it to work, the sample has to remain inside the field of view at all projection angles, which is often not possible (Donath et al., 2006). The third uses images taken at reverse angles (0 and 180°, for example); after registering the two images, the offset of the center of rotation can be calculated (Yang et al., 2015).
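As a rough sketch of the third approach (an illustration using one-dimensional phase correlation, an assumed registration technique rather than the one used in the cited work), the 180° projection is mirrored horizontally and registered against the 0° projection; for a parallel beam the recovered shift is twice the rotation-axis offset:

```python
import numpy as np

def axis_offset_from_reverse_angles(p0: np.ndarray, p180: np.ndarray) -> float:
    """Estimate the rotation-axis offset (in pixels) from projections taken at
    0 and 180 degrees.  Assumes a parallel beam and that the sample stays
    inside the field of view; the sign convention depends on the geometry."""
    a = p0.sum(axis=0)                # collapse rows into a 1D absorption profile
    b = p180.sum(axis=0)[::-1]        # mirrored 180-degree profile
    a = a - a.mean()
    b = b - b.mean()
    # Circular cross-correlation via FFT; the peak gives the relative shift.
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    if shift > a.size // 2:           # unwrap negative shifts
        shift -= a.size
    return shift / 2.0                # mirroring doubles the axis offset
```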
All of these methods correct the offset after the experiment is completed and modify the data. The method presented in this paper minimizes d before the data acquisition by moving the beamline motors to the correct position, so that the experiment is performed already under the Ludwig–Helgason conditions, reducing the need for data processing. Also, the proposed method uses only the projections generated by the beamline to perform the alignments, so it can be used over a wide range of resolutions, from micro to nano, without requiring any additional configuration.
2. Alignment methodology
The procedures for correcting the misalignment that are described in the previous section are performed in separate steps and are required whenever the beamline setup is modified, e.g. moving the detector or sample stages, sample-environment interchange, or any type of movement that may cause variation in the axis of rotation.
To work automatically and reliably, the proposed method depends on robust algorithms to locate features in X-ray projection images. Some of the most widely used feature-detection algorithms are SURF (Bay et al., 2006), SIFT (Lowe, 2004), AKAZE (Alcantarilla & Solutions, 2011), ORB (Rublee et al., 2011) and FAST (Rosten & Drummond, 2006). In this paper, the fast and robust Scale Invariant Feature Transform (SIFT) method is used as an example. SIFT features are invariant to image scaling and rotation, and partially invariant to changes in illumination and three-dimensional camera viewpoint. They are well localized in both the spatial and frequency domains, reducing the probability of disruption by occlusion, clutter or noise (Lowe, 2004).
Both alignment steps extract the projection features at different angles and compare them in order to map the spatial regions in the image. The result of the comparison process is a vector containing the compatible features and their respective locations. The features are filtered to remove outliers and matches from image artifacts. Fig. 2 illustrates the resultant matching of a comparison between two projections of a mouse embryo using the SIFT algorithm.
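A minimal sketch of this detect-and-match step, based on the SIFT implementation available in OpenCV and Lowe's ratio test for discarding ambiguous matches (the specific thresholds and the outlier filtering used at the beamline are not given in the text, so the values below are assumptions):

```python
import cv2
import numpy as np

def match_features(img_a: np.ndarray, img_b: np.ndarray):
    """Detect SIFT features in two 8-bit grayscale projections and return
    the pixel coordinates of the pairs that survive Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)

    pts_a, pts_b = [], []
    for m, n in knn:
        if m.distance < 0.75 * n.distance:    # ratio threshold (assumed value)
            pts_a.append(kp_a[m.queryIdx].pt)
            pts_b.append(kp_b[m.trainIdx].pt)
    return np.asarray(pts_a), np.asarray(pts_b)
```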
2.1. Angular alignment
In an aligned tomography setup, the angles γ and α are equal to zero. Also, when the sample is rotated, the vertical position of the projected features in the detector plane is not affected. However, when there is an angular misalignment, a feature observed at different θ presents position variations. The effects of γ and α misalignment on the projected features are represented in Figs. 3 and 4, respectively; the light gray ellipses represent the sample with the rotational stage at θ and the dark gray ellipses represent the sample at θ + π. Colored dots are examples of features. The blue axis represents the central column of the detector, the green axis represents the center of the sample and the red axis is the projection of the rotation axis. The dashed line represents the border of the detector.
To maximize the visibility of the α misalignment, it is helpful, though not necessary, to place the sample on the rotational stage with a horizontal shift. For γ, the error is maximized by shifting the sample along the beam path (z axis).
The process for angle alignment is performed separately for pitch and roll and goes through an iterative minimization of the residual displacement between matched features in the projections taken at θ and θ + π.
Each iteration of the process has the following steps. (i) Obtain the first X-ray projection at θ. (ii) Obtain the second X-ray projection at θ + π. (iii) Locate the SIFT features in both projections. (iv) Match the image features. (v) Calculate the residual displacement between the matched features. (vi) Convert the displacement into an angular correction. (vii) Move the motor by the calculated angle. The process is repeated, changing the pitch and roll angles, until it reaches the minimum value allowed by the motor precision or a limiting value; a schematic version of this loop is sketched below.
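The following sketch shows how such an iteration could be wired together. The motor and detector calls (acquire_projection, move_pitch_roll_motor) are hypothetical placeholders, not the actual IMX/MOGNO control API, and the relation used to convert the feature displacement into an angle is an assumed roll-type geometry rather than the exact expression from the paper:

```python
import numpy as np

# Hypothetical beamline-control stubs; the real control API is not
# described in the paper, so these names are placeholders.
def acquire_projection(theta_deg: float) -> np.ndarray: ...
def move_pitch_roll_motor(axis: str, correction_deg: float) -> None: ...

def align_angle(axis: str, theta_deg: float = 0.0,
                tol_deg: float = 0.01, max_iter: int = 10) -> None:
    """Iteratively reduce one misalignment angle by comparing matched
    SIFT features in projections at theta and theta + 180 degrees.
    Reuses match_features() from the earlier sketch."""
    for _ in range(max_iter):
        p_a = acquire_projection(theta_deg)
        p_b = acquire_projection(theta_deg + 180.0)
        pts_a, pts_b = match_features(p_a, p_b)
        if len(pts_a) == 0:
            break
        dx = pts_a[:, 0] - pts_b[:, 0]   # ~ twice the horizontal distance from the axis
        dy = pts_a[:, 1] - pts_b[:, 1]   # vertical displacement between the two views
        # Least-squares slope dy/dx approximates sin(tilt) for an in-plane
        # (roll-like) tilt; this geometric relation is an assumption.
        slope = float(np.dot(dx, dy) / (np.dot(dx, dx) + 1e-12))
        correction = np.degrees(np.arcsin(np.clip(slope, -1.0, 1.0)))
        if abs(correction) < tol_deg:
            break
        move_pitch_roll_motor(axis, -correction)
```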
2.2. Linear positioning
For a tomography experiment with an offset of d > 0, as illustrated in Fig. 5, the absolute values of the feature distances from the center of the detector in the projections at θ and θ + π are not equal. This offset can therefore be calculated from the distances of the matched features to the central column of the detector, as written out below.
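One way to make this relation explicit (a sketch consistent with the parallel-beam geometry described here, not necessarily the exact expression of the original equation): for the i-th matched feature, with horizontal coordinates measured from the central column in the two projections, the projected rotation axis lies midway between the pair, so the offset can be estimated as an average over the N matches:

```latex
% Sketch of the rotation-axis offset estimate (assumed form).
% x_i^{\theta} and x_i^{\theta+\pi} are the horizontal coordinates of the
% i-th matched feature, measured from the central column of the detector.
\begin{equation*}
  d \;\approx\; \frac{1}{N} \sum_{i=1}^{N}
      \frac{x_i^{\theta} + x_i^{\theta+\pi}}{2}
\end{equation*}
```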
To remove fixed-pattern noise from the images, flat-field correction is used. Background (without a sample in front of the detector: I0) and dark (without an X-ray beam: D) images are acquired. The measured projection images (sample at θ and θ + π: Iθ) are then normalized to new images: Nθ = (Iθ − D)/(I0 − D). The rotation-axis offset d is then calculated from the matched features and minimized by moving the x-axis translation motor.
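A minimal sketch of this normalization step (the array names are illustrative, and the small constant guarding against division by zero in dead detector regions is a practical assumption not mentioned in the text):

```python
import numpy as np

def flat_field_correct(proj: np.ndarray, flat: np.ndarray,
                       dark: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Flat-field correction N_theta = (I_theta - D) / (I0 - D).
    `flat` is the beam-only image (I0) and `dark` the beam-off image (D)."""
    num = proj.astype(np.float32) - dark.astype(np.float32)
    den = flat.astype(np.float32) - dark.astype(np.float32)
    return num / np.maximum(den, eps)
```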
After this step, it is also interesting to align the sample over the rotation axis to make better use of the detector field of view. The frame centroid is calculated using the image moments (Chaumette, 2004) and the sample is then moved so that the centroid coincides with the detector center.
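The centroid follows from the raw image moments; a minimal sketch using OpenCV's moments function (equivalently computable with plain NumPy sums):

```python
import cv2
import numpy as np

def frame_centroid(frame: np.ndarray) -> tuple:
    """Centroid (x, y) of a projection computed from its raw image moments."""
    m = cv2.moments(frame.astype(np.float32))
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def shift_to_detector_center(frame: np.ndarray) -> tuple:
    """Pixel shift that would bring the centroid onto the detector center."""
    cx, cy = frame_centroid(frame)
    rows, cols = frame.shape
    return (cols - 1) / 2.0 - cx, (rows - 1) / 2.0 - cy
```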
3. Alignment results
In this section, tomographic images from misaligned experiments are compared with images from experiments carried out after the alignment process, performed without human intervention. A common case of sample alignment performed at the IMX beamline is also detailed for the linear positioning step.
3.1. Angular alignment step results
Fig. 6 illustrates the effects of pitch and roll misalignment angles on the reconstruction. There is clear evidence of semi-circle-shaped artifacts in all reconstructed slices, which change direction as the slice varies. In the upper part of the sample the artifacts have their concavity facing upwards, whereas in the lower part of the sample the concavity faces downwards. In the central part, the artifact intensity decreases. After the setup alignment using the methodology proposed in this paper, the experiment was repeated and the artifacts were eliminated, as shown in Fig. 7.
3.2. Linear positioning step results
To demonstrate the procedure for minimizing d, a sample alignment from a real experiment at the IMX beamline is used as an example. After positioning the sample on the rotational stage, the user starts the alignment process. Each row in Fig. 8 illustrates an iteration of the code. An iteration consists of taking projections at θ and θ + π, finding the matching features between the two images with the SIFT algorithm, calculating the distance d and the centroid of the sample, and finally moving the motors so that the center of mass and the projection of the rotation axis become collinear with the central column of the detector. In this case the procedure is repeated until the deviations are <1 pixel.
It is possible to observe that in the first iteration (see Fig. 8) both the rotation-axis projection and the center of mass have an offset, whereas in the second iteration this is no longer the case. To finish the alignment of the sample at the detector center, it is also necessary to align the sample in the θ + π/2 direction and to ensure that the sample is fully within the field of view at all angles during the CT experiment.
Fig. 9 illustrates the iterations necessary to align the sample at θ + π/2 and θ + 3π/4. In the first iteration the code already finds the rotation axis aligned, since it was aligned during the alignment at the angles θ and θ + π. However, if the code finds a greater number of matches in any iteration, the rotation axis is recalculated to ensure a better positioning. In the example presented, the alignment at θ + π/2 and θ + 3π/4 takes one more iteration than the alignment at θ and θ + π. This is because the sample was not totally within the field of view in those projections, so the code first brings the sample inside it and then recalculates the center of mass until no new information enters the image. The position of the center of mass of the sample has no relation to the quality of the reconstruction; it serves only to keep the sample fully within the field of view and to help the user fully align the sample.
The magnification at the top of Fig. 10 illustrates the effects of the rotation-axis displacement on the reconstruction. Semi-circle-shaped artifacts are observed in all reconstructed slices. However, unlike the angular misalignment case, the artifacts have the same intensity and direction in every part of the reconstruction. After alignment using the methodology proposed in this paper, the measurement was repeated and the result is shown in the magnification at the bottom of Fig. 10; the artifacts are again completely removed. In the misaligned case, d was only 2 pixels, but that was enough to create visible artifacts.
4. Conclusions
The proposed method uses only the images generated by the beamline detector to perform the alignment process. This is a great advantage as it eliminates the need to acquire high-resolution positioning sensors. Another advantage is that the alignment resolution adapts to the image resolution, that is, the same algorithm can be used for both micro- and nano-tomography experiments.
Applying this alignment method at the IMX beamline has proven to be a great help to users, as they spend less time on the alignment process. Before this method, users would take around 10 min to linearly align the sample before each measurement; this has now been reduced to <1 min (a 90% time reduction), with better positioning results. Pitch and roll alignment used to be carried out manually and was time consuming; this methodology is faster, automatic and reliable.
Acknowledgements
The mouse embryo was kindly provided by Murilo Carvalho. We also thank Nathaly L. Archilha for the scientific discussion during the preparation of this manuscript.
References
Alcantarilla, P. F. & Solutions, T. (2011). IEEE Trans. Patt. Anal. Mach. Intell. 34, 1281–1298.
Archilha, N., O'Dowd, F., Moreno, G. & Miqueles, E. (2016). 2016 SEG International Exposition and 86th Annual Meeting, 18–21 October 2016, Dallas, TX, USA. SEG-2016-13959946. Society of Exploration Geophysicists.
Bay, H., Tuytelaars, T. & Van Gool, L. (2006). Proceedings of the 9th European Conference on Computer Vision, 7–13 May 2006, Graz, Austria, pp. 404–417. Springer.
Bonse, U. & Busch, F. (1996). Prog. Biophys. Mol. Biol. 65, 133–169.
Chaumette, F. (2004). IEEE Trans. Rob. 20, 713–723.
Costa, G., Archilha, N. L., O'Dowd, F. & Vasconcelos, G. (2018). Proceedings of the 16th International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS 2017), Barcelona, Spain, 8–13 October 2017. TUPHA203.
Donath, T., Beckmann, F. & Schreyer, A. (2006). J. Opt. Soc. Am. A, 23, 1048.
García-Moreno, F., Kamm, P. H., Neu, T. R. & Banhart, J. (2018). J. Synchrotron Rad. 25, 1505–1508.
Gimenez, E. N., Ballabriga, R., Campbell, M., Horswell, I., Llopart, X., Marchal, J., Sawhney, K. J., Tartoni, N. & Turecek, D. (2011). IEEE Trans. Nucl. Sci. 58, 323–332.
Helgason, S. (2011). The Radon Transform on Rn. Berlin: Springer.
Hettel, R. (2014). J. Synchrotron Rad. 21, 843–855.
Jun, K. & Yoon, S. (2017). Sci. Rep. 7, 41218.
Lowe, D. G. (2004). Int. J. Comput. Vis. 60, 91–110.
Martinez, G., Filho, J. V. F. & Miqueles, E. X. (2017). arXiv:1704.08364.
Miqueles, E., Koshev, N. & Helou, E. S. (2018). IEEE Trans. Image Process. 27, 894–906.
Natterer, F. (1986). The Mathematics of Computerized Tomography, Vol. 32 of Classics in Applied Mathematics. Philadelphia: Society for Industrial and Applied Mathematics.
Rinkel, J., Magalhães, D., Wagner, F., Frojdh, E. & Ballabriga Sune, R. (2015). Nucl. Instrum. Methods Phys. Res. A, 801, 1–6.
Rodrigues, A., Rodrigues, C., Arroyo, F., Marques, S., Farias, R., Rodrigues, F., Citadini, J., Bagnato, O., Seraphim, R., Liu, L., Franco, J., Neuenschwander, R. & Silva, O. (2016). Proceedings of the 7th International Particle Accelerator Conference (IPAC 2016), 8–13 May 2016, Busan, Korea. WEPOW001.
Rosten, E. & Drummond, T. (2006). Proceedings of the 9th European Conference on Computer Vision, 7–13 May 2006, Graz, Austria, pp. 430–443. Springer.
Rublee, E., Rabaud, V., Konolige, K. & Bradski, G. (2011). 2011 IEEE International Conference on Computer Vision (ICCV), pp. 2564–2571. IEEE.
Sun, Y., Hou, Y., Zhao, F. & Hu, J. (2006). NDT E Int. 39, 499–513.
Vasconcelos, G., Costa, G. & Miqueles, E. (2018). Proceedings of the 16th International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS 2017), Barcelona, Spain, 8–13 October 2017. THPHA197.
Weitkamp, T. & Bleuet, P. (2004). Proc. SPIE, 5535, 623–628.
Willsky, A. S. & Prince, J. L. (1990). Opt. Eng. 29, 535–544.
Yang, Y., Yang, F., Hingerl, F. F., Xiao, X., Liu, Y., Wu, Z., Benson, S. M., Toney, M. F., Andrews, J. C. & Pianetta, P. (2015). J. Synchrotron Rad. 22, 452–457.
© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.