Post-reconstruction 3D single-distance phase retrieval for multi-stage phase-contrast tomography with photon-counting detectors
aNational Institute for Nuclear Physics (INFN) – Trieste Division, Italy, bDepartment of Physics, University of Trieste, Italy, cDepartment of Physical Sciences, Earth and Environment, University of Siena, Italy, dNational Institute for Nuclear Physics (INFN) – Pisa Division, Italy, and eElettra – Sincrotrone Trieste SCpA, Italy
*Correspondence e-mail: francesco.brun@ts.infn.it
In the case of single-distance propagation-based phase-contrast X-ray computed tomography with synchrotron radiation, the conventional reconstruction pipeline includes an independent 2D phase retrieval filtering of each acquired projection prior to the actual reconstruction. In order to compensate for the limited height of the X-ray beam or the small sensitive area of most modern X-ray photon-counting detectors, it is quite common to image large objects with a multi-stage approach, i.e. several acquisitions at different vertical positions of the sample. In this context, the conventional reconstruction pipeline may introduce artifacts at the margins of each vertical stage. This article presents a modified computational protocol where a post-reconstruction 3D volume phase retrieval is applied. By comparing the conventional 2D and the proposed 3D reconstructions of a large mastectomy specimen (9 cm in diameter and 3 cm in height), it is shown here that the 3D approach compensates for the multi-stage artifacts, avoids refined projection stitching, and preserves image quality in terms of spatial resolution, contrast and contrast-to-noise ratio.
Keywords: X-ray phase contrast; computed tomography; single-distance phase retrieval; photon-counting detectors; image artifacts.
1. Introduction
Contrast formation in conventional X-ray computed tomography (CT) is based exclusively on the detection of intensity variations due to the attenuation of the transmitted X-rays along their path. According to this approach, weakly absorbing details as well as different materials having similar absorption properties are poorly discriminated. To overcome this limitation, phase-sensitive techniques, mainly performed at third-generation synchrotron radiation (SR) laboratories, are remarkably attractive. Several techniques have been developed to this aim (Endrizzi, 2018; Bravin et al., 2013) and this article considers the single-distance propagation-based imaging (PBI) method, which does not require additional optical hardware or acquisitions at multiple distances. Single-distance PBI SR-CT is of particular interest as it simply uses free-space propagation between the rotating sample and the detector, and information connected to the phase shift induced by the imaged object is then retrieved via digital image processing. Fast acquisitions are therefore possible, thus encouraging the application of PBI in e.g. medical applications (Bravin et al., 2013).
Several single-distance phase retrieval algorithms have been proposed in the literature (Burvall et al., 2011) and two of them have become a de facto standard at the energies considered in the vast majority of medical and biomedical PBI SR-CT applications. These two algorithms are the so-called Modified Bronnikov Algorithm (MBA) (Groso et al., 2006; Chen et al., 2011) and Paganin's algorithm (Paganin et al., 2002). It has been shown that these two methods present several similarities (Burvall et al., 2011; Chen et al., 2013), and a general preference towards the single-material version of Paganin's algorithm is noticeable among PBI SR-CT users, since only one parameter (the ratio δ/β) needs to be tuned and a physical meaning can be associated with this ratio.
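For reference, written in a common parallel-beam notation, the single-material filter of Paganin et al. (2002) retrieves the projected thickness T of the object from the flat-corrected intensity I/I_0 recorded at the propagation distance z as

\[ T(x, y) = -\frac{1}{\mu}\,\ln\!\left( \mathcal{F}^{-1}\!\left\{ \frac{\mathcal{F}\{ I(x, y)/I_0 \}}{1 + \pi\lambda z\,(\delta/\beta)\,(u^2 + v^2)} \right\} \right), \]

where \lambda is the X-ray wavelength, \mu = 4\pi\beta/\lambda is the linear attenuation coefficient of the assumed single material, and (u, v) are the spatial frequencies conjugate to the detector coordinates (x, y). This form makes explicit that, once the geometry and the energy are fixed, δ/β is the only free parameter.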
Although several digital image processing solutions are commonly included to e.g. correct misalignment or compensate for artifacts, an established reconstruction workflow for PBI SR-CT comprises at least the essential steps of flat-fielding, phase retrieval and reconstruction via, for example, filtered back projection (FBP). As demonstrated by several freely available software tools (Weitkamp et al., 2011; Gürsoy et al., 2014; Mirone et al., 2014; Chen et al., 2012; Brun et al., 2017), common practice in the SR community is to apply a 2D phase retrieval filter to each flat-corrected projection prior to the actual reconstruction. However, it was recently shown that a 3D version of the phase retrieval filter can be applied post-reconstruction, leading to theoretically equivalent results. The mathematical derivation for this has been given by Ruhlandt & Salditt (2016) and Häggmark et al. (2017). Roughly speaking, reconstruction and phase retrieval commute, both being linear filters in Fourier space, so the quantitative meaning of the result does not change when the order of the two steps is swapped. The only concern about this commutativity might be the logarithm included in the phase retrieval formula. However, for weakly absorbing objects the input for phase retrieval is close to 1 due to the low (absorption) contrast, hence the logarithm is approximately linear. It was also shown by Ruhlandt & Salditt (2016) that phase retrieval and reconstruction are interchangeable also when considering approaches other than FBP, such as the Algebraic Reconstruction Technique (ART). Moreover, additional constraints such as non-negativity, range restrictions of the object functions or a known support of the object do not alter the validity of this observation.
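To make the last point explicit, writing the flat-corrected intensity of a weakly absorbing object as I = 1 + \epsilon with |\epsilon| \ll 1, the first-order expansion

\[ \ln I = \ln(1 + \epsilon) \simeq \epsilon = I - 1 \]

shows that the logarithm behaves, to a good approximation, as a linear operation; the whole processing chain therefore remains linear and the two orderings of phase retrieval and reconstruction yield essentially the same result.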
X-ray photon-counting detectors (XPCDs) are becoming more and more popular in SR imaging, and their adoption for phase-contrast imaging has been widely explored in recent years, as proposed by e.g. Gürsoy & Das (2013). One of the disadvantages of current XPCDs is their small sensitive area (a few cm²). This disadvantage is often compensated for by tiling an adequate number of detector units into a larger matrix. A wide field of view is therefore available, thus simplifying the CT acquisition of large objects. On the other hand, the limited height of the resulting field of view is usually compensated for by performing additional acquisitions with a vertical translation of the sample over the rotating stage. This is also the most common way to compensate for the sometimes limited height of the SR beam. Multi-stage acquisitions involving a vertical translation of the sample are therefore quite common in PBI SR-CT, and suitable strategies are required for the inherent issue of image stitching (Kyrieleis et al., 2009; Vescovi et al., 2018) in order to correctly create the reconstructed volume of the whole large object. Projection stitching typically requires the determination of the center of rotation, which in practical multi-stage tomography might vary slightly from one vertical stage to another. Moreover, when considering fast acquisitions (as in the in vivo case) the `step-and-go' acquisition scheme cannot be adopted, and the detector therefore acquires images while the sample rotates freely on the rotating stage. This so-called `continuous' mode requires the determination of the exact angular range covered by the scan which, again, in practical multi-stage CT might vary slightly from one vertical stage to the adjacent one. Both these essential problems are usually tackled and/or validated by operating on reconstructed slices rather than projections. Considering this, a 3D post-reconstruction phase retrieval offers a key advantage: it avoids the complication of projection stitching altogether.
This article presents a comparison between a reconstruction workflow where 2D pre-reconstruction Paganin phase retrieval is applied independently to each vertical stage and a reconstruction pipeline where the 3D post-reconstruction version of the algorithm is used. The comparison considers multi-stage acquisitions in continuous mode (Delogu et al., 2017) of a large mastectomy sample. The considered sample was scanned within the breast CT research program at Elettra (Longo et al., 2016), whose final goal is the volume reconstruction of a breast portion of a size adequate for clinical investigation.
Preliminary results based on the same scanned sample have already been presented (Brombal et al., 2018a).
2. Materials and methods
2.1. Scanned sample
A male mastectomy specimen containing an infiltrating ductal carcinoma with a diameter of about 1.2 cm was considered. The sample was fixed in formalin and sealed in a vacuum bag. A diameter of about 9 cm and a height of 3 cm were measured for this sample. The Directive 2004/23/EC of the European Parliament and of the Council of 31 March 2004 on setting standards of quality and safety for the donation, procurement, testing, processing, preservation, storage and distribution of human tissues was followed. The images reported in this study were acquired to guide the pathologist in the localization of lesions for further histological examination, according to the standard procedures of the clinic operative unit of the Anatomy and Histology Department. The sample derives from surgical material sent to the operative unit according to local guidelines for histological examination.
2.2. Acquisition setup and parameters
Tomographic images were acquired at the SYRMEP beamline (Tromba et al., 2010) of Elettra. The X-ray beam was monochromated by means of a Si(111) double-crystal monochromator and tuned at 38 keV. The resulting horizontal beam at the sample was 3 mm in height. Parallel beam geometry was assumed and 1200 projections were collected over 180° in continuous-rotation mode with an angular speed of 4.5°/s. A sample-to-detector distance of 1640 mm was considered. Each scan required 40 s. Due to the small vertical dimension of the beam, ten scans at different vertical positions were required to reconstruct the full-volume mastectomy, corresponding to a total scan time of about 7 min. For each vertical stage, projections having a size of 2150 × 51 pixels were collected.
The detector used in this project is the large-area CdTe PIXIRAD-8 (Bellazzini et al., 2013) composed of eight horizontally tiled modules. The active area of each module is 30.7 mm × 24.8 mm, leading to a global active area of 246 mm × 25 mm, corresponding to 4096 × 476 pixels with a gap of 3 pixels between adjacent modules. Each pixel is associated with two independent 15-bit counters, and dead-time-free mode was adopted. According to this modality, the detector fills one counter while reading the other, thus providing a virtually zero-dead-time acquisition. The resulting voxel size is 60 µm × 60 µm × 60 µm.
2.3. Common pre-processing
The acquired projections were first streamed to a control workstation and then the custom pre-processing procedure described by Brombal et al. (2018b) was applied. This procedure is summarized hereafter for the sake of completeness. The following steps are sequentially applied: (i) dynamic flat-field equalization to correct for pixel-to-pixel non-uniformity and for the time-dependent gain variations of the detector due to polarization effects; (ii) inpainting of the dead space between adjacent modules (Brun et al., 2018) by linear interpolation involving a kernel of 4 × 4 pixels next to the edge of each block; (iii) removal of bad-pixel speckles by means of an α-trimmed mean filter; (iv) dynamic ring removal based on a rank filter and Gaussian smoothing. Sample projections for each of the ten considered vertical stages after pre-processing are reported in Fig. 1. At the end of these steps the projections are ready either for 2D phase retrieval with subsequent tomographic reconstruction or for reconstruction with subsequent 3D phase retrieval, as sketched in Fig. 2.
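As an illustration of step (iii), an α-trimmed mean filter replaces each pixel with the mean of its neighbourhood after discarding the most extreme values, thereby suppressing isolated bad-pixel speckles while perturbing the remaining pixels only slightly. A minimal MATLAB sketch is given below; it is not the implementation of Brombal et al. (2018b), and the window size and trimming level are purely illustrative.

    function out = alpha_trimmed_mean(img, win, trim)
    % Alpha-trimmed mean filtering of a 2D projection.
    % img  - input image; win - odd window size (e.g. 3);
    % trim - number of extreme values discarded at each end (e.g. 1).
    half = (win - 1) / 2;
    cols = im2col(padarray(img, [half half], 'symmetric'), [win win], 'sliding');
    cols = sort(cols, 1);                  % order each neighbourhood
    cols = cols(1 + trim:end - trim, :);   % drop the lowest/highest values
    out  = reshape(mean(cols, 1), size(img));
    end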
2.4. Reconstruction protocol with 2D phase retrieval
The custom MATLAB implementation reported in Fig. 3 was used to filter each pre-processed projection. The code performs a filtering of each projection after 2D Fourier transformation. Two-dimensional replicate padding of the image is performed before the actual filtering to avoid cross-talk between opposite sides of the image (Weitkamp et al., 2011). Additional padding to reach a power-of-two size was not considered. Cropping is then performed after the inverse 2D Fourier transformation. The code for padding and cropping is not reported in Fig. 3.
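Since the actual code is reported only in Fig. 3, the following minimal MATLAB sketch illustrates the kind of processing involved; the function and variable names are illustrative (not those of the actual implementation) and the padding amount, half of the image size on each side, is just one possible choice.

    function thick = paganin2d(proj, px, lambda, dist, delta_beta)
    % Minimal sketch of single-material (Paganin) 2D phase retrieval.
    % proj - flat-corrected projection (values ~ I/I0); px - pixel size [m];
    % lambda - wavelength [m]; dist - propagation distance [m]; delta_beta - delta/beta.
    [ny, nx] = size(proj);
    pady = round(ny/2); padx = round(nx/2);
    p = padarray(proj, [pady padx], 'replicate', 'both');      % replicate padding
    [nyp, nxp] = size(p);
    f = @(n) ifftshift(((0:n-1) - floor(n/2)) / (n*px));       % frequencies [cycles/m]
    [U, V] = meshgrid(f(nxp), f(nyp));
    flt = 1 ./ (1 + pi*lambda*dist*delta_beta*(U.^2 + V.^2));  % Lorentzian-like low-pass
    p = real(ifft2(fft2(p) .* flt));                           % filtering in Fourier space
    p = p(pady+1:pady+ny, padx+1:padx+nx);                     % crop back to original size
    thick = -log(p);                                           % projected thickness, up to 1/mu
    end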
After having performed the 2D phase retrieval, thanks to the parallel beam geometry, each axial slice is reconstructed independently by applying an implementation of the FBP algorithm with ramp filtering. To correctly compose the final volumetric image, each of the ten considered stages requires a registration to compensate for the unknown angular shift induced by the continuous-mode acquisition. The registration is performed via a lossless image rotation with respect to the first acquired stage. A lossless image rotation requires only specifying an angular offset as additional input to the FBP algorithm. To clarify this, with reference to e.g. the well known MATLAB command iradon, given a sinogram R acquired at the nominal projection angles theta, the output reconstructed image I is obtained with I = iradon(R, theta + k), where k is the positive or negative angular shift tuned for each vertical stage scan. Variations of k imply a rotation of the object in the final reconstructed image. This is not equivalent to applying a rotation (via e.g. the MATLAB command imrotate) to the reconstructed image I, because interpolation would be involved with a consequent degradation of the quality of I. This angular offset k among the stages can be determined either visually or via cross-correlation of two (polar-transformed) adjacent reconstructed slices, i.e. the `last' of one vertical stage against the `first' of the subsequent vertical stage.
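As a usage example (the angle vector and the offset value below are hypothetical), the per-stage registration can be obtained directly within the reconstruction call:

    theta = linspace(0, 180, 1201); theta(end) = [];  % nominal projection angles [deg]
    k = 0.35;                                         % per-stage angular offset [deg]
    I = iradon(R, theta + k, 'linear', 'Ram-Lak');    % offset absorbed into FBP: no interpolation of I
    % as opposed to imrotate(iradon(R, theta), k, 'bilinear', 'crop'), which interpolates and degrades I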
2.5. Reconstruction protocol with 3D phase retrieval
In the reconstruction pipeline where 3D phase retrieval is considered, first each axial slice is reconstructed independently and registered to compensate for the unknown rotation as described in Section 2.4. The custom MATLAB implementation reported in Fig. 4 was then used to filter the reconstructed volume (i.e. the stack of the reconstructed axial slices of all the ten considered vertical stages). The code performs a volume filtering after 3D Fourier transformation (via MATLAB's n-dimensional FFT). Three-dimensional replicate padding of the volume is performed before the actual filtering to avoid cross-talk between opposite sides of the volume. Again, additional padding to reach a power-of-two size was not considered. Cropping is then performed after the inverse 3D Fourier transformation. The code for padding and cropping is not reported in Fig. 4.
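A minimal MATLAB sketch of such a 3D post-reconstruction filter is given below (again with illustrative names, not the code of Fig. 4); it applies the same low-pass form as in the 2D case, but over the three-dimensional frequency magnitude of the stacked volume. Note that no logarithm appears here: in this ordering it is already accounted for by the standard log-transform applied to the projections before FBP, under the linearization discussed in the Introduction.

    function vol = paganin3d(vol, px, lambda, dist, delta_beta)
    % Minimal sketch of post-reconstruction 3D single-material phase retrieval.
    % vol - stacked FBP reconstruction; px - voxel size [m];
    % lambda - wavelength [m]; dist - propagation distance [m]; delta_beta - delta/beta.
    [ny, nx, nz] = size(vol);
    pad = round([ny nx nz]/2);
    v = padarray(vol, pad, 'replicate', 'both');             % 3D replicate padding
    sz = size(v);
    f = @(n) ifftshift(((0:n-1) - floor(n/2)) / (n*px));     % frequencies [cycles/m]
    [U, V, W] = meshgrid(f(sz(2)), f(sz(1)), f(sz(3)));
    flt = 1 ./ (1 + pi*lambda*dist*delta_beta*(U.^2 + V.^2 + W.^2));
    v = real(ifftn(fftn(v) .* flt));                         % filtering in 3D Fourier space
    vol = v(pad(1)+1:pad(1)+ny, pad(2)+1:pad(2)+nx, pad(3)+1:pad(3)+nz);
    end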
2.6. Quantitative analysis
The two reconstructed volumes were quantitatively compared in terms of spatial resolution, contrast and contrast-to-noise ratio (CNR) by considering an intermediate reconstructed axial slice (the central one with respect to a stack of 51 slices). The three different gray-level profiles along the small segments reported in blue in Figs. 5 and 6 were fitted with an error function to assess the spatial resolution, and the related uncertainty was determined as the maximum difference between the measured values. The two circular regions (one within glandular tissue referred to as A and the other one within adipose tissue referred to as B) reported in red in Figs. 5 and 6 were used to compute the mean μ and standard deviation σ of the gray levels. Then the contrast C and CNR were determined as
\[ C = \frac{\mu_A - \mu_B}{\mu_B} \]

and

\[ \mathrm{CNR} = \frac{\mu_A - \mu_B}{\sqrt{(\sigma_A^2 + \sigma_B^2)/2}}. \]
A set of adjacent lines along the reconstructed axial slices (green lines in Figs. 5 and 6) was also considered in order to better highlight the artifact at the interface of adjacent reconstructed stages. Each considered line covers 121 voxels and the standard deviation of the gray levels was computed and plotted for all the lines, i.e. along the axial reconstructed slices.
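For instance (with purely hypothetical voxel indices), the per-slice standard deviation along a fixed 121-voxel line can be computed in MATLAB as

    r0 = 1015; c = 1040:1160;                            % hypothetical row and 121-column span
    profile = squeeze(std(single(vol(r0, c, :)), 0, 2)); % one value per axial slice
    plot(profile); xlabel('slice index'); ylabel('standard deviation');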
3. Results and discussion
Figs. 5 and 6 report axial, sagittal and coronal views of the final volume of 2150 × 2150 × 510 voxels for the 2D and 3D cases, respectively. Each figure includes an additional close-up to better highlight the main difference observed when comparing the two approaches. When considering the central axial slice of the stack of 51 reconstructed images of each vertical stage, no significant differences are observed if the gray-level values of the 2D case are compared with those of the 3D case. However, looking at the sagittal and coronal views, an artifact every 51 slices is easily noticeable in the 2D case. This means that significant differences occur between the reconstructed slices at the margins of two adjacent vertical stages. This artifact is not observed when 3D volume phase retrieval is performed.
The numerical results of the quantitative comparison are summarized in Table 1. The analysis revealed an identical spatial resolution for both the considered cases. The plots of the error functions used for this assessment are reported in Fig. 7. Similarly, a contrast C = 34.4% with CNR = 3.68 for the 2D approach and C = 34.5% with CNR = 3.69 for the 3D case were assessed, thus suggesting no significant differences. On the other hand, the plot profile reported in Fig. 8 shows the above-mentioned artifact since a spike in the measured standard deviation is clearly noticeable every 51 slices.
The origin of this vertically periodic artifact lies in the lack of knowledge about the pixels above and below each projection when the 2D phase retrieval is applied independently, stage by stage: the 2D approach cannot exploit the real information coming from the adjacent vertical stages. Replicate padding is a reasonable attempt to mitigate the absence of this information, but its results were outperformed by the 3D approach. The 3D approach can exploit knowledge of the whole volume, and therefore the abrupt variations every 51 axial slices are not observed.
It is worth underlining that this artifact could be similarly compensated if the projections of each vertical stage could be tiled together to compose a single projection 510 pixels high. However, this stitching process is hampered by several factors. First, imperfections in the relative alignment of the detector and the rotating stage are much better compensated during the reconstruction step by inspecting the computed images; it is quite difficult to recognize geometrical misalignment from the input projections only. Although the simplest case of parallel beam geometry is considered in this article, the proposed methodology can be transferred to e.g. the cone beam case, where the calibration of the geometry becomes even more important. Moreover, the continuous acquisition mode combined with the limited precision of the rotating stage (backlash) implies an unknown angular shift of the acquired dataset. Since the acquisition is performed over 180°, selecting the projections of one vertical stage to be combined with those of the other stages requires horizontal flipping (either of some of the first or of some of the last acquired projections, depending on whether the angular shift is positive or negative). The flipping, in turn, requires knowledge of the center of rotation. Although automatic methods for the determination of the center of rotation exist (Vo et al., 2014), it is common practice to visually assess the correctness of the proposed center of rotation via a few test reconstructions of just one axial slice. Moreover, automatic methods for the center of rotation usually require as input a projection at angle 0° and a second projection at 180°; if complete coverage of 180° is not guaranteed (as might happen in continuous acquisitions), these methods might fail. Similarly, although image correlation techniques might again be considered to automatically assess the angular shift, the correctness of the proposed results is usually much better supervised by an expert user looking at reconstructed slices rather than projections. Considering this, the application of the conventional 2D phase retrieval filtering to a single set of stitched projections would require some preliminary reconstruction anyway. The 3D phase retrieval approach allows this stitching phase and the related challenges to be skipped.
Although an accurate benchmark of the two approaches goes beyond the aim of this work, some computational considerations can still be made. First, the 2D phase retrieval is in principle an on-line approach, i.e. it can be applied as soon as each projection is collected without waiting for the acquisition of the whole dataset, although this possibility is seldom exploited in SR PBI CT; in principle it could result in a globally faster method. On the other hand, the 3D approach is an off-line method, since it requires as input the whole reconstructed volume and therefore the collection of all the projections. More interestingly, memory requirements become significant for 3D phase retrieval. It can be easily noticed from the MATLAB implementation of the 3D phase retrieval filter reported in Fig. 4 that a large amount of memory is required, because the whole reconstructed volume has to be loaded into memory to be processed. For instance, the stacked volume considered in this article is composed of 2150 × 2150 × 510 voxels, which means a 32-bit floating point matrix of about 8.8 GB. Moreover, signal padding is fundamental to avoid cross-talk between opposite sides of the volume when performing Fourier-based filtering. Compared with the 2D case, where horizontal and vertical padding is performed for each projection, the 3D case requires additional padding along the third dimension, thus leading to globally more matrix elements to be processed. This aspect has to be taken into account when trying to apply this approach to large datasets on workstations with a limited amount of RAM. Moreover, MATLAB offers only a complex-input implementation of the fast Fourier transform (FFT), whereas a real-input FFT (i.e. a version of the FFT that exploits the symmetry of the transform of a real-valued signal and therefore computes only half of the coefficients) would reduce the memory footprint of the transformed matrix. More memory-efficient implementations than the one reported in Fig. 4 are therefore possible.
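As a back-of-the-envelope check of the figures quoted above (the factor of eight assumes that padding doubles every dimension, which is just one possible choice):

    nvox = 2150 * 2150 * 510;                                    % voxels in the stacked volume
    fprintf('%.1f GiB unpadded\n', nvox * 4 / 2^30);             % ~8.8 GiB in single precision
    fprintf('%.1f GiB with every dimension doubled\n', nvox * 8 * 4 / 2^30);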
4. Conclusion
CT imaging of large objects might require multiple acquisitions with a vertical translation of the rotating stage. This is particularly true when considering SR sources with a limited vertical beam size and/or modern X-ray photon-counting detectors having a field of view of reduced height. Single-distance phase retrieval is required in the case of free-space propagation phase-contrast CT, and the conventional approach suggests independently processing the projection images of each vertical stage prior to the actual parallel-beam reconstruction. It was shown that artifacts might occur at the margins of each vertical stage, leading to degraded lateral views of the whole reconstructed volume. The continuous acquisition mode complicates a possible approach based on the stitching of the input projections. A solution based on post-reconstruction 3D volume phase retrieval was presented. Although memory requirements become a concern, the proposed modified computational protocol is able to compensate for the above-mentioned artifacts and is therefore an interesting tool for propagation-based phase-contrast CT.
Acknowledgements
The INFN SYRMA-3D collaboration is acknowledged for the dataset considered in this work.
Funding information
This work is part of the activities of the KEST project, funded by INFN – Istituto Nazionale di Fisica Nucleare (National Scientific Committee 5 for Technological and Inter-Disciplinary research). SD is partially supported by Consorzio per la Fisica Trieste.
References
Bellazzini, R., Spandre, G., Brez, A., Minuti, M., Pinchera, M. & Mozzo, P. (2013). J. Instrum. 8, C02028.
Bravin, A., Coan, P. & Suortti, P. (2013). Phys. Med. Biol. 58, R1–R35.
Brombal, L., Donato, S., Brun, F., Delogu, P., Fanti, V., Oliva, P., Rigon, L., Di Trapani, V., Longo, R. & Golosio, B. (2018b). J. Synchrotron Rad. 25, 1068–1077.
Brombal, L., Golosio, B., Arfelli, F., Bonazza, D., Contillo, A., Delogu, P., Donato, S., Mettivier, G., Oliva, P., Rigon, L., Taibi, A., Tromba, G., Zanconati, F. & Longo, R. (2018a). J. Med. Imaging, 6, 031402.
Brun, F., Delogu, P., Longo, R., Dreossi, D. & Rigon, L. (2018). Meas. Sci. Technol. 29, 014001.
Brun, F., Massimi, L., Fratini, M., Dreossi, D., Billé, F., Accardo, A., Pugliese, R. & Cedola, A. (2017). Adv. Struct. Chem. Imaging, 3, 4.
Burvall, A., Lundström, U., Takman, P., Larsson, D. & Hertz, H. (2011). Opt. Express, 19, 10359–10376.
Chen, R., Rigon, L. & Longo, R. (2013). Opt. Express, 21, 7384–7399.
Chen, R., Xie, H., Rigon, L., Longo, R., Castelli, E. & Xiao, T. (2011). Opt. Lett. 36, 1719–1721.
Chen, R.-C., Dreossi, D., Mancini, L., Menk, R., Rigon, L., Xiao, T.-Q. & Longo, R. (2012). J. Synchrotron Rad. 19, 836–845.
Delogu, P., Golosio, B., Fedon, C., Arfelli, F., Bellazzini, R., Brez, A., Brun, F., Lillo, F., Dreossi, D., Mettivier, G., Minuti, M., Oliva, P., Pichera, M., Rigon, L., Russo, P., Sarno, A., Spandre, G., Tromba, G. & Longo, R. (2017). J. Instrum. 12, C01016.
Endrizzi, M. (2018). Nucl. Instrum. Methods Phys. Res. A, 878, 88–98.
Groso, A., Abela, R. & Stampanoni, M. (2006). Opt. Express, 14, 8103–8110.
Gürsoy, D. & Das, M. (2013). Opt. Lett. 38, 1461–1463.
Gürsoy, D., De Carlo, F., Xiao, X. & Jacobsen, C. (2014). J. Synchrotron Rad. 21, 1188–1193.
Häggmark, I., Vågberg, W., Hertz, H. M. & Burvall, A. (2017). Opt. Express, 25, 33543–33558.
Kyrieleis, A., Ibison, M., Titarenko, V. & Withers, P. (2009). Nucl. Instrum. Methods Phys. Res. A, 607, 677–684.
Longo, R., Arfelli, F., Bellazzini, R., Bottigli, U., Brez, A., Brun, F., Brunetti, A., Delogu, P., Di Lillo, F., Dreossi, D., Fanti, V., Fedon, C., Golosio, B., Lanconelli, N., Mettivier, G., Minuti, M., Oliva, P., Pinchera, M., Rigon, L., Russo, P., Sarno, A., Spandre, G., Tromba, G. & Zanconati, F. (2016). Phys. Med. Biol. 61, 1634–1649.
Mirone, A., Brun, E., Gouillart, E., Tafforeau, P. & Kieffer, J. (2014). Nucl. Instrum. Methods Phys. Res. B, 324, 41–48.
Paganin, D., Mayo, S., Gureyev, T., Miller, P. & Wilkins, S. (2002). J. Microsc. 206, 33–40.
Ruhlandt, A. & Salditt, T. (2016). Acta Cryst. A72, 215–221.
Tromba, G., Longo, R., Abrami, A., Arfelli, F., Astolfo, A., Bregant, P., Brun, F., Casarin, K., Chenda, V., Dreossi, D., Hola, M., Kaiser, J., Mancini, L., Menk, R., Quai, E., Quaia, E., Rigon, L., Rokvic, T., Sodini, N., Sanabor, D., Schultke, E., Tonutti, M., Vascotto, A., Zanconati, F., Cova, M., Castelli, E. & Siu, K. K. W. (2010). AIP Conf. Proc. 1266, 18–23.
Vescovi, R., Du, M., de Andrade, V., Scullin, W., Gürsoy, D. & Jacobsen, C. (2018). J. Synchrotron Rad. 25, 1478–1489.
Vo, N., Drakopoulos, M., Atwood, R. & Reinhard, C. (2014). Opt. Express, 22, 19078–19086.
Weitkamp, T., Haas, D., Wegrzynek, D. & Rack, A. (2011). J. Synchrotron Rad. 18, 617–629.
© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.