An iterative image reconstruction algorithm combined with forward and backward diffusion filtering for in-line X-ray phase-contrast computed tomography
^{a}College of Biomedical Engineering, Tianjin Medical University, Tianjin 300070, People's Republic of China, ^{b}The School of Science, Tianjin University of Technology and Education, Tianjin 300222, People's Republic of China, ^{c}The Dental Hospital of Tianjin Medical University, Tianjin 300070, People's Republic of China, ^{d}Radiation Oncology Department, Tianjin Medical University General Hospital, Tianjin 300070, People's Republic of China, and ^{e}Key Laboratory of Optoelectronic Information Technology, Ministry of Education (Tianjin University), Tianjin 300072, People's Republic of China
^{*}Correspondence email: chunhong_hu@hotmail.com
In-line X-ray phase-contrast computed tomography (IL-PCCT) can reveal fine inner structures in low-Z materials (e.g. biological soft tissues), and shows high potential to become clinically applicable. Typically, IL-PCCT utilizes filtered back-projection (FBP) as the standard reconstruction algorithm. However, the FBP algorithm requires a large amount of projection data, and subsequently a large radiation dose, to reconstruct a high-quality image, which hampers its clinical application in IL-PCCT. In this study, an iterative reconstruction algorithm for IL-PCCT was proposed by combining the simultaneous algebraic reconstruction technique (SART) with eight-neighbour forward and backward (FAB8) diffusion filtering, and the reconstruction was evaluated using a Shepp–Logan phantom simulation and a real synchrotron IL-PCCT experiment. The results showed that the proposed algorithm produced high-quality computed tomography images from few-view projections while improving the convergence rate of the reconstruction, indicating that the proposed algorithm is an effective method of dose reduction for IL-PCCT.
Keywords: in-line X-ray phase-contrast computed tomography; simultaneous algebraic reconstruction technique; forward and backward diffusion filtering; few-view projections.
1. Introduction
X-ray phase-contrast imaging (PCI) is a powerful imaging technique that can detect subtle differences in the electron density of materials or tissues. Neglecting anisotropy in the medium, the refractive index n, which characterizes the optical properties of an object, can be described in its complex form n = 1 − δ + iβ, where the decrement δ is responsible for the phase information, while the imaginary part β accounts for the absorption of the X-ray beam passing through the object (Chen et al., 2013). For weakly absorbing objects such as biological soft tissues, phase information plays a more important role than absorption information, because δ is approximately three orders of magnitude higher than β (Momose et al., 1996; Stampanoni et al., 2011; Brandlhuber et al., 2016). Compared with conventional absorption-based X-ray imaging, phase-based PCI enables the acquisition of images with higher resolution in biological soft tissues. As a result, PCI has been widely applied to visualize soft-tissue details and has become one of the most popular preclinical imaging techniques (Bravin & Coan, 2012).
In the past decade, various PCI techniques have been proposed; the four major types are X-ray interferometry, diffraction-enhanced imaging (DEI), X-ray grating interferometry and in-line X-ray phase-contrast imaging (IL-PCI). Among these methods, IL-PCI shows a high potential to become clinically applicable because of its simplicity (Lee, 2015). By extending IL-PCI to computed tomography (CT), in-line X-ray phase-contrast computed tomography (IL-PCCT) holds outstanding potential to reveal detailed microstructures inside biological tissues at micrometre-scale resolution (Liu et al., 2010; Xuan et al., 2015; Hetterich et al., 2016; Cao et al., 2017). Typically, filtered back-projection (FBP) is utilized to reconstruct CT images in IL-PCCT. However, to produce high-quality images, the FBP algorithm requires many projections, which leads to a long total exposure time and thus a large radiation dose. For clinical applications of IL-PCCT, it is important to reduce the radiation dose while maintaining the high quality of the reconstructed images. In IL-PCCT, one approach to decreasing the radiation dose is to shorten the total exposure time by reducing the number of projections, i.e. few-view projections (Melli et al., 2016). CT iterative reconstruction algorithms can provide excellent reconstructed results from few-view projections and thus have high potential for IL-PCCT. The simultaneous algebraic reconstruction technique (SART) is an important algebraic iterative reconstruction method (Hansen & Saxild-Hansen, 2012) that formulates the reconstruction problem as a discrete linear transformation and can reconstruct better results than the FBP algorithm using few-view projections. However, when the SART algorithm is applied to few-view projections, the reconstructed images still retain some artefacts (e.g. streak artefacts and oscillating artefacts), which suggests that the performance of the SART algorithm still needs to be improved.
Current strategies for few-view CT reconstruction assume that the reconstructed images are piecewise smooth and include regularization techniques designed for detail preservation and artefact smoothing in the CT reconstruction procedure, such as the total variation (TV) regularization approach (Sidky et al., 2006). These regularization-based CT reconstruction techniques typically sacrifice regional fine textures and may compromise clinical tasks. Notably, many IL-PCCT images of biological tissues containing complex textures cannot satisfy the assumption of piecewise smoothness, and regularization techniques developed under this assumption may have limited ability to address this case. To overcome this limitation, one possible framework for few-view IL-PCCT reconstruction is to incorporate an anisotropic filtering method, based on local image features, into the CT iterative reconstruction procedure. In 2002, Gilboa et al. proposed a forward and backward (FAB) diffusion-filtering method (Gilboa et al., 2002). As a powerful nonlinear anisotropic diffusion filtering method, the FAB method can adaptively control the diffusion filtering force based on gradient information of local features in the image, thereby enabling a synergy of artefact smoothing and detail preservation. To date, owing to its excellent performance in detail preservation and denoising, the FAB method has been developed and applied to many types of images, including synthetic aperture radar images (Zhou et al., 2004), ultrasound images (Nieniewski, 2014) and magnetic resonance images (Prasath et al., 2015). To the best of our knowledge, no study has shown that the FAB method can be used for IL-PCCT images. Hence, the FAB method was employed here to preserve detailed features and smooth artefacts in the IL-PCCT reconstruction procedure.
Because the gradients of eight neighbours represent the local features of an image more accurately, we developed an eight-neighbour FAB algorithm (FAB8).
In this study, the FAB8 method and the SART algorithm were combined into a SART-FAB8 algorithm, which was applied to IL-PCCT reconstructions with few-view projection data. The proposed method consists of two steps per iteration. First, the SART step is performed to enforce consistency with the (possibly inconsistent) projection data and acquire a reconstructed image containing artefacts. Second, the FAB8 step is utilized to reduce the artefacts in the image acquired from the SART step and improve the convergence of the image reconstruction. Finally, a Shepp–Logan phantom simulation and a synchrotron IL-PCCT experiment were performed to demonstrate the effectiveness of the proposed algorithm.
2. Methods
2.1. IL-PCI and its phase retrieval
As a propagation-based imaging technique, IL-PCI (Snigirev et al., 1995) can produce high-resolution images of weakly absorbing materials, especially biological tissues (Rastogi et al., 2013; Jian et al., 2016; Mai et al., 2017). In IL-PCI, quasi-coherent X-ray beams illuminating the object undergo spatially varying phase shifts. As the beams propagate away from the object, the distorted wavefront, which has undergone different deflections, generates a characteristic pattern in the image plane. Due to Fresnel diffraction, the phase shifts are transformed into detectable intensity variations, which are finally recorded by the detector. In practice, IL-PCI presents a very simple experimental setup for PCI (see Fig. 1). IL-PCI requires no additional optical element compared with the conventional CT modality, provided that the X-ray beams are sufficiently spatially coherent and the sample-to-detector distance (SDD) is variable (Chen et al., 2012). Owing to its high resolution in biological tissues and the simplicity of its experimental setup, IL-PCI has been widely used in biological science and is one of the most important preclinical imaging techniques.
However, projection images from IL-PCI contain both absorption information and phase information (Chen et al., 2011); therefore, phase retrieval must be implemented to extract the phase information. Generally, phase retrieval requires at least two phase-contrast radiographs taken at two different SDDs (Nugent et al., 1996), but this approach delivers a high radiation dose to the samples and encounters a complicated registration problem. According to Gureyev's study, phase retrieval from a single-SDD IL-PCCT dataset is possible (Gureyev et al., 2004), and several phase-retrieval methods using single-SDD IL-PCCT data have been proposed, e.g. the modified Bronnikov algorithm (MBA) (Bronnikov, 1999, 2002; Groso et al., 2006), the TIE-based method of Paganin (Paganin et al., 2002; Wu et al., 2005), and the phase-attenuation duality Bronnikov algorithm (PADBA) (Chen et al., 2013). In this study, PADBA, a single-SDD phase-retrieval method, was applied to the projection images using the PITRE software to extract quantitative phase information (Chen et al., 2012). This algorithm is grounded in the a priori knowledge that the δ and β parts of the complex refractive index are proportional to each other. In our experiment, trial reconstructions showed that a δ/β value of 1000 gave high contrast between adjoining tissues and the best distinction of edge details and fine textures, and thus 1000 was adopted as the δ/β value. After phase retrieval, the phase information distribution of the sample can be obtained from the IL-PCI data and analysed quantitatively.
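For illustration, a single-distance retrieval step can be sketched as a Fourier-domain low-pass filter. The sketch below follows Paganin's TIE-based method (a close relative of PADBA under the phase-attenuation duality assumption) rather than reproducing the exact PADBA/PITRE implementation; the function name and numeric choices are illustrative only.

```python
import numpy as np

def paganin_phase_retrieval(intensity, pixel_size, dist, delta_beta, wavelength):
    """Single-distance phase retrieval (Paganin-type low-pass filter).

    intensity  : flat-corrected projection I/I0 recorded at one SDD
    delta_beta : assumed delta/beta ratio (1000 in this study)
    """
    ny, nx = intensity.shape
    ky = np.fft.fftfreq(ny, d=pixel_size)
    kx = np.fft.fftfreq(nx, d=pixel_size)
    k2 = (2 * np.pi) ** 2 * (kx[None, :] ** 2 + ky[:, None] ** 2)
    # Low-pass filter suppressing Fresnel fringes: 1 + R*lambda*(delta/beta)*|k|^2/(4*pi)
    filt = 1.0 + dist * delta_beta * wavelength * k2 / (4 * np.pi)
    retrieved = np.fft.ifft2(np.fft.fft2(intensity) / filt).real
    # The negative logarithm gives a projected thickness-like (phase) map
    return -np.log(np.clip(retrieved, 1e-12, None))
```

For a perfectly flat field the filter leaves the image unchanged, so the output is simply the negative logarithm of the transmission.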
2.2. The CT iterative reconstruction method
In the IL-PCCT experiment, the imaging model can be approximated by a discrete linear transformation as follows,

p = A f,   (1)

where A stands for an M × N system matrix that represents the X-ray parallel-beam forward projection, p is the projection data acquired from the detector, and f denotes the phase information distribution of the illuminated object. The goal of IL-PCCT reconstruction is to accurately reconstruct f from p.
In this work, a block-iterative SART technique is adopted, which has the potential to handle large-scale data quickly, and is expressed as follows,

f^(k+1) = f^(k) + λ_k W^(−1) A^T V^(−1) [p − A f^(k)],   (2)

where k is the number of iterations, λ_k represents the relaxation coefficient of the kth iteration, T stands for the transpose operator, and V and W are the diagonal matrices with the row sums and column sums of A on the diagonal, respectively.
To improve the convergence performance of the SART algorithm, λ_k is chosen using the line-search method (Hansen & Saxild-Hansen, 2012), which can be computed as follows,

λ_k = arg min_{λ > 0} ‖p − A[f^(k) + λ d^(k)]‖²,   d^(k) = W^(−1) A^T V^(−1) [p − A f^(k)],   (3)

where ‖·‖² represents the square of the Euclidean norm.
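Under the definitions above, one possible dense-matrix sketch of the SART iteration with an exact line search is the following (Python/NumPy for illustration only; the study itself was implemented in MATLAB, and sparse storage of A would be required at realistic problem sizes):

```python
import numpy as np

def sart(A, p, n_iter=20):
    """SART with an exact line search on the residual norm (illustrative sketch)."""
    Vinv = 1.0 / np.maximum(np.abs(A).sum(axis=1), 1e-12)  # inverse row sums
    Winv = 1.0 / np.maximum(np.abs(A).sum(axis=0), 1e-12)  # inverse column sums
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = p - A @ f                    # projection residual
        d = Winv * (A.T @ (Vinv * r))    # SART direction W^-1 A^T V^-1 r
        Ad = A @ d
        lam = float(Ad @ r) / max(float(Ad @ Ad), 1e-12)  # exact line search
        f += lam * d
    return f
```

For a consistent full-column-rank system, the exact line search makes the residual norm decrease monotonically, so the iterates approach the true solution.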
2.3. The FAB method
The FAB method contains forward and backward diffusion processes, and it switches between them according to the sign of the diffusion coefficient (i.e. when the diffusion coefficient is positive, the process is forward diffusion; when it is negative, the process is backward diffusion). The forward diffusion process smooths regions of low gradient and thus suppresses the streak artefacts and oscillating artefacts in the image. Backward diffusion retains locally high gradients and thus preserves edge details and fine textures in the image. The diffusion coefficient can be locally adjusted via image features (e.g. edges, textures and moments). Thus, the FAB method enables adaptive control of the forward and backward diffusion processes based on the local features in the image. The formulation of FAB is defined as follows,
∂f(i, j; t)/∂t = div{c[|∇f(i, j; t)|] ∇f(i, j; t)},   f(i, j; 0) = f_0(i, j),   (4)

with the diffusion coefficient

c(s) = 1/[1 + (s/k_f)^m] − α/{1 + [(s − k_b)/ω]^(2n)},   (5)

where (i, j) denotes the coordinates of a pixel in the 2D image domain, t represents the evolution time (iterations), div is the divergence operator, c(·) is the diffusion coefficient, f_0 represents the image f at the initial time, and ∇f is the gradient of the image f. The parameter k_f is the maximum value in the forward diffusion process, and it controls the gradient magnitudes for forward diffusion. The parameters k_b and ω define the centre and width of the backward diffusion process, respectively. The parameter α determines the ratio between the strengths of the forward and backward forces. In addition, m and n are the exponent parameters for the forward force and backward force, respectively.
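The switching behaviour of this coefficient is easy to verify numerically. The following sketch (Python for illustration; the parameter values are arbitrary, and the exponent convention — m for the forward term, n for the backward term — follows the text above) returns positive values at low gradients (forward smoothing) and negative values near k_b (backward sharpening):

```python
import numpy as np

def fab_coefficient(s, kf, kb, w, alpha, m=2, n=4):
    """FAB diffusion coefficient c(s) for gradient magnitude s.

    Positive c -> forward (smoothing) diffusion at low gradients;
    negative c -> backward (sharpening) diffusion around s ~ kb.
    """
    forward = 1.0 / (1.0 + (s / kf) ** m)
    backward = alpha / (1.0 + ((s - kb) / w) ** (2 * n))
    return forward - backward
```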
To improve the performance of FAB, we developed the FAB8 method to replace the original four-neighbour FAB (FAB4). Here, let ∇_d f_{i,j}, d = E, W, S, N, SE, SW, NE, NW, define the gradients of the eight neighbours in eight directions (see Fig. 2), i.e. the finite differences between pixel (i, j) and its eight nearest neighbours, for example

∇_E f_{i,j} = f_{i,j+1} − f_{i,j},   ∇_NE f_{i,j} = f_{i−1,j+1} − f_{i,j},   (6)

with the remaining directions defined analogously.
Let c_{i,j}^{center} denote the diffusion coefficient of the central difference of the image f at pixel (i, j), and let c_{i,j}^{d}, d = E, W, S, N, SE, SW, NE, NW, denote the diffusion coefficients of the eight-neighbour gradients, formulated as

c_{i,j}^{d} = c(|∇_d f_{i,j}|).   (7)
To improve the computational stability of FAB in its space-discrete diffusion form, a modified space-discrete FAB diffusion within the framework of Welk et al. (2009) was adopted, and the fluxes can be expressed as follows,
The continuous nonlinear diffusion in equation (4) can be discretized over the eight nearest neighbours, and the discrete partial differential equation (PDE) solution (Gerig et al., 1992) can be formulated as follows,

f_{i,j}^{t+1} = f_{i,j}^{t} + Δt Σ_d c_{i,j}^{d} ∇_d f_{i,j}^{t},   d = E, W, S, N, SE, SW, NE, NW,   (9)

where Δt is the time step, ranging from 0 to 0.25, and f^t denotes the updated image at the tth iteration of the FAB diffusion process.
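One explicit diffusion update of this kind can be sketched as follows (an illustrative Python version with periodic boundary handling for brevity, not the authors' exact discretization; the coefficient convention matches the FAB definition above):

```python
import numpy as np

def fab8_step(f, kf, kb, w, alpha, m=2, n=4, dt=0.15):
    """One explicit eight-neighbour FAB diffusion update (illustrative sketch).

    Periodic boundaries via np.roll; stability requires a small time step dt.
    """
    def c(s):  # FAB coefficient: positive -> smoothing, negative -> sharpening
        return 1.0 / (1.0 + (s / kf) ** m) - alpha / (1.0 + ((s - kb) / w) ** (2 * n))
    out = f.astype(float).copy()
    for dy, dx in [(-1, 0), (1, 0), (0, 1), (0, -1),
                   (-1, 1), (-1, -1), (1, 1), (1, -1)]:
        grad = np.roll(f, (-dy, -dx), axis=(0, 1)) - f  # eight-neighbour difference
        out += dt * c(np.abs(grad)) * grad
    return out
```

A constant image has zero gradients in every direction, so a single step leaves it unchanged — a quick sanity check on the discretization.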
2.4. Pseudocode for SART-FAB8 algorithm
By combining the FAB8 method with the SART reconstruction, we developed the SART-FAB8 algorithm for IL-PCCT. In summary, the pseudocode of the SART-FAB8 algorithm is presented as follows.
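An end-to-end sketch of the loop the pseudocode describes may help: each iteration applies one SART update and then one FAB8 diffusion pass. This is an illustrative Python re-implementation under simplifying assumptions (dense A, square image, fixed rather than MAG-adaptive parameters, unit relaxation), not the authors' code:

```python
import numpy as np

def sart_fab8(A, p, n_iter=10, kf=1.0, kb=1.6, w=0.5, m=2, n=4, dt=0.15):
    """Illustrative SART-FAB8 loop: a SART step, then one FAB8 diffusion pass."""
    N = A.shape[1]
    side = int(round(np.sqrt(N)))                 # assume a square image
    row_sums = np.maximum(np.abs(A).sum(axis=1), 1e-12)
    col_sums = np.maximum(np.abs(A).sum(axis=0), 1e-12)
    alpha = kf / (4 * (kb + w))                   # balance parameter (Section 2.6)
    def c(s):                                     # FAB diffusion coefficient
        return 1 / (1 + (s / kf) ** m) - alpha / (1 + ((s - kb) / w) ** (2 * n))
    f = np.zeros(N)
    for _ in range(n_iter):
        r = p - A @ f                             # SART step (unit relaxation)
        f = f + (A.T @ (r / row_sums)) / col_sums
        img = f.reshape(side, side)               # FAB8 step on the 2-D grid
        upd = np.zeros_like(img)
        for dy, dx in [(-1, 0), (1, 0), (0, 1), (0, -1),
                       (-1, 1), (-1, -1), (1, 1), (1, -1)]:
            g = np.roll(img, (-dy, -dx), axis=(0, 1)) - img
            upd += c(np.abs(g)) * g
        f = (img + dt * upd).ravel()
    return f.reshape(side, side)
```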
2.5. Low-dose noise model in projections for the simulation
To analyze the robustness of the proposed algorithm to low-dose noise, low-dose noise was introduced into the projection data and the corresponding image reconstructions were performed. Inspired by previous work (Liu et al., 2012; Bian et al., 2017), the low-dose noise in projections can be modelled as a combination of Poisson-distributed photon noise and Gaussian-distributed electronic noise, as shown in equation (10),

I_i = Poisson[I_0 exp(−p_i)] + Normal(m_ie, σ_ie²),   (10)

where I_i is the simulated noisy measurement for detector element i at a projection view, I_0 represents the incident X-ray intensity, p_i is the noise-free line integral, whose logarithmic transform p̂_i = ln(I_0/I_i) gives the noisy projection datum, and m_ie and σ_ie² are the mean and variance of the background electronic noise for detector element i. In this study, the X-ray exposure level I_0 was set to 1.0 × 10^5, and m_ie and σ_ie² were set to 0 and 10, respectively, for the low-dose noisy projection simulation.
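A minimal simulation of this noise model might look as follows (a Python sketch; the function and argument names are illustrative):

```python
import numpy as np

def add_low_dose_noise(proj, I0=1.0e5, m_e=0.0, var_e=10.0, rng=None):
    """Simulate low-dose noise on line-integral projections.

    Photon noise is Poisson on the transmitted intensity I0*exp(-p);
    electronic noise is additive Gaussian with mean m_e and variance var_e.
    """
    rng = np.random.default_rng() if rng is None else rng
    transmitted = I0 * np.exp(-proj)                 # mean photon counts
    noisy = rng.poisson(transmitted) + rng.normal(m_e, np.sqrt(var_e), proj.shape)
    noisy = np.clip(noisy, 1.0, None)                # avoid log of values <= 0
    return np.log(I0 / noisy)                        # back to line integrals
```

At I_0 = 10^5 the relative photon noise on a unit line integral is well below 1%, so the noisy projections remain close to their noise-free values on average.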
2.6. Parameter selection for the SART-FAB algorithm
The backward diffusion process is considered an ill-posed problem because of its computational instability. To tackle the ill-posed problem in FAB, Gilboa et al. (2002) showed that three conditions should be fulfilled; these conditions are formulated as follows,
(i)
(ii)
(iii)
In this study, the mean absolute gradient (MAG) is implemented to adaptively tune the parameters in FAB according to the local gradient information of the image. However, the performance of the FAB method also depends on the constant coefficients in the MAG-based parameters k_f, k_b, ω and α. Their selection follows some general principles. When reconstructing objects with much noise, a larger coefficient in the parameter k_f, which stands for the strength of the smoothing force, gives a better denoising effect. When reconstructing objects with fine edge details and textures, a larger coefficient in the parameter k_b, which together with ω controls the range over which edge details and fine textures are preserved, guarantees clearer edges and textures. The balance parameter α, the ratio between the strengths of forward and backward diffusion, can be tuned according to actual needs. Since an optimal set of constant coefficients enables the best performance of FAB, and inspired by previous work (Tsiotsios & Petrou, 2012; Yang et al., 2015), the optimal coefficients were found in the following way, subject to the three conditions above. First, the constant coefficient of one parameter was varied over different scales while the other parameters were fixed, generating different image reconstructions. Second, the errors between these reconstructed results and the reference image were calculated, i.e. the root mean squared error. Finally, the optimal constant coefficient was determined by the minimal reconstruction error. The remaining coefficients and parameters were optimized in the same way.
After trial and error, we obtained the best performance for the cases with and without noise using the following two sets of parameters: (i) for the case without noise: kk_max = 10, k_f = 1 × MAG, k_b = 1.6 × MAG, ω = 0.5 × MAG, α = k_f/[4(k_b + ω)], n = 4, m = 2, Δt = 0.15; (ii) for the case with low-dose noise: kk_max = 10, k_f = 1.4 × MAG, k_b = 2.4 × MAG, ω = 0.8 × MAG, α = k_f/[3(k_b + ω)], n = 4, m = 2, Δt = 0.15. In general, the first set of parameters can be used to reconstruct many objects from noise-free projections, in both simulations and practical applications; in this work, these parameters were used for the noise-free simulation and the practical experiment. For the simulation with low-dose noise, the second set of parameters was adopted. Although these two sets of parameters cannot fit all objects, optimal parameters for other reconstructed objects can be determined in the same way.
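The coefficient search described above amounts to a one-dimensional sweep per parameter. A generic sketch (Python; the `reconstruct` callable is a hypothetical hook into the reconstruction pipeline with one coefficient varied and the rest fixed):

```python
import numpy as np

def tune_coefficient(reconstruct, reference, scales):
    """Pick the coefficient scale minimizing RMSE against a reference image.

    reconstruct : callable mapping a candidate scale to a reconstructed image
    reference   : ground-truth (or high-dose reference) image
    scales      : iterable of candidate coefficient values
    """
    def rmse(img):
        return float(np.sqrt(np.mean((img - reference) ** 2)))
    errors = [rmse(reconstruct(s)) for s in scales]
    best = int(np.argmin(errors))
    return scales[best], errors[best]
```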
2.7. Quantitative assessment of the reconstructed images
Three quantitative metrics, the universal quality index (UQI), peak signal-to-noise ratio (PSNR) and root mean squared error (RMSE), are adopted to quantitatively assess the quality of the reconstructed images. The UQI evaluates the pixel-to-pixel similarity between a reconstructed image and a reference image, yielding a value between 0 and 1 that increases with increasing similarity (Wang & Bovik, 2002). PSNR is a traditional measure of image quality, and a larger value indicates better quality. RMSE evaluates the reconstruction accuracy based on error sensitivity, and a smaller value means higher accuracy.
(i) UQI is widely used and defined as follows,

UQI = 4 Cov(x, y) u_x u_y / [(σ_x² + σ_y²)(u_x² + u_y²)],   (11)

where x is the reference image, y is the reconstructed image, u_x and u_y are the means of x and y, respectively, σ_x² and σ_y² denote the variances of x and y, respectively, and Cov(x, y) is the covariance between x and y.
(ii) PSNR is defined as follows,

PSNR = 10 log_10(Peak²/MSE),   MSE = (1/MN) Σ_{i=1}^{M} Σ_{j=1}^{N} (x_{i,j} − y_{i,j})²,   (12)

where MSE is the mean square error function, M × N is the size of the reconstructed and reference images, x_{i,j} represents the pixel intensity of the reference image at pixel (i, j), y_{i,j} represents the pixel intensity of the reconstructed image at pixel (i, j), and Peak is the largest pixel value in the normalized image, e.g. 255 in the case of eight-bit pixel representation.
(iii) RMSE is defined as follows,

RMSE = [(1/MN) Σ_{i=1}^{M} Σ_{j=1}^{N} (x_{i,j} − y_{i,j})²]^{1/2}.   (13)
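The three metrics are compact enough to implement in a few lines each (a Python sketch following the definitions above):

```python
import numpy as np

def uqi(x, y):
    """Universal quality index between reference x and reconstruction y."""
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    num = 4 * cov * x.mean() * y.mean()
    den = (x.var() + y.var()) * (x.mean() ** 2 + y.mean() ** 2)
    return num / den

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def rmse(x, y):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((x - y) ** 2)))
```

An identical image pair gives UQI = 1 and RMSE = 0, while a uniform offset of 1 grey level against an 8-bit peak gives a PSNR of about 48.13 dB.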
3. Simulation experiment
3.1. Simulations
To evaluate the performance of the SART-FAB8 algorithm, the standard Shepp–Logan phantom was utilized (Fig. 3a). In this experiment, semicircular-angle scanning based on parallel-beam geometry was used to simulate the IL-PCI process, and a Shepp–Logan phantom image with a matrix size of 512 × 512 pixels was used to simulate the phase information distribution of the sample (Langer et al., 2009; Yang et al., 2015). The detector was modelled as a straight-line array with 724 elements, and the size of the reconstructed images was 512 × 512 pixels. Sixty uniformly distributed projections, without and with noise, over the π angular range were used to simulate few-view projections and low-dose noisy few-view projections, respectively; the low-dose noise added to the few-view projections was described in detail in §2.5. Here, in order to reduce the effects of FBP sampling errors, the missing 300-view projections were compensated by interpolation between the acquired 60-view projections, and the compensated projections were then used for FBP. As the stopping criterion, a total iteration number of 20 was selected for the SART, SART-FAB4 and SART-FAB8 algorithms according to the convergence curves, as shown in Fig. 6. The FBP, SART and SART-FAB4 algorithms were used for comparison with the SART-FAB8 algorithm, and all parameters were chosen for the best performance, as described in detail in §2.6. All experiments were conducted using the MATLAB programming language on a desktop PC equipped with an Intel(R) Core(TM) i5-4460 CPU at 3.2 GHz and 16 GB RAM.
3.2. Experimental results
Four images from 60-view noise-free projections and four images from 60-view noisy projections were reconstructed using the FBP, SART, SART-FAB4 and SART-FAB8 algorithms, as shown in Figs. 4(a)–4(d) and 4(i)–4(l). Among the reconstructed images, those reconstructed using the FBP algorithm are the worst, being seriously affected by a large number of streak artefacts and low-dose noise, indicating that the FBP algorithm has a poor ability to deal with few-view projection data both with and without noise [Figs. 4(a) and 4(i)]. The images reconstructed using the SART algorithm are better than those of the FBP algorithm, with the streak artefacts and low-dose noise effectively reduced. However, the edges and textures of the image are affected by low-dose noise and oscillating artefacts owing to the loss of high-frequency information in the few-view projections [Figs. 4(b) and 4(j)]. The reconstructed images of the SART-FAB4 [Figs. 4(c) and 4(k)] and SART-FAB8 [Figs. 4(d) and 4(l)] algorithms are better than those of the FBP and SART algorithms, with the streak artefacts, oscillating artefacts and low-dose noise effectively reduced, implying that the SART-FAB4 and SART-FAB8 algorithms can suppress all three. Moreover, in contrast with SART-FAB4, the images reconstructed with SART-FAB8 have clearer edges and are closer to the true image.
3.3. Assessments
To compare the accuracy of the four reconstruction algorithms, horizontal profiles at the same position in Figs. 4(a)–4(d) and 4(i)–4(l) are utilized, as shown in Fig. 5. The profiles of the SART-FAB8 algorithm are closest to the true result in the cases both with and without noise. Additionally, the UQI, PSNR and RMSE values of the reconstructed images were calculated, and the computation times of the four reconstruction algorithms were also recorded, as shown in Table 1. As seen from Table 1, the computation times of the SART-FAB8 algorithm are the longest in the cases with and without noise; however, the quality of the images reconstructed using the SART-FAB8 algorithm is clearly the best.

To evaluate the convergence performance of the SART, SART-FAB4 and SART-FAB8 algorithms in the cases with and without noise, the RMSE-based convergence curves of these methods are presented in Fig. 6. As seen in Fig. 6, the SART, SART-FAB4 and SART-FAB8 algorithms all converged before the iteration number reached 20, with the convergence rate of the SART-FAB8 algorithm being the fastest.
4. Real experiment on IL-PCI data
4.1. Data acquisition
An ex vivo rat maxilla sample was provided by the Dental Hospital of Tianjin Medical University, and its IL-PCI data were collected at the BL13W1 beamline of SSRF, China. In this experiment, the SDD was 0.8 m, and the energy of the monochromatic beam was set to 33 keV. A charge-coupled device (CCD) camera system with a 36 mm × 5 mm field of view (FOV) was used as the imaging detector, and the effective pixel pitch was 9 µm × 9 µm. The full projection dataset (959-view projections) within a 180° CT scan range was acquired with an exposure time of 10 ms per projection, and the size of each projection image was 3992 × 513 pixels. In addition, ten dark-current images were used to calibrate the dark noise in the projections, while 20 flat-field images were used to calibrate the white-field signals (Chen et al., 2012; Baumann, 2014). After phase retrieval using the PADBA method, 192-view projections were evenly chosen from the full projection dataset, and a sinogram of 192 × 3992 pixels was generated for few-view CT reconstruction with the proposed algorithm. In this work, the missing 767-view projections were compensated by interpolation between the acquired 192-view projections, and the compensated 959-view projections were then used to evaluate the performance of the FBP algorithm in few-view CT reconstruction.
4.2. Experimental results
Fig. 7 depicts reconstructions of the rat maxilla sample using the FBP, SART, SART-FAB4 and SART-FAB8 algorithms. Here, the slice of the rat maxilla sample reconstructed from the full projection dataset using the FBP algorithm is utilized as the reference image, as shown in Fig. 7(a). Fig. 7(b) is a slice reconstructed from the compensated 959-view projections using the FBP algorithm, and Figs. 7(c)–7(e) are slices reconstructed from the 192-view projections using the SART, SART-FAB4 and SART-FAB8 algorithms, respectively. Fig. 7(b) shows that the slice reconstructed using the FBP algorithm has poor image quality; the textures, fine structures and edges are severely affected by streak artefacts and blur, which would influence subsequent image analysis (i.e. image segmentation, texture analysis and structure measurement). Fig. 7(c) indicates that the SART algorithm can reduce streak artefacts but has a poor ability to preserve the textures, fine structures and edges in few-view CT reconstruction. From Figs. 7(d)–7(e), it can be seen that the image quality (e.g. textures, fine structures and edges) is improved significantly compared with the FBP and SART algorithms, suggesting that the SART-FAB4 and SART-FAB8 algorithms preserve the textures, fine structures and edges in few-view CT reconstruction. Comparing Figs. 7(d) and 7(e), the latter has fewer artefacts and clearer detailed features (e.g. textures, structures and edge details), indicating that the SART-FAB8 algorithm yields a better reconstruction result than the SART-FAB4 algorithm.
4.3. Result analysis
To assess the accuracy of the four reconstruction algorithms, the position labelled with the red line in Fig. 7(a), which crosses the alveolar fossa, was utilized, and the horizontal profiles at the corresponding positions in Figs. 7(a)–7(e) are shown in Fig. 7(f). In Fig. 7(f), the profile and intensity of the alveolar fossa obtained using the SART-FAB8 algorithm are closest to the reference image, demonstrating that the accuracy of the SART-FAB8 algorithm is the highest. By comparison, the image reconstructed using the FBP algorithm is the worst; distortions caused by insufficient projection data may impair the analysis and judgement of doctors or researchers, which underlines the value of the SART-FAB8 algorithm for few-view CT reconstruction. To quantitatively evaluate the reconstruction results of the different methods using the same projection dataset (192-view projections), the UQI, PSNR and RMSE values of the reconstructed images and the computation times of the four reconstruction algorithms are provided in Table 2. As seen from Table 2, for the same reconstruction task the FBP algorithm takes only a few seconds, whereas the SART, SART-FAB4 and SART-FAB8 algorithms require more than 4000 s, with the SART-FAB8 algorithm having the longest computation time. However, the UQI and PSNR values of the SART-FAB8 algorithm are clearly the largest and, correspondingly, its RMSE value is the smallest. These quantitative results confirm that the image reconstructed with the SART-FAB8 algorithm has the fewest errors and the best image quality.

5. Discussion and conclusion
In this study, the SART-FAB8 algorithm was proposed for accurate CT reconstruction under the few-view condition. The algorithm was applied to reconstruct the Shepp–Logan phantom and ex vivo rat maxilla data obtained by IL-PCI from few-view projections, with the FBP, SART and SART-FAB4 algorithms adopted for comparison. The results indicated that the SART-FAB8 algorithm is an effective method of dose reduction for IL-PCCT that can not only reduce streak artefacts and suppress oscillating artefacts but also preserve textures, fine structures and edge details. Compared with the SART and SART-FAB4 algorithms, the SART-FAB8 algorithm had the fastest convergence speed, which may help to address the large-scale computation problem in practical datasets. With the wide application of IL-PCCT in biological science, it has been demonstrated that IL-PCCT has outstanding potential to reveal detailed microstructures inside biological specimens without injecting contrast agents. In recent years, IL-PCI experiments have been conducted with conventional X-ray sources and have demonstrated that comparable image quality can be produced using a benchtop imaging system (Gundogdu et al., 2007; Zysk et al., 2012; Larsson et al., 2016). These findings may pave the way for the realization of preclinical or clinical IL-PCI systems. In principle, IL-PCI can be used for in vivo imaging, although many challenges remain, including the limited field of view, sample motion and high radiation dose (Sztrókay et al., 2012; Bravin & Coan, 2012). In fact, this research is underway: we are currently working to reduce the radiation dose of IL-PCCT while maintaining acceptable image quality using newly optimized CT reconstruction algorithms. Although the performance of the proposed algorithm still requires improvement, e.g. its long computation time, which can be overcome by graphics processing unit (GPU)-based speed-up techniques (Tian et al., 2011; Liu et al., 2017), it is worth mentioning that the algorithm was able to reconstruct a high-quality slice of rat maxilla using approximately 20% of the projection data of the full dataset, indicating that it is a valuable tool for low-dose CT reconstruction in IL-PCCT. In further research, reducing the number of parameters of SART-FAB8 while retaining its excellent performance will be an important goal. Moreover, further studies will assess whether the developed algorithm also applies to in vivo data.
Acknowledgements
The authors would like to thank the staff of beamline BL13W1 of SSRF, China, for their kind assistance during the experiments.
Funding information
The following funding is acknowledged: National Natural Science Foundation of China (grant Nos. 81671683 and 81371549); Natural Science Foundation of Tianjin City, China (grant No. 16JCYBJC28600); Science and Technology Commission Foundation of Tianjin (grant No. 2015KZ111); Open Project of the Key Laboratory of Optoelectronic Information Technology, Ministry of Education (grant No. 2017KFKT004); Foundation of Tianjin University of Technology and Education (grant Nos. KJ1201 and KJ1736).
References
Baumann, M. (2014). Master's thesis, pp. 6–12, KTH Royal Institute of Technology, Stockholm, Sweden.
Bian, Z. Y., Zeng, D., Zhang, Z., Gong, C. F., Tian, X. M., Yan, G., Huang, J., Guo, H., Chen, B., Zhang, J., Feng, Q. J., Chen, W. F. & Ma, J. H. (2017). Med. Phys. 44, e188–e201.
Brandlhuber, M., Armbruster, M., Zupanc, B., Coan, P., Brun, E., Sommer, W. & Rentsch, M. (2016). Invest. Radiol. 51, 170–176.
Bravin, A. & Coan, P. (2012). Phys. Med. Biol. 57, 2931–2942.
Bronnikov, A. V. (1999). Opt. Commun. 171, 239–244.
Bronnikov, A. V. (2002). J. Opt. Soc. Am. A, 19, 472–480.
Cao, Y., Zhou, Y., Ni, S., Wu, T., Li, P., Liao, S., Hu, J. & Lu, H. (2017). J. Neurotrauma, 34, 1187–1199.
Chen, R.-C., Dreossi, D., Mancini, L., Menk, R., Rigon, L., Xiao, T.-Q. & Longo, R. (2012). J. Synchrotron Rad. 19, 836–845.
Chen, R. C., Rigon, L. & Longo, R. (2011). J. Phys. D Appl. Phys. 44, 495401.
Chen, R. C., Rigon, L. & Longo, R. (2013). Opt. Express, 21, 7384–7399.
Gerig, G., Kübler, O., Kikinis, R. & Jolesz, F. (1992). IEEE Trans. Med. Imaging, 11, 221–232.
Gilboa, G., Sochen, N. & Zeevi, Y. (2002). IEEE Trans. Image Process. 11, 689–703.
Groso, A., Abela, R. & Stampanoni, M. (2006). Opt. Express, 14, 8103–8110.
Gundogdu, O., Nirgianaki, E., Che Ismail, E., Jenneson, P. M. & Bradley, D. A. (2007). Appl. Radiat. Isot. 65, 1337–1344.
Gureyev, T. E., Davis, T. J., Pogany, A., Mayo, S. C. & Wilkins, S. W. (2004). Appl. Opt. 43, 2418–2430.
Hansen, P. & Saxild-Hansen, M. (2012). J. Comput. Appl. Math. 236, 2167–2178.
Hetterich, H., Webber, N., Willner, M., Herzen, J., Birnbacher, L., Hipp, A., Marschner, M., Auweter, S. D., Habbel, C., Schüller, U., Bamberg, F., Ertl-Wagner, B., Pfeiffer, F. & Saam, T. (2016). Eur. Radiol. 26, 3223–3233.
Jian, J., Yang, H., Zhao, X., Xuan, R., Zhang, Y., Li, D. & Hu, C. (2016). J. Synchrotron Rad. 23, 600–605.
Langer, M., Cloetens, P. & Peyrin, F. (2009). J. Opt. Soc. Am. A, 26, 1877–1882.
Larsson, D. H., Vågberg, W., Yaroshenko, A., Yildirim, A. Ö. & Hertz, H. M. (2016). Sci. Rep. 6, 39074.
Lee, P. C. (2015). Opt. Express, 23, 10668–10679.
Liu, R., Kalra, M. K., Hsieh, J. & Yu, H. (2017). JSM Biomed. Imaging Data Papers, 4, 1008.
Liu, X. X., Zhao, J., Sun, J. Q., Gu, X., Xiao, T. Q., Liu, P. & Xu, L. X. (2010). Phys. Med. Biol. 55, 2399–2409.
Liu, Y., Ma, J. H., Fan, Y. & Liang, Z. R. (2012). Phys. Med. Biol. 57, 7923–7956.
Mai, C., Verleden, S. E., McDonough, J. E., Willems, S., De Wever, W., Coolen, J., Dubbeldam, A., Van Raemdonck, D. E., Verbeken, E. K., Verleden, G. M., Hogg, J. C., Vanaudenaerde, B. M., Wuyts, W. A. & Verschakelen, J. A. (2017). Radiology, 283, 252–263.
Melli, S., Wahid, K., Babyn, P., Montgomery, J., Snead, E., El-Gayed, A., Pettitt, M., Wolkowski, B. & Wesolowski, M. (2016). Nucl. Instrum. Methods Phys. Res. A, 806, 307–317.
Momose, A., Takeda, T., Itai, Y. & Hirano, K. (1996). Nat. Med. 2, 473–475.
Nieniewski, M. (2014). Computer Vision and Graphics: International Conference, ICCVG 2014, edited by L. J. Chmielewski, R. Kozera, B.-S. Shin & K. Wojciechowski, Vol. 8671 of Lecture Notes in Computer Science, pp. 454–461. Heidelberg: Springer.
Nugent, K. A., Gureyev, T. E., Cookson, D. J., Paganin, D. & Barnea, Z. (1996). Phys. Rev. Lett. 77, 2961–2964.
Paganin, D., Mayo, S., Gureyev, T., Miller, P. & Wilkins, S. (2002). J. Microsc. 206, 33–40.
Prasath, V. B., Urbano, J. M. & Vorotnikov, D. (2015). Inverse Probl. 31, 1–30.
Rastogi, A., Maiwall, R., Bihari, C., Ahuja, A., Kumar, A., Singh, T., Wani, Z. A. & Sarin, S. K. (2013). Histopathology, 62, 731–741.
Sidky, E. Y., Kao, C.-M. & Pan, X. (2006). J. X-Ray Sci. Technol. 14, 119–139.
Snigirev, A., Snigireva, I., Kohn, V., Kuznetsov, S. & Schelokov, I. (1995). Rev. Sci. Instrum. 66, 5486–5492.
Stampanoni, M., Wang, Z. T., Thüring, T., David, C., Roessl, E., Trippel, M., Kubik-Huch, R. A., Singer, G., Hohl, M. K. & Hauser, N. (2011). Invest. Radiol. 46, 801–806.
Sztrókay, A., Diemoz, P. C., Schlossbauer, T., Brun, E., Bamberg, F., Mayr, D., Reiser, M. F., Bravin, A. & Coan, P. (2012). Phys. Med. Biol. 57, 2931–2942.
Tian, Z., Jia, X., Yuan, K. H., Pan, T. S. & Jiang, S. B. (2011). Phys. Med. Biol. 56, 5949–5967.
Tsiotsios, C. & Petrou, M. (2012). Pattern Recogn. 46, 1369–1381.
Wang, Z. & Bovik, A. (2002). IEEE Signal Process. Lett. 9, 81–84.
Welk, M., Gilboa, G. & Weickert, J. (2009). Scale-Space and Variational Methods in Computer Vision, edited by X.-C. Tai, K. Mørken, M. Lysaker & K.-A. Lie, Vol. 5567 of Lecture Notes in Computer Science, pp. 527–538. Berlin: Springer.
Wu, X. Z. & Liu, H. (2005). Opt. Express, 13, 6000–6014.
Xuan, R., Zhao, X., Hu, D., Jian, J., Wang, T. & Hu, C. (2015). Sci. Rep. 5, 11500.
Yang, X., Hofmann, R., Dapp, R., van de Kamp, T., dos Santos Rolo, T., Xiao, X., Moosmann, J., Kashef, J. & Stotzka, R. (2015). Opt. Express, 23, 5368–5387.
Zhou, H. C., Zhu, J. B. & Wang, Z. M. (2004). Chin. J. Electron. 12, 2070–2073.
Zysk, A. M., Garson, A. B., Xu, Q., Brey, E. M., Zhou, W., Brankov, J. G., Wernick, M. N., Kuszak, J. R. & Anastasio, M. A. (2012). Biomed. Opt. Express, 3, 1924–1932.
© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.