
JOURNAL OF SYNCHROTRON RADIATION
ISSN: 1600-5775

Generalized Titarenko's algorithm for ring artefacts reduction


aCNPEM/LNLS, Brazilian Synchrotron Light Laboratory, Campinas, São Paulo, Brazil, and bDepartment of Physics, Universidad Nacional de Colombia, Bogotá, Colombia
*Correspondence e-mail: edu.miqueles@gmail.com

(Received 15 April 2014; accepted 22 July 2014; online 7 October 2014)

A fast algorithm for ring artefact reduction in high-resolution micro-tomography with synchrotron radiation is presented. The new method is a generalization of the one proposed by Titarenko and collaborators, with a complete sinogram restoration prior to reconstruction with classical algorithms. The generalized algorithm runs in linear time and is easy to implement. Whereas the original approach relies on an explicit closed-form solution, the present one achieves its speed through the conjugate gradient method. Also, low/high-resolution sinograms can be restored using higher/lower-order derivatives of the projections; the freedom to choose the order of the derivative is an advantage over the classical Titarenko approach. Several numerical results using the proposed method are provided, supporting our claims.

1. Introduction

Micro-computed tomography essentially relies on the inversion of the Radon transform, a well known operator in inverse problems, especially in image reconstruction. Typically, a three-dimensional volume must be restored from a stack of two-dimensional reconstructed images, where the ith feature image satisfies the following principle,

[\matrix{ {\rm{input}} \cr {\rm{image}} } \Bigg| \matrix{ {\rm{sinogram}} \cr s \in V} \quad\longrightarrow\quad \matrix{ {\rm{output}} \cr {\rm{image}} } \Bigg| \matrix{ {\rm{feature}} \cr f \in U }. \eqno(1)]

The most established method in tomography consists of converting X-rays into visible light with a scintillator and projecting onto a CCD using standard microscope optics. These projections can be used as input for computerized tomographic reconstruction. A common source of degradation in tomographic reconstruction is the superimposition of so-called `ring artefacts'. There are several known reasons for the generation of ring artefacts in reconstructed images. Primarily, these are due to defective pixels on the CCD detector or to the non-linear response of individual detector elements. Defective pixels typically result in a sharp single-pixel artefact unless several consecutive pixels are flawed. Larger artefacts are generally the result of imperfect or dusty scintillator crystals, and can easily exceed a single pixel. Other causes have recently been reported, such as monochromator vibration (see Titarenko et al., 2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.]) or thermal processes generating additional electrons in the CCD (see Titarenko et al., 2009[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2009). Appl. Phys. Lett. 95, 071113.]). For a 180° rotation of the sample, semi-circle artefacts are generated; only for 360° scans will full ring artefacts be generated. As such, these imperfections are independent of the projection angle and appear uniformly, for all angles, as rings superimposed on the sample image. Regardless of their source, the resulting degradation of image quality, obscuring relevant details, is detrimental to the computed tomography process.

In order to illustrate the artefacts in a reconstructed image obtained by analytical inversion with the filtered backprojection algorithm, we consider the example of Fig. 1[link], where a human tooth was imaged in a standard tomographic scan using the Medipix detector. From the corrupted sinograms we clearly see the ring artefacts in the reconstructed image. Some authors in the literature filter the reconstructed image after transforming it to polar coordinates in order to identify the ring structures (see Sijbers & Postnov, 2004[Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, 247-253.], and references therein). Indeed, this is because the polar transformation maps circles to straight lines, which can be detected using an appropriate filter. Converting from polar coordinates to Cartesian, and vice versa, can be done with a computational cost of O(n²), where n is the size of the feature image. Unfortunately, changing between polar and Cartesian coordinates introduces additional noise into the reconstructed image. The polar transformation is a fast algorithm compared with the average computational cost of a reconstructed slice, which is O(n² log n) (see Andersson, 2005[Andersson, F. (2005). SIAM J. Appl. Math. 65, 818-837.]). The main disadvantage of polar/Cartesian algorithms is the need to filter the high frequencies with several thresholds on the cut-off frequency, depending on prior knowledge of the sample under investigation. Another technique for the removal of stripe artefacts in the sinogram is the Fourier–wavelet approach presented by Münch et al. (2009[Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567-8591.]). It is a fast technique, with linear computational cost O(n), based on a discrete wavelet decomposition of the sinogram. This more sophisticated technique is able to remove horizontal and vertical stripes in many imaging problems.
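The circle-to-line behaviour of the polar map is easy to demonstrate numerically. The sketch below is our own illustration, not taken from the paper: it assumes NumPy and SciPy are available, and the grid sizes and bilinear interpolation order are arbitrary choices. It resamples a synthetic ring onto an (r, θ) grid, where the ring becomes a horizontal line:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cartesian_to_polar(img, n_r, n_theta):
    """Resample a square image onto an (r, theta) grid by bilinear interpolation."""
    n = img.shape[0]
    c = (n - 1) / 2.0                              # image centre
    r = np.linspace(0.0, c, n_r)
    theta = np.linspace(0.0, 2.0*np.pi, n_theta, endpoint=False)
    R, TH = np.meshgrid(r, theta, indexing='ij')
    rows = c + R*np.sin(TH)                        # Cartesian coordinates of each
    cols = c + R*np.cos(TH)                        # polar sample point
    return map_coordinates(img, [rows, cols], order=1)

# a synthetic ring artefact: a smooth annulus of radius 12 pixels
n = 64
y, x = np.mgrid[0:n, 0:n]
rad = np.hypot(y - (n - 1)/2.0, x - (n - 1)/2.0)
ring = np.exp(-(rad - 12.0)**2 / 8.0)
polar = cartesian_to_polar(ring, n_r=2*n, n_theta=180)  # the ring maps to one row
```

Detecting and suppressing the resulting straight stripes, and then mapping back, is the essence of the polar-coordinate methods discussed above.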

[Figure 1]
Figure 1
Reconstruction of the feature image f, using a corrupted sinogram [s(t,\theta)]. (a) f in Cartesian coordinates. (b) f in polar coordinates. Transforming the reconstructed image to polar coordinates, in order to remove vertical stripes, was a popular approach for removing ring artefacts. See Sijbers & Postnov (2004[Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, 247-253.]).

A fast tomographic experiment, as carried out at the Brazilian Synchrotron Light Laboratory, goes through the following scheme,

[\matrix{ {\rm{sinogram}} \cr {\rm{of\,\,each\,\,slice}} } \quad \buildrel{(A)}\over{\longrightarrow} \quad \matrix{ {\rm{filter}} \cr {\rm{of\,\,each\,\,slice}} } \quad \buildrel{(B)}\over{\longrightarrow} \quad \matrix{ {\rm{reconstruct}} \cr {\rm{each\,\,slice}} }]

Step (B) is usually done with a fast reconstruction algorithm, specifically designed to handle a huge tomographic data set. In our case we use only analytical reconstruction through `pyhst', a fast hierarchical inversion algorithm implemented for graphics processing units (GPUs) (see Mirone et al., 2013[Mirone, A., Gouillart, E., Brun, E., Tafforeau, P. & Kieffer, J. (2013). PyHST2: a hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities, https://arxiv.org/abs/1306.1392 .]). Step (A) is the main contribution of this manuscript, where a fast filtering of each sinogram has to be carried out prior to reconstruction.

Titarenko et al. (2009[Titarenko, S. & Yagola, A. (2009). Mosc. Univ. Phys. Bull. 65, 65-67.], 2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.],b[Titarenko, V., Bradley, R., Martin, C., Withers, P. J. & Titarenko, S. (2010b). Proc. SPIE, 7804, 78040Z.],c[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010c). J. Synchrotron Rad. 17, 540-549.]) introduced a fast algorithm for filtering the sinogram prior to reconstruction, based on the optimality conditions of a quadratic functional. Their method is direct, with linear computational cost, i.e. O(n). This has a great impact on building fast three-dimensional volumes for the tomographic data set. Therefore, even with more sophisticated inversion schemes, such as the expectation maximization algorithm (Helou et al., 2014[Helou, E. S., Censor, Y., Chen, T. B., Chern, I. L., De Pierro, A. R., Jiang, M. & Lu, H. H. S. (2014). Inverse Probl. 30, 055003.]; De Pierro, 1995[De Pierro, A. R. (1995). IEEE Trans. Med. Imaging, 14, 132-137.]) or recent algorithms based on compressed sensing techniques (Candes et al., 2006[Candes, E. J., Romberg, J. & Tao, T. (2006). IEEE Trans. Inf. Theory, 52, 489-509.]), the stacking process remains almost unaffected.

In this manuscript we present a generalization of Titarenko's algorithm, still preserving the linear computational cost. Our method preserves the sinogram structure while also introducing some radial smoothness on each projection, a similar approach to the one presented in their work (Titarenko et al., 2010c[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010c). J. Synchrotron Rad. 17, 540-549.]). For real tomographic data, measured at the Brazilian Synchrotron Light Laboratory, our method is competitive with Titarenko's original approach.

The manuscript is organized as follows. §2[link] presents the theoretical foundation of Titarenko's original work, which is further generalized in §3[link] and §4[link]. Some numerical experiments are presented in §5[link], a discussion in §6[link], and §7[link] gives the final conclusions.

2. Generalized variational approach

Theoretically, the pair {s,f} from equation (1)[link] is related through the Radon transform (Deans, 1983[Deans, S. (1983). The Radon Transform and Some of Its Applications. New York: John Wiley and Sons.]), given below,

[s(t,\theta) \equiv {\cal{R}}\,f(t,\theta)= \textstyle\int\limits_{{\bb R}^2} f({\bf x})\, \delta\left(t-{\bf x}\cdot\boldxi_\theta\right)\,{\rm{d}}{\bf x},\eqno(2)]

where [\boldxi_\theta] = [(\cos\theta,\sin\theta)] stands for the direction of the X-ray path, [{\bf x}] corresponds to a pixel of the feature image [f \in U], and [{\bf y}] = [(t,\theta)] corresponds to a pixel of the sinogram [s \in V]. Typically, U and V are inner product spaces of square integrable functions, e.g. L2 or a Schwartz space. It can be shown that the linear operator [{\cal{R}}\colon U \to V] is invertible for an appropriate choice of U and V. A complete discussion of the action of [{\cal{R}}] can be found in Natterer & Wubbeling (2001[Natterer, F. & Wubbeling, F. (2001). Mathematical Methods in Image Reconstruction. Philadelphia: Society for Industrial and Applied Mathematics.]).

This means that, if we are given s = [{\cal{R}}f], the function f could be recovered using f = [{\cal{R}}^{-1}s] with an appropriate formula for [{\cal{R}}^{-1}]. In practice, [{\bf y}] lies in a discrete mesh of points [\{t_1,t_2,\ldots,t_R\} \times\{ \theta_1, \theta_2, \ldots,\theta_N\} \in {\bf N}], where N and R are assumed to be large, e.g. N ≃ 1000 and R ≃ 3000. Therefore, a good estimate of the solution f could be found using the classical filtered backprojection algorithm, which is fast (mainly due to fast Fourier transforms) and approximates the inverse operator [{\cal{R}}^{-1}] (Gabor, 2009[Gabor, G. T. (2009). Fundamentals of Computerized Tomography: Image Reconstruction from Projections, Advances in Pattern Recognition. Berlin: Springer.]).

The approach of Titarenko et al. (2009[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2009). Appl. Phys. Lett. 95, 071113.], 2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.],b[Titarenko, V., Bradley, R., Martin, C., Withers, P. J. & Titarenko, S. (2010b). Proc. SPIE, 7804, 78040Z.],c[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010c). J. Synchrotron Rad. 17, 540-549.]) searches for a regularized sinogram that does not contain horizontal stripes. These horizontal stripes are the main source of ring artefacts in the reconstructed image. A regularized sinogram is obtained through the minimization of an appropriate Tikhonov functional (Tikhonov, 1963a[Tikhonov, A. N. (1963a). Sov. Math. Dokl. 4, 1624-1627.],b[Tikhonov, A. N. (1963b). Sov. Math. Dokl. 5, 1035-1038.]; Yagola et al., 2002[Yagola, A. G., Leonov, A. S. & Titarenko, V. N. (2002). Inverse Probl. Eng. 10, 117-129.]). A brief mathematical description of Titarenko's method is given in Appendix A[link]. In this case, we are looking for a regularized sinogram, the solution of

[\min_{s\,\in\,V} V(s)={1\over2}\int\limits_{\bf{N}}\left[\left(\partial_{tt}s\right)^2+\left(\partial_{\theta\theta}s\right)^2\right] \,{\rm{d}}t\,{\rm{d}}\theta+{\lambda\over2}\|s-m\|^2.\eqno(3)]

This was also explored in the work of Twomey (1963[Twomey, S. (1963). J. ACM, 10, 97-101.]) for the solution of integral equations of the first kind. The solution is presented in Appendix A[link]. For completeness, the pseudo-code of the original Titarenko's ring suppression algorithm is given below,

Titarenko's Algorithm (TA):

Input: Corrupted sinogram [{\bf{M}}\in{\bb R}^{R\times{N}}]

[1] Set [{\bf{e}}_N=(1)\in{\bb R}^N] and [\lambda\in{\bb R}]

[2] Set matrix [{\bf{T}}_1] as in (42)

[3] Define explicitly [{\bf{A}}\equiv({\bf{T}}_1+\lambda{\bf{I}}_d)^{-1}] (see Titarenko et al., 2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.])

[4] Define [{\overline{\bf{m}}}=({{1}/{N}})\,{\bf{Me}}_N]

[5] Compute [{\bf{n}}^*=-{\bf{AT}}_1{\overline{\bf{m}}}]

Output: Restored sinogram

[\left[ \matrix{ {\rm{for}} & (i,j) \in {\bb N}^{R\times{N}} \hfill \cr & {\bf{S}}_{j,i}={\bf{M}}_{j,i}+{\bf{n}}_{j}^* \hfill \cr {\rm{end}} & } \right.\eqno(4)]
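For concreteness, the five steps of TA translate almost line-by-line into NumPy. The following sketch is our own illustration: the function name, the toy data and the value λ = 0.5 are our choices, and a generic dense solver stands in for the explicit inverse of Titarenko et al.:

```python
import numpy as np

def titarenko_restore(M, lam=0.5):
    """TA: remove an angle-independent stripe term from a sinogram M (R rays x N angles)."""
    R, N = M.shape
    m_bar = M.mean(axis=1)                          # step [4]: average projection
    F = np.diff(np.eye(R), axis=0)                  # first-difference matrix, (R-1) x R
    T1 = F.T @ F                                    # step [2]: tridiagonal T1
    n_star = np.linalg.solve(T1 + lam*np.eye(R), -T1 @ m_bar)   # steps [3] and [5]
    return M + n_star[:, None]                      # output (4): add n* to every column

# toy sinogram: a smooth signal plus a fixed per-ray offset (the stripes)
rng = np.random.default_rng(0)
R, N = 64, 90
clean = np.sin(np.linspace(0.0, np.pi, R))[:, None] * np.ones(N)
stripes = 0.2 * rng.standard_normal(R)
M = clean + stripes[:, None]
S = titarenko_restore(M)
```

The restored sinogram is closer to the stripe-free one, and its average projection is smoother, which is exactly what the minimization enforces.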

3. Generalized Titarenko's algorithm without angle dependency

Assuming that the deviation from s to m does not depend on the angle, i.e.

[s({\bf y}) = m({\bf y}) + n(t),\eqno(5)]

and changing to the discrete case, we are now searching for [{\bf n}^*] such that

[{\bf n}^* = \displaystyle \mathop{\rm{argmin}}_{{\bf n}\,\in\,{\bb R}^R} \left\{ {V}({\bf n}) = {{1} \over {2}}\sum\limits_{i,\,j} {\bb F}_2\left[{{\bf M}}_{j,i} + {\bf n}_j\right]^2 + \lambda{{N}\over{2}} \|{\bf n}\|^2 \right\},\eqno(6)]

where [{\bb F}_2] is a second-order finite difference operator given by

[{\bb F}_2[{\bf P}_{j,i}] = {{\bf P}}_{j-1,i} - 2{{\bf P}}_{j,i} + {\bf P}_{j+1,i}.\eqno(7)]

Since V is quadratic, solving [\nabla{V}({\bf n}^*)] = 0 guarantees a solution of (6)[link]. To make the calculations easier, let [{V}({\bf n})] = [{D}({\bf n})] + [{B}({\bf n})], with D related to the finite difference operator. Denoting [{\bf M}_i] as the ith column of [{\bf M}\in{\bb R}^{R\times N}], a straightforward calculation gives us

[\nabla{D}({\bf n})=\textstyle\sum\limits_i {\bf T}_2 [{\bf M}_i + {\bf n}], \qquad \nabla{B}({\bf n})=\lambda N{\bf n},\eqno(8)]

where [{\bf T}_2] is a pentadiagonal matrix,

[{\bf T}_2 = \left(\matrix{ 1 & -2 & 1 & 0 & \ldots & 0 & 0 & 0 \cr -2 & 5 & -4 & 1 & \ldots & 0 & 0 & 0 \cr 1 & -4 & 6 & -4 & \ldots & 0 & 0 & 0 \cr \vdots & & & \ddots & & \vdots \cr 0 & 0 & 0 & 0 & \ldots & 6 & -4 & 1\cr 0 & 0 & 0 & 0 & \ldots & -4 & 5 & -2\cr 0 & 0 & 0 & 0 & \ldots & 1 & -2 & 1 }\right)_{R\times R}.\eqno(9)]

Therefore, the optimality criterion gives us the noise vector [{\bf n}^*] in the same fashion as in (40)[link] (see Appendix A[link]),

[\eqalignno{\nabla { V}({\bf n}^*)= 0& \Rightarrow {\bf T}_2 \left(\textstyle\sum_i {\bf M}_i \right) + N\, {\bf T}_2 {\bf n}^* + \lambda N {\bf n}^* = 0 \cr& \Rightarrow ({\bf T}_2 + \lambda {{\bf I}}_d){\bf n}^* = -{\bf T}_2\overline{{\bf m}},&(10)}]

with [\overline{\bf{m}}] the average projection, as defined in (41)[link].

3.1. Further generalizations

It is a well known fact, especially in discrete differential equations, that finite difference operators are closely related to tridiagonal or pentadiagonal matrices. The structure of the matrix depends on the type of approximation used for the associated derivatives. In the present case, we are also approximating derivatives and we have exactly the same problem. The finite difference operator [{\bb F}_1] used by Titarenko et al. (2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.],c[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010c). J. Synchrotron Rad. 17, 540-549.]) (see Appendix A[link]) was easily expanded to a finite difference operator [{\bb F}_2] when minimizing (3)[link]. Of course, this can be expanded even more.

Considering only the discrete case, it is easy to realise that

[{\bb F}[{\bf P}_{j,i}] = {\bf f}_j^T {\bf P}_i, \qquad {\bf P}_i \in {\bb R}^R\eqno(11)]

is the general form of the finite difference operator applied to the ith column of matrix [{\bf P}], and [{\bf f}_j] is the vector associated with the linear functional [{\bb F}], i.e. [{\bf f}_j] is the jth row of a matrix [{\bf F}]. In the case of [{\bb F}_1] and [{\bb F}_2] we have the following matrices,

[{\bf T}_1\colon\,\,{\bf F}= \left(\matrix{ 1 & -1 & 0 & \ldots & 0 & 0 \cr 0 & 1 & -1 & \ldots & 0 & 0\cr \vdots & & & \ddots & & \vdots \cr 0 & 0 & 0 & \ldots & -1 & 1 } \right)_{R-1 \times R}\eqno(12)]

and

[{\bf T}_2\colon\,\,{\bf F}= \left(\matrix{ 1 & -2 & 1 & 0 & \ldots & 0 & 0 & 0 \cr 0 & 1 & -2 & 1 & \ldots & 0 & 0 & 0 \cr \vdots & & & & \ddots & & & \vdots\cr 0 & 0 & 0 & 0 & \ldots & 1 & -2 & 1 }\right)_{R-2 \times R}.\eqno(13)]

The number of rows of matrix [{\bf F}] strongly depends on the operator [{\bb F}]; as a rule, the offset from the main diagonal indicates how many rows [{\bf F}] loses with respect to R. As a consequence, one can easily verify that matrices [{\bf T}_1] and [{\bf T}_2] satisfy [{\bf T}_1] = [{\bf F}^T {\bf F}] and [{\bf T}_2] = [{\bf F}^T {\bf F}], with [{\bf F}] the respective matrix in (12)[link] and (13)[link].
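These identities are easy to confirm numerically. The snippet below is our sketch, with a small R chosen only for display; it builds the two difference matrices and checks that FᵀF reproduces the tridiagonal and pentadiagonal structures (the sign flip of the first-difference rows relative to (12) is immaterial for FᵀF):

```python
import numpy as np

R = 8                                       # small size, for display only
F1 = np.diff(np.eye(R), n=1, axis=0)        # rows apply (-1, 1); shape (R-1) x R, cf. (12)
F2 = np.diff(np.eye(R), n=2, axis=0)        # rows apply (1, -2, 1); shape (R-2) x R, cf. (13)
T1 = F1.T @ F1                              # tridiagonal
T2 = F2.T @ F2                              # pentadiagonal, cf. (9)
```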

Inspired by (3)[link], (37)[link] and (11)[link], we want to solve the following generalized optimization problem,

[{\bf n}^* = \displaystyle \mathop{ {\rm{argmin}}}_{{\bf n}\,\in\,{\bb R}^R} \left\{ {V}({\bf n}) = \underbrace{ {{1} \over {2}}\sum_{i = 1}^N \sum_{j = 1}^{R} \left[{\bf f}_j^T({\bf M}_i + {\bf n})\right]^2}_{D({\bf n})} + \underbrace{\lambda{{N} \over {2}} \|{\bf n}\|^2}_{B({\bf n})} \right\}.\eqno(14)]

Using the same notation as in (8)[link], and letting [{\bf P}_i \equiv {\bf M}_i + {\bf n}], it is not difficult to show that

[\eqalignno{ {{\partial D}\over{\partial {\bf n}_j}} & = \textstyle\sum\limits_i ({\bf f}_1^T {\bf P}_i) {\bf f}_{j,1} + \ldots + ({\bf f}_R^T {\bf P}_i) {\bf f}_{j,R} &(15) \cr&= \textstyle\sum\limits_i {\bf F}_j^T \left(\matrix{{\bf f}_1^T {\bf P}_i \cr \vdots \cr {\bf f}_R^T {\bf P}_i}\right) = \textstyle\sum\limits_i {\bf F}_j^T {\bf F} {\bf P}_i &(16) \cr&= {\bf F}_j^T {\bf F} \textstyle\sum\limits_i {\bf P}_i = {\bf F}_j^T {\bf F} \textstyle\sum\limits_i \left[{\bf M}_i + {\bf n}\right] &(17) \cr & = N\,{\bf F}_j^T {\bf F} [\overline{\bf{m}} + {\bf n}] &(18)}]

with [{\bf F}_j] the jth row of [{\bf F}]. From (18)[link] and [\nabla{B}({\bf n})] = [\lambda N {\bf n}], we must have

[\eqalignno{ \nabla{V}({\bf n}^*)= 0& \Rightarrow {\bf F}^T {\bf F} (\overline{\bf{m}} + {\bf n}^*) + \lambda {\bf n}^* = 0 \cr& \Rightarrow \left({\bf F}^T {\bf F} + \lambda {\bf I}_d \right) {\bf n}^* = - {\bf F}^T {\bf F}\,\overline{\bf{m}}. &(19)}]

We refer to equation (19)[link] as a generalization of Titarenko's algorithm, clearly including cases (40)[link] and (10)[link]. We can now choose a whole family of finite difference operators in order to build a ring suppression algorithm. The generalized approach is given below, where [\otimes] stands for the Kronecker product (Horn & Johnson, 1985[Horn, R. A. & Johnson, C. R. (1985). Matrix Analysis. Cambridge University Press.]).

 

Generalized Titarenko's Algorithm (GTA):

Input: Corrupted sinogram [{\bf M}\in{\bb R}^{R\times N}]

[1] Set [{\bf e}_N = (1) \in {\bb R}^N] and [\lambda \in {\bb R}]

[2] Set finite difference vector/matrix [\{ h, {\bf F}\}]

[3] Define matrix [{\bf A} = {\bf F}^T {\bf F} + \lambda {\bf I}_d]

[4] Define [\overline{\bf{m}} = ({{1}/{N}}){\bf M}{\bf e}_N]

[5] Solve [{\bf A}{\bf n}^* = -{\bf F}^T{\bf F}\,\overline{\bf{m}}]

Output: Restored sinogram as in (4)

[{\bf S}= {\bf M} + {\bf n}^* \otimes {\bf e}_N^T.\eqno(20)]
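A direct transcription of GTA into NumPy is short. The sketch below is our illustration: the names and the value of λ are ours, and step [5] uses a dense solver, the CGM alternative being discussed later in the text:

```python
import numpy as np

def difference_matrix(h, R):
    """Convolution matrix F of a finite-difference kernel h (step [2])."""
    h = np.asarray(h, dtype=float)
    F = np.zeros((R - (h.size - 1), R))
    for j in range(F.shape[0]):
        F[j, j:j + h.size] = h
    return F

def gta_restore(M, h, lam=0.5):
    """Generalized Titarenko's algorithm: steps [1]-[5] and output (20)."""
    R, N = M.shape
    F = difference_matrix(h, R)
    A = F.T @ F + lam * np.eye(R)                    # step [3]
    m_bar = M @ np.ones(N) / N                       # step [4]: average projection
    n_star = np.linalg.solve(A, -F.T @ (F @ m_bar))  # step [5]
    return M + np.outer(n_star, np.ones(N))          # (20): S = M + n* (x) e_N^T
```

Any kernel from Table 1 can be passed as h, e.g. (1, −1) for the original TA and (1, −2, 1) for the second-order variant.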

3.2. Finite difference coefficients

We remark that the matrix [{\bf F}] representing the operator [{\bb F}] is a convolution matrix based on the kernel h. In fact, from (41)[link] we have h = [(1,-1) \in {\bb R}^2], and h = [(1,-2,1) \in {\bb R}^3] in (7)[link]. From Bengt (1988[Bengt, F. (1988). Math. Comput. 51, 699-706.]) and Mahesh (1998[Mahesh, K. (1998). J. Comput. Phys. 145, 332-358.]) we have a wide family of backward and forward finite difference coefficients, with different orders of accuracy. Some coefficients are depicted in Table 1[link]. The kernel vector is denoted hk,j, with k standing for the order of the derivative and j for the corresponding accuracy.

Table 1
Finite difference coefficients to compute matrix [{\bf F}]; see text for details

Derivative Accuracy Kernel
[\partial_t] 1 h1,1 = (−1, 1)
  2 h1,2 = (−3/2, 2, −1/2)
  3 h1,3 = (−11/6, 3, −3/2, 1/3)
  6 h1,6 = (−49/20, 6, −15/2, 20/3, −15/4, 6/5, −1/6)
[\partial_{tt}] 1 h2,1 = (1, −2, 1)
  2 h2,2 = (2, −5, 4, −1)
  6 h2,6 = (469/90, −223/10, 879/20, −949/18, 41, −201/10, 1019/180, −7/10)
[\partial_{ttt}] 1 h3,1 = (−1, 3, −3, 1)
  5 h3,5 = (−967/120, 638/15, −3929/40, 389/3, −2545/24, 268/5, −1849/120, 29/15)
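Since these are standard one-sided stencils on a unit grid, a quick sanity check is that they differentiate low-order polynomials exactly. The snippet below (our own) applies h1,2 and h2,2 from Table 1 to f(t) = t²:

```python
import numpy as np

t = np.arange(10.0)
f = t**2                                    # test function: f' = 2t, f'' = 2

h12 = np.array([-3/2, 2, -1/2])             # first derivative, accuracy 2
h22 = np.array([2.0, -5.0, 4.0, -1.0])      # second derivative, accuracy 2

# apply each one-sided stencil at every admissible grid point (unit spacing)
d1 = np.array([h12 @ f[j:j + 3] for j in range(f.size - 2)])
d2 = np.array([h22 @ f[j:j + 4] for j in range(f.size - 3)])
```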

3.3. Solving the optimality conditions (19)[link]

We want to solve (19)[link] in order to complete step [5] of the generalized algorithm GTA. Taking [{\bf F}] as in (12)[link], the linear system (19)[link] reduces to (40)[link], which has the analytical solution

[{\bf n}^* = \left({\bf F}^T {\bf F} + \lambda {\bf I}_d\right)^{-1} \left(- {\bf F}^T {\bf F}\,\overline{\bf{m}} \right)\eqno(21)]

where [({\bf F}^T {\bf F} + \lambda {\bf I}_d)^{-1}] was explicitly given by Titarenko et al. (2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.]). In the general case, a similar approach is not trivial. Huckle (1994[Huckle, T. (1994). BIT Numer. Math. 34, 99-112.]) presents a Fourier strategy for solving tridiagonal linear systems arising from elliptic partial differential equations. Although we are not restricted to partial differential equations, the matrix inversion presented in the original work of Titarenko et al. is highly connected to the work of Huckle.

It is easy to observe that matrix [{\bf F}] is rank deficient, i.e. [{\rm{rank}}({\bf F})] = R-r, where r+1 is the length of the kernel vector h, as presented in Table 1[link]. For instance, [{{\rm{rank}}}({\bf T}_1)] = R-1 in (40)[link], whereas [{\rm{rank}}({\bf T}_2)] = R-2 in (10)[link]. The relaxation parameter λ ensures that [{\rm{rank}}({\bf T}+\lambda {\bf I}_d)] = R, so that [V({\bf n})] attains a unique minimum.
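These rank statements can be verified directly; in the snippet below (ours, with R, λ and the kernel chosen only for illustration) the second-difference kernel annihilates constants and linear ramps, so exactly r = 2 dimensions are lost:

```python
import numpy as np

R, lam = 12, 1e-3
h = np.array([1.0, -2.0, 1.0])              # kernel of length r + 1 with r = 2
F = np.zeros((R - 2, R))
for j in range(R - 2):
    F[j, j:j + 3] = h
T = F.T @ F
rank_T = np.linalg.matrix_rank(T, tol=1e-8)               # R - r
rank_A = np.linalg.matrix_rank(T + lam*np.eye(R), tol=1e-8)  # full rank R
```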

Let us denote [{\bf A}] = [{\bf F}^T {\bf F}+\lambda {\bf I}_d] and [{\bf f}] = [-{\bf T} \,\overline{\bf{m}}]. We propose a fast solution of [{\bf A} {\bf n}] = [{\bf f}] through the use of the conjugate gradient method (CGM). This is possible since [{\bf A}] is a symmetric and positive definite matrix.1 The CGM iterations (see Bazaraa et al., 2007[Bazaraa, M. S., Sherati, H. D. & Shetty, C. M. (2007). Nonlinear Programming, Theory and Algorithms, 3rd ed. New York: Wiley.]) are particularly interesting for this linear system since [{\bf A}] is a convolution matrix. Indeed, the CGM strongly depends on matrix-vector product [{\bf A} {\bf u}], for some [{\bf u}], which is computed using fast Fourier transforms. In fact,

[{\bf A}{\bf n}= {\bf F}^T{\bf F}{\bf n}+\lambda {\bf n} = ({\bf n}\star h)\star h + \lambda {\bf n},\eqno(22)]

where h is the convolution kernel and [\star] is a symbol for the circular convolution.
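Under a periodic-boundary approximation, FᵀF acts as a circulant operator with transfer function |ĥ(ω)|², so the CGM never needs A explicitly. The sketch below is our illustration: it assumes SciPy's `scipy.sparse.linalg.cg` is available, and the circular boundary treatment and the value of λ are our simplifying choices, not the paper's exact discretization:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def ring_correction_cg(m_bar, h, lam=0.5):
    """Solve (F^T F + lam I) n = -F^T F m_bar, with F^T F treated as circulant."""
    R = m_bar.size
    P = np.abs(np.fft.fft(h, R))**2             # |h_hat|^2: convolve with h, then flipped h
    def ftf(v):                                 # v -> F^T F v via FFTs
        return np.real(np.fft.ifft(np.fft.fft(v) * P))
    A = LinearOperator((R, R), matvec=lambda v: ftf(v) + lam*v, dtype=float)
    n_star, info = cg(A, -ftf(m_bar), atol=1e-12, maxiter=10*R)
    if info != 0:
        raise RuntimeError('CGM did not converge')
    return n_star
```

Each CGM iteration then costs O(R log R) for the FFTs instead of O(R²) for a dense matrix-vector product.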

3.4. Computational complexity

The solution of (19)[link] for Titarenko's case, i.e. h = (1,-1) (see Table 1[link]), can be computed very quickly using the tridiagonal matrix algorithm (TDMA) (see Thomas, 1949[Thomas, L. H. (1949). Elliptic Problems in Linear Differential Equations over a Network, Watson Science Computer Laboratory Report, Columbia University, New York, USA.]). The computational complexity of TDMA is O(R). If the analytical inverse of [{\bf A}] is computed beforehand, we have to perform a matrix-vector product for each slice, which has complexity O(R²). This is not the best programming choice, and it is preferable to apply TDMA to each slice. A generic pseudocode for the three-dimensional reconstruction, using Titarenko's method, is given by

[[{\bf{3D}}]: \left\{ \matrix{ \bullet\,\, {\rm{for\,\,each\,\,slice[{\it k}],do\!:}} \hfill \cr\quad(a)\,\,{\rm{Solve\!:}}\,\,{\bf An} = {\bf f} \Rightarrow O(R) \hfill \cr\quad(b)\,\,{\rm{Update\!:}}\,\,{\bf S} = {\bf M} + {\bf n} \otimes {\bf e}_N^T \Rightarrow O(R^2) \hfill \cr\quad(c)\,\,\,{\rm{Fast\,\,reconstruction\!:}}\,\,{\bf i}_k = {\bb P}[{\bf S}] \Rightarrow O(R^2\log R) \hfill \cr \bullet \,\,{\rm{Stack}}\,\,\{{\bf i}_k\} \hfill} \right.]

For the generalized case, we still have complexity O(R) for step (a), using the conjugate gradient method. Therefore, the fast reconstruction strategy remains unaltered. Operator [{\bb P}] stands for the analytical inversion with filtered backprojection, or for an iterative strategy.

Fig. 2(a)[link] depicts the CPU time per slice, using algorithms TA and GTA in a high-level programming language such as Python. The reconstructions for this particular example are presented in more detail in §5[link]. Even with an analytical formulation for the inverse matrix [{\bf A}] = [({\bf F}^T {\bf F} + \lambda {\bf I})] in Titarenko's original approach, the CPU time is higher than that of solving [{\bf An}] = [{\bf f}] through the CGM. Indeed, [{\bf A}] is a symmetric sparse band matrix, and the iterations of the CGM converge in at most R steps, where R is the number of rays in the sinogram. In fact, for a particular example with R = 2048, Fig. 2(b)[link] shows the number of iterations needed for convergence of the CGM using the kernel vectors h1,2 and h2,2 for the generalized Titarenko's algorithm. Even though TA computes the dense matrix [{\bf A}^{-1}] only once, the matrix-vector product [{\bf A}^{-1} {\bf f}] has to be performed for each slice, which has a quadratic computational cost. Using the CGM is a more efficient strategy, since the matrix-vector product [{\bf A}{\bf n}] is computed through convolution.

[Figure 2]
Figure 2
Complexity of algorithms TA and GTA. (a) Comparison of the CPU time/slice used by Titarenko's algorithm and his generalization using the conjugate gradient method. (b) Number of iterations used by the conjugate gradient method with order 1 and 2.

3.5. Reduction by blocks

The generalized algorithm (19)[link] can easily be implemented by blocks of columns. This is important whenever the blank scan (the flat field) is corrected in between [b \in {\bb N}] projections. Also, such a strategy can be used to remove artefacts in the sinogram that are not eliminated using (19)[link].

Let the sinogram be given by

[{\bf S} = \left(\matrix{\hat{\bf S}_1 & \hat{\bf S}_2 & \ldots & \hat{\bf{S}}_b} \right), \qquad \hat{\bf S}_k \in {\bb R}^{R \times c},\eqno(23)]

where [c \in {\bb N}] is such that N = cb. Each [\hat{\bf S}_k] is restored according to the generalized algorithm, i.e.

[\hat{\bf S}_k = \hat{{\bf M}}_k + {\bf n} \otimes {\bf e}_c^T,\eqno(24)]

where [{\bf n}] is the solution of (19)[link], and [\overline{\bf{m}}] is the average projection within the block, i.e. [\overline{\bf{m}}] = [({{1}/{c}})\hat{{\bf M}}_k {\bf e}_c]. In this case, the matrix [{\bf A}] = [{\bf F}^T{\bf F} + \lambda {\bf I}_d] remains the same for each block.
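A block version is then a thin loop over column blocks, reusing the same matrix A. The sketch below is our self-contained illustration (a dense solve stands in for the CGM, and λ is illustrative):

```python
import numpy as np

def gta_restore_blocks(M, h, b, lam=0.5):
    """Restore b column blocks of M independently, reusing the same matrix A."""
    h = np.asarray(h, dtype=float)
    R, N = M.shape
    assert N % b == 0, 'the number of blocks must divide the number of angles'
    c = N // b
    F = np.zeros((R - (h.size - 1), R))
    for j in range(F.shape[0]):
        F[j, j:j + h.size] = h
    A = F.T @ F + lam * np.eye(R)               # identical for every block
    S = np.empty_like(M)
    for k in range(b):
        Mk = M[:, k*c:(k + 1)*c]
        m_bar = Mk.mean(axis=1)                 # average projection within the block
        n = np.linalg.solve(A, -F.T @ (F @ m_bar))
        S[:, k*c:(k + 1)*c] = Mk + n[:, None]   # update (24)
    return S
```

With b = 1 this reduces to the plain generalized restoration of the whole sinogram.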

3.6. Geometric mean sinogram

Solving (19)[link] through the conjugate gradient method can be made fast using convolution and the fast Fourier transform. Therefore, obtaining two solutions, say [{\bf n}_1] and [{\bf n}_2], is a simple task. In this case, we have two approximately restored sinograms, [{\bf S}_1] and [{\bf S}_2], respectively, each given by (20)[link]. If we take [{\bf n}_1] as the solution of (19)[link] with kernel h1,j, we are minimizing (40)[link]. On the other hand, if we take [{\bf n}_2] as the solution of the same linear system with kernel h2,j, we are minimizing (41)[link].

Our aim is to combine these two sinograms in order to obtain a better one. A convex combination of [{\bf S}_1] and [{\bf S}_2] does not fit our needs, since we would have to choose a proper weight favouring either the first or the second derivatives. Instead, we set a new sinogram, denoted by [{\bf G}(\varepsilon)], whose entries (j,i) are defined by

[{\bf G}_{j,i}(\varepsilon)= \left({\bf N}_{j,i} {\bf P}_{j,i} + \varepsilon\right)^{1/2},\eqno(25)]

for some previously defined tolerance [\varepsilon] > 0, where [{\bf N}] and [{\bf P}] stand for the restored sinograms [{\bf S}_1] and [{\bf S}_2], respectively. If we take [\varepsilon] [\equiv] 0 we have the geometric mean.2 The choice of [\varepsilon] is such that [{\bf G}] is close to the arithmetic mean.

4. Generalized Titarenko's algorithm with angle dependency

This case was already presented by Titarenko et al. (2011[Titarenko, S., Titarenko, V., Kyrieleis, A., Withers, P. & De Carlo, F. (2011). J. Synchrotron Rad. 18, 427-435.]), where the corrupted sinogram also depends on the angle of rotation. In this case, hypothesis (5) is replaced by

[s(t,\theta)=m(t,\theta)+n(t,\theta),\eqno(26)]

taking into account physical properties that depend on the angle of rotation θ. After a discussion similar to that given in Appendix A[link], a functional is minimized and a linear equation is obtained. Without going into further detail, we claim that the analysis of §3[link] extends easily to this case, using a finite difference operator [{\bb F}]. Here, we are looking for a correction matrix [{\bf N} \in {\bb R}^{R \times N}] such that [{\bf S}] = [{\bf M} + {\bf N}] is the updated sinogram, with [{\bf M}] the measured one.

In this case, the rows of the matrix [{\bf N}] are written as a linear combination of a previously known basis, i.e. [{\bf N} = {\bf C} {\bf D}] with [{\bf C} \in {\bb R}^{R \times S}] the unknown coefficients and [{\bf D} \in {\bb R}^{S\times N}] the basis. After computing the optimality conditions, the following linear system is obtained,

[\left({\bf F}^T {\bf F} + \lambda {\bf I}_d\right) {\bf C}_k = {\bf G}_k,\qquad k=1,2,\ldots,S,\eqno(27)]

where [\{{\bf C}_k,{\bf G}_k\}] are the kth columns of matrices [{\bf C}] and [{\bf G}], respectively. Here, matrix [{\bf G}] is given by [{\bf G}] = [{\bf B} {\bf D}^T \in {\bb R}^{R\times S}] with [{\bf B}] = [{\bf F}^T {\bf F} {\bf M} \in {\bb R}^{R\times N}]. In matrix form, the correction matrix [{\bf N}] is the one that satisfies [({\bf F}^T {\bf F} + \lambda {\bf I}_d) {\bf N}] = [{\bf B} {\bf D}^T {\bf D}]. The generalization comes from the fact that we can choose a kernel vector h (see Table 1[link]) in order to define the finite difference matrix [{\bf F}]. Taking h = [(1,-1) \in {\bb R}^2], the matrix [({\bf F}^T {\bf F} + \lambda {\bf I}_d)] becomes exactly the one proposed by Titarenko et al. (2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.]), with an analytic inverse. Further details of this algorithm can be found in the work of Titarenko et al. (2011[Titarenko, S., Titarenko, V., Kyrieleis, A., Withers, P. & De Carlo, F. (2011). J. Synchrotron Rad. 18, 427-435.]).

 

Generalized Titarenko's algorithm with angle dependency [GTA(θ)]

Input: Corrupted sinogram [{\bf M}\in {\bb R}^{R\times N}], [0 \,\,\lt\,\, S \in {\bb Z}]

[1] Define [{\bf D} \in {\bb R}^{S \times N}] orthonormal by rows

[2] Set [{\bf B}] = [({\bf F}^T {\bf F}) {\bf M}]

[3] Define matrix [{\bf A}] = [{\bf F}^T {\bf F} + \lambda {\bf I}_d]

[4] Define matrix [{\bf G}] = [{\bf B} {\bf D}^T {\bf D}]

[5] Solve [{\bf A} {\bf N}_k] = [{\bf G}_k], for each k = [1,2,\ldots,N]

Output: Restored sinogram [{\bf S}] = [{\bf M} + {\bf N}]

 

Compared with the GTA algorithm, GTA(θ) demands a higher computational cost. Loop [5] could be eliminated if the inverse of matrix [{\bf A}] were computed a priori, although this is only possible for the kernel vector h = (1, −1); in this case, [{\bf N}] = [{\bf{A}}^{-1}{\bf B}{\bf D}^T {\bf D}]. For different kernels, the explicit inverse matrix is no longer available, and the conjugate gradient method is used.
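The steps of GTA(θ) can be sketched as follows. This is our own illustration: all columns are solved at once through A N = B DᵀD, a dense solver replaces the CGM, and we include a minus sign so that the one-row constant basis D = e_Nᵀ/√N reproduces the angle-independent correction of (19); the names and λ are our choices:

```python
import numpy as np

def gta_theta(M, h, D, lam=0.5):
    """Angle-dependent correction N = C D, with D (S x N) orthonormal by rows."""
    h = np.asarray(h, dtype=float)
    R, _ = M.shape
    F = np.zeros((R - (h.size - 1), R))
    for j in range(F.shape[0]):
        F[j, j:j + h.size] = h
    A = F.T @ F + lam * np.eye(R)        # step [3]
    B = F.T @ (F @ M)                    # step [2]
    G = B @ D.T @ D                      # step [4]
    N = np.linalg.solve(A, -G)           # step [5]; sign chosen to match (19)
    return M + N                         # output: restored sinogram
```

With a richer basis D, each projection angle receives its own correction, at the cost of solving for N columns instead of one vector.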

5. Numerical experiments

In 2011, work began in earnest on the design of an imaging beamline at the Brazilian Synchrotron Light Source (https://lnls.br ). IMX, which has an electron source size of 391 µm × 97 µm and a beam divergence of 808 µrad × 26 µrad, was built on a 2 T bending magnet of the 1.37 GeV storage ring. The beamline can operate with either a white or a monochromatic beam. The white-beam energy spectrum ranges from 4 keV to 15 keV, with a photon flux at the sample position of approximately 10¹⁵ photons s⁻¹. Several scintillators are available at the beamline, the most utilized being the YAG:Ce scintillator crystal, which has an emission wavelength of 550 nm and a photon output per 8 keV. The scintillator is mounted on a commercially available microscope system from Optique Peter (Douissard et al., 2012[Douissard, P. A., Cecilia, A., Rochet, X., Chapel, X., Martin, T., van de Kamp, T., Helfen, L., Baumbach, T., Luquot, L., Xiao, X., Meinhardt, J. & Rack, A. (2012). J. Instrum. 7, P09016.]) coupled with a PCO.2000 CCD camera (PCO, Germany). The PCO.2000 is a high-resolution 14-bit cooled CCD camera with a quantum efficiency of 55% at 500 nm and a full well capacity of 40000. The resulting array size of this detector system is 2048 × 2048 pixels, with pixel sizes from 0.37 µm to 3.7 µm. For a typical tomographic scan, 1000 images were acquired over 180°. A schematic of the experimental system for the acquisition of projection data is shown in Fig. 3[link].

[Figure 3]
Figure 3
Schematic of the imaging beamline IMX at LNLS.

5.1. Medipix detector

Hybrid silicon photon-counting detectors can meet the requirements of very high dynamic range and high detection efficiency specially needed for tomographic applications using synchrotron radiation. Compared with other families of hybrid pixel detectors, such as Pilatus (see Kraft et al., 2009[Kraft, P., Bergamaschi, A., Broennimann, C., Dinapoli, R., Eikenberry, E. F., Graafsma, H., Henrich, B., Johnson, I., Kobas, M., Mozzanica, A., Schleptz, C. M. & Schmitt, B. (2009). IEEE Trans. Nucl. Sci. 56, 758-764.]) and XPAD (see Pangaud et al., 2008[Pangaud, P., Basolo, S., Boudet, N., Berar, J.-F., Chantepie, B., Clemens, J.-C., Delpierre, P., Dinkespiler, B., Medjoubi, K., Hustache, S., Menouni, M. & Morel, C. (2008). Nucl. Instrum. Methods Phys. Res. A, 591, 159-162.]), the Medipix3RX readout chip enables a higher spatial resolution with a pixel size of 55 µm by 55 µm. Furthermore, this chip takes advantage of the advanced CMOS (complementary metal oxide semiconductor) technology to allow a high level of functionality in each pixel, enabling reconstruction of the charge generated by a photon detected on several neighbour pixels (charge summing mode) or allowing up to eight energy thresholds per cluster of four pixels (spectroscopic mode). Unfortunately, some characteristics of analog and digital circuits of the ASICS (application specific integrated circuit) are responsible for a non-linear relationship between measurement and incident flux. One of these characteristics is the shaping time directly related to the dead time of the system, which is responsible for pile-up effects (see Rinkel et al., 2011[Rinkel, J., Brambilla, A., Dinten, J.-M. & Mougel, F. (2011). Patent WO2011064381.]). The pixel-to-pixel dispersion of these non-linear parameters cannot be corrected by simple flat-field normalization. In tomography, these dispersions are responsible for ring artefacts, in addition to global false estimation of the reconstructed attenuation coefficients.
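The flux dependence of pile-up can be illustrated with the standard paralysable dead-time model for counting detectors; both the model and the dead-time value below are illustrative assumptions, not taken from the detector characterization in this work:

```python
import math

# Illustrative only: paralysable (extendable) dead-time model, a standard
# description of pile-up in counting detectors. tau is an assumed dead time.
def measured_rate(true_rate, tau):
    """Observed count rate for a paralysable detector with dead time tau."""
    return true_rate * math.exp(-true_rate * tau)

# The relative counting loss grows with the incident flux, so pixel-to-pixel
# dispersion of tau produces a flux-dependent (non-linear) error that a single
# flat-field normalization cannot remove.
tau = 1e-6  # assumed 1 microsecond dead time
for n in (1e3, 1e5, 5e5):
    loss = 1.0 - measured_rate(n, tau) / n
    print(f"true rate {n:.0e} cps -> relative loss {loss:.3%}")
```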

To assess the robustness of the proposed algorithms for ring artefact correction, tomographic measurements were performed on the tomographic beamline of the Brazilian Synchrotron (LNLS) with a complementary Medipix detector. A Medipix3RX ASIC bump-bonded to a 300 µm silicon sensor was used in conventional single-pixel mode, configured with single 24-bit counters and high-gain mode (see Gimenez et al., 2011[Gimenez, E. N., Ballabriga, R., Campbell, M., Horswell, I. & Llopart, X. (2011). IEEE Trans. Nucl. Sci. 58, 323-332.]). The detector was read out using Medipix3 USB interfaces and the Pixelman software (see Vykydal et al., 2006[Vykydal, Z., Jakubek, J. & Pospisil, S. (2006). Nucl. Instrum. Methods Phys. Res. A, 563, 112-115.]).

5.2. Examples

In the absence of noise, the theoretical sinogram [s(t,\theta)] satisfies the following mathematical relation (Helgason, 1980[Helgason, S. (1980). The Radon Transform. Boston: Birkhauser.]),

[\eqalignno{{{1}\over{2}}\int\limits_{{\bb R}} s(t,\theta)\,{\rm{d}}t&= z(\theta)= {\rm{constant}}\cr&= {{1}\over{2}}\int\limits_{{\bb R}^2} f(\bf x)\,{\rm{d}}{\bf{x}} \qquad \forall \theta.&(28)}]

We refer to the constant z as the mass of the sinogram, i.e. it remains constant for all possible values of θ. For noisy data, (28)[link] is no longer true. For the LNLS synchrotron data, as the beam current decays in time, the function [z(\theta)] is far from constant, exhibiting a monotonic behaviour.

Since [z(\theta)] in (28)[link] is not constant for real measured data [m(\cdot,\theta)], the standard deviation (std) of each projection [m(\cdot,\theta) \equiv m_\theta(\cdot)] is a function [w(\theta)], say

[w(\theta)= {\rm{std}}(m_\theta)\Rightarrow {\bf{w}}=\left[w\left(\theta_k\right)\right]\in{\bb{R}}^N.\eqno(29)]

Function w = [w(\theta)] is shown in Fig. 4[link] for some measured sinogram at the angle set [\{\theta_1,\ldots,\theta_{1000}\}]. We take the regularization parameter λ [see (14)[link]] as the standard deviation of [{\bf{w}}], i.e.

[\lambda={\rm{std}}({\bf{w}}).\eqno(30)]

As is clear from (30)[link], the value of λ depends on the measured sinogram [{\bf{M}}\in{\bb{R}}^{R\times{N}}]. The closer [{\bf{z}}] = [[z(\theta_k)]] is to a constant vector, the lower λ is, since we are dealing with a well behaved sinogram (at least in theory). Conversely, the farther [{\bf{z}}] is from a constant vector, the higher the penalty, since we have an ill-behaved sinogram.
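The heuristic (29)–(30) is straightforward to compute; a small pure-Python sketch, with the sinogram stored as in the notation above (rows indexed by the radial variable t, columns by the angle):

```python
import math

def std(v):
    """Population standard deviation (Octave's std() divides by n-1 instead;
    the distinction only rescales lambda slightly)."""
    mu = sum(v) / len(v)
    return math.sqrt(sum((x - mu) ** 2 for x in v) / len(v))

def regularization_parameter(M):
    """lambda = std over angles of the per-projection std, eqs (29)-(30).
    M is an R x N sinogram stored as a list of R rows (radial x angle)."""
    R, N = len(M), len(M[0])
    w = [std([M[i][k] for i in range(R)]) for k in range(N)]  # eq. (29)
    return std(w)                                             # eq. (30)
```

A well behaved sinogram, whose projections all have the same spread, yields λ close to zero, consistent with the discussion above.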

[Figure 4]
Figure 4
Function w = [w(\theta)], as defined in equation (29)[link], is the standard deviation of each measured projection, for a fixed sinogram. In this particular example the sinogram is presented in Fig. 5(a)[link].

We use the following notation:

[{\bf S}_{k,j} \in {\bb R}^{R\times N}] is the restored sinogram [{\bf S}_{k,j}] = [{\bf M}] + [{\bf{q}}\otimes{\bf{e}}_N^T]; where [{\bf q} \in {\bb R}^R] is the solution of (19)[link] with kernel hk,j given in Table 1[link];

[{\bf G}] = [{\bf G}(\lambda)\in{\bb R}^{R \times N}] is the geometric mean sinogram, with parameter λ defined in (30)[link], using [{\bf S}_{1,3}] and [{\bf S}_{2,2}] as references [see (25)[link]];

[{\bf M}] is the measured sinogram, usually corrupted as in (5)[link].
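As a rough sketch of the combination step, an elementwise geometric mean of two restored sinograms can be taken; note that (25)[link] additionally involves the parameter λ, which is omitted here, so this should be read only as the simplest unweighted special case:

```python
import math

# Simplified sketch: combine two restored sinograms by their elementwise
# geometric mean. The lambda-weighting of eq. (25) is omitted.
def geometric_mean_sinogram(S1, S2):
    """Elementwise geometric mean of two equally sized sinograms
    (assumes non-negative entries, e.g. attenuation sinograms)."""
    return [[math.sqrt(a * b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(S1, S2)]
```

By the arithmetic-geometric mean inequality (see footnote 2), each combined pixel never exceeds the arithmetic mean of the two restorations, which damps a residual stripe present in only one of them.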

A measured sinogram at the LNLS, [{\bf M}], is shown in Fig. 5(a)[link]. The region within the rectangle is presented in Fig. 5(b)[link], where several horizontal stripes are visible. Each stripe generates a ring in the reconstructed image.

[Figure 5]
Figure 5
Sinogram I (see text for details). (a) Original sinogram from the LNLS beamline. (b) Zoomed region from the original sinogram. (c) Sinogram [{\bf S}_{1,3}]. (d) Smoothed sinogram [{\bf S}_{2,2}]. (e) Geometric mean sinogram [{\bf G}].

The generalized Titarenko's suppression algorithm, with kernel h1,3, see Table 1[link], gives the result [{\bf S}_{1,3}] in Fig. 5(c)[link]. Using a high-order finite difference, the horizontal stripes are clearly reduced. However, as can be noted in Fig. 5(c)[link], there are two severe horizontal stripes remaining in the restored sinogram. The one on the top is due to the average projection [see vector [\overline{\bf{m}}] in (41)[link]]. The second is probably due to dead/damaged pixels in the CCD camera.

Fig. 5(d)[link] is the restored sinogram [{\bf S}_{2,2}] using the generalized algorithm with kernel h2,2. The artefact corresponding to the first stripe is strongly reduced, although the second stripe still remains, as it is a major corruption in the data. Fig. 5(e)[link] depicts the geometric sinogram [{\bf G}] using the combination of Figs. 5(d) and 5(c)[link].

The projection at the angle [\theta_{400}] = [{{\pi}/{10}}] for the restored sinogram [{\bf S}_{1,3}] is presented in Fig. 6(a)[link], compared with the same projection of the geometric sinogram [{\bf G}]. As expected, the geometric mean sinogram outperforms [{\bf S}_{1,3}], as shown in the rectangle, zoomed in Fig. 6(b)[link].

[Figure 6]
Figure 6
Projections at [\theta_{400}] = [{{\pi}/{10}}]. (a) Projection [{\bf S}_{1,3}(\cdot,400)] versus projection [{\bf G}(\cdot,400)]. (b) Zoomed region of part (a).

To remove the second horizontal stripe (see Fig. 5e[link]), we adopt the block strategy presented in (24)[link], using only two blocks. The restored sinograms [only the critical region of Fig. 5(a)[link]] are shown in Figs. 7(a) and 7(b)[link]. The first was obtained using kernel h2,2 at each block, whereas the second was obtained using the geometric mean sinogram of each block, taking h2,2 and h1,3 as appropriate convolution kernels.

[Figure 7]
Figure 7
(a) Restoration using two blocks [see equation (24)[link]] with kernel h2,2 at each block. (b) Geometric mean sinogram using two blocks. (c) Reconstruction with the sinogram of (b). (d) Magnified part of (c).

The first horizontal stripe present in the restored sinogram (see Fig. 5c[link]) leads to the circular shadow outside the sample. The second horizontal stripe, uncorrected by first-order finite differences, leads to only one strong ring artefact in the feature image. Using second-order finite differences, i.e. using [{\bf S}_{2,2}], we obtain the reconstruction in Fig. 8(c)[link]. Since the first horizontal stripe was reduced, the shadow outside the sample, as in Fig. 8(b)[link], was dramatically reduced. Nevertheless, since the second stripe still remains in the restored sinogram, the reconstructed feature image presents one strong ring artefact. Through the geometric mean sinogram [{\bf G}] (see Fig. 5e[link]) we obtain the reconstruction in Fig. 8(d)[link].

[Figure 8]
Figure 8
FOV of reconstruction from sinograms presented in Fig. 5[link]. See text for details.

To reduce the leading ring artefact, still present in the reconstructed images [see Figs. 8(b), 8(c) and 8(d)[link]], we use the filtered backprojection on the sinogram of Fig. 7(b)[link], which is the geometric combination of [{\bf S}_{2,2}] (by blocks) and [{\bf S}_{1,3}] (by blocks). As depicted in Fig. 7(c)[link], the leading ring artefact was completely eliminated, while still preserving the features of the image, as seen in Fig. 7(d)[link] [a zoomed part of Fig. 7(c)[link]].

In order to illustrate the action of the derivative kernel hj,k, see Table 1[link], we present a low-resolution simulated example. A 327×327 image, representing a simulated micro-gear, is shown in Fig. 9(a)[link]. After adding noise (i.e. horizontal stripes) to the 527×180 sinogram, filtered backprojection is applied to the corrupted sinogram, yielding the reconstruction in Fig. 9(b)[link], where the ring artefacts are clearly visible.
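The stripe corruption used in this simulation follows model (5)[link]: a fixed per-row offset, constant over all angles, which appears as a horizontal stripe in the sinogram and as a ring after reconstruction. A sketch of such a corruption step (the stripe density and amplitude are arbitrary choices for illustration):

```python
import random

# Sketch of the corruption model s = m + n(t): a subset of detector rows t
# receives a fixed offset n[t], constant over all angles (columns).
def add_stripes(sino, amplitude=0.1, fraction=0.05, seed=0):
    """Corrupt an R x N sinogram (list of rows) with row-wise offsets."""
    rng = random.Random(seed)
    R = len(sino)
    corrupted = [row[:] for row in sino]
    for t in rng.sample(range(R), max(1, int(fraction * R))):
        offset = rng.uniform(-amplitude, amplitude)
        corrupted[t] = [v + offset for v in corrupted[t]]
    return corrupted
```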

[Figure 9]
Figure 9
Simulated example. (a) Image with resolution 327×327, representing an ideal micro-gear. (b) Reconstructed image (with strong ring artefacts) after adding horizontal stripes to the 527×180 sinogram.

The generalized Titarenko's algorithm (see §2[link]) with different orders for the derivative was applied to the corrupted simulated gear data. The results are depicted in Fig. 10[link]. Part (a) presents the result obtained using the original Titarenko's algorithm (see §2[link] for an algorithmic description of TA). In this case the strong ring artefacts were reduced, but new artefacts were introduced, mainly because the sinogram has low resolution in the pixel axis. We remark that Titarenko's original algorithm is obtained from the generalized algorithm with kernel vector h1,1. Using the second-order derivative, we reduce the stripe effects through smoothness in the radial direction of the sinogram. The reconstructed images using second derivatives, i.e. using kernel vectors h2,1 and h2,2, are presented in parts (b) and (c), respectively. Although rings still remain in the reconstructed images, they are reduced compared with (a). Using the block strategy and different kernel vectors, we obtain Figs. 10(d), 10(e) and 10(f)[link].

[Figure 10]
Figure 10
Simulated example, for TA and GTA. (a) TA for corrupted data. (b) GTA with h1,2. (c) GTA with h2,2. (d) GTA with h1,2 and 6 blocks. (e) GTA h2,2 and 60 blocks. (f) GTA with h3,1 and 6 blocks. See text for details.

The Wavelet-FFT approach for ring removal, presented by Münch et al. (2009[Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567-8591.]), is based on a discrete wavelet decomposition of the sinogram. Without going into further details, the algorithm to recover the sinogram [{\bf S}] is given by

[{\bf S}={\cal{W}}\,[{\bf{M}},L,\omega,\sigma],\eqno(31)]

where [{\bf M}] is the measured sinogram, L is the highest decomposition level using the wavelet type ω, and σ is a damping factor, fixed for each decomposition level. For example, ω could denote a Daubechies wavelet (e.g. DB25 or DB42) or the Haar wavelet (DB1). Fig. 11[link] presents some reconstructions using DB25 with three different choices of [(L,\sigma)]. In comparison with the generalized Titarenko's algorithm, we note that the method is highly sensitive to the choice of [(L,\sigma)].

[Figure 11]
Figure 11
Reconstruction of the simulated micro-gear using the Wavelet-FFT correction and Daubechies wavelet DB25 (Münch et al., 2009[Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567-8591.]). (a) Parameters L = 7, [\sigma] = 2. (b) Parameters L = 3, [\sigma] = 3. (c) Parameters L = 2, [\sigma] = 7.

The block strategy was applied to the corrupted gear data, yielding the reconstructions in Figs. 10(d), 10(e) and 10(f)[link]. The restored sinograms were recovered using the generalized Titarenko's approach with b = 6 blocks [see (23)[link]], i.e. applying GTA to blocks of 30 angles each.

The sinogram of the rock, presented in Fig. 5[link], was restored using the generalized Titarenko's algorithm with angle dependency [see §4[link]]. Using kernel vector h = (1,-2,1) from Table 1[link] we obtain the results in Fig. 12[link]. The restored sinogram presented in Fig. 12(b)[link] is similar to the one obtained using the classical Titarenko's approach. The same strong stripe artefact remains, even with a smoothing kernel. In this example, GTA(θ) is competitive with GTA, but with a higher computational cost. Fig. 12(a)[link] shows the correction matrix [{\bf N}], which is non-constant through columns. Fig. 13[link] depicts GTA(θ) in the simulated micro-gear example, with different values of parameter S, and with fixed parameter λ, as in (30)[link].

[Figure 12]
Figure 12
Sinogram restoration using GTA(θ) with kernel h = (1,-2,1) and S = 5. (a) Correction matrix [{\bf N} \in {\bb R}^{R\times N}]. (b) Updated sinogram [{\bf S}]. See §4[link] for details.
[Figure 13]
Figure 13
Sinogram restoration using GTA(θ) with kernel h = (1,-2,1) and different values of [S \in {\bb{Z}}]. (a), (b), (c) S = 2, S = 5 and S = 20, respectively. (d), (e), (f) Reconstruction of sinograms (a), (b) and (c), respectively.

Finally, some image reconstructions, with corrupted sinograms obtained with the Medipix detector, are shown in Figs. 14[link] and 15[link]. As the Medipix images have low resolution, the same discussion as in the micro-gear example of Fig. 9[link] applies here. Noisy and low-resolution sinograms are better restored with a higher-order derivative and a block strategy.

[Figure 14]
Figure 14
Reconstruction of the human tooth of Fig. 1[link] with the Medipix detector. (a) Using sinogram [{\bf S}_{1,1}]. (b) Using the geometric sinogram. (c) Enlarged image of (a). (d) Enlarged image of (b).
[Figure 15]
Figure 15
Reconstruction of a rock sample with the Medipix detector. (a) Using the corrupted sinogram. (b) Using the geometric sinogram. (c) Enlarged image of (a). (d) Enlarged image of (b).

6. Discussion

As mentioned in the Introduction[link], the generation of ring artifacts in reconstructed images is usually related to the non-linearity of the pixels' response. The model of sinogram corruption given by (5)[link] accounts for the presence of stripes. It assumes that the deviation from s to m does not depend on the angle; in other words, this deviation is supposed to be independent of the sample inhomogeneity.

Consider a pixel receiving a signal (intensity or number of photons) denoted by I0 without sample and I in the presence of the sample. The associated measurements are denoted Im0 and Im, respectively. The measured sinogram m and the original one s of (5)[link] are obtained by calculating the measured and real X-ray attenuations of the object, respectively,

[m(y)=\log(Im_0)-\log(Im),\eqno(32)]

[s(y)=\log(I_0)-\log(I).\eqno(33)]

We can deduce the value of the deviation term,

[n(y)=\log(I_0/Im_0)-\log(I/Im).\eqno(34)]
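The decomposition n = s − m follows directly from (32) and (33); a quick numerical check with arbitrary illustrative intensities:

```python
import math

# Arbitrary illustrative intensities: ideal (I0, I) and measured (Im0, Im),
# without and with the sample, respectively.
I0, I = 1000.0, 200.0
Im0, Im = 980.0, 205.0

m = math.log(Im0) - math.log(Im)   # eq. (32): measured attenuation
s = math.log(I0) - math.log(I)     # eq. (33): true attenuation
n = s - m                          # deviation term n = s - m

# n splits into a sample-independent part and a part that depends on the
# transmitted flux, and hence on the projection angle:
assert abs(n - (math.log(I0 / Im0) - math.log(I / Im))) < 1e-12
```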

The second term of the last equation depends on the object composition and, in this way, on the projection angle for each pixel. Most non-linear effects, such as the pile-up phenomenon, tend to increase with the incident flux; this term is therefore expected to be smaller at the lower transmitted fluxes produced by strongly attenuating objects, resulting in better artefact correction for high-attenuation objects. This interpretation could explain the behaviour of the ring suppression algorithm on the images obtained with Medipix (see Figs. 14[link], 15[link] and 16[link]). The ring suppression within the samples is really efficient for the tooth (Fig. 14[link]) and for the rock (Fig. 15[link]). This could be related to their high attenuation, with maximum attenuation values over the whole sinogram of 8.3 and 6.2, respectively. For the last sample, the ring suppression within the sample is only partial, which could be explained by its low attenuation (maximum attenuation value of 1.8 over the whole sinogram).

[Figure 16]
Figure 16
Reconstruction of the human tooth of Fig. 1[link] with the Medipix detector, using the Wavelet-FFT correction and Daubechies wavelet DB25 (Münch et al., 2009[Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567-8591.]). (a) Parameters L = 7, [\sigma] = 2. (b) Parameters L = 3, [\sigma] = 3. (c) Parameters L = 2, [\sigma] = 7.

We are following the approach of Prince & Willsky (1990[Prince, J. L. & Willsky, A. S. (1990). Opt. Eng. 29, 535-544.]), where a quadratic functional is minimized. This was also the path taken by the classical Titarenko's algorithm. In fact, using a quadratic norm we obtain a linear system that is easy to solve, which is one of the main strengths of Titarenko's algorithm. A total variation (TV) penalty might be more appropriate, but it would lead to a different type of algorithm, more difficult to implement and not as fast as the one proposed here. Computational time is one of the most important aspects of imaging beamlines at a synchrotron, and in this respect GTA is well suited.

7. Conclusions

We have presented a fast algorithm for ring artefact reduction in tomography, generalizing the approach proposed by Titarenko and collaborators, and provided several numerical and simulated results using the proposed method. The algorithm was tested on experimental data acquired with two different detectors: a scintillator mounted on a microscope system coupled with a CCD camera, and a counting detector based on a 300 µm Si sensor connected to a Medipix3RX readout ASIC. For the low-resolution Medipix detector, we obtained good results using the generalized approach with a high-accuracy finite-difference operator.

The main limitation of the method lies in the accuracy of the model of data corruption. Further work focuses on a version of Titarenko's algorithm that allows the correction term to depend on the projection angle, as already carried out in Titarenko et al. (2011[Titarenko, S., Titarenko, V., Kyrieleis, A., Withers, P. & De Carlo, F. (2011). J. Synchrotron Rad. 18, 427-435.]), i.e. by considering [s(t,\theta)] = [m(t,\theta)+n(t,\theta)] instead of (5)[link]. We have applied such a technique in this manuscript, but its computational cost is high. We have also compared our strategy, without the angle dependency, with the Wavelet-FFT algorithm presented by Münch et al. (2009[Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567-8591.]). The wavelet approach is fast, with a linear computational cost O(N). Although it is a very competitive method, it depends on three parameters, [\{L,\omega, \sigma\}], making it difficult to choose the best configuration; this is a clear disadvantage compared with the present method, which depends on only one parameter.

Applying a TV penalty to the corrupted sinogram could be a good choice for restoring the sinogram, as it penalizes smaller perturbations. This is an excellent starting point for a new study on ring artefact reduction.

APPENDIX A

Titarenko's Algorithm

In real experiments, the measured sinogram [m({\bf y})] is a corrupted version of the original one, which we denote by [s({\bf y})], i.e.

[s({\bf y}) = m({\bf y}) + n({\bf y}),\eqno(35)]

where [n \in V] is a sinogram to be determined, obeying some mathematical criteria (see Titarenko et al., 2011[Titarenko, S., Titarenko, V., Kyrieleis, A., Withers, P. & De Carlo, F. (2011). J. Synchrotron Rad. 18, 427-435.]). Following the classical approach of Rudin et al. (1992[Rudin, L. I., Osher, S. & Fatemi, E. (1992). Physica D, 60, 259-268.]), the restored sinogram [s \in V] is the one that minimizes the TV, subject to constraints of mean, say [{\cal{M}}], and standard deviation, given by [{\cal{S}}],

[\left.\matrix{ {\rm{min}}\hfill & {{1}\over{2}}{\rm{TV}}(s)\hfill \cr {\rm{s.t}}\hfill & {\cal{M}}(s)={\cal{M}}(m)\hfill \cr & {\cal{S}}(s)=\sigma\hfill } \right\} \Leftrightarrow \left\{ \matrix{ {\rm{min}}\hfill & {{1}\over{2}}{\rm{TV}}(s)+{{\lambda}\over{2}}\|s-m\|^2\hfill\cr {\rm{s.t}}\hfill & s \in V \hfill }\right.\eqno(36)]

Here, the total variation norm is defined by TV(s) = [\textstyle\int_{{\bf N}}\|\nabla s({\bf y})\|\,{\rm{d}}{\bf{y}}]. The equivalence of the constrained and unconstrained optimization problems in (36)[link] was previously shown by Chambolle (2004[Chambolle, A. (2004). J. Math. Imaging Vis. 20, 89-97.]). The choice of the regularization parameter λ depends on the nature of the problem and could be estimated with an appropriate method (see Yagola et al., 2002[Yagola, A. G., Leonov, A. S. & Titarenko, V. N. (2002). Inverse Probl. Eng. 10, 117-129.]). A TV approach will penalize smaller perturbations on the corrupted sinogram, but in this case the optimality conditions are more difficult to solve.

The approach adopted by Titarenko et al. (2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.],b[Titarenko, V., Bradley, R., Martin, C., Withers, P. J. & Titarenko, S. (2010b). Proc. SPIE, 7804, 78040Z.],c[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010c). J. Synchrotron Rad. 17, 540-549.]) searches for a solution that minimizes the quadratic functional

[({{1}/{2}})\int\limits_{\bf{N}}\|\nabla s({\bf{y}})\|^2\,{\rm{d}}{\bf{y}}+({{\lambda}/{2}})\|s-m\|^2.\eqno(37)]

This approach was also explored in the work of Prince & Willsky (1990[Prince, J. L. & Willsky, A. S. (1990). Opt. Eng. 29, 535-544.], 2002[Prince, J. L. & Willsky, A. S. (2002). IEEE Trans. Image Process. 2, 401-416.]), where the sinogram is restored through the minimization of the above functional, subject to the Ludwig–Helgason consistency conditions (Ludwig, 1966[Ludwig, D. (1966). Commun. Pure Appl. Math. 19, 49-81.]; Helgason, 1980[Helgason, S. (1980). The Radon Transform. Boston: Birkhauser.]).

Assuming that the deviation from s to m does not depend on the angle, i.e.

[s({\bf y}) = m({\bf y}) + n(t),\eqno(38)]

we arrive at the minimization, in terms of the radial (or noise) function n = n(t), i.e.

[V(n)=(1/2)\int\limits_{\bf{N}}\left(\partial_tm+\partial_tn\right)^2\,{\rm{d}}t\,{\rm{d}}\theta+({{\lambda}/{2}})\|n\|^2.\eqno(39)]

Assumption (38)[link] is motivated by the stripes along the θ-axis on each sinogram.

In a discrete sense, the sinogram m is represented by the matrix [{\bf M}\in{\bb{R}}^{R\times N}], and we are looking for a noise vector [{\bf n} \in {\bb R}^{R}] such that V = [V({\bf n})] is minimal, with the integral replaced by a sum and the derivative replaced by finite differences. Since we are minimizing a quadratic, the function attains its minimum where the gradient vanishes, i.e. [\nabla V({\bf n}^*)] = 0. The first-order conditions lead to a tridiagonal linear system that must be solved to find [{\bf n}^*]. Titarenko's algorithm now follows,

[\eqalignno{ {\bf n}^*&= \mathop{{\rm{argmin}}}_{{\bf n}\,\in\,{\bb{R}}^R} \left[ V({\bf{n}})= {{1}\over{2}} \sum\limits_{i,\,j} {\bb{F}}_1\left({\bf{M}}_{j,i}+{\bf{n}}_j\right)^2\,+\,\,\lambda{{N}\over{2}} \sum\limits_{j} {\bf{n}}_j^2 \right] \cr& \Rightarrow \left({\bf{T}}_1+\lambda{\bf{I}}_{d}\right){\bf{n}}^*= -{\bf{T}}_1\overline{\bf{m}}, &(40)}]

with [{\bf T}_1] being a constant tridiagonal matrix, [{\bb F}_1] a first-order finite difference and [\overline{\bf{m}}] the average of the projections, i.e.

[\eqalign{ {\bb{F}}_1[{\bf{P}}_{j,i}]&= {\bf{P}}_{j,i}-{\bf{P}}_{j+1,i}, \cr \overline{\bf{m}}&=({{1}/{N}})\,{\bf{M}}{\bf{e}}_N\in{\bb{R}}^R, \cr {\bf{e}}_N&=(1)\in{\bb{R}}^N.}\eqno(41)]
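The optimality condition in (40) can be verified numerically on a toy problem: the vector solving [({\bf{T}}_1+\lambda{\bf{I}}_{d})\,{\bf{n}}^*] = [-{\bf{T}}_1\overline{\bf{m}}] indeed minimizes the discrete functional. A pure-Python sketch (sizes and values are arbitrary illustrative choices):

```python
# Verify the first-order condition of eq. (40) on a small toy sinogram.

def t1_apply(v):
    """Apply the tridiagonal matrix T1 = F^T F of eq. (42) to a vector."""
    R = len(v)
    out = [0.0] * R
    for j in range(R):
        out[j] = (1.0 if j in (0, R - 1) else 2.0) * v[j]
        if j > 0:
            out[j] -= v[j - 1]
        if j < R - 1:
            out[j] -= v[j + 1]
    return out

def V(n, M, lam):
    """Discrete functional of eq. (40) for an R x N sinogram M (list of rows)."""
    R, N = len(M), len(M[0])
    total = 0.0
    for i in range(N):
        for j in range(R - 1):  # first-order finite difference along t
            d = (M[j][i] + n[j]) - (M[j + 1][i] + n[j + 1])
            total += 0.5 * d * d
    return total + 0.5 * lam * N * sum(x * x for x in n)

# toy sinogram: R = 4 radial samples, N = 3 angles
M = [[1.0, 1.1, 0.9], [2.0, 2.2, 1.9], [1.5, 1.4, 1.6], [0.5, 0.6, 0.4]]
lam, R, N = 0.5, 4, 3
m_bar = [sum(row) / N for row in M]

# assemble A = T1 + lam*I column by column, and b = -T1 m_bar
A = [[0.0] * R for _ in range(R)]
for j in range(R):
    e = [0.0] * R
    e[j] = 1.0
    col = t1_apply(e)
    for i in range(R):
        A[i][j] = col[i] + (lam if i == j else 0.0)
b = [-x for x in t1_apply(m_bar)]

# naive Gaussian elimination (A is SPD, see footnote 1, so no pivoting needed)
for k in range(R):
    for i in range(k + 1, R):
        f = A[i][k] / A[k][k]
        A[i] = [a - f * c for a, c in zip(A[i], A[k])]
        b[i] -= f * b[k]
n_star = [0.0] * R
for i in range(R - 1, -1, -1):
    n_star[i] = (b[i] - sum(A[i][j] * n_star[j]
                            for j in range(i + 1, R))) / A[i][i]

# any small perturbation of n_star increases V, confirming the minimizer
v0 = V(n_star, M, lam)
for j in range(R):
    for eps in (1e-3, -1e-3):
        n = n_star[:]
        n[j] += eps
        assert V(n, M, lam) >= v0
```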

Titarenko et al. (2010a[Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489-1495.]) presented the exact analytical inverse of [{\bf T}_1+\lambda{\bf{I}}] using properties of hyperbolic trigonometric functions. For completeness, the tridiagonal matrix [{\bf{T}}_1] is given explicitly by

[{\bf{T}}_1= \left(\matrix{ 1 & -1 & \ldots & 0 & 0 \cr -1 & 2 & \ldots & 0 & 0 \cr \vdots & & \ddots & & \vdots \cr 0 & 0 & \ldots & 2 & -1 \cr 0 & 0 & \ldots & -1 & 1 }\right)_{R\times{R}}.\eqno(42)]

APPENDIX B

Octave codes

The codes for ring suppression, as presented in this manuscript, are given below in the open-source Octave programming language. Function ring() performs the correction by columns, while function ring_blocks() corrects by blocks of columns; see (20)[link], (23)[link] and (24)[link]. The auxiliary function ringMatXvec() performs the matrix-vector product (22)[link] using the convolution function conv() implemented by Octave. Finally, function ringCGM() iterates using the conjugate gradient method in order to solve the linear system (21)[link]. Functions kron() (Kronecker product), ones() and std() are also implemented in Octave (see Eaton et al., 2009[Eaton, J. W., Bateman, D. & Hauberg, S. (2009). GNU Octave, a high-level interactive language for numerical computations, Version 3.0.1. CreateSpace Independent Publishing Platform.]).

By changing the kernel vector h defined in functions ring() and ring_blocks(), we modify the order of the derivative, according to Table 1[link]. Transcription of these codes to a high-level language such as Python is straightforward. We emphasize that the original Titarenko's algorithm is recovered by setting h = [1 -1] in function ring(); the same applies to function ring_blocks().

Function ring_a() stands for the algorithm GTA(θ) (see §4[link]) with S as the input parameter; see Titarenko et al. (2011[Titarenko, S., Titarenko, V., Kyrieleis, A., Withers, P. & De Carlo, F. (2011). J. Synchrotron Rad. 18, 427-435.]) for further details on matrices [{\bf D}] and value [S \in {\bb Z}].

 

function new = ring(old);
% Ring suppression by columns: solve (F'F + alpha*I) n = -F'F m_bar
% and add the correction n to every projection (column) of the sinogram.
alpha = 2*std(std(old));
pp = sum(old')'/size(old,2);   % average projection m_bar
N = size(old,2);
h = [-1 2 -1];                 % derivative kernel (see Table 1)
f = -ringMatXvec(h,pp);
n = ringCGM(h,alpha,f);
new = old + kron(n,ones(1,N));

 

 

function new = ring_blocks(old,step);
% Ring suppression by blocks of `step' columns (angles), applying the
% column-wise correction independently inside each block; see (23)-(24).
% (Assumes that step divides the number of columns N.)
alpha = 2*std(std(old));
R = size(old,1);
N = size(old,2);
h = [-1 2 -1];
blocks = fix(N/step);
for k = 1:blocks,
   col = (k-1)*step+1:k*step;
   sino_block = old(:,col);
   pp = sum(sino_block')'/size(sino_block,2);
   f = -ringMatXvec(h,pp);
   q = ringCGM(h,alpha,f);
   new(:,col) = sino_block + diag(q)*ones(R,step);
end

 

 

function y = ringMatXvec(h,x);
% Matrix-vector product y = F'F x implemented with convolutions,
% where F is the banded finite-difference matrix built from kernel h.
s = conv(x,fliplr(h));
u = s(length(h):length(x));   % valid part: u = F*x
y = conv(u,h);                % y = F'*u

 

 

function x = ringCGM(h,alpha,f)
% Conjugate gradient iteration for the linear system (F'F + alpha*I) x = f.
x0 = zeros(size(f));
r = f - (ringMatXvec(h,x0) + alpha*x0);
w = -r;
z = ringMatXvec(h,w) + alpha*w;
a = (r'*w)/(w'*z);
x = x0 + a*w;
B = 0;
for i = 1:10^6
   r = r - a*z;
   if (norm(r) < 1e-6)
      break;
   endif
   B = (r'*z)/(w'*z);
   w = -r + B*w;
   z = ringMatXvec(h,w) + alpha*w;
   a = (r'*w)/(w'*z);
   x = x + a*w;
end

 

 

function new = ring_a(old,S);
% GTA(theta): angle-dependent correction built on the first S low-frequency
% basis functions (rows of D); see Section 4 and Titarenko et al. (2011).
R = size(old,1);
N = size(old,2);
D = zeros(S,N);   % basis matrix (preallocated; the original listing
                  % preallocated an unused variable F instead)
i = 1:N;
for k = 1:fix(S/2),
   D(2*k,:) = sqrt(2/N)*cos(pi*i*2*k/N);
   D(2*k-1,:) = sqrt(2/N)*sin(pi*i*2*k/N);
end
D(1,:) = sqrt(1/N)*ones(1,N);
h = [-1 2 -1];
for i = 1:N, B(:,i) = -ringMatXvec(h,old(:,i)); end
G = B*D'*D;
alpha = 2*std(std(old));
for w = 1:N,
   new(:,w) = old(:,w)+ringCGM(h,alpha,G(:,w));
end
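As noted above, transcription to Python is straightforward. A possible pure-Python sketch of ring(), ringMatXvec() and ringCGM() (names mirror the Octave functions; it assumes a non-degenerate sinogram, so that the computed alpha is positive and the right-hand side is non-zero):

```python
import math

def conv_full(a, b):
    """Full discrete convolution, as Octave's conv()."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def ring_mat_x_vec(h, x):
    """y = F'F x via convolutions (Octave ringMatXvec)."""
    s = conv_full(x, h[::-1])
    u = s[len(h) - 1:len(x)]      # valid part: u = F x
    return conv_full(u, h)        # y = F' u

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ring_cgm(h, alpha, f, tol=1e-6, maxit=10**6):
    """Conjugate gradient for (F'F + alpha*I) x = f (Octave ringCGM)."""
    r = f[:]                      # residual for x0 = 0
    w = [-v for v in r]
    z = [zi + alpha * wi for zi, wi in zip(ring_mat_x_vec(h, w), w)]
    a = dot(r, w) / dot(w, z)
    x = [a * wi for wi in w]
    for _ in range(maxit):
        r = [ri - a * zi for ri, zi in zip(r, z)]
        if math.sqrt(dot(r, r)) < tol:
            break
        beta = dot(r, z) / dot(w, z)
        w = [-ri + beta * wi for ri, wi in zip(r, w)]
        z = [zi + alpha * wi for zi, wi in zip(ring_mat_x_vec(h, w), w)]
        a = dot(r, w) / dot(w, z)
        x = [xi + a * wi for xi, wi in zip(x, w)]
    return x

def _std(v):
    """Sample standard deviation, as Octave's std()."""
    mu = sum(v) / len(v)
    return math.sqrt(sum((x - mu) ** 2 for x in v) / (len(v) - 1))

def ring(sino, h=(-1.0, 2.0, -1.0)):
    """Column-wise ring suppression (Octave ring()); sino is a list of rows."""
    R, N = len(sino), len(sino[0])
    cols = [[sino[j][i] for j in range(R)] for i in range(N)]
    alpha = 2.0 * _std([_std(c) for c in cols])
    pp = [sum(row) / N for row in sino]          # average projection
    f = [-v for v in ring_mat_x_vec(list(h), pp)]
    n = ring_cgm(list(h), alpha, f)
    return [[v + n[j] for v in row] for j, row in enumerate(sino)]
```

As in the Octave version, h = [1, -1] recovers the original Titarenko's algorithm, and higher-order kernels from Table 1 are passed the same way.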

Footnotes

1Indeed, for all [0 \not = {\bf u} \in {\bb R}^R] we have [{\bf u}^T{\bf A} {\bf u}] = [\|{\bf F} {\bf u}\|^2 + \lambda \|{\bf u}\|^2] > 0.

2This is always lower than or equal to the arithmetic mean.

3https://www.optiquepeter.com/en/applications-synchrotron.php .

Acknowledgements

The authors gratefully acknowledge the comments by an anonymous referee. The authors are also grateful to Hugo H. Slepicka for helping with the integration of the ring artefact code into the Python programming environment at the LNLS imaging beamline IMX.

References

First citationAndersson, F. (2005). SIAM J. Appl. Math. 65, 818–837.  Web of Science CrossRef CAS Google Scholar
First citationBazaraa, M. S., Sherati, H. D. & Shetty, C. M. (2007). Nonlinear Programming, Theory and Algorithms, 3rd ed. New York: Wiley.  Google Scholar
First citationBengt, F. (1988). Math. Comput. 51, 699–706.  Google Scholar
First citationCandes, E. J., Romberg, J. & Tao, T. (2006). IEEE Trans. Inf. Theory, 52, 489–509.  Web of Science CrossRef Google Scholar
First citationChambolle, A. (2004). J. Math. Imaging Vis. 20, 89–97.  CrossRef Google Scholar
First citationDeans, S. (1983). The Radon Transform and Some of Its Applications Deans. New York: John Wiley and Sons.  Google Scholar
First citationDe Pierro, A. R. (1995). IEEE Trans. Med. Imaging, 14, 132–137.  CrossRef PubMed CAS Web of Science Google Scholar
First citationDouissard, P. A., Cecilia, A., Rochet, X., Chapel, X., Martin, T., van de Kamp, T., Helfen, L., Baumbach, T., Luquot, L., Xiao, X., Meinhardt, J. & Rack, A. (2012). J. Instrum. 7, P09016.  Google Scholar
First citationEaton, J. W., Bateman, D. & Hauberg, S. (2009). GNU Octave, a high-level interactive language for numerical computations, Version 3.0.1. CreateSpace Independent Publishing Platform.  Google Scholar
First citationGabor, G. T. (2009). Fundamentals of Computerized Tomography: Image Reconstruction from Projections, Advances in Pattern Recognition. Berlin: Springer.  Google Scholar
Gimenez, E. N., Ballabriga, R., Campbell, M., Horswell, I. & Llopart, X. (2011). IEEE Trans. Nucl. Sci. 58, 323–332.
Helgason, S. (1980). The Radon Transform. Boston: Birkhäuser.
Helou, E. S., Censor, Y., Chen, T. B., Chern, I. L., De Pierro, A. R., Jiang, M. & Lu, H. H. S. (2014). Inverse Probl. 30, 055003.
Horn, R. A. & Johnson, C. R. (1985). Matrix Analysis. Cambridge University Press.
Huckle, T. (1994). BIT Numer. Math. 34, 99–112.
Kraft, P., Bergamaschi, A., Broennimann, C., Dinapoli, R., Eikenberry, E. F., Graafsma, H., Henrich, B., Johnson, I., Kobas, M., Mozzanica, A., Schlepütz, C. M. & Schmitt, B. (2009). IEEE Trans. Nucl. Sci. 56, 758–764.
Ludwig, D. (1966). Commun. Pure Appl. Math. 19, 49–81.
Mahesh, K. (1998). J. Comput. Phys. 145, 332–358.
Mirone, A., Gouillart, E., Brun, E., Tafforeau, P. & Kieffer, J. (2013). PyHST2: a hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities, https://arxiv.org/abs/1306.1392
Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567–8591.
Natterer, F. & Wübbeling, F. (2001). Mathematical Methods in Image Reconstruction. Philadelphia: Society for Industrial and Applied Mathematics.
Pangaud, P., Basolo, S., Boudet, N., Berar, J.-F., Chantepie, B., Clemens, J.-C., Delpierre, P., Dinkespiler, B., Medjoubi, K., Hustache, S., Menouni, M. & Morel, C. (2008). Nucl. Instrum. Methods Phys. Res. A, 591, 159–162.
Prince, J. L. & Willsky, A. S. (1990). Opt. Eng. 29, 535–544.
Prince, J. L. & Willsky, A. S. (2002). IEEE Trans. Image Process. 2, 401–416.
Rinkel, J., Brambilla, A., Dinten, J.-M. & Mougel, F. (2011). Patent WO2011064381.
Rudin, L. I., Osher, S. & Fatemi, E. (1992). Physica D, 60, 259–268.
Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, 247–253.
Thomas, L. H. (1949). Elliptic Problems in Linear Differential Equations over a Network, Watson Scientific Computing Laboratory Report, Columbia University, New York, USA.
Tikhonov, A. N. (1963a). Sov. Math. Dokl. 4, 1624–1627.
Tikhonov, A. N. (1963b). Sov. Math. Dokl. 5, 1035–1038.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2009). Appl. Phys. Lett. 95, 071113.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010c). J. Synchrotron Rad. 17, 540–549.
Titarenko, S., Titarenko, V., Kyrieleis, A., Withers, P. & De Carlo, F. (2011). J. Synchrotron Rad. 18, 427–435.
Titarenko, S., Withers, P. J. & Yagola, A. (2010a). Appl. Math. Lett. 23, 1489–1495.
Titarenko, S. & Yagola, A. (2009). Mosc. Univ. Phys. Bull. 65, 65–67.
Titarenko, V., Bradley, R., Martin, C., Withers, P. J. & Titarenko, S. (2010b). Proc. SPIE, 7804, 78040Z.
Twomey, S. (1963). J. ACM, 10, 97–101.
Vykydal, Z., Jakubek, J. & Pospisil, S. (2006). Nucl. Instrum. Methods Phys. Res. A, 563, 112–115.
Yagola, A. G., Leonov, A. S. & Titarenko, V. N. (2002). Inverse Probl. Eng. 10, 117–129.

© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.
