Analytical formula for two-dimensional ring artefact suppression
Valeriy Titarenko*
Henry Moseley X-ray Imaging Facility, Photon Science Institute, Alan Turing Building, The University of Manchester, Manchester M13 9PL, UK
*Correspondence e-mail: valeriy.titarenko@manchester.ac.uk
Ring artefacts are the most disturbing artefacts when reconstructed volumes are segmented. A lot of effort has already been put into better X-ray optics, scintillators and detectors in order to minimize the appearance of these artefacts. However, additional processing is often required after standard flat-field correction. Several methods exist to suppress such artefacts. One group of methods is based on minimization of the Tikhonov functional, for which an analytical formula for processing a single sinogram was previously developed. In this paper a similar approach is used and a formula for processing two-dimensional projections is found. Thus suppression of ring artefacts is organized as a two-dimensional convolution of `averaged' projections with a given filter. Several approaches are discussed in order to find the elements of the filter in a fast and accurate way. Examples of experimental datasets processed by the proposed method are considered.
Keywords: X-ray tomography; ring artefacts; ill-posed problem; convolution; filter; Tikhonov functional.
1. Introduction
A standard data acquisition procedure in X-ray tomography for synchrotron- or laboratory-based machines consists of recording three groups of images: dark-field images (when an X-ray beam is switched off), white/flat-field images (the beam is on but there is no sample in the beam) and projections (the beam is on and passes through a rotating sample). Several main scanning geometries are in use these days: parallel for synchrotron radiation facilities and nano-tomography laboratory machines, cone-beam for most medical and industrial applications, and helical/spiral cone-beam with moving and static gantries for medical and security applications (Natterer & Wübbeling, 2001; Kak & Slaney, 2001; Kalender, 2006; Warnett et al., 2016). To perform a reconstruction of a specimen one needs to find the optical paths by the formula

−ln [ (I − Idark) / (Iflat − Idark) ], (1)
where Iflat, Idark and I are a flat-field image, a dark-field image and a projection, respectively. Note that the flat- and dark-field images should theoretically be taken at the same moment as the projection is recorded. Obviously, this is not possible in practical applications, so dark- and flat-field images are usually taken before and after all projections are acquired.
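As a minimal illustration of (1) (not part of the original paper), the following Python sketch computes optical-path images with NumPy; the function name and the small eps guard against division by zero and non-positive logarithm arguments are assumptions of this illustration.

```python
import numpy as np

def optical_paths(proj, flat, dark, eps=1e-6):
    """Flat-field correction (1): -ln[(I - Idark) / (Iflat - Idark)].

    `proj`, `flat` and `dark` are 2D arrays of the same shape; `eps` keeps the
    ratio positive so that the logarithm is always defined.
    """
    num = proj.astype(np.float64) - dark
    den = flat.astype(np.float64) - dark
    ratio = np.clip(num, eps, None) / np.clip(den, eps, None)
    return -np.log(ratio)
```

In practice the dark- and flat-field images passed to such a routine are themselves averages of several recorded frames.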
For many beamlines the X-ray beam may move slightly during an experiment. For instance, beam oscillations and linear movement in the vertical direction at beamline 05B1-1 of the Canadian Light Source are discussed by Hinebaugh et al. (2012). For tomography beamlines these fluctuations often lead to the flat-field image changing during data acquisition. This effect can be explained in the following way. X-rays travel in a high-vacuum tube protected by a beryllium window just before entering the optical hutch where experiments are performed. Any small variation of the beam position and thermal instabilities at a monochromator force the intensity profile of the beam to move and stretch/shrink in the vertical direction. As a result, the projection of the Be window onto the recording device [usually a CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) camera] behaves in a similar way. On the other hand, the scintillator, optical lenses and detector also have some dust/dirt and defects on their surfaces. As a result, the background image has (at least) two components/profiles: one is stable and the other moves/stretches slightly during data acquisition. Therefore the standard flat-field correction procedure defined by (1) leaves some artefacts on projections, which in turn lead to so-called ring artefacts, i.e. concentric circles or arcs, on reconstructed slices. Some attempts to compensate for intensity variations due to beam/monochromator instabilities were made by Titarenko et al. (2010a).
Unfortunately, defects on optical components behave differently. If there is an X-ray-absorbing feature on a scintillator, then it usually affects only those areas of recorded images which are projections of the feature onto the detector. Thus, after the standard flat-field correction has been performed, the corresponding ring artefacts seen after reconstruction are regular, i.e. the circles/arcs have the same intensity along the whole curve. However, if a scintillator has scratches or dust on its surface (see Fig. 1), then additional visible light comes from these defects, and its intensity depends on the total X-ray flux incident on a wider area around them. One can change the position of the slits in order to narrow the X-ray beam and still see these bright defects in areas receiving no X-rays (see Fig. 2). As a result, for dense materials or anisotropically attenuating specimens the intensity of the ring artefacts depends on the angle of rotation. Some examples can be found in the paper by Titarenko et al. (2011). Of course, if defects on a scintillator are too strong and/or the exposure time used to record an image is too long, then some pixels of the detector may become saturated, so no information is available from them.
Several methods for ring artefact suppression have already been developed. In principle they can be placed into three main groups. The first group of methods uses a `hardware' solution, usually by moving the detector during data acquisition. Thus, if there is a defect on the scintillator or Be window, it is either randomly moved to different pixels or blurred (Davis & Elliott, 1997; Davis et al., 2013). In this case no high-intensity rings are seen on a reconstructed image; however, if a defect is strong, one should expect to see a `wave' rather than a ring on slices. The second group is related to preprocessing sinograms before reconstruction is performed by applying various linear or non-linear filters. Methods from the third group process data after reconstruction and usually remap a slice into polar coordinates, so rings are transformed into stripes. Methods from these last two groups are basically similar. A detailed description of some of these methods and their comparison for various datasets can be found in the literature (Sijbers & Postnov, 2004; Prell et al., 2009; Münch et al., 2009; Titarenko et al., 2010b,c; Sadi et al., 2010; Rashid et al., 2012; Brun et al., 2013; Wei et al., 2013; Altunbas et al., 2014; Miqueles et al., 2014; Paleo & Mirone, 2015; Wolkowski et al., 2015; Baek et al., 2015).
In this paper a new method for preprocessing sinograms/projections is presented. This method is a generalization of a method from Titarenko et al. (2010b), which has already been modified to cope with irregular ring artefacts (Titarenko et al., 2010c, 2011) and is part of the reconstruction software used at beamlines I12 and I13 of Diamond Light Source (Drakopoulos et al., 2015; Pešić et al., 2013). The original method was designed to process each sinogram independently, i.e. to be applied to one-dimensional (1D) projections, and assumes that `averaged' projections should not have sharp peaks. The same idea of smoothness is used in the proposed algorithm but applied to two-dimensional (2D) projections. Modern reconstruction algorithms usually exploit the parallelization provided by graphics processing units (GPUs) and tend to process several slices at the same time. Thus it is more effective to process chunks of projections rather than individual rows; in the case of cone-beam geometry this approach is already used. Therefore, processing 2D projections may speed up other reconstruction steps by avoiding unnecessary movement or remapping of data. At the same time, the additional prior information from neighbouring rows of projections should lead to better suppression of artefacts. Note that the solution provided by the original method of Titarenko et al. (2010b) is a vector obtained by multiplying a given matrix by an input vector. In the proposed method the solution (a matrix) is a convolution of the input matrix with a 2D filter (matrix), so it can be implemented more efficiently if a fast Fourier transform is used. The proposed 2D version of the method is a generalization of the 1D method published by the author (Titarenko, 2016). In the paper, performance advantages of the filtering approach versus the standard approach based on the inverse matrix from Titarenko et al. (2010b) are shown.
In the next section a brief mathematical description of steps to obtain the main formula is provided. Then examples of implementation of the proposed filter are presented followed by discussion. All proofs and strict mathematical formulas can be found in Appendices A, B and C.
2. Brief mathematical formulation
2.1. Tikhonov functional
Let there be an m×n matrix A, a known m-vector u and an unknown n-vector z. We want to solve the following system of linear equations,

Az = u. (2)
Instead of the exact matrix A and right-hand side u we are given approximations A_h and u_δ, where h and δ characterize the errors: for instance, ||A_h − A|| ≤ h and ||u_δ − u|| ≤ δ. An approximate solution is then defined as a solution of A_h z = u_δ. We aim to find the exact solution of (2) knowing A_h, u_δ, h and δ. Thus we have a classical inverse problem. The simplest way to estimate the solution z is to take this approximate solution. One may hope that, if the errors h and δ tend to zero, then the corresponding approximate solution becomes closer to z. Unfortunately, for most inverse problems this is not the case, as they are usually so-called ill-posed problems. This means that the exact or approximate solutions may not
(i) exist for the pairs (A, u) and/or (A_h, u_δ),
(ii) be unique,
(iii) be stable with respect to small variations.
To overcome these issues, Tikhonov proposed an approach based on minimization of the following smoothing functional,

M^α[z] = ||A_h z − u_δ||² + α ||z||² (3)

for a given parameter α > 0. Now M^α[z] is often called the Tikhonov functional. It was shown that the corresponding minimizers of (3) tend to the exact solution z when the errors h and δ approach zero. Several methods of how to choose α as a function of h and δ have been proposed. More details on how to solve ill-posed problems can be found in the literature (Tikhonov & Arsenin, 1977; Tikhonov et al., 1995; Engl et al., 1996).
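As a brief illustration (not taken from the paper), for the simplest form of (3) the minimizer solves the regularized normal equations (A_h^T A_h + αI) z = A_h^T u_δ; the sketch below does this for a small dense problem, and the function name and the dense NumPy solver are choices of this illustration only.

```python
import numpy as np

def tikhonov_minimizer(A_h, u_delta, alpha):
    """Minimize ||A_h z - u_delta||^2 + alpha * ||z||^2, cf. equation (3).

    The minimizer is the solution of the regularized normal equations
    (A_h^T A_h + alpha * I) z = A_h^T u_delta.
    """
    n = A_h.shape[1]
    return np.linalg.solve(A_h.T @ A_h + alpha * np.eye(n), A_h.T @ u_delta)
```

Larger α makes the system better conditioned but biases the solution towards zero, which is why α has to be matched to the error levels h and δ.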
2.2. Optimization problem
In order to better understand how the theory of inverse and ill-posed problems can be applied to correct ring artefacts, we consider the following continuous problem. Suppose there is a 2D rectangular sensor, so we are able to record the intensity of X-rays incident onto this sensor as a function of the horizontal and vertical coordinates ξ and η. As a sample is rotated around its vertical axis we may also record intensities as a function of the angle θ (or time t). For a parallel beam we usually assume that θ runs from 0 to 180° (or π radians). After the standard flat-field correction procedure (1) we obtain a continuous function of (ξ, η, θ). If there were no errors in our data acquisition system, the flat-field correction procedure would provide us with the exact function. In the real world the difference between the recorded and the exact functions may not be zero. Of course, we try to organize data acquisition in such a way that possible imperfections of our system do not affect the quality of the data. Therefore we may always assume that this error function is small with respect to the exact one. So we want the following integral
to tend to a small value (ideally zero). In the simplest case we may think that the error does not depend on the angle of rotation, i.e. we have a 2D function and for a given recorded value we should find an unknown value such that
or equivalently
are small. At the same time we have additional, so-called a priori, information that the unknown function is smooth, at least along the ξ and η coordinates (Titarenko et al., 2010d). Thus we aim to find such a function that
has a minimal value.
2.3. Original 1D problem
In the original method (see Titarenko et al., 2010b) a sinogram is given as an (m×n) array, where m and n are the numbers of rows (projections) and columns (pixels in one row), respectively. It is assumed that the same unknown value q_j should be added to each element of the jth column in order to obtain the exact sinogram. For a regularization parameter γ, the Tikhonov functional can be written as
Note that the Tikhonov functional can be written in several equivalent forms; for each of them different regularization parameters can be used. In order to separate the results obtained for the 1D and 2D cases we use γ for the 1D and α for the 2D problem. If we average the sinogram along its columns, i.e. take p_j to be the mean of the jth column, then the above problem leads to minimization of
and an analytical formula to find q_j is presented there. Titarenko et al. (2011) used weights w_k to form weighted column averages p_j, so that ring artefacts caused by non-uniform stripes in sinograms can also be corrected.
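The closed-form solution of Titarenko et al. (2010b) is not reproduced here; the sketch below only illustrates the structure of such a 1D correction under an assumed form of the functional, in which the offsets q are penalized by their norm and the corrected averaged row p + q by its first differences. The function names, the weights handling and this concrete functional are assumptions of the illustration, not the published formula.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def column_average(sinogram, weights=None):
    """Average a sinogram (projections x pixels) along its columns,
    optionally with per-projection weights w_k (cf. Titarenko et al., 2011)."""
    if weights is None:
        return sinogram.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * sinogram).sum(axis=0) / w.sum()

def ring_offsets_1d(p, gamma):
    """Illustrative Tikhonov-type 1D correction (assumed functional):
    minimize ||q||^2 + gamma * ||D(p + q)||^2, D being the first-difference
    operator, which leads to (I + gamma D^T D) q = -gamma D^T D p."""
    n = p.size
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    DtD = (D.T @ D).tocsc()
    q = spla.spsolve(sp.identity(n, format="csc") + gamma * DtD,
                     -gamma * (DtD @ p))
    return q  # add q_j to every element of the j-th column of the sinogram
```

The resulting system is tridiagonal, so even this generic version costs only O(n) operations per sinogram; the analytical formula of the original method avoids the solve altogether.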
2.4. Mathematical formulation of 2D problem
In the case of 2D projections we can write the Tikhonov functional in a similar way. Suppose we are given an `averaged' projection obtained by adding up projections with some weights. Let this `averaged' projection be a rectangular matrix P with elements p_ij, i = 1, …, n, j = 1, …, m. This matrix contains some noisy data and we want to find an unknown exact matrix Z with elements z_ij such that the values z_ij are close to p_ij. This approach is similar to the 1D case if we use q_ij = p_ij − z_ij; however, the choice to use z_ij instead of q_ij is more convenient for the 2D case due to the simpler description of the matrices defined below. Our goal is to find a matrix Z whose values z_ij are close to the values p_ij of the given averaged projection, using some additional information about Z. We suppose that neighbouring values of Z are similar; this is a natural assumption (see Titarenko et al., 2010d). Then, for a regularization parameter α > 0, the Tikhonov functional can be written as

M^α[Z] = Σ_{i=1..n} Σ_{j=1..m} (z_ij − p_ij)² + α [ Σ_{i=1..n−1} Σ_{j=1..m} (z_{i+1,j} − z_ij)² + Σ_{i=1..n} Σ_{j=1..m−1} (z_{i,j+1} − z_ij)² ]. (10)
To distinguish the regularization parameters used in the original and proposed methods we use different symbols, γ and α; comparing formulas (8) and (10) one can see how they correspond to each other. The necessary condition to attain the minimum is that all first derivatives of M^α[Z] equal zero; this leads to a system of linear equations whose matrix A is defined below.
For convenience, instead of the (n×m) matrices P and Z we use nm-vectors p and z obtained by stitching rows. If we define i = (k − 1)m + l, k = 1, …, n, l = 1, …, m, and N = nm, then z_i = z_kl, p_i = p_kl and the minimum condition takes the form

Az = p. (12)
The elements of matrix A can be found in Appendix A; for instance, for n = 3 and m = 4 one obtains a (12×12) matrix. To find the solution of (12) we want to invert matrix A, so z = A^{-1} p. These days images from X-ray tomography usually have a size of about 2000×2000 pixels, so N = 4 000 000 and it is very difficult to invert the (N×N) matrix A even though it is sparse. It is shown in Appendix A that for n = m > 3 the matrix has (at least) two eigenvalues, λ1 = 1 and λ2 = 1 + 4α. Therefore the condition number is at least 1 + 4α, so for large values of α the condition number is large and numerical matrix inversion becomes unstable with respect to small errors in the data. Storing the inverse matrix may also be an issue due to its size. Therefore we try the following approximate approach. We find the analytical solution only for the central element of the matrix Z; it is the scalar product of the central row of A^{-1} (an nm-vector) with the vector p. This row, reshaped as an (n×m) matrix G, may be considered as a 2D filter, which can be used to find matrices Z from given images P when their sizes are greater than the size of G. As shown in the following sections, the elements of filter G decay very fast (by several orders of magnitude) as we go from the central element towards the outer ones. Therefore we may set them to zero beyond some distance, or alternatively we may use a smaller filter G. For many practical datasets, (64×64) filters work very well. So for each element of matrix Z lying at least 32 elements away from the boundary we may select the corresponding (64×64) part of matrix P and convolve it with the (64×64) filter G. Alternatively, we may keep G and P at the same power-of-two size, so that a fast Fourier transform can be used.
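A minimal sketch of this convolution step is given below, assuming the filter G (e.g. a 64×64 array) has already been computed. The `averaged' projection is smoothed by convolution with G and the difference P − Z, i.e. the ring-artefact estimate q_ij = p_ij − z_ij of §2.4, is subtracted from every projection. The function name, the stack layout (angle, row, column) and the use of scipy.signal.fftconvolve with mode='same' are choices of this illustration, and its boundary handling is a simplification of the patch-based scheme described above.

```python
import numpy as np
from scipy.signal import fftconvolve

def suppress_rings(projections, G, weights=None):
    """Subtract the ring-artefact estimate from a stack of flat-field-corrected
    projections (angle, row, column), given a precomputed 2D filter G."""
    if weights is None:
        P = projections.mean(axis=0)                  # `averaged' projection
    else:
        w = np.asarray(weights, dtype=float)
        P = np.tensordot(w, projections, axes=(0, 0)) / w.sum()
    Z = fftconvolve(P, G, mode="same")                # smoothed version of P
    q = P - Z                                         # ring-artefact estimate
    return projections - q[None, :, :]
```

Because the averaging is linear, the same filtering can alternatively be applied to each optical-path image individually and the results accumulated, as discussed in §5.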
We intend to find the values of the central element of the matrix Z; therefore we suppose n and m to be odd numbers. Let us define = 2 + 1/α, so = , and define (m×m) matrix X as
and denote the (m×m) identity matrix by I. Then the matrix A can be written as the following (n×n) block matrix,
Let us introduce a function Wn(x), where the input parameter is a scalar x and the output is the following (n×n) matrix,
Note that Wm(x) is an (m×m) matrix. Then matrix A can be written as
where the argument is now a matrix, so that W_n is understood as a matrix function.
where Wn -1(x) is the inverse of matrix Wn(x) (see Higham, 2008).
Symmetric matrix Wn(x) can be written in the form Y D Y^T, where D is a diagonal matrix and Y is an orthogonal matrix, i.e. Y^{-1} = Y^T, where T stands for the transpose (Higham, 2008). Therefore the inverse matrix W^{-1}(x) can be written as Y D^{-1} Y^T. In the Appendices we find explicit formulas for Y, D and consequently for W^{-1}(x) and A^{-1}. As we only need the value of the central element of matrix Z, all matrices (Z, P and the filter G) can be rewritten with respect to their central elements, so that the indices of the (n×m) matrix Z run over i = −(n−1)/2, …, (n−1)/2 and j = −(m−1)/2, …, (m−1)/2. Note that n and m are odd numbers. If these numbers are large, then the elements of the filter (matrix) can be written as
[see formula (59) in Appendix B]. If C_n^k = n!/[k!(n−k)!] is a binomial coefficient, then the elements of the filter can be written as the infinite sum
where the constant ρ < 1/4 is defined in Appendix B. The above formula [or formula (61)] is derived in Appendix B. In Appendix C it is shown how to find the infinite sum numerically with high precision. Some useful properties of the filter matrix are derived in Appendix B, and examples of the filter matrices are shown in Fig. 3. The matrix is symmetric with respect to its central horizontal, vertical and diagonal lines. All its elements are positive and, within each row, they monotonically decrease as the column index moves away from the centre (see Fig. 4). The sum of all elements of the matrix (of infinite width/height) is 1.
The elements of the filter appear to be radially symmetric; however, there is a small difference between the true values and those estimated from the elements of the central row. If (i, j) is the index of an element with i, j > 0, then its distance from the central element is r = (i² + j²)^{1/2}, so we may find an index k such that k ≤ r ≤ k + 1 and use the four nearest elements of the central row to approximate the value by cubic interpolation. Relative errors of such interpolation with respect to the exact elements are shown in Fig. 5.
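A sketch of this approximation is given below: a cubic spline through the central row stands in for the four-point cubic interpolation described above. The function name, the spline (rather than local four-point) interpolation and the clamping of radii beyond the known row are assumptions of this illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def filter_from_central_row(central_row, half_size):
    """Build an approximately radially symmetric 2D filter from the central
    row of the filter matrix; element k of `central_row` is the value at
    distance k from the centre."""
    k = np.arange(central_row.size)
    spline = CubicSpline(k, central_row)
    i, j = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    r = np.hypot(i, j)                     # distance of (i, j) from the centre
    return spline(np.minimum(r, k[-1]))    # clamp radii beyond the known row
```

The small anisotropy of the true filter means that such a radial approximation carries a relative error of the size shown in Fig. 5.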
If α = 0 then filtering should not change the input matrix, so all elements of the filter are zero except the central element, which is 1. The central element decreases when α increases; each of the other elements is zero for α = 0, then increases with α, reaches one maximum and decreases, as in Fig. 6.
3. Example problems
The method proposed in this paper is applied to several datasets acquired in 2008/2009 at station 16.3 of the Synchrotron Radiation Source (SRS), Daresbury Laboratory (a second-generation synchrotron source, now decommissioned), and at beamline I12 of Diamond Light Source (DLS; a third-generation source). The choice of these datasets is dictated by the imperfections of the experimental setups in those days, as they were among the first test datasets. For instance, in some cases the rotation axis was not perfectly vertical; many high-intensity peaks were also present on projections, such as those shown in Fig. 1. A lot of work has since been done at I12 to obtain images of significantly better quality (Drakopoulos et al., 2015). However, these datasets are still useful as similar issues can be seen at other beamlines worldwide. For all datasets a PCO 4000 14-bit CCD camera, a monochromatic beam and a CdWO4 scintillator were used; reconstruction was performed using software developed by the author for beamline I12.
3.1. Graphite
A 5.0 mm × 2.5 mm piece of graphite with some metal particles was scanned at the SRS (see Figs. 7 and 8). This specimen produced almost ideal regular ring artefacts, even in the presence of the dense metal particles. The graphite was slightly tilted and as a result the outer regions of the sample are slightly blurred while the central part is sharp. The ring suppression method is applied with regularization parameter α = 1000.
3.2. TRISO particles
A cylindrical container with 1 mm tristructural-isotropic (TRISO) test particles was also scanned at SRS. A similar specimen was used by Titarenko et al. (2009) in order to show how a priori information (a known structure of these particles) can be used to remove rings. However, this prior information was applied in a manual way when corresponding areas of the same materials were selected. In this paper an automatic procedure is used.
Synchrotron beamlines produce almost parallel X-rays, so reconstruction of a sample can be parallelized, as all horizontal slices can be found independently of one another. Therefore many ring artefact suppression algorithms can be used, as they usually process 2D slices. However, if one has a cone-beam geometry or the sample is tilted, then several neighbouring sinograms are needed to find one slice. In Fig. 9 the specimen is tilted (0.1° along the beam and 0.8° perpendicular to the beam). Therefore it is more efficient to process projections first and then reconstruct slices with a filtered backprojection algorithm modified for tilted samples.
3.3. Mouse skull
A mouse skull was scanned at beamline I12, DLS. Due to scattering, a lot of high-intensity photons reached the detector. Identification of the affected pixels is easier to perform on neighbouring projections, as three-dimensional neighbourhoods of pixels can be used rather than the 2D neighbourhoods available in individual sinograms. A simple correction, e.g. based on linear interpolation from unaffected pixels, can then be used (see Fig. 10). However, due to scratches on the scintillator (similar to those shown in Fig. 2), ring artefact suppression still leaves some rings present after correction.
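The exact identification and interpolation procedure used for Fig. 10 is not specified here in detail; the sketch below shows one simple possibility, flagging pixels that deviate strongly from a 3×3×3 median over angle, row and column and replacing them by that median value. The function name, the threshold and the median-based replacement (instead of linear interpolation from unaffected pixels) are assumptions of this illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_spikes(projections, threshold=0.1):
    """Detect and replace isolated high-intensity pixels (e.g. scattered
    photons) in a stack of optical-path projections (angle, row, column)."""
    med = median_filter(projections, size=(3, 3, 3))   # 3D neighbourhood median
    spikes = (projections - med) > threshold           # strong positive outliers
    cleaned = projections.copy()
    cleaned[spikes] = med[spikes]
    return cleaned
```

Working on the full 3D stack rather than on individual sinograms makes such outliers easier to separate from genuine sample features, as noted above.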
4. Practical implementation
The main formula (19) is a series, i.e. an infinite sum. Its efficient numerical implementation is discussed in Appendix C. A practical approach is to find the central row of the filter matrix by formula (64). Then the row above the central one can be found from
for k > 0, and elements of other rows
for k ≥ j. Suppose we want to find the filter matrix of a given size N. Then we find its values as shown in Fig. 11, i.e.
(i) We use (64) to find the central row.
(ii) We use (20) to find the row above the central one.
(iii) For the remaining rows with k ≥ j we apply (21).
(iv) The remaining elements follow from the symmetries of the filter (horizontal, vertical and diagonal flips).
5. Discussion
The 2D method presented in this paper is in some sense equivalent to the original 1D method presented in §2.3 or by Titarenko et al. (2010b). Thus both should provide similar results in the case of regular ring artefacts, when the rings' strength does not depend on the angle of rotation. For the datasets used as examples the author was able to find values of the regularization parameter γ for the original 1D method which give images almost identical to those of the proposed method. Of course, in the case of irregular rings the result of suppression may depend on the sample. The 1D method designed for irregular rings (Titarenko et al., 2011) can also be combined with the 2D method, as the original minimization problem leads to a set of independent problems, each of which can be solved with the new method.
The main advantage of the 2D method is that it can be considered as an image filtering procedure. This filtering operation is applied to an `averaged' optical path image and can be used after all projections have been collected. However, this `averaged' image is just a weighted sum of individual optical path images, so the filtering operation can equally be applied to individual images just after they have been acquired. Taking into account that standard (non-iterative) reconstruction algorithms apply a 1D filter to each projection, these 1D and 2D filters can be combined into one 2D filter, i.e. ring suppression can be embedded into a standard reconstruction procedure with a modified filter.
Implementation of 2D filtering can be performed with fast Fourier transform operations, and high-performance libraries exist for both CPUs and GPUs.
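As a minimal sketch of such an FFT-based implementation (assuming, as suggested in §2.4, that the filter G and the `averaged' projection P share the same power-of-two size), the 2D filtering reduces to a pointwise product of transforms. The function name and the use of scipy.fft are assumptions of this illustration.

```python
import numpy as np
from scipy import fft

def apply_filter_fft(P, G):
    """Apply a centred 2D filter G to an image P of the same (power-of-two)
    shape by pointwise multiplication in the Fourier domain, i.e. circular
    convolution; ifftshift moves the central filter element to the origin."""
    assert P.shape == G.shape
    FP = fft.rfft2(P)
    FG = fft.rfft2(np.fft.ifftshift(G))
    return fft.irfft2(FP * FG, s=P.shape)
```

The transform of G can be cached and reused for every new averaged projection, and the same pattern maps directly onto GPU FFT libraries; combining this filter with the 1D reconstruction filter mentioned above in principle adds only one more pointwise multiplication in the same Fourier domain.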
APPENDIX A
A1. Two eigenvalues for matrix A
The matrix A is symmetric and all its non-zero elements are on five diagonals: Aii, Ai, i-1, Ai, i+1, Ai, i-m, Ai, i+m. Consider each row of matrix A.
(1) Three non-zero elements (the four corner rows, e.g. i = 1 and i = nm): A_ii = 1 + 2α and the two off-diagonal elements equal −α.
(2) Four non-zero elements (the remaining boundary rows): A_ii = 1 + 3α and the three off-diagonal elements equal −α.
(3) Five non-zero elements (the rest): A_{i,i−m} = A_{i,i−1} = A_{i,i+1} = A_{i,i+m} = −α and A_ii = 1 + 4α.
It is clear that the sum of all elements of matrix A in each row is 1. So if v is the nm-vector with all elements equal to 1, then (A − I)v = 0 and thus λ = 1 is an eigenvalue of A.
Now we consider the case n = m ≥ 3 and check that λ = 1 + 4α is another eigenvalue; the non-zero elements of one of the corresponding eigenvectors are equal to ±1 and are concentrated near one anti-diagonal. For convenience this vector can be represented as an (n×n) matrix by copying each n consecutive elements into a row of the matrix; the result is an anti-tridiagonal matrix, e.g. for n = m = 5,
Denote Q = α^{-1}[A − (1 + 4α)I], then
(1) Three non-zero elements:
(a) i = 1, Qii = -2, Qi, i+1 = -1, Qi, i+n = -1 and vi = 0, vi+1 = 0, vi +n = 0;
(b) i = n, Qi, i-1 = -1, Qii = -2, Qi, i+n = -1 and vi-1 = -1, vi = 1, vi+n = -1;
(c) i = n (n - 1) + 1, Qi, i-n = -1, Qii = -2, Qi, i+1 = -1 and vi-n = -(-1)n, vi = (-1)n, vi+1 = -(-1)n;
(d) i = n n, Qi, i-n = -1, Qi, i-1 = -1, Qii = -2 and vi-n = 0, vi-1 = 0, vi = 0.
(2) Four non-zero elements:
(a) , Qi, i-1 = Qii = Qi, i+1 = Qi, i+n = -1; vi-1 = vi = 0, vi+1 = -1, vi+n = 1 (for i = n-2), vi-1 = 0, vi = -1, vi+1 = 1, vi+n = 0 (for i = n-1) and vi-1 = vi = vi+1 = vi+n = 0 (otherwise);
(b) , Qi, i-n = Qii = Qi, i+1 = Qi, i+n = -1; vi-n = vi = 0, vi+1 = (-1)n, vi+n = -(-1)n [for i = n(n-3)+1], vi-n = 0, vi = -(-1)n, vi+1 = 0, vi+n = (-1)n [for i = n(n-2)+1], vi-n = vi = vi+1 = vi+n = 0 (otherwise);
(c) , Qi, i-n = Qi, i-1 = Qii = Qi, i+n = -1; vi-n = 1, vi-1 = 0, vi = -1, vi+n = 0 (for i = 2n), vi-n = -1, vi-1 = 1, vi = vi+n = 0 (for i = 3n), vi-n = vi-1 = vi = vi+n = 0 (otherwise);
(d) , Qi, i-n = -1, Qi, i-1 = Qii = Qi, i+1 = -1; vi-n = 0, vi-1 = (-1)n, vi = -(-1)n, vi+1 = 0 [for i = n (n-1)+2], vi-n = (-1)n, vi-1 = -(-1)n, vi = vi+1 = 0 [for i = n(n-1) +3], vi-n = vi-1 = vi = vi+1 = 0 (otherwise);
(e) the rest: Qi, i-n = Qi, i-1 = Qi, i+1 = Qi, i+n = -1; vi-n = (-1) j, vi-1 = -(-1) j, vi+1 = (-1) j, vi+n = -(-1) j (for i = jn + n - j), vi-n = vi-1 = 0, vi+1 = -(-1) j, vi+n = (-1) j (for i = j n - j -1), vi-n = (-1) j, vi-1 = -(-1) j, vi+1 = vi+n = 0 (for i = j n +2n - j +1), vi-n = vi-1 = vi+1 = vi+n = 0 (otherwise).
Now it is easy to check that Qv = 0, i.e. Av = (1 + 4α)v.
APPENDIX B
B1. Formula for 2D filter
B1.1. Trigonometric formulas
From the formulas for the sum of sines/cosines we find
The following formula is true (Gradshteyn & Ryzhik, 2015),
In the case of a = 0, equation (26) gives us
Thus we find the Dirichlet sum
Some limits for trigonometric functions:
B1.2. Combinatorial formulas
A factorial n! of a positive integer n is defined as n! = 1 × 2 × … × n, with 0! = 1; a binomial coefficient C_n^k can be defined as
From the definition, C_n^k = C_n^{n−k}; the following formulas are true (Benjamin & Quinn, 2003):
(Vandermonde convolution/identity).
B1.3. Simple integrals
If n and m are non-negative integer numbers, then for ,
and the above integrals equal zero for n < m (see Gradshteyn & Ryzhik, 2015).
Let f(x) and h(x) be odd and even functions, respectively, i.e. f(-x) = -f(x) and h(-x) = h(x), then
Taking into account that , we find for non-negative integer p,
If p is an odd number, then from (35)
In the case of an even p, equation (34) can be used and
In a similar way for we obtain
The above integrals are zeros in the case of n < m.
The Dirac δ-function can be written as the following sum,
B1.4. The first sequence
Suppose there is a number ρ with ρ < 1/4, and for an integer n we want to find
As , , then > 0, so Tn is not a singular integral. From (30) we obtain
Taking into account the binomial theorem (31) we obtain
From (37) and (38) it follows that, for even k, we should consider only even m = 2s,
and, for odd k, we should consider only odd m = 2s + 1,
Due to (38)–(42) we see that U_n^k is non-zero only when k and n are both even or both odd. So for even k and n we find
where the Vandermonde identity (32) is used, and for odd k and n,
So instead of (46) and (47) we obtain one formula when n and k are both even or odd,
U_n^k = 0 for n < k or odd (k − n). We replace k = n + 2q in (45),
B1.5. The second sequence
The integral (44) depends only on one integer parameter n. Now we consider
which depends on two integer parameters n and m. Due to the properties of the cosine function and the symmetry of S_nm it is enough to consider only one ordering of the non-negative indices, since S_nm = S_mn = S_{−n,m} = S_{n,−m}. Note that
Let n, at the same time, then
Therefore if we use identity (25) then
For n = m = 0 we obtain
Taking into account (49) and Sn, -1 = Sn, 1, we obtain
As (C_{n+2q}^q)² − (C_{n−1+2q}^q)² = 0 for q = 0, we may write
Now we check that the following formula is true:
We use mathematical induction. Above we have checked the formula for m = 0 and m = 1. Assume the formula is true for all indices less than k. Then
can be rewritten as a Taylor series in ρ with coefficients
for q > 0 and zero for q = 0. Therefore
B1.6. The third sequence
Let there be and
Using formula (24) we obtain, for n > 0,
since = 0. We may find that
and therefore use the recurrent formula
Since
then Vn is a geometric sequence, i.e. Vn+1 = , where
So the values of sequence Vn can be written as
B1.7. Eigenvalues/eigenvectors
Let us consider an (n×n) matrix Bn,
We aim to find its eigenvalues, therefore we want to solve equation det Bn = 0. For this purpose we replace the element (1,1) of Bn by and denote the corresponding matrix as Fn. If we use the first row of Bn to find its determinant, then det Bn = . In a similar way we obtain det Fn = , thus det Bn = det Fn - det Fn-1, det Fn + det Fn-2 = and
Therefore we need to find a sequence {bn} such that
Of course, if one sequence {b_n} is found, then any multiple of it is also a solution of the above equation. We want to find τ such that det B_n = b_n.
Let , then taking into account (23) a solution of (53) is bn = , = . Note that matrices Bn are formally defined for . One can find their determinants explicitly for n = 3 and 4: det B3 = , det B4 = , so from (53) we obtain bn = and can extrapolate values bn for n < 3: b2 = , b1 = , b0 = 0. Thus we obtain the following system of equations,
The first equation is valid if either = 0 or = 0 or π. The case of = 0 is not possible, since the second equation would always be zero. Since = , then we may only consider the case of = 0 and change the sign of τ to obtain the solution for = . Thus
Now we show that the above equation has n roots, so there is no reason to consider the case of > 1.
Taking into account (28) we find the first root, = 0. The other roots can be found from = 0, so = and k = . Note that, due to (29), . As a result, det Bn = 0 if = , k = . All eigenvalues of matrix Bn can be written as = , where = , k = .
For a given eigenvalue we find an eigenvector y = . In this paper we consider only the case of odd n. We need to solve the system of n linear equations
Let us denote = and check that for k = n, = 1 and
is the solution of (54); for k < n,
If k is odd, then yn+1-q = yq, i.e. vector y is symmetric. If k is even, then yn+1-q = -yq, i.e. vector y is antisymmetric. Thus it is enough to check the case of q < n/2. We obtain y1 = , y2 = = , = , so the first equation of (54) is true. As
then the second equation in (54) is also true.
Now we find the norm of vector y. For k = n, it is n, and, for k < n,
Substituting a = and b = into (26) and taking into account that = = 0, we find that the norm for k > 0 is n/2. Finally, we obtain the normalized vector y with elements
B1.8. Inverse matrix B−1
The original matrix from (52) can be written as
where matrix Y is the orthogonal matrix, i.e. Y -1 = Y T, with elements
and matrix D is the diagonal one with elements
(see Higham, 2008). The inverse of matrix B can then be written as
where D -1 is a diagonal matrix with elements
Therefore the element (k,j) of matrix D -1 Y T,
and the elements of B -1 can be written as
Then
Taking into account that = and = −1, we find
All matrices mentioned above have indices starting from 1. However, our aim is to find a filter to be applied to a matrix, so it may be more convenient to rewrite the same matrices as ones centred with respect to their central element. Since n and m are both odd, this central element always exists and is unique. If a standard matrix Q is given, then by Q̂ we denote the matrix centred with respect to its central element. So, instead of the matrix elements B^{-1}_{ij} defined for i, j = 1, …, n, we subtract ñ = (n − 1)/2 from the indices in the above formula and obtain
where i, j = . We may rewrite
For the central row of we obtain
For odd k the value of [1+ (-1)k] = 0, so we need to consider only even k and use s = k/2,
B1.9. Inversion of the matrix function
The matrix Wn(x) is defined in (16). According to (57) the central row of the function Wn -1(x) can be written as
Now we need to find the corresponding matrix. If x and v are two scalars, then W_m(x) + vI = W_m(x + v). Therefore,
In principle we only need the central row of the (nm×nm) matrix A^{-1}. Formula (58) can be used to find the n (m×m) matrices which form the central (m×nm) block of A^{-1}. So for the jth such matrix we may write
where m̃ = (m − 1)/2. Therefore the (jm + k)th element of the central row of the matrix A^{-1} defined in (18) can be written as
If values n and m are relatively large, then instead of the sum we may consider a 2D integral. So define continuous variables ξ and η such that and s = , r = and introduce an integral
then the elements of the central row of matrix A-1 tend to for large n and m. We denote = and rewrite the last formula as
Some useful properties of (59) can be found. As cosine is an even function, the filter is symmetric with respect to horizontal, vertical or diagonal flipping. Thus it is enough to consider non-negative indices. Formulas (49) and (50) give us
In Appendix B1.5 it is shown how the integral in (59) can be calculated, so
From the above formula we obtain other properties of the filter:
(i) All its elements are positive, since ρ < 1/4 and binomial coefficients are non-negative.
(ii) < , since
and therefore
(iii) The sum Sj of all elements of the jth row of matrix ,
Taking into account formula (27) and the property (43), we obtain
If we introduce φ such that = , = , then , since , and the value of Sj follows from the results of Appendix B1.6.
(iv) The sum of all elements of the filter matrix is 1,
APPENDIX C
Numerical implementation
C1. High-precision calculations
In order to find the filter elements one can use a finite sum in (61). However, due to various rounding operations, there will be some additional error even if double-precision numbers are used. Therefore it is better to try another approach. Formula (61) can be rewritten as
or a sum = of two terms
where index can be chosen later. The first term can be found using the recurrent formula
and since a_0 = C_{j+k}^0 C_{j+k}^j = C_{j+k}^j, the first of these values equals a_0 c_1 = C_{j+k}^j c_1. Now we estimate the second term from above. We obtain
Note that , so if we denote s = j+k we can estimate
We then obtain
and < 1, so we can estimate from above the value of ,
As a result the value can be found by the following formula,
for
Acknowledgements
The author would like to thank Albrecht Kyrieleis, Philip Withers and the beamline scientists of station 16.3 of the Synchrotron Radiation Source (Daresbury Laboratory) and of beamline I12 of Diamond Light Source for the data provided.
References
Altunbas, C., Lai, C.-J., Zhong, Y. & Shaw, C. C. (2014). Med. Phys. 41, 091913.
Baek, J., De Man, B., Harrison, D. & Pelc, N. J. (2015). Opt. Express, 23, 7514–7526.
Benjamin, A. T. & Quinn, J. J. (2003). Proofs that Really Count: The Art of Combinatorial Proof. Washington: The Mathematical Association of America.
Brun, F., Accardo, A., Kourousias, G., Dreossi, D. & Pugliese, R. (2013). 8th International Symposium on Image and Signal Processing and Analysis (ISPA 2013), Trieste, Italy, 4–6 September 2013, pp. 672–676.
Davis, G. R. & Elliott, J. C. (1997). Nucl. Instrum. Methods Phys. Res. A, 394, 157–162.
Davis, G. R., Evershed, A. N. Z. & Mills, D. (2013). J. Dent. 41, 475–482.
Drakopoulos, M., Connolley, T., Reinhard, C., Atwood, R., Magdysyuk, O., Vo, N., Hart, M., Connor, L., Humphreys, B., Howell, G., Davies, S., Hill, T., Wilkin, G., Pedersen, U., Foster, A., De Maio, N., Basham, M., Yuan, F. & Wanelik, K. (2015). J. Synchrotron Rad. 22, 828–838.
Engl, H. W., Hanke, M. & Neubauer, A. (1996). Regularization of Inverse Problems, Vol. 375 of Mathematics and its Applications. Dordrecht: Kluwer Academic Publishers Group.
Gradshteyn, I. S. & Ryzhik, I. M. (2015). Table of Integrals, Series and Products. Amsterdam: Elsevier/Academic Press.
Higham, N. J. (2008). Functions of Matrices: Theory and Computation. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics.
Hinebaugh, J., Challa, P. R. & Bazylak, A. (2012). J. Synchrotron Rad. 19, 994–1000.
Kak, A. & Slaney, M. (2001). Principles of Computerized Tomographic Imaging. Philadelphia: Society for Industrial and Applied Mathematics.
Kalender, W. A. (2006). Phys. Med. Biol. 51, R29–R43.
Miqueles, E. X., Rinkel, J., O'Dowd, F. & Bermúdez, J. S. V. (2014). J. Synchrotron Rad. 21, 1333–1346.
Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567–8591.
Natterer, F. & Wübbeling, F. (2001). Mathematical Methods in Image Reconstruction. Philadelphia: Society for Industrial and Applied Mathematics.
Paleo, P. & Mirone, A. (2015). J. Synchrotron Rad. 22, 1268–1278.
Pešić, Z. D., Fanis, A. D., Wagner, U. & Rau, C. (2013). J. Phys. Conf. Ser. 425, 182003.
Prell, D., Kyriakou, Y. & Kalender, W. A. (2009). Phys. Med. Biol. 54, 3881–3895.
Rashid, S., Lee, S. Y. & Hasan, M. K. (2012). EURASIP J. Adv. Signal Process. 2012, 93.
Sadi, F., Lee, S. Y. & Hasan, M. K. (2010). Comput. Biol. Med. 40, 109–118.
Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, N247–N253.
Tikhonov, A. N. & Arsenin, V. Y. (1977). Solutions of Ill-Posed Problems. Washington: John Wiley and Sons.
Tikhonov, A. N., Goncharsky, A. V., Stepanov, V. V. & Yagola, A. G. (1995). Numerical Methods for the Solution of Ill-Posed Problems, Vol. 328 of Mathematics and Its Applications. Netherlands: Springer.
Titarenko, V. (2016). IEEE Signal Process. Lett. 23, 800–804.
Titarenko, V., Bradley, R., Martin, C., Withers, P. J. & Titarenko, S. (2010c). Proc. SPIE, 7804, 78040Z.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. (2009). Appl. Phys. Lett. 95, 071113.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010d). J. Synchrotron Rad. 17, 540–549.
Titarenko, S., Titarenko, V., Kyrieleis, A., Withers, P. & De Carlo, F. (2011). J. Synchrotron Rad. 18, 427–435.
Titarenko, V., Titarenko, S., Withers, P. J., De Carlo, F. & Xiao, X. (2010a). J. Synchrotron Rad. 17, 689–699.
Titarenko, S., Withers, P. J. & Yagola, A. (2010b). Appl. Math. Lett. 23, 1489–1495.
Warnett, J. M., Titarenko, V., Kiraci, E., Attridge, A., Lionheart, W. R. B., Withers, P. J. & Williams, M. A. (2016). Meas. Sci. Technol. 27, 035401.
Wei, Z., Wiebe, S. & Chapman, D. (2013). J. Instrum. 8, C06006.
Wolkowski, B., Snead, E., Wesolowski, M., Singh, J., Pettitt, M., Chibbar, R., Melli, S. & Montgomery, J. (2015). J. Synchrotron Rad. 22, 1130–1138.
© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.