
JOURNAL OF
SYNCHROTRON
RADIATION
ISSN: 1600-5775

Improved tomographic reconstructions using adaptive time-dependent intensity normalization


aHenry Moseley X-ray Imaging Facility, School of Materials, University of Manchester, Grosvenor Street, Manchester M13 9PL, UK, bDepartment of Mathematics, Faculty of Physics, Moscow State University, Leninskie Gory, Moscow 119991, Russia, and cX-ray Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Bldg 401, Argonne, IL 60439-4837, USA
*Correspondence e-mail: valeriy.titarenko@manchester.ac.uk

(Received 7 July 2009; accepted 24 June 2010; online 22 July 2010)

The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique that accounts for these variations, yielding cleaner normalized data and suppressing ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two datasets (a stem of Salix variegata and a zebrafish Danio rerio) acquired with the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source, showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone-beam laboratory X-ray computed tomography.

1. Introduction

In synchrotron-based micro-tomography it is common to observe motion of the illumination beam owing to thermal instability of the optics (Tucoulou et al., 2008[Tucoulou, R., Martinez-Criado, G., Bleuet, P., Kieffer, I., Cloetens, P., Labouré, S., Martin, T., Guilloud, C. & Susini, J. (2008). J. Synchrotron Rad. 15, 392-398.]; Nakayama et al., 1998[Nakayama, K., Okada, Y. & Fujimoto, H. (1998). J. Synchrotron Rad. 5, 527-529.]; Smith et al., 2006[Smith, A. D., Flaherty, J. V., Mosselmans, J. F. W., Burrows, I., Stephenson, P. C., Hindley, J. P., Owens, W. J., Donaldson, G. & Henderson, C. M. B. (2006). Proceedings of the 8th International Conference on X-ray Microscopy, IPAP Conference Series, Vol. 7, pp. 340-342.]). Together with defects in the imaging system, this motion causes variations in the background recorded over time. Such variations are known (Raven, 1998[Raven, C. (1998). Rev. Sci. Instrum. 69, 2978-2980.]) to introduce artefacts into the resulting reconstructions. In this paper the authors propose a means of reducing such artefacts.

By way of an example case we consider such variations on 2-BM, which is a dedicated X-ray micro-tomography beamline (De Carlo et al., 2006[De Carlo, F., Xiao, X. & Tieman, B. (2006). Proc. SPIE, 6318, 63180K.]; Wang et al., 2001[Wang, Y., De Carlo, F., Mancini, D. C., McNulty, I., Tieman, B., Bresnahan, J., Foster, I., Insley, J., Lane, P., von Laszewski, G., Kesselman, C., Su, M.-H. & Thiebaux, M. (2001). Rev. Sci. Instrum. 72, 2062-2068.]; De Carlo & Tieman, 2004[De Carlo, F. & Tieman, B. (2004). Proc. SPIE, 5535, 644-651.]) shown schematically in Fig. 1[link]. Our example datasets comprise projections and white fields collected for a piece of a plant stem Salix variegata and a zebrafish Danio rerio. Typically, the flux on 2-BM is 10^12 photons s^-1 mm^-2 at 10 keV and the beam size is 4 mm × 25 mm (see Chu et al., 2002[Chu, Y. S., Liu, C., Mancini, D. C., De Carlo, F., Macrander, A. T., Lai, B. & Shu, D. (2002). Rev. Sci. Instrum. 73, 1485-1487.]). Since the size of the source is very small and the distance between the sample and the source is very large, the beam can be considered essentially parallel.

[Figure 1]
Figure 1
A simplified scheme of the micro-tomography beamline 2-BM-B at the Advanced Photon Source, Argonne National Laboratory.

The X-ray beam delivered from the synchrotron source is very intense. Therefore, both the mirror and the multilayer monochromator are water-cooled. Owing to thermal fluctuations in the cooling system, the profiles of the mirror and the monochromator may change during data acquisition. In practice, the mirror and the multilayer monochromator are not perfect and introduce background features into the illuminating beam. The scintillator and the optical objective may also have scratches or dust on their surfaces and defects inside. The bending magnet on the APS storage ring also possesses some instability and there are small time-dependent perturbations of the CCD sensor with respect to the scintillator. All these features affect the intensity profile recorded over time by the CCD camera. Fig. 2[link] shows a typical white-field image recorded by the CCD camera and the associated intensity variation down a pixel column. The standard so-called flat-field correction can be written as (Stock, 2008[Stock, S. R. (2008). MicroComputed Tomography: Methodology and Applications. CRC Press.])

[{{P-D}\over{W-D}},\eqno(1)]

where P, D and W denote a projection (with a sample in the beam), dark (the beam is switched off) and white (with no sample in the beam) field images, respectively. Unfortunately, this approach does not suppress the above-mentioned artefacts, as illustrated by the white-field corrected projection in Fig. 3[link] and the profile of the quotient of two white fields in Fig. 4[link]. As a result, ring and wave artefacts are evident in reconstructions (e.g. see Fig. 5[link]). Ring artefacts often appear as concentric arcs or half-circles with changing intensity. Since the recorded perturbations fluctuate randomly but smoothly with time, the ring artefacts vary continuously along the angular coordinate. In some instances the standard flat-field correction procedure succeeds in suppressing these artefacts for several neighbouring projections and the ring structures become almost invisible. Wave artefacts appear as smooth hills and valleys over areas extending more than 10% of the width of the sinogram. Here the width and height of a sinogram refer to its extent along the projection and angular dimensions, respectively. Their smoothness is due to the correlated nature of the perturbations.
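As a minimal sketch of equation (1), the standard flat-field correction can be written as a few lines of NumPy; the function name `flat_field_correct` and the `eps` guard against dead detector regions are our own additions, not part of the original formula.

```python
import numpy as np

def flat_field_correct(projection, white, dark, eps=1e-6):
    """Standard flat-field correction of equation (1): (P - D) / (W - D).

    All inputs are 2-D arrays of equal shape; `eps` guards against
    division by zero in dead detector regions (a practical tweak,
    not part of the original formula).
    """
    num = projection.astype(np.float64) - dark
    den = np.maximum(white.astype(np.float64) - dark, eps)
    return num / den

# Tiny synthetic example: a uniform beam of 1000 counts above a dark
# level of 100, and a sample absorbing half the beam in the centre pixel.
dark = np.full((3, 3), 100.0)
white = np.full((3, 3), 1100.0)
proj = white.copy()
proj[1, 1] = 600.0          # (600 - 100) / (1100 - 100) = 0.5
A = flat_field_correct(proj, white, dark)
```

In background pixels A is 1, and at the absorbing pixel A is 0.5, i.e. the attenuation factor of the sample.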

[Figure 2]
Figure 2
A typical white-field image (top panel) recorded on 2-BM-B by the 12-bit CCD camera. The size of the image is 2048×1792 pixels. The inset in the top-right corner shows a magnified version of the square box in the central region. The intensity profile along the dotted line is shown in the bottom panel.
[Figure 3]
Figure 3
A projection of a piece of a stem (Salix variegata). The flat-field correction method described by equation (1)[link] is applied.
[Figure 4]
Figure 4
Profile of the quotient of two white fields from 16 recorded during the acquisition time. In this case the ninth and the first images are chosen as the divisor and the dividend (reference) images. The profile is taken along the dotted vertical line of the image shown in Fig. 2[link].
[Figure 5]
Figure 5
A horizontal slice reconstructed by (a) the flat-field correction method described by equation (1)[link], (b) the proposed method. This cross section does not intersect the sample.

Ring artefact suppression is a very common task in computed tomography and several methods already exist. These methods can be divided into the following groups:

(a) Suppression is carried out before reconstruction. Assuming that a sinogram should be smooth:

(i) In Boin & Haibel (2006[Boin, M. & Haibel, A. (2006). Opt. Express, 14, 12071-12075.]) the mean values y(i) for each column of a sinogram pn(i) are found, the moving average filter is applied to the values found in order to replace y(i) by ys(i), which is the average value of 2N+1 neighbouring values, then the sinogram is normalized by the formula [p'_n(i)] = pn(i)ys(i)/y(i);

(ii) In Walls et al. (2005[Walls, J. R., Sled, J. G., Sharpe, J. & Henkelman, R. M. (2005). Phys. Med. Biol. 50, 4645-4665.]) the pixel recorded intensities are replaced by the mean of their eight neighbours in all views;

(iii) In Münch et al. (2009[Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567-8591.]) a wavelet-FFT filter is applied;

(iv) In Sadi et al. (2010[Sadi, F., Lee, S. Y. & Hasan, M. K. (2010). Comput. Biol. Med. 40, 109-118.]) an iterative centre-weighted median filter is proposed;

(v) In Titarenko et al. (2010[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010). J. Synchrotron Rad. 17, 540-549.]) a Tikhonov's functional is used to find the solution.

(b) Suppression is carried out after the image has been reconstructed (see, for example, Sijbers & Postnov, 2004[Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, N247-N253.]; Yang et al., 2008[Yang, J., Zhen, X., Zhou, L., Zhang, S., Wang, Z., Zhu, L. & Lu, W. (2008). The 2nd International Conference on Bioinformatics and Biomedical Engineering, pp. 2386-2389. IEEE.]). In Sijbers & Postnov (2004[Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, N247-N253.]) a reconstructed image is transformed into polar coordinates, where a set of homogeneous rows is detected. From the set an artefact template is generated using a median filter; the template is subtracted from the image in the polar coordinate, which is transformed back into Cartesian coordinates.

(c) Modification of an experimental procedure or hardware:

(i) In Davis & Elliott (1997[Davis, G. R. & Elliott, J. C. (1997). Nucl. Instrum. Methods Phys. Res. A, 394, 157-162.]) the detector is moved laterally during acquisition;

(ii) In Niederlöhner et al. (2005[Niederlöhner, D., Nachtrab, F., Michel, T. & Anton, G. (2005). Nucl. Sci. Symp. Conf. Rec. 4, 2327-2331.]) a photon-counting hybrid pixel detector is described. The authors also investigated flat-field statistics, which have been used to develop an algorithm calculating additional correction factors for each pixel;

(iii) In Hsieh et al. (2000[Hsieh, J., Gurmen, O. E. & King, K. F. (2000). IEEE Trans. Med. Imag. 19, 930-940.]) a solid-state detector is introduced and a correction scheme based on recursive filtering is used to compensate for detector afterglow; as a result ring artefacts have also been suppressed.

Of course, ring artefact suppression methods can combine ideas from several groups. For example, the method proposed by Titarenko et al. (2009[Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2009). Appl. Phys. Lett. 95, 071113.]) is based on both pre- and post-processing ideas.
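To make the pre-processing idea of group (a)(i) concrete, the column normalization of Boin & Haibel (2006) can be sketched as below; the function name `boin_haibel_correct` and the edge-padding choice for the moving average are our own assumptions.

```python
import numpy as np

def boin_haibel_correct(sinogram, N=10):
    """Sketch of the pre-processing ring suppression of item (a)(i).

    y[i] is the mean of sinogram column i; ys[i] is its moving average
    over 2N+1 neighbouring columns; each column is then rescaled by
    the factor ys[i] / y[i], as in p'_n(i) = p_n(i) * ys(i) / y(i).
    """
    y = sinogram.mean(axis=0)                    # column means
    kernel = np.ones(2 * N + 1) / (2 * N + 1)
    # same-size moving average; edge columns use replicated edge values
    ys = np.convolve(np.pad(y, N, mode='edge'), kernel, mode='valid')
    return sinogram * (ys / y)[np.newaxis, :]
```

A column whose mean deviates from its neighbours (the signature of a ring) is pulled towards the local average, while a uniform sinogram is left unchanged.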

The main stumbling block for any ring artefact suppression method is that artefacts may not be removed completely while real features may be suppressed. Consider one example: suppose the rotation axis of a sample passes through a very dense particle; then the particle is always projected onto the same pixel of the CCD camera and there is a jump in intensity near this pixel in all projections. If the number of other dense particles in the sample is low, then an algorithm may judge that the pixel records incorrect values and therefore suppress real features. Similar effects may be observed when plastic or glass tubes are used to protect a biological sample and their axes are near the rotation axis of the sample. Therefore some a priori information about the sample is required in order to distinguish real features from artefacts. In addition, different ring artefacts on an image may have a different nature and therefore should be removed in different ways. For example, a piece of dirt on a scintillator decreases the intensity along an X-ray in a standard way, i.e. intensity decreases by [\exp(-\mu \Delta x)] ([\mu] and [\Delta x] are the attenuation coefficient and the thickness of the dirt) and does not depend on the total flux of the beam, while a crack inside the scintillator may emit additional light with intensity proportional to the total flux; so, for an asymmetric sample, the strength of the corresponding artefact depends on the sample's orientation. Therefore we think that it is not feasible to develop a `general' ring artefact suppression algorithm applicable to any sample and imaging facility. However, one may propose a strategy that should be adapted to a particular beamline. The aim of this paper is to propose a method of suppressing ring and wave artefacts which is based on some a priori information about the experimental set-up and the nature of the perturbations causing the artefacts.

2. Mathematical framework

2.1. Notation

x and z are the horizontal and vertical coordinates perpendicular to the beam.

n and m are the number of projections and white fields.

Index i is related to a projection, while j is related to a white field.

t denotes a moment of time; the ith projection is taken at ti and the jth white field at [\tau_j] (different symbols, t and [\tau], are used to distinguish projections from white fields).

P and W denote a projection or white field after the dark field has been subtracted from each one; we also use symbols P(x,z,t) and W(x,z,t) in the case of continuous time variable t and Pi(x,z) = P(x,z,ti), Wj(x,z) = [W(x,z,\tau_j)] in the case of a discrete time variable.

I(x,z,t) describes the intensity profile of the X-ray beam incident on the sample.

[I_0(x,z) \equiv I(x,z,0)], i.e. the intensity at the initial moment of time t = 0.

[r(x,z,a,b) \equiv I_0(x,az+b)/I_0(x,z)], where a and b are numbers [for simplicity we omit x, i.e. use r(z,a,b) and I0(z)].

a(t) and b(t) are some unknown random functions; they describe how the intensity profile of the beam changes its value over time.

A(x,z,t) is the attenuation factor of the sample, i.e. the ratio of intensities of the X-ray beam after and before the sample for the ray passing through the point (x,z) on the CCD camera at the moment t (the sample is rotating, so the factor is a time-dependent function).

Q(x, z, t) is a multiplicative function defined by dust, dirt on the surfaces of the scintillator and CCD camera; we also use symbol Q(x,z) when the function does not depend on time.

[Q_0(x,z)\equiv Q(x,z,0)].

Functions c(t) and d(t) describe possible horizontal and vertical shifts of the optical system.

w and h are the width and the height of the CCD sensor; the sensor records only the intensity for [0\leq x \leq w] and [0\leq z \leq h].

S, S1, S2 and [\tilde S] are some areas on the CCD sensor, i.e. [S, S_1, S_2, \tilde S \subset [0,w]\times [0,h]].

2.2. A simplified case

To a first approximation, it is assumed that there are only perturbations of the bending magnet source, the mirror and the monochromator. These cause the intensity profile of the X-ray beam incident on the sample to change linearly by a shift/stretch along the vertical axis z (this is the typical case in practice, but other variations could be modelled),

[I(x,z,t)=I_0[x,a(t)z+b(t)],\eqno(2)]

where a(t) and b(t) are unknown random functions. In this paper, for any two moments t1 and t2 ([t_2\,\,\gt\,\,t_1]) the corresponding values of a(t1), a(t2) and b(t1), b(t2) are supposed to be uncorrelated, i.e. the values at t2 do not depend on the values at t1. Of course in real experiments one may find some rule making it possible to predict/estimate the values of a(t) and b(t) at [t\,\,\gt\,\,t_1].

The intensity profile I(x,z,t) only stretches and does not reflect I0(x,z). Therefore the stretch factor a(t) is a positive function. The case a(t) = 0 is also impossible in practice, so the stretch factor is supposed never to be close to zero. Hence a(t) is assumed to be strictly positive in the sense that there is a positive constant [\varepsilon] such that [a(t)\geq \varepsilon]. Note that not every positive function admits such a constant; a(t) = exp(-1/t) for [t\in(0,1]] is a counterexample. For real samples the authors found that a(t) varied slightly about 1; for the stem sample [a(t)\in[0.995,1.020]] (see Fig. 10). Thus it is always possible to choose any moment t0 and suppose that at this moment I0(x,z) = I(x,z,t0). This is true since one can always change the variable z linearly, i.e. replace a(t0)z+b(t0) by [\tilde z], so that I0[x,a(t0)z+b(t0)] = [I_0(x,\tilde z)].

In this case the scintillator, the objective and the CCD camera are assumed to be stable, i.e. there are no vibrations, and the white field W(x,z,t) varies according to

[W(x,z,t)=Q(x,z)\,I(x,z,t),\eqno(3)]

where Q(x,z) is defined by dust, dirt on the surfaces of the scintillator and the CCD camera. It is difficult to determine Q(x,z) directly. The measured white fields can be described by

[W(x,z,t)=Q(x,z)\,I_0[x,a(t)z+b(t)].\eqno(4)]

For a sample rotating in the X-ray beam the intensity measured by the camera can be written as

[P(x,z,t)=A(x,z,t)\,W(x,z,t),\eqno(5)]

where A(x,z,t) describes the attenuation properties of the sample, from which the tomographic structure of the sample can be reconstructed. Of course, A(x,z,t) depends on time, since the sample rotates during the experiment. This can be rewritten in the following form,

[P(x,z,t)=A(x,z,t)\,Q(x,z)\,I_0[x,a(t)z+b(t)].\eqno(6)]

In collecting our test data at the APS the following measurement scheme was used: a projection was recorded every 0.125° and one white-field image was recorded before every 100 ordinary projections until 180° rotation was complete, after which one additional white-field image was taken. Therefore m = 16 and the number of projections n = 1441.

Let a white field be taken at time [\tau] and a projection at time t. The symbols [\tau], [\tau_j] are used for white-field images and t, ti for projections. In our experiments these times were different, i.e. [\tau_j\ne t_i] for all i and j. Then

[{{P(x,z,t)}\over{W(x,z,\tau)}}={{A(x,z,t)\,Q(x,z)\,I_0[x,a(t)z+b(t)]}\over{Q(x,z)\,I_0[x,a(\tau)z+b(\tau)]}},\eqno(7)]

[A(x,z,t)={{P(x,z,t)}\over{W(x,z,\tau)}}\left\{{{I_0[x,a(t)z+b(t)]}\over{I_0[x,a(\tau)z+b(\tau)]}}\right\}^{-1}.\eqno(8)]

Our aim is to obtain the attenuation factor A(x,z,t), which is used in the reconstruction. The functions P(x,z,t) and [W(x,z,\tau)] are measured in experiments but the function I0(x,z) cannot be found directly.
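Equation (8) can be sketched numerically as follows, under the simplifying assumption (used later in section 2.2.2) that the beam ratio I0(x, az+b)/I0(x, z) may be approximated by resampling the reference white field along z where Q is close to 1; the function name `adaptive_correct` and the use of linear interpolation are our own assumptions.

```python
import numpy as np

def adaptive_correct(P, W1, a, b):
    """Sketch of equation (8)/(9): A = (P / W1) / r, where
    r(x, z) = I0(x, a*z + b) / I0(x, z) is approximated by resampling
    the reference white field W1 along z (valid where Q is close to 1).
    P and W1 are 2-D arrays indexed as [z, x]; a, b are the stretch
    and shift of the illumination for this projection.
    """
    h, w = P.shape
    z = np.arange(h, dtype=np.float64)
    W1_shifted = np.empty_like(W1, dtype=np.float64)
    for x in range(w):                     # resample each column at a*z + b
        W1_shifted[:, x] = np.interp(a * z + b, z, W1[:, x].astype(np.float64))
    r = W1_shifted / W1
    return (P / W1) / r
```

Near the top and bottom edges a*z + b may fall outside the image, where `np.interp` clamps to the boundary value; this mirrors the paper's remark that the correction is necessarily worse near the image edges.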

2.2.1. Some properties

Let us choose a reference white field. For simplicity one may suppose without loss of generality it is taken at [\tau_1] and [a(\tau_1)] = 1, [b(\tau_1)] = 0 [see the discussion after equation (2)[link]]. Denote the white fields by Wj(x,z) = [W(x,z,\tau_j)], j = [1,2,\ldots,m], the projections by Pi(x,z) = P(x,z,ti), i = [1,2,\ldots,n]. For the white-field images we use coefficients aj = [a(\tau_j)], bj = [b(\tau_j)] and for the projections ai = a(ti), bi = b(ti) are used. Then from (8)[link] the attenuation factor A(x,z,t) can be found at t = ti,

[A(x,z,t_i)={{P_i(x,z)}\over{W_1(x,z)}}\left[{{I_0(x,a_iz+b_i)}\over{I_0(x,z)}}\right]^{-1}.\eqno(9)]

One approach to finding (ai,bi) is to approximate I0(x,aiz+bi)/I0(x,z) with W1(x,aiz+bi)/W1(x,z). However, this approximation may cause additional artefacts, as Q(x,aiz + bi) is not always equal to Q(x,z) over the entire image. Therefore, a new approach is needed.

For simplicity the variable x is temporarily omitted, i.e. I0(x,z) becomes I0(z) defined on [0,h] and a function

[r(z,a,b)={{I_0(az+b)}\over{I_0(z)}}\eqno(10)]

is defined. Assuming the function I0(z) is unknown, and a and b are known, three properties of r(z,a,b) arising from (10)[link] can be written.

(i) Let [\tilde z] = az+b, then z = [a^*\tilde z + b^*], where a* = 1/a, b* = -b/a and

[{{1}\over{r(z,a,b)}}={{I_0(z)}\over{I_0(az+b)}}={{I_0(a^*\tilde z + b^*)}\over{I_0(\tilde z)}} = r(\tilde z,a^*,b^*).\eqno(11)]

(ii) Let r(z,a,b) be known for two pairs (a1, b1) and (a2, b2). Then r(z,a1,b1)r(a1z+b1,a2,b2) equals

[{{I_0\left(a_1z+b_1\right)I_0\left[a_2\left(a_1z+b_1\right)+b_2\right]}\over{I_0(z)I_0\left(a_1z+b_1\right)}}=r\left(z,a_1a_2,a_2b_1+b_2\right),\eqno(12)]

i.e. r(z,a,b) could be found for a = a1a2, b = a2b1+b2.

(iii) If I0(z) is strictly positive [there is I* such that [I_0(z)\geq I^*] for all z], differentiable and [|I_0'(z)|\leq L], then

[\eqalign{\left|{{{\rm{d}}}\over{{\rm{d}}a}}r(z,a,b)\right|&=|I_0'(az+b)z/I_0(z)|\leq Lh/I^*,\cr \left|{{{\rm{d}}}\over{{\rm{d}}b}}r(z,a,b)\right|&=|I_0'(az+b)/I_0(z)|\leq L/I^*,\cr \left|{{{\rm{d}}}\over{{\rm{d}}z}}r(z,a,b)\right|&=|I_0'(az+b)a/I_0(z)|\leq 2L/I^*,}]

since [a\simeq1] for real data, i.e. [a \,\lt\, 2].
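Properties (11) and (12) are easy to verify numerically for any strictly positive profile; the following self-check (our own illustration, not beamline code) confirms both identities for a sample I0.

```python
import numpy as np

# Numerical check of properties (11) and (12) of r(z, a, b) for a
# strictly positive sample profile I0.
I0 = lambda z: 2.0 + np.sin(z)           # strictly positive, smooth
r = lambda z, a, b: I0(a * z + b) / I0(z)

z = np.linspace(0.0, 1.0, 101)
a, b = 1.01, 0.05
a1, b1 = 0.99, -0.02
a2, b2 = 1.02, 0.03

# Property (11): 1 / r(z, a, b) = r(a*z + b, a*, b*) with a* = 1/a, b* = -b/a
lhs11 = 1.0 / r(z, a, b)
rhs11 = r(a * z + b, 1.0 / a, -b / a)

# Property (12): r(z, a1, b1) * r(a1*z + b1, a2, b2)
#              = r(z, a1*a2, a2*b1 + b2)
lhs12 = r(z, a1, b1) * r(a1 * z + b1, a2, b2)
rhs12 = r(z, a1 * a2, a2 * b1 + b2)
```

These are exactly the two composition rules used in section 2.2.3 to enlarge the set of (a, b) pairs for which r is known.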

We now restore the variable x and redefine the function as r(x,z,a,b),

[r(x,z,a,b)={{I_0(x,az+b)}\over{I_0(x,z)}}.\eqno(13)]

If the variable x is fixed, then the new function r(x,z,a,b) has all the properties described above. Since

[{{I_0\left[x,a\left(\tau_j\right)z+b\left(\tau_j\right)\right]}\over{I_0(x,z)}}={{W_j(x,z)}\over{W_1(x,z)}},\eqno(14)]

then on the whole image [0,w]×[0,h] one can find r(x,z,a,b) for the pairs (a,b), where a = [a(\tau_j)] and b = [b(\tau_j)], j = [2,3,\ldots,m].

2.2.2. A method to find the stretch factors

So far we have supposed that the pairs (aj,bj) are known. Now we discuss how to find them. Assume that there is an area [S\subset[0,w]\times[0,h]], where [Q(x,z)\simeq1]. This means that there is an area where the detector is relatively free of dust, dirt or other perturbing influences or where the intensity could be easily corrected. For example, the structures seen in the top right part of Fig. 2[link] could be successfully decreased after applying a median filter (Gonzalez & Woods, 2008[Gonzalez, R. C. & Woods, R. E. (2008). Digital Image Processing, 3rd ed. New Jersey: Pearson Prentice Hall.]). So one may assume that there is an area S where W1(x,z) = I0(x,z).

Suppose that there are constants amin, amax, bmin, bmax such that, at any moment of time t, [a(t)\in[a_{\min},a_{\max}]], [b(t)\in[b_{\min},b_{\max}]]. The following is usually true in practice: there is a region S1 inside S such that Q(x,az+b) = 1 for every point [(x,z)\in S_1]. In region S1, W1(x,az+b) = I0(x,az+b) and W1(x,az+b)/W1(x,z) = I0(x,az+b)/I0(x,z). For convenience, S1 should not be close to the edges of the images, so that, for any point [(x,z)\in S_1] and any numbers [a\in[a_{\min}, a_{\max}]] and [b\in[b_{\min},b_{\max}]], the point [(x,az+b)\in S].

Choose K pairs (ak,bk), k = [1,2,\ldots,K], [a_k\in[a_{\min},a_{\max}]], [b_k\in[b_{\min},b_{\max}]]. Note that there is no correspondence between the trial pairs (ak, bk) and the pairs (aj,bj) or (ai,bi), i.e. the real values of a(t) and b(t) at the moments when the jth white-field image or the ith projection was acquired; (aj,bj) and (ai,bi) are still unknown. Let us introduce uniform grids on [amin, amax] and [bmin, bmax] with K1 and K2 grid points, so that K = K1 K2. We are going to determine (ai, bi) and (aj, bj) from (ak,bk).

Let us take an area S2 inside S1 and compare the functions

[g_k={{W_1(x,a_kz+b_k)}\over{W_1(x,z)}}\eqno(15)]

with the function

[f_j={{W_j(x,z)}\over{W_1(x,z)}}\eqno(16)]

for j = [2,\ldots,m], defined on S2. In practice we have noticed that several vertical segments of the images can usually be taken as S2. These segments are chosen such that W1(x,az+b) has significant intensity fluctuations, so that the functions are easy to distinguish for different pairs (a,b). The accuracy of determining these pairs is better when the profiles of W1(x,z) differ between the segments belonging to S2.

We calculate gk/fj for each (ak,bk) [\in] [amin,amax] × [bmin,bmax] and find the corresponding standard deviations on S2. The optimal (ak,bk) is found when the minimal standard deviation is obtained. In the ideal case, the optimal (ak,bk) corresponds to the zero deviation. In practice, however, the standard deviation corresponding to the optimal (ak,bk) is typically non-zero.
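The grid search of this section can be sketched as follows; the function name `find_stretch`, the use of `np.interp` to evaluate W1(x, a*z + b), and the trimming of the last few rows (where a*z + b may fall outside the image) are our own assumptions.

```python
import numpy as np

def find_stretch(W1, Wj, a_grid, b_grid, cols):
    """Grid search of section 2.2.2 (sketch): over the trial pairs
    (a_k, b_k), minimize the standard deviation of g_k / f_j on S2,
    where g_k(x, z) = W1(x, a_k*z + b_k) / W1(x, z) and f_j = Wj / W1.
    `cols` lists the x indices of the vertical segments forming S2.
    """
    h = W1.shape[0]
    z = np.arange(h, dtype=np.float64)
    f = (Wj / W1)[:, cols]
    best = (np.inf, None, None)
    for a in a_grid:
        for b in b_grid:
            g = np.empty_like(f)
            for i, x in enumerate(cols):
                g[:, i] = np.interp(a * z + b, z, W1[:, x]) / W1[:, x]
            # ignore the last few rows, where a*z + b may leave the image
            s = np.std((g / f)[: h - 4])
            if s < best[0]:
                best = (s, a, b)
    return best[1], best[2]
```

In the ideal case the minimal deviation is zero; in practice it is merely small, as noted above.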

2.2.3. Correction of intensity profiles

After the coefficients aj and bj, j = [2,\ldots,m], have been identified for all white-field images, the function r(x,z,a,b) can be found for m-1 pairs (aj,bj), j = [2,\ldots,m]. For any of these pairs one can find 1/r(x,z,a,b) and therefore r(x,z,a*,b*), where a* = 1/a, b* = -b/a [see equation (11)[link]]. So r(x,z,a,b) can be identified for m-1 new pairs of (a,b), i.e. r(x,z,a,b) is found for 2(m-1) pairs in total. In a similar way one can identify r(x,z,a,b) for other (2m-2)2 pairs if equation (12)[link] is applied. So there are (2m-2)2+(2m-2) pairs. And again one may use equations (11)[link] and (12)[link] to identify r(x,z,a,b) for other pairs (a,b), and so on. Suppose I0(x,az+b)/I0(x,z) is found for some number M of pairs (a,b). Based on the smoothness of r(x,z,a,b) one may expect that the found M functions allow one to find I0(x,aiz+bi)/I0(x,z) for all i = [1,2,\ldots,n] with a good accuracy.

Some of these pairs (a,b) may have similar values. Strictly speaking, a good approximation cannot be guaranteed if a given (ai,bi) is far away from the known (aj,bj) pairs. However, the white fields taken evenly during the data acquisition guarantee that the modes of the beam motion are well sampled. The authors expect the set of (a,b) pairs found in the above manner can cover (a,b) pairs for all projections. This is indeed verified in our experimental data processing. In principle, if the pair (a,b) for one projection is out of the range of all M pairs (a,b), one can always perform a calculation with equations (11)[link] and (12)[link] on a subset of the M identified functions to extend the definition range of the (a,b) pairs.

A similar approach can be used to find (ai,bi). In this case an area S*, where A(x,z,ti) = 1, should be selected. Then Pi(x,z)/W1(x,z) is compared with the M identified functions on S*. Once ai and bi are found, then

[A(x,z,t_i)={{P_i(x,z)}\over{W_1(x,z)}}\left[{{I_0\left(x,a_iz+b_i\right)}\over{I_0(x,z)}}\right]^{-1}\eqno(17)]

on [0,w]×[0,h].

2.3. A general case

In the above, the multiplicative function Q(x,z) in equation (3)[link] is assumed to be time independent, which is not strictly true. Here, possible vibrations in the optical detection system are taken into account, which comprises the scintillator, the objective and the CCD camera. Assume the following time-dependence,

[Q(x,z,t)=Q_0[x+c(t),z+d(t)].\eqno(18)]

This means that the pattern Q shifts relative to the CCD camera over time. Let us also set [c(\tau_1)] = 0, [d(\tau_1)] = 0.

In this paragraph white-field images are considered; projections can be corrected in a similar way. We have found that the vector [c(t),d(t)] in the experimental images is usually just a fraction of a pixel in magnitude. Let us choose an area [\tilde S], where I0(x,z) is almost a constant along each horizontal segment belonging to [\tilde S]. Taking various pairs (c,d) we shift W1(x,z) along the vector (c,d) to obtain an image W1cd(x,z) and find the quotient Wj(x,z)/W1cd(x,z). Then the mean value and the standard deviation are found on each horizontal segment. The goal is to minimize the sum of all these standard deviations. Suppose the minimal sum is obtained at cj = [c(\tau_j)] and dj = [d(\tau_j)]. W1cd(x,z) is therefore defined as Q0(x+cj,z+dj)I0(x+cj,z+dj). Instead of Wj(x,z)/W1(x,z) in the simplified case, Wj(x,z)/W1cd(x,z) should be employed to find (aj,bj) in this general case. Similarly, W1cd(x,z) can be found for each projection image, which is then employed to find (ai,bi).
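The subpixel-shift search of this section can be sketched as below; the helper names `subpixel_shift` and `find_detector_shift`, the bilinear interpolation scheme and the edge trimming are our own assumptions (the paper does not specify the interpolation used).

```python
import numpy as np

def subpixel_shift(img, c, d):
    """Shift `img` by (c, d) pixels in (x, z) using 1-D linear
    interpolation along each axis in turn (a bilinear sketch)."""
    h, w = img.shape
    z = np.arange(h, dtype=np.float64)
    x = np.arange(w, dtype=np.float64)
    tmp = np.empty((h, w), dtype=np.float64)
    for j in range(w):                       # sample at z + d
        tmp[:, j] = np.interp(z + d, z, img[:, j])
    out = np.empty_like(tmp)
    for i in range(h):                       # then sample at x + c
        out[i, :] = np.interp(x + c, x, tmp[i, :])
    return out

def find_detector_shift(W1, Wj, c_grid, d_grid, rows):
    """Section 2.3 sketch: choose (c, d) minimizing the sum of row-wise
    standard deviations of Wj / W1cd on horizontal segments (here the
    row slice `rows` stands in for the area S-tilde)."""
    best = (np.inf, None, None)
    for c in c_grid:
        for d in d_grid:
            q = Wj / subpixel_shift(W1, c, d)
            s = np.std(q[rows, :-2], axis=1).sum()   # trim edge columns
            if s < best[0]:
                best = (s, c, d)
    return best[1], best[2]
```

Because the experimental shifts are subpixel, a coarse grid of a few fractions of a pixel in each direction is usually sufficient.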

3. Applications and discussion

3.1. Sample 1: plant stem Salix variegata

In order to assess the new method we examine the data collected for the plant stem (Salix variegata) and compare it with two other techniques. For the first technique we select one white-field image [assume it is W1(x,z)] and set A(x,z,ti) by the following formula,

[A(x,z,t_i)={{P_i(x,z)}\over{W_1(x,z)}}\eqno(19)]

for i = [1,2,\ldots,n] [compare with equation (17)[link]], i.e. we have assumed that [I(x,z,t)\equiv I_0(x,z)]. We refer to this as the one-slice correction method.

For the second technique, here referred to as the intermittent correction, each projection is corrected by the formula

[A(x,z,t_i)={{P_i(x,z)}\over{W_j(x,z)}},\eqno(20)]

where Wj(x,z) is the last white-field image taken before the projection Pi under consideration.
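The intermittent correction of equation (20) amounts to pairing each projection with the most recent white field; a sketch, where the function name `intermittent_correct` and the timestamp-based lookup are our own assumptions:

```python
import numpy as np

def intermittent_correct(projections, whites, proj_times, white_times):
    """Equation (20) sketch: divide each projection by the last
    white-field image taken before it.

    projections, whites: stacks of 2-D images; proj_times, white_times:
    acquisition timestamps in increasing order.
    """
    white_times = np.asarray(white_times, dtype=np.float64)
    out = np.empty_like(projections, dtype=np.float64)
    for i, (P, t) in enumerate(zip(projections, proj_times)):
        j = np.searchsorted(white_times, t, side='right') - 1
        out[i] = P / whites[max(j, 0)]
    return out
```

The one-slice correction of equation (19) is the special case in which every projection is divided by the same reference white field.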

Sinograms obtained by applying these established methods as well as the new method are shown in Fig. 6[link]. The corresponding images reconstructed by a filtered back-projection algorithm (Natterer & Wübbeling, 2007[Natterer, F. & Wübbeling, F. (2007). Mathematical Methods in Image Reconstruction, Vol. 5, SIAM Monographs on Mathematical Modeling and Computation. Philadelphia: SIAM.]) are shown in Fig. 7[link].

[Figure 6]
Figure 6
A sinogram for a horizontal slice of a piece of Salix variegata. The one-slice correction (a), the intermittent correction (b) and the adaptive correction (c) are applied.
[Figure 7]
Figure 7
The reconstructed slice of the plant stem using (a) the one-slice correction, (b) the intermittent correction and (c) the new adaptive correction.

Unsurprisingly, the one-slice correction gives the strongest ring artefacts. However, the intermittent correction also shows significant ring and wave artefacts in both the sinogram and the reconstructed slice. To compare these three techniques some metric characterizing the quality of artefact suppression is needed.

For all the projections we can identify the rectangular region [1350,2030]×[20,1780] in which there is only white field, i.e. the stem does not project onto this rectangle. In the case of an ideal correction we should obtain A(x,z,ti) = 1 inside this rectangle. The variation of the standard deviation [\sigma] of A(x,z,ti) inside the rectangle is plotted in Fig. 8(a)[link]. Taken over all the projections, the average standard deviations for the three techniques are: [\sigma_{\rm one}] = 0.0185, [\sigma_{\rm inter}] = 0.0142, [\sigma_{\rm adapt}] = 0.0068. Note that we take the rectangle [1350,2030]×[20,1780] (rather than [1350,2047]×[0,1791]) since for some projections we have to shrink and shift the reference white field W1(x,z) and we have no information about the values of W1(x,z) outside [0,2047]×[0,1791]. As a result, the correction near the image edges is certainly worse than inside the rectangle [1350,2030]×[20,1780].

[Figure 8]
Figure 8
A standard deviation of A(x,z,ti) inside the rectangle [1350,2030]×[20,1780] as a function of projection number. The one-slice correction (blue line, A), the intermittent correction (red line, B) and the adaptive correction (green line, C) are applied. In (a) no filter and in (b) the 13×13 median filter was used.

While these deviations clearly demonstrate the better performance of our algorithm, they do not quantify the ring artefacts. To obtain an indication of this we apply a 13×13 median filter, which suppresses the short-range fluctuations, e.g. those caused by vibrations in the optical system, that cause ring artefacts (see Fig. 8b[link]). As the sample does not project into the rectangle [1350,2030]×[20,1780], the obtained standard deviations fully characterize the quality of wave artefact suppression. In this case [\sigma_{\rm one}] = 0.0164, [\sigma_{\rm inter}] = 0.0115, [\sigma_{\rm adapt}] = 0.0036. So some wave artefacts still persist (see also Fig. 9[link] and compare with Fig. 3[link], where the one-slice correction is applied) but they are reduced by a factor of approximately three to four in comparison with conventional flat-field correction methods.
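The metric used above can be sketched as follows: the standard deviation of the corrected projection in a sample-free rectangle, before and after a median filter that suppresses the short-range (ring-like) fluctuations; the function name `background_std` and the naive sliding-window median are our own assumptions.

```python
import numpy as np

def background_std(A, rect, size=13):
    """Artefact metric sketch for section 3.1: standard deviation of the
    corrected projection A inside a sample-free rectangle, raw and after
    a size x size median filter (the filtered value quantifies the
    remaining wave artefacts). `rect` is ((x0, x1), (z0, z1)) in pixels.
    """
    (x0, x1), (z0, z1) = rect
    roi = A[z0:z1, x0:x1].astype(np.float64)
    raw = roi.std()
    k = size // 2
    padded = np.pad(roi, k, mode='edge')
    # naive sliding-window median; fine for a metric, slow on full images
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    smooth = np.median(win, axis=(2, 3))
    return raw, smooth.std()
```

Ideal correction gives both values equal to zero; sparse spiky deviations (ring-like) raise the raw value only, while smooth deviations (wave-like) survive the median filter and raise both.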

[Figure 9]
Figure 9
A projection cleaned by the adaptive technique.

The coefficients a(ti) and b(ti) are shown in Figs. 10[link] and 11[link]. The absolute values of the vector [c(ti),d(ti)] are shown in Fig. 12[link]. In this case the vibrations of the optical system give only a subpixel shift.

Figure 10
The magnification scale coefficient a(ti) identified for each projection.
Figure 11
The illumination shift coefficient b(ti) (in pixels) identified for each projection.
Figure 12
The absolute value [c^{2}(t_i)+d^{2}(t_i)]^{1/2} (in pixels) of the detector shift of Q(x,z,ti) as a function of projection number.

The behaviour of the functions a(ti) and b(ti) can be explained as follows. A user chooses the energy value; the system then sends pulses to the motors controlling the positions of the monochromator components, so that only rays of the given energy reach the sample. Owing to thermal processes and vibration, the positions of the components drift over time. Once a component has moved too far, the system sends a pulse to the corresponding motor to return the component to its predefined position. Unfortunately, (i) each motor has a finite resolution, (ii) returning to the original energy requires moving several components, and (iii) some time is needed for the system to stabilize. In addition, we have assumed that each image is acquired instantaneously. However, taking one image requires exposing the CCD for some time (in our case typically 300 to 600 ms), which is longer than one `period' of the monochromator vibrations. Therefore [W_j(x,z) \ne W(x,z,\tau_j)] but

[W_j(x,z)={{1}\over{\Delta\tau}}\int\limits_{\tau_j}^{\tau_j+\Delta\tau} W(x,z,t)\,{\rm d}t.\eqno(21)]
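Equation (21) implies that a white field recorded over a finite exposure is a time average of the instantaneous field, so oscillations faster than the exposure window average out. A numerical illustration at a single pixel, with a purely hypothetical oscillation model (period and amplitude are not measured values):

```python
import numpy as np

def recorded_white_field(tau_j, dtau, period=0.1, amplitude=0.05, n=1000):
    """Time average of a model instantaneous white field W(t) over the
    exposure window [tau_j, tau_j + dtau], as in equation (21).  The
    sinusoidal oscillation (period, amplitude) is illustrative only."""
    t = np.linspace(tau_j, tau_j + dtau, n)
    w = 1.0 + amplitude * np.sin(2.0 * np.pi * t / period)
    return float(np.mean(w))
```

A 400 ms exposure spanning four full 100 ms periods averages the oscillation away, whereas a 1 ms exposure near the oscillation peak retains it; this is why the recorded white fields cannot resolve the fast variation.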

The CoolSNAP-K4 camera used at the beamline acquires at most three frames per second, so we cannot determine how the real white field depends on time. Decreasing the exposure time does not help, since the signal-to-noise ratio of the CCD then decreases and the time between frames cannot be reduced to zero. Another possibility is an ultra-fast CMOS camera, but neither the X-ray flux available at the beamline nor the signal-to-noise ratio of a CMOS sensor would allow us to sufficiently improve the intensity resolution over time.

Having acquired a set of white fields with 50 ms exposure time and about 300 ms between shots, we can only suppose that the real intensity profile has some `period' (less than a second) depending on the time needed to stabilize the system. Note that there is no true periodicity in a(ti) and b(ti), as one might infer from Figs. 10 and 11 (e.g. for image numbers greater than 700): we have checked this against other data sets.

3.2. Sample 2: zebrafish Danio rerio

Now we consider a sample whose structure and background have similar attenuation coefficients, so that they are difficult to separate when additional artefacts appear. The sample is a zebrafish Danio rerio. The same projection before and after the proposed correction is shown in Fig. 13; the reconstructed slice and a region of interest are shown in Fig. 14.

Figure 13
Zebrafish (Danio rerio): (a) the original projection, (b) the same projection after the proposed method has been applied; the dashed line indicates the position of the slice reconstructed in Fig. 14.
Figure 14
Zebrafish (Danio rerio): (a) full area and region of interest (inset), (b) one-slice correction, (c) linear interpolation between white fields, (d) the proposed method, (e) and (f) additional ring artefact suppression described by Titarenko et al. (2010) applied to (c) and (d).

We also used linear interpolation between white fields in order to correct the ring artefacts (see Fig. 14c). This method suppresses the ring artefacts well only near several rays; the angle between the rays is about [180^\circ/(m-1)] = 12°. The reason is that only the projections acquired just before or after a white field was taken are cleaned well by the standard flat-field correction with the corresponding white-field image. The method proposed in this paper also does not suppress all artefacts (see Fig. 14d); however, it makes the artefacts more `regular', i.e. their strength depends only weakly on the polar angle. As a result, additional pre-processing based on the method of Titarenko et al. (2010), which was developed to suppress `regular' ring artefacts, allows us to see the difference between the proposed method and the linear-interpolation method more clearly.
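The intermittent correction discussed above can be sketched in a few lines: for a projection acquired at time t, linearly interpolate, pixel by pixel, between the two white fields that bracket t, then normalize. Function and argument names are illustrative, not the authors' implementation:

```python
import numpy as np

def interpolated_flat_field(projection, t, white_fields, white_times):
    """Intermittent flat-field correction: linearly interpolate between the
    two white fields bracketing acquisition time t, then normalize the
    projection by the interpolated field."""
    white_times = np.asarray(white_times, dtype=float)
    j = np.searchsorted(white_times, t)
    j = int(np.clip(j, 1, len(white_times) - 1))
    t0, t1 = white_times[j - 1], white_times[j]
    w = (t - t0) / (t1 - t0)
    flat = (1.0 - w) * white_fields[j - 1] + w * white_fields[j]
    return projection / flat
```

Projections close in time to a measured white field are normalized almost exactly, while those midway between white fields inherit the interpolation error, which is why the suppression is good only near a few projection angles.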

Now we discuss practical considerations for implementing the proposed method and possible ways to improve it. Firstly, consider the suppression of wave artefacts caused by shrinking and shifting of the incident X-ray beam in the vertical direction. To correct the intensity profiles we choose one to three vertical line segments (of height about 90% of the projection height) onto which the sample is never projected during acquisition. Since the number of segments is small, we often choose them manually so that they cross as many `hills' as possible on the intensity profile. Note that the method cannot be used if the sample's shadow is larger than the image sensor. In principle the segments need not remain the same during acquisition. For example, if the sample's horizontal cross sections are elongated, one may choose fewer but better distributed vertical segments (i.e. crossing more, and better separated, `hills' of the intensity profile) when the area covered by the sample's shadow is minimal, and increase the number of segments when the shadow is wide. As mentioned above, the shrinking/shifting of the intensity profile is random. Therefore, if the number m of white fields is small, e.g. <5, it may happen that the shrinking/shifting factors aj, bj, j = [1,2,\ldots,m], have similar values that are far from the factors ai, bi, i = [1,2,\ldots,n], found for the projections. Hence, to find r(x,z,ai,bi) it may be necessary to apply a large number of the operations described by equations (11) and (12) to the known functions r(x,z,aj,bj). However, each such operation introduces additional error into r(x,z,ai,bi), since r(x,z,aj,bj) is known only at discrete values of (x,z) and some interpolation is required. As a result, it is sometimes better to acquire an additional set of white fields, e.g. 100 images after the scan, and select those images having sufficiently different values of aj, bj.
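The interpolation error mentioned above arises because evaluating a discretely sampled profile at rescaled coordinates a·z + b requires interpolation, and chaining many such evaluations compounds the error. A one-dimensional sketch (the function name is illustrative):

```python
import numpy as np

def rescale_profile(profile, a, b):
    """Evaluate a discretely sampled 1D intensity profile at the rescaled
    coordinates a*z + b by linear interpolation.  Each such evaluation
    introduces a small interpolation error, which is why chaining many of
    them (as when composing the operations of equations (11)-(12)) degrades
    accuracy; coordinates outside the sampled range are clamped."""
    z = np.arange(profile.size, dtype=float)
    return np.interp(a * z + b, z, profile)
```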

Secondly, let us consider ring artefacts. We have assumed that the multiplicative function Q(x,z) can only be shifted along a vector [see equation (18)]. However, when a scintillator/optics/CCD system has been used for a long time, some radiation damage occurs and the time dependence of Q(x,z) may be written as

[Q(x,z,t)=Q_{\rm stab}(x,z)Q_{\rm mov}[x+c(t),z+d(t)],\eqno(22)]

where Qstab(x,z) is related to a `stable' component, e.g. the CCD, optical system and scintillator, and Qmov(x,z) is determined by a `moving' component, e.g. the monochromator. For a clean, undamaged optical system the number of small `features' similar to those shown in the inset of Fig. 2 is very small, so Qstab(x,z) is a very smooth function and Qstab(x,z) ≈ [Q_{\rm stab}(\tilde x,\tilde z)] whenever a point [(\tilde x,\tilde z)] is near (x,z); we may then use equation (18). This is the case for the first sample (plant stem Salix variegata). However, if the radiation damage is high and there are many dust particles, dirt or scratches on the surfaces of the optical system, the number of `features' in Qstab(x,z) increases. Unfortunately, proper determination of Qstab(x,z) is not always possible, so shifting the original white field W1(x,z) along the vector (c,d) to find W1cd(x,z) and the ratio Wj(x,z)/W1cd(x,z) as described in §2.3 will suppress the ring artefacts caused by the `moving' component Qmov(x,z) but will introduce new artefacts by shifting the `stable' component Qstab(x,z). This is the case for the zebrafish sample. To overcome this problem, the authors will propose a method to separate Qstab(x,z) and Qmov(x,z) in a forthcoming paper, so that the method will also be applicable to damaged optical systems.
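The two-component model of equation (22) can be sketched numerically: only the `moving' factor follows the beam, so a correct compensation shifts Qmov alone, whereas shifting the full white field also displaces the fixed defects in Qstab and thereby creates new ring artefacts. A minimal sketch under these assumptions:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def model_q(q_stab, q_mov, c, d):
    """Two-component sensitivity model of equation (22): the `stable' factor
    q_stab stays fixed while the `moving' factor q_mov is translated by the
    subpixel vector (c, d) (linear interpolation, edge values replicated)."""
    return q_stab * nd_shift(q_mov, (d, c), order=1, mode='nearest')
```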

4. Conclusions

The proposed white-field correction method, based on a continuous adaptive correction using intermittent white-field measurements, effectively suppresses both ring and wave artefacts. The time required to process a 2000×2000×2000 volume depends on the sizes of the areas and the number of temporary images used in the method; sufficient suppression typically takes less than 10 min on an ordinary Intel dual-core processor. Further improvements are possible and will be discussed in future papers. For instance, ring artefacts could be better suppressed if a subpixel structure of the white-field image were found; this is possible in principle, since there are several white fields shifted by subpixel distances. For the remaining artefacts one could apply suppression algorithms based on sinogram smoothness (see, for example, Titarenko, 2009; Titarenko et al., 2009; Titarenko & Yagola, 2010). Wave artefact suppression could also be improved if the function I0(x,az+b)/I0(x,z) were found for a greater number M of pairs (a,b). While we have focused on parallel-beam synchrotron data, it should in principle be possible to extend the method to cone-beam optics and to other types of time variation affecting the white fields collected during acquisition. Finally, we note that the proposed adaptive correction technique is based on only two types of motion. The next steps should take into account instability of the intensity profile of the beam during acquisition, as well as possible beam-hardening and diffraction effects.

Acknowledgements

The authors would like to thank Dr Walter H. Schröder (Institute Phytosphere, Forschungszentrum Jülich, Germany), who provided the stem sample Salix variegata, and Dr Keith C. Cheng, Darin Clark (Division of Experimental Pathology, Jake Gittlen Cancer Research Foundation, Penn State Cancer Institute, Penn State College of Medicine, Hershey, PA 17033, USA) and Dr Patrick La Rivière (Department of Radiology, The University of Chicago, USA) for allowing us to use a preliminary reconstruction of the zebrafish sample Danio rerio (this research is supported by grant R24RR017441, Web-based Atlas of Zebrafish Microanatomy as a Community Resource, from the NIH). Use of the Advanced Photon Source was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. Valeriy Titarenko is grateful to STFC for funding and Sofya Titarenko to EPSRC for funds through a `Collaborating for Success' grant.

References

Boin, M. & Haibel, A. (2006). Opt. Express, 14, 12071–12075.
Chu, Y. S., Liu, C., Mancini, D. C., De Carlo, F., Macrander, A. T., Lai, B. & Shu, D. (2002). Rev. Sci. Instrum. 73, 1485–1487.
Davis, G. R. & Elliott, J. C. (1997). Nucl. Instrum. Methods Phys. Res. A, 394, 157–162.
De Carlo, F. & Tieman, B. (2004). Proc. SPIE, 5535, 644–651.
De Carlo, F., Xiao, X. & Tieman, B. (2006). Proc. SPIE, 6318, 63180K.
Gonzalez, R. C. & Woods, R. E. (2008). Digital Image Processing, 3rd ed. New Jersey: Pearson Prentice Hall.
Hsieh, J., Gurmen, O. E. & King, K. F. (2000). IEEE Trans. Med. Imag. 19, 930–940.
Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567–8591.
Nakayama, K., Okada, Y. & Fujimoto, H. (1998). J. Synchrotron Rad. 5, 527–529.
Natterer, F. & Wübbeling, F. (2007). Mathematical Methods in Image Reconstruction, Vol. 5, SIAM Monographs on Mathematical Modeling and Computation. Philadelphia: SIAM.
Niederlöhner, D., Nachtrab, F., Michel, T. & Anton, G. (2005). Nucl. Sci. Symp. Conf. Rec. 4, 2327–2331.
Raven, C. (1998). Rev. Sci. Instrum. 69, 2978–2980.
Sadi, F., Lee, S. Y. & Hasan, M. K. (2010). Comput. Biol. Med. 40, 109–118.
Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, N247–N253.
Smith, A. D., Flaherty, J. V., Mosselmans, J. F. W., Burrows, I., Stephenson, P. C., Hindley, J. P., Owens, W. J., Donaldson, G. & Henderson, C. M. B. (2006). Proceedings of the 8th International Conference on X-ray Microscopy, IPAP Conference Series, Vol. 7, pp. 340–342.
Stock, S. R. (2008). MicroComputed Tomography: Methodology and Applications. CRC Press.
Titarenko, S. (2009). British Applied Mathematics Colloquium 2009, p. 102. University of Nottingham, UK.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2009). Appl. Phys. Lett. 95, 071113.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010). J. Synchrotron Rad. 17, 540–549.
Titarenko, S. & Yagola, A. G. (2010). Moscow Univ. Phys. Bull. 65, 65–67.
Tucoulou, R., Martinez-Criado, G., Bleuet, P., Kieffer, I., Cloetens, P., Labouré, S., Martin, T., Guilloud, C. & Susini, J. (2008). J. Synchrotron Rad. 15, 392–398.
Walls, J. R., Sled, J. G., Sharpe, J. & Henkelman, R. M. (2005). Phys. Med. Biol. 50, 4645–4665.
Wang, Y., De Carlo, F., Mancini, D. C., McNulty, I., Tieman, B., Bresnahan, J., Foster, I., Insley, J., Lane, P., von Laszewski, G., Kesselman, C., Su, M.-H. & Thiebaux, M. (2001). Rev. Sci. Instrum. 72, 2062–2068.
Yang, J., Zhen, X., Zhou, L., Zhang, S., Wang, Z., Zhu, L. & Lu, W. (2008). The 2nd International Conference on Bioinformatics and Biomedical Engineering, pp. 2386–2389. IEEE.

© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.
