Improved tomographic reconstructions using adaptive time-dependent intensity normalization
(a) Henry Moseley X-ray Imaging Facility, School of Materials, University of Manchester, Grosvenor Street, Manchester M13 9PL, UK, (b) Department of Mathematics, Faculty of Physics, Moscow State University, Leninskie Gory, Moscow 119991, Russia, and (c) X-ray Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Bldg 401, Argonne, IL 60439-4837, USA
*Correspondence e-mail: valeriy.titarenko@manchester.ac.uk
The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two data sets (a stem of Salix variegata and a zebrafish Danio rerio) acquired with the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source, showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography.
Keywords: attenuation tomography; flat-field correction; intensity normalization; ring artefacts; synchrotron X-rays; parallel beam.
1. Introduction
In synchrotron-based micro-tomography it is common to observe motion of the illumination beam owing to the thermal instability of the optics (Tucoulou et al., 2008; Nakayama et al., 1998; Smith et al., 2006). Together with defects in the imaging system, this causes variations in the background recorded over time. These are known (Raven, 1998) to introduce artefacts into the resulting reconstructions. In this paper the authors propose a means of reducing such artefacts.
By way of an example case we consider such variations on 2-BM, which is a dedicated X-ray micro-tomography beamline (De Carlo et al., 2006; Wang et al., 2001; De Carlo & Tieman, 2004) shown schematically in Fig. 1. Our example data sets comprise projections and white fields collected for a piece of a plant stem Salix variegata and a zebrafish Danio rerio. Typically, the flux on 2-BM is 10^12 photons s−1 mm−2 at 10 keV and the beam size is 4 mm × 25 mm (see Chu et al., 2002). Since the size of the source is very small and the distance between the sample and the source is very large, the beam can be considered essentially parallel.
The X-ray beam delivered from the synchrotron source is very intense. Therefore, both the mirror and the multilayer monochromator are water-cooled. Owing to thermal fluctuations in the cooling system, the profiles of the mirror and the monochromator may change during data acquisition. In practice, the mirror and the multilayer monochromator are not perfect and introduce background features into the illuminating beam. The scintillator and the optical objective may also have scratches or dust on their surfaces and defects inside. The bending magnet on the APS storage ring also possesses some instability and there are small time-dependent perturbations of the CCD sensor with respect to the scintillator. All these features affect the intensity profile recorded with time by the CCD camera. Fig. 2 shows a typical white-field image recorded by the CCD camera and the associated intensity variation down a pixel column. The standard so-called flat-field correction can be written (Stock, 2008) as A = (P − D)/(W − D),
where P, D and W denote a projection (with a sample in the beam), dark (the beam is switched off) and white (with no sample in the beam) field images, respectively. Unfortunately, this approach does not suppress the above-mentioned artefacts, as illustrated by the white-field corrected projection in Fig. 3 and the profile of the quotient of two white fields in Fig. 4. As a result, ring and wave artefacts are evident in reconstructions (e.g. see Fig. 5). Ring artefacts often appear as concentric arcs or half-circles with changing intensity. Since the recorded perturbations fluctuate randomly but smoothly with time, the ring artefacts vary continuously along the angular coordinate. In some instances the standard flat-field correction procedure succeeds in suppressing these artefacts for several neighbouring projections and the ring structures become almost invisible. Wave artefacts appear as smooth hills and valleys over areas extending more than 10% of the width of the sinogram. Here the width and the height of the sinogram mean the lengths along the projection and angle dimensions, respectively. Their smoothness is due to the correlated nature of the perturbations.
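As a concrete illustration, the standard flat-field correction can be sketched in a few lines of numpy. This is a minimal sketch with synthetic counts; the array sizes, count levels and the dead-pixel guard are illustrative assumptions, not beamline values:

```python
import numpy as np

def flat_field_correct(projection, white, dark):
    """Standard flat-field correction A = (P - D) / (W - D).

    projection, white and dark are 2-D arrays of raw counts; the result
    approximates the sample transmission (1.0 where no sample is present).
    """
    num = projection.astype(float) - dark
    den = white.astype(float) - dark
    # guard against dead pixels where W == D (an illustrative choice)
    den = np.where(den <= 0, np.finfo(float).eps, den)
    return num / den

# synthetic example: uniform beam of 1000 counts above a dark level of 100,
# with the sample transmitting 50% in the central 2x2 block
dark = np.full((4, 4), 100.0)
white = np.full((4, 4), 1100.0)
proj = white.copy()
proj[1:3, 1:3] = 100.0 + 0.5 * 1000.0
A = flat_field_correct(proj, white, dark)   # 1.0 outside, 0.5 inside
```

This naive correction is exactly what produces ring and wave artefacts when the white field drifts between its acquisition and that of the projection.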
Ring artefact suppression is a very common task in computed tomography and several methods already exist. These methods can be divided into the following groups:
(a) Suppression is carried out before reconstruction. Assuming that a sinogram should be smooth:
(i) In Boin & Haibel (2006) the mean values y(i) for each column of a sinogram pn(i) are found, the moving average filter is applied to the values found in order to replace y(i) by ys(i), which is the average value of 2N+1 neighbouring values, then the sinogram is normalized by the formula pn*(i) = pn(i) ys(i)/y(i);
(ii) In Walls et al. (2005) the recorded pixel intensities are replaced by the mean of their eight neighbours in all views;
(iii) In Münch et al. (2009) a wavelet-FFT filter is applied;
(iv) In Sadi et al. (2010) an iterative centre-weighted median filter is proposed;
(v) In Titarenko et al. (2010) a Tikhonov functional is used to find the solution.
(b) Suppression is carried out after the image has been reconstructed (see, for example, Sijbers & Postnov, 2004; Yang et al., 2008). In Sijbers & Postnov (2004) a reconstructed image is transformed into polar coordinates, where a set of homogeneous rows is detected. From the set an artefact template is generated using a median filter; the template is subtracted from the image in polar coordinates, which is then transformed back into Cartesian coordinates.
(c) Modification of an experimental procedure or hardware:
(i) In Davis & Elliott (1997) the detector is moved laterally during acquisition;
(ii) In Niederlöhner et al. (2005) a photon-counting hybrid pixel detector is described. The authors also investigated flat-field statistics, which have been used to develop an algorithm calculating additional correction factors for each pixel;
(iii) In Hsieh et al. (2000) a solid-state detector is introduced and a correction scheme based on recursive filtering is used to compensate for detector afterglow; as a result ring artefacts have also been suppressed.
Of course, ring artefact suppression methods can combine ideas from several groups. For example, the method proposed by Titarenko et al. (2009) is based on both pre- and post-processing ideas.
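To make group (a) concrete, the column-mean normalization of Boin & Haibel (2006) summarized in (a)(i) can be sketched as follows. This is a minimal numpy version in which the moving average is zero-padded at the sinogram edges, an implementation assumption rather than part of the original method:

```python
import numpy as np

def normalize_sinogram_columns(sino, N=2):
    """Ring suppression after Boin & Haibel (2006): y(i) is the mean of
    sinogram column i, ys(i) its moving average over 2N+1 neighbours,
    and each column is rescaled by ys(i)/y(i)."""
    y = sino.mean(axis=0)                     # mean value of each column
    kernel = np.ones(2 * N + 1) / (2 * N + 1)
    ys = np.convolve(y, kernel, mode="same")  # smoothed column means
    return sino * (ys / y)[np.newaxis, :]

# a flat sinogram with one over-responding column (a ring artefact)
sino = np.ones((4, 9))
sino[:, 4] = 2.0
out = normalize_sinogram_columns(sino)        # column 4 pulled towards 1
```

The over-responding column is pulled towards the level of its neighbours, which is precisely the smoothness assumption these pre-processing methods rely on.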
The main stumbling block for any ring artefact suppression method is that artefacts may not be removed completely while real features may be suppressed. Consider one example: let the rotation axis of a sample pass through a very dense particle; then the particle is always projected onto the same pixel of the CCD camera and there is a jump in intensity near this pixel for all projections. If the number of other dense particles in the sample is low, then one may wrongly suppose the pixel to record incorrect values and therefore suppress real features. Similar things may be observed when plastic or glass tubes are used to protect a biological sample and their axes are near the rotation axis of the sample. Therefore some so-called a priori information about a sample is definitely required in order to distinguish real features from artefacts. In addition, different ring artefacts on an image may have a different nature and therefore should be removed in different ways. For example, a piece of dirt on a scintillator may decrease the intensity along an X-ray in a standard way, i.e. the intensity decreases by a factor exp(−μd) (μ and d are the attenuation coefficient and the thickness of the dirt) and does not depend on the total flux of the beam, while a crack inside the scintillator may emit additional light with intensity proportional to the total flux; so if one has an asymmetric sample then the strength of the corresponding artefact depends on the sample's orientation. Therefore we think that it is not feasible to develop a `general' ring artefact suppression algorithm applicable to any sample and imaging facility. However, one may propose a strategy that should be adapted to a particular beamline. The aim of this paper is to propose a method of suppressing ring and wave artefacts which is based on some a priori information about the experimental set-up and the nature of the perturbations causing the artefacts.
2. Mathematical framework
2.1. Notation
x and z are the horizontal and vertical coordinates perpendicular to the beam.
n and m are the number of projections and white fields.
Index i is related to a projection, while j is related to a white field.
t denotes a moment of time; the ith projection is taken at ti, the jth white field at τj (to distinguish projections and white fields we use different symbols, t and τ).
P and W denote a projection or white field after the dark field has been subtracted from each one; we also use the symbols P(x,z,t) and W(x,z,t) in the case of a continuous time variable t and Pi(x,z) = P(x,z,ti), Wj(x,z) = W(x,z,τj) in the case of a discrete time variable.
I(x,z,t) describes the intensity profile of the X-ray beam incident on the sample.
I0(x,z) = I(x,z,0), i.e. the intensity at the initial moment of time t = 0.
r(x,z,a,b) = I0(x,az+b)/I0(x,z), where a and b are numbers [for simplicity we sometimes omit x, i.e. use r(z,a,b) and I0(z)].
a(t) and b(t) are some unknown random functions; they describe how the intensity profile of the beam changes its value over time.
A(x,z,t) is the attenuation factor of the sample, i.e. the ratio of intensities of the X-ray beam after and before the sample for the ray passing through the point (x,z) on the CCD camera at the moment t (the sample is rotating, so the factor is a time-dependent function).
Q(x, z, t) is a multiplicative function defined by dust, dirt on the surfaces of the scintillator and CCD camera; we also use symbol Q(x,z) when the function does not depend on time.
Q0(x,z) = Q(x,z,0).
Functions c(t) and d(t) describe possible horizontal and vertical shifts of the optical system.
w and h are the width and the height of the CCD sensor; the sensor records only the intensity for 0 ≤ x ≤ w and 0 ≤ z ≤ h.
S, S1, S2 and S* are some areas on the CCD sensor, i.e. subsets of [0,w]×[0,h].
2.2. A simplified case
To a first approximation, it is assumed that there are only perturbations of the bending magnet source, the mirror and the monochromator. These cause the intensity profile of the X-ray beam incident on the sample to change linearly by a shift/stretch along the vertical axis z (this is the typical case in practice, but other variations could be modelled): I(x,z,t) = I0[x, a(t)z + b(t)],
where a(t) and b(t) are unknown random functions. In this paper, for any two moments t1 and t2 (t1 ≠ t2) the corresponding values a(t1), a(t2) and b(t1), b(t2) are supposed to be uncorrelated, i.e. the values at t2 do not depend on the values at t1. Of course, in real experiments one may find some rule making it possible to predict/estimate the values of a(t) and b(t) at later moments.
The intensity profile I(x,z,t) only stretches and does not reflect I0(x,z); therefore the stretch factor a(t) is a positive function. The case a(t) = 0 is also impossible in practice, so it is supposed that the stretch factor is never close to zero. Hence a(t) is a strictly positive function, i.e. there is a positive constant a* such that a(t) ≥ a* > 0. Note that such a constant does not exist for every positive function; a(t) = exp(−1/t) for t > 0 is such an example. For real samples the authors found that a(t) varied only slightly about 1, e.g. for the stem sample (see Fig. 10). Thus it is always possible to choose any moment t0 and suppose that at this moment I0(x,z) = I(x,z,t0). This is true, since one can always change the variable z linearly, i.e. replace a(t0)z+b(t0) by z, so that I0[x,a(t0)z+b(t0)] = I0(x,z).
In this case the scintillator, the objective and the CCD camera are assumed to be stable, i.e. there are no vibrations, and the white field W(x,z,t) varies according to W(x,z,t) = Q(x,z) I0[x, a(t)z + b(t)],
where Q(x,z) is defined by dust and dirt on the surfaces of the scintillator and the CCD camera. It is difficult to determine Q(x,z) directly. The measured white fields can be described by W(x,z,τ) = Q(x,z) I0[x, a(τ)z + b(τ)].
For a sample rotating in the X-ray beam the intensity measured by the camera can be written as P(x,z,t) = Q(x,z) A(x,z,t) I0[x, a(t)z + b(t)],
where A(x,z,t) describes the attenuation properties of the sample, from which the tomographic structure of the sample can be reconstructed. Of course, A(x,z,t) depends on time, since the sample rotates during the experiment. This can be rewritten in the form P(x,z,t) = A(x,z,t) W(x,z,t).
In collecting our test data at the APS the following measurement scheme was used: a projection was recorded every 0.125° and one white-field image was recorded before every 100 ordinary projections until 180° rotation was complete, after which one additional white-field image was taken. Therefore m = 16 and the number of projections n = 1441.
Let a white field be taken at time τ and a projection at time t. The symbols τ, τj are used for white-field images and t, ti for projections. In our experiments these times were different, i.e. τj ≠ ti even if j = i. Then P(x,z,t)/W(x,z,τ) = A(x,z,t) I0[x, a(t)z + b(t)] / I0[x, a(τ)z + b(τ)].
Our aim is to obtain the attenuation factor A(x,z,t), which is used in the reconstruction. The functions P(x,z,t) and W(x,z,τ) are measured in experiments but the function I0(x,z) cannot be found directly.
2.2.1. Some properties
Let us choose a reference white field. For simplicity one may suppose without loss of generality that it is taken at τ1 and that a(τ1) = 1, b(τ1) = 0 [see the discussion after equation (2)]. Denote the white fields by Wj(x,z) = W(x,z,τj), j = 1,…,m, and the projections by Pi(x,z) = P(x,z,ti), i = 1,…,n. For the white-field images we use coefficients aj = a(τj), bj = b(τj), and for the projections ai = a(ti), bi = b(ti) are used. Then from (8) the attenuation factor A(x,z,t) can be found at t = ti: A(x,z,ti) = [Pi(x,z)/W1(x,z)] [I0(x,z)/I0(x,aiz+bi)].
One approach to finding (ai,bi) is to approximate I0(x,aiz+bi)/I0(x,z) with W(x,aiz+bi)/W(x,z). However, this approximation may cause additional artefacts as Q(x,aiz + bi) is not always equal to Q(x,z) over the entire image. Therefore, a new approach is needed.
For simplicity the variable x is temporarily omitted, i.e. I0(x,z) becomes I0(z) defined on [0,h], and a function r(z,a,b) = I0(az+b)/I0(z) is defined. Assuming the function I0(z) is unknown, and a and b are known, three properties of r(z,a,b) arising from (10) can be written.
(i) Let z′ = az+b, then z = a*z′ + b*, where a* = 1/a, b* = −b/a and r(z′,a*,b*) = 1/r(z,a,b).
(ii) Let r(z,a,b) be known for two pairs (a1, b1) and (a2, b2). Then r(z,a1,b1) r(a1z+b1,a2,b2) equals r(z, a1a2, a2b1+b2), i.e. r(z,a,b) can be found for a = a1a2, b = a2b1+b2.
(iii) If I0(z) is strictly positive [there is I* > 0 such that I0(z) ≥ I* for all z] and differentiable with a bounded derivative, then r(z,a,b) is close to 1 whenever a is close to 1 and |b| is small, since for real data a(t) ≈ 1 and b(t) ≈ 0, i.e. the argument shift (a−1)z + b is small.
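These properties are easy to check numerically. The sketch below verifies (i) and (ii) for a smooth, strictly positive test profile; the sinusoidal I0 is only an illustrative stand-in for the real beam profile:

```python
import numpy as np

def I0(z):
    return 2.0 + np.sin(z)            # strictly positive test profile

def r(z, a, b):
    return I0(a * z + b) / I0(z)      # the ratio r(z, a, b)

z = np.linspace(0.0, 1.0, 11)

# (i) inversion: with a* = 1/a, b* = -b/a, r(az+b, a*, b*) = 1/r(z,a,b)
a, b = 1.05, -0.02
lhs = r(a * z + b, 1.0 / a, -b / a)
assert np.allclose(lhs, 1.0 / r(z, a, b))

# (ii) composition: r(z,a1,b1) r(a1 z + b1, a2, b2) = r(z, a1 a2, a2 b1 + b2)
a1, b1, a2, b2 = 1.02, 0.01, 0.97, -0.03
prod = r(z, a1, b1) * r(a1 * z + b1, a2, b2)
assert np.allclose(prod, r(z, a1 * a2, a2 * b1 + b2))
```

Both identities follow directly by substituting the definition of r and cancelling the intermediate I0 factor.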
Then redefine the function with x restored: r(x,z,a,b) = I0(x,az+b)/I0(x,z). If the variable x is fixed, then the new function r(x,z,a,b) has all the properties described above. Since Wj(x,z)/W1(x,z) = Q(x,z)I0(x,ajz+bj)/[Q(x,z)I0(x,z)] = r(x,z,aj,bj), on the whole image [0,w]×[0,h] one can find r(x,z,a,b) for the pairs (a,b), where a = aj and b = bj, j = 2,…,m.
2.2.2. A method to find the stretch factors
So far we have supposed that the pairs (aj,bj) are known. Now we discuss how to find them. Assume that there is an area S, where Q(x,z) = 1. This means that there is an area where the detector is relatively free of dust, dirt or other perturbing influences, or where the intensity can easily be corrected. For example, the structures seen in the top right part of Fig. 2 could be successfully decreased after applying a median filter (Gonzalez & Woods, 2008). So one may assume that there is an area S where W1(x,z) = I0(x,z).
Suppose that there are constants amin, amax, bmin, bmax such that, at any moment of time t, amin ≤ a(t) ≤ amax and bmin ≤ b(t) ≤ bmax. The following is usually true in practice: there is a region S1 inside S in which Q(x,az+b) = 1 at any given point (x,z) ∈ S1. In region S1, W1(x,az+b) = I0(x,az+b) and W1(x,az+b)/W1(x,z) = I0(x,az+b)/I0(x,z). For convenience, S1 should not be close to the edges of the images, so that, for any point (x,z) ∈ S1 and any numbers a ∈ [amin,amax] and b ∈ [bmin,bmax], the point (x,az+b) ∈ S.
Choose K pairs (ak,bk), k = 1,…,K, with ak ∈ [amin,amax], bk ∈ [bmin,bmax]. Note that there is no correspondence between (ak, bk) and (aj,bj) or (ai,bi), i.e. the real values of a(t) and b(t) when the jth white-field image or the ith projection were acquired, (aj,bj) and (ai,bi), are still unknown. Let us introduce uniform grids on [amin, amax] and [bmin, bmax] with K1 and K2 grid points, so K = K1K2. We are going to determine (ai, bi) and (aj, bj) from the (ak,bk).
Let us take an area S2 inside S1 and compare the functions gk(x,z) = W1(x, akz+bk)/W1(x,z), k = 1,…,K, with the function fj(x,z) = Wj(x,z)/W1(x,z) for j = 2,…,m, defined on S2. In practice we have noticed that several vertical segments in the images can usually be taken as S2. These segments are chosen in such a way that W1(x,az+b) has significant intensity fluctuations and these functions are easy to distinguish for different pairs (a,b). The accuracy of determining these pairs will be better when the profiles of W1(x,z) are different on different segments belonging to S2.
We calculate gk/fj for each (ak,bk) ∈ [amin,amax] × [bmin,bmax] and find the corresponding standard deviations on S2. The optimal (ak,bk) is found when the minimal standard deviation is obtained. In the ideal case the optimal (ak,bk) corresponds to zero deviation; in practice, however, the standard deviation corresponding to the optimal (ak,bk) is typically non-zero.
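The grid search above can be sketched for a single vertical segment as follows. This is a numpy sketch in which linear interpolation stands in for evaluating W1 at non-integer positions; the profile, grid ranges and grid resolutions are illustrative assumptions:

```python
import numpy as np

def find_stretch(W1_seg, Wj_seg, z, a_grid, b_grid):
    """Return the (a, b) pair minimizing the standard deviation of g/f
    on the segment, where g(z) = W1(az+b)/W1(z) and f(z) = Wj(z)/W1(z)."""
    f = Wj_seg / W1_seg
    best, best_pair = np.inf, (1.0, 0.0)
    for a in a_grid:
        for b in b_grid:
            g = np.interp(a * z + b, z, W1_seg) / W1_seg
            s = float(np.std(g / f))
            if s < best:
                best, best_pair = s, (a, b)
    return best_pair

# synthetic test: Wj is W1 stretched by a = 1.02 and shifted by b = -1.0
z = np.arange(200, dtype=float)
W1 = 1000.0 + 100.0 * np.sin(z / 7.0)       # wavy white-field profile
a_grid = np.linspace(0.98, 1.04, 31)
b_grid = np.linspace(-2.0, 2.0, 21)
a_true, b_true = a_grid[20], b_grid[5]      # approximately (1.02, -1.0)
Wj = np.interp(a_true * z + b_true, z, W1)
a_est, b_est = find_stretch(W1, Wj, z, a_grid, b_grid)
```

At the true pair the quotient g/f is exactly constant, so its standard deviation vanishes and the grid search recovers (a, b); with noisy data the minimum is non-zero, as noted above.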
2.2.3. Correction of intensity profiles
After the coefficients aj and bj, j = 2,…,m, have been identified for all white-field images, the function r(x,z,a,b) can be found for m-1 pairs (aj,bj), j = 2,…,m. For any of these pairs one can find 1/r(x,z,a,b) and therefore r(x,z,a*,b*), where a* = 1/a, b* = -b/a [see equation (11)]. So r(x,z,a,b) can be identified for m-1 new pairs of (a,b), i.e. r(x,z,a,b) is found for 2(m-1) pairs in total. In a similar way one can identify r(x,z,a,b) for another (2m-2)^2 pairs if equation (12) is applied. So there are (2m-2)^2+(2m-2) pairs. And again one may use equations (11) and (12) to identify r(x,z,a,b) for other pairs (a,b), and so on. Suppose I0(x,az+b)/I0(x,z) is found for some number M of pairs (a,b). Based on the smoothness of r(x,z,a,b) one may expect that the M functions found allow one to find I0(x,aiz+bi)/I0(x,z) for all i = 1,…,n with good accuracy.
Some of these pairs (a,b) may have similar values. Strictly speaking, a good approximation cannot be guaranteed if a given (ai,bi) is far away from the known (aj,bj) pairs. However, the white fields taken evenly during the data acquisition guarantee that the modes of the beam motion are well sampled. The authors expect the set of (a,b) pairs found in the above manner can cover (a,b) pairs for all projections. This is indeed verified in our experimental data processing. In principle, if the pair (a,b) for one projection is out of the range of all M pairs (a,b), one can always perform a calculation with equations (11) and (12) on a subset of the M identified functions to extend the definition range of the (a,b) pairs.
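The bookkeeping of pairs generated by equations (11) and (12) amounts to inverting and composing affine maps; a small sketch (the starting pairs are invented for illustration):

```python
# each known ratio r(x,z,a,b) is indexed by its pair (a,b);
# property (i) yields the pair (1/a, -b/a), property (ii) the pair
# (a1*a2, a2*b1 + b2)
def invert(pair):
    a, b = pair
    return (1.0 / a, -b / a)

def compose(p1, p2):
    (a1, b1), (a2, b2) = p1, p2
    return (a1 * a2, a2 * b1 + b2)

# starting from m - 1 = 2 pairs measured from white fields...
known = [(1.03, 0.5), (0.98, -0.2)]
expanded = list(known) + [invert(p) for p in known]        # 2(m-1) pairs
expanded += [compose(p, q) for p in expanded for q in expanded]
# ...giving (2m-2) + (2m-2)^2 = 20 pairs, as in the text

# composing a pair with its inverse gives the identity pair (1, 0)
a_id, b_id = compose(invert((1.03, 0.5)), (1.03, 0.5))
```

Since the affine maps form a group under these two operations, repeated application can reach pairs far from the measured ones, at the cost of accumulating interpolation error as discussed above.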
A similar approach can be used to find (ai,bi). In this case an area S*, where A(x,z,ti) = 1, should be selected. Then Pi(x,z)/W1(x,z) is compared with the M identified functions on S*. Once ai and bi are found, then A(x,z,ti) = Pi(x,z)/[W1(x,z) r(x,z,ai,bi)]
on [0,w]×[0,h].
2.3. A general case
In the above, the multiplicative function Q(x,z) in equation (3) is assumed to be time independent, which is not strictly true. Here, possible vibrations in the optical detection system, which comprises the scintillator, the objective and the CCD camera, are taken into account. Assume the following time-dependence: Q(x,z,t) = Q0[x + c(t), z + d(t)].
This means that Q(x,z,t) shifts the image relative to the CCD camera over time. Let us also set c(0) = 0, d(0) = 0.
In this paragraph white-field images are considered; projections can be corrected in a similar way. We have found that the vector [c(t),d(t)] in the experimental images is usually just a fraction of a pixel in magnitude. Let us choose an area where I0(x,z) is almost constant along each horizontal segment belonging to it. Taking various pairs (c,d), we shift W1(x,z) along the vector (c,d) to obtain an image W1cd(x,z) and find the quotient Wj(x,z)/W1cd(x,z). Then the mean value and the standard deviation are found on each horizontal segment. The goal is to minimize the sum of all these standard deviations. Suppose the minimal sum is obtained at cj = c(τj) and dj = d(τj); W1cd(x,z) is therefore defined as Q0(x+cj,z+dj)I0(x+cj,z+dj). Instead of Wj(x,z)/W1(x,z) in the simplified case, Wj(x,z)/W1cd(x,z) should be employed to find (aj,bj) in this general case. Similarly, W1cd(x,z) can be found for each projection image, which is then employed to find (ai, bi).
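The shift estimation of this section can be sketched with a numpy-only bilinear shift and an exhaustive grid over (c,d). The white-field pattern, the image size and the shift grid are illustrative assumptions:

```python
import numpy as np

def shift_image(img, c, d):
    """Bilinear interpolation of img at positions shifted by (c, d),
    i.e. approximately img(x + c, z + d); edges are clamped."""
    h, w = img.shape
    x = np.clip(np.arange(w) + c, 0, w - 1)
    z = np.clip(np.arange(h) + d, 0, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    z0 = np.floor(z).astype(int); z1 = np.minimum(z0 + 1, h - 1)
    fx, fz = x - x0, z - z0
    top = img[np.ix_(z0, x0)] * (1 - fx) + img[np.ix_(z0, x1)] * fx
    bot = img[np.ix_(z1, x0)] * (1 - fx) + img[np.ix_(z1, x1)] * fx
    return top * (1 - fz)[:, None] + bot * fz[:, None]

def find_shift(W1, Wj, grid):
    """Grid search for (c, d): minimize the summed per-row standard
    deviation of Wj / W1cd."""
    best, best_cd = np.inf, (0.0, 0.0)
    for c in grid:
        for d in grid:
            q = Wj / shift_image(W1, c, d)
            s = np.std(q, axis=1).sum()   # std on each horizontal segment
            if s < best:
                best, best_cd = s, (c, d)
    return best_cd

# synthetic white fields: Wj is W1 shifted by a subpixel vector
zz, xx = np.mgrid[0:40, 0:40]
W1 = 1000.0 + 50.0 * np.sin(xx / 3.0) + 30.0 * np.cos(zz / 5.0)
grid = np.linspace(-0.6, 0.6, 7)
c_true, d_true = grid[5], grid[2]          # approximately (0.4, -0.2)
Wj = shift_image(W1, c_true, d_true)
c_est, d_est = find_shift(W1, Wj, grid)
```

At the correct subpixel vector the quotient Wj/W1cd is constant on each row, so the summed row-wise standard deviations vanish and the search recovers (c, d).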
3. Applications and discussion
3.1. Sample 1: plant stem Salix variegata
In order to assess the new method we examine the data collected for the plant stem (Salix variegata) and compare it with two other techniques. For the first technique we select one white-field image [assume it is W1(x,z)] and set A(x,z,ti) by the formula A(x,z,ti) = Pi(x,z)/W1(x,z)
for i = 1,…,n [compare with equation (17)], i.e. we have assumed that the white field does not change during acquisition, W(x,z,t) ≡ W1(x,z). We refer to this as the one-slice correction method.
For the second technique, here referred to as the intermittent correction, each projection is corrected by the formula A(x,z,ti) = Pi(x,z)/Wj(x,z),
where Wj(x,z) is the last white-field image taken before the projection Pi under consideration.
Sinograms obtained by applying these established methods as well as the new method are shown in Fig. 6. The corresponding images reconstructed by a filtered back-projection algorithm (Natterer & Wübbeling, 2007) are shown in Fig. 7.
Unsurprisingly, the one-slice correction gives the strongest ring artefacts. However, the intermittent correction also shows significant ring and wave artefacts in both the sinogram and the reconstructed slice. To compare these three techniques some metric characterizing the quality of artefact suppression is needed.
For all the projections we could find a rectangular region [1350,2030]×[20,1780] containing only white field, i.e. the stem does not project onto this rectangle. In the case of an ideal correction we should obtain A(x,z,ti) = 1 inside this rectangle. The variation of the standard deviation of A(x,z,ti) inside the rectangle is plotted in Fig. 8(a). Taken over all the projections, the average standard deviations for the three techniques are 0.0185 (one-slice correction), 0.0142 (intermittent correction) and 0.0068 (the proposed method). Note that we take the rectangle [1350,2030]×[20,1780] (rather than [1350,2047]×[0,1791]) since for some projections we have to shrink and shift the reference white field W1(x,z) and we have no information about the values of W1(x,z) outside [0,2047]×[0,1791]. As a result, the correction near the image edges is definitely worse than inside the rectangle [1350,2030]×[20,1780].
While these deviations clearly demonstrate the better performance of our algorithm, they do not quantify the ring artefacts. To obtain an indication of this we apply a 13×13 median filter, which suppresses the short-range fluctuations (e.g. those caused by vibrations in the optical system) that cause ring artefacts (see Fig. 8b). As the sample does not project into the rectangle [1350,2030]×[20,1780], the obtained standard deviations fully characterize the quality of wave artefact suppression. In this case the values are 0.0164 (one-slice), 0.0115 (intermittent) and 0.0036 (the proposed method). So some wave artefacts still persist (see also Fig. 9 and compare with Fig. 3, where the one-slice correction is applied) but they are decreased approximately three- to four-fold in comparison with conventional flat-field correction methods.
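The two quality metrics used here (background standard deviation, with and without median filtering) can be sketched as follows. For brevity the sketch uses a 3×3 median filter instead of the 13×13 one, and the synthetic `projection' is invented for illustration:

```python
import numpy as np

def background_std(A, rect):
    """Standard deviation of the corrected projection A over a
    background rectangle rect = (x0, x1, z0, z1) where, ideally,
    A(x,z,ti) = 1."""
    x0, x1, z0, z1 = rect
    return float(np.std(A[z0:z1, x0:x1]))

def median3(img):
    """3x3 median filter (the paper uses 13x13); the output drops the
    one-pixel border.  It removes the short-range fluctuations that
    produce ring artefacts before wave artefacts are measured."""
    h, w = img.shape
    stack = [img[1 + dz:h - 1 + dz, 1 + dx:w - 1 + dx]
             for dz in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(stack), axis=0)

# synthetic background: flat field of 1.0 with one ring-like spike
A = np.ones((12, 12))
A[6, 6] = 1.3
raw = background_std(A, (0, 12, 0, 12))                # sees the spike
smoothed = background_std(median3(A), (0, 10, 0, 10))  # spike removed
```

Comparing the raw and median-filtered deviations separates short-range (ring) from long-range (wave) residuals, which is how the two sets of numbers above were obtained.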
The coefficients a(ti) and b(ti) are shown in Figs. 10 and 11. The absolute values of the vector [c(ti),d(ti)] are shown in Fig. 12. In this case the sample vibrations give only a subpixel shift.
The behaviour of the functions a(ti) and b(ti) can be explained in the following way. A user chooses the energy value, then the system sends pulses to motors controlling the position of components of the monochromator, so that only rays of the given energy penetrate a sample. Owing to thermal processes and vibration the positions of the components change over time. Once one component has changed its position too much, the system sends a pulse to the corresponding motor in order to return the component to the predefined position. Unfortunately, (i) each motor has a certain resolution, (ii) returning to the original energy requires moving several components, and (iii) some time is needed to stabilize the system. In addition, we have assumed that each image is acquired during a very small time. However, to take one image we need to expose the CCD for some time (in our cases it often varies from 300 to 600 ms), which is more than a `period' of the vibrations of the monochromator. Therefore each recorded image is not an instantaneous snapshot of the intensity profile but an average of that profile over the exposure time.
The CoolSNAP-K4 camera used at the beamline allows us to acquire at most three frames per second, so we cannot determine how the real white field depends on time. In addition, decreasing the exposure time does not help, since the signal-to-noise ratio of the CCD sensor decreases and the time between frames cannot be decreased to zero. Another possibility is to use an ultra-fast CMOS camera, but this will not help either, since both the flux of the X-ray beam available at the beamline and the signal-to-noise ratio of the CMOS sensor will not allow us to sufficiently improve the intensity resolution over time. After acquiring a set of white fields with 50 ms exposure time and about 300 ms between shots, we may only suppose that the real intensity profile has some `period' (less than a second) depending on the time needed to stabilize the system. Note that there is no `periodicity' for a(ti) and b(ti), as one might suggest from Figs. 10 and 11 (e.g. when the image numbers are greater than 700): we have checked other data sets.
3.2. Sample 2: zebrafish Danio rerio
Now we consider a sample where the structure and the background have similar attenuation coefficients, so it is difficult to separate them when additional artefacts appear. The sample is a zebrafish Danio rerio. The same projection before and after the correction proposed in the paper is shown in Fig. 13; the reconstructed slice and region of interest are shown in Fig. 14.
We also used a linear interpolation of white fields in order to correct the ring artefacts (see Fig. 14d). The method suppresses the ring artefacts well only near several rays; the angle between the rays is about 12°. This is due to the fact that only projections acquired just before or after a white field has been taken are cleaned well using the standard flat-field correction technique with the corresponding white-field image. The method proposed in the paper also does not suppress all artefacts (see Fig. 14e); however, it makes the artefacts more `regular', i.e. their strength depends weakly on the polar angle. As a result, additional pre-processing based on the method of Titarenko et al. (2010), which was developed for suppression of `regular' ring artefacts, allows us to see the difference between the proposed method and the method based on linear interpolation more clearly.
Now we discuss practical considerations for implementing the proposed method and possible ways to improve it. Firstly, let us discuss suppression of wave artefacts caused by shrinking and shifting of the incident X-ray beam in the vertical direction. To correct intensity profiles we choose one to three vertical line segments (their height is about 90% of the projections' height) such that the sample is never projected onto these segments during acquisition. Since the number of segments is small we often choose them manually, so that they cross as many `hills' as possible on the intensity profile. Note that the method cannot be used if the sample's shadow is larger than the image sensor. In principle the segments need not be the same throughout acquisition. For example, if a sample's horizontal cross-sections are elongated, one may choose fewer but better distributed vertical segments (i.e. segments crossing more, and better separated, `hills' on the intensity profile) when the area covered by the sample's shadow is minimal, and increase the number of vertical segments when the shadow is wide. As mentioned above, shrinking/shifting of the intensity profile is random. Therefore, if the number m of white fields is small, e.g. <5, then it may happen that the shrinking/shifting factors aj, bj, j = 1,…,m, have similar values which are far away from the factors ai, bi, i = 1,…,n, found for the projections. Hence, to find r(x,z,ai,bi) it may be necessary to apply a large number of the operations described by equations (11) and (12) to the known functions r(x,z,aj,bj). However, increasing the number of these operations causes additional errors in r(x,z,ai,bi), since r(x,z,aj,bj) are known only for discrete values of (x,z) and some interpolation is required. As a result, it is sometimes better to acquire an additional set of white fields, e.g. 100 images after the scan, and select those images having sufficiently different values of aj, bj.
Secondly, let us consider ring artefacts. We have assumed that the multiplicative function Q(x,z) can only be shifted along a vector [see equation (18)]. However, when a scintillator/optics/CCD has been used for a long time, some radiation damage occurs and the time-dependence of Q(x,z) may be written as Q(x,z,t) = Qstab(x,z) Qmov[x + c(t), z + d(t)],
where Qstab(x,z) is related to a `stable' component, e.g. the CCD, optical system and scintillator, and Qmov(x,z) is determined by a `moving' component, e.g. the monochromator. In the case of a clean undamaged optical system the number of small `features' similar to those shown in the inset of Fig. 2 is very small; therefore Qstab(x,z) is a very smooth function and Qstab(x',z') ≃ Qstab(x,z) if the point (x',z') is near (x,z), so we may use equation (18). This is true for the first sample (plant stem Salix variegata). However, if the radiation damage is high and there are a lot of dust/dirt/scratches on the surfaces of the optical system, the number of `features' of Qstab(x,z) is increased. Unfortunately, proper determination of Qstab(x,z) is not always possible; therefore shifting the original white field W1(x,z) along the vector (c,d) to find W1cd(x,z) and the ratio Wj(x,z)/W1cd(x,z) as described in §2.3 will suppress ring artefacts caused by the `moving' component Qmov(x,z) but introduce new artefacts caused by shifting the `stable' component Qstab(x,z). This is true for the zebrafish sample. To overcome this problem the authors will propose a method to separate Qstab(x,z) and Qmov(x,z) in a forthcoming paper, so that the method will also be applicable to damaged optical systems.
4. Conclusions
The proposed white-field correction method, based on a continuous adaptive correction using intermittent white-field measurements, effectively suppresses both ring and wave artefacts. The time required to process a 2000×2000×2000 volume depends on the sizes of the areas and the number of temporary images used in the method; however, sufficient suppression takes less than 10 min on an ordinary Intel dual-core processor. Further improvements are possible and will be discussed in future papers. For instance, ring artefacts could be better suppressed if a subpixel structure of the white-field image is found. This is possible in principle, since there are several white fields shifted by subpixel lengths. For remaining artefacts it would be possible to apply suppression algorithms based on sinogram smoothness (see, for example, Titarenko, 2009; Titarenko et al., 2009; Titarenko & Yagola, 2010). Wave artefact suppression could also be improved if the function I0(x,az+b)/I0(x,z) is found for a greater number M of pairs (a,b). While we have focused on parallel beam synchrotron data, it should in principle be possible to extend the method to cone beam optics and other types of time variations that affect the white fields collected during acquisition. Finally, we should mention that the proposed adaptive correction technique is based on only two types of motion. The next steps should take into account instability of the intensity profile of the beam during acquisition, as well as possible beam-hardening and diffraction effects.
Acknowledgements
The authors would like to thank Dr Walter H. Schröder (Institute Phytosphere, Forschungszentrum Jülich, Germany), who provided the stem sample Salix variegata, and Dr Keith C. Cheng, Darin Clark (Division of Experimental Pathology, Jake Gittlen Cancer Research Foundation, Penn State Cancer Institute, Penn State College of Medicine, Hershey, PA 17033, USA) and Dr Patrick La Rivière (Department of Radiology, The University of Chicago, USA) for making available a preliminary reconstruction of the zebrafish sample Danio rerio (this research is supported by grant R24RR017441, Web-based Atlas of Zebrafish Microanatomy as a Community Resource, from the NIH). Use of the Advanced Photon Source was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. Valeriy Titarenko is grateful to STFC for funding and Sofya Titarenko to EPSRC for funds through a `Collaborating for Success' grant.
References
Boin, M. & Haibel, A. (2006). Opt. Express, 14, 12071–12075.
Chu, Y. S., Liu, C., Mancini, D. C., De Carlo, F., Macrander, A. T., Lai, B. & Shu, D. (2002). Rev. Sci. Instrum. 73, 1485–1487.
Davis, G. R. & Elliott, J. C. (1997). Nucl. Instrum. Methods Phys. Res. A, 394, 157–162.
De Carlo, F. & Tieman, B. (2004). Proc. SPIE, 5535, 644–651.
De Carlo, F., Xiao, X. & Tieman, B. (2006). Proc. SPIE, 6318, 63180K.
Gonzalez, R. C. & Woods, R. E. (2008). Digital Image Processing, 3rd ed. New Jersey: Pearson Prentice Hall.
Hsieh, J., Gurmen, O. E. & King, K. F. (2000). IEEE Trans. Med. Imag. 19, 930–940.
Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567–8591.
Nakayama, K., Okada, Y. & Fujimoto, H. (1998). J. Synchrotron Rad. 5, 527–529.
Natterer, F. & Wübbeling, F. (2007). Mathematical Methods in Image Reconstruction, Vol. 5, SIAM Monographs on Mathematical Modeling and Computation. Philadelphia: SIAM.
Niederlöhner, D., Nachtrab, F., Michel, T. & Anton, G. (2005). Nucl. Sci. Symp. Conf. Rec. 4, 2327–2331.
Raven, C. (1998). Rev. Sci. Instrum. 69, 2978–2980.
Sadi, F., Lee, S. Y. & Hasan, M. K. (2010). Comput. Biol. Med. 40, 109–118.
Sijbers, J. & Postnov, A. (2004). Phys. Med. Biol. 49, N247–N253.
Smith, A. D., Flaherty, J. V., Mosselmans, J. F. W., Burrows, I., Stephenson, P. C., Hindley, J. P., Owens, W. J., Donaldson, G. & Henderson, C. M. B. (2006). Proceedings of the 8th International Conference on X-ray Microscopy, IPAP Conference Series, Vol. 7, pp. 340–342.
Stock, S. R. (2008). MicroComputed Tomography: Methodology and Applications. CRC Press.
Titarenko, S. (2009). British Applied Mathematics Colloquium 2009, p. 102. University of Nottingham, UK.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2009). Appl. Phys. Lett. 95, 071113.
Titarenko, S., Titarenko, V., Kyrieleis, A. & Withers, P. J. (2010). J. Synchrotron Rad. 17, 540–549.
Titarenko, S. & Yagola, A. G. (2010). Moscow Univ. Phys. Bull. 65, 65–67.
Tucoulou, R., Martinez-Criado, G., Bleuet, P., Kieffer, I., Cloetens, P., Labouré, S., Martin, T., Guilloud, C. & Susini, J. (2008). J. Synchrotron Rad. 15, 392–398.
Walls, J. R., Sled, J. G., Sharpe, J. & Henkelman, R. M. (2005). Phys. Med. Biol. 50, 4645–4665.
Wang, Y., De Carlo, F., Mancini, D. C., McNulty, I., Tieman, B., Bresnahan, J., Foster, I., Insley, J., Lane, P., von Laszewski, G., Kesselman, C., Su, M.-H. & Thiebaux, M. (2001). Rev. Sci. Instrum. 72, 2062–2068.
Yang, J., Zhen, X., Zhou, L., Zhang, S., Wang, Z., Zhu, L. & Lu, W. (2008). The 2nd International Conference on Bioinformatics and Biomedical Engineering, pp. 2386–2389. IEEE.
© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.