
## Propagation of statistical errors across EXAFS extraction and Fourier filtering

^{a}LURE, Bâtiment 209D, Centre Universitaire Paris-Sud, 91405 Orsay CEDEX, France, and ^{b}Laboratoire de Biomathématique, Faculté de Pharmacie, Université René Descartes, 4 Rue de l'Observatoire, 75005 Paris, France

^{*}Correspondence e-mail: curis@lure.u-psud.fr

A statistical model for estimating the error bars on the EXAFS signal is developed. The average can be calculated at the beginning of the analysis, with the estimation of the error bars made directly on the individual measurements; the error bars are then propagated across the extraction. Furthermore, this model also allows the estimation of the error bars on the Fourier transformation of the signal, and on the filtered spectrum after the inverse Fourier transform.

Keywords: EXAFS; Fourier transformation; error propagation.

### 1. Introduction

A necessary step in EXAFS analysis is the estimation of the experimental error bars for the considered spectrum: without these error bars it is impossible to estimate the errors on the structural parameters fitted by least-squares refinement. The experimental errors may originate either from statistical errors (`noise') or from systematic errors. In this paper we deal only with the statistical errors.

In order to estimate these error bars, the most reliable approach is the statistical one, which requires the same spectrum to be recorded many times. Nevertheless, if only one spectrum is available, various methods have been proposed to provide an estimation of an average error bar, under the assumption that the error is constant with energy (Newville *et al.*, 1999); none of these methods uses any statistical tool. In addition, if these methods are applied to an averaged spectrum obtained from many experiments, they provide an estimation of the error bars of the experimental *averaged* distribution and not of the experimental individual distribution, these two values being related by a factor *N*^{1/2}, where *N* is the number of averaged spectra. In the present paper we deal only with spectra recorded many times.

If many spectra are available, the previously proposed method is to extract the EXAFS signal from each spectrum and then average the extracted χ(*k*) spectra (Vlaic *et al.*, 1999).

In this paper we use standard statistical tools to improve this error treatment; the model we develop allows us to calculate the averages at the very first step of the treatment (which helps in the EXAFS extraction) and to propagate the error bars across the whole analysis, including Fourier filtering into *R*-space.

All the statistical tools presented in this paper have been implemented in our new EXAFS analysis program, *LASE* (Logiciel d'Analyse des Spectres EXAFS). It runs on any Unix/X-Window computer and includes the classical tools for EXAFS analysis, adapted to our model of the error propagation.

### 2. Statistical model

We only consider spectra recorded for a finite number of points, *i.e.* by a discrete method (for instance by a step-by-step method). An experimental spectrum is then a set of *n* points (*E*_{i}, μ_{i})_{i = 1...n}, where *E*_{i} is the energy for the *i*th point and μ_{i} is the corresponding absorption coefficient.

We choose a very general model, the basic model used in statistics: the *i*th experimental point is modelled by a random variable *X*_{i}; we assume that the expectation E(*X*_{i}) and the variance V(*X*_{i}) exist. In that case each experimental measurement at energy *E*_{i} is a realization of the random variable *X*_{i}. With these hypotheses, the law of large numbers shows that the sample mean

$$\bar{x}_i = \frac{1}{N} \sum_{m=1}^{N} x_i^{(m)}$$

converges almost surely towards E(*X*_{i}) and that the sample variance

$$s_i^2 = \frac{1}{N-1} \sum_{m=1}^{N} \left( x_i^{(m)} - \bar{x}_i \right)^2$$

converges almost surely towards V(*X*_{i}), where *x*_{i}^{(m)} is the measurement at energy *E*_{i} in the *m*th recorded spectrum and *N* is the number of recorded spectra.
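The convergence of these two estimators can be illustrated numerically. The sketch below simulates repeated measurements at a single energy point as Gaussian draws; `mu_true` and `sigma_true` are made-up values for the illustration, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" absorption and noise level at one energy point E_i
# (made-up numbers for this sketch).
mu_true, sigma_true = 1.30, 0.02

for n_spectra in (10, 1000, 100000):
    x = rng.normal(mu_true, sigma_true, size=n_spectra)  # N realizations of X_i
    mean_est = x.mean()        # sample mean -> E(X_i) as N grows
    var_est = x.var(ddof=1)    # unbiased sample variance -> V(X_i)
    print(n_spectra, round(mean_est, 4), round(var_est, 6))
```

As *N* increases, the printed estimates settle around the true expectation and variance, as the law of large numbers predicts.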

Furthermore, we can consider the set of random variables (*X*_{i})_{i = 1...n} as an *n*-dimensional real random vector **X**, characterized by its mean vector E(**X**) and its covariance matrix **Σ**, with Σ_{ij} = Cov(*X*_{i}, *X*_{j}). Each experimental spectrum is then a realization of this random vector; the estimated mean vector is equivalent to the usual experimental spectrum after averaging, and all the statistical information is contained in the estimated covariance matrix. In this model, the treatment applied to the experimental data to extract the EXAFS signal is applied to the random vector **X**, instead of being applied only to the averaged experimental spectrum, as is the usual practice. The propagation of the statistical errors is then accomplished by transforming the covariance matrix according to the applied transformation. Hence we use the information from all the individual measurements, not just the averaged spectrum.

In order to simplify our model we also suppose that the experimental points are statistically uncorrelated, which means that Cov(*X*_{i}, *X*_{j}) = 0 if *i* ≠ *j*. In that case the covariance matrix is strictly diagonal. This hypothesis is not required, since all that follows remains correct with a non-diagonal covariance matrix, but it makes computations quicker and it is physically meaningful.

### 3. Propagation across EXAFS extraction

Practically, the value of interest is not the absorption coefficient μ but the EXAFS oscillations χ. The relation between these two quantities is

$$\chi_i = \frac{\mu_i - \mu_{0,i}}{\mu_{0,i}},$$

where μ_{0} is the atomic-absorption background, which is *a priori* correlated with μ. In addition, the relation defining χ_{i} is not linear in μ_{i}. Owing to these two facts there is no simple relation between E(χ_{i}) and E(μ_{i}), nor between V(χ_{i}) and V(μ_{i}).

As a first approximation, we consider that the random variables μ_{0,i} and *k*_{i} are almost surely constant. In that case the relation becomes very simple:

$$E(\chi_i) = \frac{E(\mu_i) - \mu_{0,i}}{\mu_{0,i}}, \qquad V(\chi_i) = \frac{V(\mu_i)}{\mu_{0,i}^{2}}.$$
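A minimal numerical sketch of this simplified propagation, treating the background μ_{0} as exactly known and the noise on μ as given per point (all numbers are made up for the illustration):

```python
import numpy as np

# Per-point mean and variance of the measured absorption (made-up values),
# and the background mu0, treated here as non-random.
mu_mean = np.array([1.52, 1.48, 1.51])
mu_var = np.array([4e-4, 4e-4, 4e-4])
mu0 = np.array([1.50, 1.49, 1.50])

chi_mean = (mu_mean - mu0) / mu0   # E(chi_i) under the approximation
chi_var = mu_var / mu0**2          # V(chi_i): linear error propagation
chi_err = np.sqrt(chi_var)         # one-sigma error bars on chi
```

Because the transformation is linear once μ_{0} is fixed, the error bars of χ are obtained exactly, without any expansion.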

To check this approximation we compared these results with those obtained by averaging directly on the extracted χ values. Fig. 1 shows that the values of χ(*k*) obtained by the two methods cannot be distinguished. In Fig. 2 we compare the error bars obtained by the two methods, and we notice that they are very close, except at very low *k*. These results confirm the hypothesis that μ_{0,i} and *k*_{i} are almost surely constant. We interpret the difference at low *k* as a consequence of the uncertainties on the monochromator displacements, which are functions of the monochromator angle and not of *k* nor *E*. This leads to important variations in the measured intensities in the edge region, where μ varies quickly with *E*.

Except for this very low *k* region, we observed that the error bars are generally almost constant with *k*, as is usually assumed (Newville *et al.*, 1999; Vlaic *et al.*, 1999), but our model still works when the error bars are not constant with *k*.

We also observed that, by increasing the number of averaged spectra, the estimated error bars on the individual measurements are not really modified: the average itself is better estimated (according to the central limit theorem) and the constant behaviour of the error bars becomes more visible.
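A small simulation makes this point concrete: the estimated error bar of an *individual* measurement stays near the true noise level whatever the number of averaged spectra, while the error bar of the *average* shrinks as *N*^{1/2} (the noise level and grid size are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.02                  # assumed noise level of a single measurement

for n in (4, 16, 64):
    spectra = rng.normal(0.0, sigma, size=(n, 200))  # n noisy spectra, 200 points
    indiv_err = spectra.std(axis=0, ddof=1).mean()   # stays ~sigma for any n
    avg_err = indiv_err / np.sqrt(n)                 # error bar of the average
    print(n, round(indiv_err, 4), round(avg_err, 4))
```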

### 4. Propagation across Fourier filtering

#### 4.1. Error propagation across integration

A classical way of estimating *I* = ∫_{a}^{b} *f*(*x*) d*x*, if only *n* points [*x*_{i}, *y*_{i} = *f*(*x*_{i})]_{i = 1...n} are known, is the trapezium rule:

$$I \simeq \sum_{i=1}^{n-1} \frac{(x_{i+1} - x_i)(y_i + y_{i+1})}{2}.$$

As we can see, *I* appears as a linear combination of the *y*_{i}, whose coefficients are functions of the *x*_{i}. Hence it is possible to write *I* = *A***y**, where **y** is the column vector of the *y*_{i} values and *A* is a (1 × *n*) matrix. The numerical integration is thus seen as a linear transformation.

If we now consider a random vector **X** = (*X*_{i})_{i} with a covariance matrix **Σ**, we can define its `integral', which is a random variable *I*, by *I* = *A***X**. Probability theory then shows that E(*I*) = *A* E(**X**) and V(*I*) = *A* **Σ** *A*^{T}; hence we can propagate errors across integration.
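As an illustration, the trapezium rule can be written as a row matrix *A* and the variance of the integral obtained as *A***Σ***A*^{T}; the grid, integrand and noise level below are chosen only for the sketch:

```python
import numpy as np

def trapezoid_matrix(x):
    """Row vector A such that A @ y estimates the integral of y over the grid x."""
    A = np.zeros(len(x))
    dx = np.diff(x)
    A[:-1] += dx / 2.0   # each interval contributes (dx/2) to both endpoints
    A[1:] += dx / 2.0
    return A

x = np.linspace(0.0, np.pi, 101)
y = np.sin(x)
A = trapezoid_matrix(x)

I = A @ y                                # ~ integral of sin over [0, pi] = 2
Sigma = np.diag(np.full(len(x), 1e-4))   # assumed uncorrelated noise, variance 1e-4
var_I = A @ Sigma @ A                    # V(I) = A Sigma A^T (A is a row vector)
```

Because the integral is linear in the data, this propagation is exact: no linearization is involved.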

With the help of the theory of random functions it may be proved that, when the number of points increases (so that the random vector tends towards a random function *f*), the `integral' defined above converges towards the real integral of *f* (Brard, 1966/1967).

#### 4.2. Application to the Fourier transformation

By convention, in EXAFS analysis, the Fourier transform of the signal is defined by

$$\tilde{\chi}(R) = \int_{k_{\min}}^{k_{\max}} \chi(k)\, \mathrm{e}^{2\mathrm{i}kR}\, \mathrm{d}k.$$

Since we only know χ(*k*) for a finite number of values, we use the trapezium rule to evaluate the Fourier transform at the points (*R*_{j})_{j = 1...N}. Use of the classical FFT algorithm (which corresponds to an estimation of the Fourier transform by the rectangle rule) is not possible, since it requires a smoothing of the data to obtain a constant *k*-step, and such a smoothing is not compatible with a statistical use of the data. According to the model developed above, if we model the experimental spectrum by an *n*-dimensional vector **χ**, for each point *R*_{j} we can write Re[χ̃(*R*_{j})] = *A*_{2j} **χ** and Im[χ̃(*R*_{j})] = *A*_{2j+1} **χ**, where

$$(A_{2j})_i = w_i \cos(2 k_i R_j)$$

and

$$(A_{2j+1})_i = w_i \sin(2 k_i R_j)$$

(*w*_{i} being the trapezium-rule weights) are the row matrices associated with the Fourier transformation.

If we now consider the estimated Fourier transform as a vector **χ̃** = (Re[χ̃(*R*_{1})], Im[χ̃(*R*_{1})], ..., Re[χ̃(*R*_{N})], Im[χ̃(*R*_{N})]), we can define a global matrix *A* for the complete Fourier transformation, and we have **χ̃** = *A***χ**.

In our model, **χ** is a random vector with a covariance matrix **Σ**. Hence **χ̃** is also a random vector, whose covariance matrix is **Σ̃** = *A* **Σ** *A*^{T}. The diagonal part of **Σ̃** gives the error bars for each Fourier-transform point (for its real and imaginary parts).
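This covariance propagation through the discrete Fourier transform can be sketched as follows; the grids, noise level and toy one-shell signal are made up, and the trapezium-rule weighting follows the construction described above:

```python
import numpy as np

# Discrete EXAFS Fourier transform as a matrix (trapezium rule, e^{2ikR}
# kernel) and covariance propagation Sigma_ft = A Sigma A^T.
k = np.linspace(3.0, 12.0, 181)   # k grid; a constant step is not required
R = np.linspace(0.0, 6.0, 61)     # R grid

w = np.zeros_like(k)              # trapezium-rule weights
dk = np.diff(k)
w[:-1] += dk / 2.0
w[1:] += dk / 2.0

A = np.empty((2 * len(R), len(k)))          # rows 2j / 2j+1: Re / Im at R_j
A[0::2] = w * np.cos(2.0 * np.outer(R, k))
A[1::2] = w * np.sin(2.0 * np.outer(R, k))

chi = np.sin(2.0 * 2.5 * k) * np.exp(-0.1 * k)  # toy signal, one "shell" at R = 2.5
ft = A @ chi                                    # interleaved Re/Im parts
Sigma_chi = np.diag(np.full(len(k), 1e-6))      # uncorrelated noise on chi
Sigma_ft = A @ Sigma_chi @ A.T                  # covariance in R-space
err_R = np.sqrt(np.diag(Sigma_ft))              # error bars on the Re/Im parts
```

The modulus `np.hypot(ft[0::2], ft[1::2])` peaks near *R* = 2.5, and `err_R` attaches an error bar to each real and imaginary component.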

It is possible to develop exactly the same model for the inverse Fourier transformation: if we call *B* the associated matrix, **χ**_{f} the filtered spectrum and **Σ**_{f} its covariance matrix, then we obtain **χ**_{f} = *B***χ̃** = *BA***χ** and **Σ**_{f} = *B* **Σ̃** *B*^{T} = (*BA*) **Σ** (*BA*)^{T}. We then obtain the error bars for each point of the filtered spectrum, and all the correlation information between its points.

#### 4.3. Inclusion of window and filter effects

To limit the distortions of the Fourier transform, one commonly multiplies the experimental EXAFS signal by a window function *w*(*k*). With our model, this window function is modelled by a diagonal matrix *W* with *W*_{ii} = *w*(*k*_{i}). We can also define a filter matrix *F*, related to the filter function used in *R*-space to isolate one or more peaks of the Fourier transform; this matrix is also diagonal.

With these two effects included, the global transformation equations are **χ**_{f} = *BFAW* **χ** for the average and **Σ**_{f} = (*BFAW*) **Σ** (*BFAW*)^{T} for the covariance matrix.
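The complete chain can be sketched as a single matrix product; the grids, the Hanning window, the filter range and the inverse-transform normalization (2/π, for the e^{2ikR} convention used here) are choices made for this illustration, not values from the paper:

```python
import numpy as np

# Full filtering chain: chi_f = B F A W chi, Sigma_f = (BFAW) Sigma (BFAW)^T.
k = np.linspace(3.0, 12.0, 181)
R = np.linspace(0.0, 6.0, 61)

def trap_weights(x):
    w = np.zeros_like(x)
    d = np.diff(x)
    w[:-1] += d / 2.0
    w[1:] += d / 2.0
    return w

wk, wR = trap_weights(k), trap_weights(R)

A = np.empty((2 * len(R), len(k)))          # direct transform, Re/Im interleaved
A[0::2] = wk * np.cos(2.0 * np.outer(R, k))
A[1::2] = wk * np.sin(2.0 * np.outer(R, k))

B = np.empty((len(k), 2 * len(R)))          # inverse transform back to k-space
B[:, 0::2] = (2.0 / np.pi) * wR * np.cos(2.0 * np.outer(k, R))
B[:, 1::2] = (2.0 / np.pi) * wR * np.sin(2.0 * np.outer(k, R))

W = np.diag(np.hanning(len(k)))             # window in k-space
keep = (R >= 1.5) & (R <= 3.5)              # filter: keep one R region
F = np.diag(np.repeat(keep, 2).astype(float))

M = B @ F @ A @ W                           # global matrix BFAW
chi = np.sin(5.0 * k) * np.exp(-0.1 * k)    # toy signal
Sigma = np.diag(np.full(len(k), 1e-6))

chi_filtered = M @ chi                      # filtered average spectrum
Sigma_filtered = M @ Sigma @ M.T            # full covariance after filtering
```

The diagonal of `Sigma_filtered` gives the error bars after filtering; its off-diagonal part contains the correlations between filtered points discussed below.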

#### 4.4. Results

##### 4.4.1. Direct Fourier transformation

We may note (Fig. 3) that the error bars in *R*-space are almost constant (whatever *k*-weighting is used), with a residual oscillating part, especially for low *R* values.

We may also note that the variation of the error bars for different filtering choices follows the expected trend: if we increase the *k*-weighting of the signal, the error bars on the Fourier transform increase. The error bars on the Fourier transform depend on the choice of the window in two ways: they increase either with the rectangular tendency of the window or with its width. The most important effect is the width of the window, that is, the *k*-range of the signal kept for the Fourier transform. Hence, when the signal becomes too noisy (*i.e.* if one goes too far in *k*, where the signal is attenuated whereas the noise is constant, one just adds noise to the Fourier transform), the Fourier transform becomes noisier. The kind of window used for the analysis seems to have a much smaller effect on the noise of the Fourier transform. For instance, if we compare a Hanning window and a Kaiser–Bessel window over the same *k*-range, the Hanning window gives slightly higher error bars, but the Fourier-transform intensity also increases, which gives a similar signal-to-noise ratio for the two windows.

We may note that an estimation of the error based on the high-*R* values of the average Fourier transform (Newville *et al.*, 1999) gives a smaller value than the error bars we obtain in *R*-space. This is not surprising, because the former method estimates the error bar on the `average' distribution and not on the experimental individual distribution. As expected from the theory, we find a factor of approximately *N*^{1/2} between the two values, where *N* is the number of averaged spectra.

##### 4.4.2. Inverse Fourier transformation

To check the results of our method for propagating errors across Fourier filtering, we compared the results obtained by this method with those obtained by the other sequence: extracting the EXAFS signal, Fourier filtering and then averaging. Fig. 4 provides graphical evidence that the filtered spectra obtained by the two methods are identical.

Fig. 5 shows that the error bars obtained by both methods are roughly of the same magnitude, especially for high *k* values. The differences between the two methods may arise from inaccuracies in the evaluation of the Fourier transform for each individual spectrum, leading to larger error bars in that case. One may also note that, after filtering, the error bars are no longer constant with *k*, although they were before filtering. On all the spectra that we studied, we observed error bars which varied with *k* after filtering; this variation is not similar from one spectrum to another and seems to be, for an important part, related to the window used for the Fourier transformation.

In Fig. 6 we observe that the filtering operation really has an effect on the uncertainties: the error bars after filtering are much smaller than the error bars before filtering. One notices the same results for the filter effect as for the window effect in the direct transformation (which is not surprising, since the mathematical functions are the same): given a Fourier transform, the filtered spectrum is less noisy if the *R* range is smaller. However, since the *R* range is more or less imposed by the peaks one wants to keep, the main effect arises from the Fourier transform itself: the noisier the Fourier transform, the noisier the filtered spectrum. Hence, the choice of the *k* range for the window of the Fourier transformation seems to be the most important step in achieving good filtered spectra. There seems to be no simple relation between the errors before and after filtering; in any case, even if such an expression existed, making use of it would mean losing important information, since it would give neither the covariance matrix nor the *k*-evolution of the error bars after filtering.

We also noticed, by studying the covariance matrix after filtering, that there are no strictly independent points after filtering (in the statistical meaning); nevertheless, the correlation coefficient between points is usually between 0.1 and 0.3, which indicates a weak linear correlation. If we assume that this correlation can be neglected, it is possible to fit with all `experimental' points even after filtering; if not, a new statistical estimator needs to be developed which accounts for the covariance matrix.

### 5. In the case of a Gaussian distribution

In all the previous sections we did not make any special assumption about the experimental distribution of the *X*_{i}, except that the expectation and the variance exist.

If we assume now that this distribution is Gaussian, we can go further: the initial experimental vector is a Gaussian vector. This means that every linear transformation of this vector leads to another Gaussian vector; in our model, all the transformations applied to the experimental data are linear, so at each step the result is a Gaussian vector. In particular, the crude EXAFS signal, its Fourier transform and the filtered signal are Gaussian vectors. In addition, with our model the crude EXAFS signal has a diagonal covariance matrix, so the maximum-likelihood estimator is the classical χ² estimator used for fitting:

$$\chi^2 = \sum_{i=1}^{n} \frac{\left( \chi_i^{\mathrm{exp}} - \chi_i^{\mathrm{calc}} \right)^2}{V(\chi_i)}.$$

For the Fourier transform and the filtered signal, this estimator may still be used, but it is not the maximum-likelihood estimator, since the covariance matrix is not diagonal for these quantities.

### 6. Conclusion

We have developed a simple statistical model for EXAFS analysis that allows all statistical information extracted from the experimental data to be kept and propagated across all steps of the analysis. In particular, this model allows the averages to be calculated at the very beginning of the treatment, which permits a better extraction (the less noisy average spectrum eases the background removal) and saves time when a great number of spectra are to be analysed, while keeping all the statistical information needed for the fitting step of the analysis procedure.

This model allowed us to confirm the usual results used in EXAFS analysis: there is no difference between `average, then extract' and `extract, then average', except that, previously, only the second method led to statistical information. It also confirmed that, as far as we could see, the error bars are constant with *k* before filtering.

Use of this model also provides answers to the problem of the propagation of the error across Fourier filtering, and especially the conservation of the distribution law and the estimation of the error in *R*-space, which is necessary for some statistical treatments when fitting in *R*-space.

The next step of our work is to check the Gaussian distribution of the experimental points (is that model theoretically plausible, knowing that the measured quantities follow Poissonian laws?) and to use this model to estimate the distribution functions of the parameters fitted during the analysis: under which conditions is it valid to use the classical results, proved only for the linear case, knowing that the EXAFS equation is not linear?

### Acknowledgements

We wish to thank Bruno Blanchet for his help in the mathematical part of this work and Alain Michalowicz for the discussions about statistics in EXAFS analysis. The experimental spectra used in this work were recorded at the *K*-edge of zinc-doped hydroxyapatite (3000 p.p.m.).

### References

Brard, R. (1966/1967). *Cours de Calcul des Probabilités*, Part 2, *Fonctions Aléatoires*.

Newville, M., Boyanov, B. I. & Sayers, D. E. (1999). *J. Synchrotron Rad.* **6**, 264–265.

Vlaic, G., Andreatta, D., Cepparo, A., Colavita, P. E., Fonda, E. & Michalowicz, A. (1999). *J. Synchrotron Rad.* **6**, 225–227.

© International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.