

Insight into 3D micro-CT data: exploring segmentation algorithms through performance metrics


aComputational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720-8150, USA, bBerkeley Institute for Data Science, University of California Berkeley, Berkeley, CA 94720, USA, cAdvanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720-8150, USA, dMaterials Department, University of California Santa Barbara, Santa Barbara, CA 93106-5050, USA, and eDepartment of Mathematics, University of California Berkeley, Berkeley, CA 94720, USA
*Correspondence e-mail: tperciano@lbl.gov

Edited by P. A. Pianetta, SLAC National Accelerator Laboratory, USA (Received 8 April 2017; accepted 25 July 2017; online 23 August 2017)

Three-dimensional (3D) micro-tomography (µ-CT) has proven to be an important imaging modality in industry and scientific domains. Understanding the properties of material structure and behavior has produced many scientific advances. An important component of the 3D µ-CT pipeline is image partitioning (or image segmentation), a step that is used to separate various phases or components in an image. Image partitioning schemes require specific rules for different scientific fields, but a common strategy consists of devising metrics to quantify performance and accuracy. The present article proposes a set of protocols to systematically analyze and compare the results of unsupervised classification methods used for segmentation of synchrotron-based data. The proposed dataflow for Materials Segmentation and Metrics (MSM) provides 3D micro-tomography image segmentation algorithms, such as statistical region merging (SRM), k-means algorithm and parallel Markov random field (PMRF), while offering different metrics to evaluate segmentation quality, confidence and conformity with standards. Both experimental and synthetic data are assessed, illustrating quantitative results through the MSM dashboard, which can return sample information such as media porosity and permeability. The main contributions of this work are: (i) to deliver tools to improve material design and quality control; (ii) to provide datasets for benchmarking and reproducibility; (iii) to yield good practices in the absence of standards or ground-truth for ceramic composite analysis.

1. Introduction

X-ray synchrotron facilities regularly produce terabytes of data, with imaging beamlines commonly storing data as two-dimensional (2D) and three-dimensional (3D) images (Bethel et al., 2015). The data volume generated daily and the variety of samples in terms of complexity and features pose a challenge for effective data analysis of the experiments, particularly given frequent upgrades of the instruments' brightness, resolution and throughput. Specific characteristics of the image data, such as the presence of heterogeneous structures at multiple scales and their complex architecture, hinder the accuracy of current image processing algorithms. Nonetheless, micro-computed tomography (µ-CT) continues to be an essential imaging technique employed for the non-destructive 3D characterization of objects. This approach is widely used in academia and industry, including medical imaging, material science, electronics and geology.

In spite of the success of this imaging technique, some challenges still remain when analyzing these types of data, starting with the tomographic reconstruction and going through all image processing steps. For example, exploring alternative acquisition schemes and experimental setups, or quantitatively evaluating reconstruction methods, remains a challenge. The work described by Ching & Gürsoy (2017) addresses this specific topic. The authors propose software that generates complex simulated phantoms and evaluates new or existing data acquisition schemes and image reconstruction algorithms for targeted applications. Following a similar strategy, the present paper addresses the problem of comparing and evaluating different image segmentation algorithms.

Given large data rates and sizes, machine learning for data acquisition (Yang et al., 2017) as well as for the automation of feature detection and extraction represent key steps in reducing data while gaining insight from µ-CT image structures. Many different categories of automated feature extraction exist (Hintermüller et al., 2010; Chen et al., 2012), but most of them rely on image segmentation techniques supported by unsupervised learning (Khanum et al., 2015). Broadly used to analyze experimental data, unsupervised segmentation algorithms (Chen et al., 2013) enable grouping picture elements, in effect collecting tokens that `belong together'. Such algorithms can gather meaningful groups by searching for hidden structures in unlabeled data. Among their advantages are that no training data are required and that they offer potential for data reduction, for example the removal of non-contributing image portions such as background and artifacts.

Two important tasks during µ-CT data analysis are (1) selecting the best segmentation algorithm and (2) determining the best evaluation metrics. These two choices strongly affect the performance and quality of the results. Cases that lack ground-truth are even harder to evaluate and can lead to incomplete and/or ambiguous results, often supporting only qualitative conclusions. Therefore, we employ several strategies to evaluate image segmentation before recommending the most appropriate algorithm for a specific application, for example direct comparison of algorithms against a given ground-truth (Ushizima et al., 2011; Arbeláez et al., 2011; Perciano et al., 2016; Tassani et al., 2014; Sheppard et al., 2014) and combination of different algorithms (Polak et al., 2012) allied to confluence analysis as an indicator of result agreement.

In this work, we propose the Materials Segmentation and Metrics (MSM) dataflow that runs different algorithms separately, but checks for their agreement in terms of their segmentation results by using performance metrics. The suitability of a metric depends on the scientific goals of the experiment, which can be, for example, calculating the porosity of a sample or counting the number of targeted objects. This article describes a multidisciplinary project involving experimental investigations of 3D µ-CT data of different materials performed at the Advanced Light Source (ALS) at the Lawrence Berkeley National Laboratory (LBNL), and development of algorithms for unsupervised image analysis, performed by investigators at the Center for Advanced Mathematics for Energy Research Applications (CAMERA), also at LBNL. We introduce a process for segmentation and analysis of 3D micro-tomography data, which offers strategies to evaluate algorithms and metrics applied to 3D micro-tomography experiments. Here, we use the proposed process to investigate geological samples and ceramic matrix composites (CMCs). These examples illustrate a strategy for executing different algorithms, extracting the respective criteria and parameter ranges, and arriving at an appropriate answer.

Previous works on µ-CT data analysis described by Ushizima et al. (2011, 2012) address the problem of segmenting geological samples being studied to understand carbon sequestration, geologic storage of captured CO2 in underground rock formations. The authors developed tools for providing precise measurements of porosity and permeability, and for visualizing pore structures.

More recently, we have focused on yet another material sample: CMCs, which are composed of continuous silicon carbide (SiC) fibers and SiC matrices. With a high impact in industrial manufacturing, CMCs are an enabling element in the development of gas turbine engines that can operate at higher temperatures and therefore yield higher efficiency (Zok, 2016). Although there have been several recent studies related to computed tomography of CMCs, computational tools to automate and streamline image analysis are presently lacking (Bale et al., 2013).

In this article we: (a) introduce an analysis pipeline with different strategies for segmentation and evaluation; (b) demonstrate the use of this process in the detection of different structures in synthetic and experimental µ-CT datasets such as geological samples, glass beads and CMCs; and (c) make a critical assessment of the accuracy of the algorithms in determining various quantitative characteristics of the extracted structures using general and specific metrics. Fig. 1 presents the analytical dataflow of MSM. The figure highlights the transformations that the 3D image stacks undergo before MSM can deliver the materials characteristics list. The raw µ-CT image stack passes through a preprocessing step based on 3D non-linear filtering. The preprocessed data serve as input for different segmentation algorithms [statistical region merging (SRM), k-means, and parallel Markov random field (PMRF)]. The results from different segmentation algorithms undergo an analysis step based on both general and specific metrics. The evaluation takes into account reference data obtained from a semi-automated segmentation process.

[Figure 1]
Figure 1
Image analysis process employed for X-ray micro-tomography data included in MSM.

The remainder of the article is organized as follows. The segmentation methods for sample partitioning and the metrics used are described in §2. In §3 we present different types of experiments using synthetic and experimental data to assess the accuracy of the results. In doing so, we show how the proposed process can be applied to evaluate different segmentation algorithms and to find the necessary metric in each case. Finally, §4 summarizes the key accomplishments of our investigation.

2. Materials and methods

This section describes the algorithms used in each step of the proposed process pipeline: image enhancement, image segmentation and measurements.

2.1. Image enhancement

Image enhancement techniques ensure that input data are well suited for a specific task, such as image segmentation or feature extraction. The challenge is to improve image quality while retaining essential information about the true structure. In the present study, we apply two main strategies: mathematical morphology (MM) (Pinoli & Debayle, 2012) and non-linear edge-preserving filtering (Tomasi & Manduchi, 1998). Previously we developed `F3D', a graphics-card-aware image processing plug-in for Fiji (Schindelin et al., 2012) that employs MM and non-linear filters and can handle datasets whose size exceeds the amount of RAM available in the computer system (Ushizima et al., 2014). A detailed description of these techniques is beyond the scope of this work.

F3D gray-level MM operators are one-pass constant-time methods that can perform morphological transformations with structuring elements oriented in several directions. MM operators consist of two parts: (i) a reference shape or structuring element, which translates over the image, and (ii) a mechanism that defines the comparisons performed between the image and the structuring element (Van Droogenbroeck & Talbot, 1996). In this work, we use the closing operator, which is given by the combination of the dilation and the erosion operators (Gonzalez & Woods, 2006), to improve image contrast. The procedure involves the following steps (a minimal code sketch follows the list):

(1) Define a structuring element based on the geometrical shape of the feature to be detected in the image.

(2) Apply the closing operator using the structuring element defined in step 1 [Fig. 2(b)].

[Figure 2]
Figure 2
Example of contrast improvement using F3D with a circle of size 20 pixels as structuring element. (a) Original region of interest. (b) Result of step 2. (c) Result of step 3. (d) Result of step 4.

(3) Subtract the input image from the result obtained in step 2 [Fig. 2(c)].

(4) Subtract the result obtained in step 3 from the input image [Fig. 2(d)].
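To make the procedure concrete, here is a minimal sketch of steps 1–4 for a single 2D grayscale slice using scikit-image. The file name is hypothetical, and the disk of radius 20 pixels only approximates the circular structuring element of Fig. 2; this is not the F3D plugin itself, only an equivalent sequence of operations.

```python
import numpy as np
from skimage import io, morphology, util

# Hypothetical input: one grayscale slice of the micro-CT stack.
img = util.img_as_float(io.imread("slice_0046.tif"))

# Step 1: structuring element matching the feature geometry (disk of radius 20 pixels).
selem = morphology.disk(20)

# Step 2: gray-level closing with the structuring element.
closed = morphology.closing(img, selem)

# Step 3: subtract the input image from the closing result.
residue = closed - img

# Step 4: subtract the step-3 result from the input image to boost contrast.
enhanced = np.clip(img - residue, 0.0, 1.0)
```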

Additionally, we apply a non-linear image denoising filter with edge-preserving characteristics (Tomasi & Manduchi, 1998; Bethel, 2012). A weighted average of nearby pixels replaces each intensity value of the original image. This filter takes into account differences in intensity values in neighboring pixels to preserve edges while smoothing. Consequently, the influence between neighboring pixels depends on the similarity of their intensity values. It is defined as

[\hat{I_p} = {{1}\over{N_p}} \,\,\sum_{q\,\in\,S_p} I_q \, G_{\sigma_{\rm{r}}} \left(\left|I_p-I_q\right|\right) \, G_{\sigma_{\rm{s}}} (|p-q|), \eqno(1)]

where

[N_p = \sum_{q\,\in\,S_p} G_{\sigma_{\rm{r}}} \left(\left|I_p-I_q\right|\right) \, G_{\sigma_{\rm{s}}} (|p-q|), \eqno(2)]

I is the input image, [G_{\sigma_{\rm{r}}}] and [G_{\sigma_{\rm{s}}}] are Gaussian kernels acting on the intensity (range) and spatial domains, respectively, p and q are pixel locations and Sp is the neighborhood of p.

The range kernel [G_{\sigma_{\rm{r}}}] smooths differences in intensity, while the spatial kernel [G_{\sigma_{\rm{s}}}] smooths differences in spatial coordinates. As [G_{\sigma_{\rm{r}}}] increases, the filter approaches a Gaussian convolution. The width of the spatial kernel should be chosen in proportion to the size of the features to be smoothed. Fig. 3 shows an example of a bilateral filter applied to an image while varying both parameters.

[Figure 3]
Figure 3
Example of bilateral filter applied to an image. (a) Original image. (b) Bilateral filter with [G_{\sigma_{\rm{r}}}] = 50 and [G_{\sigma_{\rm{s}}}] = 3. (c) Bilateral filter with [G_{\sigma_{\rm{r}}}] = 250 and [G_{\sigma_{\rm{s}}}] = 3. (d) Bilateral filter with [G_{\sigma_{\rm{r}}}] = 25 and [G_{\sigma_{\rm{s}}}] = 3. (e) Bilateral filter with [G_{\sigma_{\rm{r}}}] = 25 and [G_{\sigma_{\rm{s}}}] = 30.
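For reference, an edge-preserving smoothing step equivalent to equations (1)–(2) can be sketched with scikit-image's bilateral filter. The file name and parameter values are illustrative only, and sigma_color is expressed in the [0, 1] float range rather than in the 8-bit intensity units quoted in Fig. 3.

```python
from skimage import io
from skimage.restoration import denoise_bilateral
from skimage.util import img_as_float

img = img_as_float(io.imread("slice_0046.tif"))   # hypothetical grayscale slice

# sigma_color plays the role of the range kernel width (G_sigma_r) and
# sigma_spatial that of the spatial kernel width (G_sigma_s) in equations (1)-(2).
smoothed = denoise_bilateral(img, sigma_color=0.1, sigma_spatial=3)
```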

Upon application of the enhancement process, the images present improved contrast and reduced noise, making them more suitable for use as inputs to the image segmentation algorithms, described in the next section.

2.2. Unsupervised segmentation

In this section we present the three algorithms that are currently part of MSM: SRM, k-means and PMRF. SRM and k-means are among the most commonly used image segmentation strategies, yet, owing to the sensitivities of these methods, the results can vary depending on both the dataset and the parameters used. Additionally, we evaluate PMRF, a novel graph-based algorithm, as another candidate that produces equivalent results while addressing performance limitations of traditional SRM and k-means.

2.2.1. Statistical region merging (SRM)

Nock & Nielsen (2004) proposed an efficient region-growing segmentation algorithm based on an adaptive statistical merging predicate over intensity levels. As in other region-based segmentation algorithms, it aims at associating a pixel with a region using a similarity criterion.

The iterative process starts with one region per pixel, followed by merging phases driven by a statistical test that takes neighboring regions into account. Region pairs are considered in ascending order of intensity difference, and the test checks whether their mean intensities are sufficiently similar for the regions to be merged. The merging predicate, [{\cal P}(R_i,R_j)], regulates whether the observed regions Ri and Rj belong to the same statistical region. The merging predicate assumes that the pixels within a statistical region have the same expectation, and it is represented as

[{\cal P}(R_i,R_j) = \bigg\{ \matrix{ {\rm{true}} \hfill & {\rm{if}}\,\, \left|\bar{R_i}-\bar{R_j}\right| \leq b\left(R_i\right)+b\left(R_j\right), \hfill \cr {\rm{false}} \hfill & {\rm{otherwise}}, \hfill } \eqno(3)]

where [\bar{R_i}] and [\bar{R_j}] are the mean intensities of the two regions and the right-hand side of the inequality serves as the merging threshold. The variable b is a function of g (the largest possible intensity value):

[b(R) = g \,\left[ {{1}\over{2Q|R|}}\, \left({{\ln{|S_{|R|}|}}\over{\delta}}\right) \right]^{1/2}. \eqno(4)]

Sl is the set of regions containing l pixels, and δ is the error probability, with [0 \leq \delta \leq 1]. Q stands for the number of random variables, which loosely reflects the complexity of the scene and controls the coarseness of the segmentation; in other words, Q roughly estimates the number of regions in the image. The operator [|\ldots|] denotes the number of elements in a set. Even though the default µ-CT image output is 32-bit, we propose the use of g = 255, so the methods take 8-bit images as input; we verified empirically that this bit depth is enough to represent more than 95% of the relevant intensity values of the original image.
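As a minimal illustration, the merging predicate of equations (3)–(4) can be transcribed directly as below; δ and the region counts |S_|R|| are treated as user-supplied values rather than computed as in a full SRM implementation, so this is a sketch of the test itself, not of the whole algorithm.

```python
import math

def merge_bound(region_size, s_r, delta, g=255, Q=8):
    """b(R) from equation (4): region_size = |R|, s_r = |S_|R||."""
    return g * math.sqrt((1.0 / (2.0 * Q * region_size)) * (math.log(s_r) / delta))

def should_merge(mean_i, size_i, mean_j, size_j, s_i, s_j, delta, g=255, Q=8):
    """Merging predicate P(R_i, R_j) from equation (3)."""
    threshold = merge_bound(size_i, s_i, delta, g, Q) + merge_bound(size_j, s_j, delta, g, Q)
    return abs(mean_i - mean_j) <= threshold

# Example: two candidate regions with mean intensities 120 and 131.
print(should_merge(120.0, 400, 131.0, 350, s_i=5000, s_j=4800, delta=0.05))
```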

2.2.2. k-means

The k-means algorithm (Macqueen, 1967) is a non-hierarchical unsupervised clustering method that classifies the input data points into k classes based on the inherent distance between point pairs. The algorithm assumes that the data features form a vector space and tries to find natural clusters, iteratively minimizing the distance between the points and a set of centroids [\mu_i], [\forall i = 1 \ldots k]. The minimization function is given by

[V = \sum_{j\,=\,1}^{k} \sum_{x_i\,\in\,S_j} \left(x_i-\mu_j\right)^2, \eqno(5)]

where there are k clusters Sj, j = [1, 2, \ldots, k], and [\mu_j] is the centroid of all points [x_i\,\in\,S_j]. The general k-means algorithm for image segmentation consists of the following steps:

(1) Compute the histogram of pixel intensities.

(2) Initialize the centroids with k random intensities.

(3) Repeat the following steps until the cluster labels of the image converge:

 (a) Cluster the pixels based on the distance of their intensities from the centroid intensities,

[c_i = {\rm{argmin}}_j||x_i-\mu_j||^2, \eqno(6)]

and assign i to [S_{c_i}].

 (b) Compute the new centroid of each cluster,

[\mu_j = ({{1}/{|S_j|}})\textstyle\sum\limits_{i\,\in\,S_j}x_i, \eqno(7)]

where k is the preferred number of clusters (usually set empirically and higher than the number of phases), i iterates over all pixel intensities, j iterates over all centroids and [\mu_j] are the centroid intensities.
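A hedged sketch of this intensity-based clustering using scikit-learn is shown below; the file name is hypothetical and k = 6 follows the synthetic experiments described later, so this is an illustration rather than the exact implementation used in MSM.

```python
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

img = io.imread("slice_0046.tif")                # hypothetical 8-bit slice
intensities = img.reshape(-1, 1).astype(float)   # one feature per pixel: its intensity

# k clusters over pixel intensities; labels are reshaped back into image space.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
labels = kmeans.fit_predict(intensities)
segmented = labels.reshape(img.shape)
```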

2.2.3. Parallel Markov random field (PMRF)

Perciano et al. (2016) proposed a graph-based model called PMRF, which exploits the Markov random field (MRF) framework to segment images. PMRF makes use of the linear and parallel (LAP) method (Mizrahi et al., 2014), a graph partitioning algorithm for MRF parameter estimation. In an MRF model, the optimization process uses a global energy function to find the best solution to a similarity problem, such as the best pixel space partition or the best matching. The energy function consists of a data term and a smoothness term. For image segmentation, we use the mean of the intensity values of a region as the data term. The smoothness term takes into account similarities between regions. The goal is to find the best labeling for the regions, so that the similarity between two regions with the same labels is optimal for all pixels (Mahapatra & Sun, 2012).

Given an image represented by [{\bf y}] = [(y_1,\ldots,y_N)], where each yi is a region, we seek a configuration of labels [{\bf x}] = [(x_1,\ldots,x_N)], where [x_i\,\in\,L] and L is the set of all possible labels, L = [\{0, 1, 2,\ldots, M\}]. The MAP criterion (Li, 2013) states that one wants to find a labeling [{\bf x}^*] that satisfies

[{\bf x}^* = \mathop {\rm argmax}\limits_{x} \big\{P({\bf y}|{\bf x},\Theta)P({\bf x})\big\}, \eqno(8)]

which can be rewritten in terms of the energies (Li, 2013) as

[{\bf x}^* = \mathop {\rm{argmin}}\limits_{x}\big\{U({\bf y}|{\bf x},\Theta) + U({\bf x})\big\}. \eqno(9)]

The prior probability [P({\bf x})] is a Gibbs distribution, and the likelihood factorizes over the regions as

[P({\bf y}|{\bf x},\Theta) = \prod_i P(y_i|{\bf x},\Theta) = \prod_i P(y_i|x_i,\theta_{x_i}), \eqno(10)]

where [P(y_i|x_i,\theta_{x_i})] is a Gaussian distribution with parameters [\theta_{x_i}] = [(\mu_{x_i},\sigma_{x_i})] and [\Theta] = [\{\theta_l|l \in L\}] is the parameter set.

The general PMRF segmentation framework works as follows. Initially, the input image passes through a feature extraction algorithm or an oversegmentation method to transform the voxel-domain image data into a less noisy representation. Next, the resulting regions compose the nodes of a graph representation of the image. The graph partitioning process is performed using the LAP algorithm, allowing simultaneous parallel parameter estimation and optimization for each subgraph. This parallelization strategy makes PMRF scalable, i.e. suitable for large experimental datasets. Finally, the iterative optimization process aggregates the graph nodes reaching an optimal segmentation through expectation maximization (EM) and maximum a posteriori (MAP) calculations.
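To make the energy formulation of equations (8)–(10) concrete, the sketch below runs a toy MAP labeling over a region graph with iterated conditional modes (ICM) and a Potts smoothness term. It is not the authors' PMRF/LAP method, only a serial illustration of minimizing a data term plus a smoothness term; the region means and the adjacency structure are assumed inputs.

```python
import numpy as np

def icm_segment(region_means, adjacency, n_labels=2, beta=1.0, n_iter=10):
    """Toy MAP labeling of regions via iterated conditional modes (ICM).
    region_means: mean intensity y_i of each region; adjacency: dict mapping a
    region index to the indices of its neighbors in the region graph."""
    y = np.asarray(region_means, dtype=float)
    centers = np.linspace(y.min(), y.max(), n_labels)
    x = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)   # initial labels

    for _ in range(n_iter):
        # Gaussian parameters theta_l = (mu_l, sigma_l) per label, as in equation (10).
        mu = np.array([y[x == l].mean() if np.any(x == l) else centers[l]
                       for l in range(n_labels)])
        sigma = np.array([y[x == l].std() + 1e-6 if np.any(x == l) else 1.0
                          for l in range(n_labels)])
        for i in range(len(y)):
            # Data term: negative log-likelihood of y_i under each label's Gaussian.
            data = 0.5 * ((y[i] - mu) / sigma) ** 2 + np.log(sigma)
            # Smoothness term: Potts penalty counting disagreeing neighbors.
            smooth = np.array([sum(lab != x[j] for j in adjacency[i])
                               for lab in range(n_labels)])
            x[i] = np.argmin(data + beta * smooth)
    return x

# Example: six regions, three dark and three bright, on a small adjacency graph.
means = [10.0, 12.0, 11.0, 80.0, 78.0, 79.0]
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(icm_segment(means, adj))
```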

2.3. Quantification metrics

In this section we describe metrics used to assess the accuracy of the unsupervised segmentation methods as applied to the µ-CT datasets.

2.3.1. General metrics

Our pipeline uses two sets of general evaluation measurements: binary segmentation metrics and material metrics.

The binary segmentation metrics consist of precision, recall and accuracy. Precision represents the proportion of voxels correctly classified as material, i.e. measures the performance of the algorithm with respect to false positives. It is given by: Precision = TP/(TP + FP), where TP stands for true positives and FP for false positives. Recall measures the performance of the segmentation algorithm with respect to false negatives (FN), and it is given by: Recall = TP/(TP + FN). Finally, accuracy gives the proportion of true segmentations among the total number of voxels. It is given by: Accuracy = (TP + TN)/(TP + TN + FP + FN), where TN stands for true negatives.

We calculate two additional metrics related to the material of the sample: the volume of the solid component of the sample, shown in cm3, and the porosity of the material, given by [\varphi] = Vv/(Vv + Vs), where Vv is the volume of the void space (empty space) and Vs is the volume of the solid component.

All the metrics described in this section are calculated with respect to the reference data of each analyzed sample.
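The general metrics can be computed directly from the binary segmentation and reference volumes, as in this minimal sketch; the voxel volume is an assumed input that converts voxel counts to cm3, and the convention that True marks solid material is ours.

```python
import numpy as np

def binary_metrics(pred, ref, voxel_volume_cm3):
    """Precision, recall, accuracy, porosity and solid volume for a binary
    segmentation (True = solid material) compared against reference data."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.sum(pred & ref)
    fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref)
    tn = np.sum(~pred & ~ref)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    v_solid = pred.sum() * voxel_volume_cm3
    v_void = (~pred).sum() * voxel_volume_cm3
    porosity = v_void / (v_void + v_solid)
    return dict(precision=precision, recall=recall, accuracy=accuracy,
                porosity=porosity, volume_cm3=v_solid)
```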

2.3.2. Characteristics of fiber beds

To evaluate the CMC samples containing fiber beds, in addition to the general metrics described in the previous section, we use more specific metrics to obtain additional scientific information about the samples. In a cross section of a 3D µ-CT stack of these samples, each fiber is contained within a polygonal cell whose boundaries are defined by the perpendicular bisectors of the lines joining the centroid of that fiber with the centroids of its nearest neighboring fibers. A Voronoi tessellation (Aurenhammer, 1991) defines these cells. Fig. 4 shows an example of a Voronoi tessellation for a small region of fibers. The collection of cells can be interrogated in order to quantify various characteristics of the fiber bed. We compute three such characteristics for each cross section through the stacks:

[Figure 4]
Figure 4
Example of a Voronoi tessellation calculated from the fiber segmentation.

(i) The mean cell area, [\mu_{{\bf A}}], given by

[\mu_{{\bf A}}= {{1}\over{N}} \sum\limits_{i\,=\,1}^{N} {\cal A}_i, \eqno(11)]

where [{\cal A}_i] is the area of one cell and is expressed in units of pixels.

(ii) The non-uniformity of cell areas (Shou et al., 2015), characterized by

[\alpha = {{ ({{1}/{N}}) ({\bf{A}}\cdot{\bf{A}}) }\over{ \mu_{\bf{A}}^2 }}, \eqno(12)]

where [{\bf A}] = [[{\cal A}_1, {\cal A}_2, \ldots, {\cal A}_N]] is the entire set of cell areas.

(iii) The mean porosity, given by

[\rho = \sum\limits_{i\,=\,1}^{N}{\cal A}_{\rm{U}_i} \,\,\Big/\,\, \sum\limits_{i\,=\,1}^{N} {\cal A}_i, \eqno(13)]

where [{\cal A}_{\rm U}] = [[{\cal A}_{\rm{U}_1}, {\cal A}_{\rm{U}_2}, \ldots, {\cal A}_{\rm{U}_N}]] is the set of unoccupied areas within the cells.

The α metric was proposed by Shou et al. (2015) for the analysis of the permeability of fiber reinforcements, and was applied to simulated random fiber arrays. We propose to use the other two metrics to enrich the analysis of the samples. These metrics are calculated in 2D for each cross section of the 3D stack.

Generally, the 2D cross sections used are the transverse slices of the 3D stack. However, the domain of analysis can be changed, i.e. the user can slice the data in different directions so that the metrics are calculated based on the selected domain. This flexibility allows taking into account that, due to the nature of a 3D tomographic reconstruction, characteristics and artifacts can vary depending on how the data are sliced.
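Given per-fiber Voronoi cell areas (obtainable, for example, from scipy.spatial.Voronoi applied to the fiber centroids of one cross section), equations (11)–(13) reduce to a few lines. The convention that the unoccupied area of a cell is its area minus the fiber cross-sectional area is an assumption of this sketch.

```python
import numpy as np

def fiber_bed_metrics(cell_areas, fiber_areas):
    """Return (mu_A, alpha, rho) from equations (11)-(13) for one cross section.
    cell_areas: Voronoi cell area A_i of each fiber (pixels^2).
    fiber_areas: cross-sectional area of the fiber inside each cell, so that
    the unoccupied area is A_U_i = A_i - fiber_area_i."""
    A = np.asarray(cell_areas, dtype=float)
    A_u = A - np.asarray(fiber_areas, dtype=float)
    mu_A = A.mean()                          # equation (11): mean cell area
    alpha = (A @ A) / len(A) / mu_A ** 2     # equation (12): non-uniformity of cell areas
    rho = A_u.sum() / A.sum()                # equation (13): mean porosity
    return mu_A, alpha, rho

# Example with three cells of similar size and identical fiber cross sections.
print(fiber_bed_metrics([400.0, 450.0, 430.0], [130.0, 130.0, 130.0]))
```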

3. Experiments

In this section we present a set of in silico experiments using the proposed analysis pipeline. The experiments are carried out in increasing order of complexity: (1) synthetic data corrupted with noise, (2) experimental data of a geological sample, and (3) an experimental CMC sample with fiber beds. We show how to use the proposed process to evaluate the segmentation algorithms and the suitability of the different metrics presented in §2.3 for each case.

3.1. Synthetic samples

3.1.1. Description

For the first set of experiments we use synthetic data corrupted with artifacts. In doing so, we have total control of the samples and the ground-truth to demonstrate the accuracy of the segmentation algorithms. We use benchmark images available at the Network Generation Comparison Forum (NGCF) (https://people.physics.anu.edu.au/~aps110/network_comparison), a forum that provides binary images with geometry that resembles porous media, which we contaminate with several levels of noise, similarly to Ushizima et al. (2012).

To produce images similar to those obtained from tomographic experiments, we simulated a synchrotron tomography experiment for each phantom image and computed tomographic reconstructions. The ASTRA toolbox (Aarle et al., 2015) was used to simulate tomographic projections for each phantom using 1025 angles over a 180° range. Afterwards, we added various sources of noise to the projections, simulating artifact sources common in practice, such as Gaussian white noise, miscalibrated detector pixels and shifting illumination, which typically result in ring-like artifacts in reconstructed images. The simulated projections were processed similarly to actual experimental data by applying a ring-removal algorithm (Münch et al., 2009) and computing a reconstructed image using the popular filtered backprojection algorithm (Kak & Slaney, 2001).
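The simulation loop can be approximated as below; the authors used the ASTRA toolbox and a dedicated ring-removal step, whereas this sketch substitutes skimage's radon/iradon and simple noise models purely for illustration, with a hypothetical phantom file.

```python
import numpy as np
from skimage.transform import radon, iradon

# Hypothetical NGCF-style binary phantom slice (2D numpy array).
phantom = np.load("ngcf_phantom_slice.npy").astype(float)

angles = np.linspace(0.0, 180.0, 1025, endpoint=False)   # 1025 angles over 180 degrees
sino = radon(phantom, theta=angles, circle=False)         # simulated projections

# Corrupt the projections: Gaussian white noise plus a fixed per-detector-pixel
# offset (constant across angles), which produces ring-like artifacts after reconstruction.
rng = np.random.default_rng(0)
sino_noisy = sino + rng.normal(0.0, 0.05 * sino.max(), sino.shape)
sino_noisy += rng.normal(0.0, 0.02 * sino.max(), (sino.shape[0], 1))

# Filtered backprojection reconstruction (ramp filter by default).
recon = iradon(sino_noisy, theta=angles, circle=False)
```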

3.1.2. Analysis pipeline

For the synthetic µ-CT image stacks, we apply F3D and non-linear image denoising to enhance image quality and then the three segmentation algorithms detect the phases of interest. Here, the algorithms separate two phases: structure and void space. To calculate all the measures described in §2.3.1, a voxel-to-voxel comparison is made against the reference data (ground-truth) for each sample.

The edge-preserving filter used empirically determined parameter values of [G_{\sigma_{\rm{r}}}] = 50 and [G_{\sigma_{\rm{s}}}] = 5. PMRF converged after five iterations on average, the SRM algorithm used Q = 8 and k-means used k = 6.

3.1.3. Results

The first synthetic dataset simulates a glass bead column with µ-CT artifacts, as shown in Fig. 5(a). Fig. 5(b) presents the reference data and Fig. 5(c) shows the 3D rendering of the corrupted data masked with the reference segmentation.

[Figure 5]
Figure 5
Result of one slice from the two-phases segmentation of the synthetic glass bead column. (a) Synthetic corrupted data. (b) Reference data. (c) 3D rendering of the corrupted data masked with the reference. (d) Result using PMRF algorithm. (e) Result using SRM algorithm. (f) Result using k-means algorithm. The colors blue, red, yellow and black represent TP, FP, FN and TN, respectively.

We apply the segmentation algorithms following the analysis pipeline described before. Figs. 5(d)–5(f) present the results for the 46th slice using PMRF, SRM and k-means, respectively. The colors blue, red, yellow and black represent TP, FP, FN and TN.

Table 1 summarizes the quantitative results using the general metrics described in §2.3.1 for this dataset. Note that the quantitative results are compatible with the visual results shown in Fig. 5, i.e. the three algorithms perform similarly, with small variations, and all of them approach the reference data. When these different algorithms and their respective metrics largely coincide, our working hypothesis is that the µ-CT segmentation worked properly.

Table 1
Metrics for the synthetic glass bead column

Note that the three algorithms provide similar results converging to the reference data.

  Reference PMRF SRM k-means
Precision 1.0 0.995 0.993 0.996
Recall 1.0 0.993 0.996 0.9896
Accuracy 1.0 0.992 0.994 0.991
Porosity 0.378 0.380 0.377 0.3833
Volume (cm3) 0.0603 0.0604 0.0607 0.0601

The second experiment aims to evaluate the segmentation algorithms applied to a material with peculiar characteristics, including complex geometries and high-curvature points. These synthetic data represent a geological formation similar to rock. In this case, the material has lower porosity and smaller, more highly connected structures. Additionally, the same approach for adding artifacts is applied to these synthetic data.

Figs. 6(d)–6(f) depict the segmentation results compared with the reference data, presented in Fig. 6(b). Fig. 6(a) shows the synthetic corrupted data and Fig. 6(c) presents the 3D rendering of the corrupted data masked with the reference segmentation. The colors blue, red, yellow and black represent TP, FP, FN and TN.

[Figure 6]
Figure 6
Result of one slice from the two-phases segmentation of the synthetic rock-like sample. (a) Synthetic corrupted data. (b) Reference data. (c) 3D rendering of the corrupted data masked with the reference. (d) Result using PMRF algorithm. (e) Result using SRM algorithm. (f) Results using k-means algorithm. The colors blue, red, yellow and black represent TP, FP, FN and TN, respectively.

Note again the visual similarity of the results. Table 2 summarizes the general metric values obtained for this experiment. Despite the fact that the simulated material in this case has a more challenging set of characteristics, as described before, the results of the three algorithms also approach the reference data, with precision achieving values higher than 0.99, which is often enough to enable scientific interpretation of the sample.

Table 2
Metrics for the synthetic rock-like sample showing the small variation in the metrics for the three segmentation algorithms, indicating close agreement

  Reference PMRF SRM k-means
Precision 1.0 0.996 0.993 0.996
Recall 1.0 0.969 0.989 0.972
Accuracy 1.0 0.972 0.986 0.975
Porosity 0.206 0.227 0.210 0.2248
Volume (cm3) 0.077 0.0752 0.0769 0.0755

3.2. Glass beads

3.2.1. Description

This real dataset comes from a geological carbon sequestration experiment, which uses glass beads as a proxy for sand grains for in situ studies of infiltration and storage. Fig. 7 illustrates the experimental outcome using glass-bead-packed columns in a biogenic mixture in which calcite precipitation was induced by the microbe S. pasteurii. The glass beads dataset originally comprises a 10 GB image stack and represents a 4.49 mm field of view, with a smaller core region corresponding to x × y dimensions of 1393 × 1398 pixels, as described previously by Ushizima et al. (2011).

[Figure 7]
Figure 7
Result of one slice from the two-phases segmentation of the glass bead column. (a) Original slice. (b) Reference data. (c) 3D rendering of the reference segmentation. (d) Result using PMRF algorithm. (e) Result using SRM algorithm. (f) Results using k-means algorithm. The colors blue, red, yellow and black represent TP, FP, FN and TN, respectively.
3.2.2. Analysis pipeline

Following the procedure used for the synthetic datasets, first the F3D plugin combined with edge-preserving filtering enhances image quality and then the three segmentation algorithms detect the phases of interest. Here, besides separating two phases, material and void space, we apply the algorithms to find an additional phase representing the precipitation of calcium carbonate. The general metrics described in §2.3.1 are also used to evaluate the results on this dataset.

The edge-preserving filter again used [G_{\sigma_{\rm{r}}}] = 50 and [G_{\sigma_{\rm{s}}}] = 5. PMRF converged after five iterations on average, the SRM algorithm used Q = 8, and k-means used k = 2 for the two-phase segmentation and k = 6 for the three-phase segmentation.

3.2.3. Results

We apply the pipeline described above to the experimental data of the glass bead column, first aiming to separate material from void space. Fig. 7 presents a summary of the results for the first slice of the 3D stack. In the original data, presented in Fig. 7(a), three phases can be identified from the pixel intensity values: void space (darkest gray level), glass beads (intermediate gray level) and precipitate (brightest gray level). Fig. 7(b) presents the reference used for this slice for the two-phases segmentation. Fig. 7(c) shows the 3D rendering of the original data masked with the reference data. Finally, Figs. 7(d)–7(f) present the results obtained by PMRF, SRM and k-means, respectively. For the purposes of this work, the reference data used are the best results found in the literature for this dataset (Ushizima et al., 2011), which were validated by an expert materials scientist. We observe close agreement between the results and the reference data according to the values of TP, FP, FN and TN, represented by the colors blue, red, yellow and black, respectively.

Table 3 presents the values of the metrics described in §2.3.1 calculated for this experiment. The results of the three algorithms are in close agreement with the reference data, and the accuracy metric achieved values higher than 0.94, indicating that the algorithms successfully segmented the solid component of the material from the void space.

Table 3
Metrics for the two-phases segmentation of the glass bead column

The three algorithms are in agreement with the reference data.

  Reference PMRF SRM k-means
Precision 1.0 0.957 0.962 0.952
Recall 1.0 0.935 0.939 0.941
Accuracy 1.0 0.940 0.945 0.940
Porosity 0.436 0.449 0.450 0.443
Volume (cm3) 0.04882 0.04774 0.04764 0.04823

Now we apply the same algorithms aiming to separate the three different phases: void space, glass beads and precipitate. Using the same slice as before, Fig. 8 shows the results for the three algorithms. In this case, the algorithms are able to separate the void space (black) and the glass beads (red), and obtain a phase which is a combination of precipitate and vessel (cyan).

[Figure 8]
Figure 8
Result of one slice from the three-phases segmentation of the glass bead column. (a) Original slice. (b) Result using PMRF algorithm. (c) Result using SRM algorithm. (d) Result using k-means algorithm. The colors red, cyan and black represent the solid, precipitate and void phases, respectively.

Table 4 presents the volumes calculated for each phase. The reference data for the three-phases segmentation are unavailable; however, the similarity among the results again indicates agreement of the three algorithms, making the results valuable for visual analysis and characterization of this kind of material. As mentioned before, the phase representing the precipitation is combined with the vessel, which affects the calculation of the metrics for this phase. However, from an experimental point of view, it is possible to acquire the initial state of the experiment, i.e. the state of the glass bead column before infiltration. Consequently, it is feasible to subtract the initial state in order to obtain a phase which purely represents the precipitate, as sketched after Table 4.

Table 4
Metrics for the three-phases segmentation of the glass bead column

Reference data are not available; however, the agreement of the algorithms is noticeable

  PMRF SRM k-means
Volume of beads (cm3) 0.0391 0.04178 0.04301
Volume of precipitation (cm3) 0.005827 0.005851 0.00522
Total volume (cm3) 0.04882 0.04764 0.04823
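A hedged sketch of the subtraction mentioned above, assuming aligned binary masks saved from the combined segmentation and from a pre-infiltration scan (the file names are placeholders):

```python
import numpy as np

combined = np.load("phase_precipitate_and_vessel.npy").astype(bool)   # from the 3-phase segmentation
initial = np.load("initial_state_solid.npy").astype(bool)             # solid phase before infiltration

# Voxels that are solid now but were not solid initially are attributed to the precipitate.
precipitate_only = combined & ~initial
print("precipitate voxels:", precipitate_only.sum())
```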

The values obtained for the general metrics are calculated from the full 3D binary results of the segmentation algorithms. The boundary shown in Fig. 7 was used for visualization purposes only; it is not a requirement for the metrics calculation. However, it is important to emphasize that, if a specific boundary (a region of interest) is selected from the original image, then the calculated values of the metrics will vary accordingly.

3.3. Ceramic matrix composite

3.3.1. Description

Our last set of experiments focuses on the analysis of CMC samples containing fiber beds. The specimens analyzed in this work are unidirectional mini-composites. Each specimen contains approximately 5000–6000 aligned SiC fibers, each about 13 µm in diameter, surrounded by a cured pre-ceramic polymer matrix. The matrix had been introduced through pressure-assisted axial infiltration of a liquid polymer precursor into dry fiber beds encased in glass capillary tubes. The uniformity of fiber packing is of interest because it influences void formation during polymer impregnation and pyrolysis (PIP) processing. µ-CT imaging of the infiltrated specimens was conducted at beamline 8.3.2 at ALS. The distance between the sample and the detector was 1.5–2.0 cm. The data were reconstructed using filtered back projection with Octopus software (Inside Matters NV, 2016). Reconstruction required approximately 2 h when running on a standard desktop computer. The reconstructed results consist of sets of cross sections, recorded as 2D images. Brought together, the images can be used to recover the 3D structures.

Fig. 9 presents a cross section from one of the nine stacks studied. Three different regions are evident in the interior of the tubes: (i) fibers (lightest gray); (ii) cured precursor (medium gray) and (iii) voids (darkest gray). In this work, the precursor and the voids are together distinguished from the fibers.

[Figure 9]
Figure 9
Original slice from one of the µ-CT stacks and its respective reference data.
3.3.2. Analysis pipeline

The segmentation and analysis pipeline was applied to nine 3D µ-CT image stacks, each approximately 2000 × 2000 × 2160 voxels. For each stack, F3D was first used to enhance image quality. Then, the image segmentation algorithms recover binary masks representing the separation of two phases: fiber beds and void space (a combination of cured precursor and voids). From the binary masks, we calculate the general metrics described in §2.3.1.

Additionally, we use the binary masks to construct the Voronoi tessellation, which, combined with the segmentation results, enables calculation of the three specific metrics in equations (11), (12) and (13). The parameter Q for the SRM algorithm was set to 8, k-means was applied with k = 4 and PMRF executed an average of five iterations. Automated threshold estimators were applied to obtain the binary results from SRM (Otsu) and k-means (isodata), as sketched below. The edge-preserving filter again used [G_{\sigma_{\rm{r}}}] = 50 and [G_{\sigma_{\rm{s}}}] = 5.
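The automated thresholding mentioned above can be reproduced with scikit-image's estimators, as in this sketch; the input arrays are placeholders standing in for the SRM and k-means outputs of a slice, not the actual MSM intermediates.

```python
import numpy as np
from skimage import filters

# Placeholder arrays standing in for the SRM and k-means outputs of one slice.
rng = np.random.default_rng(0)
srm_output = rng.normal(100, 20, (256, 256))
kmeans_output = rng.integers(0, 4, (256, 256)).astype(float)

# Otsu threshold for the SRM result, isodata threshold for the k-means result.
binary_srm = srm_output > filters.threshold_otsu(srm_output)
binary_kmeans = kmeans_output > filters.threshold_isodata(kmeans_output)
```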

The results from the three segmentation algorithms are compared with corresponding results of the reference segmentation. The latter segmentation was obtained using MATLAB scripts with numerous parameters optimized for these specimens. The parameter values obtained from the reference segmentation are deemed to be the best estimates of their true values.

3.3.3. Results

Fig. 10 presents the segmentation result for a region of interest from a 2D plane of a stack. In this set of results, differences among the segmentations can be observed, indicating that the algorithms are not in agreement and that further analysis is necessary.

[Figure 10]
Figure 10
Comparisons of the segmentation results for a representative region of the original image. (a) Original image. (b) Reference data. (c) Result using PMRF. (d) Result using SRM. (e) Result using k-means. The colors blue, red, yellow and black represent TP, FP, FN and TN.

Following the same procedure used for the previous experiments, we first analyze the results using the general metrics. Fig. 11 presents plots of the porosity and volume obtained for the nine stacks. The curves are plotted in increasing order for both metrics. There is a weak indication that the PMRF algorithm approaches the reference results slightly more closely.

[Figure 11]
Figure 11
Values of porosity and volume calculated for the nine stacks analyzed. Curves are plotted in increasing order for both metrics based on the reference data. Note the close results of the three algorithms, with a slight advantage for the PMRF algorithm.

Table 5 presents the average segmentation performance of the algorithms applied to the nine stacks, which is in agreement with the other metric results: the PMRF algorithm achieves a slightly higher average accuracy. In general, it is clear that the three algorithms perform well by these metrics, which give an overall understanding of the samples. However, it is difficult to draw precise conclusions about the fiber beds from these results.

Table 5
Average for precision, recall and accuracy obtained from the three segmentation algorithms for the nine CMC 3D stacks

Note that although the results are very similar there is an indication that the PMRF algorithm performs better.

    Mean Standard deviation
Precision PMRF 0.9278 0.020
  SRM 0.946 0.001
  k-means 0.945 0.005
Recall PMRF 0.939 0.023
  SRM 0.891 0.009
  k-means 0.896 0.007
Accuracy PMRF 0.942 0.018
  SRM 0.931 0.007
  k-means 0.933 0.005

Taking a closer look at the results in Fig. 10, the fiber beds obtained by the k-means algorithm suffer from missing fiber regions due to inaccurate pixel segmentation [note the scattered yellow regions in Fig. 10(e)]. In contrast, because they operate on regions rather than individual pixels, both the SRM and the PMRF algorithms allow the use of higher-level information, correctly merging similar homogeneous regions.

However, the SRM algorithm is negatively impacted by the final threshold, leading to occasional missing or misclassified fiber regions [note the yellow regions in Fig. 10(d)]. Thus there are details of the results that the general metrics are not able to capture, and we need more specific criteria for a more precise evaluation.

The three fiber bed characteristics defined in §2.3.2 are now used. Fig. 12 shows the variation in the average value of each characteristic across each sectioning plane with position in the stack. The result based on the PMRF method is consistently in close agreement with that from the reference segmentation. In contrast, the result from k-means differs considerably from the reference values, over-estimating both the non-uniformity of cell area, α, and the porosity, ρ, while underestimating the cell area, [\mu_{{\bf A}}].

[Figure 12]
Figure 12
α, [\mu_{{\bf A}}] and ρ calculated for each slice through the 3D volume, for each segmentation algorithm and the segmentation reference, for the sample presented in Fig. 9. The gray region around the curves represents the standard deviation.

Table 6 summarizes the errors in the three computed characteristics from the three segmentation methods for all nine data sets. The PMRF method consistently yields the best results. For example, the error in α is a mere 0.0096. (For reference, α = 1 corresponds to uniform fiber packing; values obtained in the present composite specimens are typically about α = 1.1.) The errors in [\mu_{{\bf A}}] and ρ from this method are 0.012 and 0.0073, respectively. (For reference, the average values of [\mu_{{\bf A}}] and ρ are about 430 and 0.28.) Higher errors (in some cases by an order of magnitude) are obtained for the other methods.

Table 6
Errors in α, [\mu_{{\bf A}}] and ρ calculated over all nine stacks

Best results are emphasized in bold text

  α μA ρ
PMRF 0.0096 0.012 0.0073
SRM 0.025 0.026 0.17
k-means 0.087 0.109 0.149

Note that the results obtained using the specific metrics differ from, and are more revealing than, those obtained using the general metrics. The reason is that the specific metrics capture details of the fibers individually for each 2D plane throughout the 3D stacks, giving a more precise analysis of this structure. By this measure, the PMRF algorithm is in fact considerably more precise in this case than the other two algorithms, meaning that it appears to be more suitable for the analysis of this kind of sample. The main reason for the better results obtained by the PMRF algorithm is its ability to estimate and separate the regions of interest more precisely based on a contextual model, not only on the intensity values of the pixels.

The set of experiments carried out was made feasible by the proposed pipeline. The methodology described here uses different strategies for segmentation and evaluation of µ-CT samples, and also enables a critical assessment of the accuracy of the algorithms in obtaining scientific information about the samples. The results demonstrate the impact of choosing suitable segmentation algorithms and evaluation metrics when analyzing 3D µ-CT data, and how the proposed pipeline facilitates this exploration and evaluation process.

4. Concluding remarks

Segmentation algorithms enable quantitative characterization of different materials in µ-CT data through measurements that extract physical properties from samples. However, finding the most appropriate segmentation algorithm for the data and the best evaluation metrics remains a constant challenge, and those two choices heavily affect the quality of the information obtained from the data.

We have presented an automated pipeline called MSM for the analysis of 3D µ-CT images that offers different strategies to evaluate combinations of segmentation algorithm and metric for different scientific problems. We performed a broad and detailed set of experiments covering samples with different types of materials: synthetic data of glass beads and a rock-like sample, a real glass beads sample, and real CMC samples containing fiber beds. The impact of choosing the right segmentation algorithm and metric depending on the scientific goals is demonstrated throughout the experiments. Although MSM comes with a pre-defined set of segmentation algorithms and metrics, the pipeline can be easily extended with additional methods for comparison.

We show that the three unsupervised segmentation algorithms detect different phases of the materials and give comparable results for all the experiments. However, for specimens with complex geometries and fine details, the segmentation results may diverge considerably. The analysis of fiber beds requires tailored segmentation algorithms and a more specific set of metrics so that the necessary scientific questions can be answered. In this case, the experiments indicate that the PMRF algorithm is more suitable and more precise when segmenting the fiber beds.

Future directions of this work include studies of scalability of the segmentation methods in order to improve the efficiency of the analytical process. This includes the use of an under-development MPI-based version of the PMRF algorithm. In doing so, we intend to perform a detailed analysis of the fiber beds in CMC samples, modeling the fibers, and measuring fiber deformations and the matrix deformation evolution. Additionally, we will develop metrics that suggest whether or not the input data need enhancement. Currently, the image enhancement step in the proposed pipeline is optional and dependent on the user's choice.

Acknowledgements

This work was supported by: the Office of Naval Research, monitored by Dr David A. Shifler (see funding details below); the grant `Towards Exascale: High Performance Visualization and Analytics' and the Early Career Research project, both under Advanced Scientific Computing Research (ASCR), under program manager Dr Lucy Nowell (see funding details below); the Center for Applied Mathematics for Energy Related Applications (CAMERA), under management of ASCR and Office of Basic Energy Sciences (BES) of the US Department of Energy; an ALS Doctoral Fellowship in Residence (supporting NML) with funding provided by the US Department of Energy; and a National Science Foundation Graduate Research Fellowship (NML) (see funding details below). Any opinion, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The experimental work was performed at beamline 8.3.2 at the Advanced Light Source (ALS), a Division of Lawrence Berkeley National Laboratory. The authors gratefully acknowledge Alastair MacDowell and Harold Barnard (ALS) for assistance with planning and executing the experiments. ALS is supported by the Director, Office of Science, Office of Basic Energy Sciences, of the US Department of Energy under Contract No. DE-AC02-05CH11231.

Funding information

Funding for this research was provided by: Office of Naval Research (award No. N00014-13-1-0860); Advanced Scientific Computing Research (award No. DE-AC02-05CH11231); National Science Foundation Graduate Research Fellowship (award No. 1144085).

References

Aarle, W. van, Palenstijn, W. J., De Beenhouwer, J., Altantzis, T., Bals, S., Batenburg, K. J. & Sijbers, J. (2015). Ultramicroscopy, 157, 35–47.
Arbeláez, P., Maire, M., Fowlkes, C. & Malik, J. (2011). IEEE Trans. Pattern Anal. Mach. Intell. 33, 898–916.
Aurenhammer, F. (1991). ACM Comput. Surv. 23, 345–405.
Bale, H. A., Haboub, A., MacDowell, A. A., Nasiatka, J. R., Parkinson, D. Y., Cox, B. N., Marshall, D. B. & Ritchie, R. O. (2013). Nat. Mater. 12, 40–46.
Bethel, E. W. (2012). Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-Dimensional Bilateral Filter, Technical Report LBNL-5406E. Lawrence Berkeley National Laboratory, Berkeley, CA, USA.
Bethel, W. et al. (2015). DOE ASCR Workshop, pp. 2–30. DOE.
Chen, R.-C., Dreossi, D., Mancini, L., Menk, R., Rigon, L., Xiao, T.-Q. & Longo, R. (2012). J. Synchrotron Rad. 19, 836–845.
Chen, W., Ostrouchov, G., Pugmire, D., Prabhat & Wehner, M. (2013). Technometrics, 55, 513–523.
Ching, D. J. & Gürsoy, D. (2017). J. Synchrotron Rad. 24, 537–544.
Gonzalez, R. C. & Woods, R. E. (2006). Digital Image Processing, 3rd ed. Upper Saddle River: Prentice-Hall.
Hintermüller, C., Marone, F., Isenegger, A. & Stampanoni, M. (2010). J. Synchrotron Rad. 17, 550–559.
Inside Matters NV (2016). Octopus imaging, https://octopusimaging.eu/.
Kak, A. C. & Slaney, M. (2001). Principles of Computerized Tomographic Imaging. Philadelphia: Society for Industrial and Applied Mathematics.
Khanum, M., Mahboob, T., Imtiaz, W., Abdul Ghafoor, H. & Sehar, R. (2015). Int. J. Comput. Appl. 119, 34–39.
Li, S. Z. (2013). Markov Random Field Modeling in Image Analysis, 3rd ed. Springer Publishing Company.
Macqueen, J. (1967). 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297.
Mahapatra, D. & Sun, Y. (2012). IEEE Trans. Image Process. 21, 170–183.
Mizrahi, Y. D., Denil, M. & de Freitas, N. (2014). International Conference on Machine Learning (ICML), 21–26 June 2014, Beijing, China.
Münch, B., Trtik, P., Marone, F. & Stampanoni, M. (2009). Opt. Express, 17, 8567–8591.
Nock, R. & Nielsen, F. (2004). IEEE Trans. Pattern Anal. Mach. Intell. 26, 1452–1458.
Perciano, T., Ushizima, D. M., Bethel, E. W., Mizrahi, Y. D., Parkinson, D. & Sethian, J. A. (2016). IEEE International Conference on Image Processing (ICIP), 25–28 September 2016, Phoenix, Arizona, USA, pp. 1259–1263.
Pinoli, J. C. & Debayle, J. (2012). IEEE J. Sel. Top. Signal. Process. 6, 820–829.
Polak, S. J., Candido, S., Levengood, S. K. L. & Wagoner Johnson, A. J. (2012). Comput. Med. Imaging Graph. 36, 54–65.
Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch, S., Rueden, C., Saalfeld, S., Schmid, B., Tinevez, J.-Y., White, D. J., Hartenstein, V., Eliceiri, K., Tomancak, P. & Cardona, A. (2012). Nat. Methods, 9, 676–682.
Sheppard, A., Latham, S., Middleton, J., Kingston, A., Myers, G., Varslot, T., Fogden, A., Sawkins, T., Cruikshank, R., Saadatfar, M., Francois, N., Arns, C. & Senden, T. (2014). Nucl. Instrum. Methods Phys. Res. B, 324, 49–56.
Shou, D., Ye, L. & Fan, J. (2015). J. Compos. Mater. 49, 1753–1763.
Tassani, S., Korfiatis, V. & Matsopoulos, G. K. (2014). J. Microsc. 256, 75–81.
Tomasi, C. & Manduchi, R. (1998). Proceedings of the Sixth IEEE International Conference on Computer Vision, Bombay, India, pp. 839–846.
Ushizima, D., Morozov, D., Weber, G. H., Bianchi, A. G. C., Sethian, J. A. & Bethel, E. W. (2012). IEEE Trans. Vis. Comput. Graph. 18, 2041–2050.
Ushizima, D., Parkinson, D., Nico, P., Ajo-Franklin, J., MacDowell, A., Kocar, B., Bethel, W. & Sethian, J. (2011). Proc. SPIE, 8135, 813502.
Ushizima, D. M., Perciano, T., Krishnan, H., Loring, B., Bale, H., Parkinson, D. & Sethian, J. (2014). IEEE International Conference on Big Data, 27–30 October 2014, Washington, DC, USA, pp. 683–691.
Van Droogenbroeck, M. & Talbot, H. (1996). Pattern Recognit. Lett. 17, 1451–1460.
Yang, X., De Carlo, F., Phatak, C. & Gürsoy, D. (2017). J. Synchrotron Rad. 24, 469–475.
Zok, F. W. (2016). Am. Ceram. Soc. Bull. 95, 22–28.

This article is published by the International Union of Crystallography. Prior permission is not required to reproduce short quotations, tables and figures from this article, provided the original authors and source are cited.
