 Research
 Open Access
Vertebra segmentation based on two-step refinement
 Jean-Baptiste Courbot^{1},
 Edmond Rust^{1},
 Emmanuel Monfrini^{2} and
 Christophe Collet^{1}
https://doi.org/10.1186/s40244-016-0018-0
© The Author(s) 2016
Received: 29 September 2015
Accepted: 27 June 2016
Published: 26 July 2016
Abstract
Knowledge of vertebra location, shape, and orientation is crucial in many medical applications such as orthopedics or interventional procedures. Computed tomography (CT) offers a high contrast between bone and soft tissues, but automatic vertebra segmentation remains difficult: the wide range of shapes, aging and degenerative joint disease alterations, and the variety of pathological cases encountered in an aging population make automatic segmentation challenging. Besides, daily practice implies a need for affordable computation time.
This paper presents a new automated vertebra segmentation method for 3D CT data (initialized from a bounding box) which tackles these problems. The method is based on two consecutive steps. The first is a new coarse-to-fine method that efficiently reduces the amount of data to obtain a coarse shape of the vertebra. The second step consists of a hidden Markov chain (HMC) segmentation using a specific volume transformation within a Bayesian framework. Our method does not introduce any prior on the expected shape of the vertebra within the bounding box and thus deals with the most frequent pathological cases encountered in daily practice.
We evaluate this method on a set of standard lumbar, thoracic, and cervical vertebrae, on a public dataset, on pathological cases, and in a simple integration example. Quantitative and qualitative results show that our method is robust to changes in shape and luminance and provides correct segmentation in pathological cases.
Background
Primitive bone tumors such as osteoid osteoma, metastatic lesions, degenerative disorders such as arthritis or vertebral body collapse, and traumatic injuries can affect one or several vertebrae. Diagnosis and characterization of these spine lesions rely on medical imaging, and computed tomography (CT) is one of the first-line imaging procedures. This cross-sectional imaging technique discriminates tissues according to their densities and provides a good contrast between bones, surrounding organs, and soft tissues. However, identification of vertebrae can be difficult. Even if vertebrae vary in shape and orientation along the spine, these modifications can be slight between two neighboring elements of the backbone, making assessment of the exact level sometimes challenging.
A precise knowledge of vertebra location, shape, and orientation is however essential. An imaging follow-up of spine lesions requires a precise identification of the affected levels and consequently a reliable identification of the vertebrae. The same considerations apply to multimodality imaging, that is, supplementary spinal imaging procedures (e.g., bone scan with SPECT/CT, 18F-fluoride PET/CT) performed to better characterize lesions or tumor burden. This is even more crucial for preoperative planning and interventional radiology treatments. Vertebra segmentation and identification is therefore a key issue for many medical applications.
Besides their ambiguous shapes and boundaries, a major concern from a segmentation perspective is that vertebra neighborhoods and shapes vary within a single patient, which led to the development of region-specific methods. Another important problem for clinical application is the eventuality of pathological cases, which is not always taken into account in previous works. This is challenging because of the wide range of diseases: on CT scans, a spine lesion can affect the vertebra's local shape (primitive tumor), global shape (scoliosis, fused vertebrae, degenerative disorders), or the intensity of some regions (hyper- or hypodense tumors). On top of that, a reliable vertebra segmentation method is one of the requirements for further advanced processing such as efficient image registration.
Medical image segmentation methods can be divided into three types: iconic, texture-based, and edge-based methods [1]. Iconic methods rely directly on voxel intensities and include amplitude segmentation (e.g., thresholding) and region-based methods [2]. Texture-based methods rely on local operators [3] to describe and discriminate objects according to their apparent texture. Edge-based methods use more abstract descriptors to constrain the shapes and boundaries. As mentioned in [4], vertebra segmentation is a challenging problem since vertebrae are inhomogeneous in intensity and texture and have complex shapes, which makes traditional segmentation techniques ill-suited to the problem. The vast majority of recent methods dedicated to vertebra segmentation are edge-based and rely on deformable models performing an adaptation of prior data, such as templates or statistical atlases, to the vertebra volume. For example, [5, 6] use a prior statistical shape model [7] as an initialization followed by a rigid or non-rigid registration, and in [4, 8, 9], the authors use a shape-constrained deformable model to fit a prior mesh to the data. However, these approaches share two main limitations:

The algorithms use complex shape descriptions and processing, which dramatically increases global processing time.

Methods are validated on a limited set of vertebrae in terms of scope (lumbar, thoracic, or cervical) and health status (middle-aged, healthy patients).
We assume in the following that vertebrae are properly isolated in bounding boxes, delimited roughly by their intervertebral disks and the corresponding mean planes. Several vertebra localization methods can produce such planes, such as the works presented in [4, 8, 10]. Since spine partitioning is outside the scope of this paper, the volume extractions are made manually. This work therefore focuses on the segmentation of the vertebral elements contained in the volume, which may include parts of neighboring vertebrae.
Our method overcomes the limitations listed above, since it relies neither on prior shape information nor on complex shape descriptors. To limit the computation time, we propose a coarse segmentation algorithm which drops voxel clusters from the data volume. This first step is built on statistical testing of coherent voxel clusters and is therefore robust to local and global luminance changes. This coarse segmentation step will be referred to as “Carving”. The second step discriminates the two classes in the remaining volume within a robust hidden Markov chain (HMC) framework and thus performs a coherent voxel-level segmentation. No shape prior is introduced in the algorithms; thus, the method can deal with any type of standard vertebra, from lumbar to cervical, as well as with most of the non-standard cases one can expect in a clinical context. The Sections “Coarse segmentation” and “Fine segmentation based on HMC modeling” describe the two-pass segmentation algorithm. The Section “Results” presents the experiments and the results obtained with the proposed method. The Section “Discussion and conclusion” discusses the results and concludes on the method.
Method
Coarse segmentation
The coarse segmentation iterates over three steps:

The layer construction selects, from the volume of interest, the external layer to be processed at the current iteration.

The layer clustering groups the layer's voxels using a joint space-luminance criterion.

The cluster selection tests whether each cluster should be rejected or included in the final volume.
The three steps are repeated until the volume is completely processed within the initial bounding box.
Layer construction
This step isolates the external layer of voxels on which further processing will be applied. The first layer is defined by its depth I_{1} from the borders of the volume. Given the boundary from the previous step, the following layers cover both an inner part of depth I_{j} and an outer part of height O_{j}, j>1. The layers are isolated with mathematical morphology operators.
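As an illustration, the inner/outer band extraction can be sketched with standard morphological operators. This is a minimal sketch assuming scipy and depths given in voxels; the function name `external_layer` is ours, not the paper's:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def external_layer(mask, inner_depth, outer_height=0):
    """Isolate the external layer of a binary volume: an inner band of
    `inner_depth` voxels inside the boundary, plus an outer band of
    `outer_height` voxels outside it (cf. the morphological-gradient
    footnote, which uses two different structuring elements)."""
    inner = mask & ~binary_erosion(mask, iterations=inner_depth)
    if outer_height > 0:
        outer = binary_dilation(mask, iterations=outer_height) & ~mask
        return inner | outer
    return inner

# toy example: the 1-voxel-deep external layer of a 9x9x9 cube
mask = np.zeros((11, 11, 11), dtype=bool)
mask[1:10, 1:10, 1:10] = True
layer = external_layer(mask, inner_depth=1)
```

With `outer_height=0` the layer lies entirely inside the input mask, which matches the first iteration of the carving loop.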
Layer clustering
We develop a clustering method based on the simple linear iterative clustering (SLIC) method proposed by Achanta et al. [13]. The authors presented a clustering method for color images, which we generalize to the 3D gray-level case; it will hereafter be referred to as “SLIC3D”. In the color image case, a pixel i is defined by its Cartesian coordinates (x_{i}, y_{i}) and its L*a*b intensities (l_{i}, a_{i}, b_{i}). In [13], the authors combine the two representation spaces in one distance using two weighting parameters, m and S: m balances the contribution of the color distance with respect to the Euclidean distance, and S stands for the number of pixels a superpixel is expected to contain. The SLIC algorithm proposed in [13] clusters the pixels so as to approximately minimize, for each pixel, its combined distance to the cluster centroid.
The cluster centroids are initialized on a regular cubic grid of step S. For each cluster, the algorithm processes a cubic 2S×2S×2S region centered on the centroid spatial coordinates. Each voxel in the region that is closer to this cluster centroid than to its current one is relabeled. Finally, the cluster centroids are updated, and the procedure can be repeated for a few iterations. The SLIC3D procedure is summarized in Algorithm 1.
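A simplified single-channel version of such a SLIC3D clustering might look as follows. This is a sketch, not the paper's Algorithm 1: the combined distance d_lum² + (m/S)² d_xyz² is compared in squared form (which preserves the argmin), and the parameter defaults are illustrative:

```python
import numpy as np

def slic3d(vol, S=4, m=10.0, n_iter=2):
    """Simplified SLIC3D sketch for a 3D gray-level volume. Centroids start
    on a regular grid of step S; each iteration assigns every voxel in the
    2S window around a centroid to it if the combined (intensity + weighted
    spatial) distance improves, then recomputes centroid positions and
    mean intensities."""
    shape = np.array(vol.shape)
    grid = np.meshgrid(*[np.arange(S // 2, n, S) for n in shape], indexing="ij")
    cents = np.stack([g.ravel() for g in grid], axis=1).astype(float)  # K x 3
    clum = vol[tuple(cents.astype(int).T)].astype(float)               # K means
    coords = np.indices(vol.shape).astype(float)
    labels = np.zeros(vol.shape, dtype=int)
    for _ in range(n_iter):
        best = np.full(vol.shape, np.inf)
        for k in range(len(cents)):
            lo = np.maximum(cents[k].astype(int) - S, 0)
            hi = np.minimum(cents[k].astype(int) + S, shape)
            sl = tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))
            d_sp = sum((coords[i][sl] - cents[k][i]) ** 2 for i in range(3))
            d = (vol[sl] - clum[k]) ** 2 + (m / S) ** 2 * d_sp  # squared distance
            upd = d < best[sl]
            best[sl][upd] = d[upd]
            labels[sl][upd] = k
        for k in range(len(cents)):  # centroid update step
            sel = labels == k
            if sel.any():
                cents[k] = np.argwhere(sel).mean(axis=0)
                clum[k] = vol[sel].mean()
    return labels

# toy demo: two intensity blocks should end up in disjoint label sets
vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 100.0
labels = slic3d(vol, S=4, m=10.0)
```

The 2S search window, rather than a global nearest-centroid search, is what keeps the SLIC complexity linear in the number of voxels.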
Cluster selection
where |k| is the number of voxels in cluster k, g is the grey-level range, and δ is the acceptable probability of error for the predicate. Q stands for the expected number of underlying independent random variables (r.v.) for the current region and, according to [15], quantifies its statistical complexity. The bone-merging predicate can then be stated as “accept cluster k if |l_{k} − l_{0}| < b(k)”.
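Since the exact expression of b(k) is given by Predicate 5 and is not reproduced here, the following sketch only illustrates a statistical-region-merging-style deviation bound in the spirit of [15], with the same symbols; the default values of g, Q, and δ are our assumptions, not the paper's:

```python
import math

def merging_bound(n_k, g=4096, Q=256, delta=None):
    """SRM-style bound b(k), in the spirit of Nock & Nielsen [15]; the
    exact expression of Predicate 5 may differ. n_k: number of voxels in
    cluster k, g: grey-level range, Q: expected number of independent
    r.v., delta: acceptable error probability (default: 1/n_k, i.e., the
    inverse of the cluster cardinality, as suggested in the text)."""
    if delta is None:
        delta = 1.0 / n_k
    return g * math.sqrt(math.log(2.0 / delta) / (2.0 * Q * n_k))

def accept_bone(l_k, l_0, n_k, **kw):
    """Bone-merging predicate: accept cluster k if |l_k - l_0| < b(k)."""
    return abs(l_k - l_0) < merging_bound(n_k, **kw)
```

Note that the bound shrinks as the cluster grows: large clusters must have a mean intensity very close to the bone reference l_0 to be accepted, while small clusters are given more slack.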
Model and parameters

Equation (1), defining the layer construction, and the SLIC3D method (Algorithm 1) both depend on a size parameter. We choose to define both using only the S parameter from SLIC3D, meaning that we link the layer depth to the expected size of the supervoxels. This parameter thus quantifies both the depth of the current layer and the scale of the supervoxels to exclude. We choose the first two values of S to be higher than the later ones, as we proceed in a coarse-to-fine fashion. S is given in millimeters to ensure isotropy between axes and between scans.

The m parameter used in clustering (Algorithm 1) balances the cluster shape between spatial regularity and intensity regularity. We use decreasing values of m along the iterations, so as to first exclude spatially coherent clusters and then intensity-coherent ones.

The statistical parameters g, Q, and δ used for cluster selection (Predicate 5) can be computed automatically from the cluster at hand: g is the intensity range of the current layer, and Q must be set lower than g to reduce the expected complexity of the bone-merging predicate. The error probability δ can be fixed arbitrarily, e.g., as the inverse of the cardinality of the cluster.

The reference intensity l _{0} (Predicate 5) is the intensity of typical bone in CT scans and is provided by an expert.
The result of the coarse segmentation meets the expectation: the primary need is to reduce the amount of data to process, which is done efficiently. The algorithm actually achieves more than data volume reduction, since the result already has the shape of the underlying vertebra. However, this step alone remains too coarse, and a finer segmentation is now needed to perform a voxel-level classification of the remaining volume.
Fine segmentation based on HMC modeling
The coarse segmentation result obtained in the previous section is smaller than the initial volume and includes most of the anatomical vertebra volume. However, an efficient final segmentation requires the volume to contain enough voxels of both classes to separate. Thus, a region of interest (ROI) is built from the coarse result by morphological dilation with a ball structuring element of radius 10 mm. This ROI is processed in the remainder of this section, as it preserves the expected shape of the vertebra and includes enough non-vertebral voxels to allow an automatic separation.
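The ROI construction by a physically sized ball dilation can be sketched as follows, assuming scipy; the anisotropy handling through a `voxel_size_mm` argument is our addition, since CT voxels are rarely isotropic:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def roi_from_coarse(coarse_mask, voxel_size_mm, radius_mm=10.0):
    """Build the ROI by dilating the coarse result with a ball of given
    physical radius; voxel_size_mm = (dz, dy, dx) handles anisotropy by
    building the ball structuring element in millimeters."""
    r = np.array([int(round(radius_mm / s)) for s in voxel_size_mm])
    zz, yy, xx = np.ogrid[-r[0]:r[0] + 1, -r[1]:r[1] + 1, -r[2]:r[2] + 1]
    ball = ((zz * voxel_size_mm[0]) ** 2 + (yy * voxel_size_mm[1]) ** 2
            + (xx * voxel_size_mm[2]) ** 2) <= radius_mm ** 2
    return binary_dilation(coarse_mask, structure=ball)

# toy example: dilating a single voxel by a 2 mm ball on a 1 mm grid
mask = np.zeros((9, 9, 9), dtype=bool)
mask[4, 4, 4] = True
roi = roi_from_coarse(mask, voxel_size_mm=(1.0, 1.0, 1.0), radius_mm=2.0)
```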
We are interested in a robust, voxel-wise segmentation method. The Bayesian framework meets these requirements and offers a consistent statistical modeling for the segmentation of an image into classes. When processing images or volumes, hidden Markov random field (HMRF) modeling [16] often provides good results because the model accounts for spatial relationships. However, HMRF can be computationally time-consuming, mainly because of the sampling needed to perform estimations from an analytically unknown distribution. On the other hand, the classical HMC framework retains the advantages of Bayesian segmentation without the drawbacks of HMRF. It provides faster computations, and we use it with a specific volume transformation to preserve the most important spatial features. The Baum-Welch algorithm [17] is used for segmentation, based on parameters estimated with the stochastic expectation-maximization (SEM) method [18].
Volume transformation
First of all, the 3D data needs to be transformed into a one-dimensional chain. This point must be considered carefully: while the transformation permits fast computation, it introduces an artificial 1D order in the 3D data and thus uses only 2 out of the 26 neighbors of each voxel. A 3D volume can be transformed by sweeping each line, column, and row from first to last, but this transformation induces too much distortion of the original data structure. Another alternative is the Hilbert curve [19], which is known to be successful for transforming 2D or 3D images into chains (see, e.g., [20–22]). The resulting chain is more spatially regular; however, it creates artifacts in the HMC segmentation because a smooth segmentation estimate requires the chain to have relatively few state transitions, which the Hilbert curve path does not ensure. Therefore, a new volume-to-chain transformation is introduced in this section, relying on the shape information obtained at the coarse segmentation step.
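For intuition, the simplest transformation that keeps every pair of consecutive chain samples adjacent in the volume is the boustrophedon ("snake") scan sketched below. It is shown only as a contrast to the raster and Hilbert scans; the paper's spiral transform is built from the coarse-segmentation shape instead:

```python
import numpy as np

def snake2d(img):
    """2D boustrophedon scan: reverse every other row, then flatten
    row-major, so consecutive samples are always 4-neighbors."""
    s = img.copy()
    s[1::2] = s[1::2, ::-1].copy()
    return s.reshape(-1)

def snake_chain(vol):
    """3D boustrophedon volume-to-chain transform: each slice is scanned
    as a 2D snake, and every other slice is traversed in reverse, so any
    two consecutive chain samples are 6-neighbor voxels."""
    parts = []
    for z in range(vol.shape[0]):
        s = snake2d(vol[z])
        parts.append(s[::-1] if z % 2 else s)
    return np.concatenate(parts)

# demo: chaining the coordinate volumes lets us check step adjacency
coords = np.indices((3, 4, 5))
pos = np.stack([snake_chain(c) for c in coords], axis=1)  # N x 3 positions
```

A plain raster scan, by contrast, jumps across the whole volume at every row and slice boundary, which is the distortion mentioned above.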
Forwardbackward algorithm
This algorithm allows the computation of the posterior densities required for segmentation. Let N be the length of the chain obtained from the spiral transform. X=(X_{1},…,X_{N}) and Y=(Y_{1},…,Y_{N}) are respectively the sequences of random variables representing the spiral transformation of the class volume and of the observed volume. We denote by x=(x_{1},…,x_{N}) and y=(y_{1},…,y_{N}) their respective realizations. The class volume elements take their values in Ω={ω_{0},ω_{1}}, since we want to discriminate vertebral (ω_{1}) from non-vertebral (ω_{0}) elements. The observed volume voxels remain in Hounsfield units, typically in a range of [−2000, 2000] HU. For clarity, we note p(x_{n}) and p(x) instead of p(X_{n}=x_{n}) and p(X=x), respectively, and likewise for the Y process.

X is a Markov chain:$$ p(\boldsymbol{x}) = p(x_{1})\, p(x_{2} \mid x_{1}) \ldots p(x_{N} \mid x_{N-1}) $$(10)

The (Y_{n})_{1≤n≤N} are conditionally independent with respect to X:$$ p(\boldsymbol{y} \mid \boldsymbol{x}) = \prod_{n=1}^{N} p(y_{n} \mid \boldsymbol{x}) $$(11)

The noise independence is verified:$$ p(y_{n} \mid \boldsymbol{x}) = p(y_{n} \mid x_{n}) \quad \forall n \in \left\{ 1, \ldots, N \right\} $$(12)

The previous points lead to the following expression for the joint (X,Y) probability distribution:$$ p(\boldsymbol{x}, \boldsymbol{y}) = p(x_{1})\, p(y_{1} \mid x_{1}) \prod_{n=2}^{N} p(x_{n} \mid x_{n-1})\, p(y_{n} \mid x_{n}) $$(13)
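Under this factorization, the posterior marginals p(x_n | y) can be computed with a scaled forward-backward recursion, sketched below for the two-class case; the Gaussian noise model and the parameter values in the demo are illustrative:

```python
import numpy as np

def forward_backward(y, pi, A, mu, sigma):
    """Scaled forward-backward for a two-state HMC with Gaussian
    emissions. Returns the posterior marginals p(x_n = i | y); an MPM
    segmentation takes their argmax. pi: initial distribution, A: 2x2
    transition matrix, mu/sigma: per-class Gaussian parameters."""
    N = len(y)
    # emission likelihoods p(y_n | x_n = i), shape N x 2
    B = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    alpha = np.zeros((N, 2)); beta = np.zeros((N, 2)); c = np.zeros(N)
    alpha[0] = pi * B[0]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for n in range(1, N):                       # forward pass, rescaled
        alpha[n] = (alpha[n - 1] @ A) * B[n]
        c[n] = alpha[n].sum(); alpha[n] /= c[n]
    beta[-1] = 1.0
    for n in range(N - 2, -1, -1):              # backward pass
        beta[n] = (A @ (B[n + 1] * beta[n + 1])) / c[n + 1]
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# demo: a clean two-level signal is recovered by the posterior argmax
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
post = forward_backward(y, pi=np.array([0.5, 0.5]),
                        A=np.array([[0.9, 0.1], [0.1, 0.9]]),
                        mu=np.array([0.0, 1.0]), sigma=np.array([0.1, 0.1]))
```

The rescaling coefficients c_n avoid the numerical underflow that the unscaled recursions suffer from on chains of realistic length.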
The distribution in (16) requires the knowledge of the noise and model parameters. In an unsupervised segmentation framework, they must be estimated; an estimation method is presented in the next section.
Parameter estimation
The set of parameters to estimate is Θ={π_{ij}, π_{i}, μ_{i}, σ_{i}} for i,j∈{0,1}. With complete data (x,y), Θ could be estimated with the maximum likelihood (ML) estimators. Complete data are however unavailable; this is why SEM iteratively provides simulations of x according to posterior distributions.
The SEM algorithm requires an accurate initialization to ensure fast convergence of the parameter estimation. At this step, we split the process: in some known cases, the volume can include air elements, which are clearly distinct from both soft tissues and bones. To avoid wrong class clustering, we initialize with a set of reference parameters obtained from another vertebra of the same patient without air in the neighborhood. This allows a correct calibration with respect to the patient and the scanner, while avoiding wrong class clustering. In all other cases, we use a simple initialization where μ_{0}=0.25, μ_{1}=0.75, σ_{0}=σ_{1} are estimated through the ML estimator from the whole sequence y, and π_{ij}=π_{i}=0.5 ∀i,j∈{0,1}. These initializations are chosen to ensure class separation and to avoid relying on the convergence of other algorithms (e.g., the K-means algorithm [25]).
We assume that the SEM algorithm has converged when ε<1 %. This choice allows the algorithm to terminate in a small number of iterations: typically, fewer than 15 iterations are needed. Setting a smaller ε increases the global processing time without noticeably improving the result. The adapted SEM algorithm is summarized in Algorithm 4.
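One SEM iteration for this model can be sketched as follows. The forward-filtering/backward-sampling simulation step and the ML updates are a textbook version and may differ from Algorithm 4 in detail; using empirical state frequencies for π is a common simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def sem_step(y, pi, A, mu, sigma):
    """One SEM iteration sketch for the two-state Gaussian HMC:
    (S) draw x ~ p(x | y, Theta) by forward filtering / backward sampling,
    (M) re-estimate Theta by maximum likelihood on the completed data."""
    N = len(y)
    # unnormalized Gaussian emission likelihoods (constants cancel below)
    B = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / sigma
    alpha = np.zeros((N, 2))
    alpha[0] = pi * B[0]; alpha[0] /= alpha[0].sum()
    for n in range(1, N):                         # forward filtering
        alpha[n] = (alpha[n - 1] @ A) * B[n]; alpha[n] /= alpha[n].sum()
    x = np.zeros(N, dtype=int)                    # backward sampling
    x[-1] = rng.choice(2, p=alpha[-1])
    for n in range(N - 2, -1, -1):
        w = alpha[n] * A[:, x[n + 1]]
        x[n] = rng.choice(2, p=w / w.sum())
    # ML re-estimation on the completed data (x, y)
    mu_new = np.array([y[x == i].mean() if (x == i).any() else mu[i] for i in range(2)])
    sg_new = np.array([y[x == i].std() if (x == i).sum() > 1 else sigma[i] for i in range(2)])
    trans = np.zeros((2, 2))
    for a, b in zip(x[:-1], x[1:]):
        trans[a, b] += 1
    A_new = trans / np.maximum(trans.sum(axis=1, keepdims=True), 1)
    pi_new = np.bincount(x, minlength=2) / N      # empirical frequencies
    return pi_new, A_new, mu_new, sg_new, x

# demo: one iteration on a well-separated two-class signal
y = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(1.0, 0.1, 50)])
pi1, A1, mu1, sg1, x = sem_step(y, pi=np.array([0.5, 0.5]),
                                A=np.array([[0.9, 0.1], [0.1, 0.9]]),
                                mu=np.array([0.25, 0.75]),
                                sigma=np.array([0.3, 0.3]))
```

Note that the demo initialization mirrors the simple scheme above (μ_0=0.25, μ_1=0.75, uniform transitions).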
HMC segmentation algorithm
Results
In this section, the method performance is evaluated. First, the method is qualitatively evaluated on a set of 339 standard vertebrae acquired in daily practice. Then, quantitative results on manually segmented data are reported. Pathological cases then allow a robustness evaluation. Finally, we provide a simple integration example with a segmentation of the full spine.
Standard cases: qualitative results
The method is evaluated on a set of vertebral volumes from the whole spine of 15 consecutive patients in an oncologic tertiary center, with exclusion of patients with bone tumors or metastatic spine involvement. Patients had a mean age of 63 and presented degenerative joint alterations and some osteoporotic changes, reflecting most of the situations encountered in daily practice; 339 vertebral volumes were extracted and evaluated, meaning that almost all patients’ vertebrae were tested.
Algorithm 2 (coarse segmentation) and then Algorithm 5 (fine segmentation) are applied to each volume. For the sake of comparison, the K-means algorithm is used as a benchmark, since it has common ground with the proposed method: it processes the voxel intensities and has no prior on the volume shape. Since the two-class K-means classification fails when air is present in the volume, we use a subsample in which no air is present to provide a more accurate comparison.

Excellent (100): the vertebra is exactly delimited inside its bounding box.

Good (75): most of the anatomical structure is covered, but some voxels are segmented out.

Bad (50): the vertebra is recognizable but noticeable parts are missing from the result.

Poor (25): the vertebra is not recognizable enough.

Fail (0): the segmentation fails to proceed.
Results for the subsample without air and for the full sample. This partition provides accurate comparative results on the subsample

Grade (score)     Partial set: 178 volumes          Full set: 339 volumes
                  Proposed method    K-means        Proposed method
Excellent (100)   75 (42.13 %)       46 (25.84 %)   98 (28.91 %)
Good (75)         64 (35.96 %)       83 (46.63 %)   129 (38.05 %)
Bad (50)          31 (17.42 %)       35 (19.66 %)   80 (23.60 %)
Poor (25)         6 (3.37 %)         9 (5.06 %)     29 (8.55 %)
Fail (0)          2 (1.12 %)         5 (2.81 %)     3 (0.88 %)
Average score     78.65              71.91          71.39
The algorithms were developed and tested in Matlab on an Intel i5 (2.6 GHz), on one core, without specific code optimization. The processing time is 36 s per vertebra on average, with 10 s for the Carving step and 26 s for the HMC step. This processing time depends on the size of the vertebra to segment, the average total time being 71.4 s for lumbar vertebrae and 19.8 s for cervical vertebrae. For a daily practice implementation, a C/C++ implementation is expected to reduce the processing time by a factor of at least 10.
The results presented here cover standard volumes mostly encountered in practice but do not yield voxel-wise error rates. The next section presents a quantitative evaluation of the method with respect to manual segmentations.
Quantitative results
When considering vertebral segmentation in CT images, few complete datasets are available for a quantitative evaluation of a method. We use the dataset presented at the CSI 2014 challenge [26], available on the public SpineWeb platform [27]. This dataset contains 10 spine scans from a trauma center, acquired during daily clinical routine work. Patients were aged 16 to 35, and the scans covered thoracic and lumbar vertebrae in most cases. A total of 175 segmented volumes were extracted from the dataset. The performance is measured as the rate of correctly classified voxels (true positives and true negatives).
Errors are either false positives (type I errors) or false negatives (type II errors). False positives occur mostly in the presence of high-intensity elements, such as calcifications or ribs near thoracic vertebrae. False negatives are either missing voxels at the vertebra boundary or missing voxels within the vertebral body.
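The reported metric and the error breakdown are straightforward to compute from boolean masks; a small sketch:

```python
import numpy as np

def voxel_accuracy(pred, truth):
    """Rate of correctly classified voxels, (TP + TN) / total, the metric
    used for the CSI 2014 evaluation; inputs are boolean volumes."""
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return (tp + tn) / pred.size

def error_breakdown(pred, truth):
    """False positive (type I) and false negative (type II) counts."""
    fp = int(np.logical_and(pred, ~truth).sum())
    fn = int(np.logical_and(~pred, truth).sum())
    return fp, fn

# toy example: one TP, one FP, one FN, one TN
pred = np.array([True, True, False, False])
truth = np.array([True, False, True, False])
acc = voxel_accuracy(pred, truth)
fp, fn = error_breakdown(pred, truth)
```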
These results are satisfying, given that most error sources are known and that specific postprocessing could easily remove them. So far, the performances were evaluated on standard cases: the next section presents pathological cases that may be encountered in practice.
Pathological cases: robustness evaluation
As our method performs correctly on standard cases, it has to be evaluated in more difficult situations, namely pathological cases. Since we aim at a clinical implementation, robustness to the most frequent non-standard cases is indeed mandatory. We first briefly describe the selected cases and the corresponding challenges; we then provide the corresponding segmentation results and a discussion.
Many pathologic conditions can also lead to bone density variations. For instance, osteoblastic cancerous tumors increase bone density, whereas osteolytic tumoral involvement is associated with bone destruction and is therefore seen as areas of decreased bone density. Finally, treatments, whether general (e.g., chemotherapy) or interventional (e.g., cementoplasty), can induce bone density alterations. In particular, cementoplasty, which can be described as the interventional introduction of artificial high-density material inside the vertebral body, represents an extreme case of overdensity. This is the third case we retained (see Fig. 11 c).

The hernia case presents a region which can be seen as a vacuum in the bone material. This vacuum is segmented out by the method, since it differs from the bone in intensity. However, this particular point does not prevent the method from performing the segmentation correctly. Notably, some inner parts of the vertebral body are included in the segmentation, which is not the case for standard lumbar vertebrae (e.g., Figs. 6 and 12 a). This is due to the relative overdensity induced by the hernia in the surrounding material.

Finally, the cementoplasty case represents a more challenging test. It indeed produces an almost homogeneous bright region inside the vertebral body, leading to a global distortion of the data in comparison with the standard cases. Nevertheless, the proposed method succeeds in providing a correct result (Fig. 12 c), which does not cover the full vertebra volume but does represent most of the underlying vertebra. It also shows that natural overdensities of smaller magnitude can be handled by our method, cementoplasty being one of the most extreme cases.
The results presented here show that the proposed method is robust to some of the most frequent particular cases encountered in a clinical context. Furthermore, as it provides a correct result in a challenging case, one can expect it to be robust to most lower-intensity specificities. The next section provides further results covering the full range of vertebrae for two patients.
Integration examples
First of all, one can notice in both results that the segmentations include some separation artifacts due to the delimitations of the volumes (initial bounding boxes). Thus, some vertebral elements have been segmented out of the total result, as they were initially badly delimited. Note also that some non-vertebral elements have been segmented in; this is in particular the case for the L1 vertebra of the second case, which presents calcifications (as in Fig. 12 b). This also happens with surrounding bones, such as the ribs for thoracic vertebrae and the pelvis for the L5 vertebra. Nevertheless, the segmentation is not impacted by these inclusions and performs correctly.
From Fig. 13, the changes in vertebra shape and size are clear within one patient and also between patients. Indeed, the shapes of the seven lower vertebrae noticeably differ between the two cases. The vertebral compression induces deformations of the observed vertebral bodies; thus, their shape does differ from prior expectations. Note also that the second case presents an arthrosis between T12 and L1, causing the vertebral bodies to join. Despite these specificities, our method performs correctly on all vertebrae, and all vertebral substructures are clearly segmented.
These results show that the proposed method can be successfully integrated within a simple spine processing pipeline and thus can be used in more complex frameworks. Given the results of the previous sections, one can expect our method to perform well in most practical situations, regardless of the vertebra type, position, and specificity. Further discussion of the method is given in the next section.
Discussion and conclusion
Vertebra segmentation is a challenging task. The wide range of shapes, the high rate of aging modifications, and the pathologic alterations frequently encountered in real cases explain the difficulties of an automatic segmentation in daily practice patients.
Indeed, most published works on vertebra segmentation seem to be developed and evaluated on ideal data, namely on young populations in which vertebrae are well separated, with CT providing a very high contrast between medullar bone, vertebra boundaries, and soft tissues. Additionally, sometimes only the lumbar spine is evaluated, with a consequent lack of information about the robustness of the presented schemes on thoracic and cervical vertebrae.
This work is part of a larger project on spinal registration for patients presenting bone tumors. The segmentation method we present is thus developed from a real practice perspective, which explains why we took into account pathologic cases as well as the most frequent aging modifications, without any prior on vertebra shape and luminance. The bounding box initialization can be obtained automatically using state-of-the-art techniques.
The presented method fulfills the requirements of automatic bone segmentation prior to registration processes, with an affordable computation time.
Endnotes
^{1}Note that this operation is similar to a morphological gradient, with two different structuring elements instead of one.
^{2}We use volume height since it is shorter than depth or width for the vertebral volumes we process.
^{3}Experiments show that other perimeter-based paths, such as scanning along a coronal axis slice by slice or using concentric helices, lead to similar results.
Declarations
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Sharma N, Aggarwal LM (2010) Automated medical image segmentation techniques. J Med Phys/Assoc Med Phys India 35(1): 3.
2. Nyúl LG, Kanyó J, Máté E, Makay G, Balogh E, Fidrich M, Kuba A (2005) Method for automatically segmenting the spinal cord and canal from 3D CT images. In: Computer Analysis of Images and Patterns, 456–463. Springer.
3. Malik J, Belongie S, Leung T, Shi J (2001) Contour and texture analysis for image segmentation. Int J Comput Vis 43(1): 7–27.
4. Kim Y, Kim D (2009) A fully automatic vertebra segmentation method using 3D deformable fences. Comput Med Imaging Graph 33(5): 343–352.
5. Mirzaalian H, Wels M, Heimann T, Kelm BM, Suehling M (2013) Fast and robust 3D vertebra segmentation using statistical shape models. In: 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 3379–3382, Osaka.
6. Rasoulian A, Rohling R, Abolmaesumi P (2013) Lumbar spine segmentation using a statistical multi-vertebrae anatomical shape + pose model. IEEE Trans Med Imaging 32(10): 1890–1900.
7. Heimann T, Meinzer HP (2009) Statistical shape models for 3D medical image segmentation: a review. Med Image Anal 13(4): 543–563.
8. Klinder T, Ostermann J, Ehm M, Franz A, Kneser R, Lorenz C (2009) Automated model-based vertebra detection, identification, and segmentation in CT images. Med Image Anal 13(3): 471–482.
9. Ma J, Lu L, Zhan Y, Zhou X, Salganicoff M, Krishnan A (2010) Hierarchical segmentation and identification of thoracic vertebra using learning-based edge detection and coarse-to-fine deformable model. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI 2010), 19–27. Springer, Beijing.
10. Glocker B, Feulner J, Criminisi A, Haynor DR, Konukoglu E (2012) Automatic localization and identification of vertebrae in arbitrary field-of-view CT scans. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI 2012), 590–598. Springer, Nice.
11. Chen T, Huang TS, Yin W, Zhou XS (2005) A new coarse-to-fine framework for 3D brain MR image registration. In: Computer Vision for Biomedical Image Applications, 114–124. Springer, Beijing.
12. Duda RO, Hart PE, Stork DG (2012) Pattern classification. John Wiley & Sons.
13. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell 34(11): 2274–2282.
14. Sprawls P Jr (1995) Physical principles of medical imaging. Aspen Publication, Rockville.
15. Nock R, Nielsen F (2004) Statistical region merging. IEEE Trans Pattern Anal Mach Intell 26(11): 1452–1458.
16. Geman S, Geman D (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans Pattern Anal Mach Intell 6(6): 721–741.
17. Baum LE, Petrie T, Soules G, Weiss N (1970) A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann Math Stat 41(1): 164–171.
18. Celeux G, Diebolt J (1986) L'algorithme SEM: un algorithme d'apprentissage probabiliste pour la reconnaissance de mélange de densités. Revue de Statistique Appliquée 34(2): 35–52.
19. Sagan H (1994) Space-filling curves, Vol. 18. Springer.
20. Bricq S, Collet C, Armspach JP (2008) Unifying framework for multimodal brain MRI segmentation based on hidden Markov chains. Med Image Anal 12(6): 639–652.
21. Derrode S, Pieczynski W (2004) Signal and image segmentation using pairwise Markov chains. IEEE Trans Signal Process 52(9): 2477–2489.
22. Fjørtoft R, Delignon Y, Pieczynski W, Sigelle M, Tupin F (2003) Unsupervised classification of radar images using hidden Markov chains and hidden Markov random fields. IEEE Trans Geosci Remote Sens 41(3): 675–686.
23. Marroquin J, Mitter S, Poggio T (1987) Probabilistic solution of ill-posed problems in computational vision. J Am Stat Assoc 82(397): 76–89.
24. Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Series B (Methodological) 39: 1–38.
25. MacQueen J (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, 281–297, Berkeley.
26. Yao J, Burns JE, Munoz H, Summers RM (2012) Detection of vertebral body fractures based on cortical shell unwrapping. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI 2012), 509–516. Springer.
27. SpineWeb platform website. http://spineweb.digitalimaginggroup.ca/.