Towards improving edge quality using combinatorial optimization and a novel skeletonize algorithm
BMC Medical Imaging volume 21, Article number: 119 (2021)
Abstract
Background
Object detection and image segmentation of regions of interest provide the foundation for numerous pipelines across disciplines. Robust and accurate computer vision methods are needed to properly solve image-based tasks. Multiple algorithms have been developed to solely detect edges in images. Constrained to the problem of creating a thin, one-pixel-wide edge from a predicted object boundary, we require an algorithm that removes pixels while preserving the topology. Skeletonize algorithms transform an object boundary into an edge, contrasting uncertainty with exact positions.
Methods
To extract edges from boundaries generated by different algorithms, we present a computational pipeline that relies on: a novel skeletonize algorithm, a non-exhaustive discrete parameter search to find the optimal parameter combination of a specific post-processing pipeline, and an extensive evaluation using three data sets from the medical and natural image domains (kidney boundaries, NYU-Depth V2, BSDS 500). While the skeletonize algorithm was compared to classical topological skeletons, the validity of our post-processing algorithm was evaluated by integrating the original post-processing methods from six different works.
Results
Using state-of-the-art metrics, the precision and recall based Signed Distance Error (SDE) and the Intersection over Union of the bounding box (IoU-box), our results indicate that the SDE for these edges improves by up to a factor of 2.3.
Conclusions
Our work provides guidance for parameter tuning and algorithm selection in the post-processing of predicted object boundaries.
Background
Boundaries are crucial in object detection methods. They provide the means to distinguish objects of interest in an image from the background. In humans, this task appears intuitive, although it is accomplished using complex processes that integrate a priori knowledge. Knowing where the boundary of an object lies requires human annotators and/or algorithms to find a dividing line between two areas of an image that exhibit different visual features (e.g., object boundaries, material property boundaries, lighting boundaries). However, in both cases there remains an uncertainty as to where the edge lies. This uncertainty varies and is even exacerbated by domain specificity (e.g., the natural image domain). In turn, it prevents the definition of precise edges and the generalization of edge-related methods. In this work, we distinguish between this area of uncertainty (ramp edge) and a thin edge. Our problem description defines the former as the boundary, where the intensity change is not instantaneous but occurs over a finite distance with the occurrence of discontinuities (i.e., an incomplete object boundary). The latter, or thin edge, is a 1-pixel-wide exact representation derived from a post-processed ramp edge. While a boundary in an image is represented as a per-pixel uncertainty value, an edge is represented as a per-pixel binary dichotomy. Our aim is to improve the creation and definition of ramp edges after they have been detected, to meet the constraints of a 1-pixel edge. That is to say, post-processing the ramp edge, i.e., the boundary, into a thin edge.
Historically, edge detection is a core discipline in computer vision that enables many different tasks, e.g., image segmentation [1], object detection [2, 3], and augmented reality [4]. These tasks have evolved in different directions: (a) pixel based, where individual pixel value differences are investigated in a local neighborhood of an edge [5,6,7], (b) models based on human perception [8], and (c) computational approaches on step edges [9]. However, these works provide all edges of an image, and the task of filtering out non-relevant edges remains. The extraction of task-relevant edges, called semantic edges, is an active research topic across many research fields [10,11,12,13,14]. A graph-based solution was introduced that stitches together appropriate edge fragments to produce a closed or linked object boundary [15]. Boundary fragments are identified using an encoding of the Gestalt laws of proximity and continuity. Two other works rely on the concept of object consistency. They tackle the same task of segmentation, but do so by considering hue or saturation between neighboring objects. One proposed solution used the color differences of neighboring pixels combined with progressive block-oriented edge detection [16]. Another introduced the use of differences in gray values between pixels while disregarding edges between other features [17].
In the medical research field, deep learning based algorithms have been reported [10, 14, 18, 19]. For minimally invasive surgery, and specifically laparoscopic image data, the task of semantic segmentation has proven challenging due to different limiting factors (e.g., occlusion by fatty tissue or surgical instruments, or the homogeneity of the texture of the target organ) [20]. In our proposed deep learning based edge detector [14], the neural net predicts kidney boundaries that are then post-processed into 1-pixel-wide edges. In this work, the specific edge location is determined by a confidence value that is represented by the pixel intensity within the predicted boundary. Related work relied on this same approach to address the task of head-circumference delineation from fetal ultrasound images in the domain of prenatal ultrasound [21]. Yet, adapting this algorithm warrants further investigation to improve the post-processing steps.
This work details how to improve the resulting edges by proposing a formalization of an edge detection pipeline that contains the conversion of such boundaries, or ramp edges, to thin edges as a post-processing step. This allows us to seamlessly integrate our approach into many existing edge detection pipelines and helps improve methods for a variety of tasks. Further, we introduce a novel skeletonize algorithm and evaluate its performance against state-of-the-art results across different research fields (c.f., Table 3). We benchmark our algorithm against morphological medial axis transformation algorithms to enable a comparable evaluation. We consider these three algorithms: a 2D skeletonize [22], a 3D skeletonize [23], and the novel Gray-Weighted Path Skeletonize (GWPS). To evaluate and improve the accuracy of the resulting edges, we use state-of-the-art metrics, namely the Signed Distance Error (SDE) and the Intersection over the Union of the bounded connected component (IoU-box). To avoid mixing metrics of downstream tasks with edge-specific metrics, we restrict our analyses to the resulting 1-pixel edge maps. We assess the validity of the proposed post-processing algorithm by relying on three data sets: (1) Kidney boundaries [14], (2) NYU Depth Dataset V2 [24], and (3) BSDS 500 [25].
This work helps the community at large by presenting: (a) a parameterized post-processing algorithm to achieve a thin edge that also enables combinatorial optimization, (b) a novel skeletonize algorithm based on an intermediate transformation into a gray-weighted distance image, and (c) an extensive evaluation with respect to performance metrics.
Methods
To improve the accuracy of topological skeletons, we first formalize an edge detection pipeline as a composition of multiple functions. Second, to enable a combinatorial optimization, we describe the post-processing function, including its parameters. Third, we propose a novel skeletonize algorithm that is better adapted to object boundary processing. Fourth and last, we detail the data sets used, lay out the evaluation, and report the employed metrics.
We define an edge detection pipeline \({\mathcal{P}}\) as a composition of two functions:
$$\begin{aligned} {\mathcal{P}}: I^3 \rightarrow I^B, \end{aligned}$$ (1)
where \(I^3\) is the space of the three RGB channels and \(I^B\) the space of binary images. The function defining the pipeline contains two parts:
$$\begin{aligned} {\mathcal{P}} = {\mathcal{E}} \circ {\mathcal{A}}, \quad {\mathcal{A}}: I^3 \rightarrow I^1, \quad {\mathcal{E}}: I^1 \rightarrow I^B, \end{aligned}$$ (2)
where \(I^1\) is the space of 1-channel gray scale images.
This function composition maps our approach of predicting ramp edges or boundaries, then processing them into thin edges as follows:

1. \({\mathcal{A}}\) takes an RGB image \(s \in S\) as input and predicts the boundaries of the objects of interest as a gray scale image \(g \in G\).
2. \({\mathcal{E}}\) extracts the thin edges from \(g \in G\) and produces the output of the pipeline \(o \in O\), i.e., a binary image.
S, G and O are sets of images in the image spaces \(I^3, I^1\) and \(I^B\), respectively. Figure 1 presents a visual overview of \({\mathcal{P}}\).
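As an illustration, the composition \({\mathcal{P}} = {\mathcal{E}} \circ {\mathcal{A}}\) can be sketched in a few lines of Python; `predict_boundaries` and `extract_edges` are hypothetical stand-ins for \({\mathcal{A}}\) and \({\mathcal{E}}\), not the implementations evaluated in this work:

```python
import numpy as np

def predict_boundaries(s: np.ndarray) -> np.ndarray:
    """Stand-in for A: map an RGB image (I^3) to a gray scale boundary map (I^1).
    A real implementation would be a trained boundary predictor."""
    return s.mean(axis=2) / 255.0

def extract_edges(g: np.ndarray) -> np.ndarray:
    """Stand-in for E: map a gray scale boundary map (I^1) to a binary edge image (I^B)."""
    return g > 0.5

def pipeline(s: np.ndarray) -> np.ndarray:
    """P = E o A: RGB image -> binary thin-edge image."""
    return extract_edges(predict_boundaries(s))

s = np.zeros((4, 4, 3), dtype=np.uint8)
s[:, 2, :] = 255  # a bright vertical stripe as a toy boundary
o = pipeline(s)
assert o.dtype == bool and o.shape == (4, 4)
```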
Edge extraction function \({\mathcal{E}}\)
The predicted boundaries of an image \(g \in G\) from \({\mathcal{A}}\) are converted to a thin edge using the extraction function \({\mathcal{E}}\) defined in Eq. 3:
$$\begin{aligned} o = {\mathcal{E}}(g, \alpha , \beta , \gamma ), \end{aligned}$$ (3)
with the following parameters:

1. Input image \(g\): Gray scale output image of function \({\mathcal{A}}\) containing the predicted boundaries.
2. Threshold \(\alpha\): Due to noise in the predicted boundary, we set a pixel value threshold \(\alpha\). If a pixel value is below \(\alpha\), the pixel is set to zero; otherwise it remains unchanged.
3. Skeletonize algorithm \(\beta\): This algorithm converts the boundary to a thin edge. We benchmark our skeletonize algorithm 'GWPS' against the following algorithms: 2D skeletonize [22] (called '2D') and 3D skeletonize [23] (called '3D'). For the '2D' and '3D' algorithms, the boundary images are converted to binary representations before applying the skeletonize algorithm. That is to say, a pixel value of 1 implies a contribution to the boundary, while 0 belongs to the background.
4. Pruning \(\gamma\): After applying the skeletonize algorithm, spurs are introduced by small irregularities in the boundary. Spur pruning removes all spurs up to a maximum length \(l\) [26]:
$$\begin{aligned} l = \frac{\gamma }{100}\cdot \sqrt{w^2+h^2}, \end{aligned}$$ (4)
with the image width \(w\) and height \(h\) in pixels. This definition allows for the relative comparison of the pruning effectiveness between images of different sizes.
Our post-processing algorithm (Algorithm 1) defines the necessary steps to implement the aforementioned edge extraction function \({\mathcal{E}}\) on a per-image basis.
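As a hedged sketch (not the authors' released implementation), \({\mathcal{E}}\) can be assembled from `scikit-image` and `scipy` building blocks. The spur pruning helper below is a simplified stand-in that strips endpoint pixels for at most \(l\) iterations, and the boundary is binarized before skeletonizing, as described for the '2D'/'3D' variants:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def prune_spurs(edge: np.ndarray, max_len: int) -> np.ndarray:
    """Simplified spur pruning: repeatedly strip endpoints (pixels with exactly
    one 8-connected neighbour), at most max_len times."""
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    edge = edge.copy()
    for _ in range(max_len):
        neighbours = ndimage.convolve(edge.astype(int), kernel, mode="constant")
        endpoints = edge & (neighbours == 1)
        if not endpoints.any():
            break
        edge &= ~endpoints
    return edge

def extract_edges(g: np.ndarray, alpha: float, beta=skeletonize,
                  gamma: float = 0.0) -> np.ndarray:
    """E(g; alpha, beta, gamma): boundary map -> binary thin edge."""
    g = np.where(g < alpha, 0, g)            # threshold alpha
    edge = beta(g > 0)                       # skeletonize beta (binarized input)
    h, w = g.shape
    l = int(gamma / 100.0 * np.hypot(w, h))  # Eq. 4: relative spur length
    return prune_spurs(edge, l) if l > 0 else edge
```

GWPS would instead consume the gray values of the thresholded boundary directly, as described in the next section.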
The parameter set \(\{\alpha , \beta , \gamma \}\) is bounded, such that:
$$\begin{aligned} \alpha \in [0, 255], \quad \beta \in \{\text {2D}, \text {3D}, \text {GWPS}\}, \quad \gamma \in [0, 100]. \end{aligned}$$ (5)
Gray-weighted path skeletonize (GWPS)
Figure 2 shows an example predicted boundary as a three-dimensional surface. The goal of the GWPS algorithm is to find the ridge along this surface and extract it as a thin edge in two-dimensional space. To use the ridge of the surface, we introduce an algorithm that computes the resulting edge with the help of an intermediate representation of the distance field using a Gray-Weighted Distance Transformation, or GWDT [27]. The GWDT assigns each contributing pixel a value representing the cost of the shortest path to the background, where the cost corresponds to the accumulated pixel intensity values along the path. In the context of our problem, background pixels have the value zero and therefore all other non-zero pixels contribute information to the thin edge.
The GWDT value at pixel position (i, j) of an image, where f(i, j) denotes the pixel value at position (i, j), is calculated by minimizing the accumulated cost along a path from the current position to a background pixel. A pixel path W is defined as
$$\begin{aligned} W = (w_1, w_2, \ldots , w_n), \quad w_1 = (i, j), \end{aligned}$$ (6)
where the following properties hold:
$$\begin{aligned} \Vert w_k - w_{k+1}\Vert _1 = 1, \quad f(w_k)> 0 \text { for } k < n, \quad f(w_n) = 0. \end{aligned}$$ (7)
That is to say, all pixels in the sequence are four-way connected, the first \(n-1\) pixels are foreground pixels, and the path ends in a background pixel. We optimize the cost along this path for all non-zero pixels of the input image and assign the cost as the corresponding pixel value of the GWDT image \(t \in I^1\):
$$\begin{aligned} t(i, j) = \min _{W} \sum _{k=1}^{n} f(w_k), \end{aligned}$$ (8)
that is to say, the pixel value at position (i, j) is optimal, as it is set to the minimum of the accumulated cost along the pixel path W.
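This minimization can be computed with a multi-source Dijkstra search over the 4-connected pixel grid, seeded at the background pixels; the following is an illustrative re-implementation of the GWDT idea, not the code of [27]:

```python
import heapq
import numpy as np

def gwdt(f: np.ndarray) -> np.ndarray:
    """Gray-weighted distance transform: for each non-zero pixel, the minimum
    accumulated intensity along a 4-connected path ending in a zero pixel."""
    h, w = f.shape
    t = np.full((h, w), np.inf)
    heap = []
    # Seed the search with all background pixels at cost zero.
    for i in range(h):
        for j in range(w):
            if f[i, j] == 0:
                t[i, j] = 0.0
                heapq.heappush(heap, (0.0, i, j))
    while heap:
        cost, i, j = heapq.heappop(heap)
        if cost > t[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                # Stepping onto a pixel accumulates its intensity value.
                ncost = cost + f[ni, nj]
                if ncost < t[ni, nj]:
                    t[ni, nj] = ncost
                    heapq.heappush(heap, (ncost, ni, nj))
    return t
```

For a one-row ramp `[0, 1, 2, 1, 0]`, the centre pixel's cheapest path to the background accumulates 1 + 2 = 3, matching the definition above.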
Using the resulting image as an intermediate representation, we propose Algorithm 2 to reduce the GWDT to a thin edge. This algorithm uses the eight-way neighborhood number \(N_c\) [28] that is defined for a pixel \(x_0\) using its neighborhood, as depicted in Table 1 and Eq. 9.
The resulting image contains the desired thin edge, represented by non-zero pixels along the ridge of the distance field's surface. The edge is also eight-way connected due to the evaluation of the eight-way neighborhood number. In summary, our skeletonize algorithm comprises two steps:

1. Calculate the GWDT of the image, and
2. Extract the thin edge of the GWDT by running GWPS (c.f., Algorithm 2).
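Given a precomputed GWDT image (background pixels zero), the reduction step can be sketched as follows. This is our simplified reading of the ridge extraction, using a crossing-number test in place of the exact \(N_c\) evaluation of Algorithm 2, and `gwps_thin` is our name for it:

```python
import numpy as np

def gwps_thin(t: np.ndarray) -> np.ndarray:
    """Reduce a GWDT image t (background pixels are zero) to a thin edge:
    visit foreground pixels in order of increasing GWDT value and delete a
    pixel whenever deletion preserves the local topology (exactly one
    background-to-foreground transition around its 8-neighbourhood) and the
    pixel is not an endpoint. Surviving pixels trace the ridge of t."""
    h, w = t.shape
    edge = t > 0
    # The 8-neighbourhood in circular (clockwise) order.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    coords = sorted(zip(*np.nonzero(edge)), key=lambda p: t[p])
    changed = True
    while changed:
        changed = False
        for i, j in coords:
            if not edge[i, j]:
                continue
            nb = [0 <= i + di < h and 0 <= j + dj < w and edge[i + di, j + dj]
                  for di, dj in ring]
            transitions = sum((not nb[k]) and nb[(k + 1) % 8] for k in range(8))
            if sum(nb) > 1 and transitions == 1:
                edge[i, j] = False  # deletion keeps the neighbours connected
                changed = True
    return edge
```

On a three-row band whose middle row carries the highest GWDT values, the two low-valued rows erode away and the middle row survives as a one-pixel ridge.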
Data sets
To meet the requirement of creating thin edges, the pipeline is used to find the best parameter sets for the post-processing methods. This means the evaluation is carried out for every data set on the predicted boundaries versus the published edges. We report below the data sets that are used to evaluate our computational strategies and the pipeline usage:

Kidney boundaries: We use this data set as is [14]. It consists of 2250 images captured during porcine partial nephrectomies for the task of kidney edge detection. For the investigated method, we do not modify the original pipeline.

NYU Depth V2: We adapt this data set to an edge detection problem [24]. This is achieved by using preprocessing scripts from [29] that extract edges from ground truth segmentations. The data set contains 1449 images from a variety of indoor image scenes. For the two investigated methods, we only use the RGB image components. That is to say, we disregard the depth component.

BSDS 500: We use this data set as is. It contains natural images with human ground truth object boundary annotations. This data set contains 500 images [25]. For the three investigated methods, we do not modify the original pipeline.
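For reference, the segmentation-to-edge conversion used for NYU Depth V2 can be approximated with `skimage.segmentation.find_boundaries`; the actual preprocessing scripts of [29] may differ in detail:

```python
import numpy as np
from skimage.segmentation import find_boundaries

def edges_from_segmentation(labels: np.ndarray) -> np.ndarray:
    """Binary edge map: mark pixels lying just inside each labelled region
    where it touches a differently labelled region."""
    return find_boundaries(labels, mode="inner")

# Two adjoining regions; the shared vertical border becomes the edge.
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2]])
```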
Evaluation
To improve the resulting thin edges, both qualitatively and quantitatively, we focus on the metrics reported in the state of the art [14], specifically metrics that reflect the reliability of all evaluated pipelines: the SDE and the IoU-box. The former integrates both precision and recall, while the latter evaluates the thin edge shape and its absolute position relative to the ground truth.
To find the optimal parameters for function \({\mathcal{E}}\), we perform a combinatorial optimization iteratively for each data set, that is, on its validation set. The parameter combination yielding the best SDE result is then selected and used for the evaluation of the test set. The parameter set of function \({\mathcal{E}}\) is evaluated within the bounds defined by Eq. 5. The originally presented pixel accuracy based metrics (i.e., precision, recall and SDE) are modified to handle cases where no edges are predicted although some are present in the corresponding ground truth image, and vice versa. This modification is achieved by identifying all such cases during evaluation and then adding an edge pixel in the center of the empty binary image.
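The combinatorial optimization over \(\{\alpha , \beta , \gamma \}\) amounts to a grid search on the validation set. A sketch, where `sde` is a hypothetical scoring callable (lower is better) and `extract` plays the role of \({\mathcal{E}}\); the grids follow the granularity described in the discussion:

```python
import itertools
import numpy as np

def grid_search(val_pairs, skeletonizers, sde, extract):
    """Pick (alpha, beta, gamma) minimising the mean SDE on the validation set.

    val_pairs:     list of (boundary_map, ground_truth_edge) tuples
    skeletonizers: dict name -> skeletonize function (the beta candidates)
    sde:           callable(pred_edge, gt_edge) -> float, lower is better
    extract:       callable(g, alpha, beta, gamma) -> binary edge image
    """
    alphas = range(0, 256, 10)                            # every tenth value
    gammas = list(range(0, 11)) + list(range(20, 101, 10))
    best, best_score = None, np.inf
    for alpha, (name, beta), gamma in itertools.product(
            alphas, skeletonizers.items(), gammas):
        score = np.mean([sde(extract(g, alpha, beta, gamma), gt)
                         for g, gt in val_pairs])
        if score < best_score:
            best, best_score = (alpha, name, gamma), score
    return best, best_score
```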
Results
We present the results for the parameter search regarding the function \({\mathcal{E}}\), then compare our results on the test sets of each related work. Figure 3 shows the result of a sample image at different parts of our pipeline.
In Table 2, we found that in four of the six investigated related works (66.67%), the edge extraction yields the lowest SDE score when using the GWPS algorithm. Further, pruning was only necessary in one pipeline, as all other reported parameter combinations are optimal with \(\gamma = 0\); that is to say, pruning was not used.
We evaluated the best parameters on the test part of each data set. All results are reported in Table 3. The term original refers to running the methodology as is, meaning that we follow the original data set usage and post-processing algorithm. We report the standard deviation for each result in parentheses.
Using our previous work [14], we achieved an SDE score of 2515.0 (\(\sigma = 7053.22\)), which is 10.8\(\times\) worse than the original, and an IoU-box of 0.04 (\(\sigma = 0.07\)).
Using the NYU Depth Dataset V2, our Algorithm 1 achieved an SDE of 4.2 (\(\sigma = 1.7\)) and an IoU-box of 0.6 (\(\sigma = 0.2\)) for the algorithm of [30], and an SDE score of 7.4 (\(\sigma = 3.91\)) with an IoU-box of 1 (\(\sigma = 0.04\)) for the algorithm of [31]. For this data set, we improved the average SDE by 1.13\(\times\).
Using the BSDS 500 data set and the algorithm of [25], we report an SDE score of 21.8 (\(\sigma = 31.28\)); that is an improvement of 2.3\(\times\), while maintaining a similar IoU-box score of 0.6 (\(\sigma = 0.18\)). While we increased the SDE for the algorithm of [32] to 12.0 (\(\sigma = 13.08\)), which is 1.08\(\times\) worse, we increased the IoU-box to 0.8 (\(\sigma = 0.17\)), a 1.6\(\times\) increase. For the algorithm of [33], a 1.08\(\times\) increase in SDE to 10.3 (\(\sigma = 9.51\)) was observed, although the IoU-box increased to 0.8 (\(\sigma = 0.15\)), which corresponds to an improvement of 1.6\(\times\).
Discussion
Our evaluation of a new and refined post-processing algorithm has found improvements, with regard to the average SDE and the IoU-box metrics, across several data sets and the majority of the established works. Indeed, using an optimized parameter set for our proposed edge detection pipeline \({\mathcal{P}}\), specifically for the function \({\mathcal{E}}\), was found to be beneficial for improving the measured edge quality. Instead of relying solely on fixed classical post-processing algorithms, our work proposes adapting the parameter set to the task at hand. We discuss several pointers below.
First, to find the optimal parameter combination, we used a non-exhaustive discrete parameter search. Due to the computational expense of such a parameter search, we reduced the search space by evaluating every tenth value for \(\alpha\), and, for \(\gamma\), each integer from 0 to 10 plus every tenth value from 20 to 100. Although computationally intensive, the granularity of such a search may be increased to find better sets of suitable parameters for each tested work.
Second, the introduction of an error correction step, that is to say pruning, was determined to improve the evaluated metrics for only one of the examined works. Evaluating the pruning at a higher granularity, by allowing non-integer pruning values, may lead to a further improvement of the metrics.
Third, the modification of the SDE computation (c.f., evaluation section) led to an inconsistency in the evaluation of our previous work [14]. Due to predicted images containing no edge, the original evaluation algorithm miscalculates the SDE value. By implementing our adjusted SDE calculation algorithm, we obtained higher SDE scores for a few predicted images. This is visible in the discrepancy between the mean and median values as well as in the high standard deviation of 7053.22, and is mainly due to the post-processing algorithm failing on a small subset of the images.
Fourth, the usage of a boundary that represents a distance field, as used in [14], prompted us to create Algorithm 2. Using the certainty information available inside the boundary, our algorithm is designed to find the optimal edge along the ridge of the three-dimensional representation of the step edge or boundary. Although the algorithms from the investigated works were not designed with this idea in mind, we applied the GWPS algorithm to all boundary based predictions, as presented in Table 3. Additionally, we hypothesize that our results may improve if we incorporate the distance field based prediction into the respective algorithms. For example, this could be achieved by training a neural network to make the predicted boundaries more suitable for Algorithm 2.
Fifth, the choice of the \(\alpha\) parameter is critical to the quality of the resulting 1-pixel edges. The presence of noise in the input data to our pipeline is a major factor in poor edge predictions. Existing methods that adapt our pipeline as a post-processing step should take special care in setting the \(\alpha\) parameter, as it provides only a very rudimentary form of noise filtering; it may even be advantageous to apply additional noise reduction to achieve better results. The influence of the \(\beta\) parameter is smaller than that of \(\alpha\), as tests have shown similar but slightly worse results when the selected \(\beta\) in Table 3 is changed. \(\gamma\) influences the results the least, as seen in Table 3, and can often be neglected.
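To illustrate why \(\alpha\) alone is only rudimentary noise filtering, compare plain thresholding with an optional median filter applied first; the filter choice is our illustration, not part of the evaluated pipelines:

```python
import numpy as np
from scipy import ndimage

def threshold_alpha(g: np.ndarray, alpha: float) -> np.ndarray:
    """Plain alpha-thresholding: zero out every pixel below alpha."""
    return np.where(g < alpha, 0, g)

def denoise_then_threshold(g: np.ndarray, alpha: float) -> np.ndarray:
    """Extra noise reduction (3x3 median filter) before the alpha step."""
    smoothed = ndimage.median_filter(g, size=3, mode="constant", cval=0.0)
    return threshold_alpha(smoothed, alpha)

g = np.zeros((7, 7))
g[4:7, :] = 200   # a thick boundary region
g[0, 0] = 220     # an isolated noise spike brighter than alpha
plain = threshold_alpha(g, 100)          # keeps the spike
clean = denoise_then_threshold(g, 100)   # removes the spike, keeps the boundary
```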
Sixth, our evaluation pointed towards a complex interdependence between the employed boundary prediction algorithm (including edge detectors), the data set, and the resulting optimal parameter choice. To achieve the best SDE score, we conducted a sensitivity analysis, which allowed us to find the most suitable parameter set for each data set. In the worst case, the combination of other data sets, computational approaches (e.g., detectors, predictors, etc.), and the wrong parameters yields unsatisfactory results. Given such combinations, we recommend additional investigation of the failing pipeline step and the implementation of countermeasures, e.g., a noise reduction step (see the previous discussion point). Given an example data set and two algorithms, we found that the difference in value of each parameter pair influenced our decision for a good post-processing algorithm. Although this decision is non-trivial, formalizing an end-to-end evaluation pipeline helped us consider the solution space and find the optimal one. In sum, the evaluation results support our effort to improve the resulting edges.
Seventh, the choice of an appropriate post-processing algorithm is an important step in the definition of an edge detection method. Yet, related work tackling the task of edge detection does not attribute the necessary care to parameter tuning and algorithm selection regarding the processing of the predicted boundaries. We believe that an integration of our parameter optimization approach will help achieve better edge quality.
Eighth, the evaluation of additional data sets, such as the PASCAL VOC 2012 [34] or Multicue [35] data sets, could provide insight into the robustness of our proposed method across a greater variety of domains. Unfortunately, due to the unavailability of ground truth data for the PASCAL VOC 2012 data set and computational limitations, this remains future work.
Ninth and last, regardless of the application area, the task of edge detection can be supplemented by the methodology presented herein to optimize the post-processing step. This work can be generalized using the following two steps: (a) evaluate the SDE metric on the validation set using an appropriate parameter search technique and find the optimal values for \(\{\alpha , \beta , \gamma \}\), and (b) use this parameter set and the selected algorithms for post-processing the test set data. We examined different approaches and their corresponding parameters for steps (a) and (b) and can therefore provide a reference to adapt our post-processing algorithm. We believe that our work is a stepping stone for future endeavors that involve edge detection tasks using predicted signed distance fields or boundaries in images.
Conclusion
In this work, we have introduced a post-processing algorithm that can be employed in an edge detection context. Despite its high impact on edge quality, edge detection methods primarily focus on one specific post-processing approach. We investigated how edge quality can be improved by introducing a post-processing algorithm with multiple parameters. By incorporating the threshold, the type of skeletonize algorithm, and the pruning as parameters of an edge extraction function, we were able to improve state-of-the-art metrics in multiple pipelines, across different data sets and from multiple domains. Dividing the evaluation process into multiple parameters and making our work open source renders our approach readily available, adaptable, and extendable. Additionally, we introduced a novel skeletonize algorithm that works in analogy to extracting the ridge of a 3D surface and projecting it onto a 2D plane. Apart from the challenge of improving kidney boundary edge detection, our pipeline successfully improved the SDE or IoU-box metrics for all other tested works on the two different data sets. Based on our evaluation pipeline, we believe that future edge detection methods can achieve better edge quality.
Availability of data and materials
The datasets analysed during the current study are available at https://endovissub2017kidneyboundarydetection.grandchallenge.org/ (kidney boundary, on request), https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html (NYU-Depth V2, public) and https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html (BSDS 500, public).
All evaluated results and source code are publicly available in the edges repository at https://github.com/fuxxel/edges.
References
Salman N. Image segmentation based on watershed and edge detection techniques. Int Arab J Inf Technol. 2006;3(2):104–10.
Zhan C, Duan X, Xu S, Song Z, Luo M. An improved moving object detection algorithm based on frame difference and edge detection. In: Fourth international conference on image and graphics (ICIG 2007). 2007. p. 519–523.
Zitnick CL, Dollár P. Edge boxes: locating object proposals from edges. In: European conference on computer vision. Springer; 2014. p. 391–405.
Hattab G, Meyer F, Albrecht R, Speidel S. Modelar: a modular and evaluative framework to improve surgical augmented reality visualization. In: Kerren A, Garth C, Marai GE, editors. EuroVis 2020: short papers. 2020. https://doi.org/10.2312/evs.20201066.
Roberts LG. Machine perception of three-dimensional solids. Ph.D. Thesis, Massachusetts Institute of Technology; 1963.
Prewitt JM. Object enhancement and extraction. Pict Process Psychopict. 1970;10(1):15–9.
Sobel I. Camera models and machine perception. Technical report, Computer Science Department, Technion; 1972.
Marr D, Hildreth E. Theory of edge detection. Proc R Soc Lond Ser B Biol Sci. 1980;207(1167):187–217.
Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell. 1986;6:679–98.
Acuna D, Kar A, Fidler S. Devil is in the edges: learning semantic boundaries from noisy annotations. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2019. p. 11075–11083.
Marmanis D, Schindler K, Wegner JD, Galliani S, Datcu M, Stilla U. Classification with an edge: improving semantic image segmentation with boundary detection. ISPRS J Photogramm Remote Sens. 2018;135:158–72.
Wu Z, Li S, Chen C, Hao A, Qin H. A deeper look at image salient object detection: bi-stream network with a small training dataset. IEEE Trans Multimed. 2020.
Wang X, Li S, Chen C, Fang Y, Hao A, Qin H. Data-level recombination and lightweight fusion scheme for RGB-D salient object detection. IEEE Trans Image Process. 2020;30:458–71.
Hattab G, Arnold M, Strenger L, Allan M, Arsentjeva D, Gold O, Simpfendörfer T, Maier-Hein L, Speidel S. Kidney edge detection in laparoscopic image data for computer-assisted surgery. Int J Comput Assist Radiol Surg. 2020;15(3):379–87.
Wang S, Kubota T, Siskind JM, Wang J. Salient closed boundary extraction with ratio contour. IEEE Trans Pattern Anal Mach Intell. 2005;27(4):546.
Rulic P, Kramberger I, Kacic Z. Progressive method for color selective edge detection. Opt Eng. 2007;46(3):037004.
Borga M, Malmgren H, Knutsson H. FSED: feature selective edge detection. In: Proceedings 15th international conference on pattern recognition, ICPR-2000, vol. 1. IEEE; 2000. p. 229–232.
Somandepalli K, Toutios A, Narayanan SS. Semantic edge detection for tracking vocal tract air-tissue boundaries in real-time magnetic resonance images. In: Interspeech. 2017. p. 631–635.
Qi H, Collins S, Alison Noble J. UPI-Net: semantic contour detection in placental ultrasound. In: Proceedings of the IEEE/CVF international conference on computer vision workshops. 2019.
Stoyanov D. Surgical vision. Ann Biomed Eng. 2012;40(2):332–45.
Fiorentino MC, Moccia S, Capparuccini M, Giamberini S, Frontoni E. A regression framework to head-circumference delineation from US fetal images. Comput Methods Programs Biomed. 2021;198:105771.
Zhang T, Suen CY. A fast parallel algorithm for thinning digital patterns. Commun ACM. 1984;27(3):236–9.
Lee TC, Kashyap RL, Chu CN. Building skeleton models via 3D medial surface axis thinning algorithms. CVGIP Graph Models Image Process. 1994;56(6):462–78.
Silberman N, Hoiem D, Kohli P, Fergus R. Indoor segmentation and support inference from RGB-D images. In: ECCV 2012.
Arbelaez P, Maire M, Fowlkes C, Malik J. Contour detection and hierarchical image segmentation. IEEE Trans Pattern Anal Mach Intell. 2011;33(5):898–916. https://doi.org/10.1109/TPAMI.2010.161.
Gonzalez RC, Woods RE. Digital image processing. 3rd ed. Upper Saddle River: Prentice-Hall Inc; 2006. p. 676–8.
Qian K, Cao S, Bhattacharya P. Skeletonization of gray-scale images by gray weighted distance transform. In: Visual information processing VI, vol. 3074. International Society for Optics and Photonics; 1997. p. 224–228.
Yokoi S, Toriwaki JI, Fukumura T. An analysis of topological properties of digitized binary pictures using local features. Comput Graph Image Process. 1975;4(1):63–73.
Xiaofeng R, Bo L. Discriminatively trained sparse code gradients for contour detection. In: Advances in neural information processing systems. 2012. p. 584–592.
Shen W, Wang X, Wang Y, Bai X, Zhang Z. DeepContour: a deep convolutional feature learned by positive-sharing loss for contour detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. p. 3982–3991.
Dollár P, Zitnick CL. Structured forests for fast edge detection. In: Proceedings of the IEEE international conference on computer vision. 2013. p. 1841–1848.
Liu Y, Cheng MM, Hu X, Wang K, Bai X. Richer convolutional features for edge detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. p. 3000–3009.
He J, Zhang S, Yang M, Shan Y, Huang T. Bi-directional cascade network for perceptual edge detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2019. p. 3828–3837.
Everingham M, Van Gool L, Williams CKI, Winn J, Zisserman A. The PASCAL visual object classes challenge 2012 (VOC2012) results. http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html.
Mély DA, Kim J, McGill M, Guo Y, Serre T. A systematic comparison between visual cues for boundary detection. Vis Res. 2016;120:93–107.
Acknowledgements
None.
Funding
Open Access funding enabled and organized by Projekt DEAL. The authors gratefully acknowledge partial funding by a grant from the OP4.1 initiative, which was funded by the Federal Ministry of Economics and Technology, Germany (BMWi).
Author information
Authors and Affiliations
Contributions
MA implemented and evaluated the methods. GH supervised the research and its evaluation. The initial manuscript was drafted by MA. GH and SS revised the manuscript. All authors read and approved the manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Arnold, M., Speidel, S. & Hattab, G. Towards improving edge quality using combinatorial optimization and a novel skeletonize algorithm. BMC Med Imaging 21, 119 (2021). https://doi.org/10.1186/s12880-021-00650-z