
Compensation of small data with large filters for accurate liver vessel segmentation from contrast-enhanced CT images

Abstract

Background

Segmenting liver vessels from contrast-enhanced computed tomography images is essential for diagnosing liver diseases, planning surgeries and delivering radiotherapy. Nevertheless, identifying vessels is challenging because of the tiny cross-sectional areas they occupy, which leaves few features to learn and makes it difficult to construct high-quality, large-volume labeled data.

Methods

We present an approach that requires only a few labeled vessels yet delivers significantly improved results. Our model starts with vessel enhancement by fading out liver intensity and generates candidate vessels by a classifier fed with features from a large number of image filters. Afterwards, the initial segmentation is refined using a Markov random field.

Results

In experiments on the well-known dataset 3D-IRCADb, the averaged Dice coefficient is lifted to 0.63, and the mean sensitivity is increased to 0.71. These results are significantly better than those obtained from existing machine-learning approaches and comparable to those generated from deep-learning models.

Conclusion

Sophisticated integration of a large number of filters can pinpoint effective features from liver images that suffice to distinguish vessels from other liver tissues despite the scarcity of large-volume labeled data. The study can shed light on medical image segmentation, especially for tasks without sufficient data.


Background

Liver vessel segmentation from computed tomography (CT) images aims to pinpoint the pixels that comprise the vessels; see Fig. 1. Vessel segmentation is helpful in many clinical applications [1, 2], e.g., disease diagnosis, surgical planning and thermal ablation. Hence, many computational approaches have been developed to solve this problem, both from the traditional machine learning perspective and the deep learning perspective, particularly the latter.

Fig. 1

Vessel segmentation. The first row shows the original images; the second shows the vessel masks obtained from 3D-IRCADb [22]

The traditional machine-learning techniques borrowed for vessel segmentation include active contour or level set [3], graph cut [4, 5], extreme learning machine [6], vascular filters [7,8,9], and many others [10,11,12,13]. These approaches can extract vessels from CT images with moderate accuracy and low computational cost. However, the segmentation easily leaks into adjacent tissues. Besides, some of these approaches require careful initialization, parameter settings, or feature engineering. These limitations severely restrict the applicability of the aforementioned models.

Hence, deep learning-based approaches have been intensively explored to overcome these constraints, thanks to their automatic feature learning. These approaches include convolutional neural networks [14,15,16], recurrent neural networks [17], mixtures of convolutional and recurrent neural networks [18], and integrations of deep neural networks with conventional machine learning techniques [19, 20]. These deep learning-based models achieve remarkable improvement over the traditional approaches. However, they require large volumes of manually delineated images containing vessels. Unfortunately, delineating vessel masks with high fidelity is prohibitively difficult and time-consuming. The main obstacles are the vessels' small size, irregular shape, low contrast and heavy noise; cf. Fig. 1. Hence, developing a model-driven rather than data-starved approach is still very promising.

To this end, we develop a new computational model that borrows a large number of renowned image filters to distinguish vessels from other tissues and then uses XGBoost [21] to classify each pixel as vessel or non-vessel. Finally, a revised Markov random field integrates neighborhood information to polish the results. Experiments on the widely used 3D-IRCADb dataset [22] show that our newly proposed model outperforms all existing traditional machine learning models and even beats deep learning-based models in most cases. Our model requires only a small number of labeled images for training yet yields competitive or better results. This success reveals that many filters can compensate for the shortage of labeled data, which is inspiring and promising for tasks where high-quality data is difficult to obtain.

Methods

The proposed liver vessel segmentation model consists of three modules: vessel enhancement, candidate generation, and segmentation refinement; see Fig. 2. The details are as follows.

Fig. 2

Diagram of the proposed liver vessel segmentation model. It consists of vessel enhancement, candidate generation, and segmentation refinement. Vessel enhancement is achieved by fading out the background while strengthening boundary regions; candidate vessels are obtained by feeding XGBoost with features generated from an extensive set of image filters; and refinement is fulfilled by a revised Markov random field

Vessel enhancement

Two procedures, calibration and contrast enhancement, are applied to the raw images to enhance the edges between vessel areas and other liver tissues.

Calibration is necessary because the raw image may need to be clipped to an appropriate window for vessel analysis. To this end, we automatically determine the window center and width by a statistical approach. Specifically, the mean \(\mu\) and standard deviation \(\sigma\) of vessel intensities are determined. Then, the intensities of all images are clipped into the interval \([{\mu } - 3{\sigma }, {\mu } + 3 {\sigma }]\). These clipped intensities are further normalized to alleviate the systematic bias between different imaging devices by

$$\begin{aligned} f^{\prime }(x,y)=\alpha (f(x,y)-\mu )/\sigma +c, \end{aligned}$$

where f(x, y) is the initial intensity of an image at position (x, y), and \(\alpha\) and c transform the normalized values into gray scales from 0 to 255.
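As a rough illustration, the calibration step can be sketched as follows; the vessel statistics \(\mu\) and \(\sigma\) come from labeled vessel intensities, while the scaling constants `alpha` and `c` shown here are illustrative placeholders, not values reported in this paper.

```python
import numpy as np

def calibrate(image, vessel_mu, vessel_sigma, alpha=40.0, c=128.0):
    """Clip intensities to [mu - 3*sigma, mu + 3*sigma] using the vessel
    statistics, then map them onto the 0-255 gray scale.
    alpha and c are illustrative, not values reported in the paper."""
    lo = vessel_mu - 3.0 * vessel_sigma
    hi = vessel_mu + 3.0 * vessel_sigma
    clipped = np.clip(image.astype(np.float64), lo, hi)
    out = alpha * (clipped - vessel_mu) / vessel_sigma + c
    return np.clip(out, 0.0, 255.0)
```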

After calibration, the vessels are enhanced by

$$\begin{aligned} f^{\prime }(x,y)=f(x,y)-\lambda f(x,y)\circledast k(x,y), \end{aligned}$$

where k(x, y) is the kernel of a low-pass filter, \(\lambda\) controls its magnitude, and \(\circledast\) denotes convolution. This operation washes out much of the liver tissue and makes vessels stand out.
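A minimal sketch of this enhancement step follows; the choice of a Gaussian kernel for k(x, y) and the value of \(\lambda\) are assumptions for illustration, as the paper does not fix them.

```python
from scipy.ndimage import gaussian_filter

def enhance_vessels(image, lam=0.8, sigma_lp=3.0):
    """f'(x, y) = f(x, y) - lambda * (f convolved with k): subtracting a
    low-pass copy suppresses smooth liver parenchyma so that vessel
    boundaries stand out. A Gaussian kernel stands in for k(x, y) and
    lam is illustrative; the paper fixes neither."""
    low_pass = gaussian_filter(image, sigma=sigma_lp)
    return image - lam * low_pass
```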

Candidate generation

Feature transformation

The filters used to retrieve features from images include CLAHE (contrast limited adaptive histogram equalization) [23], Gabor filter [24], Gamma correction [25], Gaussian filter [26], Hessian [7], Laplacian operator [27], Median filter [28], Mean filter [29], Minimum filter [30], Bilateral filter [31], Sobel operator [32], Canny edge detector [33], as well as the ten filters predefined in the ImageFilter module of Pillow [34]: BLUR, CONTOUR, DETAIL, EDGE_ENHANCE, EDGE_ENHANCE_MORE, EMBOSS, FIND_EDGES, SMOOTH, SMOOTH_MORE and SHARPEN. The mathematical definitions of these filters/operators are shown in Table 1.

Table 1 The filters and operators that are used to transform CT images

Each of these filters has its own merit in capturing image features; together, the information they provide is adequate to characterize vessels.
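To illustrate, the following sketch stacks the per-pixel responses of a representative subset of the filters in Table 1 into a feature tensor; the function name `filter_bank` and the specific kernel sizes are illustrative choices, not prescriptions from the paper.

```python
import numpy as np
from PIL import Image, ImageFilter
from scipy import ndimage

def filter_bank(slice_2d):
    """Stack per-pixel responses of a subset of the filters in Table 1
    into an (H, W, d) feature tensor. Only a few representative filters
    are shown here; the full model uses all 22 listed in Table 1."""
    feats = [slice_2d.astype(np.float64)]                 # original intensity
    feats.append(ndimage.gaussian_filter(slice_2d, 1.0))  # Gaussian filter
    feats.append(ndimage.median_filter(slice_2d, 3))      # Median filter
    feats.append(ndimage.minimum_filter(slice_2d, 3))     # Minimum filter
    feats.append(ndimage.uniform_filter(slice_2d, 3))     # Mean filter
    feats.append(ndimage.sobel(slice_2d))                 # Sobel operator
    feats.append(ndimage.laplace(slice_2d))               # Laplacian operator
    pil = Image.fromarray(slice_2d.astype(np.uint8))
    for f in (ImageFilter.CONTOUR, ImageFilter.EMBOSS, ImageFilter.FIND_EDGES):
        feats.append(np.asarray(pil.filter(f), dtype=np.float64))  # Pillow filters
    return np.stack(feats, axis=-1)  # shape (H, W, d)
```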

Context-aware vessel identification

Based on the filters, each pixel is represented by a d-dimensional vector containing its original intensity as well as all the values generated by the filters. Hence, the pixel of interest together with its context can be represented by an \(n\times d\) matrix, with n the number of neighbors surrounding the pixel to be classified.

A pixel \(F(i^{\prime },j^{\prime },k^{\prime })\) is deemed an h-hop neighbor of the pixel of interest F(i, j, k) if \(\text {max}(|i-i^{\prime }|, |j-j^{\prime }|, |k-k^{\prime }|) \le h\), where i, j and k are the indices of a pixel: i and j locate the pixel in a slice, and k locates the slice in a volume. Setting h to 1, 2 and 3 results in voxel sizes of \(3\times 3\times 3\), \(5\times 5\times 5\) and \(7\times 7\times 7\), respectively. For the 2D situation, only i and j are considered.

The pixel of interest and its neighbors form a voxel whose features are gathered from its constituent pixels using the above filters; its label is the mask value of the central pixel. The final feature vector of the voxel is input into XGBoost [21] for feature selection and pixel classification.
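A minimal sketch of the voxel construction and classification, assuming a per-pixel feature volume has already been computed; the hyper-parameters passed to `XGBClassifier` are illustrative, not tuned values from this study.

```python
import numpy as np
from xgboost import XGBClassifier

def voxel_features(feat_volume, i, j, k, h=1):
    """Gather the (2h+1)^3 neighborhood of pixel (i, j, k) from a
    per-pixel feature volume of shape (Z, H, W, d) and flatten it into
    a single n*d vector; the voxel's label is the mask value of the
    central pixel."""
    patch = feat_volume[k - h:k + h + 1, i - h:i + h + 1, j - h:j + h + 1, :]
    return patch.reshape(-1)

# Illustrative training call; the hyper-parameters are not from the paper.
clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
# X: one flattened voxel per row; y: vessel mask of each central pixel.
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```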

Segmentation refinement

The vessel segmentation is further refined by a Markov random field (MRF) [35], as the classification is conducted only at the pixel level and ignores correlations between pixels.

An MRF is defined over a graph \({G}=({V},{E})\), where V is the set of nodes (e.g., the pixels of an image) and E is the set of edges connecting the nodes in V (e.g., between adjacent pixels). For a random variable \(v_{i}\) in G, the probability \(P(V=v_{i})\) is independent of the other variables given its neighbors \(N(v_{i})\), known as the Markov blanket. That is,

$$\begin{aligned} P(V=v_{i}|V- v_{i})=P(V=v_{i}|N(v_{i})). \end{aligned}$$

Based on the Hammersley-Clifford theorem [36], it can be expressed as

$$\begin{aligned} P(V=v_{i}|N(v_{i}))=\frac{1}{Z}\text {exp}(-E(V=v_{i}|N(v_{i}))), \end{aligned}$$

where \(E(\cdot )\) is an energy function and Z is the partition function computed by \(Z=\sum _{v_{i}}\text {exp}(-E(v_{i}))\). In this study, \(E(v_{i})\) is calculated by

$$\begin{aligned} E(v_{i})&=E_{\text {intensity}}(v_{i})+\lambda E_{\text {gradient}}(v_{i}) \\ &=\sum \rho (u_{i}-v_{i},\sigma _{i}) + \lambda \sum \limits _{v_{j}\in N(v_{i})}\rho (v_{i}-v_{j},\sigma _{g}), \end{aligned}$$

where \(u_{i}\) is the refined value of the variable \(v_{i}\) and \(\rho (x,\sigma )\) is the Lorentzian function [37] defined by

$$\begin{aligned} \rho (x,\sigma )=\log \left( 1+\left( \frac{x}{\sigma }\right) ^{2}/2 \right) . \end{aligned}$$

By minimizing the energy function E, we obtain the refined vessel segmentation from the pixel-wise classification results.
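The paper does not name the optimizer used to minimize E; the sketch below uses iterated conditional modes (ICM), one common choice, with the Lorentzian penalty defined above. All parameter values are illustrative.

```python
import numpy as np

def lorentzian(x, sigma):
    """rho(x, sigma) = log(1 + (x / sigma)^2 / 2)."""
    return np.log1p(0.5 * (x / sigma) ** 2)

def icm_refine(prob, lam=1.0, sigma_i=0.5, sigma_g=0.5, n_iter=5):
    """Refine per-pixel vessel probabilities by minimizing the intensity +
    gradient energy with iterated conditional modes (a simple, greedy
    minimizer; the paper does not name its optimizer)."""
    labels = (prob > 0.5).astype(np.float64)      # initial pixel-wise decision
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-connected Markov blanket
    for _ in range(n_iter):
        energies = []
        for cand in (0.0, 1.0):  # evaluate both candidate labels everywhere
            e = lorentzian(prob - cand, sigma_i)  # data (intensity) term
            for di, dj in offsets:                # smoothness (gradient) term
                nbr = np.roll(labels, (di, dj), axis=(0, 1))  # wrap-around ignored for brevity
                e = e + lam * lorentzian(cand - nbr, sigma_g)
            energies.append(e)
        labels = np.where(energies[1] < energies[0], 1.0, 0.0)
    return labels.astype(np.uint8)
```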

Experiments

Datasets

The well-known 3D-IRCADb dataset [22], acquired with contrast-enhanced computed tomography, is adopted for model training and validation. In this dataset, all the masks of the liver, hepatic veins, portal veins, and arteries are available. Since 3D-IRCADb contains only 20 volumes (2,823 slices), it suits traditional machine learning approaches better than deep learning-based models: to limit training bias, models should be trained on whole cases rather than individual slices, and with so few cases deep models are prone to overfitting. Head-to-head comparisons with deep learning models should therefore be interpreted with this caveat in mind.

Evaluation metrics

Four metrics are used to evaluate the performance: accuracy (Acc), sensitivity (Sen), specificity (Spe), and Dice similarity coefficient (DSC). They are defined as

$$\begin{aligned} Sen&=\frac{TP}{TP+FN},\\ Spe&=\frac{TN}{TN+FP},\\ Acc&=\frac{TP+TN}{TP+TN+FP+FN},\\ DSC&=\frac{2 \cdot TP}{FP+FN+2 \cdot TP}, \end{aligned}$$

where true positives (TP) are vessel pixels classified correctly, false positives (FP) are pixels classified as vessels incorrectly, true negatives (TN) are pixels classified as non-vessels correctly, and false negatives (FN) are vessel pixels classified incorrectly. Among them, DSC is the most meaningful, as it is robust to the imbalanced labels that are very common in vessel data.
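For concreteness, these four metrics can be computed from a binary prediction and reference mask as follows (a straightforward sketch, not code from the study):

```python
import numpy as np

def segmentation_metrics(pred, mask):
    """Sen, Spe, Acc and DSC from a binary prediction and reference mask."""
    pred, mask = pred.astype(bool), mask.astype(bool)
    tp = np.sum(pred & mask)    # vessel pixels classified correctly
    tn = np.sum(~pred & ~mask)  # non-vessel pixels classified correctly
    fp = np.sum(pred & ~mask)   # pixels classified as vessels incorrectly
    fn = np.sum(~pred & mask)   # vessel pixels missed
    return {
        "Sen": tp / (tp + fn),
        "Spe": tn / (tn + fp),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "DSC": 2 * tp / (fp + fn + 2 * tp),
    }
```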

Performance qualification

Performance on 3D-IRCADb

The performance of our model is evaluated through rigorous five-fold cross-validation. The dataset is partitioned into five folds at the scan level, with four folds (16 scans) designated for training and the remaining fold (4 scans) for testing. Training and testing are iterated across all folds so that every scan is evaluated independently. On average, the DSC is 0.63 over all the volumes in 3D-IRCADb. However, this score is rarely reported by others; in addition, others often report only a subset of volumes with top-performing results. Therefore, we present the results obtained from 3D-IRCADb with the same number of volumes as others; cf. Table 2. The results show that our model significantly outperforms existing approaches in terms of accuracy and specificity. Regarding sensitivity, our model is superior to others across all the cases, exhibiting an average lift of 2% compared to the existing leading model. Notably, both sensitivity and DSC can be substantially influenced by the quality of the reference masks as well as by predictive accuracy. After carefully checking the labels of 3D-IRCADb, we found a considerable portion of incorrect masks. As Fig. 3 illustrates, there are many over-labeled, under-labeled, and even wrongly-labeled masks. Since the number of vessel pixels is much smaller than that of non-vessel pixels, sensitivity is more sensitive to imperfect labels, which explains its marked fluctuation.

Table 2 Performance comparison on 3D-IRCADb
Fig. 3

Examples of imperfect vessel labels. The red boxes highlight over-labeled, under-labeled, and wrongly-labeled masks

Note that, to ensure a fair comparison, we adhere to the standard settings for the number of testing volumes used in existing approaches: 1, 8, 14, and 20 volumes, respectively. The performance of each volume is evaluated using the five-fold cross-validation, and the results for k collective volumes are averaged over the k top-performing volumes.
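This scan-level protocol can be reproduced with, e.g., scikit-learn's GroupKFold, grouping samples by their source volume; `X`, `y`, `scan_ids`, `clf`, and `segmentation_metrics` are assumed from the earlier sketches.

```python
from sklearn.model_selection import GroupKFold

# Scan-level five-fold split: samples from the same volume never appear in
# both training and testing. scan_ids labels each sample with its volume.
gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=scan_ids):
    clf.fit(X[train_idx], y[train_idx])
    fold_dsc = segmentation_metrics(clf.predict(X[test_idx]), y[test_idx])["DSC"]
```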

Performance comparison with deep learning models

The proposed model is trained using multiple filters and thus intrinsically needs only a small amount of labeled data. Nonetheless, it may be less effective than deep learning-based models, which are capable of automatic feature learning. To assess its efficacy, we evaluate our model against state-of-the-art deep learning models, including U-Net [38], TransUNet [39], and 3D U-Net [40]. The detailed results in Table 2 reveal that our model is slightly inferior to U-Net but notably superior to TransUNet and 3D U-Net. We speculate that this discrepancy is primarily due to the larger parameter counts of the latter two models, particularly 3D U-Net.

Larger context improves segmentation

Different window sizes, i.e., 1, 3, 5 and 7, are used to capture the context information for vessel segmentation. To explore the impact of context within a slice and between slices, we consider both the 2D and 3D scenarios. The performance of our model on 3D-IRCADb with various context window sizes is shown in Table 3. Clearly, a larger context window consistently generates better segmentation results.

Table 3 Segmentation performance of our proposed model on 3D-IRCADb under various voxel sizes

Figure 4 shows two examples of vessel segmentation with various window sizes. A larger window size generates complete internal regions and smoother vessel edges, whereas a small window size tends to yield more isolated pixels or regions. Besides, the results obtained from 3D voxels are more tolerant of weakly connected regions between vessels than those generated from 2D pixels.

Fig. 4

Vessel segmentation results obtained from various context ranges. The pixels in white are correctly predicted, the red are over-predicted (i.e., false positives) and the green are under-predicted (i.e., false negatives). The mark “\(i\times j\times k\)” on a slice indicates the voxel size, where \(i=1\) means the context is 2D, otherwise 3D

It is essential to note that a larger voxel size does not always translate into better performance; see Table 3. This is due to the reduced influence of distant pixels on the central pixel of interest. Additionally, increasing the voxel size substantially enlarges the feature dimension, potentially leading to issues such as the curse of dimensionality.

Markov random field refines segmentation

Although context information has been incorporated into the vessel segmentation model, each pixel is still predicted separately; thus, vessel connectivity over larger ranges is not captured. To this end, we adopt the MRF model [35] with a revised energy function to sharpen the distinction between vessels and non-vessels. The MRF-aware results improve the Dice coefficient by 3.1% on average on the 3D-IRCADb dataset (p-value \(<2.2e-16\)); see Fig. 5.

Fig. 5

Performance comparison between MRF-aware and MRF-agnostic results. Note that only the distributions of the Dice coefficient and sensitivity are shown here, as the other metrics are very close to 1 and lose distinguishability

To demonstrate the improvements brought by the MRF model, we present six representative examples in Fig. 6. The revised MRF model is clearly able to remove isolated pixels or small regions, fill holes in vessel regions, and bridge gaps between separated vessel segments.

Fig. 6

Examples of vessel segmentation improvements achieved by the MRF model. The first row contains the original images, the second row shows the results obtained without MRF refinement, and the third shows the refined results. MRF is able to remove isolates, fill holes, and bridge gaps

Association between critical filters and context

In this study, 22 filters are used to capture vessel information from various perspectives to compensate for the lack of data. However, not all filters are equally important to the model. To examine the association between the filters and the context size, we retrieved the filters selected by XGBoost; see Table 4. Interestingly, only CLAHE, Gabor and Hessian are persistently important for 2D vessel segmentation, whereas most filters are kept in the 3D situation except a few from the Pillow package (details in Table 4). In addition, more filters are selected as the context range grows. These observations consolidate our proposal of using multiple filters with a broad context to segment vessels.

Table 4 Important filters to vessel segmentation with various context ranges
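One way to retrieve such filter importances from a trained XGBoost model is sketched below; the mapping of feature columns back to filter names (the `filter_names` list and the assumption that filter channels repeat in a fixed order per pixel) is hypothetical and depends on how the feature matrix was assembled.

```python
import collections

# Map gain-based importances of individual feature columns back to the
# filters that produced them. filter_names and the per-pixel channel
# ordering are hypothetical, not specified by the paper.
scores = clf.get_booster().get_score(importance_type="gain")  # {"f0": gain, ...}
per_filter = collections.defaultdict(float)
for feat, gain in scores.items():
    col = int(feat[1:])  # default XGBoost feature names are f0, f1, ...
    per_filter[filter_names[col % len(filter_names)]] += gain
print(sorted(per_filter.items(), key=lambda kv: -kv[1]))
```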

Conclusion

Liver vessel segmentation is essential for clinical liver disease diagnosis and treatment, and great efforts have been made to solve this problem computationally. However, the performance of existing models is still far from satisfactory. The main obstacles to progress are the vessels' small size, heavy noise, low contrast, and irregular shape. These difficulties also hinder the construction of large-volume, high-quality vessel segmentation data, leaving computational models, deep learning models in particular, significantly under-fitted. To overcome these limitations, we propose a rich filter-based model to compensate for the scarcity of labeled data, whose results are further refined by a Markov random field model. Experiments show that the proposed model significantly improves vessel segmentation without complicated architectures or extensive data. This study reveals that a rich set of filters is helpful for tasks with limited data, like vessel segmentation.

Availability of data and materials

The dataset supporting the conclusions of this article is available at https://www.ircad.fr/research/data-sets/liver-segmentation-3d-ircadb-01/, and the source codes can be found at https://github.com/lzhLab/veSeg/.

References

  1. Xiaopeng Y, Yang JD, Hwang HP, Yu HC, Ahn S, Kim BW, et al. Segmentation of liver and vessels from CT images and classification of liver segments for preoperative liver surgical planning in living donor liver transplantation. Comput Methods Prog Biomed. 2018;158:41–52.


  2. Lu P, Xia J, Li Z, Xiong J, Yang J, Zhou S, et al. A vessel segmentation method for multi-modality angiographic images based on multi-scale filtering and statistical models. Biomed Eng Online. 2016;15(120). https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/s12938-016-0241-7#citeas.

  3. Cheng Y, Hu X, Wang J, Wang Y, Tamura S. Accurate vessel segmentation with constrained B-snake. IEEE Trans Image Process. 2015;24(8):2440–55.


  4. Sangsefidi N, Foruzan AH, Dolati A. Balancing the data term of graph-cuts algorithm to improve segmentation of hepatic vascular structures. Comput Biol Med. 2018;93:117–26.


  5. Guo XY, Xiao RX, Zhang T, Chen C, Wang J, Wang Z. A novel method to model hepatic vascular network using vessel segmentation, thinning, and completion. Med Biol Eng Comput. 2020;58(4):709–24.

  6. Zeng YZ, Zhao YQ, Liao M, Zou BJ, Wang XF, Wang W. Liver vessel segmentation based on extreme learning machine. Phys Med. 2016;32(5):709–16.


  7. Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale Vessel Enhancement Filtering. In: International conference on medical image computing and computer-assisted intervention. Cambridge: Springer; 1998. p. 130–7.

  8. Sato Y, Nakajima S, Shiraga N, Atsumi H, Yoshida S, Koller T, et al. Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Med Image Anal. 1998;2(2):143–68.


  9. Lebre MA, Vacavant A, Grand-Brochier M, Rositi H, Abergel A, Chabrot P, et al. Automatic segmentation methods for liver and hepatic vessels from CT and MRI volumes, applied to the Couinaud scheme. Comput Biol Med. 2019;110:42–51.


  10. Luu HM, Klink C, Moelker A, Niessen W, Van Walsum T. Quantitative evaluation of noise reduction and vesselness filters for liver vessel segmentation on abdominal CTA images. Phys Med Biol. 2015;60(10):3905.


  11. Goceri E, Shah ZK, Gurcan MN. Vessel segmentation from abdominal magnetic resonance images: adaptive and reconstructive approach. Int J Numer Methods Biomed Eng. 2017;33(4):e2811.


  12. Zeng YZ, Liao SH, Tang P, Zhao YQ, Liao M, Chen Y, et al. Automatic liver vessel segmentation using 3D region growing and hybrid active contour model. Comput Biol Med. 2018;97:63–73.


  13. Zhang HH, Bai P, Min XL, Liu Q, Ren Y, Li H, et al. Hepatic vessel segmentation based on an improved 3D region growing algorithm. In: Journal of Physics: Conference Series. vol. 1486. Chengdu: IOP Publishing; 2020. p. 032038.

  14. Ibragimov B, Toesca D, Chang D, Koong A, Xing L. Combining deep learning with anatomical analysis for segmentation of the portal vein for liver SBRT planning. Phys Med Biol. 2017;62(23):8943.


  15. Kitrungrotsakul T, Han XH, Iwamoto Y, Foruzan AH, Lin L, Chen YW. Robust hepatic vessel segmentation using multi deep convolution network. In: Medical Imaging 2017: Biomedical Applications in Molecular, Structural, and Functional Imaging. vol. 10137. International Society for Optics and Photonics; 2017. p. 1013711.

  16. Kitrungrotsakul T, Han XH, Iwamoto Y, Lin L, Foruzan AH, Xiong W, et al. VesselNet: A deep convolutional neural network with multi pathways for robust hepatic vessel segmentation. Comput Med Imaging Graph. 2019;75:74–83.


  17. Chakravarty A, Sivaswamy J. RACE-net: a recurrent neural network for biomedical image segmentation. IEEE J Biomed Health Inform. 2018;23(3):1151–62.


  18. Jiang Y, Wang F, Gao J, Cao S. Multi-Path Recurrent U-Net Segmentation of Retinal Fundus Image. Appl Sci. 2020;10(11):3777.


  19. Luan S, Chen C, Zhang B, Han J, Liu J. Gabor convolutional networks. IEEE Trans Image Process. 2018;27(9):4357–66.


  20. Yan Q, Wang B, Zhang W, Luo C, Xu W, Xu Z, et al. An attention-guided deep neural network with multi-scale feature fusion for liver vessel segmentation. IEEE J Biomed Health Inform. 2020;25(7):2629–42.

  21. Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;29(5):1189–232.

  22. Soler L, Hostettler A, Agnus V, Charnoz A, Fasquel J, Moreau J, et al. 3D image reconstruction for comparison of algorithm database: A patient specific anatomical and medical image database. IRCAD, Strasbourg, France, Tech Rep. 2010.

  23. Kuran U, Kuran EC. Parameter selection for CLAHE using multi-objective cuckoo search algorithm for image contrast enhancement. Intell Syst Appl. 2021;12:200051.


  24. Mehrotra R, Namuduri KR, Ranganathan N. Gabor filter-based edge detection. Pattern Recogn. 1992;25(12):1479–94.


  25. Rahman S, Rahman MM, Abdullah-Al-Wadud M, Al-Quaderi GD, Shoyaib M. An adaptive gamma correction for image enhancement. EURASIP J Image Video Process. 2016;2016(1):1–13.


  26. Reddy KS, Jaya T. De-noising and enhancement of MRI medical images using Gaussian filter and histogram equalization. Mater Today Proc. 2021.

  27. Zunair H, Ben Hamza A. Sharp U-Net: Depthwise convolutional network for biomedical image segmentation. Comput Biol Med. 2021;136:104699.


  28. Wu CH, Shi ZX, Govindaraju V. Fingerprint image enhancement method using directional median filter. In: Biometric Technology for Human Identification. vol. 5404. Florida: International Society for Optics and Photonics; 2004. p. 66–75.

  29. Janani P, Premaladha J, Ravichandran K. Image enhancement techniques: A study. Indian J Sci Technol. 2015;8(22):1–12.


  30. Chen H, Li A, Kaufman L, Hale J. A fast filtering algorithm for image enhancement. IEEE Trans Med Imaging. 1994;13(3):557–64.


  31. Geng J, Jiang W, Deng X. Multi-scale deep feature learning network with bilateral filtering for SAR image classification. ISPRS J Photogramm Remote Sens. 2020;167:201–13.


  32. Nguyen TP, Chae DS, Park SJ, Yoon J. A novel approach for evaluating bone mineral density of hips based on Sobel gradient-based map of radiographs utilizing convolutional neural network. Comput Biol Med. 2021;132:104298.


  33. Shokhan M. An efficient approach for improving canny edge detection algorithm. Int J Adv Eng Technol. 2014;7(1):59.


  34. Clark A. Pillow (pil fork) documentation. Readthedocs. 2015. https://buildmedia.readthedocs.org/media/pdf/pillow/latest/pillow.pdf. Accessed 18 Nov 2022.

  35. Li SZ. Markov random field models in computer vision. In: European conference on computer vision. Stockholm: Springer; 1994. p. 361–70.

  36. Cressie N, Lele S. New models for Markov random fields. J Appl Probab. 1992;29(4):877–84.


  37. Gough W. The graphical analysis of a Lorentzian function and a differentiated Lorentzian function. J Phys A Gen Phys. 1968;1(6):704.


  38. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Lecture Notes in Computer Science. Munich: Springer International Publishing; 2015. p. 234–41.

  39. Chen J, Lu Y, Yu Q, Luo X, Adeli E, Wang Y, et al. TransUNet: Transformers make strong encoders for medical image segmentation. 2021. arXiv preprint arXiv:210204306.

  40. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Medical Image Computing and Computer-Assisted Intervention, Athens: Springer; 2016. p. 424–32.


Funding

This work was collectively supported by the Natural Science Foundation of Hubei Province [2023AFB918]; the Advantages Discipline Group (Medicine) Project in Higher Education of Hubei Province (2021-2025) [2023XKQT5]; and the Open Project of Hubei Provincial Clinical Research Center for Precise Diagnosis and Treatment of Liver Cancer [2023LCOF02].

Author information


Contributions

Conceptualization: L.Z. and M.Z.; Methodology: W.C. and L.Z.; Formal analysis and investigation: W.C., Q.L. and X.Z.; Data curation: R.B., Q.L. and X.Z.; Resources: R.B., Q.L. and X.Z.; Writing: L.Z.; Funding acquisition: L.Z. and M.Z.

Corresponding authors

Correspondence to Liang Zhao or Ming Zhang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Chen, W., Zhao, L., Bian, R. et al. Compensation of small data with large filters for accurate liver vessel segmentation from contrast-enhanced CT images. BMC Med Imaging 24, 129 (2024). https://doi.org/10.1186/s12880-024-01309-1
