
A consistency evaluation of signal-to-noise ratio in the quality assessment of human brain magnetic resonance images

  • Shaode Yu (1, 2),
  • Guangzhe Dai (1, 3),
  • Zhaoyang Wang (1, 3),
  • Leida Li (4),
  • Xinhua Wei (5, 6) and
  • Yaoqin Xie (1), corresponding author
BMC Medical Imaging 2018, 18:17

https://doi.org/10.1186/s12880-018-0256-6

Received: 26 October 2017

Accepted: 30 April 2018

Published: 16 May 2018

Abstract

Background

Quality assessment of medical images is closely related to quality assurance, image interpretation and decision making. For magnetic resonance (MR) images, the signal-to-noise ratio (SNR) is routinely used as a quality indicator, yet little is known about its consistency across different observers.

Methods

In total, 192, 88, 76 and 55 brain images are acquired using T2*, T1, T2 and contrast-enhanced T1 (T1C) weighted MR imaging sequences, respectively. For each imaging protocol, the consistency of SNR measurement is verified between and within two observers, with white matter (WM) and cerebral spinal fluid (CSF) alternately used as the tissue region of interest (TOI) for SNR measurement. The procedure is repeated on another day within 30 days. First, the overlap of voxels in the TOIs is quantified with the Dice index. Then, test-retest reliability is assessed in terms of the intra-class correlation coefficient (ICC). After that, four models (BIQI, BLIINDS-II, BRISQUE and NIQE), primarily designed for the quality assessment of natural images, are borrowed to predict the quality of MR images. Finally, the correlation between SNR values and the predicted results is analyzed.

Results

For the same TOI in each MR imaging sequence, fewer than 6% of voxels overlap between manual delineations. In the quality estimation of MR images, statistical analysis indicates no significant difference between observers (Wilcoxon rank sum test, p_w ≥ 0.11; paired-sample t test, p_p ≥ 0.26), and good to very good intra- and inter-observer reliability is found (ICC, p_icc ≥ 0.74). Furthermore, the Pearson correlation coefficient (r_p) suggests that SNRwm correlates strongly with BIQI, BLIINDS-II and BRISQUE in T2* (r_p ≥ 0.78), BRISQUE and NIQE in T1 (r_p ≥ 0.77), BLIINDS-II in T2 (r_p ≥ 0.68) and BRISQUE and NIQE in T1C (r_p ≥ 0.62) weighted MR images, while SNRcsf correlates strongly with BLIINDS-II in T2* (r_p ≥ 0.63) and T2 (r_p ≥ 0.64) weighted MR images.

Conclusions

The consistency of SNR measurement is validated across various observers and MR imaging protocols. When SNR serves as the quality indicator of MR images, BRISQUE and BLIINDS-II can be conditionally used for the automated quality estimation of human brain MR images.

Keywords

Signal-to-noise ratio; Consistency evaluation; Medical image quality assessment; Magnetic resonance imaging

Background

Medical image quality is closely related to many clinical applications, such as screening, abnormality detection and disease diagnosis. Nowadays, various imaging modalities are in daily use, such as computerized tomography (CT) and magnetic resonance (MR) imaging, not to mention the devices still under development [1–3]. At the same time, massive numbers of medical images are collected every day to support clinical decision making. Therefore, how to evaluate medical image quality attracts increasing attention [4, 5].

Medical image quality assessment (MIQA) is crucial in equipment quality assurance [6–8], comparison of algorithms for image restoration [9–13], image interpretation [14–17] and disease diagnosis [18, 19]. MIQA algorithms can be grouped into full- and no-reference categories [19–23]. Full-reference algorithms require access to a reference image, which is often unavailable in the medical imaging domain. To tackle this problem, images from advanced devices have been used as the reference to validate proposed methods on images from common devices [24, 25]. However, this kind of approach leads to new obstacles due to uncontrollable motion and, in particular, different imaging characteristics. By comparison, no-reference MIQA algorithms are more useful but also more challenging, since no reference information can be borrowed [20, 23, 26].

As a quality indicator of medical images, the signal-to-noise ratio (SNR) is widely used to evaluate new hardware and image processing algorithms [19, 23, 26–31]. The most common approach to SNR measurement, known as the “two-region” approach, is based on signal statistics in two separate regions of interest (ROIs) from a single image: a tissue ROI (TOI), which determines the signal, and an ROI localized in the object-free region, which measures the noise [27, 28, 32]. Even so, comparing medical image quality with SNR measurements across studies remains difficult [23]. Above all, SNR values may vary with the delineation of the ROIs: different tissues are of concern for different purposes, and even for the same purpose it is impossible to delineate an identical tissue region twice. Moreover, the quality of MR image acquisition is closely related to the magnetic field strength (1.5 T, 3 T, etc.), imaging protocol (T1, T2, etc.), field of view (FOV), reconstruction method and other significant factors. Furthermore, medical imaging is prone to unavoidable noise and artifacts, and a further challenge comes from the diverse imaging characteristics across modalities. Therefore, a consistency evaluation of SNR measurement is helpful for the further comparison of medical image quality.

In this paper, we evaluate the reliability of SNR measurement with respect to different observers. At this preliminary stage, the study is confined to human brain MR images, and four MR imaging sequences are analyzed. To the best of our knowledge, the most similar work is [26], which conducted a correlation analysis between subjective evaluation and 13 full-reference models primarily used for natural image quality assessment (NIQA). However, that study generalizes poorly. First, the experiment was based on synthesized distortions of 25 reference MR images, so the results may not be convincing for real-life medical images. Second, the study relied on subjective scoring of image quality, which is time-consuming and expensive. In contrast, in this study 411 in vivo human brain MR images are collected, and 2 observers localize the tissue regions of white matter (WM) and cerebral spinal fluid (CSF) as the TOIs for SNR measurement. Most importantly, this study investigates the consistency of SNR across different observers. After the reliability of SNR measurement is verified, 4 no-reference NIQA models are borrowed from the computer vision community to predict MR image quality, and the correlation between the predicted results and SNR values is explored. On the whole, this study may shed some light on automated objective MIQA with less time and expenditure.

Methods

Data collection

In total, 192 T2* weighted MR images of healthy brains, along with 88 T1, 76 T2 and 55 contrast-enhanced T1 (T1C) weighted MR images of brains with cancerous tumors, are collected. Participants were scanned on a 3.0 T scanner (Siemens, Erlangen, Germany) with an 8-channel brain phased-array coil.

Specifically, the T2* weighted images are acquired using a gradient-echo pulse sequence. The time of repetition (TR) is 200 ms and the time of echo (TE) varies from 2.61 ms to 38.91 ms in equal intervals of 3.3 ms. The flip angle is 15°, the FOV is 220 × 220 mm², the slice thickness is 3.0 mm and the resultant image matrix is 384 × 384. Note that the original purpose of the multi-echo T2* weighted image acquisition was tissue dissimilarity analysis [12]. The T1, T2 and T1C weighted images are acquired using a spin-echo protocol with different TR and TE pairs (535 ms and 8 ms; 3500 ms and 105 ms; 650 ms and 9 ms, respectively). The flip angle is 15°, the FOV is 220 × 220 mm² and the slice thickness is 1 mm or 2 mm. The matrix size of the T1 and T1C weighted MR images varies from 512 × 432 to 668 × 512, while that of the T2 weighted MR images ranges from 384 × 324 to 640 × 640.
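As a quick arithmetic check (illustrative, not part of the study's code), the stated TE range and spacing correspond to a 12-echo train:

```python
import numpy as np

# TE from 2.61 ms to 38.91 ms in steps of 3.3 ms (gradient-echo T2* protocol)
te = np.round(np.arange(12) * 3.3 + 2.61, 2)
print(len(te), te[0], te[-1])   # 12 echoes spanning 2.61 ... 38.91 ms
```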

Image pre-processing

For each image, pixel intensity is linearly scaled to [0, 255]. Then two TOIs (WM and CSF) are outlined, in addition to two air regions. A non-physician (observer A, OA) and a radiologist with more than 15 years of experience (observer B, OB) are asked to delineate the ROIs manually. The observers work separately and independently, but agree that the outlined ROIs should be as large as possible. Furthermore, for the T1, T2 and T1C weighted MR images, they also agree that the TOIs should be homogeneous and kept away from the tumor areas. The initial shape of each ROI is approximated with six points (the red sparkles in Fig. 1) and further refined using a free-form curve-fitting method [33, 34], which takes the six points as control points and uses Hermite cubic curves [35] for smooth interpolation between them. In the end, the outlined regions are fed into our in-house algorithm, built in MATLAB (MathWorks, Natick, MA, USA), to measure the WM-based SNR (SNRwm) and CSF-based SNR (SNRcsf) values. Note that the whole procedure is repeated on another day within 30 days for intra-observer reliability analysis.
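The intensity normalization step can be sketched as follows. This is an illustrative Python version (the original pipeline is in-house MATLAB code), assuming a simple min-max mapping to [0, 255]:

```python
import numpy as np

def rescale_to_uint8(img):
    """Linearly map pixel intensities to [0, 255], as in the pre-processing step."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # constant image: avoid division by zero
        return np.zeros_like(img, dtype=np.uint8)
    scaled = (img - lo) / (hi - lo) * 255.0
    return np.round(scaled).astype(np.uint8)
```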
Fig. 1 Manual outline of tissue regions and air regions. a, b, c, d are T2*, T1, T2 and T1C weighted MR images, respectively; b, c and d show one example from a single subject. Points initially localized by the observers are marked with red sparkles. The outlined WM, CSF and AIR regions are enclosed by pink, blue and yellow curves, respectively. Note that the images have been cropped for display purposes

Figure 1 shows T2* (a), T1 (b), T2 (c) and T1C (d) weighted MR images. In each image, the WM, CSF and AIR regions are enclosed by pink, blue and yellow curves, respectively. Note that the red sparkles are the points initially localized by the observers, and that the images have been cropped for display purposes.

SNR measurement

Two approaches exist for SNR measurement. The most common one requires two separate ROIs from a single image [27, 28]. Taking the signal (S) to be the average intensity in a tissue ROI (μ_TOI) and the noise (σ) to be the standard deviation of the pixel intensity in a background ROI (σ_AIR), the SNR of the image can be approximated as
$$ {SNR}_{TOI}=\frac{S}{\sigma }=0.655\times \frac{\mu_{TOI}}{\sigma_{AIR}}. $$
(1)

The factor of 0.655 arises from the Rician distribution of the background noise in a magnitude image: because noise variations can be both negative and positive, the standard deviation measured in the object-free region underestimates the true noise level [27, 28].
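The origin of the 0.655 factor can be checked numerically. In an object-free region of a magnitude image, the Rician noise reduces to a Rayleigh distribution, whose standard deviation is σ·√(2 − π/2) ≈ 0.655σ. A short Monte Carlo sketch (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0       # SD of the underlying Gaussian noise channels
n = 1_000_000

# Magnitude of complex Gaussian noise: Rayleigh-distributed background.
background = np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))

# The measured background SD underestimates sigma by the Rayleigh factor.
factor = background.std() / sigma          # ~ sqrt(2 - pi/2) ~ 0.655
print(round(factor, 3), round(np.sqrt(2 - np.pi / 2), 3))
```

Since the measured σ_AIR is only about 0.655 of the underlying noise SD, multiplying by 0.655 in Eq. (1) compensates for the underestimation.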

If the image is not homogeneous, the SNR can instead be measured with a second approach [36, 37]. First, a pair of images is acquired in consecutive scans with identical imaging settings. A difference image is then derived by subtracting one image from the other. Since the images are acquired consecutively without any instability, noise should be the only difference between them. Taking the signal (S) as the mean pixel intensity in a tissue ROI on one original image (μ_oTOI) and the noise (σ) as the standard deviation in the same ROI on the difference image (σ_sTOI), the SNR can be estimated as
$$ {SNR}_{TOI}=\frac{S}{\sigma }=\sqrt{2}\times \frac{\mu_{oTOI}}{\sigma_{sTOI}}, $$
(2)
where the factor of \( \sqrt{2} \) arises because the standard deviation (σ) is measured on the difference image rather than on an original image.

This study utilizes Eq. (1) to measure the SNR of the MR images, since image homogeneity is warranted here and the second approach, which is commonly used for equipment quality assurance, requires scanning the object twice.
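A minimal sketch of the two-region measurement of Eq. (1), assuming binary masks for the TOI and the air region (function and variable names are ours, not from the paper's MATLAB code):

```python
import numpy as np

def snr_two_region(image, toi_mask, air_mask):
    """Eq. (1): SNR = 0.655 * mean(TOI intensity) / SD(air intensity)."""
    mu_toi = image[toi_mask].mean()          # signal: average intensity in tissue ROI
    sigma_air = image[air_mask].std(ddof=1)  # noise: SD in object-free ROI
    return 0.655 * mu_toi / sigma_air
```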

No-reference NIQA

Massive numbers of NIQA models are developed each year, but few are used in the medical imaging community [38–40]. This study applies four automated no-reference NIQA methods to predict MR image quality. The correlation analysis between SNR values and NIQA results aims to identify no-reference NIQA models with potential for MIQA applications.

The involved NIQA models utilize natural scene statistics (NSS) to estimate the general quality of natural images. Specifically, the blind image quality index (BIQI) [41] estimates image quality from statistical features extracted in the discrete wavelet transform (DWT) domain; it requires no knowledge of the distortion types and can be extended to any kind of distortion. The second indicator, BLIINDS-II [42], is an improved version of the blind image integrity notator using discrete cosine transform (DCT) statistics [38]; it adopts a general statistical model for score prediction. The third, the blind/referenceless image spatial quality evaluator (BRISQUE) [43], makes use of locally normalized luminance coefficients and quantifies possible losses of “naturalness”, a holistic measure of image quality. The last is the natural image quality evaluator (NIQE) [44], which builds a “quality-aware” collection of statistical features for natural image quality estimation.
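As an illustration of the kind of NSS feature these models build on, BRISQUE starts from mean-subtracted, contrast-normalized (MSCN) luminance coefficients. A sketch using a Gaussian-weighted local window; the window parameter and stabilizing constant here are illustrative choices, not the exact values from [43]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, c=1.0):
    """Locally normalized luminance: (I - local mean) / (local SD + c)."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                    # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu   # local variance
    sd = np.sqrt(np.clip(var, 0.0, None))               # guard tiny negatives
    return (img - mu) / (sd + c)
```

BRISQUE then fits generalized Gaussian models to these coefficients and their pairwise products to form its feature vector.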

These NIQA models are implemented in MATLAB (MathWorks, Natick, MA, USA), and the code provided by the authors is accessible online. The models are evaluated without modification in this study. Full details of the algorithms can be found in the corresponding literature [41–44].

Experiment design

The experiment is divided into three steps. First, the overlap of the manually outlined TOIs between and within observers is quantified with the Dice index, defined as \( d=2\times \frac{\mid X\cap Y\mid }{\mid X\mid +\mid Y\mid}\times 100\% \), where X and Y stand for two TOIs and |·| denotes the number of voxels in a region. A Dice index of 100% means the two TOIs are identical, while 0% indicates that the two TOIs do not overlap at all.
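The Dice computation on binary TOI masks can be sketched as follows (an illustrative Python version; the convention for two empty masks is our assumption):

```python
import numpy as np

def dice_index(mask_x, mask_y):
    """Dice overlap in percent: d = 2|X ∩ Y| / (|X| + |Y|) * 100%."""
    x = np.asarray(mask_x, dtype=bool)
    y = np.asarray(mask_y, dtype=bool)
    denom = x.sum() + y.sum()
    if denom == 0:
        return 100.0       # assumption: two empty regions count as identical
    return 200.0 * np.logical_and(x, y).sum() / denom
```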

Then, for the same TOI in each imaging sequence, the inter-observer difference is assessed with the Wilcoxon rank sum test [45, 46] and the paired-sample t-test [47]. Statistical analysis is performed in R (http://www.Rproject.org) with the significance level set at 0.05. Moreover, test-retest reliability is evaluated in terms of the intra-class correlation coefficient (ICC, p_icc) using a two-way mixed-effects model [48]. Values of p_icc from 0.81 to 1.00 suggest very good reliability, and values from 0.61 to 0.80 good reliability.
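For reference, a single-measure two-way ICC can be computed from an n-subjects × k-raters matrix as below. The paper uses R; this Python sketch shows the ICC(3,1) consistency form as one common choice for a "two-way mixed-effects model" — whether the consistency or agreement variant was used is not stated, so this is an assumption:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed-effects, single-rater consistency.

    `ratings` is an (n_targets, k_raters) array. Mean squares come from a
    two-way ANOVA decomposition without replication.
    """
    Y = np.asarray(ratings, dtype=np.float64)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)            # per-target means
    col_means = Y.mean(axis=0)            # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```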

Finally, the correlation between SNR values and NIQA results is analyzed using the Pearson correlation coefficient (r_p) [49]. Note that values of r_p from 0.81 to 1.00 indicate very strong correlation, while values from 0.61 to 0.80 indicate strong correlation.
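The Pearson coefficient itself is a one-liner on paired series (an illustrative helper, not the study's R code):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.corrcoef(a, b)[0, 1])
```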

Results

Overlapped voxels in TOIs

Table 1 summarizes the number of voxels in the TOIs for each MR sequence (mean and standard deviation, μ ± σ). Hundreds of voxels are outlined for SNR measurement; the minimum is 330 ± 72.
Table 1 The number of voxels in the outlined tissue regions, mean (SD)

| | T2* WM | T2* CSF | T1 WM | T1 CSF | T2 WM | T2 CSF | T1C WM | T1C CSF |
|---|---|---|---|---|---|---|---|---|
| First time, OA | 423 (95) | 381 (117) | 558 (173) | 614 (258) | 609 (239) | 889 (366) | 523 (146) | 704 (314) |
| First time, OB | 330 (72) | 333 (138) | 567 (181) | 649 (318) | 414 (174) | 699 (288) | 477 (156) | 663 (272) |
| Second time, OA | 382 (88) | 378 (104) | 530 (187) | 626 (219) | 589 (251) | 853 (349) | 505 (138) | 692 (290) |
| Second time, OB | 357 (119) | 342 (119) | 582 (176) | 663 (282) | 447 (195) | 721 (306) | 480 (177) | 686 (268) |

Specifically, the overlap is quantified with the Dice index, as shown in Table 2: fewer than 6% of voxels overlap between and within observers in the manual delineation of the TOIs.
Table 2 Dice index for the overlapped percentage of voxels in the TOIs between and within observers. OA1/OA2 and OB1/OB2 denote the first and second delineations by observers A and B, respectively

| | WM vs OB1 | WM vs OB2 | WM vs OA2 | CSF vs OB1 | CSF vs OB2 |
|---|---|---|---|---|---|
| T2*, OA1 | 0.05 | 0.03 | 0.05 | 0.04 | 0.03 |
| T2*, OA2 | 0.03 | 0.04 | | 0.03 | 0.03 |
| T2*, OB1 | | 0.06 | | | 0.06 |
| T1, OA1 | 0.02 | 0.03 | 0.03 | 0.04 | 0.03 |
| T1, OA2 | 0.03 | 0.03 | | 0.01 | 0.02 |
| T1, OB1 | | 0.02 | | | 0.02 |
| T2, OA1 | 0.02 | 0.04 | 0.02 | 0.02 | 0.01 |
| T2, OA2 | 0.03 | 0.03 | | 0.03 | 0.02 |
| T2, OB1 | | 0.02 | | | 0.02 |
| T1C, OA1 | 0.02 | 0.02 | 0.03 | 0.02 | 0.03 |
| T1C, OA2 | 0.02 | 0.03 | | 0.01 | 0.03 |
| T1C, OB1 | | 0.04 | | | 0.02 |

Analysis of SNR values

Figure 2 shows the first-time SNR measurements using Bland & Altman plots [50]: scatter diagrams of the differences between two SNR observations plotted against their averages. In each plot, the average and the difference of the SNR values are read from the horizontal and vertical axes, respectively. Horizontal lines are drawn at the mean difference between the two SNR observations and at the limits of agreement, the latter defined as the mean difference plus and minus 1.96 times the standard deviation (SD) of the SNR differences. The Bland & Altman plots show that more than 89% of the points lie between the limits of agreement.
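The quantities behind such a plot can be sketched as follows (an illustrative Python helper; the 1.96 multiplier matches the limits-of-agreement definition above):

```python
import numpy as np

def bland_altman_stats(x, y):
    """Per-pair averages/differences plus mean difference and 95% limits
    of agreement (mean difference +/- 1.96 SD of the differences)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    avg = (x + y) / 2.0                  # horizontal axis of the plot
    diff = x - y                         # vertical axis of the plot
    md = diff.mean()
    sd = diff.std(ddof=1)
    lo, hi = md - 1.96 * sd, md + 1.96 * sd
    pct_within = 100.0 * np.mean((diff >= lo) & (diff <= hi))
    return avg, diff, md, (lo, hi), pct_within
```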
Fig. 2 Bland & Altman plots of SNR values from the first-time measurement. The left column shows SNRwm values and the right column SNRcsf values. The solid lines indicate the mean difference between SNR measurements and the dashed lines indicate the 95% confidence interval of the difference between observations

Inter-observer difference

The inter-observer difference in SNR observations is analyzed with the Wilcoxon rank sum test (p_w) and the paired-sample t test (p_p). The corresponding results are shown in Table 3, where the minimum value in each test is boldfaced. The minimum p_w is 0.11 and the minimum p_p is 0.26. It is also found that both p_w and p_p from SNRwm are larger than the corresponding values from SNRcsf.
Table 3 Statistical analysis of the SNR measurements in each imaging sequence regarding different TOIs (the minimum value in each test is boldfaced)

| | T2* WM | T2* CSF | T1 WM | T1 CSF | T2 WM | T2 CSF | T1C WM | T1C CSF |
|---|---|---|---|---|---|---|---|---|
| First time, p_w | 0.54 | 0.39 | 0.88 | 0.74 | 0.99 | **0.11** | 0.69 | 0.56 |
| First time, p_p | 0.41 | 0.30 | 0.98 | 0.59 | 0.94 | 0.28 | 0.77 | 0.46 |
| Second time, p_w | 0.57 | 0.33 | 0.92 | 0.75 | 0.95 | 0.18 | 0.72 | 0.58 |
| Second time, p_p | 0.44 | 0.36 | 0.96 | 0.62 | 0.96 | **0.26** | 0.79 | 0.47 |

Test-retest reliability

Table 4 lists the test-retest reliability results; ICC1 and ICC2 stand for the intra- and inter-observer correlation coefficients, respectively. As shown in the table, very good intra-observer reliability is found for the experienced radiologist (OB, p_icc ≥ 0.81). Similar results are found for the non-physician (OA), except that only good reliability is achieved for SNRcsf on the T2* (p_icc = 0.79) and T2 (p_icc = 0.76) weighted MR images. Furthermore, good to very good inter-observer reliability is found (p_icc ≥ 0.80), with only good inter-observer reliability for SNRcsf in the T2* weighted MR imaging sequence (p_icc = 0.74).
Table 4 Intra- and inter-observer reliability in terms of intra-class correlation coefficients between the non-physician and the experienced physician

| | T2* WM | T2* CSF | T1 WM | T1 CSF | T2 WM | T2 CSF | T1C WM | T1C CSF |
|---|---|---|---|---|---|---|---|---|
| Intra-observer (ICC1), OA | 0.84 | 0.79 | 0.91 | 0.87 | 0.95 | 0.76 | 0.89 | 0.86 |
| Intra-observer (ICC1), OB | 0.86 | 0.81 | 0.95 | 0.83 | 0.97 | 0.85 | 0.88 | 0.82 |
| Inter-observer (ICC2) | 0.81 | 0.74 | 0.92 | 0.80 | 0.90 | 0.81 | 0.85 | 0.83 |

Correlation between SNR and NIQA

Table 5 shows the correlation coefficients (r_p) between the mean SNR values of each TOI (two measurements per observer) and the NIQA results; values of r_p ≥ 0.60 are boldfaced. Specifically, for SNRwm, BIQI, BLIINDS-II and BRISQUE on T2* (r_p ≥ 0.78), BRISQUE and NIQE on T1 (r_p ≥ 0.77), BLIINDS-II on T2 (r_p ≥ 0.68), and BRISQUE and NIQE on T1C (r_p ≥ 0.62) images show strong correlation, while for SNRcsf, BLIINDS-II correlates well on the T2* (r_p ≥ 0.63) and T2 (r_p ≥ 0.64) weighted MR imaging sequences.
Table 5 Correlation between TOI-based SNR values and no-reference NIQA results (r_p; values ≥ 0.60 in bold)

| T2* | SNRwm OA | SNRwm OB | SNRcsf OA | SNRcsf OB |
|---|---|---|---|---|
| BIQI | **0.81** | **0.79** | 0.55 | 0.57 |
| BLIINDS-II | **0.78** | **0.80** | **0.72** | **0.63** |
| BRISQUE | **0.82** | **0.81** | 0.56 | 0.52 |
| NIQE | 0.24 | 0.27 | 0.35 | 0.03 |

| T1 | SNRwm OA | SNRwm OB | SNRcsf OA | SNRcsf OB |
|---|---|---|---|---|
| BIQI | 0.16 | 0.11 | 0.15 | 0.13 |
| BLIINDS-II | 0.23 | 0.20 | 0.02 | 0.06 |
| BRISQUE | **0.77** | **0.81** | 0.18 | 0.22 |
| NIQE | **0.82** | **0.84** | 0.24 | 0.28 |

| T2 | SNRwm OA | SNRwm OB | SNRcsf OA | SNRcsf OB |
|---|---|---|---|---|
| BIQI | 0.18 | 0.25 | 0.07 | 0.29 |
| BLIINDS-II | **0.72** | **0.68** | **0.73** | **0.64** |
| BRISQUE | 0.45 | 0.37 | 0.52 | 0.28 |
| NIQE | 0.55 | 0.46 | 0.53 | 0.32 |

| T1C | SNRwm OA | SNRwm OB | SNRcsf OA | SNRcsf OB |
|---|---|---|---|---|
| BIQI | 0.36 | 0.33 | 0.08 | 0.12 |
| BLIINDS-II | 0.34 | 0.38 | 0.10 | 0.15 |
| BRISQUE | **0.62** | **0.73** | 0.33 | 0.36 |
| NIQE | **0.63** | **0.72** | 0.32 | 0.30 |

Discussion

This paper has validated the consistency of SNR measurement in the quality assessment of human brain MR images. Moreover, the correlation between TOI-based SNR measurement and NIQA models has been analyzed. The study suggests that off-the-shelf NIQA models from the computer vision community hold considerable potential for automated and objective MIQA applications.

The consistency evaluation indicates that SNR measurement is reliable across observers in each MR imaging sequence. In the image pre-processing, the TOIs are localized independently; if two TOIs did not overlap at all, the Dice index would be zero. On average, the TOIs overlap by no more than 6% (Table 2), while the statistical analysis indicates that SNR values do not change significantly between observers (Table 3). In other words, independent localization of the TOIs makes no difference to SNR measurement. Moreover, the test-retest reliability study suggests good to very good intra- and inter-observer reliability (Table 4). This may be why SNR is so widely used in clinical situations, and accordingly, a non-physician can perform SNR measurement of MR images as well as an experienced physician does.

The correlation between SNR values and NIQA models shows that BLIINDS-II correlates well with SNRcsf on T2* and T2 weighted MR images, since CSF presents relatively higher voxel intensity than other tissues in these sequences, which leads to robust estimation of SNRcsf. Compared with SNRcsf, more NIQA results correlate well with SNRwm values, since WM is distinguishable in all of the involved MR imaging sequences. The authors therefore suggest that tissue regions with higher intensities should serve as the TOI in SNR measurement. On the whole, BRISQUE performs well as an automated no-reference NIQA model for the quality assessment of T2*, T1 and T1C weighted brain MR images, and BLIINDS-II is superior for assessing the quality of T2* and T2 weighted MR images independent of the TOI selection. Consequently, it is promising to adapt NIQA models developed in the computer vision community for MIQA applications in the medical imaging domain [51]. It should be mentioned that the correlation between SNR values and the predicted results is not very strong (r_p ≤ 0.85), so further improvement or modification of existing NIQA models is needed.

SNR is frequently used as an image quality indicator in the clinic. As defined here, it is a local measure with respect to the whole MR image; SNR can also be formulated from the global signal by using the whole object region as the tissue region. An overview of existing definitions of SNR measurement can be found in [23]. More general and automated MIQA approaches include using Shannon's theory to describe the image content and model the spatial spectral power density of the image as a quality indicator [21], or analyzing the background of structural brain magnitude images to represent the image quality [52]. In particular, some researchers have explored bridging the gap between SNR measurement and diagnostic accuracy or detectability [9, 18]. Such studies go beyond the physical measurement of image quality, since the ultimate goal of medical imaging is abnormality detection and disease diagnosis.

Conclusions

The consistency of SNR measurement has been validated across different observers. The correlation between SNR measurements and NIQA models indicates that BRISQUE works well for automated MIQA of T2*, T1 and T1C weighted brain MR images, while BLIINDS-II is superior on T2* and T2 weighted images independent of the TOI selection. Our future work will focus on the connection between SNR measurement, NIQA models and MIQA applications.

Abbreviations

BIQI: Blind image quality index
BLIINDS-II: Improved version of the blind image integrity notator using DCT statistics
BRISQUE: Blind/referenceless image spatial quality evaluator
CSF: Cerebral spinal fluid
CT: Computerized tomography
DCT: Discrete cosine transform
DWT: Discrete wavelet transform
FOV: Field of view
ICC1: Intra-observer correlation coefficient
ICC2: Inter-observer correlation coefficient
MIQA: Medical image quality assessment
MR: Magnetic resonance
NIQA: Natural image quality assessment
NIQE: Natural image quality evaluator
NSS: Natural scene statistics
OA: Observer A
OB: Observer B
ROI: Region of interest
SD: Standard deviation
SNR: Signal-to-noise ratio
SNRcsf: CSF-based SNR
SNRwm: WM-based SNR
T1C: Contrast-enhanced T1
TE: Time of echo
TOI: Tissue region of interest
TR: Time of repetition
WM: White matter

Declarations

Acknowledgements

The authors would like to thank the editor, the reviewers and Rached Belgacem from the Institut Superieur des Technologies Medicales de Tunis (ISTMT) for their valuable advice, which has helped to improve the quality of the paper.

Funding

This work is supported in part by grants from the National Key Research and Development Program of China (2016YFC0105102), the Leading Talent of Special Support Project in Guangdong (Y77504), the Shenzhen Key Technical Research Project (JSGG20160229203812944), the National Science Foundation of Guangdong (2014A030312006) and the Beijing Center for Mathematics and Information Interdisciplinary Sciences; the National Natural Science Foundation of China (61471349), the Science and Technology Plan Projects of Guangdong Province (2015B020233004), the Shenzhen Basic Technology Research Project (JCYJ20160429174611494 and JCYJ20170818160306270); the National Natural Science Foundation of China (61771473 and 61379143), the Six Talent Peaks High-level Talents in Jiangsu Province (XYDXX-063) and the Qing Lan Project; and the Science and Technology Planning Project of Guangzhou (201804010032). The funding sponsors had no role in the design of the study; in the collection, analysis or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Availability of data and materials

The datasets analyzed during the current study are not publicly available. These data can be accessed only by the physicians and researchers involved, to ensure participant confidentiality.

Authors’ contributions

Conceived and designed the experiments: SY, LL, XW, YX. Performed the experiments: GD, XW. Analyzed the data: SY, GD, ZW. Contributed reagents/materials/analysis tools: SY, XW. Wrote the manuscript: SY. Discussed and proof-read the manuscript: LL, XW, YX. All authors read and approved the final manuscript.

Ethics approval and consent to participate

This study was performed in accordance with the ethical guidelines of the Declaration of Helsinki (version 2002). The brain MR imaging of healthy volunteers was approved by the Medical Ethics Committee of Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and the MR imaging of patients with brain tumors was approved by the Medical Ethics Committee of Guangzhou First People’s Hospital of Guangzhou Medical University. Written informed consent was obtained from all participants.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
(2)
Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
(3)
Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China
(4)
School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
(5)
Department of Radiology, Guangzhou First People's Hospital, Guangzhou Medical University, Guangzhou, China
(6)
The Second Affiliated Hospital, South China University of Technology, Guangzhou, China

References

  1. Sandhu GY, Li C, Roy O, Schmidt S, Duric N. Frequency domain ultrasound waveform tomography: breast imaging using a ring transducer. Phys Med Biol. 2015;60(14):5381.View ArticlePubMed CentralGoogle Scholar
  2. Ahmad M, Bazalova-Carter M, Fahrig R, Xing L. Optimized detector angular configuration increases the sensitivity of x-ray fluorescence computed tomography (XFCT). IEEE Trans Med Imaging. 2015;34(5):1140–7.View ArticleGoogle Scholar
  3. Zhang Z, Yu S, Liang X, Zhu Y, Xie Y. A novel design of ultrafast micro-CT system based on carbon nanotube: a feasibility study in phantom. Phys Med. 2016;32(10):1302–7.View ArticleGoogle Scholar
  4. Razaak M, Martini MG, Savino K. A study on quality assessment for medical ultrasound video compressed via HEVC. IEEE J Biomed Health Inform. 2014;18(5):1552–9.View ArticleGoogle Scholar
  5. Zhang L, Cavaro-M’enard C, Le Callet P, Ge D. A multi-slice model observer for medical image quality assessment. IEEE ICASSP. 2015;1:1667–71.Google Scholar
  6. Jenkins CH, Xing L, Fahimian BP. Automating position and timing quality assurance for high dose rate brachytherapy using radioluminescent phosphors and optical imaging. Brachytherapy. 2016;15:28.View ArticleGoogle Scholar
  7. Firbank MJ, Coulthard A, Harrison RM, Williams ED. Quality assurance for MRI: practical experience. Br J Radiol. 2000;73(868):376–83.View ArticleGoogle Scholar
  8. Peltonen JI, Makela T, Sofiev A, Salli E. An automatic image processing workflow for daily magnetic resonance imaging quality assurance. J Digit Imaging. 2016;73(868):1–9.Google Scholar
  9. Eck BL, Fahmi R, Brown KM, Zabic S, Raihani N, Miao J, Wilson DL. Computational and human observer image quality evaluation of low dose, knowledge-based CT iterative reconstruction. Med Phys. 2015;42(10):6098–111.
  10. Baselice F, Ferraioli G, Pascazio V. A 3D MRI denoising algorithm based on Bayesian theory. Biomed Eng Online. 2017;16(1):25.
  11. Peng C, Qiu B, Li M, Guan Y, Zhang C, Wu Z, Zheng J. Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction. Biomed Eng Online. 2017;16(1):1.
  12. Yu S, Wu S, Wang H, Wei X, Chen X, Pan W, Hu J, Xie Y. Linear-fitting-based similarity coefficient map for tissue dissimilarity analysis in T2*-w magnetic resonance imaging. Chin Phys B. 2015;24(12):128711.
  13. Li H, Wu J, Miao A, Yu P, Chen J, Zhang Y. Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement. Biomed Eng Online. 2017;16(1):46.
  14. Zhang R, Zhou W, Li Y, Yu S, Xie Y. Nonrigid registration of lung CT images based on tissue features. Comput Math Methods Med. 2013;2013:834192.
  15. Yu S, Zhang R, Wu S, Hu J, Xie Y. An edge-directed interpolation method for fetal spine MR images. Biomed Eng Online. 2013;12(1):102.
  16. Guo L, Wang H, Peng C, Dai Y, Ding M, Sun Y, Yang X, Zheng J. Non-rigid MR-TRUS image registration for image-guided prostate biopsy using correlation ratio-based mutual information. Biomed Eng Online. 2017;16(1):8.
  17. Li X, Huang W, Rooney WD. Signal-to-noise ratio, contrast-to-noise ratio and pharmacokinetic modeling considerations in dynamic contrast-enhanced magnetic resonance imaging. Magn Reson Imaging. 2012;30(9):1313–22.
  18. Cosman PC, Gray RM, Olshen RA. Evaluating quality of compressed medical images: SNR, subjective rating, and diagnostic accuracy. Proc IEEE. 1994;82(6):919–32.
  19. Cao Z, Park J, Cho ZH, Collins CM. Numerical evaluation of image homogeneity, signal-to-noise ratio, and specific absorption rate for human brain imaging at 1.5, 3, 7, 10.5, and 14T in an 8-channel transmit/receive array. J Magn Reson Imaging. 2015;41(5):1432–9.
  20. Chow LS, Paramesran R. Review of medical image quality assessment. Biomed Signal Process Control. 2016;27:145–54.
  21. Fuderer M. The information content of MR images. IEEE Trans Med Imaging. 1988;7(4):368–80.
  22. Geissler A, Gartus T, Foki T, Tahamtan AR, Beisteiner R, Barth M. Contrast-to-noise ratio (CNR) as a quality parameter in fMRI. J Magn Reson Imaging. 2007;25(6):1263–70.
  23. Welvaert M, Rosseel Y. On the definition of signal-to-noise ratio and contrast-to-noise ratio for fMRI data. PLoS One. 2013;8(11):e77089.
  24. Niu T, Zhu L. Scatter correction for full-fan volumetric CT using a stationary beam blocker in a single full scan. Med Phys. 2011;38(11):6027–38.
  25. Liang X, Zhang Z, Niu T, Yu S, Wu S, Li Z, Zhang H, Xie Y. Iterative image-domain ring artifact removal in cone-beam CT. Phys Med Biol. 2017;62:5276–92.
  26. Chow LS, Rajagopal H, Paramesran R, Alzheimer's Disease Neuroimaging Initiative (ADNI). Correlation between subjective and objective assessment of magnetic resonance (MR) images. Magn Reson Imaging. 2016;34(6):820–31.
  27. Henkelman RM. Measurement of signal intensities in the presence of noise in MR images. Med Phys. 1985;12(2):232–3.
  28. Kaufman L, Kramer DM, Crooks LE, Ortendahl DA. Measuring signal-to-noise ratios in MR imaging. Radiology. 1989;173(1):265–7.
  29. Shokrollahi P, Drake JM, Goldenberg AA. Signal-to-noise ratio evaluation of magnetic resonance images in the presence of an ultrasonic motor. Biomed Eng Online. 2017;16(1):45.
  30. Reeder SB, Wintersperger BJ, Dietrich O, Lanz T, Greiser A, Reiser MF, Glazer GM, Schoenberg SO. Practical approaches to the evaluation of signal-to-noise ratio performance with parallel imaging: application with cardiac imaging and a 32-channel cardiac coil. Magn Reson Med. 2005;54(3):748–54.
  31. Chen S, Wu H, Wu L, Jin J, Qiu B. Compressed sensing MRI via fast linearized preconditioned alternating direction method of multipliers. Biomed Eng Online. 2017;16(1):53.
  32. Murphy BW, Carson PL, Ellis JH, Zhang YT, Hyde RJ, Chenevert TL. Signal-to-noise measures for magnetic resonance imagers. Magn Reson Imaging. 1993;11(3):425–8.
  33. Zhou W, Xie Y. Interactive contour delineation and refinement in treatment planning of image-guided radiation therapy. J Appl Clin Med Phys. 2014;15(1):4499.
  34. Yu S, Wu S, Zhuang L, Wei X, Sak M, Neb D, Hu J, Xie Y. Efficient segmentation of a breast in B-mode ultrasound tomography using three-dimensional GrabCut (GC3D). Sensors. 2017;17(8):1827.
  35. Lu L. A note on curvature variation minimizing cubic Hermite interpolants. Appl Math Comput. 2015;259:596–9.
  36. Firbank MJ, Coulthard A, Harrison RM, Williams ED. A comparison of two methods for measuring the signal to noise ratio on MR images. Phys Med Biol. 1999;44(12):N261.
  37. Kellman P, McVeigh ER. Image reconstruction in SNR units: a general method for SNR measurement. Magn Reson Med. 2005;54(6):1439–47.
  38. Saad MA, Bovik AC, Charrier C. A DCT statistics-based blind image quality index. IEEE Signal Process Lett. 2010;17(6):583–6.
  39. Yu S, Wu S, Wang L, Jiang F, Xie Y, Li L. A shallow convolutional neural network for blind image sharpness assessment. PLoS One. 2017;12(5):e0176632.
  40. Gu K, Li L, Lu H, Min X, Lin W. A fast reliable image quality predictor by fusing micro- and macro-structures. IEEE Trans Ind Electron. 2017;64(5):3903–12.
  41. Moorthy A, Bovik A. A two-step framework for constructing blind image quality indices. IEEE Signal Process Lett. 2010;17(5):513–6.
  42. Saad MA, Bovik AC, Charrier C. DCT statistics model-based blind image quality assessment. IEEE ICIP. 2011;1:3093–6.
  43. Mittal A, Moorthy A, Bovik A. No-reference image quality assessment in the spatial domain. IEEE Trans Image Process. 2012;21(12):4695–708.
  44. Mittal A, Soundararajan R, Bovik A. Making a “completely blind” image quality analyzer. IEEE Signal Process Lett. 2013;20(3):209–12.
  45. Wilcoxon F. Individual comparisons by ranking methods. Biom Bull. 1945;1(6):80–3.
  46. Kerby DS. The simple difference formula: an approach to teaching nonparametric correlation. Compr Psychol. 2014;3:11.
  47. Zimmerman DW. A note on interpretation of the paired-samples t test. J Educ Behav Stat. 1997;22(3):349–60.
  48. Lin HS, Chen YJ, Lu HL, Lu TW, Chen CC. Test–retest reliability of mandibular morphology measurements on cone-beam computed tomography-synthesized cephalograms with random head positioning errors. Biomed Eng Online. 2017;16(1):62.
  49. Galton F. Regression towards mediocrity in hereditary stature. J Anthropol Inst G B Irel. 1886;15:246–63.
  50. Giavarina D. Understanding Bland–Altman analysis. Biochemia Medica. 2015;25(2):141–51.
  51. Chow LS, Rajagopal H. Modified-BRISQUE as no reference image quality assessment for structural MR images. Magn Reson Imaging. 2017;43:74–87.
  52. Mortamet B, Bernstein MA, Jack CR, Gunter JL, Ward C, Britson PJ, Meuli R, Thiran JP, Krueger G. Automatic quality assessment in structural brain magnetic resonance imaging. Magn Reson Med. 2009;62:365–72.

Copyright

© The Author(s). 2018