Multi-contrast brain magnetic resonance image super-resolution using the local weight similarity
 Hong Zheng^{1, 2},
 Xiaobo Qu^{1},
 Zhengjian Bai^{3},
 Yunsong Liu^{1},
 Di Guo^{4},
 Jiyang Dong^{1},
 Xi Peng^{5} and
 Zhong Chen^{1}
DOI: 10.1186/s12880-016-0176-2
© The Author(s). 2017
Received: 23 May 2016
Accepted: 26 December 2016
Published: 17 January 2017
Abstract
Background
Low-resolution images may be acquired in magnetic resonance imaging (MRI) due to limited data acquisition time or other physical constraints, and their resolutions can be improved with super-resolution methods. Since MRI can offer images of an object with different contrasts, e.g., T1-weighted or T2-weighted, the shared information between inter-contrast images can be used to benefit super-resolution.
Methods
In this study, an MRI image super-resolution approach to enhance in-plane resolution is proposed by exploring the statistical information estimated from an MRI image of another contrast that shares similar anatomical structures. We assume that some edge structures appear in both T1-weighted and T2-weighted MRI brain images acquired from the same subject, and the proposed approach aims to recover such structures to generate a high-resolution image from its low-resolution counterpart.
Results
The statistical information produces local weights that are found to be nearly invariant to the image contrast; thus, these weights can be used to transfer the shared information from one contrast to another. We analyze this property with comprehensive mathematics as well as numerical experiments.
Conclusion
Experimental results demonstrate that the image quality of low-resolution images can be remarkably improved with the proposed method if the weights are borrowed from a high-resolution image of another contrast.
Graphical Abstract
Multi-contrast MRI Image Super-resolution with Contrast-invariant Regression Weights
Keywords
Super-resolution, Multi-contrast, Statistical information, Weight, Non-iterative process

Background
In MRI, low-resolution (LR) images may be acquired in applications such as functional MRI [1, 2] and diffusion tensor imaging [3, 4], due to limited data acquisition time or other physical constraints. High-resolution (HR) images are preferable for subsequent image processing and visualization [5]. Super-resolution methods are widely utilized to improve image resolution [6–10]. Typical methods include sparse representations [6–8], projection onto convex sets (POCS) [9], tensor frames [10], etc. However, these methods need numerous iterations to accomplish super-resolution, which inevitably leads to high computational costs. For MRI, since a great number of images have to be processed, fast and stable methods are desired. Recently, prior information about MRI has been explored in super-resolution: for example, (a) redundant information produced by sub-pixel spatial shifts between multiple images [3], (b) a space homogeneity constraint from orthogonal anisotropic acquisitions [2], and (c) learned orthogonal dictionaries [11] have been employed to refine structural details and edges. Besides, image contrast can also be utilized to produce sharper images [12]. However, these methods may not lead to faithful super-resolution results when multiple shifted images are unavailable or the information within a single image is very limited. Thus, one may look for other prior information beyond a single image.
The signal intensity of a basic spin-echo acquisition can be modeled as \( S = \rho(H)\left(1-e^{-TR/T_1}\right)e^{-TE/T_2} \) [14], where ρ(H) refers to the proton density, TR is the repetition time and TE is the echo time. Different TR and TE values applied to the same section of tissue result in images with different contrasts. Yet, these images share the proton density of the subject, so they largely share similar anatomical structures while differing in regional contrast. The shared information between inter-contrast images can be exploited to benefit super-resolution. Therefore, it is possible to improve the resolution of an LR image by incorporating prior information from a different-contrast HR image. Rousseau proposed a patch-based iterative framework combined with non-local similarity to share information among multiple contrast images in [15], and a more detailed analysis was later presented in [16]. A constraint that the downsampled version of the reconstructed data must equal the original LR data is imposed in the iterative framework [5]. Non-local similarity has also been measured with both voxel intensity and gradient intensity in super-resolution [17]. However, these methods require training sets or time-consuming iterative processing.
New edge-directed interpolation (NEDI) [18] is a fast statistical super-resolution method for a single image. It estimates local covariance coefficients from an LR image and assumes that this statistical information is also valid for the corresponding HR image. A pixel of the HR image is interpolated by performing a linear regression of neighboring pixels originating from the LR image. This regression process is non-iterative, so the super-resolution can be performed quickly. NEDI provides an elegant way of exploiting statistical information in image super-resolution. Some recent methods [19–21] also use regression to improve image resolution and achieve remarkable performance. However, these methods train on hundreds of external images before recovering structural details and require substantial computation. Due to the attractive statistical property and low computation time of NEDI, in this work we extend it to multi-contrast image super-resolution and demonstrate its superior performance on MRI images.
We will explore how to incorporate the statistics from one image into an image of another contrast. Regression weights estimated from an HR image of one contrast and neighboring pixels around the interpolated location in the LR image of another contrast work together to generate a new pixel value. The fact that the neighbors are provided by the LR image itself guarantees contrast consistency between the LR image and the interpolated result. Mathematical analysis and experimental evidence will be presented to address a fundamental question: why do the weights transfer faithfully between two contrast images? The proposed approach thus probes information both from an LR image and from its corresponding HR image of another contrast. Our method will be compared with the classic bicubic method, the NEDI method [18], and the state-of-the-art contrast-guided interpolation (CGI) method [12] in terms of objective evaluation criteria and visual perception.
The remainder of this article is organized as follows: In section II, we briefly review the basic concepts of NEDI. In section III, we derive conditions that must be satisfied in our method. Experimental results and discussions are presented in section IV. Finally, concluding remarks are made in section V.
Method
Brief review of NEDI
In NEDI, regression weights are estimated in a local region, and then target pixels are calculated as a linear regression of their neighbors [18]. Thus, it is crucial to determine the regression weights in the interpolation. Within a neighborhood, four neighbors are commonly used in NEDI, and consequently there are four regression weights per interpolated pixel.
Each sampled pixel is modeled as a linear regression of its four diagonal neighbors, \( y_i = \sum_{j=1}^{4} b_j x_{i,j} + \varepsilon_i \), where ε _{ i } is the residual error. By continually sampling 3 × 3 patches in a 9 × 9 region, a vector y = [y _{1}, ⋯, y _{49}]^{ T } ∈ ℝ ^{49} is formed to represent the sampled pixels in this region, and a matrix X = [x _{1}, ⋯, x _{49}]^{ T } ∈ ℝ ^{49 × 4}, whose i-th row contains the four neighbors of y _{ i }, is formed to represent all neighboring pixels around the pixels of y.
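The least-squares estimation of b can be sketched as follows (a minimal illustration with an assumed array layout; the authors' implementation is in MATLAB and may differ):

```python
import numpy as np

def estimate_weights(region):
    """Estimate the four NEDI regression weights from a 9 x 9 region.

    Samples the inner 7 x 7 pixels (49 samples, as in the text), each
    regressed on its four diagonal neighbours. Illustrative sketch only.
    """
    assert region.shape == (9, 9)
    ys, xs = [], []
    for r in range(1, 8):
        for c in range(1, 8):
            ys.append(region[r, c])
            xs.append([region[r - 1, c - 1], region[r - 1, c + 1],
                       region[r + 1, c - 1], region[r + 1, c + 1]])
    y = np.asarray(ys)                      # y in R^49
    X = np.asarray(xs)                      # X in R^(49 x 4)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# On a smooth intensity ramp the estimated weights sum to approximately 1
ramp = np.add.outer(np.arange(9.0), np.arange(9.0)) / 16.0
b = estimate_weights(ramp)
```

Note that `np.linalg.lstsq` returns the minimum-norm solution when the sample matrix is rank-deficient, which can happen in perfectly smooth regions.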
Multi-contrast image super-resolution
In the proposed method, an HR image of one contrast is assumed to be available for interpolating an LR image of another contrast. This assumption is reasonable since multi-contrast images are routinely available in MRI experiments [5, 7, 13].
The interpolated pixel is computed as \( \hat{y}_i = \mathbf{b}_i^T \mathbf{s}_i \), where the vector s _{ i } includes the four pixels of the LR image that are the nearest neighbors along the diagonal directions of the i ^{th} pixel in the center. That is, we assume the HR image in Fig. 1a is in one contrast and the LR image in Fig. 1b is in another contrast; then b _{ i } is estimated from Fig. 1a and s _{ i } comes from Fig. 1b. Therefore, this new approach absorbs prior information from the HR image in one contrast while maintaining data consistency with the LR image in the other contrast.
To facilitate the following discussion, image intensities are all normalized to the range [0, 1]. Furthermore, we assume that the multi-contrast images are well registered before super-resolution.
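One interpolation step described above can be sketched as follows (the image layout and helper names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def weights_from_region(img, r, c):
    """Least-squares regression weights from the 9 x 9 window of `img`
    centred at (r, c): every interior pixel of the window is regressed
    on its four diagonal neighbours. Illustrative helper."""
    ys, xs = [], []
    for i in range(r - 3, r + 4):
        for j in range(c - 3, c + 4):
            ys.append(img[i, j])
            xs.append([img[i - 1, j - 1], img[i - 1, j + 1],
                       img[i + 1, j - 1], img[i + 1, j + 1]])
    b, *_ = np.linalg.lstsq(np.asarray(xs), np.asarray(ys), rcond=None)
    return b

def interpolate_pixel(lr, hr_other, r, c):
    """New pixel at the centre of lr[r:r+2, c:c+2] in the enlarged grid.

    The four diagonal neighbours s come from the LR image of the target
    contrast, which keeps the result in that contrast; the weights b are
    borrowed from the registered HR image of the other contrast
    (assumed here to be twice the LR size).
    """
    s = np.array([lr[r, c], lr[r, c + 1], lr[r + 1, c], lr[r + 1, c + 1]])
    b = weights_from_region(hr_other, 2 * r + 1, 2 * c + 1)
    return float(b @ s)
```

Because s is drawn from the LR image itself, the interpolated value stays in the LR image's contrast even though the weights come from the other contrast.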
Weights in multi-contrast images
Table 1 Regression weights for synthetic images shown in Fig. 2

                     Fig. 2a       Fig. 2b        Fig. 2c        Fig. 2d        Fig. 2e        Fig. 2f
( Ω_{ p }, Ω_{ q })  (0, 0.78)     (0.39, 0.78)   (0.76, 0.78)   (0.78, 0.76)   (0.78, 0.39)   (0.78, 0)
b                    [0.50; 0.00; 0.00; 0.50] for all six cases
∑ _{ j = 1} ^{4} b _{ j }  1.00    1.00           1.00           1.00           1.00           1.00
Table 2 Regression weights in zoomed regions of the same anatomical structures shown in Fig. 3
Besides, one may find that the sum of weights in each vector is approximately 1 (Tables 1, 2 and 3). We will analyze this property with comprehensive mathematics and empirical tests on MRI images. This property will be an important foundation to derive similar regression weights for multicontrast images.
Sum of weights is approximately equal to 1
Shared weights in multi-contrast images
In this section, we analyze the case where the weights estimated in an image of one contrast are close to those of the image in another contrast.
Table 3 Regression weights for T1-weighted and T2-weighted images

Source images    Regression weights b
                 S1                            S2
T1               [−0.10; 0.56; 0.71; −0.16]    [0.76; −0.26; −0.06; 0.53]
T2               [−0.04; 0.54; 0.60; −0.07]    [0.76; −0.26; −0.19; 0.65]
The mathematical setting for the analysis of the weights is summarized below:
Regression weights are estimated by continually sampling 3 × 3 patches in a 9 × 9 region, where each patch is composed of one pixel y _{ i } and its four neighbors x _{ i,j } (j = 1, 2, 3, 4) along the diagonal directions. Consequently, the vector y = [y _{1}, ⋯, y _{ i }, ⋯, y _{49}]^{ T } ∈ ℝ ^{49} denotes the sampled pixels in this region and the matrix X = [x _{1}, ⋯, x _{ i }, ⋯, x _{49}]^{ T } ∈ ℝ ^{49 × 4} stands for all neighboring pixels around the pixels of y. Here, X and \( \tilde{\mathbf{X}} \) are assumed to have full column rank, and their generalized inverses are denoted X ^{+} and \( {\tilde{\mathbf{X}}}^{+} \), respectively. In addition, we define the vector d = ỹ − y ∈ ℝ ^{49} and the matrix \( \mathbf{C}=\tilde{\mathbf{X}}-\mathbf{X}\in {\mathbb{R}}^{49\times 4} \).
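A simple way to see why a sum of weights near 1 makes the weights nearly contrast-invariant is the following sketch, under the assumption (made here only for illustration) that the contrast change acts locally as an affine map of intensities:

\[
\tilde{\mathbf{y}} = \alpha\,\mathbf{y} + \beta\,\mathbf{1}_{49},
\qquad
\tilde{\mathbf{X}} = \alpha\,\mathbf{X} + \beta\,\mathbf{1}_{49}\mathbf{1}_{4}^{T}.
\]

If \(\mathbf{b}\) fits the first contrast, \(\mathbf{X}\mathbf{b} \approx \mathbf{y}\), then

\[
\tilde{\mathbf{X}}\mathbf{b}
= \alpha\,\mathbf{X}\mathbf{b} + \beta\,\left(\mathbf{1}_{4}^{T}\mathbf{b}\right)\mathbf{1}_{49}
\approx \alpha\,\mathbf{y} + \beta\,\mathbf{1}_{49}
= \tilde{\mathbf{y}},
\]

provided \(\mathbf{1}_{4}^{T}\mathbf{b} \approx 1\); that is, the same weights also fit the second contrast.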
Results and discussions
In the experiments, we verify our approach on realistic T1-weighted and T2-weighted brain MRI images. The 256 × 256 T1 and T2 HR images in Fig. 9 were provided by Philips. The T1 (TR = 170 ms, TE = 3.9 ms) and T2 (TR = 3000 ms, TE = 80 ms) datasets were acquired with a Fast Field Echo (FFE) sequence (FOV = 230 × 230 mm^{2}, slice thickness = 5.0 mm). The FFE sequence is a steady-state gradient-echo sequence; FFE is the Philips trade name, the generic name is SSFP-FID, and the corresponding trade names at Siemens and GE are FISP and GRASS, respectively. The images in Figs. 10 and 11 were acquired on a 3 T Siemens Trio Tim MRI scanner using a turbo spin-echo sequence (FOV = 230 × 187 mm^{2}, slice thickness = 5.0 mm), and the matrix size of the T1 (TR = 2000 ms, TE = 9.7 ms) and T2 (TR = 5000 ms, TE = 97 ms) HR images is 384 × 324.
Super-resolution experiments
The proposed method aims to recover the edge details of the LR brain image. We borrow the weights from the HR image of another contrast only if a pixel in the expanded LR image is located on an edge. In our work, a pixel is declared an edge pixel if the local variance within its nearest neighbors is above a given threshold (0.0001, with image intensities normalized to [0, 1]). We use the same threshold value in all experiments. Although the weight-similarity property may not be satisfied at some locations, such locations take up only a very small proportion of the total and are not treated specially in the proposed method.
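The edge test described above can be sketched as follows (the 3 × 3 neighbourhood used here is an assumption; the text specifies only "the nearest neighbors"):

```python
import numpy as np

def is_edge_pixel(img, r, c, threshold=1e-4):
    """Declare (r, c) an edge pixel when the local variance of its
    nearest neighbours exceeds the threshold (0.0001 in the text, for
    intensities normalised to [0, 1]). Illustrative sketch."""
    patch = img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return bool(np.var(patch) > threshold)
```

Flat regions fall below the threshold and are handled by the cheaper single-contrast path, so the borrowed weights are only computed where edges make them worthwhile.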
The proposed approach is compared with the bicubic method, NEDI [18], and CGI [12]. The CGI method guides the interpolation process by conducting directional filtering and achieves superior results compared to traditional interpolation techniques and other state-of-the-art edge-guided image interpolation methods. Three objective criteria, the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) [23], and the relative l _{2} norm error (RLNE), are used to quantitatively measure the super-resolution performance. A higher PSNR indicates that the reconstructed pixel values are more consistent with the original HR image, a higher SSIM implies that image structures are better preserved, and a lower RLNE implies better consistency with the original HR image.
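For reference, PSNR and RLNE follow their standard definitions (a minimal sketch; SSIM is more involved and is omitted here):

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB, for intensities in [0, peak]."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rlne(ref, rec):
    """Relative l2 norm error: ||rec - ref||_2 / ||ref||_2.
    Lower values mean better consistency with the reference HR image."""
    return np.linalg.norm(rec - ref) / np.linalg.norm(ref)
```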
For the proposed method, we set the region size to 9 × 9. Within each region, 3 × 3 patches with a 1-pixel shift between adjacent patches are used to maximally explore the statistics of the local region. These are the typical settings of the original NEDI method and work well for the tested images. For CGI, the default parameters of the shared source code are used.
Table 4 PSNR/SSIM/RLNE evaluation for different methods

Images    Bicubic    NEDI    CGI    The proposed
Fig. 9  28.55/0.8738/0.1159  31.55/0.9117/0.0820  31.79/0.9168/0.0798  31.90/0.9190/0.0788 
Fig. 10  30.67/0.9121/0.1532  33.12/0.9347/0.1155  33.73/0.9396/0.1077  33.89/0.9400/0.1057 
Fig. 11  29.39/0.8986/0.1767  32.60/0.9282/0.1221  33.09/0.9341/0.1155  33.15/0.9345/0.1146 
Fig. A1  29.38/0.9067/0.1800  31.26/0.9389/0.1451  31.81/0.9446/0.1362  32.17/0.9466/0.1306 
Fig. A2  28.50/0.8849/0.1819  30.74/0.9196/0.1405  31.21/0.9260/0.1331  31.32/0.9262/0.1314 
Sensitivity to misregistration
Table 5 Improvements of PSNR/SSIM/RLNE over the CGI method for the image shown in Fig. 11

Pixels moved    Direction of movement
                Slant                     Anti-slant                Vertical                  Horizontal
0               +0.06/+0.0004/+0.0009
1               +0.10/+0.0004/+0.0014     +0.05/+0.0003/+0.0007     +0.04/+0.0002/+0.0006     +0.07/+0.0006/+0.0010
2               +0.08/+0.0001/+0.0011     −0.05/−0.0007/−0.0006     −0.0003/−0.0001/0         +0.03/0/+0.0004
3               +0.06/−0.0006/+0.0009     −0.39/−0.0033/−0.0053     −0.05/−0.0005/−0.0005     −0.25/−0.0022/−0.0034
4               −1.04/−0.0081/−0.0145     −1.57/−0.0111/−0.0229     −0.51/−0.0039/−0.0069     −1.29/−0.0092/−0.0184

One slice of the brain image in Fig. 11 is used in this simulation.
Structural distinctions between T1 and T2 images
In MRI, T1 and T2 images have distinct signal intensities that can cause apparent structural differences. For example, a structure may be clearly visible in the T2 image but barely visible in the T1 image (Fig. 10a and b, arrow B), or, conversely, visible in the T1 image but barely visible in the T2 image (Fig. 10a and b, arrow A). These distinct structures may be lesions or normal tissue, but they are not ghosts; this is a normal phenomenon in MRI.
Image denoising
Noise is not obviously present in the tested brain imaging datasets. Nevertheless, the proposed method has some ability to suppress noise, since the regression weights are estimated according to the least-squares rule, which intrinsically suppresses noise.
We also note that if serious noise would impair the interpolation result, noise removal should be performed before the interpolation. This is beyond the scope of this work and is left as future work.
Computation time
Our method is implemented in MATLAB on a personal computer with a dual-core 3.00 GHz CPU and 2 GB of memory. The computation time of the proposed method is very close to that of NEDI, at around 10 s.
Conclusions
An MRI image super-resolution approach is proposed that employs statistical information retrieved from an MRI image of another contrast that shares similar anatomical structures. It is found that local regression weights are very similar among multi-contrast MRI images. This property is analyzed with comprehensive mathematics and experimental evidence. Experimental results demonstrate that the image quality of the low-resolution image can be substantially improved if the contrast-invariant weights are borrowed from the high-resolution image of another contrast. In the future, we plan to further improve the sharpness of edges and textures by utilizing sparse representations [26–29] and local geometric directions [30–32]. The code of this work is available at http://www.quxiaobo.org/project/MultiContrastMRI/Toolbox_MultiContrastMRI_Superresolution.zip.
Highlights

Multi-contrast MR images share similar anatomical structures, e.g., T1-weighted and T2-weighted images.

Regression weights are found to be similar among multi-contrast images.

Comprehensive mathematics and numerical experiments are presented to analyze the weight-similarity property.

Regression weights are learnt from a high-resolution MRI image of another contrast.

An MRI image super-resolution approach using local regression weights is proposed.

Compared with classic and state-of-the-art interpolation techniques, the performance of the proposed method is remarkably improved.
Abbreviations

CGI: Contrast-guided interpolation
HR: High-resolution
LR: Low-resolution
MRI: Magnetic resonance imaging
NEDI: New edge-directed interpolation
POCS: Projection onto convex sets
PSNR: Peak signal-to-noise ratio
RLNE: Relative l2 norm error
SSIM: Structural similarity
Declarations
Acknowledgements
The authors sincerely thank Dr. Feng Huang at Philips North America for providing the data in Fig. 9. The authors are grateful to Drs. Xin Li and Kai-Kuang Ma for sharing the code of the NEDI and CGI methods, respectively.
Funding
This work was partially supported by the National Natural Science Foundation of China (61571380, 11375147, 61302174, 11271308 and 11301508), the Natural Science Foundation of Fujian Province of China (2015J01346, 2016J05205), the Fundamental Research Funds for the Central Universities (20720150109) and the Important Joint Research Project on Major Diseases of Xiamen City (3502Z20149032).
Authors’ contributions
XQ designed the proposed MRI superresolution method and HZ implemented this method. The mathematical analysis for the property of weights similarity was conducted by HZ, ZB and JD. Algorithm development and data analysis was carried out by HZ, XQ, YL, DG and ZC. XP contributed with most of data in experiments. All authors have been involved in drafting and revising the manuscript and approved the final version to be published. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Ethics approval and consent to participate
Synthetic brain images were downloaded from BrainWeb (http://brainweb.bic.mni.mcgill.ca/). Real MRI images in Fig. 9 were acquired from Philips Company and this study was approved by Institutional Review Board of Philips Company and proper informed consent was obtained from all volunteers prior to enrollment; Real MRI images in Figs. 10–11 were acquired from healthy subjects under the approval of the Institute Review Board of Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. This study conformed to human experimentation standards of the ethics committee of the Institute Review Board of Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and informed consents were obtained from the subjects.
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Authors’ Affiliations
References
1. Peled S, Yeshurun Y. Superresolution in MRI: application to human white matter fiber tract visualization by diffusion tensor imaging. Magn Reson Med. 2001;45:29–35.
2. Scherrer B, Gholipour A, Warfield SK. Super-resolution reconstruction to increase the spatial resolution of diffusion weighted images from orthogonal anisotropic acquisitions. Med Image Anal. 2012;16:1465–76.
3. Poot DHJ, Jeurissen B, Bastiaensen Y, Veraart J, Van Hecke W, Parizel PM, Sijbers J. Super-resolution for multislice diffusion tensor imaging. Magn Reson Med. 2013;69:103–13.
4. Kornprobst P, Peeters R, Nikolova M, Deriche R, Ng M, Van Hecke P. A superresolution framework for fMRI sequences and its impact on resulting activation maps. Medical Image Computing and Computer-Assisted Intervention (MICCAI'03) (Montreal, Canada). 2003;2879:117–25.
5. Manjón JV, Coupé P, Buades A, Collins DL, Robles M. MRI superresolution using self-similarity and image priors. Int J Biomed Imaging. 2010;2010:425891–901.
6. Yang B, Yuan M, Ma Y, Zhang J, Zhan K. Local sparsity enhanced compressed sensing magnetic resonance imaging in uniform discrete curvelet domain. BMC Med Imaging. 2015;15:28.
7. Qu X, Hou Y, Lam F, Guo D, Zhong J, Chen Z. Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator. Med Image Anal. 2014;18:843–56.
8. Wong A, Liu C, Wang X, Fieguth P, Bie H. Homotopic non-local regularized reconstruction from sparse positron emission tomography measurements. BMC Med Imaging. 2015;15:10.
9. Wang TT, Cao L, Yang W, Feng QJ, Chen WF, Zhang Y. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data. Phys Med Biol. 2015;60:5939–54.
10. Ding HJ, Gao H, Zhao B, Cho HM, Molloi S. A high-resolution photon-counting breast CT system with tensor-framelet based iterative image reconstruction for radiation dose reduction. Phys Med Biol. 2014;59:6005–17.
11. Huang JH, Guo L, Feng QJ, Chen WF, Feng YQ. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data. Phys Med Biol. 2015;60:5359–80.
12. Wei Z, Ma KK. Contrast-guided image interpolation. IEEE Trans Image Process. 2013;22:4271–85.
13. Greenspan H. Super-resolution in medical imaging. Comput J. 2009;52:43–63.
14. Brown MA, Semelka RC. MRI: Basic Principles and Applications. Wiley-Liss; 2003.
15. Rousseau F. Brain hallucination. In: Proceedings of the European Conference on Computer Vision (ECCV'08). 2008; Part 1. p. 497–508.
16. Rousseau F. A non-local approach for image super-resolution using intermodality priors. Med Image Anal. 2010;14:594–605.
17. Jafari-Khouzani K. MRI upsampling using feature-based nonlocal means approach. IEEE Trans Med Imaging. 2014;33:1969–85.
18. Li X, Orchard MT. New edge-directed interpolation. IEEE Trans Image Process. 2001;10:1521–7.
19. Timofte R, De Smet V, Van Gool L. Anchored neighborhood regression for fast example-based super-resolution. IEEE Int Conf Comput Vis (ICCV'13) (Sydney, Australia). 2013:1920–7.
20. Yang CY, Yang MH. Fast direct super-resolution by simple functions. IEEE Int Conf Comput Vis (ICCV'13) (Sydney, Australia). 2013:561–8.
21. Dai D, Timofte R, Van Gool L. Jointly optimized regressors for image super-resolution. Comput Graph Forum. 2015;34:95–104.
22. Cocosco CA, Kollokian V, Kwan RKS, Evans AC. BrainWeb: online interface to a 3D MRI simulated brain database. Neuroimage. 1997;5:S425.
23. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13:600–12.
24. Manjón JV, Carbonell-Caballero J, Lull JJ, García-Martí G, Martí-Bonmatí L, Robles M. MRI denoising using non-local means. Med Image Anal. 2008;12:514–23.
25. Gudbjartsson H, Patz S. The Rician distribution of noisy MRI data. Magn Reson Med. 1995;34:910–4.
26. Ravishankar S, Bresler Y. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans Med Imaging. 2011;30:1028–41.
27. Ravishankar S, Bresler Y. Efficient blind compressed sensing using sparsifying transforms with convergence guarantees and application to magnetic resonance imaging. SIAM J Imaging Sci. 2015;8:2519–57.
28. Liu Y, Zhan Z, Cai JF, Guo D, Chen Z, Qu X. Projected iterative soft-thresholding algorithm for tight frames in compressed sensing magnetic resonance imaging. IEEE Trans Med Imaging. 2016;35:2130–40.
29. Zhan Z, Cai JF, Guo D, Liu Y, Chen Z, Qu X. Fast multiclass dictionaries learning with geometrical directions in MRI reconstruction. IEEE Trans Biomed Eng. 2016;63:1850–61.
30. Qu X, Guo D, Ning B, Hou Y, Lin Y, Cai S, Chen Z. Undersampled MRI reconstruction with patch-based directional wavelets. Magn Reson Imaging. 2012;30:964–77.
31. Ning B, Qu X, Guo D, Hu C, Chen Z. Magnetic resonance image reconstruction using trained geometric directions in 2D redundant wavelets domain and non-convex optimization. Magn Reson Imaging. 2013;31:1611–22.
32. Lai Z, Qu X, Liu Y, Guo D, Ye J, Zhan Z, Chen Z. Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform. Med Image Anal. 2016;27:93–104.