
Automated assessment of the smoothness of retinal layers in optical coherence tomography images using a machine learning algorithm

Abstract

Quantifying the smoothness of different layers of the retina can potentially be an important and practical biomarker in various pathologic conditions like diabetic retinopathy. The purpose of this study is to develop an automated machine learning algorithm which uses support vector regression method with wavelet kernel and automatically segments two hyperreflective retinal layers (inner plexiform layer (IPL) and outer plexiform layer (OPL)) in 50 optical coherence tomography (OCT) slabs and calculates the smoothness index (SI). The Bland–Altman plots, mean absolute error, root mean square error and signed error calculations revealed a modest discrepancy between the manual approach, used as the ground truth, and the corresponding automated segmentation of IPL/ OPL, as well as SI measurements in OCT slabs. It was concluded that the constructed algorithm may be employed as a reliable, rapid and convenient approach for segmenting IPL/OPL and calculating SI in the appropriate layers.


Introduction

Optical coherence tomography (OCT) is a non-invasive and powerful imaging technique that uses low-coherence interferometry to obtain high-resolution multidimensional images of biological retinal tissue [1]. OCT provides in vivo cross-sectional images of the retinal layers, which typically lose their normal characteristics, such as thickness, smoothness, and organization, as a result of various pathologic conditions, including diabetic retinopathy, age-related macular degeneration (AMD), and retinal vascular events [2,3,4,5]. Quantifying these structural changes by processing OCT images therefore provides useful information about the severity of involvement and the response to treatment [6]. Segmentation of the retinal layers is essential for this purpose and has been approached with manual, semi-automatic, and fully automated methods [7,8,9,10,11].

On spectral-domain OCT (SD-OCT), the healthy retina has an organized, layered structure that is highlighted by diverse reflectance patterns. As knowledge of SD-OCT increased, plenty of imaging biomarkers were suggested to estimate the visual prognosis, such as the amount of the central retinal thickness (which reflects the entire retina), the attenuation of the ellipsoid zone (EZ) or external limiting membrane (ELM) (which reflect the outer retinal incompetence) and disorganization of the retinal inner layers (DRIL) (which reflects the inner retina) [12,13,14]. Improved risk stratification utilizing new OCT biomarkers would aid in disease morphology characterization, advise the optimum use of specific therapies (e.g., anti-vascular endothelial growth factor (VEGF) therapy), enhance prognosis counseling, and help define the inclusion criteria for therapeutic trials more precisely [15].

Moreover, the inner plexiform layer (IPL; also known as the inner synaptic layer) is made up of synaptic connections between bipolar cell axons and ganglion cell dendrites; in the visual pathway, the IPL includes the synapse between second-order and third-order neurons. The outer plexiform layer (OPL; also known as the outer synaptic layer) is made up of synapses between photoreceptor cells and cells of the inner nuclear layer. Because ischemic processes and structural abnormalities, such as epiretinal membranes (ERM), may impact the inner portions of the retina, research on biomarkers that specifically evaluate the inner retina has gained considerable interest in recent years [16]. For instance, disorganization of the retinal inner layers (DRIL) is a newly discovered and reliably prognostic imaging biomarker, defined by disorganization of the inner retinal laminations and an inability to distinguish the borders between the layers. Sun et al. first defined DRIL and established a connection between it and visual acuity (VA) in patients with diabetic macular edema (DME) [17]. Since then, DRIL has been implicated in a variety of retinal pathologies, including retinal vein occlusions (RVO), nonproliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR) with or without DME, epiretinal membranes (ERM), uveitic cystoid macular edema (CME), central retinal artery occlusion, and macular telangiectasia [17,18,19,20,21,22,23,24,25,26,27,28]. While DRIL may be an excellent VA correlate in a variety of retinal ischemic disorders, precise and repeatable measurement of DRIL can be challenging: interobserver agreement for DRIL assessment was low in individuals with macular edema due to RVO, since macular edema may obscure the boundaries necessary for an appropriate assessment. Thus, prior to routine use of this measure in clinical practice, an automated and algorithmic surrogate of DRIL is likely to be needed [29].

In this regard, Cho et al. compared the correlations between the inner-retinal irregularity index, defined as the length ratio between the inner plexiform layer and the retinal pigment epithelium, and visual outcomes before and after epiretinal membrane (ERM) surgery. The index was shown to be strongly associated with visual results both before and after surgery, and they showed that it has the potential to serve as a novel surrogate measure of inner-retinal damage as well as a predictive prognostic sign in ERM [30]. The irregularity index was computed with NeuronJ, a semi-automated layer-tracing plug-in for ImageJ, a free Java-based image analysis package. However, the software fails to identify the correct route in regions with low-contrast layers; in such cases, tracing is switched to manual mode in order to precisely measure the length of interest [30].

Since manual segmentation using applications such as ImageJ is time-consuming and extremely vulnerable to intra- and interobserver variability, automated techniques have been developed to overcome these issues [31,32,33,34,35]. Researchers have long studied the thickness of the different retinal layers as an important and practical biomarker in the diagnosis and treatment of various retinal diseases. Numerous studies have focused on segmenting the retinal layers in OCT images to generate retinal thickness maps and on establishing correlations between the produced maps and associated retinopathies [30, 36, 37]. The formation of the retina as a series of parallel layers stacked on top of one another has prompted researchers to consider the degree of irregularity and thickness of these layers in various retinal conditions [17,18,19, 27,28,29]. We previously introduced a novel biomarker, the iris smoothness index (SI), in anterior segment optical coherence tomography (AS-OCT) images of patients with Fuchs uveitis and developed an automated method for its measurement [38, 39]. The manual segmentation of retinal layers in OCT images using ImageJ, which is time-consuming and needs a skilled operator, may restrict the practical applicability of SI quantification in retinal layers. To overcome these limitations, in the current study we aimed to develop an automated algorithm for calculating the SI in two hyperreflective retinal layers, the inner plexiform layer (IPL) and outer plexiform layer (OPL), in patients with varying stages of diabetic retinopathy, and to compare the results to those obtained by experts using the manual method as the ground truth.

Methods

Data acquisition and adjustment

The dataset for this study was supplied by the Farabi Eye Hospital at Tehran University of Medical Sciences in Tehran, Iran. The RTVue XR 100 Avanti device (Optovue, Inc., Fremont, CA, USA) was used to capture enhanced-depth imaging optical coherence tomography (EDI-OCT) images between 8:00 a.m. and 12:00 p.m. Patients were positioned appropriately, and 8 mm × 12 mm raster patterns of OCT B-scans were captured. For image analysis and SI calculation, scans passing through the fovea as well as equally spaced rasters in the superior and inferior macular areas were chosen. In OCT scans above or below the fovea, the IPL and OPL are usually separated and parallel; however, due to the close proximity of the IPL and OPL in the foveal region, it can be challenging for the algorithm to differentiate between the layers.

To evaluate the accuracy of the algorithm in a robust and precise manner in the current investigation, we selected similar OCT rasters in all patients with equal distances from the fovea and assessed the accuracy of segmentation and measurement.

We omitted low-quality scans caused by conditions like cataract or other causes of hazy media, in which the IPL and OPL were not discernible. We also removed some OCT scans due to structural OCT alterations, such as the presence of exudates, which may impact OPL and IPL segmentation.

Finally, we included 50 OCT B-scans from 21 patients with various stages of diabetic retinopathy (mild to severe non-proliferative diabetic retinopathy and proliferative diabetic retinopathy) in this study. The images were originally 256 × 728 pixels in size. The inner plexiform layer (IPL) and outer plexiform layer (OPL) in the OCT images were manually segmented by two retina experts (H.R.E. and E.K.P.) using the public-domain ImageJ software (Fiji, available at http://imagej.nih.gov/ij).

The proposed automated algorithm, developed in the current study in Python, consists of three stages: pre-processing; segmentation of the retinal IPL and OPL layers using support vector regression (SVR); and smoothness index calculation. The steps of the automated method are depicted in Fig. 1. After denoising the original image (Fig. 1a) with the non-local means algorithm (Fig. 1b), the retinal area is localized using Gabor filters (Fig. 1c). A search strategy is used to perform the initial prediction of the IPL and OPL layers (Fig. 1d). Then, the morphological operations of dilation and erosion are performed to remove undesirable areas and refine the predicted layers (Fig. 1e). Finally, the boundaries are detected using an SVR-based model (Fig. 1f).

Fig. 1
figure 1

Steps of the proposed automated method. After denoising the original image (a) by non-local means denoising algorithm (b), the retinal area is localized using Gabor filters (c). A search strategy is used to perform the initial prediction of the IPL and OPL layers (d). Then dilation and erosion are performed in order to remove undesirable areas and improve the predicted layers (e). Finally, the boundaries are detected using an SVR-based model (f)

In the preprocessing stage, a low-pass Gaussian filter with a kernel size of 11 × 11, as well as a non-local means algorithm, was used to reduce the noise of the OCT images. Unlike local and global methods, the non-local means algorithm takes a smarter approach to denoising: it finds parts of the image that are similar in texture and structure and detects and removes noisy points by averaging over them. Then, histogram equalization was applied to increase the contrast of the image.

Localization of the retinal area was carried out using Gabor filters. The upper and lower limits of the retina were obtained by segmenting the internal limiting membrane (ILM) and retinal pigment epithelium (RPE) layers. Gabor filtering with two different kernels was used to obtain these boundaries, as shown in the following formula:

$$g(x,y;\lambda ,\theta ,\psi ,\sigma ,\gamma ) = \exp \left( { - \frac{{x^{\prime 2} + \gamma ^{2} y^{\prime 2} }}{{2\sigma ^{2} }}} \right)\cos \left( {2\pi \frac{{x^{\prime } }}{\lambda } + \psi } \right),$$
(1)

where

$$x^{\prime } = x\cos \theta + y\sin \theta ,\quad y^{\prime } = - x\sin \theta + y\cos \theta ,$$
(2)

and \(\lambda\) is the wavelength of the sinusoidal component, \(\theta\) represents the orientation of the normal to the parallel stripes of the Gabor function, \(\psi\) denotes the phase offset of the sinusoidal function, \(\sigma\) is the standard deviation of the Gaussian envelope, and \(\gamma\) is the spatial aspect ratio, which specifies the ellipticity of the support of the Gabor function. By properly selecting the \(\lambda ,\theta ,\psi ,\sigma\) and \(\gamma\) parameters, we obtained the two kernels shown in Fig. 2(1,2). After applying them to the original image (Fig. 2a), the boundaries of the ILM and RPE were obtained, as shown in Fig. 2b; in this way, the retinal area is separated from the rest of the image (Fig. 2c). The parameters selected for the two kernels are summarized in Table 1. The initial parameters were chosen such that the resulting segmentation visually matched the layers of the original OCT image, yielding a ground-truth interval for each parameter; the values were then carefully tuned to increase the precision of the obtained layers.

Fig. 2
figure 2

The result of localization of retinal area: (1,2) Gabor kernels, (a) Original image, (b) Upper and lower boundaries of the retina, (c) Segmented retina region

Table 1 Parameters used for two kernels for segmentation of ILM and RPE layers

IPL and OPL segmentation

We automatically acquired the retinal area as a region of interest in the previous steps, which reduces background noise and therefore facilitates the segmentation of the IPL and OPL layers. For this segmentation, the original image (Fig. 3a) is smoothed with a Gaussian filter (Fig. 3b), denoised (Fig. 3c), and then contrast-enhanced using histogram equalization (Fig. 3d). An initial prediction of the layers is obtained by a search strategy based on the similarity and correlation between the intensities of the pixels in these layers. Because the image histogram is well separated after the preprocessing steps, conventional thresholding algorithms can find a suitable threshold in a simple manner: after Gabor filtering, the points on the target layers (IPL and OPL) have a brightness above 30, while other points have a brightness below 15. Testing different thresholds between 15 and 30 produced no significant difference in the output images, so the brightness threshold was set to 20 in the current study. After setting this threshold between foreground (IPL and OPL) and background (the rest of the image) pixels, a binary image was created using a global thresholding algorithm; this process is shown in Fig. 3. White pixels indicate the estimated IPL and OPL locations in the generated binary images (Fig. 3e). Morphological operators are then employed to detect the desired layers: erosion eliminates isolated white pixels that are not connected to any meaningful layer, and dilation helps to obtain a connected, continuous layer. Connected-component labeling is also needed to eliminate any extra layer that may arise in the process, such as the one that appears in the upper right corner of Fig. 3e.
To be more precise, the process begins with two erosions, followed by a dilation; it then applies a connected-component operator followed by an erosion and a dilation. Continuous boundaries were thus created and outliers removed, as depicted in Fig. 3f [40]. To skeletonize the result and obtain a curve for each layer, we create two separate lists of pixels: the first stores the topmost white pixel in each column (considered the IPL) and the second stores the lowest white pixel in each column (considered the OPL). These two lists may share common points whenever the IPL and OPL stick together in the original OCT slab; in these cases the common pixels appear in both lists.
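The thresholding, morphology, and column-wise extraction steps above can be sketched as follows; the threshold of 20 comes from the text, while the 3 × 3 structuring element and the minimum component size are assumptions.

```python
# Sketch of layer candidate extraction; structuring element size and the
# minimum component size are assumed, not taken from the paper.
import numpy as np
from scipy import ndimage

def extract_layers(enhanced: np.ndarray):
    """Return per-column IPL (topmost) and OPL (bottommost) white-pixel rows."""
    # Global threshold at 20: layer pixels are >30, background is <15.
    binary = enhanced > 20
    se = np.ones((3, 3), bool)  # assumed 3 x 3 structuring element
    # Two erosions followed by a dilation, as described in the text.
    binary = ndimage.binary_erosion(binary, se, iterations=2)
    binary = ndimage.binary_dilation(binary, se)
    # Connected-component filtering drops small spurious blobs
    # (the minimum size of 50 pixels is an assumption).
    labels, n = ndimage.label(binary)
    if n:
        sizes = np.atleast_1d(ndimage.sum(binary, labels, range(1, n + 1)))
        keep = np.flatnonzero(sizes >= 50) + 1
        binary = np.isin(labels, keep)
    # A final erosion and dilation smooth the remaining layers.
    binary = ndimage.binary_dilation(ndimage.binary_erosion(binary, se), se)
    ipl, opl = {}, {}
    for col in range(binary.shape[1]):
        rows = np.flatnonzero(binary[:, col])
        if rows.size:  # IPL: topmost white pixel; OPL: bottommost
            ipl[col], opl[col] = rows[0], rows[-1]
    return ipl, opl
```

If the IPL and OPL touch, a column's topmost and bottommost pixels can coincide, matching the shared-point behaviour described above.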

Fig. 3
figure 3

Pre-processing stage: a Original image, b Smoothed image using Gaussian filter, c Denoised image using non-localization method, d Enhanced image using histogram equalization, e Binarized image using a search strategy, f Result of applying morphological operators

In the final step, because the pixels in the resulting lists are discontinuous, approximation methods are needed to fill the gaps; these are described in the following sections.

A modeling-based technique was utilized to achieve precise segmentation of the IPL and OPL layers. Two approaches were investigated for this purpose: a classic method and a machine learning-based method. In the classic technique, cubic spline interpolation is employed to determine the exact locations of the target layers (IPL and OPL). Cubic spline interpolation is a method for modeling objects with curved shapes [41]. On OCT slabs with relatively regular and organized layers, this approach produces satisfactory results (Fig. 4c). However, as seen in Fig. 4, it fails in OCT slabs with relatively disorganized layers (Fig. 4f).
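A minimal sketch of the classic gap-filling approach with SciPy's cubic spline; the sparse layer points below are synthetic stand-ins for the detected (column, row) pixels.

```python
# Classic gap-filling: fit a cubic spline through the sparse detected layer
# points and evaluate it on every column. The data here are synthetic.
import numpy as np
from scipy.interpolate import CubicSpline

# Sparse (column, row) layer points with gaps, as produced by the search step.
cols = np.array([0, 10, 25, 40, 60, 90, 120])
rows = np.array([50.0, 51.0, 49.5, 52.0, 50.5, 51.5, 50.0])

spline = CubicSpline(cols, rows)
full_cols = np.arange(0, 121)
layer = spline(full_cols)  # interpolated row position for every column
print(layer.shape)
```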

Fig. 4
figure 4

Segmentation results using cubic spline interpolation. a An OCT slab with relatively regular and organized layers. c The labelling of IPL and OPL for a, e Final segmentation for a, and b an OCT slab with disorganized layers, d the labelling of IPL and OPL for b, and f Final segmentation of the image b which shows incorrect segmentation of IPL

To address this issue, we presented a machine learning technique based on support vector regression (SVR). SVR, as a supervised-learning technique, works utilizing points on layers and a prediction function. When the labelling is wrong, the model predicts and replaces the missing values, which is an advantage of employing SVR.

Since our model was a nonlinear function, the data were mapped into a higher-dimensional feature space by means of a kernel [42]. SVR can use various kernels, such as polynomial [43], sigmoid [44], radial basis function (RBF) [45], and wavelet kernels, and its performance largely depends on this choice. One interesting feature of the kernel-based approach is its ability to use feature mappings with infinite dimensions. For this purpose, the class of radial basis functions (RBF) can be used [46] as follows:

$$K\left( {\varvec{x},\varvec{x}^{\prime } } \right) = \exp \left( { - \frac{{\left\| {\varvec{x} - \varvec{x}^{\prime } } \right\|^{2} }}{{2\sigma ^{2} }}} \right),$$
(3)

In the above equation, \({\varvec{x}}\) and \({{\varvec{x}}}^{^{\prime}}\in {\mathbb{R}}^{N}\), where \({\mathbb{R}}^{N}\) represents input space and \(\sigma\) is the RBF kernel parameter. After mapping the data to an infinite-dimensional space, the SVR tries to fit a hyperplane in the feature space to the training data. So, our prediction function is mapped as Eq. (4):

$$y\left( {\varvec{x}} \right) = \mathop \sum \limits_{k = 1}^{n} \left( {a_{k}^{ + } - a_{k}^{ - } } \right)K\left( {{\varvec{x}},\varvec{x}^{\prime }_{k} } \right) + b,$$
(4)

where \(y\in {\mathbb{R}}\), and we seek to estimate (4) from independent, identically distributed data \(\left({\varvec{x}^{\prime}}_{1}, {y}_{1}\right), \ldots, \left({\varvec{x}^{\prime}}_{n}, {y}_{n}\right)\) by finding a function \(y({\varvec{x}})\) as a sum of weighted kernels with a small error. The terms \(a_{k}^{ + }\) and \(a_{k}^{ - }\) are the corresponding Lagrange multipliers of the SVR model, and \(b\) is the bias term, computed as in Eq. (5), which yields a soft margin and a more appropriate and accurate model.

$$b = \frac{1}{{N_{v} }}\sum\limits_{v \in V} {\left[ {y_{v} - \varepsilon - \sum\limits_{m = 1}^{n} {(a_{m}^{ + } - a_{m}^{ - } )K(x_{v} ,x_{m} )} } \right]} .$$
(5)

In our work, the input \(x\) and its corresponding target value \(y\) are the horizontal and vertical coordinates of the inner and outer plexiform layer points in the OCT images. Although SVR with an RBF kernel is excellent for layer segmentation in noisy images, it performed poorly in OCT slabs with disorganized layers, as demonstrated in Fig. 5d.
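A hedged sketch of boundary fitting with RBF-kernel SVR using scikit-learn; the hyperparameters (`C`, `epsilon`, `gamma`) and the synthetic layer are illustrative assumptions, not the study's settings.

```python
# Sketch: fit one layer boundary (row vs. column) with RBF-kernel SVR.
# Hyperparameters and data are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVR

# Column positions x and detected row positions y for one layer (synthetic).
x = np.linspace(0, 120, 60).reshape(-1, 1)
y = 50 + 2 * np.sin(x.ravel() / 15.0)

model = SVR(kernel="rbf", C=100.0, epsilon=0.1, gamma=0.05)
model.fit(x, y)

# Predict the boundary on every column, filling mislabelled/missing points.
full_cols = np.arange(0, 121).reshape(-1, 1)
boundary = model.predict(full_cols)
print(boundary.shape)
```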

Fig. 5
figure 5

Segmentation using SVR with RBF kernel. a Relatively regular layers, c The labelling for a, e Final segmentation for a, and b Irregular case d the labelling IPL and OPL for b, f Final segmentation of (b)

Zhang et al. showed that SVR with a wavelet kernel performs accurately and acceptably in the presence of irregularities and fluctuations [47]. The wavelet kernel is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions, so the kernel function in Eq. (4) is replaced with Eq. (6):

$$K\left( {{\varvec{x}},\varvec{x}^{\prime } } \right) = \mathop \prod \limits_{i = 1}^{m} \cos \left( {1.75\frac{{x_{i} - x^{\prime }_{i} }}{\sigma }} \right)\exp \left( { - \frac{{\left( {x_{i} - x^{\prime }_{i} } \right)^{2} }}{{2\sigma ^{2} }}} \right),$$
(6)

With this new kernel, with the epsilon and penalty parameters fixed and an appropriate choice of \(\sigma\) (the degree of sensitivity of the kernel to small oscillations), significant results were obtained for both small and large fluctuations in the IPL and OPL layers. To validate the automated algorithm's agreement with manual segmentation of the IPL and OPL, as well as with SI measurement in these layers, 50 OCT images with varying levels of retinal layer regularity were chosen (from relatively regular and organized layers to highly disorganized layers in patients with diabetic retinopathy and diabetic macular edema). Two experts (EKP and HRE) manually segmented the IPL and OPL using ImageJ's freehand tool, and the SI was manually computed for each layer of the related OCT slab. The gold standard (GS) for layer segmentation and SI calculation was obtained by averaging the results of the two experts. Additionally, the IPL and OPL layers were segmented using the SVR method with a wavelet kernel, and the SI was calculated automatically for each layer in all OCT slabs. We utilized Bland–Altman plots to determine the agreement between the two experts' manual segmentations, and between the automated method and the manual segmentation.
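Since a wavelet kernel such as Eq. (6) is not built into common SVR libraries, one way to sketch it is a precomputed Gram matrix in scikit-learn; \(\sigma\) and the SVR hyperparameters below are illustrative assumptions.

```python
# Sketch: SVR with the wavelet kernel of Eq. (6) via a precomputed Gram
# matrix. sigma, C, and epsilon are assumed values, not the study's settings.
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Z, sigma=10.0):
    """K(x, x') = prod_i cos(1.75 * d_i / sigma) * exp(-d_i^2 / (2 sigma^2))."""
    d = X[:, None, :] - Z[None, :, :]  # pairwise component differences
    return np.prod(np.cos(1.75 * d / sigma) * np.exp(-d**2 / (2 * sigma**2)),
                   axis=2)

x = np.linspace(0, 120, 60).reshape(-1, 1)
y = 50 + 2 * np.sin(x.ravel() / 15.0)

model = SVR(kernel="precomputed", C=100.0, epsilon=0.1)
model.fit(wavelet_kernel(x, x), y)

# Predict on every column: the test Gram matrix is (n_test, n_train).
full_cols = np.arange(0, 121, dtype=float).reshape(-1, 1)
boundary = model.predict(wavelet_kernel(full_cols, x))
print(boundary.shape)
```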

The results of applying the SVR with wavelet kernel and comparison of segmentation with two experts are shown in Fig. 6.

Fig. 6
figure 6

Illustrations of final segmentation results in three different patients with mild, moderate, and severe diabetic retinopathy (rows) with variable irregularity of IPL and OPL by expert 1 (continuous red line), expert 2 (continuous blue line), and fully automated method using SVR with wavelet kernel (continuous green line)

Quantitative analysis of IPL and OPL layers

To analyze the IPL and OPL quantitatively, we introduced the smoothness index (SI), which can be used to evaluate abnormalities in these layers under pathologic conditions like AMD, epiretinal membrane (ERM), and diabetic macular edema (DME) [17,18,19, 27, 29]. The SI quantifies the smoothness of the IPL and OPL in macular OCT images and is defined as follows:

$$Smoothness \, Index\;(SI) = \frac{{L_{E} }}{{L_{A} }}$$
(7)

where \({\mathrm{L}}_{\mathrm{E}}\) indicates the Euclidean distance between the beginning and the end of each layer in the final output image (the length of the straight line connecting the start and the end of IPL or OPL) and, \({\mathrm{L}}_{\mathrm{A}}\) is the actual length of the layer, obtained by calculating the sum of the Euclidean distance between all the pixels that make up the layer (Fig. 7).
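Eq. (7) translates directly into code; the two synthetic layers below illustrate that a perfectly straight layer gives SI = 1, while any waviness pushes SI below 1.

```python
# Direct implementation of Eq. (7): SI = L_E / L_A, where L_E is the
# end-to-end Euclidean distance and L_A the along-curve length.
import numpy as np

def smoothness_index(points: np.ndarray) -> float:
    """points: (N, 2) array of (x, y) pixel coordinates along one layer."""
    chord = np.linalg.norm(points[-1] - points[0])               # L_E
    arc = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()  # L_A
    return chord / arc

# Synthetic examples: a straight layer and a wavy layer.
straight = np.column_stack([np.arange(100), np.full(100, 50.0)])
wavy = np.column_stack([np.arange(100), 50 + 5 * np.sin(np.arange(100) / 5)])
print(smoothness_index(straight), smoothness_index(wavy))
```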

Fig. 7
figure 7

a Automated calculation of SI, yellow straight line indicates the Euclidean distance between the beginning and the end point of each layer in the final output image (the length of the straight line connecting the start and the end of IPL or OPL) and, green curves shows the actual length of the layers, obtained by calculating the sum of the Euclidean distance between all the pixels that make up the layer. The results of SI calculation in IPL and OPL for patients with mild, moderate, and severe diabetic retinopathy are depicted in b, c and d respectively

Results

In this study, 50 OCT B-scans from 21 patients with varying stages of diabetic retinopathy were analyzed, and two retina experts (EKP and HRE) rechecked and confirmed the accuracy of the automated segmentations of the IPL and OPL. As previously stated, these 21 patients had varying stages of diabetic retinopathy: 8 mild, 9 moderate, and 4 severe. Those with mild to moderate diabetic retinopathy have parallel IPL and OPL, whereas patients with severe diabetic retinopathy have disorganized and non-parallel layers. We used 42 foveal OCT rasters from both eyes of the 21 patients. In addition, we added 8 OCT rasters with disorganized IPL and OPL from the superior or inferior foveal regions of four patients with severe diabetic retinopathy to test the accuracy of the algorithm on disorganized OCT images.

The participants' mean age was 61.8 years (range: 47–78 years), and 11 (52.4 percent) were male.

The gold standard (GS) was obtained by averaging the layer points marked by the two experts in all 50 selected OCT slabs, which had variable fluctuation of the layers. The IPL and OPL layers were segmented using the SVR method with a wavelet kernel, and the SI was calculated automatically for each layer in all 50 OCT slabs. We utilized Bland–Altman plots to determine the agreement between the two experts' manual segmentations and between the automated method and the manual segmentation. Additional file 1 illustrates the segmentation of all 50 OCT images by the two experts and by the automated technique. For validation, the mean unsigned border-position error and the SI values for each layer were computed and are presented in Table 2. The algorithm was evaluated on all 50 B-scans. The root mean square error (RMSE) for layer-border detection is calculated by Eq. (8), the mean absolute error (MAE) by Eq. (9), and the signed error (SE) by Eq. (10):

$$RMSE = \sqrt {\frac{{\sum\nolimits_{{w_{1} }}^{{w_{2} }} {(x_{i} - y_{i} )^{2} } }}{{abs(w_{2} - w_{1} )}},}$$
(8)
$$MAE = \frac{{\sum\nolimits_{{w_{1} }}^{{w_{2} }} {abs(x_{i} - y_{i} )} }}{{abs(w_{2} - w_{1} )}},$$
(9)
$$SE = \frac{{\sum\nolimits_{{w_{1} }}^{{w_{2} }} {(x_{i} - y_{i} )} }}{{abs(w_{2} - w_{1} )}},$$
(10)

where \({\mathrm{w}}_{1}\) and \({\mathrm{w}}_{2}\) are, respectively, the common starting point (the first pixel marked as the start of the layer in both the automatic method and the GS) and the common end point of the automatic method and the GS in each layer, and \({\mathrm{x}}_{\mathrm{i}}\), \({\mathrm{y}}_{\mathrm{i}}\) represent the \(\mathrm{i}\)th point of the two acquired boundaries for the automatic method and the GS.
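Eqs. (8)–(10) can be sketched directly over the common column range; the boundary values below are synthetic.

```python
# The three border-position errors of Eqs. (8)-(10), computed over the
# common column range [w1, w2) of the automatic (x) and GS (y) boundaries.
import numpy as np

def border_errors(x, y, w1, w2):
    d = np.asarray(x[w1:w2], float) - np.asarray(y[w1:w2], float)
    n = abs(w2 - w1)
    rmse = np.sqrt(np.sum(d**2) / n)   # Eq. (8)
    mae = np.sum(np.abs(d)) / n        # Eq. (9)
    se = np.sum(d) / n                 # Eq. (10): sign shows above/below GS
    return rmse, mae, se

auto = [50, 51, 52, 51, 50]
gs = [50, 50, 51, 52, 50]
print(border_errors(auto, gs, 0, 5))
```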

Table 2 RMSE, MAE and signed error of the border position (in pixels) between our proposed fully-automated method and the GS

Also, the MAE for SI is calculated by:

$$MAE = \frac{{\sum\nolimits_{i = 1}^{N} {abs(x_{i} - y_{i} )} }}{N},$$
(11)

where \(\mathrm{N}\) is the number of B-scans used for testing (50) and \({\mathrm{x}}_{\mathrm{i}}\), \({\mathrm{y}}_{\mathrm{i}}\) represent the \(\mathrm{i}\)th SI value calculated for the automatic method and the GS, respectively. The MAE calculated for the SI between the proposed method and the GS was 0.016 for the IPL and 0.025 for the OPL (the SI itself is a dimensionless ratio).

Bland–Altman plots for the two experts' manual segmentation of the IPL and OPL are shown in Fig. 8a–b. As can be seen, acceptable agreement was achieved between the two experts for both layers. The overall agreement between the automated method (SVR with wavelet kernel) and the GS (average of the two experts) in IPL and OPL segmentation was also satisfactory, as shown in Fig. 8c–d. Despite the fluctuations and irregularities in these layers, there was acceptable agreement between the fully-automated method and the GS.
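For reference, the quantities behind a Bland–Altman plot (mean difference, i.e. bias, and 95% limits of agreement) can be sketched as follows; the SI values below are synthetic.

```python
# Bland-Altman summary statistics for two raters' measurements.
# The SI values are synthetic illustration data.
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)  # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

expert1 = [0.98, 0.95, 0.91, 0.97, 0.93]
expert2 = [0.97, 0.96, 0.90, 0.96, 0.94]
print(bland_altman(expert1, expert2))
```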

Fig. 8
figure 8

Bland–Altman plots for correlating two experts in manual segmentation of IPL a and OPL b, also our proposed fully automated algorithm with the GS (average of two experts) for IPL c and OPL d

Figure 9 depicts the Bland–Altman plots for the two experts' manual SI calculations, as well as the level of agreement between the automated approach and the GS in the SI measurement. A total agreement of 94 percent was attained between the two experts in determining the SI value in the IPL and OPL layers. As shown in Fig. 9c–d, there is a high degree of agreement between the automated and manual computation of SI, with 96 and 94 percent agreement in the IPL and OPL, respectively.

Fig. 9
figure 9

The Bland–Altman plots for the agreement of SI calculation between two experts in IPL (a) and OPL (b), and also proposed automated method with GS (average of two experts) in SI calculation for IPL (c) and OPL (d)

In general, the Bland–Altman plots showed no substantial difference between the manual segmentation technique and our proposed automated approach, suggesting that the automated approach can successfully replace the typical manual segmentation method.

Finally, the execution times for all three methods, including spline interpolation, SVR (RBF kernel), and SVR (wavelet kernel), are shown in Table 3; all three perform faster than the manual method for layer segmentation.

Table 3 A comparison between the required amount of time (in seconds) for all methods and manual method

Discussion

In this study, we developed a fully-automated algorithm using machine learning methods to segment the IPL and OPL and calculate the SI in each of these retinal layers. In general, the Bland–Altman plots demonstrated an excellent level of agreement between the two experts and our fully automated method for segmentation and SI calculation in the above-mentioned layers.

We hypothesized that the IPL and OPL grow increasingly irregular before complete disorganization and the emergence of DRIL. Because of the difficulty of quantifying these abnormalities, prior research has not examined the relationship between the irregularity of these layers as a biomarker and visual acuity prognosis or response to therapy. Consequently, an algorithm to quantify the abnormalities of these layers must be designed in order to explore the presence of these connections. We previously incorporated a new biomarker, the iris smoothness index (SI), into anterior segment optical coherence tomography (AS-OCT) images of Fuchs uveitis patients and created an automated technique for measuring it [38, 39]. We performed the present research in order to integrate a similar method into the retinal layers. The manual segmentation of retinal layers in OCT images using ImageJ, which is time-consuming and needs a skilled operator, may restrict the practical applicability of SI quantification in retinal layers. To overcome these limitations, the goal of the current study was to create an automated algorithm for calculating the smoothness index in two hyperreflective inner retinal layers (IPL and OPL) in patients with varying stages of diabetic retinopathy and diabetic macular edema, and to compare the results to those obtained using the manual method. We attempted to include OCTs with varying degrees of irregularity in order to enhance the algorithm's performance in situations where macular edema causes severe irregularities in the retinal layers. By using a machine learning technique, SVR with a wavelet kernel, we enhanced the algorithm's accuracy in IPL and OPL segmentation, as well as in SI measurement, achieving consistent agreement with the experts' manual approach.

Figures 4 and 5 show the qualitative segmentation results for the IPL and OPL using the classic (cubic spline) and machine learning (SVR) algorithms. Although the cubic spline was faster than the other methods (see Table 3), it failed in OCT images with significant layer irregularity. SVR with a wavelet kernel proved to be a powerful and robust method for segmenting irregular layers, which could be due to its prediction of missing values. Fig. 6 depicts segmentation results in three patients (mild, moderate, and severe diabetic retinopathy) with variable irregularity of the IPL and OPL, showing acceptable agreement between our proposed method and the experts' manual segmentation. The comparisons are available in Additional file 1.

Qualitatively, these results are comparable to those of Chiu et al. [48, 49], Dufour et al. [50], and Kugelman et al. [51], who implemented and analyzed different automatic segmentation methods for detecting retinal layers in optical coherence tomography (OCT) or spectral-domain OCT (SD-OCT) images. Their studies also corroborate the observation that segmentation becomes more challenging as abnormality and irregularity increase in diabetic retinopathy patients.

Table 2 also shows quantitative border-position errors (in pixels) between our proposed fully automated method and the expert manual segmentation used as the ground truth. Three error measures were computed: root mean square error (RMSE), mean absolute error (MAE), and signed error (SE). Overall, low errors were obtained for the estimated IPL/OPL borders relative to the ground truth. RMSE is the standard deviation of the residuals between the estimated and true positions; residuals measure how far the data points lie from the regression line, so RMSE indicates how spread out they are, i.e., how tightly the data concentrate around the line of best fit. MAE represents the average absolute error of the estimate, point by point. Finally, SE indicates how well an estimate matches the quantity it is supposed to estimate, with positive and negative errors corresponding to estimates above and below the line of best fit, respectively.
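The three error measures can be made concrete with a short sketch. The pixel values below are hypothetical; note also that in image coordinates the meaning of the SE sign depends on whether row indices increase downward.

```python
import numpy as np

def boundary_errors(estimated, ground_truth):
    """RMSE, MAE and signed error between an automated boundary and the
    manual (ground-truth) boundary, both given as row positions in pixels."""
    residuals = np.asarray(estimated, float) - np.asarray(ground_truth, float)
    rmse = float(np.sqrt(np.mean(residuals**2)))  # spread of the residuals
    mae = float(np.mean(np.abs(residuals)))       # average point-wise error
    se = float(np.mean(residuals))                # signed bias of the estimate
    return rmse, mae, se

# Hypothetical three-column example against a flat ground-truth border at row 50.
rmse, mae, se = boundary_errors([51.0, 49.5, 50.2], [50.0, 50.0, 50.0])
```

A small MAE with a near-zero SE indicates that the automated border tracks the manual one closely without a systematic offset to one side.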

As a quantitative comparison, these results are consistent with the studies by Chiu et al. [48, 49], Dufour et al. [50], Kugelman et al. [51], and Bilc et al. [52], in which the authors reported retinal layer segmentation errors between their automatic algorithms and expert manual segmentation. A comprehensive comparison between these works and our proposed algorithm, covering methods, datasets, retinal conditions, retinal layers, segmentation errors, and execution time, is given in Table 4. Our observed errors are acceptable and close to the overall errors in previous studies, even though our study evaluated patients with different stages of diabetic retinopathy, which arguably makes our segmentation task more complicated.

Table 4 A comprehensive comparison between related works

To the best of our knowledge, the smoothness (irregularity) of retinal layers has so far been measured only semi-automatically or manually, and only in ERM [30]; the algorithm developed in the current study makes it possible to investigate this potential biomarker in various vascular and structural diseases of the retina. In addition to overall retinal thickness, a popular biomarker, the smoothness index may be used in longitudinal comparative studies to quantify response to therapy and prognosis. Although SI was calculated here in OCT images of patients with varying stages of diabetic retinopathy, larger-scale investigations will be required to conclusively determine whether this index is a novel biomarker.
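To illustrate the idea of a smoothness index, the sketch below scores a segmented boundary by comparing the arc length of a smoothed version of the boundary with the arc length of the raw boundary, so that a smooth layer scores near 1 and an irregular one scores lower. This is a hypothetical formulation for illustration only; the paper's exact SI definition, the moving-average window, and the toy boundaries are all assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def arc_length(rows):
    """Polyline length of a boundary given as row positions at unit column spacing."""
    rows = np.asarray(rows, float)
    return float(np.sum(np.sqrt(1.0 + np.diff(rows) ** 2)))

def smoothness_index(boundary, window=15):
    """Hypothetical SI: arc length of a moving-average-smoothed boundary divided
    by the arc length of the raw boundary (illustrative formulation, not the
    paper's exact definition). Near 1 for smooth layers, lower for irregular ones."""
    smoothed = uniform_filter1d(np.asarray(boundary, float), size=window, mode="nearest")
    return arc_length(smoothed) / arc_length(boundary)

# A smooth boundary scores higher than the same boundary with added jitter.
cols = np.arange(200)
clean = 50 + 3 * np.sin(cols / 20)
noisy = clean + np.random.default_rng(1).normal(0, 1.0, cols.size)
si_clean, si_noisy = smoothness_index(clean), smoothness_index(noisy)
```

Any monotone score of this kind would allow longitudinal comparisons of the same layer across visits, which is the use case proposed above.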

While we have developed a relatively reliable and convenient technique for detecting hyperreflective OCT layers and the corresponding smoothness index, it should be noted that in situations where layer irregularity and oscillation are severe and the layers become indistinguishable, as in cases of DRIL, this method may not give a correct answer.

To test the accuracy of the algorithm in segmenting the IPL and OPL in disorganized OCT images, we added 8 OCT rasters with disorganized IPL and OPL from the superior or inferior foveal regions of four patients with severe diabetic retinopathy. Some of these rasters were so unstructured that identifying the IPL and OPL was impossible even with manual segmentation (Fig. 10a); these were omitted from the study. In two scans, our implemented algorithm failed to correctly separate the IPL/OPL, where layer irregularity and oscillation were high and the layers were faded owing to poor image quality or contrast (Fig. 10b).

Fig. 10

An example of an OCT raster so unstructured that identifying the IPL and OPL was impossible even with manual segmentation; such rasters were omitted from the study (a). An example of a segmentation error in the OPL in a case with a high level of irregularity and oscillation and poor OPL contrast (b)

On the other hand, we recruited only patients with diabetic retinopathy to evaluate the performance and accuracy of the proposed algorithm. To address this restriction, the algorithm's accuracy should be investigated on a large sample of individuals with various retinal diseases.

Conclusion

In conclusion, we developed a reliable, fast and convenient machine learning algorithm that automatically segments two hyperreflective inner retinal layers and calculates the smoothness index (SI) in the corresponding layers. The results show acceptable agreement between our proposed algorithm and the expert manual segmentation used as the ground truth; the SI may therefore be used in longitudinal comparative studies to quantify response to therapy and prognosis in various patients.

Availability of data and materials

The data that support the findings of this study are available from [Elias Khalili Pour] but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of [Elias Khalili Pour].

Code availability

The machine learning models were developed and deployed using standard model libraries and the Python framework. The code may be made available for research purposes from the corresponding authors on reasonable request.

Abbreviations

OCT:

Optical coherence tomography

SD-OCT:

Spectral-domain optical coherence tomography

ERM:

Epiretinal membranes

IPL:

Inner plexiform layer

OPL:

Outer plexiform layer

AMD:

Age-related macular degeneration

ILM:

Internal limiting membrane

RPE:

Retinal pigment epithelium

DRIL:

Disorganization of the retinal inner layers

SVR:

Support vector regression

RBF:

Radial basis functions

SI:

Smoothness index

RMSE:

Root mean square error

MAE:

Mean absolute error

SE:

Signed error

References

  1. Podoleanu AG. Optical coherence tomography. Br J Radiol. 2005;78:976–88.

  2. Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, et al. Optical coherence tomography. Science. 1991;254:1178–81.

  3. Schuman JS, Hee MR, Arya AV, Pedut-Kloizman T, Puliafito CA, Fujimoto JG, et al. Optical coherence tomography: a new tool for glaucoma diagnosis. Curr Opin Ophthalmol. 1995;6:89–95.

  4. Coker JG, Duker JS. Macular disease and optical coherence tomography. Curr Opin Ophthalmol. 1996;7:33–8.

  5. Mohammadzadeh V, Fatehi N, Yarmohammadi A, Lee JW, Sharifipour F, Daneshvar R, et al. Macular imaging with optical coherence tomography in glaucoma. Surv Ophthalmol. 2020;65:597–638.

  6. Hee MR. Optical coherence tomography of the human retina. Arch Ophthalmol. 1995;113:325.

  7. Kafieh R, Amini Z, Rabbani H, Baghbaderani BK, Salafian B, Mazaheri F, et al. Automatic Multifaceted matlab package for analysis of ocular images (AMPAO). SoftwareX. 2019;10:100339.

  8. Kafieh R, Rabbani H, Abramoff MD, Sonka M. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map. Med Image Anal. 2013;17:907–28.

  9. Rabbani H, Kafieh R, Kermani S. A review of algorithms for segmentation of optical coherence tomography from retina. J Med Signals Sensors. 2013;3:45.

  10. González-López A, de Moura J, Novo J, Ortega M, Penedo MG. Robust segmentation of retinal layers in optical coherence tomography images based on a multistage active contour model. Heliyon. 2019;5:e01271.

  11. Montazerin M, Sajjadifar Z, Pour EK, Riazi-Esfahani H, Mahmoudi T, Rabbani H, et al. Livelayer: a semi-automatic software program for segmentation of layers and diabetic macular edema in optical coherence tomography images. Sci Rep. 2021;11:1–13.

  12. Alasil T, Keane PA, Updike JF, Dustin L, Ouyang Y, Walsh AC, et al. Relationship between optical coherence tomography retinal parameters and visual acuity in diabetic macular edema. Ophthalmology. 2010;117:2379–86.

  13. Uji A, Murakami T, Nishijima K, Akagi T, Horii T, Arakawa N, et al. Association between hyperreflective foci in the outer retina, status of photoreceptor layer, and visual acuity in diabetic macular edema. Am J Ophthalmol. 2012;153:710-717.e1.

  14. Lai TY, Saxena S, Srivastav K, Cheung G, Ng YWJ. Photoreceptor inner segment ellipsoid band integrity on spectral domain optical coherence tomography. Clin Ophthalmol. 2014:2507.

  15. Chan EW, Eldeeb M, Sun V, Thomas D, Omar A, Kapusta MA, et al. Disorganization of retinal inner layers and ellipsoid zone disruption predict visual outcomes in central retinal vein occlusion. Ophthalmol Retin. 2019;3:83–92.

  16. Wu SM. Synaptic organization of the vertebrate retina: general principles and species-specific variations: the Friedenwald lecture. Invest Ophthalmol Vis Sci. 2010;51:1263–74.

  17. Sun JK, Lin MM, Lammer J, Prager S, Sarangi R, Silva PS, et al. Disorganization of the retinal inner layers as a predictor of visual acuity in eyes with center-involved diabetic macular edema. JAMA Ophthalmol. 2014;132:1309–16.

  18. Goker YS, Atılgan CU, Tekin K, Kızıltoprak H, Kosekahya P, Demir G, et al. Association between disorganization of the retinal inner layers and capillary nonperfusion area in patients with retinal vein occlusion. Arq Bras Oftalmol. 2020;83:497–504.

  19. Das R, Spence G, Hogg RE, Stevenson M, Chakravarthy U. Disorganization of inner retina and outer retinal morphology in diabetic macular edema. JAMA Ophthalmol. 2018;136:202–8.

  20. Garnavou-Xirou C, Xirou T, Gkizis I, Kabanarou SA, Dimitriou E, Theodossiadis P, et al. The role of disorganization of retinal inner layers as predictive factor of postoperative outcome in patients with epiretinal membrane. Ophthalmic Res. 2019;63:13–7.

  21. Disorganization of the retinal inner layers as a prognostic factor in eyes with central retinal artery occlusion. Int J Ophthalmol. 2019;12.

  22. Grewal DS, O’Sullivan ML, Kron M, Jaffe GJ. Association of disorganization of retinal inner layers with visual acuity in eyes with uveitic cystoid macular edema. Am J Ophthalmol. 2017;177:116–25.

  23. Călugăru D, Călugăru M. Disorganization of the retinal inner layers as a predictor of visual acuity in eyes with macular edema secondary to vein occlusion. Am J Ophthalmol. 2017;184:190–1.

  24. Joltikov KA, Sesi CA, de Castro VM, Davila JR, Anand R, Khan SM, et al. Disorganization of retinal inner layers (DRIL) and neuroretinal dysfunction in early diabetic retinopathy. Investig Opthalmology Vis Sci. 2018;59:5481.

  25. Nadri G, Saxena S, Stefanickova J, Ziak P, Benacka J, Gilhotra JS, et al. Disorganization of retinal inner layers correlates with ellipsoid zone disruption and retinal nerve fiber layer thinning in diabetic retinopathy. J Diabetes Complicat. 2019;33:550–3.

  26. Guo J, Tang W, Ye X, Wu H, Xu G, Liu W, et al. Predictive multi-imaging biomarkers relevant for visual acuity in idiopathic macular telangiectasis type 1. BMC Ophthalmol. 2018;18.

  27. Zur D, Iglicki M, Feldinger L, Schwartz S, Goldstein M, Loewenstein A, et al. Disorganization of retinal inner layers as a biomarker for idiopathic epiretinal membrane after macular surgery-the DREAM study. Am J Ophthalmol. 2018;196:129–35.

  28. Ishibashi T, Sakimoto S, Shiraki N, Nishida K, Sakaguchi H, Nishida K. Association between disorganization of retinal inner layers and visual acuity after proliferative diabetic retinopathy surgery. Sci Rep. 2019;9:12230.

  29. Schmidt-Erfurth U, Michl M. Disorganization of retinal inner layers and the importance of setting boundaries. JAMA Ophthalmol. 2019;137:46–7.

  30. Cho KH, Park SJ, Cho JH, Woo SJ, Park KH. Inner-retinal irregularity index predicts postoperative visual prognosis in idiopathic epiretinal membrane. Am J Ophthalmol. 2016;168:139–49.

  31. Mishra Z, Ganegoda A, Selicha J, Wang Z, Sadda SR, Hu Z. Automated retinal layer segmentation using graph-based algorithm incorporating deep-learning-derived information. Sci Rep. 2020;10:9541.

  32. He Y, Carass A, Liu Y, Jedynak BM, Solomon SD, Saidha S, et al. Fully convolutional boundary regression for retina OCT segmentation. Med image Comput Comput Interv MICCAI Int Conf Med Image Comput Comput Interv. 2019;11764:120–8.

  33. Huang Y, Danis RP, Pak JW, Luo S, White J, Zhang X, et al. Development of a semi-automatic segmentation method for retinal OCT images tested in patients with diabetic macular edema. PLoS ONE. 2013;8:e82922.

  34. Liu X, Fu T, Pan Z, Liu D, Hu W, Li B. Semi-supervised automatic layer and fluid region segmentation of retinal optical coherence tomography images using adversarial learning. In: 2018 25th IEEE International Conference on Image Processing (ICIP). 2018.

  35. Koozekanani D, Boyer K, Roberts C. Retinal thickness measurements from optical coherence tomography using a Markov boundary model. IEEE Trans Med Imaging. 2001;20:900–16.

  36. Munk MR, Beck M, Kolb S, Larsen M, Hamann S, Valmaggia C, et al. Quantification of retinal layer thickness changes in acute macular neuroretinopathy. Br J Ophthalmol. 2017;101:160–5.

  37. Mariottoni EB, Jammal AA, Urata CN, Berchuck SI, Thompson AC, Estrela T, et al. Quantification of retinal nerve fibre layer thickness on optical coherence tomography with a deep learning segmentation-free approach. Sci Rep. 2020;10:402.

  38. Zarei M, Mahmoudi T, Riazi-Esfahani H, Mousavi B, Ebrahimiadib N, Yaseri M, et al. Automated measurement of iris surface smoothness using anterior segment optical coherence tomography. Sci Rep. 2021;11:1–11.

  39. Zarei M, KhaliliPour E, Ebrahimiadib N, Riazi-Esfahani H. Quantitative analysis of the iris surface smoothness by anterior segment optical coherence tomography in Fuchs Uveitis. Ocul Immunol Inflamm. 2020;1–6.

  40. Sartor LJ, Weeks AR. Morphological operations on color images. J Electron Imaging. 2001;10:548–59.

  41. Schoenberg IJ. Cardinal spline interpolation. SIAM. 1973.

  42. Cristianini N, Shawe-Taylor J. An introduction to support vector machines and other kernel-based learning methods. Cambridge university press; 2000.

  43. Zhou D-X, Jetter K. Approximation with polynomial kernels and SVM classifiers. Adv Comput Math. 2006;25:323–44.

  44. Lin H-T. A study on sigmoid kernels for SVM and the training of non-PSD kernels by SMO-type methods. 2005.

  45. Xiaoyan WPZ. Model selection of SVM with RBF kernel and its application. Comput Eng Appl. 2003;24:21.

  46. Steinwart I, Hush D, Scovel C. An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels. IEEE Trans Inf Theory. 2006;52:4635–43.

  47. Zhang L, Zhou W, Jiao L. Wavelet support vector machine. IEEE Trans Syst Man Cybern Part B. 2004;34:34–9.

  48. Chiu SJ, Li XT, Nicholas P, Toth CA, Izatt JA, Farsiu S. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Opt Express. 2010;18:19413–28.

  49. Chiu SJ, Allingham MJ, Mettu PS, Cousins SW, Izatt JA, Farsiu S. Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema. Biomed Opt Express. 2015;6:1172.

  50. Dufour PA, Ceklic L, Abdillahi H, Schroder S, De Dzanet S, Wolf-Schnurrbusch U, et al. Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints. IEEE Trans Med Imaging. 2013;32:531–43.

  51. Kugelman J, Alonso-Caneiro D, Read SA, Vincent SJ, Collins MJ. Automatic segmentation of OCT retinal boundaries using recurrent neural networks and graph search. Biomed Opt Express. 2018;9:5759.

  52. Bilc S, Groza A, Muntean G, Nicoara SD. Interleaving automatic segmentation and expert opinion for retinal conditions. Diagnostics. 2021;12:22.

Acknowledgements

The authors would like to thank Pooran Fadakar in Farabi Eye Hospital for her attentive contribution and help in this study.

Funding

The authors indicate no financial support.

Author information

Authors and Affiliations

Authors

Contributions

Concept and design (EKP, JS, HA, TM, AAA); data acquisition (HRE, AK, BJ, NEA, MZ); data analysis/interpretation (EKP, JS, HA, TM); drafting of the manuscript (EKP, JS, HA, TM, HRE); critical revision of the manuscript (EKP, JS, HA, HRE); supervision (EKP, JS); All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hossein Azimi.

Ethics declarations

Ethics approval and consent to participate

Written informed consents were obtained from each participant. This study adhered to the tenets of the Declaration of Helsinki and was approved by the ethics committee of Farabi eye hospital, Tehran university of medical sciences.

Consent for publication

Written informed consents were obtained from each participant.

Competing interests

None of the authors have any proprietary interests or conflicts of interest related to this submission.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

 Segmentation of the IPL and OPL by two specialists and the automated approach.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Saeidian, J., Mahmoudi, T., Riazi-Esfahani, H. et al. Automated assessment of the smoothness of retinal layers in optical coherence tomography images using a machine learning algorithm. BMC Med Imaging 23, 21 (2023). https://doi.org/10.1186/s12880-023-00976-w


Keywords