
Automatic diagnosis of melanoma using machine learning methods on a spectroscopic system



Early and accurate diagnosis of melanoma, the deadliest type of skin cancer, has the potential to reduce morbidity and mortality rates. However, early diagnosis of melanoma is not trivial even for experienced dermatologists, as it requires sampling and laboratory tests that can be complex and subjective. The accuracy of clinical diagnosis of melanoma is also an issue, especially in distinguishing melanoma from benign moles. To address these problems, this paper presents an approach that makes non-subjective judgements based on quantitative measures for automatic diagnosis of melanoma.


Our approach involves image acquisition, image processing, feature extraction, and classification. 187 images (19 malignant melanoma and 168 benign lesions) were collected in a clinic by a spectroscopic device that combines single-scattered, polarized light spectroscopy with multiple-scattered, unpolarized light spectroscopy. After noise reduction and image normalization, features were extracted based on statistical measurements (i.e. mean, standard deviation, mean absolute deviation, L1 norm, and L2 norm) of image pixel intensities to characterize the pattern of melanoma. Finally, these features were fed into several classifiers to train learning models for classification.


We adopted three classifiers, artificial neural network, naïve Bayes, and k-nearest neighbour, to evaluate our approach separately. The naïve Bayes classifier achieved the best performance (89% accuracy, 89% sensitivity, and 89% specificity) and was integrated with our approach in a desktop application running on the spectroscopic system for diagnosis of melanoma.


Our work has two strengths. (1) We have used single-scattered, polarized light spectroscopy and multiple-scattered, unpolarized light spectroscopy to decipher the multilayered characteristics of human skin. (2) Our approach does not need image segmentation, as we directly probe tiny spots in the lesion skin and the image scans do not involve background skin. The desktop application for automatic diagnosis of melanoma can help dermatologists obtain a non-subjective second opinion for their diagnostic decisions.



Melanoma is a lethal form of skin cancer, with an estimated mortality rate of 14% worldwide [1]. The American Cancer Society reported 76,690 new cases of melanoma in the United States in 2013, with 9,480 estimated deaths, according to recent annual cancer facts and figures [2]. The global cancer statistics [3] also show that the incidence and mortality rates of melanoma are rising. Fortunately, melanoma may be treated successfully, with a 10-year survival rate between 90 and 97%, yet curability depends on early detection and excision while the tumor is still small and thin. Therefore, early and accurate diagnosis of melanoma is particularly important.

With the use of dermoscopy [4] and several clinical algorithms such as the ABCD rule [5], the 7-point checklist [6], and the Menzies method [7], the diagnostic accuracy for melanoma has become higher than that of simple naked-eye examination [8]. However, clinical diagnosis is inherently subjective and complex, so its accuracy depends highly on the experience of the dermatologist and is estimated to be about 75–85% [9].

To reduce the subjectivity and complexity of clinical diagnosis, it is desirable to conduct research on quantitative approaches for automated detection of melanoma. In the last two decades, a large number of computer-aided approaches have been developed for diagnosis of melanoma. For example, A.G. Manousaki et al. [10] proposed an approach that incorporates parameters of geometry, color, and color texture as independent covariates for discriminating melanoma from melanocytic nevi. H. Ganster et al. [11] presented a melanoma recognition system that involves image processing, segmentation, feature calculation and selection, and k-NN classification. J.F. Alcón et al. [12] presented an automatic imaging system that combines the outcome of image classification with context knowledge such as skin type, age, and gender to add confidence to the classification. R. Garnavi and M. Aldeen [13] presented an approach that uses border- and wavelet-based texture analysis and four different classifiers for diagnosis of melanoma. Additionally, many computer applications have been developed for melanoma diagnosis, including SolarScan [14], the DANAOS expert system [15], DermoGenius-Ultra [16], and MelaFind [17]. These methods have achieved good classification accuracy, but most of them either obtained images with hand-held cameras, and thus needed image segmentation to separate the lesion from the background, or relied largely on light intensity spectra and derived absorption and scattering spectra, which have been shown to be highly sensitive to abnormal changes in tissues. Moreover, most of these methods treat the skin tissue as a uniform or homogeneous medium.

Nevertheless, skin tissue is inhomogeneous, with a multilayered structure. It consists of two primary layers: a bottom layer, the dermis, and a top layer, the epidermis. Under the dermis is the subcutaneous tissue, or hypodermis, which consists of connective tissue, fibroblasts, and fat cells. The epidermis consists of cells called keratinocytes, which develop in the basal layer of the skin at the bottom of the epidermis. As these keratinocytes migrate to the surface, they flatten, cornify, and harden, thus sealing the skin [18]. The epidermis also includes melanocytes located near its base. These cells secrete the pigment melanin, which protects the skin from ultraviolet radiation. The dermis can vary in thickness from 1 to 4 mm and is composed primarily of fibrous connective tissue such as collagen. The dermis also contains the hair follicles, sweat glands, blood vessels, and nerves. The skin is an extremely complex optical structure because it consists of multiple scatterers of many shapes and sizes. As light propagates through the skin, it is scattered and absorbed differently in each layer. The absorption is also complex, consisting of contributions primarily from blood and melanin. The multilayered structure further increases the complexity. Because skin cancer often begins in the epidermal layer and invades deeper tissue over time, the information obtained with current multiple-scattered light-based methods is averaged out and does not reflect the accurate morphology of the specific diseased layer, although encouraging results for skin cancer detection have recently been obtained using multiple-scattered light-only methods in conjunction with classification algorithms [19, 20].

Two techniques have been developed for deciphering the multilayered characteristics of human skin: multilayer model-based Multiple-Scattered Light Spectroscopy (MSLS) [21–25] and single-scattered, Polarized Light Spectroscopy (PLS) [26–31]. The multilayer model-based MSLS considers the realistic multilayered structure of the skin in the mathematical model of light propagation and is thus a good choice for skin studies. However, the accurate use of such improved spectroscopy requires knowing the exact thickness of each layer of skin, which is difficult to obtain. On the other hand, PLS is able to physically discriminate between multiple layers of tissue, which tends to simplify the light propagation modeling because single-scattered light maintains its original polarization. Nevertheless, compared to MSLS, PLS has a relatively low signal-to-noise ratio.

To take advantage of both MSLS and PLS, in this study we captured skin images with a combination of MSLS and PLS technologies and developed an automated method for diagnosis of melanoma via pattern classification of the skin images. The combined MSLS and PLS technologies provide unprecedented tissue functional information and cellular structures, and accurately reflect morphologies in specific diseased layers of skin. Using a number of skin scans collected in a clinic by our spectroscopic system combining single and multiple-scattered light measurements, we first identified the pixel-by-pixel intensity differences between the melanoma group and the benign group. We then selected statistical measurements of pixel intensity that presented characteristics of melanoma as features for classification. Next, classification was carried out using the selected features. We evaluated our approach using artificial neural network (ANN), naïve Bayes (NB), and k-nearest neighbour (k-NN) separately. The approach achieved 89% sensitivity, 89% specificity, and 89% accuracy using NB. Based on the proposed method, a desktop application that runs on the spectroscopic system has been developed to integrate image acquisition, image processing, feature extraction, and classification into a one-stop service. Dermatologists can use this application to conduct instantaneous diagnosis of melanoma and obtain a second opinion on their clinical decisions.


Data used in this work were collected in a clinic by a spectroscopic system with combined single and multiple-scattered light measurements. The study was approved by the Institutional Review Board (IRB) for Human Research at the Medical University of South Carolina. Patients consented to participate in the study, and the study did not use patient-identifiable information. The spectroscopic system was composed of a PC with a monitor, a CCD camera, a CCD controller, a spectrometer, a light source, optical fibers, a polarizer, gradient-index lenses, and a probe. Figure 1 shows pictures of the system, including the front view with the panel removed to show the inside (1a), the layout of instruments inside the system (1b), and the system schema (1c). In the design of the optical probe, optical fibers with gradient-index lenses built into the distal end were employed instead of attaching a separate lens system to the source and detector fibers. Figure 2 is a schema of the probe.

Figure 1

Pictures of the spectroscopic system. (a) front view of the system; (b) instruments on the 2nd layer of the cart; (c) system schema.

Figure 2

A schema of the probe. A three-dimensional view (a) and a two-dimensional schema (b) of the optical probe.


The data set in this study included 187 samples from 79 participants, as some participants provided more than one sample located in different areas of skin. The participants were aged from 22 to 79 years. The 187 samples consisted of 19 melanoma scans [16% female, 84% male, mean age = 61.32 (12.88) years] and 168 benign scans (i.e. benign nevus) [51% female, 49% male, mean age = 34.63 (11.40) years]. The samples were sent for pathology, and the labels (i.e. benign and melanoma) were confirmed correct.

Image acquisition and preprocessing

For each skin sample, our spectroscopic system gathered images from three spots. We performed two pairs of scans on each lesion skin area. Each pair included one 'P' scan (using parallel light) and one 'V' scan (using vertical light). If the lesion or abnormal area was larger than the probe tip, we moved the probe tip slightly to take the two pairs at two different spots within the lesion area. If the lesion area was too small, only one spot was chosen and scanned twice to obtain the two pairs of scans. We also took one pair of scans on nearby normal skin for comparison. Therefore, six spectral images were collected for each sample. Figure 3 depicts the switch on the probe for selecting P scan or V scan (left) and the skin lesion imaging by a technician (right).

Figure 3

Demonstration of image acquisition. Left: the switch on the probe used to select either a 'P' scan or a 'V' scan. Right: the tip of the probe is placed perpendicular to the skin so that the light spot falls within the lesion area.

The spectral images are in a binary format. Each spectral image contains 32 × 512 pixels. The value of each pixel is the intensity obtained by the CCD camera. Considering that the images may contain noise due to environmental effects during collection, we performed noise reduction using a median filter. After noise reduction, pixel intensities were normalized with the min-max normalization method, as shown in formula (1):

I′(i,j,k) = [(I(i,j,k) − min) / (max − min)] × (max′ − min′) + min′        (1)

where I(i,j,k) represents the original pixel intensity at position (i, j) of image k (1 ≤ i ≤ 32, 1 ≤ j ≤ 512), and I′(i,j,k) is the corresponding normalized intensity. max and min are the largest and smallest intensities in the original image, respectively; max′ = 1 and min′ = 0. Taking P scan as an example, Figure 4 presents a P scan of a melanoma lesion, where we picked line 15 to visualize the spectra, as pixels in this line have the most significant intensities. V scans demonstrate similar spectra.
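The preprocessing steps above can be sketched as follows (a minimal Python sketch; the 3 × 3 median-filter kernel size is our assumption, as the text does not state it):

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(scan):
    """Denoise a 32 x 512 spectral scan with a median filter, then apply
    min-max normalization (formula (1)) with max' = 1 and min' = 0."""
    denoised = median_filter(scan, size=3)  # kernel size is an assumption
    lo, hi = denoised.min(), denoised.max()
    return (denoised - lo) / (hi - lo)
```

With max′ = 1 and min′ = 0, formula (1) reduces to the last line: the result always spans exactly [0, 1].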

Figure 4

P scan of a melanoma skin lesion.

Feature extraction

As the light generated by our spectroscopic system propagates through benign and malignant skin, it is scattered and absorbed differently, which results in the intensity differences in the collected spectral images. Taking P scan as an example, Figure 5 shows the intensity distributions of all benign (left) and all malignant melanoma skin lesions (right). The pixel intensities of malignant melanoma skin images spread out over the interval from 0 to 1, with most pixels having high intensity, whereas the majority of intensity values of benign skin images fall in the interval from 0.2 to 0.4. We also found that the intensity distributions of V scans demonstrate similar differences between benign and melanoma skin lesions.

Figure 5

Intensity distribution of P scans of all benign and all melanoma skin lesions. Left: Intensity distribution of benign skin lesions; Right: intensity distribution of malignant melanoma skin lesions.

Since the intensity distributions of the benign and melanoma skin images differ greatly, as shown in Figure 5, we consider the pixel intensities as the key feature to distinguish benign skin from melanoma. To fully describe the pixel intensity distribution in each image, we adopted five statistical measures to quantify each scan: mean μ, standard deviation σ, mean absolute deviation MAD, L1 norm ‖Ī‖₁, and L2 norm ‖Ī‖₂, where Ī(i,j) is the corrected intensity of pixel (i, j). We calculated the statistical values separately for P scans and V scans, so each sample has 10 statistical measures (5 for P scans and 5 for V scans). Additionally, for both the P scans and the V scans of each sample, we took into consideration both the 2 scans of the lesion area and the 1 scan of the nearby normal area. The corrected intensity of pixel (i, j) in the P scans of sample k, Ī(i,j,k,p), was computed by formula (2):

Ī(i,j,k,p) = [I′(i,j,k,p1) + I′(i,j,k,p2)] / 2 − I′(i,j,k,p3)        (2)

where I′(i,j,k,p1) is the normalized intensity of pixel (i, j) in the p1 scan (P scan of the 1st selected spot in the lesion area), I′(i,j,k,p2) is the normalized intensity of pixel (i, j) in the p2 scan (P scan of the 2nd selected spot in the lesion area), and I′(i,j,k,p3) is the normalized intensity of pixel (i, j) in the p3 scan (P scan of a spot selected in the nearby normal area). The same computation was conducted on V scans. Measuring the pixel intensity with formula (2) captures the difference between lesion skin and normal skin of the same subject, which is more meaningful than using intensities of lesion skin alone, as intensity values may be influenced by factors such as skin color, age, and gender.
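Formula (2) amounts to a simple element-wise array operation on the three normalized scans of one polarization (a sketch; the argument names p1, p2, p3 mirror the scan labels above):

```python
import numpy as np

def corrected_intensity(p1, p2, p3):
    """Formula (2): average the two normalized lesion-area scans (p1, p2)
    and subtract the normalized scan of nearby normal skin (p3)."""
    return (np.asarray(p1) + np.asarray(p2)) / 2.0 - np.asarray(p3)
```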

We use the following formulas (3)–(7) to calculate the statistical values for the P scans of sample k. In the formulas, m = 32 and n = 512, as there are 32 × 512 pixels per scan. The index i is in the range [1, 32], and the index j is in the range [1, 512]. The same formulas are applied to calculate the statistical measurements of V scans by replacing Ī(i,j,k,p) with Ī(i,j,k,v).

μ = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} Ī(i,j,k,p)        (3)

σ = √[(1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} (Ī(i,j,k,p) − μ)²]        (4)

MAD = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} |Ī(i,j,k,p) − μ|        (5)

‖Ī(i,j,k,p)‖₁ = Σ_{i=1}^{m} Σ_{j=1}^{n} |Ī(i,j,k,p)|        (6)

‖Ī(i,j,k,p)‖₂ = √[Σ_{i=1}^{m} Σ_{j=1}^{n} Ī(i,j,k,p)²]        (7)
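The five statistics in formulas (3)–(7) map directly onto NumPy reductions over the corrected-intensity matrix of one scan type (a sketch):

```python
import numpy as np

def scan_features(I_bar):
    """Formulas (3)-(7): the five statistical measures of the corrected
    intensities for one scan type (P or V) of a sample."""
    I_bar = np.asarray(I_bar, dtype=float)
    mu = I_bar.mean()                        # (3) mean
    sigma = I_bar.std()                      # (4) standard deviation (1/mn variant)
    mad = np.abs(I_bar - mu).mean()          # (5) mean absolute deviation
    l1 = np.abs(I_bar).sum()                 # (6) L1 norm
    l2 = np.sqrt((I_bar ** 2).sum())         # (7) L2 norm
    return mu, sigma, mad, l1, l2
```

Note that `np.std` defaults to the population form (divisor mn), matching formula (4).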


After extracting the diagnostic features, we trained different classifiers (ANN, NB, and k-NN) to distinguish melanoma from benign skin. Given a training set of n samples D = {(xᵢ, yᵢ)}ᵢ₌₁ⁿ, where xᵢ is a sample and yᵢ is the associated class label, we focused on the binary classification problem, i.e., yᵢ is from the label space {±1}, where +1 denotes the cancer class and −1 denotes the non-cancer class. The cancer class was composed of the melanoma samples, whereas the non-cancer class consisted of the benign skin samples. Each sample in the training set has 10 features to be fed into the classifier. The tool WEKA [32] was used for training and testing. With ANN, a multi-layer network trained with back-propagation was built. The input layer had 10 input units, corresponding to the 10 selected features. The output layer had 2 units, representing the two classes, benign and melanoma. The hidden layer was initially set to 6 units in training, as the number of hidden units is normally set to half the sum of the input units (10 in our study) and output units (2 in our study). We kept the default parameters (i.e. learningRate = 0.3, momentum = 0.2, seed = 0, trainingTime = 500, validationThreshold = 20) in WEKA for ANN. As for NB, the classifier used estimator classes; numeric estimator precision values were chosen based on analysis of the training data, and a normal distribution was used for numeric attributes. We kept the default parameters (i.e. useKernelEstimator = false, useSupervisedDiscretization = false) in WEKA for the NB classifier. The choice of k in k-NN affects the performance of this classifier. In our study, we used 3-NN, which combines robustness to noise with less classification time than a larger k [33]. For the other parameters of k-NN in WEKA, we set crossValidate to false, did not use distance weighting, and used the brute-force search algorithm for nearest-neighbour search.
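The classification itself was done in WEKA; as an illustration, an NB model with a normal distribution per numeric attribute corresponds to a Gaussian naïve Bayes classifier, sketched here with scikit-learn on synthetic stand-in features (the feature values below are hypothetical, not the clinical data):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the real feature matrix: 187 samples x 10 features
# (5 statistics from P scans + 5 from V scans); +1 = melanoma, -1 = benign.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.5, (19, 10)),    # hypothetical melanoma rows
               rng.normal(0.0, 0.5, (168, 10))])  # hypothetical benign rows
y = np.array([1] * 19 + [-1] * 168)

# GaussianNB mirrors WEKA's NB with a normal-distribution numeric estimator.
clf = GaussianNB().fit(X, y)
pred = clf.predict(X)
```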


Pattern of melanoma

Comparison of the melanoma and benign groups across the 10 features demonstrated the characteristics of melanoma. The effect of melanoma can be seen in the distribution of the pixel intensity differences between abnormal skin and normal skin of individual subjects. Figure 6 demonstrates that the influence of melanoma on pixel intensity can be observed in color scale by comparing a melanoma case to a benign case.

Figure 6

Pattern of melanoma. (a) The color bar representing the color scheme used for pixel intensity; (b) The color map of pixel intensity calculated by formula (2) in P scan of a benign case; (c) The color map of pixel intensity calculated by formula (2) in V scan of the same benign case as that was used in (b); (d) The color map of pixel intensity computed by formula (2) in P scan of a melanoma case; (e) The color map of pixel intensity computed by formula (2) in V scan of the same melanoma case as that was used in (d).

Classification accuracy

In our experiment, the 187 cases were divided into a training set of 60 cases and a test set of 127 cases by random sampling without replacement. We conducted the experiment using the three classifiers (i.e. ANN, NB, and 3-NN) separately, repeating the experiment 25 times for each classifier. In addition, we evaluated the performance of classification using P scan alone (i.e. the 5 statistical measures of P scan), V scan alone (the 5 statistical measures of V scan), and the combined P scan and V scan (all 10 features). Table 1 shows the performance of NB with the combined P scan and V scan; on average, the accuracy, specificity, and sensitivity were each 89%. The probability of error for NB was 0.16. The average accuracy, specificity, and sensitivity of ANN with the combined P scan and V scan were 88%, 93%, and 49% respectively, and 3-NN with the combined P scan and V scan demonstrated 88% accuracy, 92% specificity, and 47% sensitivity on average. The probability of error for ANN and 3-NN was 0.24 and 0.26 respectively. Table 2 shows the average performance of the classifiers using P scan and V scan together, P scan alone, or V scan alone. Due to space limitations, the performance details of each experimental run with the other classifiers (i.e. ANN, 3-NN) and the other feature sets (i.e. P scan alone or V scan alone) are omitted. In addition, we performed the experiment with a different split of the data, randomly dividing it into a training set of 30 cases and a test set of 157 cases. The average performance of each classifier using P scan and V scan together, P scan alone, or V scan alone is shown in Table 3.

Table 1 Performance of NB using P scan and V scan together (60 training samples, 127 testing samples)
Table 2 The average performance of the classifiers using P scan and V scan, using P scan alone or using V scan alone (60 training samples, 127 testing samples)
Table 3 The average performance of the classifiers using P scan and V scan, using P scan alone or using V scan alone (30 training samples, 157 testing samples)
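The repeated random-split protocol above can be sketched as a small helper (hypothetical; `clf_factory` is any classifier constructor, e.g. scikit-learn's `GaussianNB`, standing in for the WEKA classifiers actually used):

```python
import numpy as np

def evaluate(clf_factory, X, y, n_train=60, repeats=25, seed=0):
    """Repeat a random train/test split (sampling without replacement),
    then average accuracy, sensitivity (+1 class), and specificity (-1 class)."""
    rng = np.random.default_rng(seed)
    accs, sens, specs = [], [], []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        tr, te = idx[:n_train], idx[n_train:]
        pred = clf_factory().fit(X[tr], y[tr]).predict(X[te])
        truth = y[te]
        accs.append((pred == truth).mean())
        sens.append((pred[truth == 1] == 1).mean())    # true positive rate
        specs.append((pred[truth == -1] == -1).mean()) # true negative rate
    return float(np.mean(accs)), float(np.mean(sens)), float(np.mean(specs))
```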

Desktop application

For automatic detection of melanoma, we built a desktop application that runs on the machine (Figure 1a) connected to the spectroscopic device. Clinicians can launch the application to collect skin scans using the spectroscopic device, then follow the wizard to have the application automatically conduct image processing, feature extraction, and classification, achieving instantaneous diagnosis of melanoma. The NB classifier was integrated into this application. Figure 7 presents a screenshot of the automated tool. We keep the interface simple for dermatologists to use. Dermatologists select the images 1P, 2P, 3P and 1V, 2V, 3V that have been collected by the spectroscopic device for a subject by clicking the 6 buttons on the leftmost panel of the interface. The status box shows whether an image has loaded successfully. Users can input and save comments for the diagnosis in the "Comment" area. Once all images have loaded successfully, users just need to click the "Diagnose" button to start the diagnosis, which launches the back-end programs for feature extraction and classification. Once the diagnosis is done, a message box pops up showing the diagnosis result, either melanoma or benign.

Figure 7

A screenshot of the desktop application for automated diagnosis of melanoma.


Computer-aided diagnosis of melanoma generally involves six steps: image acquisition, image processing, image segmentation, feature extraction, feature selection, and classification. In our work, we gathered images with combined single and multiple-scattered light measurements to decipher the multilayered characteristics of human skin. This is one scientific contribution of our work. The other strength is that scans of the lesion area do not involve background skin, as we directly probe tiny spots within the lesion; image segmentation was therefore skipped. The objective of segmentation in diagnosis of melanoma is to detect the border of the lesion so as to separate the lesion, the region of interest (ROI), from the background skin. Since we use the probe only within the ROI, our approach does not need image segmentation. An approach without segmentation reduces the risk of diagnosis error that may be introduced by improper segmentation, as well as the cost and time of segmentation. In addition, we did not perform feature selection because all 10 statistical parameters extracted for each sample were needed for classification.

Classification methods that have been applied to computer-aided diagnosis of melanoma include discriminant analysis [34, 35], ANN [36, 37], k-NN [11], support vector machine (SVM) [38], and decision trees. To evaluate our approach, we performed classification using three different classifiers (ANN, NB, and k-NN) with the combination of P scan and V scan, P scan alone, or V scan alone. Compared with ANN and k-NN, NB achieved the best accuracy and sensitivity. Although ANN and k-NN achieved better specificity than NB, their sensitivity was poor. With NB, the experiment using the combination of P scan and V scan achieved better accuracy and specificity than using P scan or V scan alone, while the experiment with V scan alone presented the best sensitivity. We picked NB with the combined P scan and V scan, which achieved the highest accuracy, and integrated it into our application to provide automated diagnosis of melanoma.

There are some limitations in our work. First, although our approach achieved 89% sensitivity, 89% specificity, and 89% accuracy with NB, the sensitivity with ANN and k-NN was not good enough. The small number of melanomas and the much larger set of benign samples might make the benign samples dominate the classification. In addition, because of the limited number of melanomas in our experiment, the performance of our approach may be specific to the data sets used in this preliminary study; it is unclear how the performance would vary on a larger and/or new data set. Follow-up studies that incorporate a larger number of melanoma samples would be more reliable and provide greater confidence in the diagnosis than the classifier developed in this preliminary study. Second, factors such as changes in the device or the person operating it might have an effect on the outcome. In particular, variation in skin color, age, and gender in the sample set may influence the diagnosis result.


This paper presents a computer-aided approach for automatic and accurate diagnosis of melanoma. We evaluated our approach with three classifiers, ANN, NB, and k-NN, and it achieved 89% sensitivity, 89% specificity, and 89% accuracy with NB. In the future, we will conduct follow-up studies by collecting more real data, especially melanoma cases, to further evaluate our approach.



Artificial neural network


K-Nearest neighbour


Mean absolute deviation


Multiple-scattered light spectroscopy


Naïve Bayes


Polarized light spectroscopy


Support vector machine.


  1. 1.

    Jemal A, Siegel R, Ward E, Murray T, Xu J, Thun MJ: Cancer statistics. CA Cancer J Clin. 2007, 57: 43-66. 10.3322/canjclin.57.1.43.

    Article  PubMed  Google Scholar 

  2. 2.

    Cancer facts and figures. 2013, []

  3. 3.

    Jemal A, Bray F, Center M, Ferlay J, Ward E, Forman D: Global cancer statistics. CA Cancer J Clin. 2011, 61: 69-90. 10.3322/caac.20107.

    Article  PubMed  Google Scholar 

  4. 4.

    Braun R, Rabinovitz H, Oliviero M, Kopf A, Saurat J: Dermoscopy of pigmented lesions. J Am Acad Dermatol. 2005, 52 (1): 109-121. 10.1016/j.jaad.2001.11.001.

    Article  PubMed  Google Scholar 

  5. 5.

    Stolz W, Riemann A, Cognetta A: ABCD rule of dermatoscopy: A new practical method for early recognition of malignant melanoma. Eur J Dermatol. 1994, 4: 521-527.

    Google Scholar 

  6. 6.

    Argenziano G, Fabbrocini G, Carli P, Giorgi V, Sammarco E, Delfino M: Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: Comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis. Arch Derm. 1998, 134: 1563-1570.

    CAS  Article  PubMed  Google Scholar 

  7. 7.

    Menzies SW, Ingvar C, McCarthy WH: A sensitivity and specificity analysis of the surface microscopy features of invasive melanoma. Melanoma Res. 1996, 6 (1): 55-62. 10.1097/00008390-199602000-00008.

    CAS  Article  PubMed  Google Scholar 

  8. 8.

    Geller AC, Swetter SM, Brooks K, Demierre M, Yaroch AL: Screening, early detection, and trends for melanoma: current status (2000–2006) and future directions. J Am Acad Dermatol. 2007, 57: 555-572. 10.1016/j.jaad.2007.06.032.

    Article  PubMed  Google Scholar 

  9. 9.

    Argenziano G, Soyer HP, Chimenti S, Talamini R, Corona R, Sera F, Binder M, Cerroni L, De Rosa G, Ferrara G, Hofmann-Wellenhof R, Landthaler M, Menzies SW, Pehamberger H, Piccolo D, Rabinovitz HS, Schiffner R, Staibano S, Stolz W, Bartenjev I, Blum A, Braun R, Cabo H, Carli P, De Giorgi V, Fleming MG, Grichnik JM, Grin CM, Halpern AC, Johr R, et al: Dermoscopy of pigmented skin lesions: results of a consensus meeting via the Internet. J Am Acad Dermatol. 2003, 48: 679-693. 10.1067/mjd.2003.281.

    Article  PubMed  Google Scholar 

  10. 10.

    Manousaki AG, Manios AG, Tsompanaki EI, Panayiotides JG, Tsiftsis DD, Kostaki AK, Tosca AD: A simple digital image processing system to aid in melanoma diagnosis in an everyday melanocytic skin lesion unit: a preliminary report. Int J Dermatol. 2006, 45 (4): 402-410. 10.1111/j.1365-4632.2006.02726.x.

    Article  PubMed  Google Scholar 

  11. 11.

    Ganster H, Pinz A, Rohrer R, Wildling E, Binder M, Kittler H: Automated melanoma recognition. IEEE Trans Med Imaging. 2001, 20 (3): 233-239. 10.1109/42.918473.

    CAS  Article  PubMed  Google Scholar 

  12. 12.

    Alcón JF, Ciuhu C, Kate W, Heinrich A, Uzunbajakava N, Krekels G, Siem D, de Haan G: Automatic imaging system with decision support for inspection of pigmented skin lesions and melanoma diagnosis. IEEE J Select Top Sign Process. 2009, 3 (1): 14-25.

    Article  Google Scholar 

  13. 13.

    Garnavi R, Aldeen M: Computer-aided diagnosis of melanoma using border- and wavelet-based texture analysis. IEEE Trans Inf Technol Biomed. 2012, 16 (6): 1239-1251.

    Article  PubMed  Google Scholar 

  14. 14.

    Menzies SW, Bischof L, Talbot H, Gutenev A, Avramidis M, Wong L, Lo SK, Mackellar G, Skladnev V, McCarthy W, Kelly J, Cranney B, Lye P, Rabinovitz H, Oliviero M, Blum A, Varol A, De'Ambrosis B, McCleod R, Koga H, Grin C, Braun R, Johr R: The performance of SolarScan: an automated dermoscopy image analysis instrument for the diagnosis of primary melanoma. Arch Dermatol. 2005, 141: 1388-1396.

    Article  PubMed  Google Scholar 

  15. 15.

    Hoffmann K, Gambichler T, Rick A: Diagnostic and neural analysis of skin cancer (danaos). A multicentre study for collection and computer-aided analysis of data from pigmented skin lesions using digital dermoscopy. Br J Dermatol. 2003, 149: 801-809. 10.1046/j.1365-2133.2003.05547.x.

    CAS  Article  PubMed  Google Scholar 

  16. 16.

    Jamora MJ, Wainwright BD, Meehan SA, Bystryn JC: Improved identification of potentially dangerous pigmented skin lesions by computerized image analysis. Arch Derm. 2003, 139: 195-198.

    Article  PubMed  Google Scholar 

  17. 17.

    Monheit G, Cognetta AB, Ferris L, Rabinovitz H, Gross K, Martini M, Grichnik JM, Mihm M, Prieto VG, Googe P, King R, Toledano A, Kabelev N, Wojton M, Gutkowicz-Krusin D: The performance of MelaFind: a prospective multicenter study. Arch Dermatol. 2011, 147 (2): 188-194. 10.1001/archdermatol.2010.302.

    Article  PubMed  Google Scholar 

  18. 18.

    McCance KL, Huether SE, Brashers VL, Rote NS: Pathophysiology: The Biologic Basis for Disease in Adults and Children. 2009, Maryland Heights, Missouri: Mosby Inc., 6

    Google Scholar 

  19. Garcia-Uribe A, Kehtarnavaz N, Marquez G, Prieto V, Duvic M, Wang LV: Skin cancer detection by spectroscopic oblique-incidence reflectometry: classification and physiological origins. Appl Opt. 2004, 43: 2643-2650. 10.1364/AO.43.002643.

  20. Elbaum M, Kopf A, Rabinovitz H, Langley R, Kamino H: Automatic differentiation of melanoma from melanocytic nevi with multispectral digital dermoscopy: A feasibility study. J Am Acad Dermatol. 2001, 44: 207-218. 10.1067/mjd.2001.110395.

  21. Pham T, Spott T, Svaasand L, Tromberg B: Quantifying the properties of two-layer turbid media with frequency-domain diffuse reflectance. Appl Opt. 2000, 39: 4733-4745. 10.1364/AO.39.004733.

  22. Hielscher A, Liu H, Chance B, Tittel F, Jacques S: Time-resolved photon emission from layered turbid media. Appl Opt. 1996, 35: 719-728. 10.1364/AO.35.000719.

  23. Kienle A, Patterson M, Dognitz N, Bays R, Wagnieres H: Noninvasive determination of the optical properties of two layered turbid media. Appl Opt. 1998, 37: 779-791. 10.1364/AO.37.000779.

  24. Franceschini M, Fantini S, Paunescu L, Maier J, Gratton E: Influence of a superficial layer in the quantitative spectroscopic study of strongly scattering media. Appl Opt. 1998, 37: 7447-7458. 10.1364/AO.37.007447.

  25. Martelli F, Bianco S, Zaccanti G, Pifferi A, Torricelli A, Bassi A, Taroni F, Cubeddu R: Phantom validation and in vivo application of an inversion procedure for retrieving the optical properties of diffusive layered media from time-resolved reflectance measurements. Opt Lett. 2004, 29: 2037-2039. 10.1364/OL.29.002037.

  26. Backman V, Gurjar R, Badizadegan K, Itzkan I, Dasari RR, Perelman LT, Feld MS: Polarized light scattering spectroscopy for quantitative measurement of epithelial cellular structures in situ. IEEE J Select Top Quant Electron. 1999, 5: 1019-1026. 10.1109/2944.796325.

  27. Mourant JR, Johnson TM, Freyer JP: Characterizing mammalian cells and cell phantoms by polarized backscattering fiber-optic measurements. Appl Opt. 2001, 40: 5114-5123. 10.1364/AO.40.005114.

  28. Mourant JR, Johnson TM, Carpenter S, Guerra A, Aida T, Freyer JP: Polarized angular dependent spectroscopy of epithelial cells and epithelial cell nuclei to determine the size scale of scattering structures. J Biomed Opt. 2002, 7: 378-387. 10.1117/1.1483317.

  29. Jacques S, Roman J, Lee K: Imaging superficial tissues with polarized light. Lasers Surg Med. 2000, 26: 119-129. 10.1002/(SICI)1096-9101(2000)26:2<119::AID-LSM3>3.0.CO;2-Y.

  30. Morgan S, Ridgway M: Polarization properties of light backscattered from a two layer scattering medium. Opt Express. 2000, 7: 395-402. 10.1364/OE.7.000395.

  31. Gurjar RS, Backman V, Perelman LT, Georgakoudi I, Badizadegan K, Itzkan I, Dasari RR, Feld MS: Imaging human epithelial properties with polarized light scattering spectroscopy. Nat Med. 2001, 7: 1245-1248. 10.1038/nm1101-1245.

  32. Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH: The WEKA data mining software: an update. SIGKDD Explor Newsl. 2009, 11 (1): 10-18. 10.1145/1656274.1656278.

  33. Wettschereck D, Aha DW, Mohri T: A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. Artif Intell Rev. 1997, 10: 1-37. 10.1016/S0933-3657(97)00380-1.

  34. Nimunkar A, Dhawan A, Relue P, Patwardhan S: Wavelet and statistical analysis for melanoma. Proceedings of SPIE Medical Imaging: Image Processing. Edited by: Sonka M, Fitzpatrick JM. 2002, San Diego, CA: SPIE, 1346-1352. Feb. 23, 2002

  35. Burroni M, Corona R, Dell’Eva G, Sera F, Bono R, Puddu P, Perotti R, Nobile F, Andreassi L, Rubegni P: Melanoma computer-aided diagnosis: reliability and feasibility study. Clin Cancer Res. 2004, 10: 1881-1886. 10.1158/1078-0432.CCR-03-0039.

  36. Walvick R, Patel K, Patwardhan S, Dhawan A: Classification of melanoma using wavelet-transform-based optimal feature set. Proceedings of SPIE Medical Imaging: Image Processing. Edited by: Sonka M, Fitzpatrick JM. 2004, San Diego, CA: SPIE, 944-951. Feb. 14, 2004

  37. Zagrouba E, Barhoumi W: An accelerated system for melanoma diagnosis based on subset feature selection. J Comput Inf Technol. 2005, 1: 69-82.

  38. Li L, Zhang QZ, Ding YH, Jiang HB, Thiers BT, Wang JZ: A computer-aided spectroscopic system for early diagnosis of melanoma. Proceedings of the IEEE 25th International Conference on Tools with Artificial Intelligence. Edited by: Randall B, Zoey V. 2013, Washington, DC: IEEE, 145-150.

Pre-publication history

The pre-publication history for this paper can be accessed here:



Acknowledgements

This work was partially supported by the National Science Foundation [DBI-0960586 and DBI-0960443] and the National Institutes of Health [1 R15 CA131808-01 and 1 R01 HD069374-01 A1].

Author information



Corresponding author

Correspondence to Lin Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

LL implemented the image acquisition, performed the image processing, designed the approach for feature extraction and classification, carried out the evaluation, developed the desktop application, and drafted the manuscript. QZZ developed the spectroscopic device. YHD conducted the extraction of features. HBJ designed the spectroscopic approach for image acquisition. BHT collected the real data set used in this study. JZW led the design of the study. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Li, L., Zhang, Q., Ding, Y. et al. Automatic diagnosis of melanoma using machine learning methods on a spectroscopic system. BMC Med Imaging 14, 36 (2014).

Keywords

  • Melanoma
  • Artificial Neural Network
  • Pixel Intensity
  • Lesion Area
  • Desktop Application