NMR Propolis pipeline 1

The propolis peak list data was read and stored in a list containing two elements: the dataset, itself a list whose elements are the samples with their ppm intensities, and the metadata. The propolis metadata consists of the seasons and agroregions.

setwd("~/Dropbox")
library(metabolomicsUM)
source("Datasets/Propolis/NMR/scripts/propolis_metadata.R")

prop.nmr.metadata.file = "Datasets/Propolis/NMR/metadata/metadata_propolis_agro.csv"
prop.nmr.data.folder = "Datasets/Propolis/NMR/data"

get.metadata.agro(prop.nmr.data.folder, write.file = TRUE, file.name = prop.nmr.metadata.file)
prop.nmr.metadata = read.metadata(prop.nmr.metadata.file)

peaks.lists = read.csvs.folder(prop.nmr.data.folder)
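
To make the structure described above more concrete, here is a toy sketch of what such a peak list could look like, assuming each sample is represented as a two-column table of ppm and intensity values (the sample names come from the dataset, but the values are made up for illustration):

# toy illustration only - not the actual propolis data read above
peaks.lists.toy = list(
  AC_au = data.frame(ppm = c(1.90, 2.05, 6.13), intensity = c(0.12, 0.45, 0.08)),
  AN_au = data.frame(ppm = c(1.91, 2.06, 6.17), intensity = c(0.10, 0.50, 0.07))
)
str(peaks.lists.toy)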

The agroregions metadata was used, peaks were grouped with our own peak grouping algorithm, peak groups with less than 25% of values were removed, missing values were imputed with a low value, and no normalization was applied.

PREPROCESSING

Resonances in selected regions were removed, and our own peak grouping algorithm was applied with step = 0.03:

# removing resonances in selected regions
peaks.lists = remove.peaks.interval.sample.list(peaks.lists, 0, 0.19)
peaks.lists = remove.peaks.interval.sample.list(peaks.lists, 3.29, 3.31)
peaks.lists = remove.peaks.interval.sample.list(peaks.lists, 4.85, 5)

#group peaks
prop.nmr.ds = group.peaks(peaks.lists, type = "nmr-peaks", metadata = prop.nmr.metadata, description = "NMR propolis", label.x = "ppm", label.values = "intensity")
sum.dataset(prop.nmr.ds)
## Dataset summary:
## Valid dataset
## Description:  NMR propolis 
## Type of data:  nmr-peaks 
## Number of samples:  59 
## Number of data points 293 
## Number of metadata variables:  2 
## Label of x-axis values:  ppm 
## Label of data points:  intensity 
## Number of missing values in data:  5376 
## Mean of data values:  0.09016594 
## Median of data values:  0.0287 
## Standard deviation:  0.1904829 
## Range of values:  0 10 
## Quantiles: 
##      0%     25%     50%     75%    100% 
##  0.0000  0.0081  0.0287  0.0929 10.0000
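
To illustrate the idea behind grouping peaks with a fixed ppm step, here is a minimal base-R binning sketch with step = 0.03 (a simplification for illustration, not the package's own grouping algorithm):

# bin toy ppm values into groups of width 0.03 (illustration only)
step = 0.03
ppm = c(1.901, 1.912, 1.925, 2.050, 2.058, 6.131)
bins = round(ppm / step) * step   # assign each peak to the centre of its bin
split(ppm, bins)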

Peak groups with less than 25% of values were removed:

nsamps = num.samples(prop.nmr.ds)
prop.nmr.ds = remove.variables.by.nas(prop.nmr.ds,  0.75*nsamps)

There are 2659 missing values in the dataset; these will be replaced with a low value (0.00005).

prop.nmr.na = missingvalues.imputation(prop.nmr.ds, method="value", value = 0.00005)
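
A minimal base-R sketch of these two steps (the 25% filter and the low-value imputation) on a toy intensity matrix, with samples as rows and peak groups as columns; this illustrates the logic only and is not the package implementation:

# toy matrix: 3 samples x 4 peak groups (illustration only)
m.toy = matrix(c(0.10, 0.25, NA,   0.31,
                 NA,   0.12, NA,   0.44,
                 NA,   NA,   NA,   0.09), nrow = 3, byrow = TRUE)
keep = colMeans(!is.na(m.toy)) >= 0.25   # keep peak groups with at least 25% of values
m.toy = m.toy[, keep, drop = FALSE]
m.toy[is.na(m.toy)] = 0.00005            # impute remaining missing values with a low value
m.toy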

UNIVARIATE TESTS

An analysis of variance (ANOVA) was conducted on the data, together with Tukey's post-hoc test; the top 20 results, ordered by p-value, are shown below:

anova.prop.nmr.na = aov.all.vars(prop.nmr.na, "agroregions")
anova.prop.nmr.na[1:20,]
##           pvalues     logs          fdr                              tukey
## 6.13 2.607511e-06 5.583774 0.0006310178 Plain-Highlands; Plateau-Highlands
## 6.17 1.275155e-05 4.894437 0.0015429370 Plain-Highlands; Plateau-Highlands
## 2.5  2.540262e-05 4.595122 0.0018893344 Plain-Highlands; Plateau-Highlands
## 6.09 3.309534e-05 4.480233 0.0018893344 Plain-Highlands; Plateau-Highlands
## 1.9  3.903583e-05 4.408537 0.0018893344 Plain-Highlands; Plateau-Highlands
## 2.95 5.957016e-05 4.224971 0.0024026632 Plain-Highlands; Plateau-Highlands
## 9.51 8.640438e-05 4.063464 0.0029871227 Plain-Highlands; Plateau-Highlands
## 6.75 1.711229e-04 3.766692 0.0047148604 Plain-Highlands; Plateau-Highlands
## 5.21 1.753460e-04 3.756104 0.0047148604 Plain-Highlands; Plateau-Highlands
## 7.78 3.539747e-04 3.451028 0.0085661869 Plain-Highlands; Plateau-Highlands
## 2.02 4.253408e-04 3.371263 0.0090007500 Plain-Highlands; Plateau-Highlands
## 6.96 4.463182e-04 3.350355 0.0090007500 Plain-Highlands; Plateau-Highlands
## 5.28 9.225031e-04 3.035032 0.0171236034     Plain-Highlands; Plateau-Plain
## 2.47 1.023137e-03 2.990066 0.0171236034 Plain-Highlands; Plateau-Highlands
## 1.99 1.134243e-03 2.945294 0.0171236034     Plain-Highlands; Plateau-Plain
## 2.92 1.135982e-03 2.944628 0.0171236034 Plain-Highlands; Plateau-Highlands
## 2.11 1.202898e-03 2.919771 0.0171236034     Plain-Highlands; Plateau-Plain
## 1.93 1.563358e-03 2.805942 0.0210184791 Plain-Highlands; Plateau-Highlands
## 2.38 1.961535e-03 2.707404 0.0249837635                      Plateau-Plain
## 6.78 2.119088e-03 2.673851 0.0252589164 Plain-Highlands; Plateau-Highlands
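
For reference, here is a base-R sketch of what such a test looks like for a single variable on toy data (the package applies this over all variables and adjusts the p-values for multiple testing):

# one-way ANOVA, Tukey's post-hoc test and FDR adjustment (toy data, illustration only)
set.seed(1)
agro = factor(rep(c("Highlands", "Plain", "Plateau"), each = 10))
x = rnorm(30) + as.numeric(agro)                 # toy intensities for one peak group
fit = aov(x ~ agro)
summary(fit)[[1]][["Pr(>F)"]][1]                 # ANOVA p-value
TukeyHSD(fit)                                    # pairwise post-hoc comparisons
p.adjust(c(0.001, 0.02, 0.2), method = "fdr")    # FDR adjustment over a set of p-values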

A heatmap with the correlations between all the variables is shown below:

correl.prop.nmr.na = correlations.dataset(prop.nmr.na, method = "pearson")
heatmap(correl.prop.nmr.na, col =  topo.colors(256))

CLUSTERING

Hierarchical clustering with Euclidean distance and complete linkage was performed on the data, and the resulting dendrogram is shown below:

hc.prop.nmr.na = clustering(prop.nmr.na, method = "hc", distance = "euclidean")
dendrogram.plot.col(prop.nmr.na, hc.prop.nmr.na, "agroregions")
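
The wrapper call above corresponds roughly to the following base-R steps (a sketch on toy data; the package may differ in details such as scaling):

# toy matrix: 5 samples, 10 variables (illustration only)
m.toy = matrix(rnorm(5 * 10), nrow = 5)
hc = hclust(dist(m.toy, method = "euclidean"), method = "complete")
plot(hc)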

K-means clustering was also performed on the data, with 4 centers. The results are shown below, together with a plot giving, for each cluster, the median of its samples in blue and the values of all samples in that cluster in grey:

kmeans.prop.nmr.na = clustering(prop.nmr.na, method = "kmeans", num.clusters = 4)
kmeans.plot(prop.nmr.na, kmeans.prop.nmr.na)

kmeans.df = kmeans.result.df(kmeans.prop.nmr.na, 4)
kmeans.df
##   cluster
## 1       1
## 2       2
## 3       3
## 4       4
##                                                                                                                                                                                samples
## 1                                                                                                                                                                                XX_sm
## 2                                                                                                                                            DC_au SA_au SJ_au UR_au SJ_sp UR_sp UR_wi
## 3 AC_au AN_au JB_au PU_au VR_au AC_sm AN_sm BR_sm CE_sm CN_sm DC_sm FP_sm JB_sm SA_sm SJC_sm SJ_sm UR_sm VR_sm AC_sp AN_sp DC_sp FP_sp JB_sp SA_sp VR_sp AC_wi AN_wi DC_wi FP_wi JB_wi
## 4                                                      BR_au CE_au CN_au IT_au SJC_au XX_au IT_sm PU_sm BR_sp CE_sp CN_sp IT_sp PU_sp SJC_sp BR_wi CE_wi CN_wi PU_wi SA_wi SJ_wi XX_wi
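
As a side note, here is a base-R sketch of how the per-cluster medians shown in blue could be computed on toy data (not the package's plotting code):

# toy matrix: 40 samples, 6 variables, clustered into 4 groups (illustration only)
set.seed(2)
m.toy = matrix(rnorm(40 * 6), nrow = 40)
km = kmeans(m.toy, centers = 4)
apply(m.toy[km$cluster == 1, , drop = FALSE], 2, median)   # median profile of cluster 1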

PCA

Principal component analysis (PCA) was performed on the data and some plots are shown below:

pca.analysis.result = pca.analysis.dataset(prop.nmr.na)

pca.pairs.plot(prop.nmr.na, pca.analysis.result, "agroregions")

pca.screeplot(pca.analysis.result)

pca.scoresplot2D(prop.nmr.na, pca.analysis.result, "agroregions", ellipses = T)

pca.kmeans.plot2D(prop.nmr.na, pca.analysis.result, kmeans.result = kmeans.prop.nmr.na, ellipses = T)
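
A base-R sketch of the underlying decomposition on toy data; it assumes the wrapper performs something equivalent to prcomp on the samples-by-variables matrix (the centering/scaling options here are an assumption):

# toy matrix with 59 samples and 20 variables (illustration only)
m.toy = matrix(rnorm(59 * 20), nrow = 59)
pca = prcomp(m.toy, center = TRUE, scale. = FALSE)
summary(pca)$importance[, 1:3]   # variance explained by the first PCs
head(pca$x[, 1:3])               # scores used in the scores plots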

MACHINE LEARNING

For classification models and prediction, the following parameters were used:

- models: PLS, J48, JRip, SVM (linear kernel) and Random Forests
- validation method: repeated cross-validation
- number of folds: 10
- number of repeats: 10
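
The repeated cross-validation and tuning parameters above match caret's interface, so here is a minimal caret sketch of this validation setup on toy data, with only one of the five models (random forests) and default accuracy-based tuning; it assumes the caret and randomForest packages are installed and is an illustration, not the package's exact call:

library(caret)
set.seed(42)
# toy data: 60 samples, 10 variables, 3 agroregion classes (illustration only)
df.toy = data.frame(matrix(rnorm(60 * 10), nrow = 60))
df.toy$agroregions = factor(rep(c("Highlands", "Plain", "Plateau"), each = 20))
ctrl = trainControl(method = "repeatedcv", number = 10, repeats = 10)
fit = train(agroregions ~ ., data = df.toy, method = "rf",
            trControl = ctrl, tuneLength = 3)
fit$results   # resampled performance for each tuning value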

Below are some results with the best tune for each model:

ml.prop.nmr = train.models.performance(prop.nmr.na, c("pls", "J48", "JRip", "svmLinear", "rf"), "agroregions", "repeatedcv", num.folds = 10, num.repeats = 10, tunelength = 20, metric = "ROC")
ml.prop.nmr$performance
##            Accuracy     Kappa Sensitivity Specificity       ROC AccuracySD
## pls       0.7443452 0.4210370   0.5797222   0.7929444 0.7817130  0.1211340
## J48       0.5735714 0.2241531   0.5002778   0.7405000 0.6355185  0.1469221
## JRip      0.5403571 0.1079771   0.4211111   0.6992222 0.5844838  0.1906540
## svmLinear 0.6363333 0.1767910   0.4416667   0.7172778 0.7434815  0.1548496
## rf        0.7420476 0.4483832   0.6319444   0.8046111 0.8073843  0.1466482
##             KappaSD SensitivitySD SpecificitySD     ROCSD
## pls       0.3003459     0.1935266    0.09372990 0.1745594
## J48       0.2594299     0.1852060    0.09021284 0.1481117
## JRip      0.3285980     0.2151556    0.11559154 0.1633691
## svmLinear 0.3168982     0.1845739    0.09775793 0.1580598
## rf        0.3346924     0.2192341    0.11150898 0.1509519

The confusion matrices are also shown below, together with a plot over the first 3 PCs showing the separation of the classes (agroregions):

ml.prop.nmr$confusion.matrices
## $pls
## Cross-Validated (10 fold, repeated 10 times) Confusion Matrix 
## 
## (entries are percentages of table totals)
##  
##            Reference
## Prediction  Highlands Plain Plateau
##   Highlands       9.9   0.0     0.3
##   Plain           0.6   5.4     1.6
##   Plateau         9.7  13.3    59.2
## 
## 
## $J48
## Cross-Validated (10 fold, repeated 10 times) Confusion Matrix 
## 
## (entries are percentages of table totals)
##  
##            Reference
## Prediction  Highlands Plain Plateau
##   Highlands      10.9   2.8    11.2
##   Plain           0.3   5.6     9.1
##   Plateau         9.0  10.2    40.9
## 
## 
## $JRip
## Cross-Validated (10 fold, repeated 10 times) Confusion Matrix 
## 
## (entries are percentages of table totals)
##  
##            Reference
## Prediction  Highlands Plain Plateau
##   Highlands       3.6   1.6     6.0
##   Plain           1.7   6.9    11.6
##   Plateau        14.8  10.2    43.6
## 
## 
## $svmLinear
## Cross-Validated (10 fold, repeated 10 times) Confusion Matrix 
## 
## (entries are percentages of table totals)
##  
##            Reference
## Prediction  Highlands Plain Plateau
##   Highlands       2.3   0.9     1.4
##   Plain           0.4   5.5     3.9
##   Plateau        17.5  12.3    55.9
## 
## 
## $rf
## Cross-Validated (10 fold, repeated 10 times) Confusion Matrix 
## 
## (entries are percentages of table totals)
##  
##            Reference
## Prediction  Highlands Plain Plateau
##   Highlands      10.0   0.0     2.3
##   Plain           0.0   8.5     3.3
##   Plateau        10.1  10.1    55.6
pls.model = ml.prop.nmr$final.models$pls
pca.plot.3d(prop.nmr.na, pls.model, "agroregions")

And the variable importance for the three agroregion classes, for all models:

summary.var.importance(ml.prop.nmr, 10)
## $pls
##      Highlands     Plain  Plateau     Mean
## 2.05  96.82847  78.92434 99.82376 91.85886
## 1.18  26.41049  85.64895 70.78461 60.94802
## 1.58  29.19843 100.00000 50.73180 59.97675
## 6.75  95.21293  18.31625 66.27773 59.93564
## 2.02  88.41195  31.43812 59.78060 59.87689
## 3.75  81.46462  12.66070 83.06092 59.06208
## 1.15  54.88903  43.57149 74.66809 57.70953
## 3.81  85.00852  41.50461 42.44480 56.31931
## 1.99  82.29055  27.85301 54.37382 54.83913
## 1.9   84.15404  23.85125 53.71409 53.90646
## 
## $J48
##      Highlands     Plain  Plateau     Mean
## 2.11 100.00000 100.00000 88.92950 96.30983
## 5.17  96.24021  96.24021 85.16971 92.55004
## 1.87 100.00000 100.00000 77.44125 92.48042
## 1.9   97.49347  97.49347 72.56745 89.18480
## 5.28  94.98695  94.98695 77.44125 89.13838
## 5.42  96.24021  96.24021 74.72585 89.06876
## 5.62  96.24021  96.24021 73.05483 88.51175
## 2.2   93.73368  93.73368 78.06789 88.51175
## 0.66  86.21410  86.21410 86.21410 86.21410
## 2.86  74.93473  91.64491 91.64491 86.07485
## 
## $JRip
##      Overall Mean
## 2.11     100  100
## 2.95     100  100
## 3.25     100  100
## 0.27       0    0
## 0.31       0    0
## 0.34       0    0
## 0.41       0    0
## 0.44       0    0
## 0.48       0    0
## 0.5        0    0
## 
## $svmLinear
##      Highlands     Plain  Plateau     Mean
## 2.11 100.00000 100.00000 88.92950 96.30983
## 5.17  96.24021  96.24021 85.16971 92.55004
## 1.87 100.00000 100.00000 77.44125 92.48042
## 1.9   97.49347  97.49347 72.56745 89.18480
## 5.28  94.98695  94.98695 77.44125 89.13838
## 5.42  96.24021  96.24021 74.72585 89.06876
## 5.62  96.24021  96.24021 73.05483 88.51175
## 2.2   93.73368  93.73368 78.06789 88.51175
## 0.66  86.21410  86.21410 86.21410 86.21410
## 2.86  74.93473  91.64491 91.64491 86.07485
## 
## $rf
##        Overall      Mean
## 2.86 100.00000 100.00000
## 5.17  63.87155  63.87155
## 5.31  62.14360  62.14360
## 6.17  52.97779  52.97779
## 2.11  52.38290  52.38290
## 2.65  49.24093  49.24093
## 6.13  43.27100  43.27100
## 6.75  39.64729  39.64729
## 2.8   38.87769  38.87769
## 3.25  34.38245  34.38245

FEATURE SELECTION

Recursive feature selection was performed with a random forests classifier over various subset sizes. The results are shown below:

feature.selection.result = feature.selection(prop.nmr.na, "agroregions", method="rfe", functions = rfFuncs, validation = "repeatedcv", repeats = 5, subsets = 2^(1:6))
feature.selection.result
## 
## Recursive feature selection
## 
## Outer resampling method: Cross-Validated (10 fold, repeated 5 times) 
## 
## Resampling performance over subset size:
## 
##  Variables Accuracy  Kappa AccuracySD KappaSD Selected
##          2   0.5685 0.1362     0.1772  0.3001         
##          4   0.5903 0.1989     0.1883  0.3481         
##          8   0.6690 0.3325     0.1290  0.2888         
##         16   0.7150 0.4163     0.1605  0.3538         
##         32   0.7485 0.4788     0.1556  0.3357         
##         64   0.7423 0.4531     0.1538  0.3444         
##        242   0.7503 0.4644     0.1576  0.3503        *
## 
## The top 5 variables (out of 242):
##    X6.17, X2.11, X6.13, X5.21, X2.86
plot(feature.selection.result, type=c("g","o"))
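
The rfFuncs object and the resampling summary above come from caret, so here is a minimal caret sketch of recursive feature elimination with a random forest model on toy data (subset sizes and repeats are reduced, and this is an illustration rather than the package's exact call):

library(caret)
set.seed(7)
# toy data: 60 samples, 30 variables, 3 agroregion classes (illustration only)
x.toy = data.frame(matrix(rnorm(60 * 30), nrow = 60))
y.toy = factor(rep(c("Highlands", "Plain", "Plateau"), each = 20))
rfe.ctrl = rfeControl(functions = rfFuncs, method = "repeatedcv", number = 10, repeats = 2)
rfe.toy = rfe(x.toy, y.toy, sizes = 2^(1:4), rfeControl = rfe.ctrl)
rfe.toy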

Selection by filter was also used, with the results shown below:

feature.selection.result2 = feature.selection(prop.nmr.na, "agroregions", method="filter", functions = rfSBF, validation = "repeatedcv", repeats = 5, subsets = 2^(1:6))
feature.selection.result2
## 
## Selection By Filter
## 
## Outer resampling method: Cross-Validated (10 fold, repeated 5 times) 
## 
## Resampling performance:
## 
##  Accuracy  Kappa AccuracySD KappaSD
##    0.7434 0.4403     0.1677  0.3695
## 
## Using the training set, 73 variables were selected:
##    X0.5, X0.56, X0.61, X0.66, X0.99...
## 
## During resampling, the top 5 selected variables (out of a possible 129):
##    X0.5 (100%), X0.56 (100%), X0.61 (100%), X0.66 (100%), X0.99 (100%)
## 
## On average, 66.2 variables were selected (min = 50, max = 82)
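
Similarly, the selection-by-filter step maps onto caret's sbf interface (rfSBF is caret's random forest filter functions object); a minimal sketch on toy data, again an illustration rather than the package's exact call:

library(caret)
set.seed(7)
# toy data: 60 samples, 30 variables, 3 agroregion classes (illustration only)
x.toy = data.frame(matrix(rnorm(60 * 30), nrow = 60))
y.toy = factor(rep(c("Highlands", "Plain", "Plateau"), each = 20))
sbf.ctrl = sbfControl(functions = rfSBF, method = "repeatedcv", number = 10, repeats = 2)
sbf.toy = sbf(x.toy, y.toy, sbfControl = sbf.ctrl)
sbf.toy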