Figure 3
Confusion matrices and radar plots for a perfect classifier (a, b), for the best classifier, a decision tree with AdaBoost (c, d), and for the performance of the best classifier on new data (e, f). The confusion matrices (a, c, e) give the counts for the four possible classification outcomes: true negatives at the top left, true positives at the bottom right, false negatives at the top right and false positives at the bottom left. The perfect classifier has no misclassifications, whereas the decision tree with AdaBoost places three class `0' samples and four class `1' samples into the wrong category. For the new data, one sample has been identified as a false positive and four as false negatives. The classification outcomes serve as the basis for calculating classification accuracy (ACC), classification error, sensitivity, specificity, false-positive rate (FPR), precision and F1 score, which are plotted in the radar plots (b, d, f). The ROC AUC value is determined by calculating the area under the receiver operating characteristic (ROC) curve.
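Since the caption defines each radar-plot metric in terms of the four confusion-matrix counts, a minimal Python sketch of those calculations may be helpful. The FP and FN counts (3 and 4) follow the decision-tree example in panel (c); the TN and TP totals are hypothetical placeholders, as the caption does not state the sample sizes. ROC AUC is omitted because it requires the classifier's continuous scores rather than a single confusion matrix.

```python
# Sketch of the metrics plotted in the radar plots (b, d, f), computed
# from the four confusion-matrix counts.
# fp = 3 and fn = 4 match the decision-tree example in panel (c);
# tn = 50 and tp = 43 are hypothetical placeholders.
tn, fp, fn, tp = 50, 3, 4, 43

acc = (tp + tn) / (tp + tn + fp + fn)  # classification accuracy (ACC)
class_error = 1.0 - acc                # classification error
sensitivity = tp / (tp + fn)           # true-positive rate
specificity = tn / (tn + fp)           # true-negative rate
fpr = fp / (fp + tn)                   # false-positive rate (= 1 - specificity)
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)  # F1 score

print(f"ACC={acc:.3f}, error={class_error:.3f}, sensitivity={sensitivity:.3f}, "
      f"specificity={specificity:.3f}, FPR={fpr:.3f}, precision={precision:.3f}, "
      f"F1={f1:.3f}")
```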