We can compute the confusion (or error) matrix to determine how our manual calculation performed when we classified the prediction outcomes as correct or not:
# Confusion matrix
result <- sql("select outcome, correct, count(*) as k,
                      avg(totrows) as totrows
               from preds_tbl
               where grp = 1
               group by 1, 2
               order by 1, 2")
result$classify_pct <- result$k / result$totrows
display(result)
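For readers without a SQL context available, the same per-cell percentages can be sketched in Python with pandas. The data here is hypothetical stand-in data, not the actual `preds_tbl`; only the grouping logic mirrors the query above.

```python
import pandas as pd

# Hypothetical stand-in for the grp=1 (training) rows of preds_tbl:
# each row is one scored record with its actual outcome and whether
# the model's classification of it was marked correct.
preds = pd.DataFrame({
    "outcome": [1] * 25 + [0] * 75,
    "correct": ["Y"] * 20 + ["N"] * 5 + ["Y"] * 59 + ["N"] * 16,
})

totrows = len(preds)

# Count rows per (outcome, correct) cell, as in the SQL "group by 1, 2".
result = (preds.groupby(["outcome", "correct"])
               .size()
               .reset_index(name="k"))
result["totrows"] = totrows
result["classify_pct"] = result["k"] / result["totrows"]
print(result)
```

Each `classify_pct` is that cell's share of all training rows, so the four cells sum to 100%.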
To determine the grand total of correct model predictions, sum the `classify_pct` values of the correct=Y rows above:
Summary of correct predictions for the training group:

| Prediction                    | Percent |
|-------------------------------|---------|
| Correctly predicted outcome=1 | 20%     |
| Correctly predicted outcome=0 | 59%     |
| Total Correct Percentage      | 79%     |
You can see that the model has much more predictive power for outcome=0 than for outcome=1.