To use a confusion matrix to calculate accuracy, you start by filling out the matrix with the outcomes from your classification model: one axis (typically the horizontal one) represents the predicted classes, and the other (typically the vertical one) represents the actual classes. Each cell counts how many instances of a given actual class were predicted as a given class.
Calculating accuracy from the confusion matrix is straightforward once the matrix is populated. Accuracy is the sum of the diagonal values (where the predicted label matches the actual label) divided by the total number of data points. In the binary case, the diagonal is the sum of the True Positives and True Negatives, where true positives are the instances correctly predicted as positive by the model, true negatives are the instances correctly predicted as negative, and the total population is the sum of all cells in the matrix.
This calculation gives you the overall rate of correct predictions made by the model across all classes, providing a clear measure of how effective the model is at classifying new data correctly.
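The calculation above can be sketched in a few lines of Python. The matrix below uses hypothetical counts for a three-class problem; rows are actual classes and columns are predicted classes, matching the layout described earlier.

```python
# A minimal sketch: accuracy from a 3-class confusion matrix.
# Rows = actual classes, columns = predicted classes (hypothetical counts).
confusion = [
    [50,  3,  2],   # actual class 0
    [ 4, 40,  6],   # actual class 1
    [ 1,  5, 39],   # actual class 2
]

# Correct predictions lie on the diagonal (predicted label == actual label).
correct = sum(confusion[i][i] for i in range(len(confusion)))

# Total population is the sum of every cell in the matrix.
total = sum(sum(row) for row in confusion)

accuracy = correct / total
print(accuracy)  # 129 correct out of 150 -> 0.86
```

The same diagonal-over-total computation works for any number of classes, which is why this formulation is preferred over naming individual cells once you move beyond two classes.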

In the binary case, the confusion matrix gives the counts of predictions that are True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). Accuracy is then (TP + TN) divided by (TP + TN + FP + FN).