
Explain Confusion Matrix in Machine Learning in Hindi

A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known.

++ True positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
++ True negatives (TN): We predicted no, and they don't have the disease.
++ False positives (FP): We predicted yes, but they don't actually have the disease. (Also known as a "Type I error.")
++ False negatives (FN): We predicted no, but they actually do have the disease. (Also known as a "Type II error.")
++ Accuracy: Overall, how often is the classifier correct? (TP+TN)/total = (100+50)/165 = 0.91
++ Misclassification rate: Overall, how often is it wrong? (FP+FN)/total = (10+5)/165 = 0.09; equivalent to 1 minus accuracy; also known as "error rate".
++ True positive rate: When it's actually yes, how often does it predict yes? TP/actual yes = 100/105 = 0.95; also known as "sensitivity" or "recall".
++ False positive rate: When it's actually no, how often does it predict yes? FP/actual no = 10/60 = 0.17
++ True negative rate: When it's actually no, how often does it predict no? TN/actual no = 50/60 = 0.83; equivalent to 1 minus the false positive rate; also known as "specificity".
++ Precision: When it predicts yes, how often is it correct? TP/predicted yes = 100/110 = 0.91
++ Prevalence: How often does the yes condition actually occur in our sample? actual yes/total = 105/165 = 0.64

Music: http://www.bensound.com
My GitHub: https://github.com/harishkumawat2610
My LinkedIn: https://www.linkedin.com/in/harishkumawat/
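To make the worked numbers above concrete, here is a minimal Python sketch that recomputes every metric from the four counts quoted in the example (TP=100, TN=50, FP=10, FN=5). It uses no libraries; the variable names are just illustrative.

```python
# Counts from the worked example in the description (total = 165).
TP, TN, FP, FN = 100, 50, 10, 5
total = TP + TN + FP + FN                 # 165

accuracy    = (TP + TN) / total           # 0.91 -> how often the classifier is correct
error_rate  = (FP + FN) / total           # 0.09 -> misclassification rate, 1 - accuracy
recall      = TP / (TP + FN)              # 0.95 -> true positive rate / sensitivity
fpr         = FP / (FP + TN)              # 0.17 -> false positive rate
specificity = TN / (TN + FP)              # 0.83 -> true negative rate, 1 - FPR
precision   = TP / (TP + FP)              # 0.91 -> of predicted "yes", fraction correct
prevalence  = (TP + FN) / total           # 0.64 -> fraction of actual "yes" cases

print(f"Accuracy:    {accuracy:.2f}")
print(f"Error rate:  {error_rate:.2f}")
print(f"Recall:      {recall:.2f}")
print(f"FPR:         {fpr:.2f}")
print(f"Specificity: {specificity:.2f}")
print(f"Precision:   {precision:.2f}")
print(f"Prevalence:  {prevalence:.2f}")
```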

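In practice you would usually get the four counts from a library rather than by hand. Below is a hedged sketch using scikit-learn's confusion_matrix; the toy labels are made up for illustration, not taken from the video.

```python
# confusion_matrix returns rows as true classes and columns as predicted
# classes, so with labels=[0, 1] the layout is [[TN, FP], [FN, TP]].
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = has the disease, 0 = does not
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")   # TP=4, TN=4, FP=1, FN=1
```

With that row/column convention, the same formulas from the list above (precision = TP/(TP+FP), recall = TP/(TP+FN), and so on) apply directly to these counts.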