






A series of questions testing the understanding of ROC curves, Precision-Recall curves, and evaluating machine learning models on imbalanced datasets. The questions cover topics such as calculating confidence intervals, identifying false statements, and comparing ROC curves.
A. The ROC curve summarizes the trade-off between the true positive rate and the positive predictive value for a model.
B. The Precision-Recall curve summarizes the trade-off between the true positive rate and the false positive rate for a model.
C. In both imbalanced and balanced datasets, the area under the curve (AUC) can be used as a summary of model performance.
D. If we decrease the false negatives (select more positives), recall always increases, but precision may increase or decrease.

2. A: False, B: False, C: True, D: True
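To see why statement D holds, here is a minimal sketch with hypothetical scores and labels (scikit-learn assumed available): as the decision threshold is lowered, more examples are selected as positive, so recall never decreases, while precision can rise or fall.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical predicted scores and true labels for 8 examples.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.95, 0.90, 0.80, 0.70, 0.65, 0.55, 0.40, 0.30])

# Lowering the threshold selects more positives (fewer false negatives):
# recall is non-decreasing, precision may go up or down.
for threshold in [0.85, 0.60, 0.35]:
    y_pred = (scores >= threshold).astype(int)
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold:.2f}  precision={p:.2f}  recall={r:.2f}")
```

On this toy data, recall climbs from 0.25 to 1.00 as the threshold drops, while precision first rises (0.50 to 0.60) and then falls (to about 0.57).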
If you have an imbalanced dataset, accuracy can give you a misleading picture of the classifier's performance; it is better to rely on precision and recall.
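As a quick illustration on a synthetic toy dataset (not from the slides): a classifier that always predicts the majority class reaches 99% accuracy on a 99:1 imbalanced set, yet its precision and recall on the minority class are zero.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 990 negatives and 10 positives -- a heavily imbalanced toy dataset.
y_true = np.array([0] * 990 + [1] * 10)

# A "classifier" that always predicts the majority (negative) class.
y_pred = np.zeros_like(y_true)

print("accuracy :", accuracy_score(y_true, y_pred))                    # 0.99
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall   :", recall_score(y_true, y_pred))                      # 0.0
```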
Q2-1: A learned model h makes 10 errors over 100 test instances. Calculate the 95% confidence interval, i.e., with approximately 95% probability the true error lies in the interval ______. Take z_C = 2.
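Assuming the question intends the usual binomial approximation, error_S(h) ± z_C · sqrt(error_S(h)(1 − error_S(h)) / n), the sketch below works through the arithmetic for these numbers.

```python
import math

n = 100        # number of test instances
errors = 10    # misclassified instances
z_c = 2        # z value for ~95% confidence, as given in the question

error_rate = errors / n                                  # sample error = 0.10
std_err = math.sqrt(error_rate * (1 - error_rate) / n)   # sqrt(0.1*0.9/100) = 0.03
lower = error_rate - z_c * std_err
upper = error_rate + z_c * std_err
print(f"95% CI: [{lower:.2f}, {upper:.2f}]")             # [0.04, 0.16]
```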
Q2-2: Which of the following statements is FALSE?
If the p-value is sufficiently small, then reject the null hypothesis.
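The options for Q2-2 are not shown in this excerpt; as a generic illustration of the decision rule above, the sketch below compares two models' hypothetical per-fold error rates with a paired t-test and rejects the null hypothesis when p falls below the chosen significance level.

```python
from scipy import stats

# Hypothetical per-fold error rates of two models (10-fold cross-validation).
errors_a = [0.12, 0.10, 0.14, 0.11, 0.13, 0.09, 0.12, 0.15, 0.10, 0.11]
errors_b = [0.16, 0.14, 0.18, 0.15, 0.17, 0.13, 0.15, 0.19, 0.14, 0.16]

alpha = 0.05
t_stat, p_value = stats.ttest_rel(errors_a, errors_b)
print(f"p-value = {p_value:.4f}")
if p_value < alpha:
    print("p is sufficiently small: reject the null hypothesis of no difference.")
else:
    print("p is not small enough: fail to reject the null hypothesis.")
```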
A. The dashed black line represents random classification.
B. The ROC curve for any model can't fall below the dashed black line.
C. The model represented by the solid blue line is better than the one represented by the solid lime line.
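The figure these statements refer to is not included here; the sketch below (synthetic data, with scikit-learn and matplotlib assumed available) shows how such a plot is typically produced: ROC curves for two models plus the dashed diagonal that represents random classification.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic binary-classification data (a stand-in for the data behind the figure).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for name, model, style in [
    ("model 1", LogisticRegression(max_iter=1000), "b-"),
    ("model 2", DecisionTreeClassifier(max_depth=3, random_state=0), "g-"),
]:
    probs = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    fpr, tpr, _ = roc_curve(y_te, probs)
    plt.plot(fpr, tpr, style, label=f"{name} (AUC={roc_auc_score(y_te, probs):.2f})")

# Dashed black diagonal: the expected ROC curve of a random classifier.
plt.plot([0, 1], [0, 1], "k--", label="random classifier")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```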