
Explore the options below.

Consider a classification model that separates email into two categories: "spam" or "not spam." If you raise the classification threshold, what will happen to precision?
Definitely increase.
Raising the classification threshold typically increases precision; however, precision is not guaranteed to increase monotonically as we raise the threshold.
Probably increase.
In general, raising the classification threshold reduces false positives, thus raising precision.
Probably decrease.
In general, raising the classification threshold reduces false positives, thus raising precision.
Definitely decrease.
In general, raising the classification threshold reduces false positives, thus raising precision.
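To see this behavior concretely, here is a minimal sketch (not part of the original exercise) that scores a handful of hypothetical emails and computes precision at two thresholds with scikit-learn; the labels and scores below are made up purely for illustration.

```python
import numpy as np
from sklearn.metrics import precision_score

# Hypothetical spam scores and ground-truth labels (1 = spam, 0 = not spam).
y_true   = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])
y_scores = np.array([0.10, 0.40, 0.35, 0.80, 0.55, 0.60, 0.90, 0.65, 0.70, 0.45])

# Raising the threshold drops borderline predictions, which tend to include
# false positives, so precision usually (but not always) goes up.
for threshold in (0.5, 0.7):
    y_pred = (y_scores >= threshold).astype(int)
    print(f"threshold={threshold}: precision={precision_score(y_true, y_pred):.2f}")
```

With these particular scores, precision rises from about 0.67 to 1.00; a different score distribution could make it dip at some thresholds, which is why "probably increase" is the right answer rather than "definitely increase."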

Explore the options below.

Consider a classification model that separates email into two categories: "spam" or "not spam." If you raise the classification threshold, what will happen to recall?
Always increase.
Raising the classification threshold will cause both of the following:
  • The number of true positives will decrease or stay the same.
  • The number of false negatives will increase or stay the same.
Thus, recall will never increase.
Always decrease or stay the same.
Raising the classification threshold will cause the number of true positives to decrease or stay the same and will cause the number of false negatives to increase or stay the same. Thus, recall will either stay constant or decrease.
Always stay constant.
Raising the classification threshold will cause the number of true positives to decrease or stay the same and will cause the number of false negatives to increase or stay the same. Thus, recall will either stay constant or decrease.
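The same kind of sketch (again with made-up scores) shows why recall can only fall or stay flat as the threshold rises: examples can move from true positive to false negative, but never the other way.

```python
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical spam scores and ground-truth labels (1 = spam, 0 = not spam).
y_true   = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])
y_scores = np.array([0.10, 0.40, 0.35, 0.80, 0.55, 0.60, 0.90, 0.65, 0.70, 0.45])

# The denominator of recall (TP + FN) is fixed by the labels, so recall is
# non-increasing in the threshold: each step up can only remove true positives.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_scores >= threshold).astype(int)
    print(f"threshold={threshold}: recall={recall_score(y_true, y_pred):.2f}")
```

For these scores, recall drops from 1.00 to 0.67 to 0.50 as the threshold rises from 0.3 to 0.5 to 0.7; it never increases.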

Explore the options below.

Consider two models—A and B—that each evaluate the same dataset. Which one of the following statements is true?
If model A has better precision than model B, then model A is better.
While better precision is good, it might be coming at the expense of a large reduction in recall. In general, we need to look at both precision and recall together, or summary metrics like AUC, which we'll talk about next.
If model A has better recall than model B, then model A is better.
While better recall is good, it might be coming at the expense of a large reduction in precision. In general, we need to look at both precision and recall together, or summary metrics like AUC, which we'll talk about next.
If model A has better precision and better recall than model B, then model A is probably better.
In general, a model that outperforms another model on both precision and recall is likely the better model. Obviously, we'll need to make sure that comparison is being done at a precision/recall point that is useful in practice for this to be meaningful. For example, suppose our spam detection model needs to have at least 90% precision to be useful and avoid unnecessary false alarms. In this case, comparing one model at {20% precision, 99% recall} to another at {15% precision, 98% recall} is not particularly instructive, as neither model meets the 90% precision requirement. But with that caveat in mind, this is a good way to think about comparing models when using precision and recall.
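As a rough illustration of that comparison, the sketch below (not part of the original exercise) evaluates two hypothetical models on the same labels at the same threshold and checks each against a made-up 90% precision requirement.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical ground-truth labels and scores from two models, A and B.
y_true   = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])
scores_a = np.array([0.20, 0.45, 0.60, 0.80, 0.40, 0.70, 0.90, 0.30, 0.75, 0.45])
scores_b = np.array([0.40, 0.50, 0.55, 0.70, 0.60, 0.50, 0.80, 0.45, 0.45, 0.40])

MIN_PRECISION = 0.9  # example business requirement: avoid false alarms
threshold = 0.5

for name, scores in (("A", scores_a), ("B", scores_b)):
    y_pred = (scores >= threshold).astype(int)
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    verdict = "meets" if p >= MIN_PRECISION else "misses"
    print(f"Model {name}: precision={p:.2f}, recall={r:.2f} "
          f"({verdict} the {MIN_PRECISION:.0%} precision requirement)")
```

With these scores, model A beats model B on both precision and recall and also clears the precision requirement, so it is probably the better model; if neither model cleared the bar, the comparison would tell us little.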