Precision and Recall

Introduction

This is the third blog in our “Model Evaluation Metrics” series. I hope you are all familiar with the terms involved in the confusion matrix; if you are not, please read our earlier Confusion Matrix tutorial first.

Precision and Recall

So far we have discussed accuracy and the confusion matrix, and we have already seen the disadvantages of accuracy. When accuracy alone cannot support a conclusion about the model, we turn to a number of other metrics. Two of the most important are precision and recall.

We will now work through an example to discuss precision and recall, which should give a better understanding of why we need them.

Consider a use case in which a store owner wants to identify which customers should receive a gift with their purchase, so that they are more likely to visit the store again. We have attributes such as Name, Age, Gender, Total Price of all goods purchased, Visited Previously or Not, etc. We build a model and now want to evaluate it. We would initially go for accuracy, but we should check the values of precision and recall as well, because sometimes accuracy alone cannot give a sound conclusion, which will become clear by the end of this blog.

Precision

Precision is mathematically defined as the ratio of true positives (TP) to the sum of true positives (TP) and false positives (FP).

Precision basically answers the question: “Of all the positive predictions, how many are actually positive?”

Precision = TP / (TP + FP)

For the above use case, precision is:

Precision = No. of customers correctly identified to receive a gift / (No. of customers correctly identified to receive a gift + No. of customers given a gift who should not have received one)
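For concreteness, here is a minimal Python sketch for the gift use case. The labels are made up purely for illustration (1 = customer should get a gift, 0 = should not):

# Hypothetical ground truth and model predictions for eight customers.
actual    = [1, 0, 1, 1, 0, 1, 0, 0]
predicted = [1, 1, 1, 0, 0, 1, 1, 0]

# Count true positives and false positives by comparing the two lists.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)

precision = tp / (tp + fp)
print(precision)  # 3 / (3 + 2) = 0.6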

Recall

Recall is mathematically defined as the ratio of true positives (TP) to the sum of true positives (TP) and false negatives (FN).

Recall basically answers the question: “Of all the actual positives, how many are identified correctly?”

Recall = TP / (TP + FN)

For the above use case, recall is:

Recall = No. of customers correctly identified to receive a gift / (No. of customers correctly identified to receive a gift + No. of customers who should have received a gift but were not given one)
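Using the same made-up labels as in the precision sketch above, recall only needs the false negatives in addition:

# Same hypothetical ground truth and predictions as before.
actual    = [1, 0, 1, 1, 0, 1, 0, 0]
predicted = [1, 1, 1, 0, 0, 1, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

recall = tp / (tp + fn)
print(recall)  # 3 / (3 + 1) = 0.75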

Precision and Recall: Tug Of War

To fully evaluate a model, one needs to examine precision and recall side by side, which can be a tedious task: typically, as we increase precision, recall decreases, and vice versa.

Hence, one solution is to decide, for the particular problem at hand, which of the two is more important: precision or recall.

Another solution is to take a weighted average of precision and recall, which we discuss in the F1 Score section below.

Trade-Off between Precision and Recall

As we increase precision, recall decreases, and vice versa. The trade-off is driven by the classification threshold: raising the threshold makes the model predict the positive class less often, which tends to raise precision but lower recall. The sketch below illustrates this trade-off.
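This minimal sketch uses hypothetical scores and labels for the gift use case and sweeps the threshold; you should see precision rise and recall fall as the threshold increases:

import numpy as np

# Hypothetical predicted probabilities and true labels (1 = give gift).
y_true   = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])
y_scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])

for threshold in (0.3, 0.5, 0.75):
    y_pred = (y_scores >= threshold).astype(int)  # predict positive above threshold
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"threshold={threshold:.2f}  precision={precision:.2f}  recall={recall:.2f}")

# threshold=0.30  precision=0.62  recall=1.00
# threshold=0.50  precision=0.67  recall=0.80
# threshold=0.75  precision=1.00  recall=0.60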

That is precision and recall in a nutshell. Sometimes we are still not able to draw a conclusion from either metric alone. Therefore, we can define a new metric that consolidates the two.

The F1 score is one such metric, derived from precision and recall. Let’s discuss it.

F1 Score

The F1 score is the harmonic mean of precision and recall, a kind of weighted average that punishes a large gap between the two:

F1 = 2 × (Precision × Recall) / (Precision + Recall)

When accuracy doesn’t give a reliable picture, one has to go for precision, recall, and metrics such as the F1 score.

The F1 score is generally used when the data is imbalanced, i.e., when there is a large difference between the numbers of examples in the positive and negative classes.
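A tiny sketch with made-up confusion-matrix counts shows why: on an imbalanced dataset, accuracy can look excellent while the F1 score exposes a weak model:

# Hypothetical counts: 50 positives out of 1000 examples.
tp, fp, fn, tn = 5, 5, 45, 945

accuracy  = (tp + tn) / (tp + fp + fn + tn)           # 0.95 -- looks great
precision = tp / (tp + fp)                            # 0.50
recall    = tp / (tp + fn)                            # 0.10 -- misses 90% of positives
f1 = 2 * (precision * recall) / (precision + recall)
print(f"accuracy={accuracy:.2f}  f1={f1:.2f}")        # accuracy=0.95  f1=0.17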

So, this is all about Precision and Recall.

We hope you enjoyed reading about these evaluation metrics.

Keep enjoying this series. Stay tuned with us.
