Precision and recall

Recall and precision depend on the outcome (oval) of a query and its relation to all relevant documents (left) and the non-relevant documents (right). The more correct results (green), the better. Precision: horizontal arrow. Recall: diagonal arrow.

Precision and recall are two widely used measures for evaluating the performance of information retrieval and statistical classification systems.

Precision can be seen as a measure of exactness or fidelity, whereas Recall is a measure of completeness.

In an information retrieval scenario, Precision is defined as the number of relevant documents retrieved by a search divided by the total number of documents retrieved by that search, and Recall is defined as the number of relevant documents retrieved by a search divided by the total number of existing relevant documents (which should have been retrieved).

In a statistical classification task, the Precision for a class is the number of true positives (i.e. the number of items correctly labeled as belonging to the positive class) divided by the total number of elements labeled as belonging to the positive class (i.e. the sum of true positives and false positives, which are items incorrectly labeled as belonging to the class). Recall in this context is defined as the number of true positives divided by the total number of elements that actually belong to the positive class (i.e. the sum of true positives and false negatives, which are items which were not labeled as belonging to the positive class but should have been).
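
As a small illustration of these counts, the sketch below (hypothetical labels and function names, not taken from any particular library) tallies true positives, false positives and false negatives for one class and computes Precision and Recall from them:

# Illustrative sketch: Precision and Recall for one class, from predicted vs. actual labels.
def precision_recall(actual, predicted, positive_class):
    pairs = list(zip(actual, predicted))
    tp = sum(1 for a, p in pairs if p == positive_class and a == positive_class)
    fp = sum(1 for a, p in pairs if p == positive_class and a != positive_class)
    fn = sum(1 for a, p in pairs if p != positive_class and a == positive_class)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

actual    = ["orange", "orange", "apple", "orange", "apple"]
predicted = ["orange", "apple",  "apple", "orange", "orange"]
print(precision_recall(actual, predicted, "orange"))  # (0.666..., 0.666...)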

In information retrieval, a perfect Precision score of 1.0 means that every result retrieved by a search was relevant (but says nothing about whether all relevant documents were retrieved) whereas a perfect Recall score of 1.0 means that all relevant documents were retrieved by the search (but says nothing about how many irrelevant documents were also retrieved).

In a classification task, a Precision score of 1.0 for a class C means that every item labeled as belonging to class C does indeed belong to class C (but says nothing about the number of items from class C that were not labeled correctly) whereas a Recall of 1.0 means that every item from class C was labeled as belonging to class C (but says nothing about how many other items were incorrectly also labeled as belonging to class C).

Often, there is an inverse relationship between Precision and Recall, where it is possible to increase one at the cost of reducing the other. For example, an information retrieval system (such as a search engine) can often increase its Recall by retrieving more documents, at the cost of an increasing number of irrelevant documents retrieved (decreasing Precision). Similarly, a classification system for deciding whether or not, say, a fruit is an orange can achieve high Precision by only classifying fruits with exactly the right shape and color as oranges, but at the cost of low Recall, due to the number of false negatives from oranges that did not quite match the specification.
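
As a rough sketch of this trade-off (the scores and relevance labels below are made up for illustration), sweeping a decision threshold over classifier scores shows Recall rising and Precision falling as more items are retrieved:

# Sketch of the Precision/Recall trade-off: lower thresholds retrieve more items.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30]   # made-up ranking scores
labels = [1,    1,    0,    1,    0,    1,    0,    0]       # 1 = relevant, 0 = irrelevant

total_relevant = sum(labels)
for threshold in (0.85, 0.65, 0.45, 0.25):
    retrieved = [lab for s, lab in zip(scores, labels) if s >= threshold]
    tp = sum(retrieved)
    precision = tp / len(retrieved)
    recall = tp / total_relevant
    print(f"threshold={threshold:.2f}  precision={precision:.2f}  recall={recall:.2f}")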

Usually, Precision and Recall scores are not discussed in isolation. Instead, either values for one measure are compared for a fixed level at the other measure (e.g. precision at a recall level of 0.75) or both are combined into a single measure, such as the F-measure, which is the weighted harmonic mean of precision and recall (see below), or the Matthews correlation coefficient.

Definition (information retrieval context)

In Information Retrieval contexts, Precision and Recall are defined in terms of a set of retrieved documents (e.g. the list of documents produced by a web search engine for a query) and a set of relevant documents (e.g. the list of all documents on the internet that are relevant for a certain topic).

Precision

In the field of information retrieval, precision is the fraction of retrieved documents that are relevant to the search:

 \mbox{precision}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{retrieved documents}\}|}

Precision takes all retrieved documents into account, but it can also be evaluated at a given cut-off rank, considering only the topmost results returned by the system. This measure is called precision at n or P@n.
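
A minimal sketch of P@n, using a hypothetical ranked result list and relevance judgments:

# Sketch: precision at n (P@n) over a ranked result list (made-up relevance flags).
relevant_at_rank = [True, True, False, True, False, False, True, False, False, False]

def precision_at_n(relevance, n):
    top = relevance[:n]
    return sum(top) / len(top)

print(precision_at_n(relevant_at_rank, 5))    # 3/5 = 0.6
print(precision_at_n(relevant_at_rank, 10))   # 4/10 = 0.4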

For example, for a text search on a set of documents, precision is the number of correct results divided by the number of all returned results.

Precision is also used with recall, the percentage of all relevant documents that is returned by the search. The two measures are sometimes used together in the F1 score (or F-measure) to provide a single measurement for a system.

Note that the meaning and usage of "precision" in the field of Information Retrieval differs from the definition of accuracy and precision within other branches of science and technology.

Recall

Recall in Information Retrieval is the fraction of the documents relevant to the query that are successfully retrieved.

 \mbox{recall}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{relevant documents}\}|}

For example, for a text search on a set of documents, recall is the number of correct results divided by the number of results that should have been returned.

In binary classification, recall is also called sensitivity. It can thus be viewed as the probability that a relevant document is retrieved by the query.

It is trivial to achieve a recall of 100% by returning all documents in response to any query. Recall alone is therefore not enough; the number of non-relevant documents must also be measured, for example by computing the precision.
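
The two set-based formulas above translate directly into set operations; the document identifiers below are invented for illustration, and the last two lines show how returning everything maximizes recall while hurting precision:

# Sketch: set-based precision and recall for one query (hypothetical document ids).
relevant  = {"d1", "d2", "d3", "d4", "d5"}    # all documents relevant to the query
retrieved = {"d1", "d2", "d6", "d7"}          # documents returned by the search

hits = relevant & retrieved
precision = len(hits) / len(retrieved)        # 2/4 = 0.5
recall    = len(hits) / len(relevant)         # 2/5 = 0.4

# Returning every document gives recall 1.0 but poor precision:
everything = relevant | {"d6", "d7", "d8", "d9", "d10"}
print(len(relevant & everything) / len(everything))   # precision: 5/10 = 0.5
print(len(relevant & everything) / len(relevant))     # recall: 1.0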

Definition (classification context)

In the context of classification tasks, the terms true positives, true negatives, false positives and false negatives (see also Type I and type II errors) are used to compare the given classification of an item (the class label assigned to the item by a classifier) with the desired correct classification (the class the item actually belongs to). This is illustrated by the table below:

                              correct result / classification
                              E1                       E2
obtained result /       E1    tp (true positive)       fp (false positive)
classification          E2    fn (false negative)      tn (true negative)

Precision and Recall are then defined as:[1]

\mbox{Precision}=\frac{tp}{tp+fp} \,
\mbox{Recall}=\frac{tp}{tp+fn} \,

Recall in this context is also referred to as the True Positive Rate. Other related measures used in classification include the True Negative Rate (also called Specificity) and Accuracy:[1]

\mbox{True Negative Rate}=\frac{tn}{tn+fp} \,
\mbox{Accuracy}=\frac{tp+tn}{tp+tn+fp+fn} \,
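
These four formulas map directly onto code; the counts below are arbitrary illustration values:

# Sketch: classification measures from the four confusion-matrix counts (arbitrary values).
tp, fp, fn, tn = 45, 5, 15, 35

precision          = tp / (tp + fp)                    # 0.90
recall             = tp / (tp + fn)                    # 0.75  (True Positive Rate / sensitivity)
true_negative_rate = tn / (tn + fp)                    # 0.875 (Specificity)
accuracy           = (tp + tn) / (tp + tn + fp + fn)   # 0.80

print(precision, recall, true_negative_rate, accuracy)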

Probabilistic interpretation

It is possible to interpret Precision and Recall not as ratios but as probabilities:

  • Precision is the probability that a (randomly selected) retrieved document is relevant.
  • Recall is the probability that a (randomly selected) relevant document is retrieved in a search.

F-measure

A measure that combines Precision and Recall is their harmonic mean, the traditional F-measure or balanced F-score:

F = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{ \mathrm{precision} + \mathrm{recall}}.

This is also known as the F1 measure, because recall and precision are evenly weighted.

It is a special case of the general Fβ measure (for non-negative real values of β):

F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall} }{ \beta^2 \cdot \mathrm{precision} + \mathrm{recall}}.

Two other commonly used F measures are the F2 measure, which weights recall twice as much as precision, and the F0.5 measure, which weights precision twice as much as recall.
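
A minimal sketch of the F1, F2 and F0.5 measures, using placeholder precision and recall values:

# Sketch: F1 and the general F-beta measure (placeholder precision/recall values).
def f_beta(precision, recall, beta=1.0):
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

p, r = 0.9, 0.75
print(f_beta(p, r))        # F1   ~ 0.818 (precision and recall weighted evenly)
print(f_beta(p, r, 2.0))   # F2   ~ 0.776 (recall weighted more heavily)
print(f_beta(p, r, 0.5))   # F0.5 ~ 0.865 (precision weighted more heavily)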

The F-measure was derived by van Rijsbergen (1979) so that Fβ "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measure

E = 1 - \frac{1}{\frac{\alpha}{P} + \frac{1 - \alpha}{R}}

Their relationship is Fβ = 1 − E, where α = 1/(β² + 1).
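
As a quick check of this relationship, substituting α = 1/(β² + 1) (so that 1 − α = β²/(β² + 1)) into E recovers the Fβ formula above, with P and R standing for precision and recall:

\begin{aligned}
1 - E &= \frac{1}{\frac{\alpha}{P} + \frac{1-\alpha}{R}} = \frac{1}{\frac{1}{(\beta^2+1)P} + \frac{\beta^2}{(\beta^2+1)R}} \\
&= \frac{(\beta^2+1) P R}{R + \beta^2 P} = (1 + \beta^2) \cdot \frac{P \cdot R}{\beta^2 \cdot P + R} = F_\beta
\end{aligned}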

Sources

  1. Olson, David L.; Delen, Dursun (2008). Advanced Data Mining Techniques. Springer, 1st edition, p. 138. ISBN 3540769161.
  • Baeza-Yates, R.; Ribeiro-Neto, B. (1999). Modern Information Retrieval. New York: ACM Press, Addison-Wesley. pp. 75 ff. ISBN 0-201-39829-X.
  • van Rijsbergen, C. J. (1979). Information Retrieval (2nd ed.). London; Boston: Butterworth. ISBN 0-408-70929-4.
