False Negatives


False negatives are one of the four components of the classical confusion matrix for binary classification. In binary classification, a machine learning model assigns each example to one of two classes.

The idea behind the confusion matrix is that engineers already have the actual labels for the test data in hand. They then run the machine learning model, which makes its predictions. If a prediction matches the known label, that is a successful outcome; if it does not, that is an unsuccessful outcome.

In this paradigm, successful outcomes are labeled as true and unsuccessful outcomes are labeled as false. Combined with the positive or negative class that the model predicts, this gives the four cells of the confusion matrix: true positives, true negatives, false positives, and false negatives.
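
As a minimal sketch of this paradigm (the lists below are made-up labels and predictions, purely for illustration), each prediction is compared against the known actual label and tallied as a true (correct) or false (incorrect) outcome:

```python
# Minimal sketch: compare known test labels against a model's predictions.
# Both lists are made up for illustration; in practice the predictions
# would come from a trained classifier.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

correct   = sum(1 for a, p in zip(actual, predicted) if a == p)  # "true" outcomes
incorrect = sum(1 for a, p in zip(actual, predicted) if a != p)  # "false" outcomes

print(f"correct (true) outcomes:    {correct}")    # 6
print(f"incorrect (false) outcomes: {incorrect}")  # 2
```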

To see what a false negative looks like, consider how the confusion matrix is set up. Suppose there are two classes to be predicted: a value of one, which we call class one or the positive class, and a value of zero, which we call class two or the negative class.

In this case, a false negative is a result where the machine learning model predicts a zero, but the actual label is a one. (The opposite error, predicting a one when the actual label is a zero, is a false positive.)
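
As a minimal sketch, reusing the illustrative lists from above, the snippet below breaks the outcomes down into the four confusion-matrix cells, counting a false negative whenever the model predicts zero for an example whose actual label is one:

```python
# Minimal sketch: count the four confusion-matrix cells for the
# 1 = positive, 0 = negative setup described above.
# The labels are made up for illustration.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives: predicted 0, actually 1

print(f"true positives:  {tp}")  # 3
print(f"true negatives:  {tn}")  # 3
print(f"false positives: {fp}")  # 1
print(f"false negatives: {fn}")  # 1
```

In practice, a library routine such as scikit-learn's sklearn.metrics.confusion_matrix computes these same four counts directly from the two lists.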

This construct is widely used for evaluating classifiers across many kinds of machine learning projects.


