When using a neural network to perform classification and prediction, it is usually much better to use cross-entropy error than classification error, and somewhat better to use cross-entropy error than mean squared error, to evaluate the quality of the neural network. Let me explain. The basic idea is simple, but there are a lot of related issues that tend to obscure the main point. First, let me make it clear that we are dealing only with a neural network used to classify data, such as predicting a person’s political party affiliation (democrat, republican, other) from predictor variables such as age, sex, annual income, and so on. We are not dealing with a neural network that does regression, where the value to be predicted is numeric, or a time-series neural network, or any other kind of neural network.

Now suppose you have just three training data items. Your neural network uses softmax activation for the output neurons so that there are three output values that can be interpreted as probabilities. For example suppose the neural network’s computed outputs, and the target (aka desired) values are as follows:

computed       | targets            | correct?
---------------+--------------------+---------
0.3  0.3  0.4  | 0 0 1 (democrat)   | yes
0.3  0.4  0.3  | 0 1 0 (republican) | yes
0.1  0.2  0.7  | 1 0 0 (other)      | no

This neural network has classification error of 1/3 = 0.33, or equivalently a classification accuracy of 2/3 = 0.67. Notice that the NN just barely gets the first two training items correct and is way off on the third training item. But now consider the following neural network:

computed       | targets            | correct?
---------------+--------------------+---------
0.1  0.2  0.7  | 0 0 1 (democrat)   | yes
0.1  0.7  0.2  | 0 1 0 (republican) | yes
0.3  0.4  0.3  | 1 0 0 (other)      | no

This NN also has a classification error of 1/3 = 0.33. But this second NN is better than the first because it nails the first two training items and just barely misses the third training item. To summarize, classification error is a very crude measure of error.
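The comparison above can be checked with a short Python sketch (a minimal illustration, not production code; `nn1` and `nn2` are hypothetical names for the two networks above):

```python
def classification_error(computed, targets):
    """Fraction of items where the argmax of the computed outputs
    does not match the argmax of the one-hot targets."""
    wrong = sum(1 for c, t in zip(computed, targets)
                if c.index(max(c)) != t.index(max(t)))
    return wrong / len(computed)

targets = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
nn1 = [[0.3, 0.3, 0.4], [0.3, 0.4, 0.3], [0.1, 0.2, 0.7]]  # first network
nn2 = [[0.1, 0.2, 0.7], [0.1, 0.7, 0.2], [0.3, 0.4, 0.3]]  # second network

print(classification_error(nn1, targets))  # 0.333... (1/3)
print(classification_error(nn2, targets))  # 0.333... -- identical score
```

Both networks get exactly the same classification error, even though the second network is clearly superior, which is precisely the crudeness the text describes.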

Now consider cross-entropy error. The cross-entropy error for the first training item in the first neural network above is:

-( (ln(0.3)*0) + (ln(0.3)*0) + (ln(0.4)*1) ) = -ln(0.4)

Notice that in the case of neural network classification, the computation is a bit odd because all terms but one will go away. (There are several good explanations of how to compute cross-entropy on the Internet.) So, the average cross-entropy error (ACE) for the first neural network is computed as:

-(ln(0.4) + ln(0.4) + ln(0.1)) / 3 = 1.38

The average cross-entropy error for the second neural network is:

-(ln(0.7) + ln(0.7) + ln(0.3)) / 3 = 0.64

Notice that the average cross-entropy error for the second, superior neural network is smaller than the ACE error for the first neural network. The ln() function in cross-entropy takes into account the closeness of a prediction and is a more granular way to compute error.
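The ACE arithmetic above can be reproduced with a small sketch (`avg_cross_entropy`, `nn1`, and `nn2` are hypothetical names; the targets are one-hot, so only the term for the target class survives):

```python
import math

def avg_cross_entropy(computed, targets):
    """Mean over items of -sum(target * ln(computed)); with one-hot
    targets only the -ln(p) term for the target class is nonzero."""
    total = 0.0
    for c, t in zip(computed, targets):
        total += -sum(tj * math.log(cj) for cj, tj in zip(c, t))
    return total / len(computed)

targets = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
nn1 = [[0.3, 0.3, 0.4], [0.3, 0.4, 0.3], [0.1, 0.2, 0.7]]  # first network
nn2 = [[0.1, 0.2, 0.7], [0.1, 0.7, 0.2], [0.3, 0.4, 0.3]]  # second network

print(round(avg_cross_entropy(nn1, targets), 2))  # 1.38
print(round(avg_cross_entropy(nn2, targets), 2))  # 0.64
```

Unlike classification error, ACE separates the two networks, scoring the second one lower (better).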

By the way, you can also measure neural network quality by using mean squared error but this has problems too. The squared error term for the first item in the first neural network would be:

(0.3 - 0)^2 + (0.3 - 0)^2 + (0.4 - 1)^2 = 0.09 + 0.09 + 0.36 = 0.54

And so the mean squared error for the first neural network is:

(0.54 + 0.54 + 1.34) / 3 = 0.81

The mean squared error for the second, better, neural network is:

(0.14 + 0.14 + 0.74) / 3 = 0.34
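The MSE figures can be verified the same way (again a minimal sketch with hypothetical names):

```python
def mean_squared_error(computed, targets):
    """Mean over items of the summed squared differences between
    computed outputs and one-hot targets."""
    total = 0.0
    for c, t in zip(computed, targets):
        total += sum((cj - tj) ** 2 for cj, tj in zip(c, t))
    return total / len(computed)

targets = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
nn1 = [[0.3, 0.3, 0.4], [0.3, 0.4, 0.3], [0.1, 0.2, 0.7]]  # first network
nn2 = [[0.1, 0.2, 0.7], [0.1, 0.7, 0.2], [0.3, 0.4, 0.3]]  # second network

print(round(mean_squared_error(nn1, targets), 2))  # 0.81
print(round(mean_squared_error(nn2, targets), 2))  # 0.34
```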

MSE isn’t a hideously bad approach but if you think about how MSE is computed you’ll see that, compared to ACE, MSE gives too much emphasis to the incorrect outputs. It might also be possible to compute a modified MSE that uses only the values associated with the 1s in the target, but I have never seen that approach used or discussed.

So, I think this example explains why using cross-entropy error is clearly preferable to using classification error. Somewhat unfortunately there are some additional issues here. The discussion above refers to computing error during the training process. After training, to get an estimate of the effectiveness of the neural network, classification error is usually preferable to MSE or ACE. The idea is that classification error is ultimately what you’re interested in.

Suppose you are using back-propagation for training. The back-propagation algorithm computes gradient values that are derived from some implicit measure of error. Typically the implicit error is mean squared error, which gives a particular gradient equation involving the calculus derivative of the softmax output activation function. But you can use implicit cross-entropy error instead of implicit mean squared error, which changes the back-propagation equation for the gradients. I have never seen research that directly addresses the question of whether to use cross-entropy error for both the implicit training measure of error and for evaluating neural network quality, or to use cross-entropy just for quality evaluation. Such research may (and, in fact, probably does) exist, but I’ve been unable to track any papers down.

To summarize, for a neural network classifier, during training you can use mean squared error or average cross-entropy error, and average cross-entropy error is considered slightly better. If you are using back-propagation, the choice of MSE or ACE affects the computation of the gradient. After training, to estimate the effectiveness of the neural network it’s better to use classification error.

== (Added on Dec. 12, 2016)

Several people asked about the advantage of cross-entropy error over mean squared error. Briefly, during back-propagation training, you want to drive output node values to either 1.0 or 0.0 depending on the target values. If you use MSE, the weight adjustment factor (the gradient) contains a term of (output) * (1 – output). As the computed output gets closer and closer to either 0.0 or 1.0 the value of (output) * (1 – output) gets smaller and smaller. For example, if output = 0.6 then (output) * (1 – output) = 0.24 but if output is 0.95 then (output) * (1 – output) = 0.0475. As the adjustment factor gets smaller and smaller, the change in weights gets smaller and smaller and training can stall out, so to speak.

But if you use cross-entropy error, the (output) * (1 – output) term goes away (the math is very cool). So the weight changes don’t get smaller and smaller, and training isn’t as likely to stall out. Note that this argument assumes you’re doing neural network classification with softmax output node activation.
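The stalling effect can be seen numerically. Here is a sketch of the gradient signal at a single softmax output node whose target is 1.0 (`mse_signal` and `ce_signal` are hypothetical names; this shows only the error-signal factor, not a full back-propagation pass):

```python
def mse_signal(output, target):
    """Error signal under implicit MSE: includes the softmax node's
    derivative factor output * (1 - output)."""
    return (output - target) * output * (1.0 - output)

def ce_signal(output, target):
    """Error signal under implicit cross-entropy: the derivative
    factor cancels, leaving just (output - target)."""
    return output - target

# Target is 1.0; the node's output is increasingly (and badly) wrong.
for output in [0.5, 0.1, 0.01]:
    print(output, mse_signal(output, 1.0), ce_signal(output, 1.0))
```

At output = 0.01 the node is almost maximally wrong, yet the MSE signal has shrunk to about -0.0098 while the cross-entropy signal stays at -0.99, so weight updates driven by cross-entropy remain large when they need to be.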
