Keras - Autoencoder Accuracy Stuck On Zero
Solution 1:
Accuracy has little meaning for an autoencoder, especially in a fraud detection setting. Accuracy is simply not well defined on regression tasks: is a reconstruction of 0.1 "accurate" when the target is 0.11? By Keras's accuracy metric it is not, which is why the metric sits at zero. If you want to see how well your model replicates the data, look at the MSE or at the reconstructions themselves; many autoencoders use MSE as their loss function anyway.
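A minimal sketch of what that looks like (layer sizes, feature count, and the random training data are placeholders, not taken from your question): drop `metrics=['accuracy']` entirely and compile with MSE, so the number you watch actually measures reconstruction quality.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 30  # placeholder: width of your input vectors
x_train = np.random.rand(1000, n_features).astype("float32")  # stand-in for your data

autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(14, activation="relu"),            # bottleneck (placeholder size)
    layers.Dense(n_features, activation="linear"),  # real-valued reconstruction
])

# No accuracy metric: the training/validation loss (MSE) is the signal to watch.
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=32, validation_split=0.1)
```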
The metric you should be monitoring is the training loss on legitimate examples versus the validation loss on fraudulent examples. That comparison shows whether the model fits real transactions more closely than fraudulent ones, which is what determines how well it performs in practice.
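A hedged sketch of that comparison, continuing from the model above (the split into `x_good_val` and `x_fraud_val` is an assumption about how you hold out your data): score both sets with per-sample MSE and check that fraud reconstructs noticeably worse.

```python
# Per-sample reconstruction error: mean squared error over the feature axis.
recon_good = autoencoder.predict(x_good_val)
recon_fraud = autoencoder.predict(x_fraud_val)

mse_good = np.mean(np.square(x_good_val - recon_good), axis=1)
mse_fraud = np.mean(np.square(x_fraud_val - recon_fraud), axis=1)

print("mean MSE, legitimate:", mse_good.mean())
print("mean MSE, fraudulent:", mse_fraud.mean())

# One simple operating point: flag anything above a high percentile of the
# legitimate-only error distribution as suspicious.
threshold = np.percentile(mse_good, 95)
print("fraud flagged at this threshold:", (mse_fraud > threshold).mean())
```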
Another design choice I would not make is ReLU in an autoencoder. ReLU works well in deeper models because of its simplicity and its effectiveness against vanishing/exploding gradients, but both of those concerns are non-factors in a shallow autoencoder, while ReLU's zeroing of negative activations throws away information you need to reconstruct the input. I would suggest tanh as your intermediate activation function instead.
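A sketch of that change, using the same placeholder sizes as above: tanh in the hidden layers, a linear output so reconstructions are not clipped. Since tanh saturates outside roughly [-1, 1], this assumes the features have been scaled to that range.

```python
autoencoder_tanh = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(14, activation="tanh"),            # encoder
    layers.Dense(7, activation="tanh"),             # bottleneck
    layers.Dense(14, activation="tanh"),            # decoder
    layers.Dense(n_features, activation="linear"),  # linear output for real-valued features
])
autoencoder_tanh.compile(optimizer="adam", loss="mse")
```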