
Custom Loss Function Implementation Issue In Keras

I am implementing a custom loss function in Keras. The model's output is a 10-dimensional softmax layer. To calculate the loss, I first need to find the index at which y fires 1, and then …

Solution 1:

This happens because your loss function is not differentiable: argmax returns integer indices, which are piecewise constant, so the gradient is zero almost everywhere and backpropagation cannot learn from it.

There is simply no way around this if you insist on having argmax inside the loss.
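A quick NumPy sketch makes the problem concrete (the probability vector here is made up for illustration): nudging any input of argmax by a small amount never changes its output, so any finite-difference "gradient" is exactly zero.

```python
import numpy as np

# argmax is piecewise constant: a small perturbation of the
# input does not change the output, so the gradient is zero
# almost everywhere and backprop has nothing to follow.
y_pred = np.array([0.1, 0.7, 0.2])  # hypothetical softmax output
eps = 1e-4

base = np.argmax(y_pred)
for i in range(len(y_pred)):
    nudged = y_pred.copy()
    nudged[i] += eps
    # the predicted index never moves under a small nudge
    assert np.argmax(nudged) == base

print(base)  # → 1
```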


An approach to test

Since you're using "softmax", only one class is correct at a time (classes are mutually exclusive).

And since you want index differences, maybe you could work with a single continuous result (continuous values are differentiable).

Work with only one output ranging from -0.5 to 9.5, and take the classes by rounding the result.

That way, you can have the last layer with only one unit:

lastLayer = Dense(1, activation='sigmoid')  # or another layer type if it's not Dense

And change the range with a lambda layer:

lambdaLayer = Lambda(lambda x: 10 * x - 0.5)  # maps (0, 1) to (-0.5, 9.5)

Now your loss can be a simple 'mae' (mean absolute error).
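Putting the pieces together, a minimal end-to-end sketch might look like this. The input size (20 features) and hidden layer are placeholders, not from the question; the essential parts are the single sigmoid unit, the rescaling Lambda, and the 'mae' loss. Predicted classes are recovered by rounding.

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input, Lambda
from tensorflow.keras.models import Model

# Hypothetical 20-feature input; layer sizes are placeholders.
inputs = Input(shape=(20,))
hidden = Dense(32, activation='relu')(inputs)
# Single sigmoid unit: output lies in (0, 1)
raw = Dense(1, activation='sigmoid')(hidden)
# Rescale to (-0.5, 9.5) so rounding yields a class in 0..9
scaled = Lambda(lambda x: 10.0 * x - 0.5)(raw)

model = Model(inputs, scaled)
model.compile(optimizer='adam', loss='mae')

# At prediction time, round the continuous output to a class index
preds = model.predict(np.random.rand(4, 20), verbose=0)
classes = np.clip(np.round(preds), 0, 9).astype(int)
```

Training targets should then be the integer class indices themselves (0 to 9), since 'mae' directly penalizes the index distance.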

The downside of this approach is that the 'sigmoid' activation is not evenly distributed between the classes: some classes will be easier to reach than others. But since it's important to have a bounded output, it seems at first the best idea.
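The unevenness can be quantified with the inverse of the sigmoid (the logit): each class corresponds to an equal-width interval of the sigmoid output, but the pre-activation ranges that map onto those intervals have very different widths. A small stdlib-only sketch:

```python
import math

def logit(p):
    # inverse of the sigmoid function
    return math.log(p / (1.0 - p))

# Class k is predicted when 10*x - 0.5 rounds to k, i.e. when the
# sigmoid output x falls in [k/10, (k+1)/10). The pre-activation
# width of each interval shows how much "room" each class gets.
# Edge classes (0 and 9) are unbounded, so only 1..8 are listed.
widths = {k: logit((k + 1) / 10) - logit(k / 10) for k in range(1, 9)}
```

The widths are symmetric around the middle and grow toward the edges, so classes near 0 and 9 occupy far more of the pre-activation space than the central ones.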

This will only work if your classes follow a logical increasing sequence. (I guess they do, otherwise you'd not be trying that kind of loss, right?)
