
I want to implement a custom loss function in Python, and it should work like the following pseudocode:

aux = abs(Real - Prediction) / Prediction
errors = []
if aux <= 0.1:
    errors.append(0)
elif 0.1 < aux <= 0.15:
    errors.append(5 / 3)
elif 0.15 < aux <= 0.2:
    errors.append(5)
else:
    errors.append(2000)
return sum(errors)
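
For reference, here is a vectorized NumPy sketch of the intended behaviour over a whole batch (the function and argument names are illustrative, not part of the pseudocode above):

import numpy as np

def custom_error(real, prediction):
    aux = np.abs(real - prediction) / prediction
    # the first matching condition wins, mirroring the if/elif chain above
    penalties = np.select(
        [aux <= 0.1, aux <= 0.15, aux <= 0.2],
        [0.0, 5.0 / 3.0, 5.0],
        default=2000.0,
    )
    return penalties.sum()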

I started to define the metric like this:

def custom_metric(y_true, y_pred):
    res = K.abs((y_true - y_pred) / y_pred)  # K.abs is element-wise and takes no axis argument
    ....

But I do not know how to get the value of res for the if and elif branches. I also want to know what the function has to return.

Thanks

3 Answers


I also want to know what the function has to return.

Custom metrics can be passed at the compilation step.

The function would need to take (y_true, y_pred) as arguments and return a single tensor value.

But I do not know how to get the value of res for the if and elif branches.

You can return the result from the custom_metric function:

from keras import backend as K

def custom_metric(y_true, y_pred):
    # element-wise relative error; K.abs takes no axis argument
    result = K.abs((y_true - y_pred) / y_pred)
    return result
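
The metric can then be passed at the compilation step; the optimizer and loss below are placeholders, not part of the original answer:

model.compile(optimizer='adam', loss='mse', metrics=[custom_metric])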

The second step is to use a Keras callback in order to compute the sum of the errors.

The callback can be defined and passed to the fit method.

history = LossHistory()
model.fit(callbacks=[history])

The last step is to create the LossHistory class in order to build the list of expected errors.

LossHistory will inherit some default methods from keras.callbacks.Callback:

  • on_epoch_begin: called at the beginning of every epoch.
  • on_epoch_end: called at the end of every epoch.
  • on_batch_begin: called at the beginning of every batch.
  • on_batch_end: called at the end of every batch.
  • on_train_begin: called at the beginning of model training.
  • on_train_end: called at the end of model training.

You can read more in the Keras documentation.

For this example we only need the on_train_begin and on_batch_end methods.

Implementation

import keras

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.errors = []

    def on_batch_end(self, batch, logs=None):
        loss = logs.get('loss')
        self.errors.append(self.loss_mapper(loss))

    def loss_mapper(self, loss):
        # map the batch loss onto the discrete penalty values
        if loss <= 0.1:
            return 0
        elif 0.1 < loss <= 0.15:
            return 5 / 3
        elif 0.15 < loss <= 0.2:
            return 5
        else:
            return 2000

After your model is trained, you can access the errors using the following statement.

errors = history.errors
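
Because the original pseudocode returns the sum of the errors, the total after training is then simply:

total_error = sum(history.errors)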
  • That might be a valid answer from a pure programming perspective at first, but I still believe this can't work because the gradient will not be useful. I might be wrong though. Commented Apr 27, 2018 at 11:41
  • @Mihai Alexandru-Ionut but how can I save the results of the conditions and use the callback? Could you put an example? Thanks Commented Apr 27, 2018 at 11:50
  • @AlexanderHarnisch, now I understand. I updated my answer. Commented Apr 27, 2018 at 11:50
  • @Aceconhielo, Yes, I updated my answer and soon I will update it with a full solution example. Commented Apr 27, 2018 at 11:51
  • 1
    @Mihai Alexandru-Ionut never mind, fixed. But I have another question if you do not mind. The values that I want to predict are between 0 and 1 but my output in my prediction is sometimes < 0 and > 1. Do you know how could I fix it? Commented Apr 29, 2018 at 18:01

I'll take a leap here and say this won't work because it is not differentiable. The loss needs to be continuously differentiable so you can propagate a gradient through it.

If you want to make this work, you need to find a way to do it without discontinuities. For example, you could try a weighted average over your four discrete values where the weights strongly prefer the closest value.
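
One way to sketch that idea is a "soft staircase" built from sigmoids; the sharpness constant k and the use of K.sigmoid are my assumptions, not part of this answer:

from keras import backend as K

def smooth_step_loss(y_true, y_pred):
    k = 100.0  # assumed sharpness; larger k gets closer to the hard thresholds
    aux = K.abs(y_true - y_pred) / y_pred
    # each sigmoid adds the jump between two consecutive penalty levels
    # (0 -> 5/3 -> 5 -> 2000 around the 0.1 / 0.15 / 0.2 thresholds),
    # so the result approximates the staircase but stays differentiable
    return K.sum((5.0 / 3.0) * K.sigmoid(k * (aux - 0.1))
                 + (5.0 - 5.0 / 3.0) * K.sigmoid(k * (aux - 0.15))
                 + (2000.0 - 5.0) * K.sigmoid(k * (aux - 0.2)))

Passed as loss=smooth_step_loss at compile time, this gives a usable gradient at the cost of blurring the thresholds.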

  • My fault. I have forgotten to say that finally I want to sum all the errors. I modified my question Commented Apr 27, 2018 at 11:41

Appending to self directly didn't work for me; appending to the params dict of self did the job instead. To answer the OP, it would be self.params['error'] = [], then add to the list as you see fit.

import tensorflow as tf

class CustomCallback(tf.keras.callbacks.Callback):

    def on_train_begin(self, logs=None):
        self.params['error'] = []

    def on_epoch_end(self, epoch, logs=None):
        # do something with self.params['error']
        pass

history = model.fit(callbacks=[CustomCallback()])

# when training ends

error = history.params['error']
