For imbalanced datasets, metrics such as f1 score, precision, and recall are often more informative than plain accuracy, which can look deceptively good (the so-called Accuracy Paradox). Fortunately, Keras lets us access the model during training via a Callback, which we can extend to compute the desired quantities.
Here is sample code to compute and print out the f1 score, recall, and precision at the end of each epoch, using the whole validation data:
import numpy as np
from keras.callbacks import Callback
from sklearn.metrics import f1_score, precision_score, recall_score

class Metrics(Callback):
    def on_train_begin(self, logs={}):
        # One list per metric, appended to at the end of every epoch
        self.val_f1s = []
        self.val_recalls = []
        self.val_precisions = []

    def on_epoch_end(self, epoch, logs={}):
        # self.validation_data is populated by Keras when validation_data
        # is passed to fit. Predict on the whole validation set and round
        # the sigmoid outputs to 0/1 class labels.
        val_predict = np.asarray(self.model.predict(self.validation_data[0])).round()
        val_targ = self.validation_data[1]
        _val_f1 = f1_score(val_targ, val_predict)
        _val_recall = recall_score(val_targ, val_predict)
        _val_precision = precision_score(val_targ, val_predict)
        self.val_f1s.append(_val_f1)
        self.val_recalls.append(_val_recall)
        self.val_precisions.append(_val_precision)
        print(" - val_f1: %f - val_precision: %f - val_recall: %f"
              % (_val_f1, _val_precision, _val_recall))

metrics = Metrics()
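Note that f1_score, precision_score, and recall_score default to binary classification. If your targets are one-hot encoded multi-class labels (an assumption beyond the example above), a sketch of the adjustment inside on_epoch_end would be:

        # Hypothetical multi-class variant (not part of the original
        # example): pick the argmax class instead of rounding, and tell
        # sklearn how to average scores across classes.
        val_probs = np.asarray(self.model.predict(self.validation_data[0]))
        val_predict = np.argmax(val_probs, axis=1)
        val_targ = np.argmax(self.validation_data[1], axis=1)
        _val_f1 = f1_score(val_targ, val_predict, average='macro')
        _val_recall = recall_score(val_targ, val_predict, average='macro')
        _val_precision = precision_score(val_targ, val_predict, average='macro')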
The metrics are computed on the whole validation set and stored in the lists at the end of each epoch via on_epoch_end. Later on, we can access these lists as usual instance variables, for example:

print(metrics.val_f1s)
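Since the callback keeps one value per epoch, it is also easy to plot the learning curves after training. A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# Plot the per-epoch validation metrics collected by the callback
plt.plot(metrics.val_f1s, label='val_f1')
plt.plot(metrics.val_precisions, label='val_precision')
plt.plot(metrics.val_recalls, label='val_recall')
plt.xlabel('epoch')
plt.ylabel('score')
plt.legend()
plt.show()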
Define the model, and pass the callback to the callbacks parameter of the fit function:

model.fit(training_data, training_target,
          validation_data=(validation_data, validation_target),
          epochs=10,
          batch_size=64,
          callbacks=[metrics])
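For completeness, here is one minimal model the snippet above could be paired with; the layer sizes and input dimension are placeholders, not from the original setup:

from keras.models import Sequential
from keras.layers import Dense

# A small binary classifier with a sigmoid output, so that rounding the
# predictions in the callback yields 0/1 labels. All sizes are assumed.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')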
The printout during training would look like this:
Epoch 1/10
32320/32374 [============================>.] - ETA: 0s - loss: 0.0414 - val_f1: 0.375000 - val_precision: 0.782609 - val_recall 0.246575
32374/32374 [==============================] - 23s - loss: 0.0414 - val_loss: 0.0430
That’s it. Have fun training!