Category: 03. Callbacks API

  • SwapEMAWeights

    Swaps model weights and EMA weights before and after evaluation. This callback replaces the model’s weight values with the values of the optimizer’s EMA weights (the exponential moving average of the past model weight values, implementing “Polyak averaging”) before model evaluation, and restores the previous weights after evaluation. The SwapEMAWeights callback is to be used in conjunction…
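
    A minimal sketch of typical usage, assuming a recent Keras 3 install; the toy data and model below are placeholders, and the key detail is that the optimizer must be created with use_ema=True so there are EMA weights to swap in:

    ```python
    import numpy as np
    import keras

    # Placeholder data and model for illustration only.
    x = np.random.rand(256, 8).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(16, activation="relu"),
                              keras.layers.Dense(1)])

    # The optimizer must track EMA weights for SwapEMAWeights to have anything to swap.
    model.compile(optimizer=keras.optimizers.SGD(use_ema=True, ema_momentum=0.99),
                  loss="mse")

    model.fit(
        x, y,
        epochs=3,
        validation_split=0.25,
        # EMA weights are swapped in before each evaluation pass and swapped back after.
        callbacks=[keras.callbacks.SwapEMAWeights()],
    )
    ```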

  • ProgbarLogger

    Callback that prints metrics to stdout.

  • CSVLogger

    Callback that streams epoch results to a CSV file. Supports all values that can be represented as a string, including 1D iterables such as np.ndarray.
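
    A minimal usage sketch; the file name, data, and model are placeholders:

    ```python
    import numpy as np
    import keras

    x = np.random.rand(128, 4).astype("float32")
    y = np.random.randint(0, 2, size=(128, 1))

    model = keras.Sequential([keras.layers.Dense(8, activation="relu"),
                              keras.layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Writes one row per epoch (epoch, accuracy, loss, plus any validation metrics).
    csv_logger = keras.callbacks.CSVLogger("training.log", separator=",", append=False)
    model.fit(x, y, epochs=3, callbacks=[csv_logger])
    ```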

  • TerminateOnNaN

    Callback that terminates training when a NaN loss is encountered.
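
    A short sketch of how it is attached to training (placeholder data and model):

    ```python
    import numpy as np
    import keras

    x = np.random.rand(64, 4).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(8), keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    # If the loss ever becomes NaN, training stops at that point instead of
    # continuing with corrupted weights.
    model.fit(x, y, epochs=3, callbacks=[keras.callbacks.TerminateOnNaN()])
    ```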

  • LambdaCallback

    Callback for creating simple, custom callbacks on-the-fly.
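
    A sketch in the spirit of the docstring’s JSON-logging example: the per-epoch loss is streamed to a file, which is closed when training ends. The file name, data, and model are placeholders:

    ```python
    import json
    import numpy as np
    import keras

    x = np.random.rand(64, 4).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    # Line-buffered file so each epoch's record is flushed immediately.
    json_log = open("loss_log.json", mode="wt", buffering=1)
    json_logging_callback = keras.callbacks.LambdaCallback(
        on_epoch_end=lambda epoch, logs: json_log.write(
            json.dumps({"epoch": epoch, "loss": logs["loss"]}) + "\n"
        ),
        on_train_end=lambda logs: json_log.close(),
    )

    model.fit(x, y, epochs=3, callbacks=[json_logging_callback])
    ```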

  • RemoteMonitor

    Callback used to stream events to a server. Requires the requests library. Events are sent to root + '/publish/epoch/end/' by default. Calls are HTTP POST, with a data argument which is a JSON-encoded dictionary of event data. If send_as_json=True, the content type of the request will be "application/json". Otherwise the serialized JSON will be sent within a form.
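
    A minimal sketch, assuming the requests package is installed and a listener is running at the (placeholder) address below; if the server cannot be reached the callback only emits a warning:

    ```python
    import numpy as np
    import keras

    x = np.random.rand(64, 4).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    # Epoch-end metrics are POSTed to root + path; with send_as_json=True the
    # payload is sent with an application/json content type.
    monitor = keras.callbacks.RemoteMonitor(
        root="http://localhost:9000",
        path="/publish/epoch/end/",
        field="data",
        send_as_json=True,
    )
    model.fit(x, y, epochs=2, callbacks=[monitor])
    ```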

  • ReduceLROnPlateau

    Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity, and if no improvement is seen for a ‘patience’ number of epochs, the learning rate is reduced.
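
    A minimal sketch with placeholder data and model; the factor, patience, and min_lr values are illustrative:

    ```python
    import numpy as np
    import keras

    x = np.random.rand(256, 8).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(16, activation="relu"),
                              keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    # Multiply the learning rate by 0.2 when val_loss has not improved for
    # 5 epochs, never going below min_lr.
    reduce_lr = keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.2, patience=5, min_lr=0.0001
    )
    model.fit(x, y, epochs=20, validation_split=0.2, callbacks=[reduce_lr])
    ```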

  • LearningRateScheduler

    Learning rate scheduler. At the beginning of every epoch, this callback gets the updated learning rate value from the schedule function provided at __init__, given the current epoch and current learning rate, and applies the updated learning rate on the optimizer.
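
    A minimal sketch: a schedule that keeps the initial rate for the first few epochs and then decays it exponentially (the cutoff epoch and decay factor are placeholders):

    ```python
    import math
    import numpy as np
    import keras

    x = np.random.rand(128, 4).astype("float32")
    y = np.random.rand(128, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(1)])
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1), loss="mse")

    def scheduler(epoch, lr):
        # Keep the initial rate for the first 5 epochs, then decay it each epoch.
        if epoch < 5:
            return lr
        return lr * math.exp(-0.1)

    model.fit(x, y, epochs=15,
              callbacks=[keras.callbacks.LearningRateScheduler(scheduler)])
    ```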

  • EarlyStopping

    Stop training when a monitored metric has stopped improving. Assuming the goal of training is to minimize the loss, the metric to be monitored would be ‘loss’, and mode would be ‘min’. A model.fit() training loop will check at the end of every epoch whether the loss is no longer decreasing, considering the min_delta and patience if applicable. Once it’s found…
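
    A minimal sketch monitoring validation loss; the min_delta and patience values are illustrative, and restore_best_weights rolls back to the best weights seen so far:

    ```python
    import numpy as np
    import keras

    x = np.random.rand(256, 8).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(16, activation="relu"),
                              keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    # Stop when val_loss has not improved by at least min_delta for `patience`
    # consecutive epochs.
    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss", mode="min", min_delta=1e-4, patience=3,
        restore_best_weights=True,
    )
    model.fit(x, y, epochs=50, validation_split=0.2, callbacks=[early_stop])
    ```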

  • TensorBoard

    Enable visualizations for TensorBoard. TensorBoard is a visualization tool provided with TensorFlow. A TensorFlow installation is required to use this callback. This callback logs events for TensorBoard, including metrics summary plots, training graph visualization, weight histograms, and sampled profiling. When used in model.evaluate() or regular validation, in addition to epoch summaries, a summary that records evaluation metrics vs model.optimizer.iterations will be written. The metric names will be prepended with evaluation,…
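
    A minimal sketch, assuming a TensorFlow installation is available; the log directory and histogram frequency are placeholders:

    ```python
    import numpy as np
    import keras

    x = np.random.rand(256, 8).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model = keras.Sequential([keras.layers.Dense(16, activation="relu"),
                              keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    # Write scalar summaries every epoch and weight histograms every epoch to ./logs.
    tensorboard_cb = keras.callbacks.TensorBoard(log_dir="./logs", histogram_freq=1)
    model.fit(x, y, epochs=5, validation_split=0.2, callbacks=[tensorboard_cb])

    # Inspect the run afterwards with:  tensorboard --logdir=./logs
    ```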