Author: Awais Farooq

  • BackupAndRestore

    Callback to back up and restore the training state. The BackupAndRestore callback is intended to recover training from an interruption that happened in the middle of a Model.fit execution, by backing up the training state in a temporary checkpoint file at the end of each epoch. Each backup overwrites the previously written checkpoint file, so at any given…
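A minimal sketch of the behavior described above, assuming Keras 3 and toy random data (the temporary backup directory and tiny model are illustrative only):

```python
import tempfile

import numpy as np
import keras

# Toy data and model (illustrative only).
x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# Back up the training state at the end of each epoch. If this fit()
# call is interrupted and later re-run with the same backup_dir,
# training resumes from the last completed epoch.
backup = keras.callbacks.BackupAndRestore(backup_dir=tempfile.mkdtemp())
history = model.fit(x, y, epochs=2, callbacks=[backup], verbose=0)
```

By default the checkpoint is deleted once training completes, so only an interrupted run leaves state behind to resume from.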

  • ModelCheckpoint

    Callback to save the Keras model or model weights at some frequency. The ModelCheckpoint callback is used in conjunction with training via model.fit() to save a model or its weights (in a checkpoint file) at some interval, so the model or weights can be loaded later to continue training from the saved state. A few options this callback provides…
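A short sketch of the save-and-reload cycle, assuming Keras 3 (the temporary file path and toy data are illustrative; with save_weights_only=True the filepath must end in .weights.h5):

```python
import os
import tempfile

import numpy as np
import keras

# Toy data and model (illustrative only).
x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# Save only the weights, and only when the monitored metric improves.
ckpt_path = os.path.join(tempfile.mkdtemp(), "best.weights.h5")
checkpoint = keras.callbacks.ModelCheckpoint(
    filepath=ckpt_path,
    save_weights_only=True,
    monitor="loss",
    save_best_only=True,
)
model.fit(x, y, epochs=2, callbacks=[checkpoint], verbose=0)

# The saved weights can be loaded later to continue from that state.
model.load_weights(ckpt_path)
```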

  • Base Callback class

    Base class used to build new callbacks. Callbacks can be passed to Keras methods such as fit(), evaluate(), and predict() in order to hook into the various stages of the model training, evaluation, and inference lifecycle. To create a custom callback, subclass keras.callbacks.Callback and override the method associated with the stage of interest. For example, if you want to use Callback objects in a…
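A minimal custom callback following the subclass-and-override pattern described above (the LossHistory name and toy data are illustrative):

```python
import numpy as np
import keras

# Subclass Callback and override the hook for the stage of interest.
class LossHistory(keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.losses = []

    def on_epoch_end(self, epoch, logs=None):
        # logs holds the metric results for the epoch just finished.
        self.losses.append(logs["loss"])

# Toy data and model (illustrative only).
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

logger = LossHistory()
model.fit(x, y, epochs=3, callbacks=[logger], verbose=0)
```

Other hooks such as on_train_begin or on_batch_end work the same way: override only the methods you need.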

  • Layer weight constraints

    Usage of constraints: classes from the keras.constraints module allow setting constraints (e.g. non-negativity) on model parameters during training. They are per-variable projection functions applied to the target variable after each gradient update (when using fit()). The exact API will depend on the layer, but the layers Dense, Conv1D, Conv2D and Conv3D have a unified API. These layers expose two keyword arguments. Available weight constraints…
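A sketch of the projection behavior, assuming Keras 3 and toy data: after fit() runs, every kernel entry has been projected onto the non-negative range.

```python
import numpy as np
import keras

# Dense layer whose kernel is projected onto non-negative values
# after each gradient update during fit().
layer = keras.layers.Dense(4, kernel_constraint=keras.constraints.NonNeg())

# Toy data (illustrative only).
x = np.random.rand(16, 8).astype("float32")
y = np.random.rand(16, 4).astype("float32")
model = keras.Sequential([layer])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=1, verbose=0)

kernel = layer.get_weights()[0]  # every entry is >= 0 after training
```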

  • Layer weight regularizers

    Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes. Regularization penalties are applied on a per-layer basis. The exact API will depend on the layer, but many layers (e.g. Dense, Conv1D, Conv2D and Conv3D) have a unified API. These layers expose 3 keyword…
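For Dense (and the Conv layers), the unified API takes per-layer regularizers as keyword arguments; a sketch assuming Keras 3, with arbitrary illustrative penalty coefficients:

```python
import numpy as np
import keras

# Penalties on the kernel weights, the bias, and the layer's output
# activity; the coefficients are arbitrary illustrative values.
layer = keras.layers.Dense(
    4,
    kernel_regularizer=keras.regularizers.L2(1e-4),
    bias_regularizer=keras.regularizers.L1(1e-5),
    activity_regularizer=keras.regularizers.L2(1e-5),
)

# The penalties are summed into the loss that fit() optimizes.
out = layer(np.random.rand(8, 16).astype("float32"))
```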

  • Layer weight initializers

    Usage of initializers: initializers define the way to set the initial random weights of Keras layers. The keyword arguments used for passing initializers to layers depend on the layer. Usually, it is simply kernel_initializer and bias_initializer. All built-in initializers can also be passed via their string identifier. Available initializers: the following built-in initializers are available as part of…
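Both forms in one sketch, assuming Keras 3: initializers passed as instances and by string identifier (the shapes are arbitrary):

```python
import keras

# Initializers passed as instances.
layer_a = keras.layers.Dense(
    4,
    kernel_initializer=keras.initializers.GlorotUniform(),
    bias_initializer=keras.initializers.Zeros(),
)
# The same mechanism via a string identifier.
layer_b = keras.layers.Dense(4, kernel_initializer="he_normal")

# Building the layers creates and initializes their weight variables.
layer_a.build((None, 8))
layer_b.build((None, 8))
bias = layer_a.get_weights()[1]  # initialized to zeros
```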

  • Layer activation functions

    Usage of activations: activations can either be used through an Activation layer, or through the activation argument supported by all forward layers; the two forms are equivalent. All built-in activations may also be passed via their string identifier. Available activations: the relu function applies the rectified linear unit activation. With default values, this returns the standard ReLU activation: max(x, 0), the element-wise…

  • The base Layer class

    This is the class from which all layers inherit. A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and state (weight variables), which is typically created at construction time or lazily in build(). Layers are recursively composable: if you assign a Layer instance as an attribute…

  • Model training APIs

    Configures the model for training (the compile method). The fit method trains the model for a fixed number of epochs (dataset iterations). Unpacking behavior for iterator-like inputs: a common pattern is to pass an iterator-like object such as a tf.data.Dataset or a keras.utils.PyDataset to fit(), which will in fact yield not only features (x) but optionally targets (y) and sample weights…
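A compact compile-then-fit sketch, assuming Keras 3 and toy array inputs (the optimizer, loss, and split values are illustrative choices):

```python
import numpy as np
import keras

model = keras.Sequential([keras.layers.Dense(1)])
# compile() configures the optimizer, loss, and metrics used by fit().
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Toy data (illustrative only).
x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# fit() trains for a fixed number of epochs. With plain arrays, x and y
# are passed separately; a tf.data.Dataset would yield (x, y) or
# (x, y, sample_weight) tuples itself.
history = model.fit(
    x, y, epochs=2, batch_size=16, validation_split=0.25, verbose=0
)
```

fit() returns a History object whose history dict maps each metric (including validation metrics) to its per-epoch values.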

  • The Sequential class

    Sequential groups a linear stack of layers into a Model. The add method adds a layer instance on top of the layer stack; the pop method…
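A short sketch of the stack behavior, assuming Keras 3 (pop removes the most recently added layer):

```python
import keras

model = keras.Sequential()
model.add(keras.layers.Dense(8, activation="relu"))  # add: push onto the stack
model.add(keras.layers.Dense(1))
model.pop()  # pop: remove the last layer from the stack
```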