| menu | main |
|---|---|
| title | Framework Documentation |
keras

`class WandbCallback(keras.callbacks.Callback)`

`WandbCallback` automatically integrates keras with wandb.
Example:

```python
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
```
`WandbCallback` will automatically log history data from any metrics collected by keras: loss and anything passed into `keras_model.compile()`.
`WandbCallback` will set summary metrics for the run associated with the "best" training step, where "best" is defined by the `monitor` and `mode` attributes. This defaults to the epoch with the minimum `val_loss`. By default, `WandbCallback` saves the model associated with the best epoch.
`WandbCallback` can optionally log gradient and parameter histograms, and can optionally save training and validation data for wandb to visualize.
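The best-epoch selection described above can be sketched in plain Python. This is an illustrative stand-in, not the wandb implementation; `best_epoch` is a hypothetical helper, and the "auto" heuristic (maximize accuracy-like metrics, minimize everything else) is an assumption modeled on common Keras callback behavior:

```python
# Illustrative sketch (not wandb source): pick the "best" epoch from a
# metric history using a monitor name and a mode, as described above.
def best_epoch(history, monitor="val_loss", mode="auto"):
    """Return (epoch_index, value) of the best monitored value."""
    if mode == "auto":
        # Heuristic: accuracy-like metrics are maximized, losses minimized.
        mode = "max" if "acc" in monitor else "min"
    values = history[monitor]
    pick = max if mode == "max" else min
    best_value = pick(values)
    return values.index(best_value), best_value

best_epoch({"val_loss": [0.9, 0.5, 0.7]})  # -> (1, 0.5)
```

With `mode="auto"` and the default `val_loss` monitor, epoch 1 wins because its loss is the minimum, which matches the default behavior documented above.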
Arguments:
- `monitor` (str) - name of the metric to monitor. Defaults to `val_loss`.
- `mode` (str) - one of {"auto", "min", "max"}. "min" - save the model when monitor is minimized; "max" - save the model when monitor is maximized; "auto" - try to guess when to save the model (default).
- `save_model` - True - save a model when monitor beats all previous epochs; False - don't save models.
- `save_graph` (boolean) - if True, save the model graph to wandb (default: True).
- `save_weights_only` (boolean) - if True, only the model's weights will be saved (`model.save_weights(filepath)`); otherwise the full model is saved (`model.save(filepath)`).
- `log_weights` (boolean) - if True, save histograms of the weights of the model's layers.
- `log_gradients` (boolean) - if True, log histograms of the training gradients.
- `training_data` (tuple) - same format `(X, y)` as passed to `model.fit`. This is needed for calculating gradients, and is mandatory if `log_gradients` is `True`.
- `validation_data` (tuple) - same format `(X, y)` as passed to `model.fit`. A set of data for wandb to visualize. If this is set, every epoch wandb will make a small number of predictions and save the results for later visualization.
- `generator` (generator) - a generator that returns validation data for wandb to visualize. This generator should return tuples `(X, y)`. Either `validation_data` or `generator` should be set for wandb to visualize specific data examples.
- `validation_steps` (int) - if `validation_data` is a generator, how many steps to run the generator for the full validation set.
- `labels` (list) - if you are visualizing your data with wandb, this list of labels will convert numeric output to understandable strings if you are building a multiclass classifier. If you are making a binary classifier, you can pass in a list of two labels: ["label for false", "label for true"]. If `validation_data` and `generator` are both unset, this won't do anything.
- `predictions` (int) - the number of predictions to make for visualization each epoch; the maximum is 100.
- `input_type` (string) - type of the model input, to help visualization. Can be one of: ("image", "images", "segmentation_mask").
- `output_type` (string) - type of the model output, to help visualization. Can be one of: ("image", "images", "segmentation_mask").
- `log_evaluation` (boolean) - if True, save a dataframe containing the full validation results at the end of training.
- `class_colors` ([float, float, float]) - if the input or output is a segmentation mask, an array containing an rgb tuple (range 0-1) for each class.
- `log_batch_frequency` (integer) - if None, the callback will log every epoch. If set to an integer, the callback will log training metrics every `log_batch_frequency` batches.
- `log_best_prefix` (string) - if None, no extra summary metrics will be saved. If set to a string, the monitored metric and epoch will be prepended with this value and stored as summary metrics.
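The `labels` argument above maps numeric class output to readable strings for visualization. A minimal sketch of that mapping, in plain Python (an illustration, not wandb source; `readable_prediction` is a hypothetical helper):

```python
# Illustrative sketch (not wandb source): translate a model's per-class
# scores into a readable label via the `labels` list described above.
def readable_prediction(class_scores, labels=None):
    """Pick the argmax class and translate it via `labels` when given."""
    class_index = max(range(len(class_scores)), key=class_scores.__getitem__)
    if labels is not None:
        return labels[class_index]
    return class_index

# Binary classifier: two labels, ["label for false", "label for true"].
readable_prediction([0.2, 0.8], labels=["not cat", "cat"])  # -> "cat"
```

Without `labels`, the raw class index is returned, which is what you would see in the visualization for a multiclass model with no label list configured.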
This module hooks fast.ai Learners to Weights & Biases through a callback. The data to log can be configured through the callback constructor.
Examples:
WandbCallback can be used when initializing the Learner:

```python
from wandb.fastai import WandbCallback

[...]

learn = Learner(data, ..., callback_fns=WandbCallback)
learn.fit(epochs)
```
Custom parameters can be given using `functools.partial`:

```python
from wandb.fastai import WandbCallback
from functools import partial

[...]

learn = Learner(data, ..., callback_fns=partial(WandbCallback, ...))
learn.fit(epochs)
```
Finally, it is possible to use WandbCallback only when starting training. In this case, it must be instantiated:

```python
learn.fit(..., callbacks=WandbCallback(learn))
```

or, with custom parameters:

```python
learn.fit(..., callbacks=WandbCallback(learn, ...))
```
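The `functools.partial` pattern above works because fastai calls each entry of `callback_fns` with the learner as its only positional argument; `partial` pre-binds everything else. A runnable sketch with a stand-in class (the real `WandbCallback` is replaced by a hypothetical `DummyCallback` so no fastai install is needed):

```python
# Illustrative sketch: pre-binding callback arguments with functools.partial,
# so a framework can instantiate the callback with just the learner.
from functools import partial

class DummyCallback:
    """Stand-in for WandbCallback: takes a learner plus keyword options."""
    def __init__(self, learn, log="gradients", save_model=True):
        self.learn = learn
        self.log = log
        self.save_model = save_model

# The framework later calls callback_fn(learn); partial fixes the extras.
callback_fn = partial(DummyCallback, log="all", save_model=False)
cb = callback_fn("my_learner")  # only `learn` is supplied at call time
```

After the call, `cb.log` is `"all"` and `cb.save_model` is `False`, exactly as if those keywords had been passed at instantiation time.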
`class WandbCallback(TrackerCallback)`

Automatically saves model topology, losses & metrics. Optionally logs weights, gradients, sample predictions and the best trained model.
Arguments:
- `learn` (fastai.basic_train.Learner) - the fast.ai learner to hook.
- `log` (str) - "gradients", "parameters", "all", or None. Losses & metrics are always logged.
- `save_model` (bool) - save the model at the end of each epoch. It will also load the best model at the end of training.
- `monitor` (str) - metric to monitor for saving the best model. None uses the default TrackerCallback monitor value.
- `mode` (str) - "auto", "min" or "max" to compare `monitor` values and define the best model.
- `input_type` (str) - "images" or None. Used to display sample predictions.
- `validation_data` (list) - data used for sample predictions if `input_type` is set.
- `predictions` (int) - number of predictions to make if `input_type` is set and `validation_data` is None.
- `seed` (int) - initialize the random generator for sample predictions if `input_type` is set and `validation_data` is None.
- `on_train_begin(**kwargs)` - call the `watch` method to log model topology, gradients & weights.
- `on_epoch_end(epoch, smooth_loss, last_metrics, **kwargs)` - log training loss, validation loss, and custom metrics; log prediction samples; save the model.
- `on_train_end(**kwargs)` - load the best model.