Callbacks for customizing the training process
Training can be stopped when a monitored metric has stopped improving by using the EarlyStopping callback:
keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto')
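As a minimal sketch of how this is used, the callback instance is passed to fit() through the callbacks argument. The model, training data, and the patience value of 3 are illustrative assumptions, not part of the original example:
from keras.callbacks import EarlyStopping

# Stop training when the validation loss has not improved
# for 3 consecutive epochs (patience value is hypothetical)
early_stopping = EarlyStopping(monitor='val_loss', patience=3, verbose=1)
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20,
          validation_split=0.2, callbacks=[early_stopping])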
The loss history can be saved by defining a custom callback like the following:
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation

# Custom callback that records the loss at the end of every batch
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))

# Simple softmax classifier on 784-dimensional inputs
model = Sequential()
model.add(Dense(10, input_dim=784, init='uniform'))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# Train with the callback attached, then inspect the recorded per-batch losses
history = LossHistory()
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20,
          verbose=0, callbacks=[history])
print(history.losses)
Checkpointing
Checkpointing is a process that saves a snapshot of the application's state at regular intervals, so the application can be restarted from the last saved state in case of failure.
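In Keras, checkpointing is typically done with the ModelCheckpoint callback. The sketch below is illustrative only; the file path, the monitored metric, and the model and data names are assumptions rather than part of the original text:
from keras.callbacks import ModelCheckpoint

# Save the model after each epoch in which the validation loss improves.
# The file name and the model/data names are hypothetical placeholders.
checkpoint = ModelCheckpoint('model-weights.hdf5',
                             monitor='val_loss',
                             save_best_only=True,
                             verbose=1)
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20,
          validation_split=0.2, callbacks=[checkpoint])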