In this recipe, let's apply dropout to the output of the fully connected layer to reduce the chance of overfitting. Dropout randomly removes (zeroes out) a fraction of neurons during training, which discourages the network from relying too heavily on any single neuron.
Because dropout is attached to the output of a layer, the model's initial structure must already be set up and loaded. Here, the fully connected layer layer_fc1 is assumed to be defined, and dropout is applied to its output.
- Create a placeholder so the keep probability (the fraction of neurons to retain) can be fed as an input at run time:
keep_prob <- tf$placeholder(tf$float32)
- Use TensorFlow's dropout function to handle the scaling and masking of neuron outputs:
layer_fc1_drop <- tf$nn$dropout(layer_fc1, keep_prob)
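To make the scaling and masking concrete, here is a small NumPy sketch (in Python, purely illustrative; the variable names are hypothetical) of the "inverted dropout" that tf$nn$dropout performs: each element survives with probability keep_prob, and the survivors are divided by keep_prob so that the expected activation stays unchanged between training and evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((1, 1000))   # stand-in for the activations of layer_fc1
keep_prob = 0.5

# Mask: each unit is kept with probability keep_prob, dropped (zeroed) otherwise
mask = rng.random(x.shape) < keep_prob

# Inverted dropout: scale the surviving units by 1/keep_prob so the
# expected value of each output equals the original activation
dropped = x * mask / keep_prob
```

Note that keep_prob is a placeholder rather than a constant so that different values can be fed at different phases: a value such as 0.5 during training, and 1.0 during evaluation so that no units are dropped.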