Performing gradient descent cost optimization
In this recipe, we define an optimizer to minimize the cost, and then check the CNN's performance after optimization.
Getting ready
The optimizer definition requires the cost recipe to be completed first, as the cost is passed as input to the optimizer.
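For reference, the cost is assumed to be a scalar graph node such as a mean softmax cross-entropy. The following is a minimal sketch of what that definition might look like; layer_fc2 (the logits from the last fully connected layer) and y_true (the one-hot label placeholder) are assumed from the earlier recipes:
library(tensorflow)
# Assumed shape of the cost node from the preceding recipe:
# softmax cross-entropy averaged over the batch.
cross_entropy = tf$nn$softmax_cross_entropy_with_logits(logits = layer_fc2, labels = y_true)
cost = tf$reduce_mean(cross_entropy)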
How to do it...
- Run an Adam optimizer with the objective of minimizing the cost for a given learning_rate (a training-loop sketch follows these steps):
optimizer = tf$train$AdamOptimizer(learning_rate=1e-4)$minimize(cost)
- Compare the predicted and true class labels to obtain correct_prediction, and compute the mean accuracy (see the evaluation sketch after these steps):
correct_prediction = tf$equal(y_pred_cls, y_true_cls)
accuracy = tf$reduce_mean(tf$cast(correct_prediction, tf$float32))
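The optimizer node defined in step 1 only builds the graph; to train, it is run repeatedly inside a session. The following is a minimal sketch, assuming the placeholders x and y_true from the earlier recipes and a hypothetical next_batch() helper that returns a list with images and one-hot labels:
# Minimal training-loop sketch; x and y_true are assumed placeholders,
# and next_batch() is a hypothetical helper returning list(images, labels).
sess = tf$Session()
sess$run(tf$global_variables_initializer())
for (i in 1:1000) {
  batch = next_batch(64L)
  sess$run(optimizer, feed_dict = dict(x = batch$images, y_true = batch$labels))
}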
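After (or periodically during) training, the accuracy node from step 2 can be evaluated on held-out data to check the CNN's performance. A sketch, assuming a hypothetical test set with images and labels fields shaped like the training data:
# Evaluate mean accuracy on a hypothetical held-out test set.
test_acc = sess$run(accuracy, feed_dict = dict(x = test$images, y_true = test$labels))
cat(sprintf("Test accuracy: %.1f%%\n", 100 * test_acc))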