In this chapter, we learned how to find an optimal initial model parameter θ that generalizes across tasks, so that we can learn new, related tasks quickly with only a few gradient steps. We started off with MAML and saw how it performs meta-optimization to compute this optimal initial parameter. Next, we saw adversarial meta learning, where we used both clean and adversarial samples to find a robust initial model parameter. Finally, we learned about CAML and saw how it uses two different sets of parameters: context parameters, which are adapted within each task, and shared parameters, which are updated across tasks during meta-training.
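To recap the core idea of MAML's meta-optimization, here is a minimal first-order sketch on a toy family of 1-D linear regression tasks. The task distribution (random slopes), learning rates, and model are all illustrative assumptions, not the chapter's actual experiments:

```python
import numpy as np

# Toy MAML sketch (first-order approximation). Each task is a 1-D linear
# regression y = slope * x with a task-specific slope; the model is
# f(x) = theta * x and the loss is mean squared error.
rng = np.random.default_rng(0)
alpha = 0.1    # inner-loop (task adaptation) learning rate -- assumed value
beta = 0.1     # outer-loop (meta-update) learning rate -- assumed value

def loss_grad(theta, x, y):
    """Gradient of MSE loss for the model f(x) = theta * x."""
    return 2 * np.mean((theta * x - y) * x)

theta = 0.0  # the initial parameter we are meta-learning
for step in range(200):
    meta_grad = 0.0
    for _ in range(5):  # sample a batch of tasks
        slope = rng.uniform(1.0, 3.0)        # task-specific ground truth
        x = rng.uniform(-1.0, 1.0, size=10)
        y = slope * x
        # Inner loop: adapt theta to this task with one gradient step.
        theta_task = theta - alpha * loss_grad(theta, x, y)
        # Outer loop: accumulate the gradient of the post-adaptation loss
        # (first-order MAML: evaluated at the adapted parameter).
        meta_grad += loss_grad(theta_task, x, y)
    theta -= beta * meta_grad / 5
```

After meta-training, θ settles near the mean task slope, so a single inner gradient step adapts it quickly to any sampled task; the full MAML algorithm would additionally differentiate through the inner update rather than use the first-order shortcut shown here.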
In the next chapter, we will learn about the Meta-SGD and Reptile algorithms, which also aim to find a better initial parameter for a model.