Now, we will look at another interesting variant of prototypical networks called the semi-prototypical network, which handles unlabeled examples. As we know, in a prototypical network, we compute the prototype of each class by taking the mean embedding of that class's support examples, and then predict the class of a query point by measuring the distance between the query point and the class prototypes.
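The two steps just described, building a prototype as the mean embedding of each class and classifying a query by its nearest prototype, can be sketched as follows. This is a minimal illustration using numpy; the function names and the use of Euclidean distance on precomputed embeddings are assumptions for the sake of the example.

```python
import numpy as np

def compute_prototypes(embeddings, labels, num_classes):
    # Prototype of a class = mean of the embeddings of its support examples.
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def classify(query_embedding, prototypes):
    # Predict the class whose prototype is closest (Euclidean distance).
    distances = np.linalg.norm(prototypes - query_embedding, axis=1)
    return int(np.argmin(distances))
```

For instance, with two support examples per class, each prototype is simply the midpoint of its class's embeddings, and a query is labeled by whichever midpoint it lies nearest to.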
Consider the case where our dataset also contains some unlabeled data points: how do we make use of these unlabeled data points when computing the class prototypes?
Let's say we have a support set, S = {(x1, y1), (x2, y2), ..., (xK, yK)}, where x is the feature and y is the label, and a query set, Q = {(x1', y1'), (x2', y2'), ..., (xK', yK')}. Along with these, we have one more set called the unlabeled set, R = {x̃1, x̃2, ..., x̃M}, where we have only unlabeled examples and no labels.
So, what can we do with this unlabeled set?
First, we compute the class prototypes using all the examples given in the support set. Next, we use soft k-means to assign classes to the unlabeled examples in R; that is, each unlabeled example receives a soft (partial) assignment to every class, based on its distance to that class's prototype. We then refine the class prototypes by recomputing each one as the weighted mean of both the labeled support examples and the softly assigned unlabeled examples.
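One refinement step of this soft k-means procedure can be sketched as follows: soft assignment weights come from a softmax over negative distances to the prototypes, and each prototype is then recomputed as the weighted mean of its labeled examples plus the softly assigned unlabeled examples. This is an illustrative sketch assuming precomputed embeddings and Euclidean distance; the function names are my own.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(prototypes, support_emb, support_labels, unlabeled_emb):
    # Distance from each unlabeled example to each prototype: shape (M, C).
    d = np.linalg.norm(unlabeled_emb[:, None, :] - prototypes[None, :, :],
                       axis=-1)
    # Soft assignment of each unlabeled example over the classes.
    z = softmax(-d, axis=1)
    refined = []
    for c in range(prototypes.shape[0]):
        labeled = support_emb[support_labels == c]
        # Weighted mean over labeled (weight 1) and unlabeled (weight z) points.
        num = labeled.sum(axis=0) + (z[:, c:c + 1] * unlabeled_emb).sum(axis=0)
        den = labeled.shape[0] + z[:, c].sum()
        refined.append(num / den)
    return np.stack(refined)
```

An unlabeled point close to one prototype gets a weight near 1 for that class and near 0 for the others, so it pulls that prototype toward itself while leaving the other prototypes almost unchanged.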