When dealing with data that is linearly separable, the goal of the Support Vector Machine (SVM) learning algorithm is to find a decision boundary between the classes that minimizes misclassification errors. However, there can be many such boundaries (B1, B2), as you can see in the following figure:
As a result, the question arises as to which boundary is better, and what "better" even means here. The solution is to use the margin as the optimization objective.
The objective of the SVM algorithm is to maximize the margin. The margin of a linear classifier is the width by which the decision boundary can be widened before it hits a data point. The algorithm computes this width for each candidate hyperplane and chooses the decision boundary with the maximum margin. So, in the above figure, it chooses B1:
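The idea above can be sketched in a few lines with scikit-learn. This is a minimal illustration, not the book's own example: the tiny two-cluster dataset is an assumption, and a large `C` is used so that the soft-margin `SVC` approximates the hard-margin classifier described here. For a linear SVM with weight vector w, the margin width is 2 / ||w||.

```python
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable clusters (illustrative data, assumed)
X = np.array([[1, 1], [2, 1], [1, 2],    # class 0
              [5, 5], [6, 5], [5, 6]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM; a large C approximates the hard-margin objective
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

# The maximum-margin boundary separates the training data perfectly
predictions = clf.predict(X)

# Margin width of a linear SVM: 2 / ||w||
w = clf.coef_[0]
margin = 2.0 / np.linalg.norm(w)
print(f"margin width: {margin:.3f}")
```

Among all hyperplanes that separate the two clusters, the fitted one is the boundary whose distance to the nearest points on either side (the support vectors) is largest, which is exactly the B1-over-B2 choice in the figure.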