Errors in machine learning can be decomposed into two components: bias and variance (plus an irreducible noise term that no model can remove). The difference between them is commonly explained using the shooting metaphor, as shown in the following diagram. If you train a high-variance model on 10 different datasets, you will get 10 very different models. If you train a high-bias model on 10 different datasets, the results will be very similar. In other words, high-bias models tend to underfit, and high-variance models tend to overfit. Usually, the more parameters a model has, the more prone it is to overfitting, but there are also differences between model classes: parametric models such as linear and logistic regression tend to be biased, while nonparametric models such as KNN usually have high variance:
Figure 7.4: Two components of errors: bias and variance
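The "10 different datasets" experiment above is easy to reproduce. The sketch below (a minimal illustration, not code from this chapter; the dataset and model choices are assumptions) fits a rigid degree-1 polynomial and a flexible degree-9 polynomial to 10 noisy samples of the same sine curve, then measures how much each model's predictions vary across the datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)            # training inputs, shared by all datasets
x_test = np.linspace(0.05, 0.95, 50)  # points where we compare predictions

def predictions_across_datasets(degree, n_datasets=10):
    """Fit one polynomial per noisy dataset; return all test predictions."""
    preds = []
    for _ in range(n_datasets):
        # Each dataset: the same sine curve with fresh Gaussian noise
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)
        coefs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coefs, x_test))
    return np.array(preds)

# Mean variance of predictions across datasets, per model
var_rigid = predictions_across_datasets(1).var(axis=0).mean()
var_flexible = predictions_across_datasets(9).var(axis=0).mean()

# The degree-1 model underfits but barely changes between datasets (high bias,
# low variance); the degree-9 model tracks the noise and swings dataset to
# dataset (low bias, high variance).
print(var_rigid, var_flexible)
```

The same pattern would appear with a linear regression versus a small-k KNN regressor; polynomials are used here only to keep the example dependency-free beyond NumPy.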