***k*-nearest neighbors algorithm**: An algorithm that estimates an unknown data item as being like the majority of the *k*-closest neighbors to that item.

**Naive Bayes classifier**: A way to classify a data item using Bayes' theorem concerning the conditional probabilities, *P(A|B) = P(B|A)P(A)/P(B)*. It also assumes that the variables in the data are independent, which means that no variable affects the probability of the remaining variables attaining a certain value.

**Decision tree**: A model that classifies a data item into one of the classes at a leaf node, based on matching properties between the branches of the tree and the actual data item.

**Random decision tree**: A decision tree in which every branch is formed using only a random subset of the available variables during its construction.

**Random forest**: An ensemble of random decision trees, each constructed on a random subset of the data sampled with replacement; a data item is classified to the class with the majority vote from the trees.

***k*-means algorithm**: A clustering algorithm that divides a dataset into *k* groups such that the members in each group are as similar as possible, that is, closest to one another.

**Regression analysis**: A method for estimating the unknown parameters in a functional model that predicts the output variable from the input variables, for example, estimating *a* and *b* in the linear model *y = ax + b*.

**Time series analysis**: The analysis of data dependent on time; it mainly includes the analysis of trends and seasonality.

**Support vector machines**: A classification algorithm that finds the hyperplane dividing the training data into the given classes. This hyperplane is then used to classify further data.

**Principal component analysis**: The preprocessing of the individual components of given data in order to achieve better accuracy, for example, rescaling the variables in the input vector depending on how much impact they have on the end result.
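To make the first entry concrete, here is a minimal sketch of *k*-nearest neighbors classification on a toy dataset; the function name `knn_classify` and the sample points are illustrative, not from any particular library:

```python
from collections import Counter
import math

def knn_classify(training_data, new_point, k=3):
    """Classify new_point by the majority label among its k nearest neighbors."""
    # Sort the labeled training points by Euclidean distance to the new point.
    by_distance = sorted(
        training_data,
        key=lambda item: math.dist(item[0], new_point),
    )
    # Take the labels of the k closest points and let them vote.
    nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# Toy dataset: (feature vector, class label)
data = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
        ((5.0, 5.0), "B"), ((5.2, 4.9), "B")]
print(knn_classify(data, (1.1, 0.9), k=3))  # majority of the 3 closest -> "A"
```

With *k* = 3, two of the three closest neighbors of (1.1, 0.9) carry label "A", so the majority vote assigns class "A".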

**Text mining**: The search for and extraction of text, and its possible conversion to numerical data used for data analysis.

**Neural networks**: A machine learning algorithm consisting of a network of simple classifiers that make decisions based on the input or on the results of other classifiers in the network.

**Deep learning**: The ability of a neural network with multiple layers to improve its learning process.

***A priori* association rules**: Rules that can be observed in training data and on the basis of which future data can be classified.

**PageRank**: A search algorithm that assigns the greatest relevance to the search result that has the greatest number of incoming web links from the most relevant search results for a given search term. In mathematical terms, PageRank calculates an eigenvector representing these measures of relevance.

**Ensemble learning**: A method of learning in which several different learning algorithms are used to reach a final conclusion.

**Bagging**: A method of classifying a data item by the majority vote of classifiers trained on random subsets of the training data.

**Genetic algorithms**: Machine learning algorithms inspired by genetic processes, for example, evolution, where the classifiers with the greatest accuracy are trained further.

**Inductive inference**: A machine learning method for learning the rules that produced the actual data.

**Bayesian networks**: A graph model representing random variables together with their conditional dependencies.

**Singular value decomposition**: A factorization of a matrix; a generalization of eigendecomposition, used in least squares methods.

**Boosting**: A machine learning meta-algorithm that improves an estimation by training classifiers sequentially, each focusing on the data items its predecessors misclassified, and combining their predictions into an ensemble.

**Expectation maximization**: An iterative method for searching for the parameters of a model that maximize the likelihood of the model's prediction of the observed data.
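The PageRank entry above mentions that relevance flows along incoming links and corresponds to an eigenvector. The sketch below computes it by power iteration on a hypothetical three-page web; the `pagerank` function, the damping factor of 0.85, and the link structure are illustrative assumptions, not a production implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank: each page's rank flows to the pages it links to."""
    pages = sorted(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small base rank, plus damped shares from in-links.
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Hypothetical three-page web: C is linked to by both A and B.
web = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # the most-linked page ranks highest -> "C"
```

Repeating the update converges to the eigenvector the glossary entry refers to: page C, with the most incoming links, ends up with the highest rank.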