Essential Statistics for Non-STEM Data Analysts

By: Rongpeng Li

Overview of this book

Statistics remains the backbone of modern analysis tasks, helping you to interpret the results produced by data science pipelines. This book is a detailed guide covering the math and various statistical methods required for undertaking data science tasks. The book starts by showing you how to preprocess data and inspect distributions and correlations from a statistical perspective. You’ll then get to grips with the fundamentals of statistical analysis and apply its concepts to real-world datasets. As you advance, you’ll find out how statistical concepts emerge from different stages of data science pipelines, understand the summary of datasets in the language of statistics, and use it to build a solid foundation for robust data products such as explanatory models and predictive models. Once you’ve uncovered the working mechanism of data science algorithms, you’ll cover essential concepts for efficient data collection, cleaning, mining, visualization, and analysis. Finally, you’ll implement statistical methods in key machine learning tasks such as classification, regression, tree-based methods, and ensemble learning. By the end of this book, you’ll have learned how to build and present a self-contained, statistics-backed data product to meet your business goals.
Table of Contents (19 chapters)

Section 1: Getting Started with Statistics for Data Science
Section 2: Essentials of Statistical Analysis
Section 3: Statistics for Machine Learning
Section 4: Appendix

Applying the maximum likelihood approach with Python

Maximum Likelihood Estimation (MLE) is the most widely used estimation method. It estimates the parameters of a probability distribution by maximizing a likelihood function. The resulting extremum estimator is called the maximum likelihood estimator. The MLE approach is both intuitive and flexible, and it has the following advantages:

  • MLE is consistent: as the sample size grows, the estimates converge in probability to the true parameter values. This is guaranteed. In practice, it means that once a good MLE is in place, the job that is left is often simply to collect more data.
  • MLE is functionally invariant: if a value maximizes the likelihood of a parameter, applying a function to that value gives the MLE of the transformed parameter. The likelihood function can also undergo convenient transformations, such as taking its logarithm, before maximizing. We will see examples in the next section.
  • MLE is efficient: as the sample size tends to infinity, no other consistent estimator has a lower asymptotic mean squared error (MSE) than the MLE.

With all that power, I bet you just can't wait to try MLE. Before maximizing the likelihood, though, we first need to define the likelihood function.
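As a preview, here is a minimal sketch of MLE in practice, using a hypothetical example of my own: we draw a sample from a normal distribution and recover its mean and standard deviation by numerically minimizing the negative log-likelihood with SciPy. The specific data, starting values, and optimizer choice are illustrative assumptions, not the book's example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative data: 1,000 draws from a normal distribution
# with true mean 5.0 and true standard deviation 2.0.
rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=1000)

def neg_log_likelihood(params, sample):
    """Negative log-likelihood of a normal distribution."""
    mu, sigma = params
    if sigma <= 0:          # sigma must be positive
        return np.inf
    return -np.sum(norm.logpdf(sample, loc=mu, scale=sigma))

# Minimizing the negative log-likelihood maximizes the likelihood.
result = minimize(neg_log_likelihood, x0=[0.0, 1.0],
                  args=(data,), method="Nelder-Mead")
mu_hat, sigma_hat = result.x
print(mu_hat, sigma_hat)    # estimates close to 5.0 and 2.0
```

Note that we minimize the log-likelihood's negation rather than maximize the likelihood directly: the logarithm turns a product of densities into a sum, which is the kind of functional transformation the invariance property permits.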

Likelihood function...