# Exercises

Run through the `introduction_to_data_analysis.ipynb` notebook for a review of this chapter's content, review the `python_101.ipynb` notebook (if needed), and then complete the following exercises to practice working with JupyterLab and calculating summary statistics in Python:

1. Explore the JupyterLab interface and look at some of the shortcuts that are available. Don't worry about memorizing them for now (eventually, they will become second nature and save you a lot of time); just get comfortable using Jupyter Notebooks.
2. Is all data normally distributed? Explain why or why not.
3. When would it make more sense to use the median instead of the mean as the measure of center?
4. Run the code in the first cell of the `exercises.ipynb` notebook. It will give you a list of 100 values to work with for the rest of the exercises in this chapter. Be sure to treat these values as a sample of the population.
5. Using the data from *exercise 4*, calculate the following statistics without importing anything from the `statistics` module in the standard library (https://docs.python.org/3/library/statistics.html), and then confirm your results match those obtained when using the `statistics` module (where possible):

   a) Mean

   b) Median

   c) Mode (hint: check out the `Counter` class in the `collections` module of the standard library at https://docs.python.org/3/library/collections.html#collections.Counter)

   d) Sample variance

   e) Sample standard deviation

6. Using the data from *exercise 4*, calculate the following statistics using the functions in the `statistics` module where appropriate:

   a) Range

   b) Coefficient of variation

   c) Interquartile range

   d) Quartile coefficient of dispersion

7. Scale the data created in *exercise 4* using the following strategies:

   a) Min-max scaling (normalizing)

   b) Standardizing

8. Using the scaled data from *exercise 7*, calculate the following:

   a) The covariance between the standardized and normalized data

   b) The Pearson correlation coefficient between the standardized and normalized data (this is actually 1, but due to rounding along the way, the result will be slightly less)
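
If you get stuck on *exercise 5*, the manual calculations can be sketched as follows. Note that `data` here is a short placeholder list, not the 100 values from *exercise 4*; substitute your own sample:

```python
import math
import statistics
from collections import Counter

# Placeholder sample; use the 100 values from exercise 4 instead.
data = [7.0, 1.5, 3.2, 3.2, 9.8, 4.4, 2.1, 6.3]

# Mean: the sum of the values divided by the count.
mean = sum(data) / len(data)

# Median: the middle value of the sorted data (average the two
# middle values when the count is even).
ordered = sorted(data)
mid = len(ordered) // 2
if len(ordered) % 2:
    median = ordered[mid]
else:
    median = (ordered[mid - 1] + ordered[mid]) / 2

# Mode: the most common value, found with collections.Counter.
mode = Counter(data).most_common(1)[0][0]

# Sample variance: squared deviations from the mean divided by n - 1
# (Bessel's correction, since this is a sample).
variance = sum((x - mean) ** 2 for x in data) / (len(data) - 1)

# Sample standard deviation: the square root of the sample variance.
std_dev = math.sqrt(variance)

# Confirm the results against the statistics module.
assert math.isclose(mean, statistics.mean(data))
assert math.isclose(median, statistics.median(data))
assert mode == statistics.mode(data)
assert math.isclose(variance, statistics.variance(data))
assert math.isclose(std_dev, statistics.stdev(data))
```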
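
For *exercise 6*, one possible sketch is below, again with a placeholder list standing in for the sample from *exercise 4*. Be aware that `statistics.quantiles()` requires Python 3.8+ and that its default `'exclusive'` method may give slightly different quartiles than other tools:

```python
import statistics

# Placeholder sample; use the 100 values from exercise 4 instead.
data = [7.0, 1.5, 3.2, 3.2, 9.8, 4.4, 2.1, 6.3]

# Range: the distance between the largest and smallest values.
data_range = max(data) - min(data)

# Coefficient of variation: the standard deviation relative to the mean.
cv = statistics.stdev(data) / statistics.mean(data)

# Quartiles: statistics.quantiles() with n=4 returns [Q1, Q2, Q3].
q1, _, q3 = statistics.quantiles(data, n=4)

# Interquartile range: the spread of the middle 50% of the data.
iqr = q3 - q1

# Quartile coefficient of dispersion: the IQR relative to Q1 + Q3.
qcd = (q3 - q1) / (q3 + q1)
```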
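
For *exercises 7 and 8*, the scaling strategies and the follow-up calculations could be sketched like this, once more with a placeholder list in place of the real sample:

```python
import statistics

# Placeholder sample; use the 100 values from exercise 4 instead.
data = [7.0, 1.5, 3.2, 3.2, 9.8, 4.4, 2.1, 6.3]

# Min-max scaling (normalizing): shift and rescale into [0, 1].
low, high = min(data), max(data)
normalized = [(x - low) / (high - low) for x in data]

# Standardizing: center on the mean and divide by the
# sample standard deviation.
mean, std = statistics.mean(data), statistics.stdev(data)
standardized = [(x - mean) / std for x in data]

# Sample covariance between the two scaled versions of the data.
norm_mean = statistics.mean(normalized)
std_mean = statistics.mean(standardized)
cov = sum(
    (x - norm_mean) * (y - std_mean)
    for x, y in zip(normalized, standardized)
) / (len(data) - 1)

# Pearson correlation coefficient: the covariance divided by the
# product of the standard deviations. Both scalings are linear
# transformations of the same data, so this is 1 up to rounding.
pearson = cov / (
    statistics.stdev(normalized) * statistics.stdev(standardized)
)
```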