#### Overview of this book

This cookbook offers a range of data analysis samples in simple and straightforward R code, providing step-by-step resources and time-saving methods to help you solve data problems efficiently. The first section deals with how to create R functions to avoid the unnecessary duplication of code. You will learn how to prepare, process, and perform sophisticated ETL for heterogeneous data sources with R packages. An example of data manipulation is provided, illustrating how to use the “dplyr” and “data.table” packages to efficiently process larger data structures. We also focus on “ggplot2” and show you how to create advanced figures for data exploration. In addition, you will learn how to build an interactive report using the “ggvis” package. Later chapters offer insight into time series analysis on financial data, while there is detailed information on the hot topic of machine learning, including data classification, regression, clustering, association rule mining, and dimension reduction. By the end of this book, you will understand how to resolve issues and will be able to comfortably offer solutions to problems encountered while performing data analysis.
R for Data Science Cookbook

## Eliminating duplicated rows with dplyr

To avoid counting duplicate rows in SQL, we can use the `DISTINCT` keyword. Similarly, `dplyr` provides the `distinct()` function to eliminate duplicated rows from a given dataset.
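As a minimal sketch of the idea, assuming a small hypothetical data frame named `orders` (not the recipe's purchase data), `distinct()` behaves much like `SELECT DISTINCT` in SQL:

```r
library(dplyr)

# Hypothetical toy data frame (not the recipe's dataset)
orders <- data.frame(
  Product = c("P001", "P001", "P002", "P002", "P003"),
  User    = c("U1",   "U1",   "U1",   "U2",   "U2"),
  stringsAsFactors = FALSE
)

# distinct() drops exact duplicate rows, keeping the first occurrence,
# much like SELECT DISTINCT in SQL
unique_orders <- orders %>% distinct()
nrow(unique_orders)  # 4: the duplicated (P001, U1) row is removed
```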

Ensure that you have completed the *Enhancing a data.frame with a data.table* recipe, so that `purchase_view.tab` and `purchase_order.tab` are loaded into your R environment as both `data.frame` and `data.table` objects.

### How to do it…

Perform the following steps to eliminate duplicated rows with `dplyr`:

1. First, we illustrate how to obtain unique products from the dataset:

```
> order.dt %>% select(Product) %>% distinct() %>% head(3)
       Product
1: P0006944501
2: P0006018073
3: P0002267974
```
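Building on this step, a related sketch counts the unique products directly with `n_distinct()`. The data frame below is a small hypothetical stand-in for `order.dt`, whose real contents are loaded from `purchase_order.tab` in an earlier recipe:

```r
library(dplyr)

# Hypothetical stand-in for order.dt (the real data is loaded from
# purchase_order.tab in an earlier recipe)
order.df <- data.frame(
  Product = c("P0006944501", "P0006944501", "P0006018073", "P0002267974"),
  stringsAsFactors = FALSE
)

# n_distinct() counts unique values without materializing them first
n_products <- order.df %>% summarise(n = n_distinct(Product)) %>% pull(n)
n_products  # 3
```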
2. We can also apply `distinct()` to remove duplicated rows across multiple selected columns:

```
> distinct.product.user.dt <- order.dt %>% select(Product, User) %>% distinct()
```