Linux Shell Scripting Cookbook, Second Edition

Overview of this book

The shell remains one of the most powerful tools on a computer system, yet many users are unaware of how much it can accomplish. Using combinations of simple commands, we will see how to solve complex problems in day-to-day computer usage.

Linux Shell Scripting Cookbook, Second Edition takes you through useful real-world recipes designed to make your daily life easier when working with the shell. Starting with the basics of the shell and general commands, we will learn simple commands and their uses, allowing us to perform operations on files of different kinds. The book then proceeds to explain text processing and web interaction, and concludes with backups, monitoring, and other sysadmin tasks. It serves as an excellent guide to solving day-to-day problems by combining the shell with a few powerful commands.

Finding and deleting duplicate files


Duplicate files are copies of the same file. In some circumstances, we may need to remove duplicates and keep only a single copy. Identifying duplicate files by examining their content is an interesting task, and it can be accomplished with a combination of shell utilities. This recipe deals with finding duplicate files and performing operations based on the result.

Getting ready

We can identify duplicate files by comparing file content. Checksums are ideal for this task, since files with exactly the same content produce the same checksum values. We can use this fact to remove duplicate files.
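To see this property in action, here is a minimal sketch (assuming the GNU coreutils md5sum command is available, and using temporary file names chosen for illustration):

```shell
# Work in a scratch directory so we don't touch real files
cd "$(mktemp -d)"

# Two files with identical content, one with different content
echo "hello" > a
echo "hello" > b
echo "world" > c

# Identical content yields identical checksums; different content differs
md5sum a b c
```

The checksums printed for a and b are identical, while c gets a different value; this is the property the recipe's script relies on.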

How to do it...

  1. Generate some test files as follows:

    $ echo "hello" > test ; cp test test_copy1 ; cp test test_copy2;
    $ echo "next" > other;
    # test_copy1 and test_copy2 are copy of test
    
  2. The code for the script to remove the duplicate files is as follows:

    #!/bin/bash
    #Filename: remove_duplicates.sh
    #Description:  Find and remove duplicate files...
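The script above is truncated in this excerpt. As a rough illustration of the checksum-based approach it describes, here is a hedged one-liner sketch (not the book's actual script): it groups files by checksum and deletes all but the first file in each group. It assumes GNU md5sum, sort, awk, and xargs, and file names without spaces.

```shell
# Work in a scratch directory with the same test files as step 1
cd "$(mktemp -d)"
echo "hello" > test ; cp test test_copy1 ; cp test test_copy2
echo "next" > other

# Checksum every file, sort so duplicates are adjacent, then print every
# file whose checksum has already been seen and remove those files.
md5sum * | sort | awk 'seen[$1]++ { print $2 }' | xargs -r rm --

ls    # only one copy of each distinct content remains
```

After running this, test and other survive while test_copy1 and test_copy2 are removed, because they share test's checksum.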