Linux Shell Scripting Cookbook, Second Edition
Overview of this book

The shell remains one of the most powerful tools on a computer system, yet many users are unaware of how much it can accomplish. Using combinations of simple commands, we will see how to solve complex problems in day-to-day computer usage.

Linux Shell Scripting Cookbook, Second Edition takes you through useful, real-world recipes designed to make your daily life easier when working with the shell. Starting with the basics of the shell and general commands, we learn how to perform operations on files of different kinds. The book then moves on to text processing and web interaction, and concludes with backups, monitoring, and other sysadmin tasks. Throughout, it serves as a guide to solving day-to-day problems by combining the shell with a few powerful commands.

Image crawler and downloader


Image crawlers are very useful when we need to download all the images that appear on a web page. Instead of going through the HTML source and picking out the images by hand, we can use a script to parse out the image URLs and download the files automatically. Let's see how to do it.

How to do it...

Let's write a Bash script to crawl and download the images from a web page, as follows:

#!/bin/bash
#Desc: Images downloader
#Filename: img_downloader.sh

# Validate the arguments: we expect a URL and -d DIRECTORY
if [ $# -ne 3 ];
then
  echo "Usage: $0 URL -d DIRECTORY"
  exit 1
fi

# Walk through the arguments; the URL and -d DIRECTORY may be given in either order
for i in {1..4}
do
  case $1 in
  -d) shift; directory=$1; shift ;;
   *) url=${url:-$1}; shift ;;
  esac
done

mkdir -p "$directory";
# Extract the protocol and host part, used later to resolve relative image paths
baseurl=$(echo $url | egrep -o "https?://[a-z.]+")

echo Downloading $url
# Fetch the page, pick out the <img> tags, and strip them down to the src URLs
curl -s $url | egrep -o "<img src=[^>]*>" | sed 's/<img src=\"\([^"]*\).*/\1/g' > /tmp/$$.list

# Prefix the base URL to image paths that start with /
sed -i "s|^/|$baseurl/|" /tmp/$$.list

cd "$directory";

# Download every image in the list into the target directory
while read filename;
do
  echo Downloading $filename
  curl -s -O "$filename"

done < /tmp/$$.list
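For example, the script can be made executable and pointed at a page to crawl; the URL and the images directory shown here are only placeholders for illustration, so substitute the page you actually want to download from:

$ chmod +x img_downloader.sh
$ ./img_downloader.sh http://www.example.com/gallery.html -d images

The downloaded files end up in the directory passed with -d, which is created if it does not already exist.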