
Deploying Spark, again


We choose a host on which to run the Spark standalone master, say aws-105, and label it accordingly:

docker node update --label-add type=sparkmaster aws-105
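To double-check that the label was applied, we can inspect the node (a quick, optional verification using the standard Docker CLI):

docker node inspect --format '{{ .Spec.Labels }}' aws-105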

Other nodes will host our Spark workers.

We start the Spark master on aws-105:

$ docker service create \
--container-label spark-master \
--network spark \
--constraint 'node.labels.type == sparkmaster' \
--publish 8080:8080 \
--publish 7077:7077 \
--publish 6066:6066 \
--name spark-master \
--replicas 1 \
--env SPARK_MASTER_IP=0.0.0.0 \
--mount type=volume,target=/data,source=spark,volume-driver=flocker \
fsoppelsa/spark-master
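
Once the service is created, an optional sanity check (plain Docker CLI, nothing specific to this example) confirms that the single replica was scheduled on the labeled node:

$ docker service ps spark-master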

First, the image. I discovered that the Google images include some annoying things (such as unsetting certain environment variables, which makes configuring them from outside with --env switches impossible). So I created my own pair of Spark 1.6.2 master and worker images.

Then, --network. Here we tell the service to attach its containers to the user-defined overlay network...
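
As a side note, the spark overlay network referenced above must already exist when the service is created; if it does not, it could be created with something along these lines (a minimal sketch, default settings assumed):

$ docker network create --driver overlay spark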