DevOps Journal #3 - Going with Docker Swarm

Posted in #devops (edited)


I looked at some of the other orchestration options over the last few days, and I've decided a self-hosted Docker Swarm setup will be the best fit for our situation.

You can get up and running pretty quickly: on the node you want to act as the swarm manager, run the simple command `docker swarm init`. That command prints a join command that you should run on the nodes (servers or devices) you want to add to the swarm.
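A minimal sketch of that flow (the advertise address and token below are placeholders; `docker swarm init` prints the real join command for your cluster):

```shell
# On the manager node (replace the address with the manager's real IP):
docker swarm init --advertise-addr 203.0.113.10

# The init command prints a join command similar to this; run it on each worker:
docker swarm join --token SWMTKN-1-<token> 203.0.113.10:2377

# Back on the manager, verify that all nodes have joined:
docker node ls
```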

Firewall Settings

However, the join command will only work if the correct ports are open on your nodes. Here is a summary of the ports needed for Docker Swarm (source: DigitalOcean docs):

  • TCP port 2376 for secure Docker client communication. This port is required for Docker Machine to work. Docker Machine is used to orchestrate Docker hosts.
  • TCP port 2377 for communication between the nodes of a Docker Swarm or cluster. It only needs to be open on manager nodes.
  • TCP and UDP port 7946 for communication among nodes (container network discovery).
  • UDP port 4789 for overlay network traffic (container ingress networking).
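As a sketch, opening those ports might look like this on a host using `ufw` (assuming ufw; adjust for your firewall of choice, and remember 2377 is only needed on managers):

```shell
ufw allow 2376/tcp   # secure Docker client communication (Docker Machine)
ufw allow 2377/tcp   # swarm cluster management (manager nodes only)
ufw allow 7946/tcp   # node-to-node communication (container network discovery)
ufw allow 7946/udp
ufw allow 4789/udp   # overlay network traffic (container ingress networking)
ufw reload
```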

OSI Model

The OSI model concept is one I've heard many times, but I never seem to be able to recall it properly. It's an important concept to have fresh in your mind when thinking about Docker Swarm.

(Image: OSI model layer diagram)

Overlay Network

From what I understand, if you want to run a Docker stack on top of multiple hosts, you need to create an overlay network.

This allows all containers in the same stack, even when they are hosted on different hosts, to access a shared network. So you can have one container running on DigitalOcean accessing port 2000 of a container on EC2 without making any firewall adjustments, because they are part of the same network. Someone correct me if I'm wrong on my interpretation here.

When you create an overlay network you're creating a single layer 2 (data link) broadcast domain that spans multiple hosts/machines. This short video from Docker does a great job of explaining.
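A quick sketch of creating an overlay network and attaching a service to it (the network name `app-net` and the `nginx` service are just examples):

```shell
# On the manager node, create an attachable overlay network:
docker network create --driver overlay --attachable app-net

# Services attached to this network can reach each other across hosts
# by service name, regardless of which node their containers land on:
docker service create --name web --network app-net --replicas 2 nginx
```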

More on Docker Networking

I found another video from DockerCon EU 2017, which goes more in depth into re-mapping a traditional network architecture to various Docker Swarm configurations.

What's Next

I'm finding that I understand the big picture a bit more when it comes to rewriting our old setup with Docker Swarm. That said, there is still a lot I'm not sure about, and best practices I want to look into.

Some questions I have:

  • Are there security issues with using a single swarm for multiple applications? With our past setup we had an excessive number of stacks, which could have benefited from pooling resources. But what about the flip side: are there security risks to applications sharing a swarm?
  • `docker swarm init` automatically generates a join command using the IP of the manager node. Will this cause problems if that IP changes? I'm thinking I should swap it out for a stable hostname?
  • What would it take to add a node behind a firewall to a swarm (not for production, just for fun)? Could that be done through a reverse SSH tunnel?

That is the research I've done so far. My next steps are to put this into practice:

  • Create 2 test servers
  • Open ports required
  • Install Docker on both
  • Initialize one as swarm manager
  • Add second server to swarm
  • Create and add both to swarm overlay network
  • Run an application that uses 5 containers that require network communication between each other
  • Simulate heavy traffic and scale up by adding another test server
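The steps above can be sketched roughly as follows (server roles, the stack name `demo`, the compose file, and the service name are all placeholders, not a tested setup):

```shell
# On both test servers: install Docker via the official convenience script
curl -fsSL https://get.docker.com | sh

# On server 1: initialize the swarm and note the printed join command
docker swarm init

# On server 2: run the join command that `docker swarm init` printed, e.g.
# docker swarm join --token <token> <manager-ip>:2377

# Back on the manager: deploy a multi-service stack; `docker stack deploy`
# creates an overlay network for the stack's services automatically
docker stack deploy -c docker-compose.yml demo

# Once another server has joined, scale a service up to spread load
docker service scale demo_web=5
```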
