page_title | page_description | page_keywords
---|---|---
Docker Swarm discovery | Swarm discovery | docker, swarm, clustering, discovery
Docker Swarm comes with multiple discovery backends. A hosted discovery service maintains the list of IP addresses in your swarm. Several backends are available, such as `etcd`, `consul`, and `zookeeper`, depending on what is best suited for your environment. You can even use a static file. Docker Hub also provides a hosted discovery service which you can use.

This example uses the hosted discovery service on Docker Hub. It is meant for development and testing only, not for production scenarios. Using Docker Hub's hosted discovery service requires that each node in the swarm is connected to the internet. To create your swarm:
First we create a cluster.
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
Then, on each node, we start the Swarm agent and join it to the cluster.
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (e.g. 192.168.0.X),
# as long as the swarm manager can access it.
$ swarm join --advertise=<node_ip:2375> token://<cluster_id>
Finally, we start the Swarm manager. This can be on any machine or even your laptop.
$ swarm manage -H tcp://<swarm_ip:swarm_port> token://<cluster_id>
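Concretely, the token flow looks like this sketch. The cluster ID, node address, and manager port below are hypothetical, and the `join`/`manage` commands need live Docker hosts, so they are shown as comments:

```shell
# Hypothetical cluster ID, as returned by `swarm create`
CLUSTER_ID=6856663cdefdec325839a4b7e1de38e8
DISCOVERY_URL="token://$CLUSTER_ID"

# On each node (shown as a comment; needs a running Docker daemon):
#   swarm join --advertise=192.168.0.11:2375 "$DISCOVERY_URL"

# On the manager machine:
#   swarm manage -H tcp://0.0.0.0:4000 "$DISCOVERY_URL"

echo "$DISCOVERY_URL"
```

The same `token://` URL is passed to every agent and to the manager; it is the only piece of shared state the hosted backend needs.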
You can then use regular Docker commands to interact with your swarm.
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...
You can also list the nodes in your cluster.
$ swarm list token://<cluster_id>
<node_ip:2375>
To use a static file for discovery, add a line for each of your nodes to a file. The node IP address doesn't need to be public as long as the Swarm manager can access it.
$ echo <node_ip1:2375> >> /tmp/my_cluster
$ echo <node_ip2:2375> >> /tmp/my_cluster
$ echo <node_ip3:2375> >> /tmp/my_cluster
Then start the Swarm manager on any machine.
$ swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
And then use the regular Docker commands.
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...
You can list the nodes in your cluster.
$ swarm list file:///tmp/my_cluster
<node_ip1:2375>
<node_ip2:2375>
<node_ip3:2375>
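As a self-contained sketch, the cluster file can be built up like this. The node addresses are hypothetical, and the `manage` command needs a live setup, so it is shown as a comment:

```shell
# Build a cluster file, one hypothetical node address per line
CLUSTER_FILE=$(mktemp)
echo "192.168.0.11:2375" >> "$CLUSTER_FILE"
echo "192.168.0.12:2375" >> "$CLUSTER_FILE"
echo "192.168.0.13:2375" >> "$CLUSTER_FILE"

# The manager would then read it as (shown as a comment):
#   swarm manage -H tcp://0.0.0.0:4000 file://$CLUSTER_FILE

cat "$CLUSTER_FILE"
```

Because the file is re-read by the manager, adding a node later is just a matter of appending another line.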
To use etcd for discovery, start the Swarm agent on each of your nodes. The node IP address doesn't have to be public as long as the Swarm manager can access it.
$ swarm join --advertise=<node_ip:2375> etcd://<etcd_ip>/<path>
Start the manager on any machine or your laptop.
$ swarm manage -H tcp://<swarm_ip:swarm_port> etcd://<etcd_ip>/<path>
And then use the regular Docker commands.
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...
You can list the nodes in your cluster.
$ swarm list etcd://<etcd_ip>/<path>
<node_ip:2375>
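For illustration, the etcd discovery URL might be assembled like this. The etcd address and key path are hypothetical; 2379 is etcd's default client port:

```shell
# Hypothetical etcd endpoint and the key path under which Swarm stores nodes
ETCD_ADDR=172.17.0.2:2379
ETCD_PATH=swarm/nodes

# On each node (shown as a comment; needs a running Docker daemon):
#   swarm join --advertise=192.168.0.11:2375 etcd://172.17.0.2:2379/swarm/nodes

echo "etcd://$ETCD_ADDR/$ETCD_PATH"
```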
To use consul for discovery, start the Swarm agent on each of your nodes. The node IP address doesn't need to be public as long as the Swarm manager can access it.
$ swarm join --advertise=<node_ip:2375> consul://<consul_addr>/<path>
Start the manager on any machine or your laptop.
$ swarm manage -H tcp://<swarm_ip:swarm_port> consul://<consul_addr>/<path>
And then use the regular Docker commands.
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...
You can list the nodes in your cluster.
$ swarm list consul://<consul_addr>/<path>
<node_ip:2375>
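A hypothetical consul URL, for illustration only; 8500 is Consul's default HTTP API port, and the address and key prefix below are made up:

```shell
# Hypothetical Consul agent address and KV key prefix for the swarm
CONSUL_ADDR=172.17.0.3:8500
CONSUL_PATH=swarm

# On each node (shown as a comment; needs a running Docker daemon):
#   swarm join --advertise=192.168.0.11:2375 consul://172.17.0.3:8500/swarm

echo "consul://$CONSUL_ADDR/$CONSUL_PATH"
```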
To use zookeeper for discovery, start the Swarm agent on each of your nodes. The node IP address doesn't have to be public as long as the Swarm manager can access it.
$ swarm join --advertise=<node_ip:2375> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
Start the manager on any machine or your laptop.
$ swarm manage -H tcp://<swarm_ip:swarm_port> zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
You can then use the regular Docker commands.
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...
You can list the nodes in the cluster.
$ swarm list zk://<zookeeper_addr1>,<zookeeper_addr2>/<path>
<node_ip:2375>
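A ZooKeeper ensemble is given as comma-separated host:port pairs, as in this sketch. The addresses are hypothetical; 2181 is ZooKeeper's default client port:

```shell
# Hypothetical two-server ZooKeeper ensemble and znode path for the swarm
ZK_HOSTS="10.0.0.5:2181,10.0.0.6:2181"
ZK_PATH=swarm

# Agents and the manager use the same URL (shown as comments; need live hosts):
#   swarm join --advertise=192.168.0.11:2375 zk://10.0.0.5:2181,10.0.0.6:2181/swarm
#   swarm manage -H tcp://0.0.0.0:4000 zk://10.0.0.5:2181,10.0.0.6:2181/swarm

echo "zk://$ZK_HOSTS/$ZK_PATH"
```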
To use a static list of node addresses, start the manager on any machine or your laptop.
$ swarm manage -H <swarm_ip:swarm_port> nodes://<node_ip1:2375>,<node_ip2:2375>
Or
$ swarm manage -H <swarm_ip:swarm_port> <node_ip1:2375>,<node_ip2:2375>
Then use the regular Docker commands.
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ...
$ docker -H <swarm_ip:swarm_port> ps
$ docker -H <swarm_ip:swarm_port> logs ...
...
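For illustration, the comma-separated list can be built from a set of hypothetical node addresses like this; the `manage` command needs live hosts, so it is shown as a comment:

```shell
# Hypothetical node addresses, joined into the comma-separated list
# that the nodes:// discovery expects
NODES="192.168.0.11:2375 192.168.0.12:2375"
NODES_URL="nodes://$(echo $NODES | tr ' ' ',')"

# The manager would then be started as (shown as a comment):
#   swarm manage -H 0.0.0.0:4000 "$NODES_URL"

echo "$NODES_URL"
```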
The `file` and `nodes` discoveries support a range pattern for specifying IP addresses: for example, `10.0.0.[10:200]` expands to the list of nodes from `10.0.0.10` through `10.0.0.200`.

For example, with the `file` discovery method:
$ echo "10.0.0.[11:100]:2375" >> /tmp/my_cluster
$ echo "10.0.1.[15:20]:2375" >> /tmp/my_cluster
$ echo "192.168.1.2:[2:20]375" >> /tmp/my_cluster
Then start the manager.
$ swarm manage -H tcp://<swarm_ip:swarm_port> file:///tmp/my_cluster
And for the `nodes` discovery method:
$ swarm manage -H <swarm_ip:swarm_port> "nodes://10.0.0.[10:200]:2375,10.0.1.[2:250]:2375"
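To see what a small range pattern covers, the expansion of `10.0.0.[10:12]:2375` can be mimicked by hand with `seq`; Swarm performs this expansion internally, and the addresses are hypothetical:

```shell
# Expand 10.0.0.[10:12]:2375 by hand; Swarm would contact each of these
for i in $(seq 10 12); do
  echo "10.0.0.$i:2375"
done
```

Note that the range is inclusive on both ends, so `[10:12]` yields three addresses.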
Contributing a new discovery backend is easy: simply implement the following interface.
type Discovery interface {
    Initialize(string, int) error
    Fetch() ([]string, error)
    Watch(WatchCallback)
    Register(string) error
}
- `Initialize`: the parameters are the discovery location without the scheme, and a heartbeat interval in seconds.
- `Fetch`: returns the list of all the nodes from the discovery.
- `Watch`: triggers an update (`Fetch`). This can happen either via a timer (as with `token`) or through backend-specific features (as with `etcd`).
- `Register`: adds a new node to the discovery service.