Orchestration, Clustering and Management (Advanced Docker)
Content belonging to the Advanced Docker course.
Orchestration
Nowadays applications are complex, and as a general rule deploying a single container in production is not enough.
You will usually need several containers, which on top of that must scale in different ways, among other complexities.
For example, one container for the front end, one or several for the service layer, another for the database… not to mention if you use microservices…
All of this gives rise to the need for container orchestration: having a tool or system that automates the deployment, management, scaling, interconnection and availability of our container-based applications.
A container orchestrator takes care of concerns such as:
- Automatic configuration
- Automatic deployment and start-up of container-based services
- Load balancing
- Auto-scaling and automatic restart of containers
- Health checking of each container
- Data exchange and networking
- Management of secrets and configuration settings
Kubernetes
Kubernetes is in fact the most popular container orchestration engine on the market.
It started as a Google project, and thousands of development teams use it to deploy containers in production.
Google claims that it runs billions of containers on Kubernetes every week.
The tool works by grouping the containers that make up an application into logical units for easy management and discovery.
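As a minimal, hedged illustration of that idea (not part of this course's labs; it assumes you already have a cluster and kubectl configured, and the deployment name web is just an example):

# Create a Deployment (a logical unit of identical Pods) with 3 replicas
kubectl create deployment web --image=nginx --replicas=3
# Expose it behind a single Service for discovery and load balancing
kubectl expose deployment web --port=80
# Scale it, much like we will later do with docker service scale
kubectl scale deployment web --replicas=5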
Docker Swarm
Swarm is the solution proposed by Docker.
It has shipped with the Docker Engine since version 1.12.0, and offers many advanced built-in features such as service discovery, load balancing, scaling and security.
Swarm follows Docker's philosophy of focusing on simplicity and developer experience.
You could say it is easier to get started with than Kubernetes, but not as powerful and not as widely adopted by companies, cloud providers or the community.
Google Kubernetes Engine (GKE)
Built on top of Kubernetes, it lets you deploy, manage and scale containerized applications on Google's cloud. The goal of GKE is to optimize the productivity of the development department by improving the management of container-based workloads.
It hides both simple and more complex management tasks behind command-line tools and transparent, easy-to-use interfaces.
Azure Kubernetes Service (AKS)
Relatively recently the service was rebranded from ACS (Azure Container Service) to AKS (Azure Kubernetes Service).
The Azure service is open source and optimized for use on Azure virtual machines (Azure Virtual Machines).
It provides the tools needed to create, configure and manage an infrastructure of open Docker containers.
AKS offers simplified development and deployment of container-based applications; whereas its predecessor ACS supported Kubernetes, Mesosphere DC/OS or Swarm for orchestration, AKS focuses on Kubernetes.
Amazon ECS
The AWS service for container orchestration, Amazon ECS, is a highly scalable management system that lets developers run containerized applications on EC2 instances.
It is made up of many integrated components that allow easy scheduling and deployment of Docker clusters, tasks and services.
Although its main drawback is the lack of support for running containers outside of EC2, the points in its favor include advantages of the AWS platform itself, such as CloudTrail, CloudWatch, Elastic Load Balancing, etc.
Practice
Kubernetes has very good interactive documentation to introduce you to its environment: https://kubernetes.io/docs/tutorials/kubernetes-basics/
Guided lab: Orchestration with Docker Swarm
The goal of an orchestrator is the ability to manage more than one Docker node (more than one Docker Engine installation) on which we will deploy containers.
To carry out this lab we need to set up an environment with 4 nodes (Docker Engine instances). For this purpose we will create a compose.yml file with the following content:
name: docker_swarm
services:
  node:
    image: docker:dind
    privileged: true
    deploy:
      replicas: 4
We are emulating the existence of 4 machines on which we want to deploy several containers.
privileged: true should not be used lightly, since it creates a bridge between the container and the host operating system: we can administer the host machine from inside the container.
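As a quick, hedged demonstration of the difference (alpine is used here only as an example image):

# A privileged container sees the host's devices in /dev...
docker run --rm --privileged alpine ls /dev
# ...while an unprivileged one sees only a minimal, virtualized /dev:
docker run --rm alpine ls /dev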
With the command docker compose up -d we will obtain 4 running containers:
- docker_swarm-node-1
- docker_swarm-node-2
- docker_swarm-node-3
- docker_swarm-node-4
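As a quick sanity check before continuing (the names assume the docker_swarm project name from the compose.yml above):

docker compose ps
# or, equivalently, filter by container name:
docker ps --filter "name=docker_swarm-node"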
To get inside each of the nodes we will have to execute the following command:
docker exec -it docker_swarm-node-1 sh
I recommend opening a terminal for each node, in the natural order: terminal 1 for node 1, and so on.
NOTE: Things like IP addresses and Swarm join tokens will be different in your lab. Remember to substitute the values shown in this lab guide with the real values from your own lab.
Step 1: Create a new Swarm
In this step you’ll initialize a new Swarm and verify that the operation worked.
For this lab to work you will need your Docker hosts running in single-engine mode and not in Swarm mode.
1. Execute the following command on node1.
# docker swarm init
Output:
Swarm initialized: current node (fr9c5c6vjiv8ztsfhzri33vsa) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-6ifmq51xqq36aqwxkgli6eqri 172.22.0.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
In real-world deployments the number of managers should be an odd number greater than one, so that the managers can reach a majority (quorum) when making decisions. For example, with 3 managers the Swarm tolerates the loss of 1; with 5, the loss of 2.
The command above has created a brand new Swarm and made node1 the first manager of the Swarm.
The first manager of any Swarm is automatically made the leader and the Certificate Authority (CA) for the Swarm. If you already have a CA and do not want Swarm to generate a new one, you can use the --external-ca flag to specify an external CA.
2. Verify that the Swarm was created successfully and that node1 is the leader of the new Swarm with the following command.
docker node ls

ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
fr9c5c6vjiv8ztsfhzri33vsa *   03785fcb4c28   Ready    Active         Leader           24.0.6
The command above will list all nodes in the Swarm. Notice that the output only lists one node and that the node is also the leader.
3. Run a docker info command and view the Swarm-related information.
Client:
Version: 24.0.6
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.11.2
Path: /usr/local/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.23.0
Path: /usr/local/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 24.0.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: fr9c5c6vjiv8ztsfhzri33vsa
Is Manager: true
ClusterID: u2kz2k1cucmlo201ix1aj91nx
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 172.22.0.3
Manager Addresses:
172.22.0.3:2377
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7880925980b188f4c97b462f709d0db8e8962aff
runc version: v1.1.9-0-gccaecfc
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.1.0-0.deb11.11-amd64
Operating System: Alpine Linux v3.18 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.45GiB
Name: 03785fcb4c28
ID: 130823bf-cd9d-48d9-8cce-d2af6345a29e
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
The important things to note from the output above are: NodeID, ClusterID, and CA Configuration.
It is important to know that the docker swarm init command performs at least two important security related operations:
- It creates a new CA (unless you specify --external-ca) and creates a key pair to secure communications within the Swarm
- It creates two join tokens - one to join new workers to the Swarm, and the other to join new managers to the Swarm.
We will look at these in the following steps.
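As a preview, each token can also be printed on its own, which is handy for scripting (the -q flag suppresses the full join command):

docker swarm join-token -q worker
docker swarm join-token -q manager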
Step 2: Add a new Manager
Now that you have a Swarm initialized, it's time to add another manager. In order to add a new manager you must know the manager join token for the Swarm you wish to join it to. The process below will show you how to obtain the manager join token and use it to add node2 as a new manager in the Swarm.
1. Use the docker swarm join-token manager command on node1 to get the manager join token.
# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-bbvoh153bm8uv9w1ya4jbw7o1 172.22.0.3:2377
The output of the command gives you the full command, including the join token, that you can run on any Docker node to join it as a manager.
2. Copy and paste the command into node2. Remember to use the command and join token from your own lab, not the values shown in this lab guide.
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-bbvoh153bm8uv9w1ya4jbw7o1 172.22.0.3:2377
This node joined a swarm as a manager.
3. Run the docker node ls command from either node1 or node2 to list the nodes in the Swarm.
# docker node ls
ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
fr9c5c6vjiv8ztsfhzri33vsa *   03785fcb4c28   Ready    Active         Leader           24.0.6
xn2teh4tagyk9d9o54klmvwa9     dab481e02e63   Ready    Active         Reachable        24.0.6
The join token used in the commands above will join any node to your Swarm as a manager. This means it is vital that you keep the join tokens private - anyone in possession of them can join nodes to the Swarm as managers.
Step 3: Add a new Worker
Adding a worker is the same process as adding a manager. The only difference is the token used.
Every Swarm maintains one manager join token and one worker join token.
1. Run a docker swarm join-token worker command from any of the managers in your Swarm to obtain the command and token required to add a new worker node.
# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-6ifmq51xqq36aqwxkgli6eqri 172.22.0.3:2377
Notice that the join tokens for managers and workers share some of the same values. Both start with SWMTKN-1, and both share the same Swarm root CA digest. It is only the last part of the token that determines whether it is for a manager or a worker.
2. Switch to node3 and paste in the command from the previous step.
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-6ifmq51xqq36aqwxkgli6eqri 172.22.0.3:2377
This node joined a swarm as a worker.
3. Switch back to one of the manager nodes (node1 or node2) and run a docker node ls command to verify the node was added as a worker.
# docker node ls
ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
gy9kbjjx9xmmyie1he3syqtmy     04ef2555f004   Ready    Active                          24.0.6
fr9c5c6vjiv8ztsfhzri33vsa *   03785fcb4c28   Ready    Active         Leader           24.0.6
xn2teh4tagyk9d9o54klmvwa9     dab481e02e63   Ready    Active         Reachable        24.0.6
The output above shows that node3 was added to the Swarm and is operating as a worker - the lack of a value in the MANAGER STATUS column indicates that the node is a worker.
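If you ever find the MANAGER STATUS column ambiguous at a glance, the role can also be queried directly; a small sketch (substitute the node ID from your own lab):

docker node inspect -f '{{ .Spec.Role }}' gy9kbjjx9xmmyie1he3syqtmy
worker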
Step 4: Rotate Join Tokens
In this step you will rotate the Swarm's worker join token. This will invalidate the worker join token used in previous steps. It will not affect the status of workers already joined to the Swarm; all existing workers will continue to be valid members of the Swarm.
You will test that the rotate operation succeeded by attempting to add a new worker with the old key. This operation will fail. You will then retry the operation with the new key. This time it will succeed.
1. Rotate the existing worker token by executing the following command from either of the Swarm managers.
docker swarm join-token --rotate worker
Successfully rotated worker join token.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-e1j9f03l6r6uyqif08tr3kb17 172.22.0.3:2377
Notice that the new join token still starts with SWMTKN-1 and keeps the same digest of the Swarm's root CA (16ghmifwqd4…). It is only the last part of the token that has changed, because the new token is still a join token for the same Swarm; the system has only rotated the secret used to add new workers (the last portion).
2. Log on to node4 and attempt to join the Swarm using the old join token. You should be able to find the old join token in the terminal window of node3 from a previous step.
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-6ifmq51xqq36aqwxkgli6eqri 172.22.0.3:2377
Error response from daemon: rpc error: code = InvalidArgument desc = A valid join token is necessary to join this cluster
The operation fails because the join token is no longer valid.
3. Retry the previous operation using the new join token given as the output to the docker swarm join-token --rotate worker command in a previous step.
docker swarm join --token SWMTKN-1-16ghmifwqd4uhodn5eu0puah4nsvel9oiu53ftn9tmfrfki9l0-e1j9f03l6r6uyqif08tr3kb17 172.22.0.3:2377
This node joined a swarm as a worker.
Rotating join tokens is something you will need to do if you suspect your existing tokens have been compromised. It is important to manage your join tokens carefully, because unauthorized nodes joining the Swarm are a security risk.
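The same operation exists for the manager token; if you suspect it has been compromised, rotate it as well:

docker swarm join-token --rotate manager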
Step 5: View certificates
Each time a new manager or worker joins the Swarm it is issued with a client certificate. This client certificate is used in conjunction with the existing Swarm public key infrastructure (PKI) to authenticate the node and encrypt communications.
There are three important things to note about the client certificate:
- It specifies which Swarm the node is an authorized member of
- It contains the node ID
- It specifies the role the node is authorized to perform in the Swarm (worker or manager)
Execute the following command from any node in your Swarm to view the node's client certificate.
openssl x509 -in /var/lib/docker/swarm/certificates/swarm-node.crt -text
Output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
63:cb:46:b8:70:28:55:4b:09:8e:bf:26:72:95:06:4a:ec:1d:50:a5
Signature Algorithm: ecdsa-with-SHA256
Issuer: CN = swarm-ca
Validity
Not Before: Oct 19 14:14:00 2023 GMT
Not After : Jan 17 15:14:00 2024 GMT
Subject: O = u2kz2k1cucmlo201ix1aj91nx, OU = swarm-manager, CN = fr9c5c6vjiv8ztsfhzri33vsa
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:d0:7c:25:e5:0f:8e:82:82:af:1b:11:39:2e:3c:
26:a2:02:46:78:ef:2f:f6:e4:df:cd:ed:a4:25:de:
fa:d9:78:65:df:08:1b:c3:9e:7e:6d:d9:8f:99:3e:
15:35:ad:50:32:85:fa:50:fb:52:3b:cc:82:a8:ca:
10:d6:06:8d:f8
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier:
39:5E:86:E2:2B:E1:62:F8:06:EB:87:CE:1E:34:2C:FA:7A:D7:AF:96
X509v3 Authority Key Identifier:
02:2E:91:C5:74:4B:05:7B:C3:6A:68:91:3A:3F:4E:A0:43:CF:E2:8D
X509v3 Subject Alternative Name:
DNS:swarm-manager, DNS:fr9c5c6vjiv8ztsfhzri33vsa, DNS:swarm-ca
Signature Algorithm: ecdsa-with-SHA256
Signature Value:
30:45:02:21:00:a5:6a:aa:d3:b7:42:02:45:43:7d:d8:b5:7d:
8b:97:f9:d6:51:46:be:94:8d:c8:d7:48:9e:b8:af:85:b7:5d:
37:02:20:06:63:6a:22:92:b7:58:29:8e:07:96:db:2b:c0:ab:
cc:ca:e6:4f:25:0d:02:a2:4a:36:97:b4:cb:a0:17:40:68
-----BEGIN CERTIFICATE-----
MIICNTCCAdugAwIBAgIUY8tGuHAoVUsJjr8mcpUGSuwdUKUwCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjMxMDE5MTQxNDAwWhcNMjQwMTE3MTUx
NDAwWjBgMSIwIAYDVQQKExl1Mmt6MmsxY3VjbWxvMjAxaXgxYWo5MW54MRYwFAYD
VQQLEw1zd2FybS1tYW5hZ2VyMSIwIAYDVQQDExlmcjljNWM2dmppdjh6dHNmaHpy
aTMzdnNhMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE0Hwl5Q+OgoKvGxE5Ljwm
ogJGeO8v9uTfze2kJd762Xhl3wgbw55+bdmPmT4VNa1QMoX6UPtSO8yCqMoQ1gaN
+KOBvzCBvDAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsG
AQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFDlehuIr4WL4BuuHzh40LPp6
16+WMB8GA1UdIwQYMBaAFAIukcV0SwV7w2pokTo/TqBDz+KNMD0GA1UdEQQ2MDSC
DXN3YXJtLW1hbmFnZXKCGWZyOWM1YzZ2aml2OHp0c2ZoenJpMzN2c2GCCHN3YXJt
LWNhMAoGCCqGSM49BAMCA0gAMEUCIQClaqrTt0ICRUN92LV9i5f51lFGvpSNyNdI
nrivhbddNwIgBmNqIpK3WCmOB5bbK8CrzMrmTyUNAqJKNpe0y6AXQGg=
-----END CERTIFICATE-----
The important things to note in the output above are the three fields of the certificate's Subject:
- The Organization (O) field contains the Swarm ID
- The Organizational Unit (OU) field contains the node's role
- The Common Name (CN) field contains the node's ID
These three fields ensure that the node operates in the correct Swarm, operates in the correct role, and is the node it claims to be.
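If you only want those three fields, openssl can print just the Subject line instead of the full certificate:

openssl x509 -in /var/lib/docker/swarm/certificates/swarm-node.crt -noout -subject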
You can use the docker swarm update --cert-expiry <TIME PERIOD> command to change the frequency at which the client certificates in the Swarm are renewed. The default is 90 days (3 months).
Step 6: Rotate certificates
In this step you’ll view the existing certificate rotation period for your Swarm, and then alter that period.
Perform the following commands from a manager node in your Swarm.
1. Use the docker info command to view the existing certificate rotation period enforced in your Swarm.
# docker info
(...)
CA Configuration:
 Expiry Duration: 3 months
 Force Rotate: 0
Autolock Managers: false
The output above shows that the current rotation period (Expiry Duration) is 3 months.
2. Use the docker swarm update command to change the rotation period.
# docker swarm update --cert-expiry 168h
Swarm updated.
The --cert-expiry flag accepts time periods in the format 00h00m00s, where h stands for hours, m for minutes, and s for seconds. The example above sets the rotation period to 168 hours (7 days).
3. Run another docker info to check that the value has changed.
# docker info
(...)
CA Configuration:
 Expiry Duration: 7 days
 Force Rotate: 0
Autolock Managers: false
Step 7. Switching nodes between manager and worker
1. We can see the nodes joined to the cluster with docker node ls; their role appears in the MANAGER STATUS column.
# docker node ls
ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
gy9kbjjx9xmmyie1he3syqtmy     04ef2555f004   Ready    Active                          24.0.6
jijl07ycpf4tf87m5hjiio508     409a7aa741c4   Ready    Active                          24.0.6
fr9c5c6vjiv8ztsfhzri33vsa *   03785fcb4c28   Ready    Active         Leader           24.0.6
xn2teh4tagyk9d9o54klmvwa9     dab481e02e63   Ready    Active         Reachable        24.0.6
2. Using the node ID or name we can promote a worker node to manager with the following command:
# docker node promote gy9kbjjx9xmmyie1he3syqtmy
Node gy9kbjjx9xmmyie1he3syqtmy promoted to a manager in the swarm.
We check:
# docker node ls
ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
gy9kbjjx9xmmyie1he3syqtmy     04ef2555f004   Ready    Active         Reachable        24.0.6
jijl07ycpf4tf87m5hjiio508     409a7aa741c4   Ready    Active                          24.0.6
fr9c5c6vjiv8ztsfhzri33vsa *   03785fcb4c28   Ready    Active         Leader           24.0.6
xn2teh4tagyk9d9o54klmvwa9     dab481e02e63   Ready    Active         Reachable        24.0.6
3. And we can do the reverse process, demoting a manager node back to worker with the following command:
# docker node demote gy9kbjjx9xmmyie1he3syqtmy
Manager gy9kbjjx9xmmyie1he3syqtmy demoted in the swarm.
We check:
# docker node ls
ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
gy9kbjjx9xmmyie1he3syqtmy     04ef2555f004   Ready    Active                          24.0.6
jijl07ycpf4tf87m5hjiio508     409a7aa741c4   Ready    Active                          24.0.6
fr9c5c6vjiv8ztsfhzri33vsa *   03785fcb4c28   Ready    Active         Leader           24.0.6
xn2teh4tagyk9d9o54klmvwa9     dab481e02e63   Ready    Active         Reachable        24.0.6
Step 8. Managing a service
1. Let's create our first service. To do this we must execute the following command on a manager:
# docker service create --replicas 1 --name helloworld alpine ping docker.com
Example output:
zipfmczzckup7o8swo59qtc1k
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
If we were using Compose, the command would change to docker stack deploy.
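As a hedged sketch of what that would look like (the file and stack names here are illustrative, not part of this lab), the service above expressed as a stack would be:

# stack.yml
version: "3.8"
services:
  helloworld:
    image: alpine
    command: ping docker.com
    deploy:
      replicas: 1

# Deployed from a manager node with:
docker stack deploy -c stack.yml hello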
2. To list the deployed services we can use the following command:
# docker service ls
ID             NAME         MODE         REPLICAS   IMAGE           PORTS
zipfmczzckup   helloworld   replicated   1/1        alpine:latest
3. To know on which nodes each container of a service is deployed we use this other command:
# docker service ps helloworld
ID             NAME           IMAGE           NODE           DESIRED STATE   CURRENT STATE                ERROR   PORTS
ljxyqlcmacky   helloworld.1   alpine:latest   03785fcb4c28   Running         Running about a minute ago
4. To see more details about the service we can use the inspect command:
# docker service inspect helloworld
[
{
"ID": "zipfmczzckup7o8swo59qtc1k",
"Version": {
"Index": 42
},
"CreatedAt": "2023-10-19T15:46:19.453011269Z",
"UpdatedAt": "2023-10-19T15:46:19.453011269Z",
"Spec": {
"Name": "helloworld",
"Labels": {},
"TaskTemplate": {
"ContainerSpec": {
"Image": "alpine:latest@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978",
"Args": [
"ping",
"docker.com"
],
"Init": false,
"StopGracePeriod": 10000000000,
"DNSConfig": {},
"Isolation": "default"
},
(...)
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip"
}
},
"Endpoint": {
"Spec": {}
}
}
]
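For a human-readable summary instead of the full JSON, the --pretty flag can be used:

docker service inspect --pretty helloworld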
5. Let's scale the service to see how the cluster behaves.
# docker service scale helloworld=5
helloworld scaled to 5
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged
6. We check the effect with the service ls and service ps commands:
# docker service ls
ID             NAME         MODE         REPLICAS   IMAGE           PORTS
zipfmczzckup   helloworld   replicated   5/5        alpine:latest

# docker service ps helloworld
ID             NAME           IMAGE           NODE           DESIRED STATE   CURRENT STATE                ERROR   PORTS
ljxyqlcmacky   helloworld.1   alpine:latest   03785fcb4c28   Running         Running 4 minutes ago
of737jfn55kr   helloworld.2   alpine:latest   dab481e02e63   Running         Running about a minute ago
21c2n6e8n3k9   helloworld.3   alpine:latest   04ef2555f004   Running         Running about a minute ago
yzeg8hudzv92   helloworld.4   alpine:latest   409a7aa741c4   Running         Running about a minute ago
o3x9hmpnq6hl   helloworld.5   alpine:latest   409a7aa741c4   Running         Running about a minute ago
7. Try to rescale the service again by deleting and adding new replicas, and verify the reaction of the nodes with service ls and service ps.
docker service scale helloworld=3
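Note that docker service scale is shorthand; the same result can be achieved with the more general docker service update command:

docker service update --replicas 3 helloworld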
8. Finally, we are going to delete the service:
docker service rm helloworld
Extras:
To find out whether a node is part of a cluster:
docker info --format '{{.Swarm.LocalNodeState}}'
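A related check, to know whether the local node is a manager, reads another field of the same structure (it prints true on managers):

docker info --format '{{.Swarm.ControlAvailable}}'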
To leave the cluster:
docker swarm leave
To remove a node from a cluster:
docker node rm <NODE_ID>
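Note that docker node rm only succeeds for nodes that are down or have already left the Swarm, and a manager must be demoted before it can be removed; otherwise the --force flag is required:

docker node rm --force <NODE_ID>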
