Backup MySQL Docker Container

Here is how you can run mysqldump against a container created from the mariadb image:

docker run -it --link db_1:mysql --rm mariadb sh -c 'exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" wordpress' > /backup/wordpress-$(date +\%F).sql

This command does the following:

  1. creates a new container from the mariadb image
  2. configures a link to your db container (db_1)
  3. runs the mysqldump command inside the new container
  4. saves the output of the mysqldump command to a file
  5. removes the new container (because of --rm)
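
Restoring works the same way in reverse; a minimal sketch, assuming the same db_1 link and a dump file named wordpress-2016-01-01.sql (a hypothetical name, adjust it to your backup):

docker run -i --link db_1:mysql --rm mariadb sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" wordpress' < /backup/wordpress-2016-01-01.sql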

Install docker swarm with consul, consul-template, registrator and haproxy

Tested On

OS: Ubuntu 14.04
Docker version: 1.10

About

Docker is a great platform for building, shipping and running applications. Docker swarm is native clustering for docker.

Swarm needs a discovery service for managing docker nodes, and I chose consul because it is a simple discovery service and it also comes with consul-template, which can be used to build dynamic configuration files for haproxy or other web servers. Other good options that docker supports are etcd and zookeeper.

Consul can also be used as a key-value store and a monitoring system, but here I am going to use it to manage docker nodes and, together with registrator, my app services.

Network Architecture:

(diagram: Docker swarm network architecture)

Swarm discovery:

  1. (a) The swarm manager registers itself in the consul server that runs on the same host
  2. (b) Each swarm agent registers itself with its local consul client
  3. (c) The consul client forwards the registration to the consul server, which adds the swarm agent to the cluster
  • In production you should run at least 3 consul servers and 3 swarm managers for high availability

App discovery:

  1. registrator listens for new containers that start inside docker
  2. registrator registers the published ports of the new container in the consul client
  3. the consul client forwards the published ports to the consul server
  4. consul-template runs as a daemon, generates a new haproxy configuration file based on a template that includes all added/removed app containers, and reloads haproxy (a sample of the generated backend section is shown below)
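
For example, with two app containers registered in consul, the backend section that consul-template generates from the template used later in this guide would look roughly like this (the node names and published ports here are hypothetical):

backend app
 balance roundrobin
 server docker-1-32768 192.168.11.11:32768 check fall 3 rise 5 inter 2000 weight 2
 server docker-2-32769 192.168.11.12:32769 check fall 3 rise 5 inter 2000 weight 2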

Installation

I am going to use 3 servers:

  1. mgr
  2. docker-1
  3. docker-2
  • install docker (all servers)
apt-get update
apt-get upgrade
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get purge lxc-docker
apt-get install linux-image-extra-$(uname -r) -y
apt-get install docker-engine -y
echo "DOCKER_OPTS=\"--cluster-advertise=192.168.11.10:2375 --cluster-store=consul://swarm-mgr:8500 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock \"" >> /etc/default/docker service docker restart
  • start consul server (mgr server)
export PRIVATE_IP=192.168.11.10
docker run -d --name consul-srv-1 --restart=always -h consul-srv-1 -v /var/lib/consul:/data -p ${PRIVATE_IP}:8300:8300 -p ${PRIVATE_IP}:8301:8301 -p ${PRIVATE_IP}:8301:8301/udp -p ${PRIVATE_IP}:8302:8302 -p ${PRIVATE_IP}:8302:8302/udp -p ${PRIVATE_IP}:8400:8400 -p ${PRIVATE_IP}:8500:8500 -p ${PRIVATE_IP}:53:53/udp progrium/consul -server -advertise ${PRIVATE_IP} -bootstrap-expect 1
  • start consul client (docker-1 and docker-2; set PRIVATE_IP to each server's own address)
export PRIVATE_IP=192.168.11.11
docker run -d --name consul-client --restart=always -h consul-client -p ${PRIVATE_IP}:8300:8300 -p ${PRIVATE_IP}:8301:8301 -p ${PRIVATE_IP}:8301:8301/udp -p ${PRIVATE_IP}:8302:8302 -p ${PRIVATE_IP}:8302:8302/udp -p ${PRIVATE_IP}:8400:8400 -p ${PRIVATE_IP}:8500:8500 -p ${PRIVATE_IP}:53:53/udp progrium/consul -advertise ${PRIVATE_IP} -join 192.168.11.10
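  • optionally check that the agents joined the cluster through the consul HTTP API on the mgr server; it should list the consul server and client nodes
curl http://192.168.11.10:8500/v1/catalog/nodes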
  • start swarm manager (mgr server)
docker run -d --name swarm-mgr -p 3375:2375 --restart=always swarm manage -H tcp://0.0.0.0:2375 consul://192.168.11.10:8500/
  • start swarm agent (docker-1 and docker-2; --advertise should use each server's own address)
docker run -d --name swarm-agent --restart=always swarm join --advertise=192.168.11.11:2375 consul://192.168.11.10:8500/
  • run registrator (all servers)
docker run -d --name=registrator --restart=always --net=host --volume=/var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.11.10:8500
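  • optionally verify the swarm cluster by pointing a docker client at the manager port published above; the output should list both agents
docker -H tcp://192.168.11.10:3375 info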

Haproxy

For simplicity I will install haproxy and consul-template on the mgr server as regular daemons. You can also install them on a separate server or inside docker.

  • install haproxy
apt-get install haproxy
  • download and install consul-template
cd /usr/local/src
wget https://releases.hashicorp.com/consul-template/0.13.0/consul-template_0.13.0_linux_amd64.zip
unzip consul-template_0.13.0_linux_amd64.zip
mv consul-template /usr/local/bin/
  • create a template for consul-template
vi /etc/haproxy/haproxy.ctmpl
global
 log /dev/log local0
 chroot /var/lib/haproxy
 user haproxy
 group haproxy
 daemon

defaults
 log global
 mode http
 option httplog
 
frontend app
 bind *:80
 default_backend app

backend app
 balance roundrobin
 {{range service "app"}}
 server {{.Node}}-{{.Port}} {{.Address}}:{{.Port}} check fall 3 rise 5 inter 2000 weight 2 {{end}}
  • run consul-template
consul-template -consul 192.168.11.10:8500 -template "/etc/haproxy/haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"
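
To test the whole flow you can start a container through the swarm manager; a hedged example that uses nginx as a stand-in app and registrator's SERVICE_NAME variable so the container is registered under the "app" service expected by the template:

docker -H tcp://192.168.11.10:3375 run -d -p 80 -e "SERVICE_NAME=app" nginx

consul-template should then regenerate /etc/haproxy/haproxy.cfg with the new backend server and reload haproxy.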

Install pacemaker on ubuntu

Tested On

OS: Ubuntu 14.04
Pacemaker Version: 1.1.10
Corosync Version: 2.3.3

About

Pacemaker is a cluster resource manager for linux systems. It helps you create highly available services by automatically recovering services and failing them over between servers.

In this guide I will explain how I install pacemaker and corosync on ubuntu and configure an haproxy cluster on two servers.

Install and configure pacemaker and corosync

  • run the following steps on both servers
  • install packages using apt-get
apt-get install pacemaker corosync fence-agents
  • configure corosync (change ring0_addr to the right address):
vi /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
totem {
 version: 2
 secauth: off
 cluster_name: pacemaker1
 transport: udpu
}

nodelist { 
 node { 
 ring0_addr: haproxy-1
 nodeid: 101 
 } 
 node { 
 ring0_addr: haproxy-2
 nodeid: 102 
 } 
}

quorum { 
 provider: corosync_votequorum 
 two_node: 1 
 wait_for_all: 1 
 last_man_standing: 1 
 auto_tie_breaker: 0 
}

logging {
 fileline: off
 to_logfile: yes
 to_syslog: no
 logfile: /var/log/corosync/corosync.log
 debug: off
 timestamp: on
 logger_subsys {
 subsys: AMF
 debug: off
 }
}
  • configure corosync to start
vi /etc/default/corosync
# start corosync at boot [yes|no]
START=yes
  • start corosync

service corosync start
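
Once corosync is running on both servers you can start pacemaker and check that both nodes come online (this assumes the crm shell, which is also used below, is installed):

service pacemaker start
crm status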

  • download haproxy ocf resource
cd /usr/lib/ocf/resource.d/heartbeat
curl -O https://raw.githubusercontent.com/thisismitch/cluster-agents/master/haproxy
chmod +x haproxy

An ocf resource agent is a script that pacemaker uses to start, stop and monitor a resource (service).

  • install and configure haproxy
apt-get install software-properties-common
add-apt-repository ppa:vbernat/haproxy-1.6
apt-get update
apt-get install haproxy

vi /etc/haproxy/haproxy.cfg
global
 log /dev/log local0
 log /dev/log local1 notice
 user haproxy
 group haproxy
 daemon

defaults
 mode http
 option forwardfor
 option http-server-close

frontend test
 bind 192.168.10.10:80
 default_backend test

backend test
 balance roundrobin
 server server1 192.168.20.11:8080 weight 10 check fall 5
 server server2 192.168.20.12:8080 weight 10 check fall 5
  • configure the non-local bind kernel parameter so we can start haproxy on both servers even if a server doesn't own the vip
vi /etc/sysctl.conf
...
net.ipv4.ip_nonlocal_bind=1
  • reload sysctl.conf file
sysctl -p
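
Before handing haproxy over to pacemaker you can validate the configuration file with haproxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg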

Configure pacemaker resources

  • run the following steps on one server
  • configure vip resource
crm configure primitive test-ip ocf:heartbeat:IPaddr2 params ip=192.168.10.10 cidr_netmask=24 op monitor interval=30s

Here we configure a vip that will fail over to the other server in case of a problem with one server.

  • configure haproxy resource
crm configure primitive haproxy ocf:heartbeat:haproxy op monitor interval=15s

We configure pacemaker to start haproxy and monitor it every 15s, but we want haproxy to run on both servers, so we will create a clone resource.

  • clone haproxy resource
crm configure clone haproxy-clone haproxy

We create a clone resource named haproxy-clone by cloning our haproxy resource. This configuration tells pacemaker to start haproxy on both servers at the same time.
Now we need to make sure that the vip resource is running where haproxy is healthy/running.

  • create colocation resource
crm configure colocation test-ip-haproxy inf: test-ip haproxy-clone

This configuration tells pacemaker to run the test-ip resource where haproxy is running. If we have a problem with haproxy on one server and pacemaker can't restart haproxy automatically, pacemaker will make sure that test-ip runs on the server with the healthy haproxy by migrating the test-ip resource to the right server.
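
Once all resources are defined you can review the configuration and check where each resource is running with the standard crm commands:

crm configure show
crm status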

For more information, see the pacemaker documentation.

Create ELK with Docker

A quick guide to building an ELK stack with Docker

  • Install elasticsearch plugins (the command below installs head; bigdesk can be installed the same way)
docker run -v /docker/elasticsearch/plugins:/usr/share/elasticsearch/plugins elasticsearch plugin install mobz/elasticsearch-head
  • Start elasticsearch
docker run -d -p 9200:9200 -p 9300:9300 --name elk-es1 -v /docker/elasticsearch/config:/usr/share/elasticsearch/config -v /docker/elasticsearch/esdata:/usr/share/elasticsearch/data -v /docker/elasticsearch/plugins:/usr/share/elasticsearch/plugins -e ES_HEAP_SIZE=2g elasticsearch -Des.network.host=0.0.0.0
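  • Optionally verify that elasticsearch is up (port 9200 is published on the docker host)
curl http://localhost:9200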
  • Run the logstash container. This command will fail because you don't have any logstash configuration files yet
docker run --rm -it --name elk-logstash1 --link elk-es1 -v /docker/logstash:/etc/logstash logstash -f /etc/logstash
  • Create logstash configuration files that read data from stdin and a file and send it to elasticsearch
vi /docker/logstash/output_es.conf
output {
 elasticsearch {
 hosts => "elk-es1"
 }
}
vi /docker/logstash/input_stdin.conf
input { stdin { } }
vi /docker/logstash/input_file.conf
input {
 file {
 path => "/etc/logstash/test.txt"
 start_position => "beginning"
 }
}
  • Run the logstash container again
docker run --rm -it --name elk-logstash1 --link elk-es1 -v /docker/logstash:/etc/logstash logstash -f /etc/logstash
  • Logstash is configured to read messages from stdin and send them to elasticsearch, so type a few test messages and then stop the container with Ctrl-C (the --rm flag removes it automatically)
  • Run kibana container
docker run --name elk-kibana1 --link elk-es1 -p 5601:5601 -d -e ELASTICSEARCH_URL=http://elk-es1:9200 kibana

  • Browse to the kibana server and look for the new messages that you sent: http://docker_server_ip:5601
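  • You can also confirm that the messages reached elasticsearch by listing its indices (logstash creates logstash-* indices by default)
curl 'http://docker_server_ip:9200/_cat/indices?v'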