FIWARE Docker Container Service Installation and Administration Guide

Deployment Steps

This section describes the procedure for manually deploying a FIWARE Docker Container Service (FDCS) on OpenStack. In brief, the FDCS is a Multi-Tenant Swarm cluster. We refer to the node where the Multi-Tenant Swarm manager is running as the Swarm Management Node and nodes where the docker engines are running as the Docker Nodes. The following steps are required:

  1. Create an SSH key pair that will be used to access the Swarm Management Node, the Docker Nodes in the Swarm cluster, and the NFS Server.
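    For example, assuming the OpenStack CLI is available (the key pair name fdcs-key is only illustrative; the OpenStack dashboard may be used instead):

      >openstack keypair create fdcs-key > fdcs-key.pem
      >chmod 600 fdcs-key.pem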
  2. Create a security group for the Swarm Management Node. It contains rules allowing public access to the Swarm Manager port, the SSH port, and ping. For example:
    | Service | IP Protocol | From Port | To Port | Source |
    |---------|-------------|-----------|---------|--------|
    | SSH | TCP | 22 | 22 | 0.0.0.0/0 (CIDR) |
    | Ping | ICMP | 0 | 0 | 0.0.0.0/0 (CIDR) |
    | Swarm Manager | TCP | 2376 | 2376 | 0.0.0.0/0 (CIDR) |
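    As a sketch using the OpenStack CLI (the group name swarm-manager-sg is only illustrative):

      >openstack security group create swarm-manager-sg
      >openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 swarm-manager-sg
      >openstack security group rule create --protocol icmp --remote-ip 0.0.0.0/0 swarm-manager-sg
      >openstack security group rule create --protocol tcp --dst-port 2376 --remote-ip 0.0.0.0/0 swarm-manager-sg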
  3. Create a Swarm Management Node VM instance. Associate it with its security group and key pair. Install Docker on the Swarm Management Node; it will be used to launch the docker image of the Multi-Tenant Swarm.
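    For example, using the OpenStack CLI with the illustrative key pair and security group names from the previous steps (image, flavor and network id are placeholders for values from your OpenStack project):

      >openstack server create --image <ubuntu image> --flavor <flavor> --key-name fdcs-key --security-group swarm-manager-sg --nic net-id=<private network id> swarm-manager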
  4. Create a security group for the Docker Nodes. It contains rules allowing public access to the SSH port, ping, and the docker auto-assigned ports. The docker auto-assigned ports are those ports that docker automatically assigns to containers as their external ports when they are not specifically designated in the docker command. It also contains a rule granting exclusive access to the Docker port from the Swarm Management Node; use the Swarm Management Node's public IP. Finally, it allows interaction between the cluster's docker host nodes over their private network to support User Defined Overlay networks.
    | Service | IP Protocol | From Port | To Port | Source |
    |---------|-------------|-----------|---------|--------|
    | SSH | TCP | 22 | 22 | 0.0.0.0/0 (CIDR) |
    | Ping | ICMP | 0 | 0 | 0.0.0.0/0 (CIDR) |
    | Docker Engine | TCP | 2375 | 2375 | Swarm Manager Public IP/32 (CIDR) |
    | Docker container ports auto assigned by the docker engine | TCP/UDP | 32768 | 61000 | 0.0.0.0/0 (CIDR) |
    | Docker Overlay Network control plane | TCP/UDP | 7946 | 7946 | Cluster's private network |
    | Docker Overlay Network data plane | UDP | 4789 | 4789 | Cluster's private network |
  5. Create Docker Nodes. Associate them with their security group and SSH key pair. [Install Docker on all the Docker Node instances](https://docs.docker.com/v1.11/). Enable the swap cgroup memory limit following the steps in the [docker documentation](https://docs.docker.com/v1.11/engine/installation/linux/ubuntulinux/).
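    As a sketch of those steps for an Ubuntu image booted with GRUB, set the following in /etc/default/grub:

      GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

    then update GRUB and reboot each Docker Node:

      >sudo update-grub
      >sudo reboot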
  6. Create a security group for the NFS server. It contains rules that allow the cluster's docker hosts to mount NFS volumes and ssh access for the administrator.
    | Service | IP Protocol | From Port | To Port | Source |
    |---------|-------------|-----------|---------|--------|
    | SSH | TCP | 22 | 22 | 0.0.0.0/0 (CIDR) |
    | Ping | ICMP | 0 | 0 | 0.0.0.0/0 (CIDR) |
    | NFS Server | TCP | 2049 | 2049 | Cluster's private network |
    | NFS Server | UDP | 2049 | 2049 | Cluster's private network |
  7. Create an NFS Server. Associate it with its security group and key pair.
  8. Install, configure and start the NFS Server:
    1. Install the NFS server: >sudo apt-get install nfs-kernel-server
    2. Create the directories that will be used to hold the docker volumes mounted by the docker nodes: >sudo mkdir -p /var/lib/openstorage/nfs >sudo mkdir -p /var/lib/osd/mounts
    3. In /etc/exports, grant the docker nodes access to the docker volume directories. For instance: /var/lib/openstorage/nfs <cluster private network CIDR>(rw,sync,no_subtree_check,no_root_squash) and /var/lib/osd/mounts <cluster private network CIDR>(rw,sync,no_subtree_check,no_root_squash).
    4. Start the nfs server: >sudo service nfs-kernel-server restart
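    To apply later changes to /etc/exports without restarting, and to confirm what is being exported (a sketch; showmount is provided by the NFS packages):

      >sudo exportfs -ra
      >sudo showmount -e localhost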
  9. Create a security group for the Key-Value Store server. It contains rules for ssh access and for servicing the Docker Nodes. Key-Value Store is used to support Docker Overlay Networks and the NFS Plugin driver.
    | Service | IP Protocol | From Port | To Port | Source |
    |---------|-------------|-----------|---------|--------|
    | SSH | TCP | 22 | 22 | 0.0.0.0/0 (CIDR) |
    | Ping | ICMP | 0 | 0 | 0.0.0.0/0 (CIDR) |
    | Key-Value Store Server's listening ports (e.g. 2379, 2380, 4001 for etcd) | TCP | 2379, 2380, 4001 | 2379, 2380, 4001 | Cluster's private network |
  10. Create a Key-Value Store Server. Associate it with its security group and key pair.
  11. Install, configure and start the Key-Value Store Server. In this example we use [etcd](https://github.com/coreos/etcd), but [consul](https://www.consul.io/intro/getting-started/kv.html) and [ZooKeeper](https://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html) may also be used.
    1. [Install Docker](https://docs.docker.com/v1.11/).
    2. [Run etcd as a docker container](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/container.md#docker). Replace <key-value server internal network ip> with the server's private network IP:

      >sudo docker run -d --restart always -v /etcd0.etcd:/etcd0.etcd \
        -v /usr/share/ca-certificates/:/etc/ssl/certs \
        -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd \
        quay.io/coreos/etcd etcd -name etcd0 \
        -advertise-client-urls http://<key-value server internal network ip>:2379,http://<key-value server internal network ip>:4001 \
        -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
        -initial-advertise-peer-urls http://<key-value server internal network ip>:2380 \
        -listen-peer-urls http://0.0.0.0:2380 \
        -initial-cluster-token etcd-cluster-1 \
        -initial-cluster etcd0=http://<key-value server internal network ip>:2380 \
        -initial-cluster-state new
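    To check that etcd is reachable over the cluster's private network, its version endpoint can be queried from any Docker Node (the IP is a placeholder):

      >curl http://<key-value server internal network ip>:2379/version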
  12. Launch the docker engine daemon on all the cluster's docker hosts. The cluster will support User Defined Overlay Networks and NFS Volumes. Configure the engine to listen on the port that was specified when you created the Docker Nodes security group above and to interact with the key-value store to support the docker overlay network. You should also allow it to listen on a Linux file socket to simplify debugging.
    1. Update /etc/default/docker with DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --icc=false --cluster-store=etcd://<key-value store ip>:<port> --cluster-advertise=<docker node ip>:2375".
    2. Restart docker as a service: >sudo service docker restart
    3. Configure and start the [OpenStorage Docker volume plugin driver](https://github.com/libopenstorage/openstorage) to support User Defined NFS volumes.
      1. Create a [yaml configuration file](https://github.com/libopenstorage/openstorage).
      2. Install the NFS client: >sudo apt-get install nfs-common
      3. Mount the NFS exports onto the directories /var/lib/openstorage/nfs and /var/lib/osd/mounts. For example:
        >sudo mount -t nfs -o proto=tcp,port=2049 <nfs server ip>:/var/lib/openstorage/nfs /var/lib/openstorage/nfs
        >sudo mount -t nfs -o proto=tcp,port=2049 <nfs server ip>:/var/lib/osd/mounts /var/lib/osd/mounts
      4. Verify the mounts; each of the following commands should print "nfs":
        >sudo df -P -T /var/lib/openstorage/nfs | tail -n +2 | awk '{print $2}'
        >sudo df -P -T /var/lib/osd/mounts | tail -n +2 | awk '{print $2}'
      5. Start the driver as a docker container:
        >sudo docker run -d --restart always --privileged -v /tmp:/tmp -v /var/lib/openstorage/:/var/lib/openstorage/ -v /var/lib/osd/:/var/lib/osd/ -v /run/docker/plugins:/run/docker/plugins -v /var/lib/docker:/var/lib/docker --name osd openstorage/osd -d -f /tmp/config_nfs.yaml --kvdb etcd-kv://<key-value store ip>:<port>/
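    To verify the engine configuration, the remote API and overlay-network support can be exercised from the Swarm Management Node (the Docker port is only reachable from its public IP per the security group above; the network name test-overlay is illustrative):

      >docker -H tcp://<docker node public ip>:2375 info
      >docker -H tcp://<docker node public ip>:2375 network create -d overlay test-overlay
      >docker -H tcp://<docker node public ip>:2375 network rm test-overlay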
  13. Configuring Swarm: FDCS can be configured by setting its environment variables. FDCS environment variables are briefly described below:
    • SWARM_ADMIN_TENANT_ID: contains the id of the tenant that may run docker commands as admin. Admin is authorized to manage docker resources, including tenant containers, volumes, network, etc., and issue all docker requests without any filtering.
    • SWARM_APIFILTER_FILE: may point to a json file that describes the docker commands an installation of the service wants to filter out. If no file is specified it defaults to apifilter.json in the directory where swarm is started. Currently there is only support for a "disableapi" array, which contains the list of commands to disable. This is an example of how an installation could disable network support:
      
      {
        "disableapi": ["networkslist", "networkinspect", "networkconnect", "networkdisconnect", "networkcreate", "networkdelete"]
      }
      
    • SWARM_AUTH_BACKEND: if set to "Keystone" then Keystone is used to authenticate the Tenants that issue docker requests, based on the Authorization Token and Tenant ID in their request headers.
    • SWARM_ENFORCE_QUOTA: if set to "true" then the Multi-Tenant Swarm Quota feature is enabled otherwise it is disabled. See SWARM_QUOTA_FILE for how to specify quotas.
    • SWARM_FLAVORS_ENFORCED: if set to "true" then the Multi-Tenant Swarm Flavors feature is enabled otherwise it is disabled. The flavors specification is embodied in a json file which contains a map describing the valid resource combinations that can appear in create container requests. Currently, Memory is the only resource that can be specified as a flavor. The Memory resource should be specified as a whole number which represents megabytes of memory. See SWARM_FLAVORS_FILE for how to specify flavors.
    • SWARM_FLAVORS_FILE: if SWARM_FLAVORS_ENFORCED is set to "true" then SWARM_FLAVORS_FILE points to a file with the flavors specification. If there is no file pointed to then it defaults to flavors.json in the directory where swarm is started. The specification must contain a "default" flavor. When the create container parameters do not match any of the specified flavors, the default flavor is applied to the create container replacing its original parameters. This is an example of the json flavors specification that is shipped with Multi-Tenant Swarm:
             
      {
        "default": {
          "Memory": 64
        },
        "medium": {
          "Memory": 128
        },
        "large": {
          "Memory": 256
        }
      }
      
      In the above flavors specification example there are three flavors: default, medium, and large. default describes 64 megabytes of memory, medium describes 128 megabytes of memory, and large describes 256 megabytes of memory. This means that a create container is limited to specifying its memory as 64MB, 128MB, or 256MB. If none is specified then the system applies the default, i.e. 64MB.
    • SWARM_KEYSTONE_URL: if SWARM_AUTH_BACKEND is set to "Keystone" then SWARM_KEYSTONE_URL must specify Keystone's URL, e.g. http://cloud.lab.fi-ware.org:4730/v2.0/.
    • SWARM_NETWORK_AUTHORIZATION: if set to "false" then the Multi-Tenant Swarm Network Authorization feature is disabled, otherwise it is enabled.
    • SWARM_MEMBERS_TENANT_ID: contains the tenant id whose members are eligible to use the service. If not set then the tenant id of any valid token may use the service. SWARM_MEMBERS_TENANT_ID is only valid when SWARM_AUTH_BACKEND is set to Keystone.
    • SWARM_MULTI_TENANT: if set to "false" then the Multi-Tenant Swarm is disabled otherwise Multi-Tenant Swarm is enabled. When Multi-Tenant Swarm is disabled the result is that the service is launched as vanilla Swarm. Generally disabling Multi-Tenant Swarm is used for debugging purposes to discover if a bug is related to the swarm docker configuration or to a Multi-Tenant Swarm feature.
    • SWARM_QUOTA_FILE: if SWARM_ENFORCE_QUOTA is set to "true" then SWARM_QUOTA_FILE may specify the quota specification. If no file is specified it defaults to quota.json in the directory where swarm is started. Currently quota support is limited to tenant memory consumption and it is the same for all tenants. This is an example of a json quota specification:
      
      {
         "Memory": 300
      }
      
  14. Start the Multi-Tenant Swarm Manager daemon (without TLS) on the Swarm Management Node. The Multi-Tenant Swarm docker image resides in the FIWARE Docker Hub repository at [fiware/swarm_multi_tenant](https://hub.docker.com/r/fiware/swarm_multi_tenant/). If token discovery is to be used then add the discovery flag, otherwise use the file flag to point to a file with a list of all the Docker Node public IPs and docker ports. For instance: >docker run -t -p 2376:2375 -v /tmp/cluster.ips:/tmp/cluster.ips -e SWARM_AUTH_BACKEND=Keystone -e SWARM_KEYSTONE_URL=http://cloud.lab.fi-ware.org:4730/v2.0/ fiware/swarm_multi_tenant:v0 --debug manage file:///tmp/cluster.ips
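    For the file flag, /tmp/cluster.ips lists one <docker node public ip>:<docker port> entry per line; the addresses below are placeholders:

      203.0.113.11:2375
      203.0.113.12:2375
      203.0.113.13:2375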
  15. Test the cluster’s remote connectivity by pinging and sshing to all the instances (including the Swarm Management Node).
  16. Test whether the Multi-Tenant Swarm Cluster works as expected by using docker commands on your local docker client. The docker -H flag specifies the Swarm Management Node and swarm port. The docker --config flag specifies the directory where a config.json file is prepared with a valid token and a valid tenant id. For instance: >docker -H tcp://<swarm management node public ip>:2376 --config $HOME/dir <docker command>. See the FIWARE Docker Container Service Users Guide for more details on how to use the service.
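    As a minimal sketch of the client-side setup, $HOME/dir/config.json passes the credentials through docker's HttpHeaders mechanism; the exact header names expected by the service are documented in the Users Guide, so X-Auth-Token and X-Auth-TenantId below are assumptions:

      {
        "HttpHeaders": {
          "X-Auth-Token": "<valid keystone token>",
          "X-Auth-TenantId": "<tenant id>"
        }
      }

    A simple smoke test is then: >docker -H tcp://<swarm management node public ip>:2376 --config $HOME/dir ps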