Docker is a technology for shipping stable and secure containers to any environment. Once in an environment, it allows us to bring these containers up and down as needed. These environments are isolated from their host by default, and are made up of isolated containers. This provides security on the host and further security within the Docker network. Environments are made up of the following.
Images are created for each needed service and pushed to our Docker Hub account (cmv343). Docker-compose files are created to "group" images as needed. EC2 units are created and Docker is installed (along with docker-compose). A service is then created to start Docker, pull any needed images, and then run the compose file. Once up, the environment can be updated by re-composing or restarting the individual containers. EC2 units will be templated with the proper files and startup, so if the whole environment needs to come down, it can be re-created with any updates.
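As a rough sketch of that update flow once an environment is running (the image and folder names here are placeholders, not our real ones):
# build and push an updated image (from a build machine)
$-> sudo docker build -t cmv343/example-service:latest .
$-> sudo docker push cmv343/example-service:latest
# on the EC2 unit, pull and re-compose from the cluster's compose folder
$-> cd /etc/docker/compose/example-cluster
$-> sudo docker-compose pull
$-> sudo docker-compose up -d --remove-orphans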
Docker and docker-compose need to be installed and ready to run services from a compose folder (/etc/docker/compose).
$-> sudo yum install docker -y
$-> sudo service docker start
# make docker autostart
$-> sudo chkconfig docker on
# I strongly recommend also installing git (sudo yum install -y git)
# It may be necessary to restart the EC2 unit
$-> sudo reboot
# docker-compose (latest version)
$-> sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
# Fix permissions after download
$-> sudo chmod +x /usr/local/bin/docker-compose
# Verify success
$-> docker-compose version
Once the service file described below is created, a new service can be brought up through that one file simply by referencing a folder in /etc/docker/compose/.
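For example, a hypothetical service named <service> just needs its folder to exist under /etc/docker/compose/ with a compose file inside it:
sudo mkdir -p /etc/docker/compose/<service>
sudo cp docker-compose.yml /etc/docker/compose/<service>/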
There is only one .service file; it acts as a template used across all services without change. Below are the file contents and how it is attached to systemd, based on this link.
sudo vim /etc/systemd/system/docker-compose@.service
[Unit]
Description=%i service with vhpportal
PartOf=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/etc/docker/compose/%i
ExecStart=/usr/bin/docker-compose up -d --remove-orphans
ExecStop=/usr/bin/docker-compose down
[Install]
WantedBy=multi-user.target
The %i will be the name of our compose service (the folder name under /etc/docker/compose/). On start we bring up the services defined in that folder's compose file and remove any orphaned containers associated with it. On stop, the services are brought down. This does no other cleaning of the Docker daemon (pruning images, checking for open containers).
Once created, the service needs to be enabled. This is not shown in the link, but we believe it is necessary for the service to start on EC2 reboot.
sudo systemctl enable docker-compose@<service>
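If the unit file is new or has just been changed, systemd also needs to reload its configuration so it picks the file up:
sudo systemctl daemon-reload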
It can be further controlled through the following commands
sudo systemctl start docker-compose@<service>
sudo systemctl restart docker-compose@<service>
sudo systemctl stop docker-compose@<service>
sudo systemctl status docker-compose@<service>
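If a service does not come up as expected, the unit's logs (systemd messages plus the captured docker-compose output) can be checked with journalctl:
sudo journalctl -u docker-compose@<service>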
Docs: Docker Remote Access
To connect to the Docker daemon on these instances from our own computers, we need to edit the default docker.service file. This can be done by:
sudo systemctl edit docker.service
The following lines can be added to the file, between the commented lines (as directed). This file can be confusing at first: there appears to be a lot of optional code commented out, but it is the editor itself that displays the existing unit as comments. You can keep everything there and just add the lines below uncommented. The editor that opens (nano, in our case) only needs a couple of commands: save -> CTRL+O then Enter (CTRL+M is the same as Enter), exit -> CTRL+X.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:6000
The above allows TCP connections from anywhere on port 6000. Initially we wanted to allow access only from the vhp-bastion's IP, but Docker would not start properly when we tried that. So we secure access instead by only allowing it through the EC2 security groups. Port 6000 is THE port we communicate with Docker through; it should be the same on every EC2 unit used in our system.
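From a machine allowed in by the security group, the daemon can then be reached by pointing the Docker CLI at that port. A quick sketch (the host name is a placeholder):
docker -H tcp://<ec2-host>:6000 ps
# or set it for the whole shell session
export DOCKER_HOST=tcp://<ec2-host>:6000
docker ps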
This section is short because the topic is better explained here, but it is important to highlight some points. First, naming. Our name structure is a prefix of either "prod-" or "dev-". There will only ever be one prod network, as it is the prod network. Dev is used more as a template for niche needs, but will have one special network that acts as a "parallel" to prod.
Production => vhpportal.com
Development => dev.vhpportal.com
The above will reflect the branches prod-vhpportal and dev-vhpportal respectively. From here on any special configurations will fall under Production OR Development, and can be viewed in the respective branches.
Our services can each be described as one Docker container on a network. We group these into clusters as follows:
Each cluster (currently) represents an EC2 unit, and will have an AMI holding the cluster's associated compose file. These clusters will be the main point of interaction when working on the services.
The vn-network-services repo holds all of the clusters and associated files in directories at the root level. Branches are set up and named to match the associated network on AWS. So, our production (vhpportal.com) network can be found at branch "prod-vhpportal". The repo will always have a "dev" branch, organized in a way that it can be branched from when setting up new networks.
Each cluster directory will have folders matching the included services, plus a docker-compose.yml. The compose file holds the cluster setup, declares the needed services, and lists the required images for those services. Note that the services in the compose file may not match the folders one-to-one; the folders are there to help with putting the images together. In the case of a load-balancer, for example, we would create more than one service from the same image.
A docker-compose.yml file represents the cluster. Our current configurations are basic but working well. Something we may want to look at is having the compose file build the images (if needed) in the production configurations; right now we have to separately build, push, and pull images before we can re-compose. Documentation can be found here. A minimal sketch of a cluster compose file is shown after the cluster list below.
Bastion - cmv343/vhp-bastion
VHPportal
Core-Cluster
Services-Cluster
Mart-Cluster
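As referenced above, here is a minimal sketch of what a cluster's docker-compose.yml can look like. The service and image names are placeholders rather than our actual configuration:
version: "3"
services:
  example-app:
    image: cmv343/example-app:latest
    restart: always
  example-rp:
    image: cmv343/example-rp:latest
    restart: always
    ports:
      - "80:80"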
VHP services are the smallest part of the networks and can be broken down into types. These types are hinted at by the suffix ( -lb | -rp | blank ) in the image names.
Application - service that holds an application (usually nodeJS) running behind a reverse-proxy OR load-balancer
Reverse-Proxy - stands at the entrance of the Docker network and distributes requests to services in the same cluster, or to some other cluster.
Load-Balancer - normally in the same service as the reverse-proxy, but helps… balance the requests coming into a cluster across one or more service types.
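As a rough illustration of the reverse-proxy/load-balancer idea (not our actual configuration; the upstream and container names are placeholders), such a service mostly boils down to an nginx config along these lines:
# hypothetical fragment of an nginx config (lives inside the http block)
upstream example_app {
    # balance across more than one container built from the same image
    server example-app-1:3000;
    server example-app-2:3000;
}
server {
    listen 80;
    location / {
        # hand incoming requests off to the services in the cluster
        proxy_pass http://example_app;
    }
}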
Since services are built into images, they will need a Dockerfile at the root of the service's directory.
There will be two main setups for our Dockerfiles: either a Node application or an nginx reverse-proxy/load-balancer. Each individual container is described below, but some of the basics are discussed here.
To keep some consistency, the following base images are used:
Nginx => FROM nginx:alpine
Node => FROM node:alpine
Pulling code from our git repositories is needed mostly with applications. We have worked hard to lock down our git repositories and have opted to accomplish these pulls using SSH keys. After setting up these keys for the user, we pass the key to the build command.
sudo docker build --ssh default=/home/<user>/.ssh/<key file> -t <image>:<tag> .
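Note that the --ssh flag (and the --mount=type=ssh used below) requires BuildKit; on older Docker versions it may need to be switched on for the build, for example:
sudo DOCKER_BUILDKIT=1 docker build --ssh default=/home/<user>/.ssh/<key file> -t <image>:<tag> .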
Inside the Dockerfile we can place the following
RUN apk add --no-cache openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone -b <branch> git@github.com:<repo name>.git
To decrease (or at least attempt to decrease) the exposure of the key, we can adjust the above into a multi-stage build. The first stage is responsible for pulling in the code, and the second starts a new image, copying only the necessary code into it.
FROM alpine as gset
RUN apk add --no-cache openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone -b <branch> git@github.com:<repo name>.git
FROM node:alpine
COPY --from=gset /<repo name> /<repo name>
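From here the node stage would typically continue by installing dependencies and starting the application. A minimal sketch, assuming a standard Node project with a package.json at the repo root and a start script defined (both assumptions, not taken from our services):
WORKDIR /<repo name>
RUN npm install
CMD ["npm", "start"]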