Tutorial for deploying sites with Docker
By the end of this tutorial, you will be able to deploy your site (or service) on the BR's servers! This lets you benefit from a free, fast and robust infrastructure!
Note: this tutorial can be tricky, don't hesitate to contact a BRman for help.
How to deploy Docker containers on the BR's infrastructure
Overview
This tutorial will walk you through:
- How to convert your website to a Docker service
- How to build it automatically on Gitlab-CI
- How to deploy it automatically on our servers
Requirements
- Basics of git (videos here or the beginning of this awesome online book) ;
- Basics of containers (see a really good explanation here) and of docker-compose (see the getting-started here) ;
- Basics of web (see an introduction on MDN) ;
- Basics of gitlab-ci (reading the getting-started is recommended) ;
Existing architecture
A physical server called endalcher deploys and runs your container. It uses the following services to connect your service to the Internet:
- docker-compose to deploy the correct environment for your containers ;
- traefik as a reverse proxy to serve your site behind your endpoint, it is configured here ;
- nginx to serve your site to the outside world (as for all the binets' sites).
How to convert your website to a Docker service
The exact procedure will depend on your project. An example can be found here: qdj-Dockerfile
A Docker container is an isolated machine on which you can run your website. In this step, we will help you convert your app to be Docker-ready. This basically means installing everything your app needs inside a Linux environment. You'll need to install Docker on your computer first (windows-installer).
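To check that Docker is working before going further, you can run the classic hello-world image (a quick sanity check, nothing project-specific):
docker --version
docker run --rm hello-world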
We assume your project is hosted on the BR's gitlab instance. Create a deployment directory at the root of your repository and create a file Dockerfile (with no extension) and a file docker-compose.yaml in it.
- Dockerfile
FROM hello-world
- docker-compose.yaml
# this is a docker-compose meant for local testing of the deployment image.
# for the real deployment, go to the docker-services BR gitlab project
version: "3"
services:
  app:
    build:
      context: ..
      dockerfile: deployment/Dockerfile
    # ports:
    #   - "8000:8000"
Then open a terminal in this directory and type docker compose up --build. If everything went well, you should see a nice message from Docker telling you everything is working!
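For reference, the local test amounts to the following commands, run from the root of your repository (the final docker compose down is just a convenient cleanup):
cd deployment
docker compose up --build
# when you are done testing, stop and remove the containers
docker compose down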
Now, you need to make your app work in the docker container. For this, you need:
- To update the Dockerfile to install everything you need. The following works for a basic (Django) Python app:
FROM python:3
# copies all the code inside the container
COPY . /my_super_project
WORKDIR /my_super_project
RUN pip install --no-cache-dir -r requirements.txt
# bind to 0.0.0.0 so the app is reachable from outside the container
CMD python3 manage.py runserver 0.0.0.0:8000
- To update the docker-compose file to expose the port you need (uncomment the corresponding lines in docker-compose.yaml), as sketched below.
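As a sketch, the docker-compose.yaml from above with the ports lines uncommented could look like this (assuming the Django app listens on port 8000):
version: "3"
services:
  app:
    build:
      context: ..
      dockerfile: deployment/Dockerfile
    ports:
      # host port 8000 -> container port 8000
      - "8000:8000"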
How to handle persistent data
If your app uses an external database or needs to keep some files across restarts, you should use docker volumes, with or without a postgres image. The jtx-docker-compose might help you understand what you need.
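To illustrate, here is a minimal, hedged sketch of a compose file with a postgres container and a named volume (all names are placeholders; see the QDJ example below for a real one):
version: "3"
services:
  db:
    image: postgres:alpine
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: myapp
    volumes:
      # named volume: the database files survive container restarts
      - "myapp-data:/var/lib/postgresql/data"
  app:
    build:
      context: ..
      dockerfile: deployment/Dockerfile
    depends_on:
      - db

volumes:
  myapp-data: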
How to handle secrets
Often your app has some secrets in a file (such as .env) that is not in Git. DO NOT put it in git (or all your secrets will be stolen!). Later we will see how to securely make them available to the automatic gitlab build.
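For instance, assuming your secrets live in a .env file, make sure it is listed in your .gitignore so it can never be committed by accident:
# .gitignore
.env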
Automatic build
We will use GitLab continuous integration to build your docker image directly on the BR's servers. Write the following into a file named .gitlab-ci.yml at the root of your repository:
# Build docker image and upload to registry
variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_PIPELINE_IID
  PROJECT: mySuperProject

docker:build:
  stage: build
  only: # put here the git branches you want the job to run on:
    - main
    - stable
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: '' # disables TLS in docker-in-docker (not needed)
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --pull --build-arg version=$CI_PIPELINE_IID -t $IMAGE_TAG -f deployment/Dockerfile .
    - docker push $IMAGE_TAG
When you commit this file to Gitlab, you should see a new job appear (look under Build > Jobs). Your image should build correctly; if not, it means something that is available when you build on your computer is not available to Gitlab. Read the job logs and try to fix it!
Checking the image
This step is optional, but it is the easiest way to debug your image! You can run it locally like so:
docker login registry.binets.fr
docker run --rm registry.binets.fr/<nom.prenom>/<my-gitlab-repo>:<image>
Where <image> is the number of the image (available in Gitlab -> Deploy -> Container registry) and <nom.prenom>/<my-gitlab-repo> is the same as in your project Gitlab URL.
Typically, your app won't start because you forgot a secret file (see below how to manage that !).
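If the only thing missing is a set of environment variables, you can inject them at run time for a local test; a sketch, assuming your secrets are in a local .env file:
docker login registry.binets.fr
docker run --rm --env-file .env registry.binets.fr/<nom.prenom>/<my-gitlab-repo>:<image>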
Automatic deployment
We will now add a second CI job to your repository to deploy your image. It will trigger another CI pipeline, in br/docker-services. This pipeline has the rights to deploy on endalcher, and does so using a docker-compose.yaml file. To trigger it, add the following to your .gitlab-ci.yml:
docker:deploy:
  stage: deploy
  only:
    - stable
  image: debian:latest
  when: manual
  script:
    - apt-get update && apt-get install -y curl
    - curl --fail-with-body --request POST --form token=$TRIGGER_TOKEN --form ref=master --form variables[PROJECT]=$PROJECT --form variables[DOCKER_IMAGE]=$IMAGE_TAG https://gitlab.binets.fr/api/v4/projects/560/trigger/pipeline
Make sure the variable PROJECT in .gitlab-ci.yml is correct: our pipeline will deploy the file named docker-compose.${PROJECT}.yaml.
In order to create the correct docker-compose.${PROJECT}.yaml file, you will need to create a Merge Request on br/docker-services, or send a docker-compose.yaml file to a BRman. The br/docker-services repository is full of examples, and a basic one is explained below:
Extra parameters and secrets
You can also pass any parameter needed in the compose file by adding arguments to the cURL call like so: --form variables[MY_SUPER_VARIABLE]="it equals 42!"
If you want to pass secret environment variables, store them in Gitlab under Settings/CI-CD/Variables and make sure they are "protected" and "masked". Pass them like so: --form variables[MY_PASSWORD]=$PASSWORD_STORED_IN_GITLAB.
Be aware that any secret passed this way is visible to the BR and to everyone that has maintainer access to your repository! If you need help setting this up, don't hesitate to ask the BR!
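On the receiving side, such variables are consumed in your docker-compose.${PROJECT}.yaml through the usual compose substitution syntax; a hedged sketch (the variable names are only examples):
services:
  app:
    image: ${DOCKER_IMAGE}
    environment:
      # filled in by the triggered pipeline from the cURL --form variables[...]
      MY_SUPER_VARIABLE: ${MY_SUPER_VARIABLE}
      MY_PASSWORD: ${MY_PASSWORD}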
Secret files
Be aware that any secret passed this way is also visible to the BR and to everyone that has ANY access to your repository (because they can get the docker image)! So only do this with a private registry, or make sure that Settings -> General -> Visibility, project features, permissions only allows registry access for members.
Another way to store secrets in your Gitlab repo is Gitlab -> Settings -> CI-CD -> Secure-files. You can upload any (small) secret file there and it will not be exposed! You can download it at build time like so:
# Build docker image and upload to registry
docker:build:
  stage: build
  only:
    - main
    - stable
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # For disabling TLS in docker-in-docker (not needed)
    DOCKER_TLS_CERTDIR: ""
    # !!! where to download our secure .env file
    SECURE_FILES_DOWNLOAD_PATH: "."
  script:
    # !!! download secure .env file
    - apk add --no-cache curl bash
    - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
    # build docker image
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --pull --build-arg version=$CI_PIPELINE_IID -t $IMAGE_TAG -f deployment/Dockerfile .
    - docker push $IMAGE_TAG
    - echo "=> pushed $IMAGE_TAG"
Private repository
Keeping your repository private can be beneficial for security or privacy, but it requires an extra step for the BR to access the docker image. We will pass a CI deploy token as a cURL parameter.
- Go to your Settings -> Repository -> Deploy token and create a token named exactly gitlab-deploy-token
- Add the following parameters to the cURL call in your CI deploy job :
--form variables[OPTIONAL_PRIVATE_CI_REGISTRY]=$CI_REGISTRY --form variables[OPTIONAL_PRIVATE_CI_REGISTRY_USER]=$CI_DEPLOY_USER --form variables[OPTIONAL_PRIVATE_CI_REGISTRY_PASSWORD]=$CI_DEPLOY_PASSWORD
Example
You want to deploy the QDJ site with docker, and bind it to the server name qdj.binets.fr. Your site is running on Flask and uses a PostgreSQL database.
Pre-existing configuration
You have a little Dockerfile:
# Base image: python3
FROM python:3

WORKDIR /usr/src/app

# Install dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy application files
COPY . .

EXPOSE 5000/tcp

# the command runs the migrations and then launches gunicorn to serve the app
CMD python -m flask db upgrade && gunicorn -b :5000 qdj:app
You have the following docker-compose.yaml:
version: '3.1'

services:
  db:
    # Use a small postgres image, as we don't need anything on this container
    image: postgres:alpine
    environment:
      POSTGRES_USER: qdj
      POSTGRES_PASSWORD: somepassword
      POSTGRES_DB: qdj
    # store the data on a persistent volume
    volumes:
      - "qdj-data:/var/lib/postgresql/data"
    restart: always

  app:
    # Use the image from Dockerfile
    image: ${DOCKER_IMAGE}
    environment:
      FLASK_APP: qdj.py
      FLASK_ENV: production
      FLASK_DEBUG: 0
      DATABASE_URI: "postgresql://qdj:somepassword@db:5432/qdj"
      SECRET_KEY:
    # Tells Docker to start app after db (otherwise flask crashes)
    depends_on:
      - db
    # Always restart on crash
    restart: always

volumes:
  qdj-data:
Production docker-compose.yaml
We will adapt your docker-compose.yaml file in order to bind your site to the hostname:
- First let's connect your app container to the global network. We won't bind the db container to the global network, to protect it from intruders. (Network graph in endalcher.) We will add network configuration to the docker-compose.yaml:
  - To enable the network web, we add at the bottom of the file:

    networks:
      web:
        external: true

  - To connect app to the network web, we add to the app configuration:

    networks:
      - web
      - default

- Then we can add labels to tell traefik to match the hostname to packets going to app:

    labels:
      - "traefik.docker.network=web"
      - "traefik.frontend.rule=Host:qdj.binets.fr"
      - "traefik.port=5000"
This gives us the following docker-compose.qdj.yaml:
version: '3.1'

services:
  db:
    image: postgres:alpine
    restart: always
    environment:
      POSTGRES_USER: qdj
      POSTGRES_PASSWORD: qdjpw
      POSTGRES_DB: qdj
    volumes:
      - "qdj-data:/var/lib/postgresql/data"

  app:
    image: ${DOCKER_IMAGE}
    environment:
      FLASK_APP: qdj.py
      FLASK_ENV: production
      FLASK_DEBUG: 0
      DATABASE_URI: "postgresql://qdj:qdjpw@db:5432/qdj"
      SECRET_KEY:
    depends_on:
      - db
    restart: always
    networks:
      - web
      - default
    labels:
      - "traefik.docker.network=web"
      - "traefik.frontend.rule=Host:qdj.binets.fr"
      - "traefik.port=5000"

volumes:
  qdj-data:

networks:
  web:
    external: true
This file goes into the br/docker-services repository; it will be used by the triggered pipeline to create your containers.
Triggering pipelines with Gitlab CI
In this pipeline, we will need to add 2 jobs:
- one to build the docker image using the Dockerfile
- one to trigger the br/docker-services pipeline to create the containers.
To build the docker image, we will use the template from above:
# Build docker image and upload to registry
docker:build:
  stage: build
  only:
    - master
    - stable
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # For disabling TLS in docker-in-docker (not needed)
    DOCKER_TLS_CERTDIR: ''
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --pull --build-arg version=$CI_PIPELINE_IID -t $IMAGE_TAG -f deployment/Dockerfile .
    - docker push $IMAGE_TAG
The second job is rather tricky. We will use curl to trigger the pipeline, using the Gitlab Pipeline API. This call is secured by a trigger token, which should be saved into your project variables as $TRIGGER_TOKEN. This is what the job looks like:
docker:deploy:
  stage: deploy
  when: manual
  # debian image so that apt-get is available (as in the generic job above)
  image: debian:latest
  script:
    - apt-get update && apt-get install -y curl
    - curl --request POST --form token=$TRIGGER_TOKEN --form ref=master --form variables[PROJECT]=qdj --form variables[DOCKER_IMAGE]=$IMAGE_TAG https://gitlab.binets.fr/api/v4/projects/560/trigger/pipeline
Pay attention to the syntax, because debugging this API call takes a lot of time...
If you want to add variables that will be used in your docker-compose.yaml, add other arguments to curl like --form variables[WELCOME_MESSAGE]="Hello World!".
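Such a variable would then be referenced in docker-compose.qdj.yaml with the usual substitution syntax; a small sketch (WELCOME_MESSAGE is only an example name):
services:
  app:
    image: ${DOCKER_IMAGE}
    environment:
      WELCOME_MESSAGE: ${WELCOME_MESSAGE}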
You can see an example of a CI using this in the br/qdj repo.
Note: for BR projects, you can use the trigger syntax instead of the API call, since you don't need authorization.
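A minimal sketch of what such a job could look like with the trigger keyword (assuming the downstream project is br/docker-services on its master branch; adapt PROJECT and the image variable to your case):
docker:deploy:
  stage: deploy
  when: manual
  variables:
    # job variables of a trigger job are forwarded to the downstream pipeline
    PROJECT: qdj
    DOCKER_IMAGE: $IMAGE_TAG
  trigger:
    project: br/docker-services
    branch: master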
History
- In 2017, the BR16 set up a Kubernetes cluster. They managed to set up a workflow to deploy seamlessly from gitlab on that cluster. However, the configuration was hard to maintain and the BR17 was not able to keep it going.
- In 2018, the BR17 (Oliver Facklam + Hadrien Renaud) set up a small docker service on a computer from the Salle Informatiques.
Developing a site for BR docker deployment
Docker allows most technologies to be abstracted into a container, and thus most - if not all - sites can be deployed in docker. However, here is a bit of advice to make deployment less painful:
- Before using a particular external service (something that should run in a separate container), check that a well-supported docker image exists, and check with a BRman that there are no other constraints (for example, MinIO goes through a separate dedicated server that already exists, rather than a container).
- Use a postgres database to get access to an admin website for your server.