Improving your Continuous Integration Setup with Docker and GitLab-CI
Posted on Oct 11, 2016. Updated on Feb 3, 2018
A typical Jenkins 1.0 setup for Continuous Integration (CI) comes with some drawbacks. The job configuration is stored outside the version control system. This makes it hard to set up a new job correctly or to track configuration changes. Another pain point is the variety of tools (JDK, Maven, node, gulp etc.) that have to be installed and maintained on all Jenkins nodes. This increases the maintenance effort and can slow down development. Let’s consider some solutions for these issues.
We at Spreadshirt are currently improving our Continuous Integration infrastructure. I’d like to give you a short overview of the improvements. Thanks and props go out to our delivery engineering team!
Job Configuration as Part of the Project Source Code
Your Git repository should be the single source of truth for the CI setup. Hence, the job configuration should be located in the project’s Git repository. This way, you can track changes and restore old versions if things get messed up. Moreover, you can branch the job configuration and change it without impacting the builds of your master branch. Besides, it’s easy for everyone to set up the job for a project based on the stored configuration.
We have had good experiences with GitLab-CI. The job configuration is stored in the .gitlab-ci.yml file in the project root. A very simple configuration looks like this:
my_job:
  stage: build
  script:
    - mvn deploy -DSomeImportantArguments=ABC
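On every push, GitLab reads this file and a runner executes the job’s script, so the configuration always matches the state of the branch being built.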
Jenkins 2.0 also supports this concept with its Jenkinsfile.
Running the Build within a Docker Container
Let’s assume the following scenario:
- Our backend service is written in Java and its build needs JDK 1.8 and Maven 3.3.9.
- In contrast, our frontend is written in JavaScript, HTML and SASS and requires Node.js 6.7 and gulp 3.9.1 for the build.
- And finally, there is our Python script requiring Python 3.5, pip and PyBuilder 0.11.8.
In this case, you have to install all of these heterogeneous tools on your CI server; or more precisely, on every node. They are hard to maintain, and it’s even harder to set up a new CI node correctly. Moreover, the development team can get blocked if they have to wait for an administrator to install the required tools.
Things can get even worse: imagine you need different versions of the same build tool. For instance, Java service A requires Maven 3.3.9, but service B needs Maven 3.0.0. Yes, Jenkins allows you to select a certain Maven version in the job configuration, but all required Maven versions have to be installed and maintained on every Jenkins node by the administrator.
The solution is to run the build within a Docker container. The build for our Java service, for example, runs within a container that contains the proper JDK and Maven. The CI node itself is not polluted with build tools, and the job can run on any node. This simplifies the CI infrastructure. Besides, the development teams become more independent: they know best what the build of their application requires and can set up a tailored Docker image for it.
Again, GitLab-CI supports this feature with Docker Runners. You can specify the base image in the .gitlab-ci.yml:
image: maven:3.3.9-jdk-8-alpine

my_job:
  stage: build
  script:
    - mvn deploy -DSomeImportantArguments=ABC
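The image can also be set per job, which covers the heterogeneous scenario from above: each component’s build runs in its own container. Here is a minimal sketch, assuming the official maven, node and python images from Docker Hub (the exact tags and script commands are illustrative, not our actual setup):

build_backend:
  stage: build
  image: maven:3.3.9-jdk-8-alpine
  script:
    # runs inside the Maven/JDK 8 container
    - mvn deploy -DSomeImportantArguments=ABC

build_frontend:
  stage: build
  image: node:6.7
  script:
    # install dependencies and run the locally installed gulp
    - npm install
    - node_modules/.bin/gulp build

build_scripts:
  stage: build
  image: python:3.5
  script:
    # PyBuilder’s command line tool is called pyb
    - pip install pybuilder==0.11.8
    - pyb

Each job gets its required tool versions from its image; none of them have to be installed on the CI node.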
What else?
- GitLab-CI and Jenkins 2.0 support Delivery Pipelines as first-class citizens (see the sketch after this list).
- If you are already using GitLab, GitLab-CI is the natural solution for Continuous Integration or Continuous Delivery. The CI/CD capabilities are seamlessly integrated into the Git repository and work well. For instance, you don’t need to define webhooks for commits and branch creation or set up Jenkins’ branch scanning.
- GitLab-CI’s job description is much more concise than Jenkins’, but less flexible. However, so far we have gotten along fine with the provided functionality.
- The only shortcoming of GitLab-CI is the missing integration of JUnit test reports. You can’t see the test results in the web UI and there is no history of test failures (how often has a test failed before?). But there are a lot of open tickets covering this issue.
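To illustrate the pipeline support mentioned above, here is a minimal sketch of a multi-stage pipeline in GitLab-CI. The stage and job names are made up for this example:

stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  script:
    - mvn package

test_job:
  stage: test
  script:
    - mvn verify

deploy_job:
  stage: deploy
  script:
    - mvn deploy

The stages run in the declared order, jobs within the same stage run in parallel, and a failing stage stops the pipeline.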