Overview

A brief introduction

This was initially written to support in-person, instructor-led workshops and tutorials

These materials are maintained by Jérôme Petazzoni and multiple contributors

You can also follow along on your own, at your own pace

We included as much information as possible in these slides

We recommend having a mentor to help you …

… Or be comfortable spending some time reading the Docker documentation …

… And looking for answers in the Docker forums, StackOverflow, and other outlets

About these slides

All the content is available in a public GitHub repository.

You can get updated “builds” of the slides there.

Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide …

Docker 30,000ft overview

In this lesson, we will learn about:

  • Why containers (non-technical elevator pitch)

  • Why containers (technical elevator pitch)

  • How Docker helps us to build, ship, and run

  • The history of containers

We won’t actually run Docker or containers in this chapter (yet!).

Don’t worry, we will get to that fast enough!

OK… Why the buzz around containers?

The software industry has changed

Before:

  • monolithic applications

  • long development cycles

  • single environment

  • slowly scaling up

Now:

  • decoupled services

  • fast, iterative improvements

  • multiple environments

  • quickly scaling out

Deployment becomes very complex

Many different stacks:

  • languages

  • frameworks

  • databases

Many different targets:

  • individual development environments

  • pre-production, QA, staging…

  • production: on prem, cloud, hybrid

Results

  • Dev-to-prod reduced from 9 months to 15 minutes (ING)

  • Continuous integration job time reduced by more than 60% (BBC)

  • Deploy 100 times a day instead of once a week (GILT)

  • 70% infrastructure consolidation (MetLife)

  • 60% infrastructure consolidation (Intesa Sanpaolo)

  • 14x application density; 60% of legacy datacenter migrated in 4 months (GE Appliances)

  • etc.

Escape dependency hell

  • Write installation instructions into an INSTALL.txt file

  • Using this file, write an install.sh script that works for you

  • Turn this file into a Dockerfile, test it on your machine

  • If the Dockerfile builds on your machine, it will build anywhere

  • Rejoice as you escape dependency hell and “works on my machine”

Never again “worked in dev - ops problem now!”
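The INSTALL.txt → install.sh → Dockerfile progression could look like this minimal sketch (the base image, packages, and paths are illustrative, not from a real project):

```dockerfile
# Hypothetical Dockerfile derived from an install.sh script.
FROM ubuntu:22.04

# Steps that used to live in INSTALL.txt / install.sh:
RUN apt-get update && apt-get install -y python3 python3-pip

# Copy the application code into the image
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt

CMD ["python3", "app.py"]
```

Each instruction runs in a clean, reproducible environment, which is why a Dockerfile that builds on one machine builds anywhere.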

On-board developers and contributors rapidly

  • Write Dockerfiles for your application components

  • Use pre-made images from the Docker Hub (mysql, redis…)

  • Describe your stack with a Compose file

  • On-board somebody with two commands:

git clone ...
docker-compose up

With this, you can create development, integration, QA environments in minutes!
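A minimal Compose file for such a stack might look like this sketch (service names and images are examples; `redis` is one of the pre-made Docker Hub images mentioned above):

```yaml
# Hypothetical docker-compose.yml: a web app plus a Redis backend.
version: "3"
services:
  web:
    build: .          # built from the Dockerfile in this repository
    ports:
      - "8000:8000"
  redis:
    image: redis      # pre-made image from the Docker Hub
```

With a file like this in the repository, `docker-compose up` builds and starts the whole stack.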

Implement reliable CI easily

  • Build test environment with a Dockerfile or Compose file

  • For each test run, stage up a new container or stack

  • Each run is now in a clean environment

  • No pollution from previous tests

Way faster and cheaper than creating VMs each time!
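A CI job following this pattern could be sketched as the commands below (the image name and test command are assumptions, not a real project's setup):

```shell
# Hypothetical CI script: each test run gets a fresh container.
set -e

# Build a test image from the repository's Dockerfile
docker build -t myapp-test .

# Run the test suite in a brand-new container;
# --rm deletes it afterwards, so nothing leaks into the next run
docker run --rm myapp-test ./run-tests.sh
```

Because every run starts from the same image, a failing test can't be blamed on leftovers from a previous run.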

Use container images as build artefacts

  • Build your app from Dockerfiles

  • Store the resulting images in a registry

  • Keep them forever (or as long as necessary)

  • Test those images in QA, CI, integration…

  • Run the same images in production

  • Something goes wrong? Roll back to the previous image

  • Investigating old regression? Old image has your back!

Images contain all the libraries, dependencies, etc. needed to run the app.
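In practice, “keeping images around” and “rolling back” boil down to tags; a sketch (the registry address and version numbers are made up):

```shell
# Hypothetical release flow using image tags as build artefacts.
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3

# QA, CI, and production all run the same immutable tag...
docker run -d registry.example.com/myapp:1.2.3

# ...so rolling back is just running the previous tag
docker run -d registry.example.com/myapp:1.2.2
```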

Decouple “plumbing” from application logic

  • Write your code to connect to named services (“db”, “api”…)

  • Use Compose to start your stack

  • Docker will set up a per-container DNS resolver for those names

  • You can now scale, add load balancers, replication … without changing your code

Note: this is not covered in this intro-level workshop!
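For instance, with a Compose file like the sketch below, the application can hard-code the hostname “db”, and Docker's DNS resolves it to the right container (service names, image, and connection URL are illustrative):

```yaml
# Hypothetical Compose file: the app reaches its database as "db".
version: "3"
services:
  web:
    build: .
    environment:
      DATABASE_URL: "postgres://db:5432/myapp"   # "db" resolves via Docker DNS
  db:
    image: postgres
```

Swapping the single `db` container for a replicated or load-balanced setup later requires no change to the application code, only to the stack definition.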

Formats and APIs, before Docker

  • No standardized exchange format. (No, a rootfs tarball is not a format!)

  • Containers are hard to use for developers. (Where’s the equivalent of docker run debian?)

  • As a result, they are hidden from the end users.

  • No re-usable components, APIs, tools. (At best: VM abstractions, e.g. libvirt.)

Analogy:

  • Shipping containers are not just steel boxes.

  • They are steel boxes that are a standard size, with the same hooks and holes.

Formats and APIs, after Docker

  • Standardize the container format, because containers were not portable.

  • Make containers easy to use for developers.

  • Emphasis on re-usable components, APIs, ecosystem of standard tools.

  • Improvement over ad-hoc, in-house, specific tools.

Shipping, before Docker

  • Ship packages: deb, rpm, gem, jar, homebrew…

  • Dependency hell.

  • “Works on my machine.”

  • Base deployment often done from scratch (debootstrap…) and unreliable.

Shipping, after Docker

  • Ship container images with all their dependencies.

  • Images are bigger, but they are broken down into layers.

  • Only ship layers that have changed.

  • Save disk, network, memory usage.

Example

Layers:

  • CentOS

  • JRE

  • Tomcat

  • Dependencies

  • Application JAR

  • Configuration
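Those layers map roughly onto a Dockerfile like this sketch, one instruction per layer (package names and paths are assumptions):

```dockerfile
# Hypothetical Dockerfile producing the layers listed above.
FROM centos:7                             # CentOS layer
RUN yum install -y java-11-openjdk        # JRE layer
RUN yum install -y tomcat                 # Tomcat layer
COPY lib/ /opt/app/lib/                   # dependencies layer
COPY app.jar /opt/app/                    # application JAR layer
COPY conf/ /opt/app/conf/                 # configuration layer
```

When only the application JAR changes, only the last three layers are rebuilt and shipped; the CentOS, JRE, and Tomcat layers are reused from cache.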

Devs vs Ops, before Docker

  • Drop a tarball (or a commit hash) with instructions.

  • Dev environment very different from production.

  • Ops don’t always have a dev environment themselves …

  • … and when they do, it can differ from the devs’.

  • Ops have to sort out differences and make it work …

  • … or bounce it back to devs.

  • Shipping code causes friction and delays.

Devs vs Ops, after Docker

  • Drop a container image or a Compose file.

  • Ops can always run that container image.

  • Ops can always run that Compose file.

  • Ops still have to adapt to prod environment, but at least they have a reference point.

  • Ops have tools that let them use the same image in dev and prod.

  • Devs can be empowered to make releases themselves more easily.