Getting Started With Docker.

THE SERVER SERIES EPISODE 2

Docker is an amazing tool, allowing you to run multiple programs socially distanced from one another while still sharing the host's resources.

Introduction

"Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly" – the Docker documentation

Docker stemmed from the idea of virtualisation. Pioneered by Jim Rymarczyk at IBM in the 1960s, virtualisation works on the idea of creating a virtual computer, running a virtual OS (e.g. Ubuntu), inside another, physical computer.

So it's basically computerception.

Now, this works, but it's terribly slow. Why? Because it runs multiple complete OSes, each with its own kernel, at the same time on one computer.

Containerization speeds this up by allowing applications to share the host's kernel instead.
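
You can see this kernel sharing for yourself once Docker is installed (a quick sketch, assuming the public alpine image): a container reports the same kernel version as its host.

```shell
# Kernel version on the host
uname -r

# Kernel version inside a throwaway container: the same one, because it's shared
docker run --rm alpine uname -r
```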

Docker is the most popular containerization tool, but there are others available.

Why wouldn't you just run all applications on just the Host OS?

Well, if you did this, applications would overlap with each other: one application's storage might encroach on another's, or two applications might compete for the same networking ports.

Containerization and virtualization clean this up.

Installation

This section will guide you through installing Docker on Ubuntu.

1. Let's refresh our favourite Linux package index, APT.

sudo apt-get update

This pulls the latest package lists from the repositories.

2. Now, we'll install some utilities.

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

3. Now, we'll grab Docker's GPG security key, to make sure that when we download Docker afterwards, we're not getting a knock-off.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

4. Next, we'll add the Docker repository to our package index configuration.

echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

5. Afterwards, we'll update the package index a second time, then install Docker itself.

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

6. Finally, let's test our setup by running Docker's hello-world image.

sudo docker run hello-world
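
If everything worked, the output will include lines like these:

```
Hello from Docker!
This message shows that your installation appears to be working correctly.
```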

Post installation clean-up

After the installation, you might get a "permission denied" error when running docker without sudo. To fix this:

1. Add a user group called "docker".

sudo groupadd docker

2. Add your user to that group

sudo usermod -aG docker $USER

3. Refresh the group membership for your current shell (or simply log out and back in).

newgrp docker
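
Afterwards, you can check that the change took effect. Here's a small sketch using standard shell tools (the in_group helper is just an illustrative name, not part of Docker):

```shell
# Check whether a user belongs to a given group, using standard coreutils.
# Usage: in_group <user> <group>
in_group() {
  id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if in_group "${USER:-$(id -un)}" docker; then
  echo "user is already in the docker group"
else
  echo "not yet; log out and back in, or run 'newgrp docker'"
fi
```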

How do I use Docker then?

Basic Understanding

Imagine you'd like to run an application; now, also imagine a shipyard. In Docker, we'd place this application in a container.

How do we know which application goes in which container? We use images, much like the markings on the side of a shipping container, to define what application runs inside.

A volume is persistent storage (e.g. a directory on the host's hard drive) that Docker mounts into a container. Because it lives outside the container, its data survives when the container is removed or rebuilt.

We can build images using a Dockerfile. A Dockerfile is a set of instructions used to create an image. Similar to DNA, images can be built on top of other images, so you can have an image inside an image. When an image is based on another one, we refer to the original as a base image.
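
As a minimal sketch (the app.sh script and the myapp tag are hypothetical names for illustration), a Dockerfile building on an Ubuntu base image might look like:

```dockerfile
# Start from an official base image
FROM ubuntu:22.04

# Install what the application needs
RUN apt-get update && apt-get install -y curl

# Copy the application into the image
COPY app.sh /app/app.sh

# Command run when a container starts from this image
CMD ["/app/app.sh"]
```

You'd then build it with docker build -t myapp . and run it with docker run myapp.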

Images are immutable, meaning that in order to change the code running in a container, the image must be rebuilt and the container recreated from it.

Ports are what we use to access our containers from the outside world, similar to the port where shipping containers are loaded and unloaded. Ports are identified by numbers, popular ones being:

  • 80 - HTTP
  • 443 - HTTPS
  • 22 - SSH
  • 25 - SMTP
  • 21 - FTP
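
For example (a sketch that assumes a working Docker install; web is just a hypothetical container name), publishing a container's port 80 on host port 8080 looks like:

```shell
# Map host port 8080 to port 80 inside the container
docker run -d --name web -p 8080:80 nginx

# The container's web server is now reachable from the host
curl http://localhost:8080
```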

Extending Docker : Common Terms

In industry, people often refer to the OCI.

OCI stands for Open Container Initiative, and it's what it says on the tin: an open specification for container formats and runtimes. Docker builds on OCI-compliant tools such as containerd. However, there are bits of Docker that aren't part of the Open Container Initiative and aren't open source, convincing some admins to look elsewhere, such as to LXC.

Docker Compose is a way of scripting container initialisation. Say I'd like a configuration that starts a bunch of containers at once: I'd describe them in a docker-compose.yaml file and then tell Docker to run it with

docker-compose up
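
A minimal docker-compose.yaml might look like this (the service names, images, and password here are hypothetical placeholders):

```yaml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
```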

If we went back to our shipping analogy, Docker Compose is the equivalent of a cargo manifest: one document listing every container to be loaded together.

K8s, or Kubernetes, is also quite popular. It was developed by Google and really needs another blog post to explain. In basic terms, Kubernetes turns each Docker server into part of a bigger machine; it allows multiple servers to run multiple containers in harmony. In our analogy, it's likely the equivalent of the World Trade Organisation's computers, where roughly all ports around the world can be managed a phone call away. Docker Swarm works in a similar way, except it was made in-house by the Docker team.

Terms like reverse proxy (namely, NGINX) or load balancer relate to networking and security, and I reckon these too need another blog post. However, the general idea is that you'd use a reverse proxy to control port allocation and outside access to your containers. SSL and TLS concern the security and encryption of traffic in and out of your server.

Conclusion

Docker is a brilliant tool that sits in the toolbox of many people who self-host services on their servers.

I ran a Reddit poll on r/selfhosted, "Do you utilise Docker in your setup?", to find out more; there's plenty of talk about Docker alternatives there if you're interested.

Thanks for reading.

Just remember, even containers can become stuck.