Working with Docker is EASY!

I haven't posted in a while, and this is gonna be a good rant.... I mean, content! For the past decade we've been bombarded by new tech, especially in the development area.... and lots, LOTSSSS of new fancy words to describe technology that came into this world to make our development lives easier, NOT HARDER.

From a developer's perspective Docker is very simple, so don't get fooled by the marketing, the business talk, or normies trying to build a colorful resume. For the everyday dev it's no more than a simple program you install, and it's very easy to get started with (easier than other stuff like Linux, git, etc.)

What?

'Back in my day', says the xxl-boomer, we used to configure all servers one by one and deploy the apps manually. Then we got better servers and virtual machines.... and then tools like Vagrant to help us provision those machines.

Somewhere along the way we got Docker (and by Docker I mean containers; Docker isn't the only player here.... and containers have been around for a long time.)

Docker containers use the host system's kernel, and that's why they are lightweight and super fast compared to a fully-functional virtual machine.
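You can see this for yourself with a small sketch (assuming Docker is installed; on Linux, that is — Docker Desktop on Mac/Windows runs a small VM underneath):

```shell
# Compare kernel versions on the host and inside a container;
# both print the same thing because containers share the host kernel.
if command -v docker >/dev/null; then
  uname -r                          # host kernel
  docker run --rm alpine uname -r   # same kernel, seen from inside a container
else
  echo "docker not available, skipping demo"
fi
```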

Our First container

The way it works is similar to Vagrant: you start from a base image, which can be something like a distro (debian, alpine, ubuntu, etc.), a piece of software (nginx, apache, mariadb), or even stuff like 'wordpress' that already contains all you need.

Step One

$ docker pull debian:stable
# 'debian' is the name of the image, and 'stable' is the 'tag'. Check each project's Hub page for more info on available tags

This goes to Docker Hub (a registry of Docker images shared by the community).

We got the image, now let's run it:

$ docker run -d -t --name mydebiantest debian:stable
# -d = 'Detach': run the container in the background and get the host's prompt back
# -t = Allocate a pseudo-TTY; this keeps a terminal attached so the container stays alive and you can open a shell in it later

Holy elon musks!!! That was fast...... hmm, let's check if it's actually running, it can't be that good:

$ docker ps

There you go: your own Debian container running, isolated, with its own network, CPU, RAM...

I just can't believe it..... ok, let's get a shell inside it (no SSH server needed, docker exec does the job):

$ docker exec -it mydebiantest /bin/bash
# -i = interactive (we want to type stuff and receive feedback)
# -t = allocate a tty, same as before

And stop it with

$ docker stop mydebiantest

And resume it later on with

$ docker start mydebiantest

A Web server

That's cool, but let's make something more useful.

$ docker pull nginx:alpine
# let's use nginx running on Alpine Linux (a very common, fast and lightweight combo)

And to run it, let's add some parameters to 'share/mount' a working folder, containing our HTML files, with the container:

$ docker run -it -d -p 8080:80 -v ~/public:/usr/share/nginx/html --name web nginx:alpine
# -p 8080:80 = Port mapping: links the host's port 8080 to the container's exposed port 80
# -v ~/public:/usr/share/nginx/html = Same idea as ports: mounts your local ~/public folder onto the container's /usr/share/nginx/html (nginx's default web root)

That's it, you can now access your website at http://localhost:8080

It's all about Images

From here you can work with your containers: create images/tags from them, publish them on the public Docker Hub (or a private one), or deploy them to any Docker-enabled service such as AWS, Linode, Azure, etc.

But just to be clear: that's not a necessary step. You can use Docker on your workstation just to replicate a server environment, or to run different versions of things like nginx, apache, php, mysql, etc. without 'installing' everything on your PC.
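As a sketch (assuming Docker is installed), here's what running two PHP versions side by side looks like; php:7.4-cli and php:8.2-cli are real published tags on Docker Hub, and nothing gets installed on the host:

```shell
# Run two different PHP versions side by side, straight from Docker Hub.
if command -v docker >/dev/null; then
  docker run --rm php:7.4-cli php -v   # reports PHP 7.4.x
  docker run --rm php:8.2-cli php -v   # reports PHP 8.2.x
else
  echo "docker not available, skipping demo"
fi
```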

Dockerfile

So far we created images using the run/build commands and a bunch of arguments; but what if we need to rebuild an image, configure the software inside our containers, or set up some environment variables?

To achieve such things we use Dockerfiles; that way Docker can build images by reading the instructions inside them. There are many instructions available, but again, it's easy to get overwhelmed.
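To give a taste, here's a small hypothetical Dockerfile touching the instructions you'll meet most often (FROM, ENV, RUN, WORKDIR, COPY, EXPOSE, CMD); the app and its start script are made up for illustration:

```dockerfile
# Start from a base image
FROM debian:stable

# Set an environment variable inside the image
ENV APP_ENV=production

# Run a command at build time (this layer gets cached)
RUN apt-get update && apt-get install -y --no-install-recommends curl

# All following instructions run from this folder
WORKDIR /app

# Copy files from your project (the build context) into the image
COPY . /app

# Document the port the app listens on
EXPOSE 8000

# Default command when a container starts (hypothetical script)
CMD ["./start.sh"]
```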

Btw, always name the file Dockerfile; your editor will highlight the syntax, and what? Do you think you can come up with a better name than Docker's best practices?

Let's see this with an example:

A MySQL/MariaDB container

Create a Dockerfile:

# the base image
FROM mariadb:latest

# Example values only; in real life you'd pass these at runtime instead of baking them into the image.
ENV MYSQL_DATABASE=docker
ENV MYSQL_USER=docker
ENV MYSQL_PASSWORD=docker
ENV MYSQL_ROOT_PASSWORD=docker

CMD ["mysqld"]

# not really necessary; only needed if other containers (like a PHP one) or the host will connect
EXPOSE 3306

Simple and very straightforward. Here is how to use it:

# Create our image
docker build . --tag myqtmysqlimg

# Start our container
docker run -it -d -p 3306:3306 --name yolomysql myqtmysqlimg

# Connect to our db
docker exec -it yolomysql mysql -u docker -p

# Note: you can also connect from any database client on your host because we published 3306;
# e.g. with the mysql CLI (if installed): mysql -h 127.0.0.1 -P 3306 -u docker -p

Docker-compose

Docker-compose is basically a way to provision multiple containers (called services), volumes (persistent data), networks (connectivity between containers) and much more.

Lots of concepts, I know.... but it's just a silly YAML file named docker-compose.yml:

# File: docker-compose.yml

version: "3"
    
services:
  mysql:
    image: mariadb:latest
    volumes:
      - database:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: mywordpressrooot 
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    
  wordpress:
    depends_on:
      - mysql
    image: wordpress:latest
    ports:
      - "8080:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  database: {}

And now:

# lets build and run our blog
$ docker-compose up -d

# Done, visit http://localhost:8080

# Grats, you now have a blog running on this bloated CMS.

# Stop everythin'
$ docker-compose down

Some bits about it:

  • We use two containers: mariadb and wordpress itself.
  • A named volume persists the database data by mounting it at /var/lib/mysql.
  • WORDPRESS_DB_HOST points to mysql:3306; 'mysql' is the name of the other service, which compose makes reachable over its internal network.
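A few everyday docker-compose commands worth knowing (a sketch; run them from the folder holding your docker-compose.yml, with the stack already up):

```shell
# Handy day-to-day docker-compose commands.
if command -v docker-compose >/dev/null && [ -f docker-compose.yml ]; then
  docker-compose ps                # list services and their state
  docker-compose logs mysql       # dump one service's logs (add -f to follow live)
  # docker-compose exec mysql sh  # open a shell inside a running service (interactive)
else
  echo "docker-compose (or the yml file) not available, skipping"
fi
```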

Now this is cool, but where can I put all my Dockerfile stuff?

In this example I used a fully pre-built image (note the services/wordpress/image key in our yml).

Instead of image we can use build. So let's re-create our first example by creating two files:

# docker-compose.yml

version: "3"
    
services:
  web:
    build: 
      context: .
    container_name: web
    volumes:
      - ./public:/var/www/html
    ports:
      - "8080:80"

and

# Dockerfile

FROM php:7.4-apache

EXPOSE 80

CMD ["apache2-foreground"]

Also make sure you create a 'public' folder with an index.html inside it. And that's it, that's how you do it.
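For reference, setting up that folder can be as simple as this (the folder name matches the volume line in the yml):

```shell
# Create the folder that docker-compose.yml mounts, plus a minimal page
mkdir -p public
cat > public/index.html <<'EOF'
<!doctype html>
<h1>Hello from Docker</h1>
EOF
```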

After that, go and:

$ docker-compose up -d

Visit http://localhost:8080 on your browser, that's it.

A Docker-Compose PHP+Nginx+MariaDB app skeleton

On my Github profile I pushed a full docker-compose skeleton to get you started, with some ideas on how to organize a project.

That's all. Go learn something new in less than a day, and stop getting pushed around by normie soydevs.
