Friday, 5 August 2016

A Docker Experiment

Containers have been cropping up more and more for me over the last year or so, whether as part of the architecture at work or in various pet projects friends have been working on. I figured it was time to experiment myself and get to grips with what people have been talking about.
It seemed like a good idea to find some introductory material that would give me an overview of Docker's uses without delving too far into the details, so I found a PluralSight course, specifically "Docker and Containers: The Big Picture". This gave a nice overview of what Docker is and, more importantly, what it's trying to achieve and how to use it.
With a bit more of an understanding, I wanted to use it for something, preferably something familiar. I decided to try setting up ElasticSearch and Kibana containers, where Kibana would visualize the ElasticSearch data.
I used bits of this article as a guide along the way; refer to it if you'd prefer a more detailed reference: https://docs.docker.com/docker-for-windows/
If you're on Windows 10 you might have a slightly different experience, as you're able to use Docker for Windows. The machine I had available is on Windows 8, so I used Docker Toolbox; this post assumes you're doing the same.
At installation, if you don't have Hyper-V you'll want to tick the box for installing VirtualBox, as you'll need one or the other (VMware and others work too); I'm using Hyper-V. If you've not used virtualisation on your system before, you may need to turn on something like VT-x to continue. Hyper-V will also require a virtual switch. If you're a Hyper-V user on a laptop you might want to read on a bit first, as I encountered a problem here.
With that installed, I opted to use PowerShell to continue, mostly because it looks better in screenshots. It needs to be elevated to Administrator to use Hyper-V.
With setup out of the way you want to choose a name for your docker machine and run something like:
    docker-machine create --driver hyperv elkbox

If you're not using Hyper-V you'll need to change the driver to whichever hypervisor you've decided to use, and replace elkbox with your chosen machine name.
This will create a docker virtual machine that you can see in the Hyper-V Manager.
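You can confirm the machine came up from the same shell before moving on. (I'm using the name elkbox throughout, matching the env command later; substitute your own machine name.)

```shell
# List every machine Docker Toolbox knows about, with state and URL
docker-machine ls

# Or check just this machine's state (should report "Running")
docker-machine status elkbox
```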
You may run into a problem where it seems to hang at 'Waiting for host to start'. I found this when I attempted it on my laptop initially, and came across this post: https://www.packet6.com/allowing-windows-8-1-hyper-v-vm-to-work-with-wifi/
Simply put, a standard virtual switch won't work: you need to create an internal virtual switch and then, in the properties of your network connection, share the connection with that WiFi virtual switch. One addition to the article: if you have both a regular virtual switch and a WiFi virtual switch, docker-machine create might default to the first one. This could be down to the order they were created in, but I didn't test it; I just removed the external virtual switch, as it probably wouldn't have worked anyway.
With that sorted, you should see something like this:

If you look in Hyper-V now, you'll see a virtual machine with your docker machine's name, with some boot2docker text in its console window.

The docker machine has now been created; next we need to set its environment variables so that docker commands in this shell create containers on it.


As the terminal says we need to run:
    "C:\Program Files\Docker Toolbox\docker-machine.exe" env elkbox | Invoke-Expression
Somewhere around or before this point you'll want to decide which containers to run. There's an excellent catalogue of existing images for most mainstream software at https://hub.docker.com/explore/. There are alternative sources, but luckily for us, both ElasticSearch and Kibana have official images here. Before we can use the images we need to 'pull' them. This is on a per-machine basis, so each of your docker machines will need to pull them separately, but once a machine has pulled an image it remains usable there.
First we want to pull ElasticSearch:
    docker pull elasticsearch
Following that we want to run the container with some flags specified; the command we will use is:
    docker run -d -p 9200:9200 -p 9300:9300 elasticsearch
The -d flag runs the container in detached mode, which is particularly useful with ElasticSearch as it would otherwise block the terminal.
-p maps a container port to a host port. Here we want the mappings to be identical, but remapping is useful when running duplicate containers from the same image.
Once this is done we can check its status with "docker ps"; you can see the result of the previous commands below.

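To sketch why the host side of the port mapping matters: a second ElasticSearch container from the same image can't reuse 9200 and 9300 on the host, so it gets remapped. The names es1 and es2 below are my own invention, not from the run above:

```shell
# First container: host ports match the container ports
docker run -d --name es1 -p 9200:9200 -p 9300:9300 elasticsearch

# Second container from the same image: remap the host side
docker run -d --name es2 -p 9201:9200 -p 9301:9300 elasticsearch

# docker ps lists both, showing each one's port mappings
docker ps
```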
We now have an ElasticSearch container running, but we should check on it further. To hit it we need to know the IP address of the docker machine. We can do this by using:

    docker-machine ip elkbox


When we ran the container we mapped ports 9200 and 9300, so with the IP address and the port, we can hit ElasticSearch from the browser and should get some JSON back to let us know it's there, like so:


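The same check works from the shell if you'd rather not open a browser; curl here is just a stand-in for whatever HTTP client you have handy:

```shell
# Resolve the docker machine's address, then hit ElasticSearch on 9200
ES_HOST=$(docker-machine ip elkbox)
curl "http://$ES_HOST:9200/"
# A healthy node replies with a small JSON document containing the
# cluster name and version details
```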
So, we know ElasticSearch is ready to go; now we want to set up a Kibana container and point it at ElasticSearch. Like ElasticSearch, we need to pull the image first, which is simply:

    docker pull kibana

After some fiddling I found the best way to run the Kibana container was with this command:

    docker run --name kibana -e ELASTICSEARCH_URL=http://192.168.137.132:9200 -p 5601:5601 -d kibana

You can see once again we're mapping some ports, and we're using one of the container's parameters, ELASTICSEARCH_URL, to tell Kibana where to find ElasticSearch (the IP here is my docker machine's address from the earlier docker-machine ip step; yours will differ). Once again, we can test it easily, as we know it's at the same IP but on port 5601, as we specified in the run command. In the browser you should see something like:



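If the browser shows nothing, a couple of standard docker commands (nothing specific to this setup) help narrow down whether the container itself is happy:

```shell
# Is the kibana container running, and on which ports?
docker ps --filter "name=kibana"

# What has it logged since startup? Connection errors to
# ElasticSearch will show up here
docker logs kibana
```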
Now it's just a case of pushing your data to ElasticSearch. I'm finding this approach preferable to my previous way of running these two services in command prompts: I don't have to keep a bunch of windows open, and I can stop the docker machine without having to remember exactly which services I'd started, as they all belong to the same docker machine.
It was also quicker to get going, even counting the Docker installation, once you factor in installing Java and other prerequisites the traditional way. I'll be doing some more experimentation around this soon.
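That stop-everything-at-once workflow looks roughly like this. One caveat I'd check: a restarted machine may come back on a different IP, which would invalidate the hard-coded ELASTICSEARCH_URL we gave Kibana.

```shell
# Stop the machine and, with it, every container inside
docker-machine stop elkbox

# Later: bring it back, re-point the shell, restart the containers
docker-machine start elkbox
eval "$(docker-machine env --shell bash elkbox)"  # or the Invoke-Expression form in PowerShell
docker start kibana   # the container we named with --name
docker ps -a          # the elasticsearch container got an auto-generated
                      # name; find it here, then docker start <name>
```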