Tag Archives: docker

Become DevOps overnight: Continuous deployment for your scalable cloud app.

Some of the things we hate spending time on during development are setting up environments, building, and deploying. The good news is that nowadays there are plenty of tools to solve this. In this post I would like to share a very quick way of becoming a DevOps engineer overnight and automating all the boring parts of getting your product running seamlessly as you develop.

Snapshot of my Tutum Services after setting up continuous deployment

#1. IaaS, PaaS, SaaS and tech stack decisions

At the start of our project we had to decide what our tech stack would be. Our philosophy was to use IaaS for any stateless processes or jobs, like API servers or event processors, and SaaS solutions for persistence alone. We picked Node.js for APIs and Java / Python for daemon processes. Being part of Microsoft BizSpark, we run all these processes on Azure Linux instances. For temporary persistence we found AWS pretty good on both performance and price, and picked Kinesis + DynamoDB; S3 was chosen for long-term storage. The strategy was to be able to easily swap cloud service providers at any point in the future, with almost no tight coupling to any vendor.

#2. Local development

Local development has to be as fast as possible – personally, I find that using Docker in the early stages of development slows me down and clutters my local machine with chunky images. So on my local machine I prefer to run my apps the standard way, without any containerization.

#3. Dockerization

Docker is simply awesome when it comes to deploying programs to cloud instances. I can also easily test horizontal scaling and load balancing just by running multiple Docker containers on a single node. All that’s needed is a simple Dockerfile in every project directory. A Node.js example is shown below.

FROM node:0.12

# Bundle app source
ADD . /src
# Install app dependencies
RUN cd /src; npm install

EXPOSE 3000
CMD ["node", "/src/app.js"]
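
Since ADD . /src copies the entire project directory into the image, a .dockerignore file next to the Dockerfile keeps bulky or irrelevant files out of the build context. A minimal sketch – the entries are typical for a Node.js project, adjust to taste:

```
# .dockerignore -- excluded from the build context sent to the Docker daemon
node_modules
npm-debug.log
.git
```

Excluding node_modules also means dependencies are always rebuilt inside the image by the RUN npm install step, rather than copied from the host.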

#4. Continuous deployment with Tutum

Tutum is still in beta, but it’s awesome and free (at least for now)! The first step in setting up Tutum is to go to Account Info and add your Cloud Providers and Source Providers – in our case Microsoft Azure and GitHub. Tutum has a very clear definition of the components required to set up continuous deployment:

a. Repository – Here we create a new (private) repository in Tutum and link it to our GitHub repository so it syncs on every update. The source code gets pulled from GitHub and Docker images are built inside Tutum’s repository on every GitHub update.

b. Node – We can create Azure instances right from Tutum. You have to grant Tutum access to Azure first. Each instance is a Node in Tutum.

c. Services – A service is a process or a program that you run. Services can consist of one or more Docker containers, depending on whether we scale or not. Services can be deployed on one or more nodes to scale horizontally.

While creating Nodes and Services, Tutum allows you to specify tags like “dev”, “prod”, “front-end”, “back-end”. The tags determine which nodes a service gets deployed to. Thus we can have separate nodes for “front-end dev”, another for “front-end prod”, etc.

Tutum is not super fast yet – I believe mainly due to the time taken to build Docker images – but it’s still decent enough. For continuous deployment, we have to enable the “Autodeploy” option while creating the service. Another good feature I found in Tutum is the jumpstart services, like the HA load balancer – it really makes setting up a high-availability API cluster a breeze.

#5. Slack Integration

Like so many other startups, we are quite excited about Slack. I have seen Slack integration in other continuous integration products like CircleCI and was pleasantly surprised to see that even the Tutum beta had it. I created a new channel in our Slack and enabled an Incoming Webhook from the integration settings – this gives a URL to paste into Tutum > Account Info > Notifications > Slack. And that’s it: we have continuous deployment ready with all the bells and whistles.
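
Under the hood, a Slack Incoming Webhook is just a JSON POST, which is handy for testing the channel before wiring it into Tutum. A hedged sketch – the webhook URL below is a placeholder; Slack generates the real one when you enable the integration:

```shell
# Placeholder URL -- replace with the one from your Slack integration settings
WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXX"
PAYLOAD='{"text": "Deployment finished"}'

# Post the notification (fails silently if offline or curl is missing)
curl -s -X POST -H 'Content-Type: application/json' \
     --data "$PAYLOAD" "$WEBHOOK_URL" || true
```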

Like I mentioned at the start, there are multiple options to automate build, test and deployment. This post suggests a very economical yet scalable solution using Tutum – and I was literally able to learn it and get everything running overnight!

Nginx 404 page not found error due to failed (13: Permission denied)

root@a8cbfa1e38d9:/# cat /var/log/nginx/error.log

2014/07/30 11:23:23 [crit] 370#0: *1 stat() "/usr/share/nginx/html/" failed (13: Permission denied), client: ::1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"

I just did a vanilla installation of Nginx on Ubuntu 14.04 (running in Docker) but still got a 404 error. I saw that the worker processes of nginx were running as user www-data.

root@a8cbfa1e38d9:/usr/share/nginx/html# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 11:21 ? 00:00:00 /bin/bash
root 390 1 0 11:29 ? 00:00:00 nginx: master process nginx
www-data 391 390 0 11:29 ? 00:00:00 nginx: worker process
www-data 392 390 0 11:29 ? 00:00:00 nginx: worker process
www-data 393 390 0 11:29 ? 00:00:00 nginx: worker process
www-data 394 390 0 11:29 ? 00:00:00 nginx: worker process
root 395 1 0 11:29 ? 00:00:00 ps -ef

Doing chown -R www-data:www-data on /usr/share/nginx/ did not help.

So I tracked down the config file where the worker process user is specified and changed www-data to root to get it working.

Change the user directive on line 1 of /etc/nginx/nginx.conf.
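
A sketch of why chown -R on the docroot alone may not fix the error: nginx workers need execute (search) permission on every directory in the path, not just read permission on the final one. Simulated here with a temporary tree instead of the real /usr/share/nginx:

```shell
# Build a nginx-like docroot under a temp directory
root=$(mktemp -d)
mkdir -p "$root/share/nginx/html"
echo ok > "$root/share/nginx/html/index.html"

chmod 700 "$root/share"                          # parent locked to its owner
chmod 755 "$root/share/nginx" "$root/share/nginx/html"

# A non-owner such as www-data would be denied at $root/share, producing
# "(13: Permission denied)" even though html/ itself is world-accessible.
stat -c '%a %n' "$root/share" "$root/share/nginx/html"
```

Granting www-data execute permission on each parent directory (chmod o+x) would be a safer alternative to running the workers as root.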

Docker App Tutorial – Creating a docker container for ELK (Elasticsearch + Logstash + Kibana)

This is a very quick tutorial to get beginners up to speed with Docker.

Docker installation on OS X:
1. Download boot2docker package from https://github.com/boot2docker/osx-installer/releases
2. Run the package after download to install to applications.
3. Launch boot2docker from Apps, it will open in a terminal with a bash interface.
4. Run "docker version" to test the Docker installation.

Getting a base Docker image
1. To get a base Ubuntu image, run: docker pull ubuntu:14.04
2. Test the Ubuntu image with: docker run ubuntu:14.04 echo "hello world". You could also skip step 1 – if ubuntu:14.04 is not present, it will be pulled automatically on the first run.
3. Get a terminal into the Ubuntu container: docker run -t -i ubuntu:14.04 /bin/bash

Creating custom docker image with ELK
1. Create a new docker file.
bash-3.2$ pwd
/Users/admin/work/docker/elk
bash-3.2$ touch Dockerfile

2. Add the contents from the link below to the Dockerfile. The comments should make the file self-explanatory.
https://github.com/cyberabis/docker-elkauto/blob/master/Dockerfile

3. Create an image with the docker file
bash-3.2$ docker build -t="username/elk" .

4. Start the ELK services.
bash-3.2$ docker run -d -p 80:80 -p 3333:3333 -p 9200:9200 username/elk /elk_start.sh

5. If running on a Mac, get the IP of the VM. On Linux this will be the same as the host IP.
bash-3.2$ boot2docker ip
The VM’s Host only interface IP address is: 192.168.59.103

6. Launch Kibana from http://192.168.59.103:80

7. Send some test messages to logstash.
echo 'A test message1' | nc 192.168.59.103 3333
echo 'A test message2' | nc 192.168.59.103 3333
echo 'A test message3' | nc 192.168.59.103 3333
echo 'A test message4' | nc 192.168.59.103 3333
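
For messages sent to port 3333 to reach Elasticsearch, the Logstash configuration inside the container needs a TCP input on that port. A minimal sketch of such a config – the actual file in the image may differ, and the elasticsearch output assumes the Logstash 1.4-era syntax:

```
input {
  tcp {
    port => 3333
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}
```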

8. Go to the sample dashboard in Kibana and you should see these messages.

Trying out this container from my docker hub repository
Use “sudo” before commands if required.
Make sure docker is installed.
Run: docker pull cyberabis/docker-elkauto
Run: docker run -d -p 80:80 -p 3333:3333 -p 9200:9200 cyberabis/docker-elkauto /elk_start.sh
Launch Kibana at port 80 of your Docker host!

Removing old Docker containers and images with no name

If you see a lot of images without a name when you run “docker images”, it’s because of old containers, which you can see with “docker ps -a”. These can be cleaned up with:

docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs docker rm
docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi

Change the grep pattern as you wish.
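
The pipelines above select the first column (container ID) or third column (image ID) from the listing output. The field extraction can be sketched on captured docker ps -a-style output, so it can be checked without a Docker daemon – the IDs below are made up:

```shell
# Simulated `docker ps -a` output (header plus two containers)
sample='CONTAINER ID   IMAGE          COMMAND       CREATED       STATUS
a8cbfa1e38d9   ubuntu:14.04   "/bin/bash"   3 weeks ago   Exited (0)
b7d2e91c0f11   nginx          "nginx"       2 hours ago   Up 2 hours'

# Same pipeline as the real cleanup, minus the final `xargs docker rm`
old_ids=$(printf '%s\n' "$sample" | grep 'weeks ago' | awk '{print $1}')
echo "$old_ids"   # prints a8cbfa1e38d9 -- only the weeks-old container
```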