Running a Python/Django app, containerised in a Docker image on a private repo, on top of a Kubernetes cluster

Recently I was looking for a more flexible way to ship our code to production, and Docker and Kubernetes are the sweethearts of DevOps engineers. Docker lets us containerise our app in an image and then run that image in production. Most of us have some experience with virtual machines, and it is easy to confuse the two, which will stop you from appreciating what Docker does. A virtual machine runs separately on top of your computer's hypervisor; Docker, on the other hand, creates another layer of abstraction on top of your OS. It lets images share the layers they have in common with your OS, and adds on top only the layers that differ. So we can run multiple Linux images on one machine without doubling the cost: it optimises, it is intelligent, and it saves us money.

But when we run a cluster of n nodes, the complexity grows exponentially. What if something breaks in a Docker container somewhere in a cluster of n nodes? How do we decide which container should run on which node? How do we move a container to another node because its node is going down for maintenance? We need a manager to take care of all this, don't we? That is where Kubernetes comes in and takes responsibility.

First of all, let's set up Kubernetes. Add the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

We set SELinux to permissive mode, as the documentation says to:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
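
You can confirm the change took effect; getenforce should now report Permissive:

getenforce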

Now we install Docker and the Kubernetes components: kubelet, kubeadm, and kubectl.

yum install -y docker kubelet kubeadm kubectl --disableexcludes=kubernetes

We need to make sure these services start automatically when the machine boots:

systemctl enable kubelet && systemctl start kubelet
systemctl enable docker && systemctl start docker
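
Note that the kubelet will keep crashing and restarting until kubeadm init runs; that is normal at this stage. You can check its state with:

systemctl status kubelet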

Now we have Kubernetes and Docker running on both the master and the slave node. Before the initial launch on the master, we need to change one or two things in the kubelet configuration:

vi /var/lib/kubelet/kubeadm-flags.env

KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd
#--network-plugin=cni
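
The cgroup driver set here should match the one Docker itself reports. A quick way to check, followed by restarting the kubelet to pick up the change:

docker info 2>/dev/null | grep -i 'cgroup driver'
systemctl daemon-reload
systemctl restart kubelet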

Now we initialize the cluster on the master:

kubeadm init
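
One note: since we apply flannel as the pod network later, and flannel's stock manifest expects the 10.244.0.0/16 pod CIDR, you may want to initialize with that flag instead:

kubeadm init --pod-network-cidr=10.244.0.0/16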

To generate the join command (you can run this again any time you need a fresh token):

sudo kubeadm token create --print-join-command 

It prints a join command with a token, which you need to copy and run on your slave node. Before that, though, put the admin kubeconfig file in the proper directory with the proper permissions so kubectl works on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
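
To confirm kubectl can now talk to the cluster:

kubectl cluster-info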

On the slave server, run the join command (your IP, token, and hash will differ):

kubeadm join 10.0.15.10:6443 --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e

If you see a warning similar to this:

I0706 07:18:56.609843    1084 kernel_validator.go:96] Validating kernel config
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support

Pulling images required for setting up a Kubernetes cluster

then running the following should help:

for i in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $i; done
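
To make these modules load on every boot as well, you can drop them into a modules-load.d file (a sketch for a systemd-based host):

cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF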

Now if you run the following on the master, you will see the nodes attached to your Kubernetes cluster:

sudo kubectl get nodes
sudo kubectl describe nodes
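
The output of the first command should look roughly like this (names, ages, and versions will differ):

NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    25m       v1.11.1
slave     Ready     <none>    5m        v1.11.1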

If the nodes show up as NotReady or you have other issues with them, you probably still need a pod network add-on; flannel works:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
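
You can watch the flannel and DNS pods come up with:

kubectl get pods --all-namespaces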

For this demonstration I will be deploying an existing Python/Django application on this cluster, so first I need to dockerize it.

Here is my Dockerfile:

# set the base image
FROM python:3.7
# file author / maintainer
LABEL maintainer="Sadaf"

# set the directory where CMD will execute
WORKDIR /usr/src/app

# install requirements first, so this layer is cached across code changes
COPY app_name/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# add the project files to the /usr/src/app folder
COPY app_name ./app_name

# expose the app port
EXPOSE 8000

# default command to execute
WORKDIR /usr/src/app/app_name
RUN chmod +x app_name/gunicorn.sh
CMD ./app_name/gunicorn.sh
#ENTRYPOINT ["/bin/bash", "app_name/gunicorn.sh"]
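
The CMD above relies on a gunicorn.sh script inside the project, which is not shown here. As a minimal sketch of what it might contain (the app_name.wsgi module path and the migrate step are assumptions based on a typical Django layout):

#!/bin/bash
# apply pending migrations, then hand the process over to gunicorn
python manage.py migrate --noinput
exec gunicorn app_name.wsgi:application --bind 0.0.0.0:8000 --workers 3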

Now that we have a Dockerfile ready, let's build the image:

sudo docker build -t app_name_api_server .

Time to run that image and expose it on port 8000:

sudo docker run -p 8000:8000 -i -t app_name_api_server
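
From another terminal, a quick smoke test (assuming the app serves HTTP at the root URL):

curl -I http://localhost:8000/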

If you like what you see on localhost:8000, congratulations! Your app is working on Docker. Now let's push that image to Docker Hub; I have created a private repo there. To be able to push your image to Docker Hub, you need to tag it first:

sudo docker tag app_name_api_server sadaf2605/app_name_api_server
sudo docker push sadaf2605/app_name_api_server
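
If the push is rejected with an authentication error, log in first:

sudo docker login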

Now that your image is on Docker Hub, we go back to our Kubernetes master. Since the image we want to pull lives in a private repo, we obviously need credentials to pull it. Fill in your Docker Hub details:

DOCKER_REGISTRY_SERVER=docker.io
DOCKER_USER='your Docker Hub username, same as for docker login'
DOCKER_EMAIL='your Docker Hub email, same as for docker login'
DOCKER_PASSWORD='your Docker Hub password, same as for docker login'

kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
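
You can verify the secret was created (the .dockerconfigjson field will be base64-encoded):

kubectl get secret myregistrykey --output=yaml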

Now let's define a YAML file, app_name.yaml, in which we describe our Kubernetes deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name-api-server
spec:
  selector:
    matchLabels:
      run: app-name-api-server
  replicas: 1
  template:
    metadata:
      labels:
        run: app-name-api-server
    spec:
      containers:
      - name: app-name-api-server
        image: index.docker.io/sadaf2605/app_name_api_server:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
          #hostPort: 8000
        env:
        - name: DB_USERNAME
          value: "user"
        - name: DB_PASSWORD
          value: "password"
        - name: DB_NAME
          value: "dbname"
        - name: DB_HOST
          value: "1.2.2.3"
      hostNetwork: true
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 8.8.8.8
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8"]

Now it's time to create a deployment from that configuration:

sudo kubectl apply -f app_name.yaml

Let's check whether we have a deployment:

sudo kubectl get deployments

Let's check whether any instance of our Docker container is running:

sudo kubectl get pods
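
If a pod is stuck in ImagePullBackOff, the registry secret is probably not working; describe the pod to see the exact pull error (the pod name below is a placeholder, take the real one from kubectl get pods):

sudo kubectl describe pod app-name-api-server-xxxxx
sudo kubectl logs app-name-api-server-xxxxx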

Now we create a service that lets us access these pods from outside the cluster:

sudo kubectl expose deployment app-name-api-server --type=LoadBalancer --name=app-name-api-server
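
To confirm the service exists and see which external port it was assigned:

sudo kubectl get services app-name-api-server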
And if you want to switch the cluster DNS from kube-dns to CoreDNS at some point, you can preview the upgrade:

sudo kubeadm upgrade plan --feature-gates CoreDNS=true