How to use an Angular service within another service

Actually it is easy!

import { Injectable, Inject } from '@angular/core';

import { LoginService } from '../common/login.service';

@Injectable()
export class PermissionService {

  constructor(@Inject(LoginService) private login_service: LoginService) { }

  // e.g. a permission check built on top of the injected service
  isStaff() {
    const u = this.login_service.getUser();
    return u.is_staff;
  }
}

Configuring Django for centralised log monitoring with the ELK stack, with custom logging fields (e.g. client IP, username, request & response data)

When you are lucky enough to have enough users that you decide to roll out another cloud instance for your Django app, logging becomes a little tough. Your architecture now needs a load balancer that proxies each request to one instance or another. Previously all logs lived on one machine, so monitoring was easy: when someone reported an error, we went to that machine and looked for it. Now, with multiple instances, we have to search every one of them; security risks aside, that is a lot of work. So it is wise to set up a centralised log aggregation service.

For log management and monitoring we are using Elasticsearch, Logstash and Kibana, popularly known as the ELK stack. In this blog we will log pretty much every request and its corresponding response, so that the debugging process becomes handy for us. To serve this purpose we will leverage Django middleware and python-logstash.

First of all, let's configure the LOGGING setting in our settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
        'logstash': {
            '()': 'proj_name.formatter.SadafNoorLogstashFormatter',
        },
    },
    'handlers': {
        'default': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/proj_name/django.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
        'logstash': {
            'level': 'DEBUG',
            'class': 'logstash.TCPLogstashHandler',
            'host': 'ec2*****',
            'port': 5959,  # Default value: 5959
            'version': 1,  # Version of logstash event schema. Default value: 0 (for backward compatibility of the library)
            'message_type': 'logstash',  # 'type' field in logstash message. Default value: 'logstash'.
            'fqdn': False,  # Fully qualified domain name. Default value: false.
            # 'tags': ['tag1', 'tag2'],  # list of tags. Default: None.
            'formatter': 'logstash',
        },
        'request_handler': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/proj_name/django.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
    },
    'loggers': {
        'sadaf_logger': {
            'handlers': ['default', 'logstash'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

As you can see, we are using a custom logging format. We could leave this out, and the default LogstashFormatterVersion1 format would work just fine. But I chose to define my own format because my requirement is different: I am running behind a proxy server, and I want to log who made the request and from which IP. So roughly my log formatter looks like the following:

from logstash.formatter import LogstashFormatterVersion1


class SadafNoorLogstashFormatter(LogstashFormatterVersion1):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def format(self, record, sent_request=None):
        caddr = "unknown"
        if 'HTTP_X_FORWARDED_FOR' in record.request.META:
            caddr = record.request.META['HTTP_X_FORWARDED_FOR']  # .split(",")[0].strip()

        message = {
            '@timestamp': self.format_timestamp(record.created),
            '@version': '1',
            'message': record.getMessage(),
            'client': caddr,
            'username': str(record.request.user),

            'path': record.pathname,
            'tags': self.tags,
            'type': self.message_type,

            # Extra Fields
            'level': record.levelname,
        }

        # Add extra fields
        message.update(self.get_extra_fields(record))

        # If exception, add debug info
        if record.exc_info:
            message.update(self.get_debug_fields(record))

        return self.serialize(message)

As our requirement is to log every request, our middleware may look like the following:

import logging
from django.utils.deprecation import MiddlewareMixin

request_logger = logging.getLogger('sadaf_logger')


class LoggingMiddleware(MiddlewareMixin):
    """Provides full logging of requests and responses."""
    _initial_http_body = None

    def __init__(self, get_response):
        self.get_response = get_response

    def process_request(self, request):
        # Required because request.body is no longer accessible inside
        # process_response, so we keep a copy of it here.
        self._initial_http_body = request.body

    def process_response(self, request, response):
        """Adds request and response logging."""
        if request.path.startswith('/') and \
                (request.method == "POST" and
                         request.META.get('CONTENT_TYPE') == 'application/json'
                 or request.method == "GET"):
            status_code = getattr(response, 'status_code', None)

            if status_code:
                if status_code >= 400:
                    log_lvl = logging.ERROR
                else:
                    log_lvl = logging.INFO

                request_logger.log(
                    log_lvl,
                    "{}: {}".format(request.method, request.build_absolute_uri()),
                    extra={
                        'request': request,
                        'request_method': request.method,
                        'request_url': request.build_absolute_uri(),
                        'request_body': self._initial_http_body.decode("utf-8"),
                        'status': response.status_code,
                    })
        return response
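The `extra` dict in the logging call above is what ties the middleware to the custom formatter: every key becomes an attribute on the `LogRecord`. A minimal stdlib-only sketch of that mechanism (the names here are illustrative, not from the project):

```python
import io
import logging

# A formatter that reads attributes injected via extra=, the same way
# SadafNoorLogstashFormatter reads record.request above.
class ExtraFormatter(logging.Formatter):
    def format(self, record):
        return "%s [%s] url=%s status=%s" % (
            record.getMessage(), record.levelname,
            record.request_url, record.status)

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(ExtraFormatter())
logger = logging.getLogger('sadaf_logger_demo')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.info("GET: http://example.com/products/",
            extra={'request_url': 'http://example.com/products/', 'status': 200})

print(stream.getvalue().strip())
# -> GET: http://example.com/products/ [INFO] url=http://example.com/products/ status=200
```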

So you are pretty much done. Log in to your Kibana dashboard, create an index pattern for the index you are interested in, and see your logs:

Sample AWS CodeDeploy configuration for django

AWS has its own deployment automation tool known as CodeDeploy; with a single command you can deploy to multiple servers whenever you change something in your code base.

Installing the CodeDeploy agent on an instance

If the CodeDeploy agent is not installed on your instance, you will need to install it:

sudo yum install -y ruby wget
cd /opt
# download the agent installer for your region (ap-south-1 here)
wget https://aws-codedeploy-ap-south-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto

Create CodeDeploy Application

You have to create a CodeDeploy application with the deployment type set to in-place deployment, and the deployment configuration set to CodeDeployDefault.OneAtATime.
Give it a tag under the EC2 configuration, say the tag name is "Code deploy instance". Now you have to add the same tag to all of your CodeDeploy instances.

Set IAM Permissions

Now that we are done with the installation, we need to set up IAM permissions.
First create an IAM group called CodeDeployGroup. This group needs the AmazonS3FullAccess and AWSCodeDeployFullAccess permissions. Create a user and add it to this group. This user only needs programmatic access. Save the key and key ID somewhere safe.

Create a role whose trusted entity is CodeDeploy and whose policies are AWSCodeDeployRole and AmazonS3FullAccess, respectively.

Edit the trust relationship to the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "codedeploy.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create a new S3 bucket that the previously created IAM user can access.

CodeDeploy configuration

My codebase structure is something like the following:

- src
  - <django project>
- scripts

appspec.yml is the file that contains our hooks and configuration for CodeDeploy:

version: 0.0
os: linux
files:
  - source: src
    destination: /home/centos/proj_name
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server
      timeout: 300
      runas: root

For Django, scripts/install_dependencies may look like the following:

#!/bin/bash
sudo yum install -y gcc openssl-devel bzip2-devel wget
sudo yum install -y make git
cd /opt
command -v python3.6 || {
    wget https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tgz
    tar xzf Python-3.6.3.tgz
    cd Python-3.6.3
    sudo ./configure --enable-optimizations
    sudo make altinstall
}
sudo yum install -y mysql-devel

For scripts/start_server I have the following:

cd /home/centos/evaly
pip3.6 install -r requirements.txt
nohup uwsgi --http :80 --module evaly.wsgi > /dev/null 2>&1 &

For scripts/stop_server I have the following:

pkill uwsgi

I have borrowed a Python script from the Bitbucket team, which looks like the following:

# Copyright 2016, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file
# except in compliance with the License. A copy of the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under the License.
"""
A BitBucket Builds template for deploying an application revision to AWS CodeDeploy
"""
from __future__ import print_function
import os
import sys
from time import strftime, sleep
import boto3
from botocore.exceptions import ClientError

VERSION_LABEL = strftime("%Y%m%d%H%M%S")
BUCKET_KEY = os.getenv('APPLICATION_NAME', '') + '-' + VERSION_LABEL + '.zip'


def upload_to_s3(artifact):
    """Uploads an artifact to Amazon S3"""
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    try:
        client.put_object(
            Body=open(artifact, 'rb'),
            Bucket=os.getenv('S3_BUCKET'),
            Key=BUCKET_KEY
        )
    except ClientError as err:
        print("Failed to upload artifact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artifact in this directory.\n" + str(err))
        return False
    return True


def deploy_new_revision():
    """Deploy a new application revision to AWS CodeDeploy Deployment Group"""
    try:
        client = boto3.client('codedeploy')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False

    try:
        response = client.create_deployment(
            applicationName=os.getenv('APPLICATION_NAME'),
            deploymentGroupName=os.getenv('DEPLOYMENT_GROUP_NAME'),
            deploymentConfigName=os.getenv('DEPLOYMENT_CONFIG'),
            revision={
                'revisionType': 'S3',
                's3Location': {
                    'bucket': os.getenv('S3_BUCKET'),
                    'key': BUCKET_KEY,
                    'bundleType': 'zip'
                }
            },
            description='New deployment from BitBucket',
            ignoreApplicationStopFailures=True
        )
    except ClientError as err:
        print("Failed to deploy application revision.\n" + str(err))
        return False

    # Wait for deployment to complete
    while 1:
        try:
            deploymentResponse = client.get_deployment(
                deploymentId=str(response['deploymentId'])
            )
            deploymentStatus = deploymentResponse['deploymentInfo']['status']
            if deploymentStatus == 'Succeeded':
                print("Deployment Succeeded")
                return True
            elif (deploymentStatus == 'Failed') or (deploymentStatus == 'Stopped'):
                print("Deployment Failed")
                return False
            elif (deploymentStatus == 'InProgress') or (deploymentStatus == 'Queued') or (deploymentStatus == 'Created'):
                sleep(10)
        except ClientError as err:
            print("Failed to deploy application revision.\n" + str(err))
            return False
    return True


def main():
    if not upload_to_s3('/Users/sadafnoor/Projects/evaly/'):
        sys.exit(1)
    if not deploy_new_revision():
        sys.exit(1)


if __name__ == "__main__":
    main()
I have written a script that zips up my source code so that the script above can upload it to S3; eventually all my EC2 instances will download that zip from S3.

export APPLICATION_NAME="CodeDeployApplicationName"
export AWS_DEFAULT_REGION="ap-south-1"
export DEPLOYMENT_CONFIG="CodeDeployDefault.OneAtATime"
export DEPLOYMENT_GROUP_NAME="CodeDeployDeploymentGroup"
export S3_BUCKET="S3BucketName"
zip -r ../ src/* appspec.yml scripts/*
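If you prefer to build the bundle from Python rather than the shell, the stdlib can do it; a sketch (the paths here are made up for the demo):

```python
import os
import shutil
import tempfile

# Zip everything under project_root into <out_basename>.zip, the same bundle
# shape the zip command above produces for CodeDeploy.
def make_bundle(project_root, out_basename):
    return shutil.make_archive(out_basename, 'zip', root_dir=project_root)

# Demo against a throwaway directory laid out like the project.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'src'))
os.makedirs(os.path.join(root, 'scripts'))
open(os.path.join(root, 'appspec.yml'), 'w').close()

bundle = make_bundle(root, os.path.join(tempfile.mkdtemp(), 'app'))
print(os.path.basename(bundle))  # -> app.zip
```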

Dealing with a mandatory ForeignKey field that is not in your Django REST framework serializer

Although I am a big fan of Django REST framework, sometimes I feel it is gruesome to deal with nested serializers. (Maybe I am doing something wrong; feel free to suggest your favourite trick.)

Suppose we have two models: ASerializer is based on the A model, BSerializer on the B model. A and B are related; say B has a foreign key to A. So while creating B it is mandatory to specify A, but ASerializer is full of so much data that I don't want that unnecessary overhead in my BSerializer; yet when creating B I must have it. Here is how I solved it:

For the sake of brevity let's say A is our Category and B is our Product. Every Product has a Category, so Product has a foreign key to Category, but I am not exposing it in ProductSerializer, given that Category carries a lot of information the serializer does not need.

from django.shortcuts import get_object_or_404
from rest_framework import serializers


class ProductSerializer(serializers.ModelSerializer):
    def to_internal_value(self, data):
        if data.get('category'):
            self.fields['category'] = serializers.PrimaryKeyRelatedField(
                queryset=Category.objects.all())

            cat_slug = data['category']['slug']
            cat = get_object_or_404(Category, slug=cat_slug)
            data['category'] = cat.pk

        return super().to_internal_value(data)
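Stripped of the DRF machinery, the trick is just a pre-validation rewrite: replace the nested {'slug': ...} with the related object's primary key before normal validation runs. A plain-Python sketch (the CATEGORIES dict stands in for the Category table; all names here are illustrative):

```python
CATEGORIES = {'shoes': 1, 'shirts': 2}  # slug -> pk, standing in for the Category table

def to_internal_value(data):
    if data.get('category'):
        slug = data['category']['slug']
        if slug not in CATEGORIES:
            raise LookupError('no such category')  # plays the role of get_object_or_404
        data = {**data, 'category': CATEGORIES[slug]}  # now a pk, as PrimaryKeyRelatedField expects
    return data

print(to_internal_value({'name': 'Air Max', 'category': {'slug': 'shoes'}}))
# -> {'name': 'Air Max', 'category': 1}
```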

A Django Rest Framework Jwt middleware to support request.user

I am using Django (2.0.1), djangorestframework (3.7.7) and djangorestframework-jwt (1.11.0) on top of Python 3.6.3. By default djangorestframework-jwt does not populate Django's usual request.user. If you are using djangorestframework, chances are you have a huge API code base leveraging it, not to mention the permission_classes in your viewsets. Since you are convinced that JWTs are the best tool for your project, no wonder you would love to migrate from your old tokens to the new JWT tokens. To make the migration easier, we will write a middleware that sets request.user for us.

from django.contrib.auth.middleware import get_user
from django.utils.functional import SimpleLazyObject
from rest_framework_jwt.serializers import VerifyJSONWebTokenSerializer
from rest_framework.exceptions import ValidationError


class AuthenticationMiddlewareJWT(object):
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        request.user = SimpleLazyObject(lambda: self.__class__.get_jwt_user(request))
        return self.get_response(request)

    @staticmethod
    def get_jwt_user(request):
        user = get_user(request)
        if not user.is_authenticated:
            token = request.META.get('HTTP_AUTHORIZATION', " ").split(' ')[1]
            data = {'token': token}
            try:
                valid_data = VerifyJSONWebTokenSerializer().validate(data)
                user = valid_data['user']
            except ValidationError as v:
                print("validation error", v)
        return user
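The real verification happens inside VerifyJSONWebTokenSerializer; but just to illustrate what the HTTP_AUTHORIZATION split is pulling apart: a JWT is three base64url-encoded segments joined by dots, and the middle one is the JSON payload. A sketch with a hand-made, unsigned token (for illustration only; never skip signature checks in real code):

```python
import base64
import json

def b64url(obj):
    # base64url-encode JSON with the padding stripped, as JWTs do
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip('=')

# A fake "JWT <header>.<payload>.<signature>" authorization header.
auth_header = 'JWT ' + '.'.join([b64url({'alg': 'none'}), b64url({'username': 'sadaf'}), ''])

token = auth_header.split(' ')[1]             # what the middleware extracts
payload_b64 = token.split('.')[1]
payload_b64 += '=' * (-len(payload_b64) % 4)  # restore the stripped padding
payload = json.loads(base64.urlsafe_b64decode(payload_b64))
print(payload['username'])  # -> sadaf
```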

And you need to register your middleware in settings:
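A sketch of that registration, assuming the class lives in proj_name/middleware.py (adjust the dotted path to wherever you actually put it):

```python
# settings.py (sketch)
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    # our JWT middleware goes after Django's AuthenticationMiddleware,
    # so request.user already exists before we override it
    'proj_name.middleware.AuthenticationMiddlewareJWT',
]
```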


Angular 5 image upload recipe

To prepare our HTML for uploading images, we need to add an input field:

<input type="file" class="form-control" name="logo" required (change)="handleFileInput($event.target.files)" >

When the file input changes, it calls a function, which we need to define in our component.

 image_to_upload: File = null;

 handleFileInput(files: FileList) {
   this.image_to_upload = files.item(0);
 }

 create(image_data) {
   this.image_service.uploadImage(this.image_to_upload, image_data);
 }

Then at our service we would need to make the post request.

uploadImage(fileToUpload, image_data) {
  const endpoint = 'http://localhost:8000/image/';
  const formData: FormData = new FormData();

  for (const k in image_data) {
    formData.append(k, image_data[k]);
  }

  formData.append('image', fileToUpload,;
  return this.http
    .post(endpoint, formData).toPromise().then(
      (res) => {
        return res;
      });
}

This will be sufficient for any API that accepts multipart POST requests.

If we want to implement the server side using Django, it would look like the following:

Django model:

class Image(models.Model):
    image = models.FileField(storage = MyStorage(location="media/shop"))

Django Rest Framework serializers and viewsets:

class ImageSerializer(serializers.ModelSerializer):
    class Meta:
        model = Image
        fields = ('id', 'image',)

class ImageViewSet(viewsets.ModelViewSet):
    queryset = Image.objects.all()
    serializer_class = ImageSerializer

router = routers.DefaultRouter()
router.register(r'image', ImageViewSet)

urlpatterns = router.urls

Integrating Amazon S3 with Django using django-storages and boto3

If we are lucky enough to get a high amount of traffic on our website, the next thing we start to think about is performance. The throughput of a website depends on the speed at which we can deliver its contents to our users from our storage. In vanilla Django, all assets including CSS, JS, files and images are stored locally in a predefined or preconfigured folder. To enhance performance we may decide to use a third-party storage service, one that takes away the headache of caching, zoning, replicating and building the infrastructure of a Content Delivery Network. Ideally we would like a pluggable solution, something that allows us to switch storage backends through configuration. django-storages is one of the cool libraries from the Django community that helps us work with third-party storage services like AWS S3, Google Cloud, FTP, Dropbox and so on. Amazon Web Services is one of the trusted providers offering a large range of services; S3 is one of the cool services from AWS that helps us store static assets. boto3 is a Python library distributed by Amazon for interacting with AWS, including S3.

First things first: to be able to store files on S3 we need permission. In the AWS world, all permissions are managed using Identity and Access Management (IAM).
i) In the Amazon console, you will find IAM under Security, Identity & Compliance. Go there.
ii) Add a user with programmatic access.
iii) Add a new group.
iv) Set a policy for the group. Amazon provides a bunch of predefined policies; for our use case we can choose AmazonS3FullAccess.
v) Store the user name, Access key ID and Secret access key somewhere safe.

In S3 we organize our contents into buckets. We can use several buckets for a single Django project; sometimes it is more efficient to use more, but for now we will use only one. We need to create that bucket.

Now we need to install:

pip install boto3
pip install django-storages

We need to add storages to our INSTALLED_APPS in settings.py, along with some other configuration:



INSTALLED_APPS = [
    # ...
    'storages',
]

AWS_ACCESS_KEY_ID = '#######'
AWS_SECRET_ACCESS_KEY = '#######'
AWS_STORAGE_BUCKET_NAME = '####bucket-name'
#AWS_S3_OBJECT_PARAMETERS = {
#    'CacheControl': 'max-age=86400',
#}
AWS_LOCATION = 'static'

STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'mysite/static'),
]
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

Even when we don't write any HTML, CSS or JS files for a Django project, it already has a few, because many of the classes we use in our views have parent classes that ship static template files, base HTML files, CSS and JS. These static assets are stored in our Python library folder. To move them from the library folder to S3 we need to run the following command:

python manage.py collectstatic

The thing to notice here is that previously static files were served from a localhost:port URL, but now they are referred to by an S3 link.

{% static 'img/logo.png' %}
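With the settings above, S3Boto3Storage rewrites that tag into an S3 URL of roughly this shape (a sketch; the exact domain depends on region and AWS_S3_CUSTOM_DOMAIN, and the bucket name here is made up):

```python
AWS_STORAGE_BUCKET_NAME = 'my-bucket'  # illustrative values
AWS_LOCATION = 'static'

def static_url(name):
    # bucket domain + AWS_LOCATION prefix + asset path
    return "https://%s.s3.amazonaws.com/%s/%s" % (
        AWS_STORAGE_BUCKET_NAME, AWS_LOCATION, name)

print(static_url('img/logo.png'))
# -> https://my-bucket.s3.amazonaws.com/static/img/logo.png
```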

We may want some custom configuration for file storage; say we want to put media files in a separate directory, and we don't want them to be overwritten by another user. In that case we can define a child class of S3Boto3Storage and change the value of DEFAULT_FILE_STORAGE.

from storages.backends.s3boto3 import S3Boto3Storage

class MyStorage(S3Boto3Storage):
    location = 'media'
    file_overwrite = False

DEFAULT_FILE_STORAGE = 'mysite.storage_backends.MyStorage'

Now all our file-related fields like models.FileField() and models.ImageField() will upload files to our S3 bucket inside the 'media' directory.

Now we may have different types of storage: some storing documents, some publicly accessible, some classified. Their directories could differ, and so on.

class MyPrivateFileStorage(S3Boto3Storage):
    location = 'classified'
    default_acl = 'private'
    file_overwrite = False
    custom_domain = False

If we want to use a storage that is not set as DEFAULT_FILE_STORAGE in settings.py, we need to pass it to the model field: models.FileField(storage=MyPrivateFileStorage()).

Running a Python/Django app, containerised in a Docker image pushed to a private repo, on top of a Kubernetes cluster

Recently I was looking for a more flexible way to ship our code to production; Docker and Kubernetes are the sweethearts of devops engineers. Docker lets us containerise our app in an image and then run that image in production. We all have some experience with virtual machines, and it is easy to get confused about the difference, which can stop you from appreciating what Docker does. A virtual machine runs separately on top of the hypervisor of your computer; Docker instead adds a layer of abstraction on top of your OS. It shares what your OS already provides and adds a layer containing only the differences, so we can run multiple Linux images on one machine without doubling the cost. It optimises, it is intelligent, and it saves us. But when we run a cluster of n nodes, the complexity grows exponentially: what if something breaks in a Docker container somewhere in the cluster? How do we decide which container should run on which node? How do we move a container to another node because its node is being turned off for maintenance? We need a manager who takes care of them, don't we? Kubernetes comes along and takes that responsibility.

First of all, let's set up Kubernetes.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

We turn SELinux to permissive mode, as the documentation says:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Now we install docker, kubelet, kubeadm and kubectl:

yum install -y docker kubelet kubeadm kubectl --disableexcludes=kubernetes

We need to ensure that these services start whenever the machines boot:

systemctl enable kubelet && systemctl start kubelet
systemctl enable docker && systemctl start docker

Now we have Kubernetes and Docker running on the master and slave nodes. We need to change one or two configuration options for a safe initial launch on the master:

vi /var/lib/kubelet/kubeadm-flags.env


We initialize the cluster on the master:

kubeadm init

For future token creation:

sudo kubeadm token create --print-join-command 

It generates a token which you need to copy and paste on your slave node. But before that, you need to put the configuration file in the proper directory with the proper permissions:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the slave server:

kubeadm join --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e

If you are getting something similar to:

I0706 07:18:56.609843    1084 kernel_validator.go:96] Validating kernel config
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

 Pulling images required for setting up a Kubernetes cluster

running the following would help:

for i in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $i; done

Now if you run the following on the master, you will see the nodes attached to your Kubernetes cluster:

sudo kubectl get nodes
sudo kubectl describe nodes

If you have issues dealing with nodes, applying a pod network add-on may help:

kubectl apply -f

For this demonstration I will be adding an existing Python/Django application to this cluster, so first of all I need to dockerize it.

In my Dockerfile:

# set the base image
FROM python:3.7
# File Author / Maintainer

# add project files to the /usr/src/app folder
# set directory where CMD will execute
WORKDIR /usr/src/app
ADD app_name ./app_name
COPY /app_name/requirements.txt .
# Get pip to download and install requirements (--no-cache-dir keeps the image small)
RUN pip install --no-cache-dir -r requirements.txt
# Expose ports
EXPOSE 8000
# default command to execute
WORKDIR /usr/src/app/app_name
RUN chmod +x app_name/
CMD ./app_name/
#ENTRYPOINT ["/bin/bash", "app_name/"]

Now that we have a Dockerfile ready, let's build the image:

sudo docker build -t app_name_api_server .

Time to run that image and expose it on port 8000:

sudo docker run -p 8000:8000 -i -t app_name_api_server

If you like what you see on localhost:8000, congratulations! Your app is working in Docker. Now let's push that image to Docker Hub; I have created a private repo there. To be able to push your image to Docker Hub, you need to tag the image first; then you can push it:

sudo docker tag app_name_api_server sadaf2605/app_name_api_server
sudo docker push sadaf2605/app_name_api_server

Now you have your image pushed to Docker Hub, so we go back to our Kubernetes master. As the image we want to pull is in a private repo, no wonder we need some sort of credentials to pull it:
DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`

kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL

Now let's define a yaml file for our Kubernetes deployment, app_name.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name-api-server
  labels:
    run: app-name-api-server
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app-name-api-server
  template:
    metadata:
      labels:
        run: app-name-api-server
    spec:
      containers:
      - name: app-name-api-server
        image: sadaf2605/app_name_api_server
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
          #hostPort: 8000
        env:
        - name: DB_USERNAME
          value: "user"
        - name: DB_PASSWORD
          value: "password"
        - name: DB_NAME
          value: "dbname"
        - name: DB_HOST
          value: ""
      imagePullSecrets:
      - name: myregistrykey
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |

Now time to run a deployment using that configuration:

sudo kubectl apply -f app_name.yaml

Let's check whether we have a deployment:

sudo kubectl get deployments

Let's check whether any instance of our Docker container is running:

sudo kubectl get pods

Now we create a service that lets us access these pods from outside:

sudo kubectl expose deployment app-name-api-server --type=LoadBalancer --name=app-name-api-server
sudo kubeadm upgrade plan --feature-gates CoreDNS=true