Simple trick that can help us achieve zero downtime when dealing with DB migrations

Currently we are dealing with quite a few deployment processes. For a company that embraces DevOps culture, deployment happens many times a day. Each deployment carries a tiny fraction of code change, and because the change is so small it is easier to spot a bug. If the bug is crucial, it may be time to roll back to an older version, which means we need a database that tolerates rollback. We also have to do all of this with zero downtime, so that users never notice a thing. That is often not as easy as it sounds in principle.

Before describing a few key ideas to solve this common problem, let's discuss some of our most common deployment architectures.

A blue/green deployment architecture consists of two versions of the application running concurrently: one of them can be the production version and the other the next release, but both versions of the app must be able to handle 100% of the requests. We configure the proxy to stop forwarding requests to the blue deployment and start forwarding them to the green one on the fly, so that no incoming request is lost during the switch from blue to green.

Canary deployment is a deployment architecture where, rather than forwarding all users to the new version, we migrate a small percentage of users, or a specific group of users, to the new version. Canary deployment is a little more complicated to implement because it requires smart routing; Netflix's OSS Zuul is one tool that can help. Feature toggles can be implemented with FF4J or Togglz.

As we can see, most of these deployment processes require two versions of the application running at the same time. The problem arises when a database with associated migrations is involved, because both versions of the application must be compatible with the same database. So the schema versions between consecutive releases must be mutually compatible.

Now how can we achieve zero downtime on these deployment strategies?

This means we can't run database migrations that are destructive or could potentially cause us to lose data. In this blog we will discuss how we can approach such database migrations:

One of the most common problems we face with UPDATE TABLE statements is that they lock up the table, and we don't control how long an ALTER TABLE will take. However, in most popular DBMSs on the market, issuing an ALTER TABLE ADD COLUMN statement won't lead to locking. So, for example, if we want to change the type of a database field, rather than altering the field type in place we can add a new column.

When adding the column, we should not add a NOT NULL constraint at the very beginning of the migration, even if the model requires it, because the new column will only be consumed by the new version of the application; the old version doesn't provide any value for it, so the constraint would break the INSERT/UPDATE statements issued by the current version. We need to ensure that the new version reads values from the old column but writes to both columns, so that all new rows have both columns populated with correct values. Now that new rows are handled, it is time to deal with the old data: we need to copy the data from the old column to the new column so that all existing rows also have both columns populated. This is where the locking problem arises, if we try to do it with a single UPDATE.
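As an illustration of the read-from-old, write-to-both step, here is a minimal sketch, assuming a hypothetical Django Customer model where the old column is called wrong and the new one correct (the same names used in the SQL below):

from django.db import models


class Customer(models.Model):
    # old column: still the one the previous release reads and writes
    wrong = models.CharField(max_length=20, null=True)
    # new column: added without NOT NULL so the old release keeps working
    correct = models.CharField(max_length=20, null=True)

    def save(self, *args, **kwargs):
        # the new release reads from the old column but writes to both,
        # so every row it touches ends up with both columns populated
        self.correct = self.wrong
        super().save(*args, **kwargs)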

Instead of issuing a single statement to achieve a column rename, we'll need to get used to breaking these big changes into multiple smaller ones. One solution could be taking baby steps like this:

ALTER TABLE customers ADD COLUMN correct VARCHAR(20);

UPDATE customers SET correct = wrong
WHERE id BETWEEN 1 AND 100;

UPDATE customers SET correct = wrong
WHERE id BETWEEN 101 AND 200;

ALTER TABLE customers DROP COLUMN wrong;
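The same batched backfill can also be written as a Django data migration. Below is a minimal sketch under the same assumptions as above (the shop app label and the migration name are hypothetical), meant as an illustration of the idea rather than a drop-in file:

from django.db import migrations

BATCH_SIZE = 100


def copy_wrong_to_correct(apps, schema_editor):
    Customer = apps.get_model('shop', 'Customer')  # hypothetical app label
    last_id = 0
    while True:
        # work in small batches so no single transaction locks the table for long
        batch = list(Customer.objects.filter(id__gt=last_id).order_by('id')[:BATCH_SIZE])
        if not batch:
            break
        for customer in batch:
            customer.correct = customer.wrong
            customer.save(update_fields=['correct'])
        last_id = batch[-1].id


class Migration(migrations.Migration):
    dependencies = [('shop', '0002_add_correct_column')]  # hypothetical previous migration
    operations = [migrations.RunPython(copy_wrong_to_correct, migrations.RunPython.noop)]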

Once we are done populating the old column's data, and we finally have enough confidence that we will never need the old version, we can delete the old column. As this is a destructive operation, the data will be lost and no longer recoverable.

As a precaution, we should delete it only after a quarantine period. After the quarantine period, when we are confident enough that we no longer need the old version of the schema, nor a rollback that would require that version, we can stop populating the old column. If you decide to execute this step, make sure to drop any NOT NULL constraint, or you will prevent your code from inserting new rows.

How to use an Angular service within another service

Actually it is easy!

import { Injectable, Inject } from '@angular/core';

import { LoginService } from '../common/login.service';
@Injectable()
export class PermissionService {

  constructor(@Inject(LoginService) private login_service: LoginService) { }

  hasCategoryDeletePerm(){
    const u = this.login_service.getUser()
    return u.is_staff;
  }
}

Configuring Django for centralised log monitoring with the ELK stack, with custom logging options (e.g. client IP, username, request & response data)

When you are lucky enough to have enough users that you decide to roll out another cloud instance for your Django app, logging becomes a little tougher, because your architecture now needs a load balancer that proxies requests to one instance or another as required. Previously all logs lived on one machine, so monitoring was easy: when someone reported an error we went to that instance and looked for it. Now that we have multiple instances we have to go through all of them, which, security risks aside, is a lot of work. So I think it is wise to have a centralised log aggregation service.

For log management and monitoring we are using Elasticsearch, Logstash and Kibana, popularly known as the ELK stack. In this blog we will log pretty much every request and its corresponding response, so that debugging becomes handy for us. To serve this purpose we will leverage Django middleware and python-logstash.

First of all let’s configure our settings.py for logging:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
        'logstash': {
            '()': 'proj_name.formatter.SadafNoorLogstashFormatter',
        },
    },
    'handlers': {
        'default': {
            'level':'DEBUG',
            'class':'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/proj_name/django.log',
            'maxBytes': 1024*1024*5, # 5 MB
            'backupCount': 5,
            'formatter':'standard',
        },  
        'logstash': {
          'level': 'DEBUG',
          'class': 'logstash.TCPLogstashHandler',
          'host': 'ec2*****.compute.amazonaws.com',
          'port': 5959, # Default value: 5959
          'version': 1, # Version of logstash event schema. Default value: 0 (for backward compatibility of the library)
          'message_type': 'logstash',  # 'type' field in logstash message. Default value: 'logstash'.
          'fqdn': False, # Fully qualified domain name. Default value: false.
          #'tags': ['tag1', 'tag2'], # list of tags. Default: None.
          'formatter': 'logstash',
      },

        'request_handler': {
            'level':'DEBUG',
            'class':'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/proj_name/django.log',
            'maxBytes': 1024*1024*5, # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
    },
    'loggers': {
        'sadaf_logger': {
            'handlers': ['default', 'logstash'],
            'level': 'DEBUG',
            'propagate': True
        },
    }
}

As you can see, we are using a custom logging format. We could leave this out and the default LogstashFormatterVersion1 format would work just fine, but I chose to define my own format because my requirements are different: I am running behind a proxy server, and I want to log who made the request and from which IP. Roughly, my log formatter looks like the following:

from logstash.formatter import LogstashFormatterVersion1


class SadafNoorLogstashFormatter(LogstashFormatterVersion1):
    def format(self, record):
        # the middleware below passes the request via `extra`,
        # so it is available here as record.request
        caddr = "unknown"
        request = getattr(record, 'request', None)
        if request is not None and 'HTTP_X_FORWARDED_FOR' in request.META:
            # optionally .split(",")[0].strip() to keep only the client IP
            caddr = request.META['HTTP_X_FORWARDED_FOR']

        message = {
            '@timestamp': self.format_timestamp(record.created),
            '@version': '1',
            'message': record.getMessage(),
            'host': self.host,

            'client': caddr,
            'username': str(request.user) if request is not None else 'unknown',

            'path': record.pathname,
            'tags': self.tags,
            'type': self.message_type,

            # Extra fields
            'level': record.levelname,
            'logger_name': record.name,
        }

        # Add extra fields
        message.update(self.get_extra_fields(record))

        # If exception, add debug info
        if record.exc_info:
            message.update(self.get_debug_fields(record))

        return self.serialize(message)

As our requirement is to log every request, our middleware may look like the following:

import logging

from django.utils.deprecation import MiddlewareMixin

request_logger = logging.getLogger('sadaf_logger')


class LoggingMiddleware(MiddlewareMixin):
    """
    Provides full logging of requests and responses
    """
    _initial_http_body = None

    def __init__(self, get_response):
        self.get_response = get_response

    def process_request(self, request):
        # request.body is not accessible any more in process_response,
        # so keep a copy of it here
        self._initial_http_body = request.body

    def process_response(self, request, response):
        """
        Adds request and response logging
        """
        is_json_post = (request.method == "POST" and
                        request.META.get('CONTENT_TYPE') == 'application/json')
        if request.path.startswith('/') and (is_json_post or request.method == "GET"):
            status_code = getattr(response, 'status_code', None)
            log_lvl = logging.INFO
            if status_code and status_code >= 400:
                log_lvl = logging.ERROR

            request_logger.log(
                log_lvl,
                "GET: {}".format(request.GET),
                extra={
                    'request': request,
                    'request_method': request.method,
                    'request_url': request.build_absolute_uri(),
                    'request_body': self._initial_http_body.decode("utf-8"),
                    'response_body': response.content,
                    'status': response.status_code,
                })
        return response
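The middleware also has to be registered in settings.py; the exact dotted path depends on where you placed the class above, so the path below is only a placeholder:

MIDDLEWARE = [
    # ... the usual Django middleware ...
    'proj_name.middleware.LoggingMiddleware',  # hypothetical path to LoggingMiddleware
]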

So pretty much you are done. Log in to your Kibana dashboard, create an index pattern for the index you are interested in, and look at your logs.

Dealing with a mandatory ForeignKey field that is not in a Django REST Framework serializer

Although I am a big fan of Django REST Framework, sometimes I feel it is gruesome to deal with nested serializers. (Maybe I am doing something wrong; feel free to suggest your favourite trick.)

Suppose we have two models: ASerializer is based on the A model, BSerializer is based on the `B` model. A and B are related, say B has a foreign key to A. So while creating a B it is mandatory to specify an A, but ASerializer is full of so much data that I don't want that unnecessary overhead in my BSerializer, even though I must have the relation when creating a B. Here is how I solved it:

For the sake of brevity let's say A is our Category and B is our Product. Every Product has a Category, so Product has a foreign key to Category, but I am not exposing it in ProductSerializer, given that Category carries a lot of information that is unnecessary here.

from django.shortcuts import get_object_or_404
from rest_framework import serializers

from .models import Category  # assumed import path for the model


class ProductSerializer(serializers.ModelSerializer):
    def to_internal_value(self, data):
        if data.get('category'):
            # swap the nested representation for a simple primary-key field
            self.fields['category'] = serializers.PrimaryKeyRelatedField(
                queryset=Category.objects.all())

            # look the category up by its slug and hand DRF the id it expects
            cat_slug = data['category']['slug']
            cat = get_object_or_404(Category, slug=cat_slug)
            data['category'] = cat.id

        return super().to_internal_value(data)
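With that in place, the client keeps sending the category as a nested object containing just a slug. A rough usage sketch (the product fields are illustrative, not from the original post):

payload = {
    'name': 'Fancy mug',               # hypothetical product field
    'category': {'slug': 'kitchen'},   # only the slug is needed
}
serializer = ProductSerializer(data=payload)
serializer.is_valid(raise_exception=True)
product = serializer.save()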

A Django REST Framework JWT middleware to support request.user

I am using Django (2.0.1), djangorestframework (3.7.7) and djangorestframework-jwt (1.11.0) on top of Python 3.6.3. By default djangorestframework-jwt does not populate Django's usual request.user. If you are using djangorestframework, chances are you have a huge code base at the API level that relies on it, not to mention the permission_classes on your viewsets. If you are convinced that JWTs are the right tool for your project, no wonder you would love to migrate from your old tokens to the new JWT tokens. To make the migration easier, we will write a middleware that sets request.user for us.

from django.contrib.auth.models import AnonymousUser
from django.utils.functional import SimpleLazyObject
from rest_framework_jwt.serializers import VerifyJSONWebTokenSerializer
from rest_framework.exceptions import ValidationError


class AuthenticationMiddlewareJWT(object):
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # resolve the user lazily, so the token is only verified
        # when request.user is actually accessed
        request.user = SimpleLazyObject(lambda: self.get_jwt_user(request))
        return self.get_response(request)

    @staticmethod
    def get_jwt_user(request):
        auth_header = request.META.get('HTTP_AUTHORIZATION', '')
        parts = auth_header.split(' ')
        if len(parts) != 2:
            return AnonymousUser()
        try:
            valid_data = VerifyJSONWebTokenSerializer().validate({'token': parts[1]})
            return valid_data['user']
        except ValidationError as v:
            print("validation error", v)
            return AnonymousUser()

And you need to register your middleware in settings:



MIDDLEWARE = [
    #...
    'path.to.AuthenticationMiddlewareJWT',
]

Angular 5 image upload recipe

To prepare our HTML for uploading images, we need to add a file input field to it.

<input type="file" class="form-control" name="logo" required (change)="handleFileInput($event.target.files)" >

When the file input changes, it calls a function, which we need to define in our component.

 
 image_to_upload: File = null;
 handleFileInput(files: FileList) {
   this.image_to_upload = files.item(0);
 }

 create(image_data) {
   this.image_service.uploadImage(this.image_to_upload, image_data);
 }

Then at our service we would need to make the post request.

uploadImage(fileToUpload, image_data) {
  const endpoint = 'http://localhost:8000/image/';
  const formData: FormData = new FormData();

  for (const k in image_data) {
    formData.append(k, image_data[k]);
  }

  formData.append('image', fileToUpload, fileToUpload.name);
  console.log(formData);
  return this.http
    .post(endpoint, formData).toPromise().then(
      (res) => {
        console.log(res);
      }
    );
}

This will be sufficient for any API that accepts a POST request of the form:

{
 image: 
}

If we want to implement the server side using Django, it would look like the following:

Django model:

class Image(models.Model):
    image = models.FileField(storage = MyStorage(location="media/shop"))

Django Rest Framework serializers and viewsets:

class ImageSerializer(serializers.ModelSerializer):
    class Meta:
        model = Image
        fields = ('id', 'image',)

class ImageViewSet(viewsets.ModelViewSet):
    queryset = Image.objects.all()
    serializer_class = ImageSerializer

router = routers.DefaultRouter()
router.register(r'image', ImageViewSet)

urlpatterns = router.urls
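To sanity-check the endpoint outside the Angular app, here is a quick sketch using the requests library, assuming the local dev server and the router registration above (the file name is just a placeholder):

import requests

# hypothetical local file; the endpoint comes from the router registration above
with open('logo.png', 'rb') as fh:
    resp = requests.post('http://localhost:8000/image/', files={'image': fh})

print(resp.status_code, resp.json())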

Integrating Amazon S3 with Django using django-storages and boto3

If we are lucky enough to get a high amount of traffic on our website, the next thing we start thinking about is performance. How fast a website loads depends on how fast we can deliver its contents to our users from our storage. In vanilla Django, all assets including CSS, JS, files and images are stored locally in a predefined or preconfigured folder. To improve performance we may decide to use a third-party storage service that takes away the headache of caching, zoning, replication and building the infrastructure of a Content Delivery Network. Ideally we would like a pluggable solution, something that allows us to switch storage backends through configuration. django-storages is one of the cool libraries from the Django community that helps us use third-party storage services like AWS S3, Google Cloud, FTP, Dropbox and so on. Amazon Web Services is a trusted provider that offers a large range of services; S3 is one of the cool AWS services that helps us store static assets. boto3 is the Python library distributed by Amazon to interact with its services, including S3.

First things first: to be able to store files on S3 we need permission. In the AWS world, all sorts of permissions are managed using Identity and Access Management (IAM).
i) In the Amazon console you will find IAM under Security, Identity & Compliance. Go there.
ii) We need to add a user with programmatic access.
iii) We need to add a new group.
iv) We need to set a policy for the group. Amazon provides a bunch of predefined policies; for our use case we can choose AmazonS3FullAccess.
v) We have to store the user, the Access key ID and the Secret access key.

In S3 we organise our content into buckets. We can use several buckets for a single Django project, and sometimes it is more efficient to use more, but for now we will use only one. We will need to create that bucket.

Now we need to install:

pip install boto3
pip install django-storages

We will need to add storages to our INSTALLED_APPS in settings.py, along with the rest of the django-storages configuration:

INSTALLED_APPS = [
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    'storages',
]


AWS_ACCESS_KEY_ID = '#######'
AWS_SECRET_ACCESS_KEY = '#####'
AWS_STORAGE_BUCKET_NAME = '####bucket-name'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
#AWS_S3_OBJECT_PARAMETERS = {
#    'CacheControl': 'max-age=86400',
#}
AWS_LOCATION = 'static'

STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'mysite/static'),
]
STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

When we are using Django, even if we don't write any HTML, CSS or JS files ourselves, the project already has a few, because many of the classes we use in our views have parent classes that ship static template files, base HTML files, CSS and JS files. These static assets live in our Python library folder. To move them from the library folder to S3 we need to run the following command:

python manage.py collectstatic

The thing to notice here is that previously static URLs referred to localhost:port, but now they refer to the S3 link.

{% static 'img/logo.png' %}

We may like to have some custom configuration for file storage; say we want to put media files in a separate directory, or we don't want files to be overwritten by another user. In that case we can define a child class of S3Boto3Storage and change the value of DEFAULT_FILE_STORAGE.

# storage_backends.py

from storages.backends.s3boto3 import S3Boto3Storage


class MyStorage(S3Boto3Storage):
    location = 'media'
    file_overwrite = False

# in settings.py
DEFAULT_FILE_STORAGE = 'mysite.storage_backends.MyStorage'

Now all our file-related fields, like models.FileField() and models.ImageField(), will upload files to our S3 bucket inside the 'media' directory.

Now we may have different types of storages: some of them will store documents, some will be publicly accessible, some will be classified. Their directories could be different, and so on.

class MyPrivateFileStorage(S3Boto3Storage):
    location = 'classified'
    default_acl = 'private'
    file_overwrite = False
    custom_domain = False

If we want to use a storage that is not set as DEFAULT_FILE_STORAGE in settings.py, we need to pass it explicitly to the field on our model: models.FileField(storage=MyPrivateFileStorage()).
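For example, a hypothetical model that stores its files with the private storage defined above, instead of the default one, might look like this:

class ClassifiedDocument(models.Model):
    # hypothetical model, just to illustrate passing a non-default storage
    attachment = models.FileField(storage=MyPrivateFileStorage())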

Crawling a website with python scrapy

Think of a website as a web: how do we crawl that web? Chances are you went to the navigation menu, found a link that looked interesting, clicked on it and landed on the page with the information you were looking for. Or probably your favourite search engine did it for you. How did your search engine do that, and how can you make that traversal automatic? That is exactly where a crawler comes into business. Chances are a search engine started crawling your website from a link you shared somewhere. We will create one such crawler using Python's crawling framework, Scrapy. I have been using it for the last couple of months, so it felt wrong not to have a blog about it.

It is always better to have a Python virtual environment, so let's set it up:

$ virtualenv .env
$ source .env/bin/activate

Now that we have a virtual environment running, we will install scrapy.
$ pip install scrapy

Scrapy has some dependencies, like lxml, which is used for HTML parsing with selectors; cryptography and SSL-related Python libraries will also be installed. pip takes care of everything, but when we start writing code we will see these names in our error messages quite often, so it is always a good idea to know a little about the dependencies.

Now that we have it installed, we have access to a few new commands. Using these commands we can create our own Scrapy project. That is not strictly necessary, but I personally like to have everything bootstrapped the way the creators intended, so that my code follows the same standards the authors of Scrapy had in mind while writing the framework.

$ scrapy startproject blog_search_engine

It will create a bunch of necessary and unnecessary files; you can read about all of them in the documentation. The interesting part is that it creates a configuration file called scrapy.cfg, which enables a few extra commands. Your spiders reside inside the inner project folder. Spiders are basically the bots, and a spider class defines the characteristics of its bot. You can usually create a spider with the following command as a solid start:

$ scrapy genspider wordpress wordpress.com

It will generate a spider called wordpress inside the blog_search_engine/blog_search_engine/spiders/ directory. It creates four or five lines of code in that file which do nothing yet. Let's give it some functionality, shall we? But we don't know yet what we are automating: we will visit wordpress.com, find the links to articles, then follow each link and fetch the article. So before we write our spider we need to define what we are looking for, right? Let's define our model. Models (items) are usually stored inside items.py. A possible Article item might have the following fields:

import scrapy


class Article(scrapy.Item):
    title = scrapy.Field()
    body = scrapy.Field()
    link = scrapy.Field()

Now we will define our spider.

import scrapy
from bs4 import BeautifulSoup

from ..items import Article


class WordPressSpider(scrapy.Spider):
    name = 'wordpress'
    start_urls = ['https://wordpress.com']

    def parse(self, response):
        article_links = response.css("#post-river").xpath(".//a/@href").extract()

        for link in article_links:
            if "https://en.blog.wordpress.com/" in link:
                yield scrapy.Request(link, self.extract_article)

    def extract_article(self, response):
        article = Article()
        css = lambda s: response.css(s).extract()

        article['title'] = css(".post-title::text")[0]

        # the body selector returns HTML, so strip the tags with BeautifulSoup
        body_html = " ".join(css('.entrytext'))
        body_soup = BeautifulSoup(body_html, 'html.parser')
        body_text = ''.join(body_soup.findAll(text=True))

        article['body'] = body_text
        yield article

As configured in our Scrapy settings, the yield in parse hands the article over to the pipeline, so the pipeline is a great place for database operations. That is possibly out of the scope of this particular blog, but here is an outline of what you might need to do if you are persisting items with SQLAlchemy; SQLAlchemy won't be particularly helpful for what we ultimately intend to do here, but I still felt it would be helpful to show it.

class BlogSearchEnginePipeline(object):
    def process_item(self, item, spider):
        # a = Article(title=item['title'], body=item['body'])
        # db.session.add(a)
        # db.session.commit()
        print('article found:', item['title'], item['body'])

        return item
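For the yielded items to actually reach this pipeline it has to be enabled in the project's settings.py. A minimal sketch, assuming the class lives in blog_search_engine/pipelines.py:

ITEM_PIPELINES = {
    'blog_search_engine.pipelines.BlogSearchEnginePipeline': 300,
}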

Now we have a spider defined. But how do we run it? It's actually easy, but remember that you need to be inside your Scrapy project for this command to work!

$ scrapy crawl wordpress

On a side note, Scrapy actually gives us the option to pass arguments from the command line to the spider; we just need to accept them as initializer parameters.

class WordPressSpider(scrapy.Spider):
    name = "wordpress"
    ...
    def __init__(self, param=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.param = param
    ...

Now we could call:

$ scrapy crawl wordpress -a param=helloworld

In this blog I tried to give you an outline of crawling with Scrapy. So far we have a spider, but it has no great use yet; we will try to build a search engine on top of it in my next blog. The databases SQLAlchemy deals with are not particularly good at text search, and Elasticsearch could be a great option if we are looking to implement search, so in my next blog I will write about a basic search engine implementation using Elasticsearch. That's on my todo list for this weekend.

Interesting Swift: an overview of some interesting things in Swift

So I am playing with Swift a little this holiday, and found it to be a very interesting language to play with; the designers actually made many choices that are more interesting than the languages I am used to. This blog is possibly not one I would refer people to for learning, as I keep mentioning Java, Python or Ruby all the way through and not everyone is acquainted with those languages, but I am writing it to get the feeling I had while playing around out of my system.

So far I have found two ways in Swift to declare variables: var and let. var is for variables that actually vary, and let is for something that remains the same forever. Java has constants using final, which prevents us from changing a variable's value once it is initialised. Ruby ensures it by writing the variable name in all capitals, which I am not a great fan of. Python does not provide much straightforward support for constants. I personally like constants because I believe that, at the language implementation level, they give compilers and interpreters enough room to optimize, and they help me reduce bugs in my code, since I keep full control over what a variable is intended to do even after passing my code to someone else to modify or extend. Not to mention I have the freedom to either define a variable with an explicit type or let the initial value infer its type, which is also great, although it is considered bad coding style not to let Swift infer the type.

var text: String = "hello world"
var text = "hi world"

lazy vars are also interesting; I never thought of something like that, as I assumed the compiler or the other optimisation tools we use would take care of it. When we declare a variable as lazy, Swift initialises it only when it is absolutely required.

I also find it interesting, and very useful, the way Swift syntax takes care of checking whether a variable actually holds a value before using it.

if let text = button.currentTitle {
    //display.text = display.text + text
}

Python has a different syntax for ternary operations, whereas Java and Ruby have almost the same syntax as Swift.

output = a != nil ? a! : b

Swift also has a shorthand for this kind of ternary, which comes in handy much like Ruby's:

output = a ?? " "

It is interesting that Swift won't allow me to use a variable without initialising it; alternatively, we can declare a variable as optional when we are not intending to initialise it right away. Now, what are optionals? Glad you asked.

Optionals are a very interesting concept to me. In Ruby code, while using other people's APIs, the send? and send! (with a bang) naming variations told me a lot about the details and the safety measures required for those methods, purely by convention. It feels like Swift took this to the next level: as far as my introductory tutorial goes, some properties of an instance can be of optional type as well, represented with a question mark at the end, like String?, which is just a special kind of string. Optionals can also be used in chained calls, and if the chain fails to reach the end of the expression it returns nil, which saves us from writing a lot of lets and if clauses.

let x = display?.text?.hashValue

A point to note: x will be an optional in that case, because we are never guaranteed to get a value back from this kind of chained call.

We can even declare these optional strings ourselves in our code.

var label: String?;

It is interesting, and makes complete sense, that when we get data back from dictionaries we receive an optional type in return. Also, AnyObject is a type that accepts anything and everything, as a parameter or pretty much anywhere a type is expected, but we might want to use as to treat that AnyObject as something else: if let foo = ao as? String { }. as? just casts it to another type, giving nil if the cast fails.

Because of this we need to take special care when handling these optional properties: to use the value we have to unwrap the variable with a bang (!).

func touchButton(button: UIButton){
 let text = button.currentTitle!
 print("pressed \(text) button")
}

I mentioned earlier that you can't use a variable that you haven't initialised. Let me add to that statement: we can also implicitly unwrap a variable while defining it, which reduces the burden of unwrapping it every time we intend to use it.

var display: UILabel!
print("\(display.text)")

var cash: Double {
    get {
        return Double(display.text!)!
    }
    set {
        display.text = String(newValue)
    }
}

Apart from computed properties, there are a few other interesting kinds of properties too, like observed properties: willSet and didSet are two of the observers.

Swift function declarations can be very interesting as well: we can define internal and external parameter names. The internal name is what the function body uses locally, while the external name is what the caller of the method has to supply.

Like Python, Swift provides an option for positional parameters; it is done by putting an underscore before the parameter name, which is the part I liked least, but it is how they speak Swift.

Java has some support for lambda expressions; Python and Ruby support them far better, though. My first impression of Swift's anonymous functions was that they are pretty interesting, and I would say even easier.

{ (str1: String, str2: String) -> String in
    return str1 + str2
}

Swift also lets us write just a function's signature on its own, as a function type, like the following:

(Double, Double) -> Double

I need to mention that, like in Python or Ruby, functions are also considered objects in Swift, and so is the function type above. I will talk about its use cases in a while.

Speaking of enums, I am not a great fan of Python or Ruby enums, so I will deliberately avoid talking about them. Java has great support for enums; it even supports methods inside an enum, which is great. I am glad that Swift also supports methods on enums, in an even more interesting way actually. In Java I could not attach a separate associated value to each case, which is exactly what Swift offers:

enum MathOperator {
    case None
    case Constant(Double)
    case Unary((Double) -> Double)
    case Binary((Double, Double) -> Double)
}

As you can see, we now have the option to define our enum cases carrying a lot of information, even functions.

We can have a dictionary of our operations like the following in Swift:

let operations: Dictionary<String, MathOperator> = [
    "e": MathOperator.Constant(M_E),
    "±": MathOperator.Unary(sign_change),
    "+": MathOperator.Binary({ (op1: Double, op2: Double) -> Double in
        return op1 + op2
    })
]

func sign_change(number: Double) -> Double {
    return -1 * number
}

Now when we need to use this we can reach for Swift switch statements. A point to note about Swift switches: they do not fall through like typical switches; if we want that behaviour we need the fallthrough keyword to make a case fall into the next one:

func calculate(input1: Double, input2: Double) -> Double? {
    if let op = operations["+"] {
        switch op {
        case .Constant(let value):
            return value
        case .Unary(let function):
            return function(input1)
        case .Binary(let function):
            return function(input1, input2)
        case .None:
            break
        }
    }
    return nil
}

Java does not support structs, though as far as I know there are ways to build struct-like objects in Java; I have heard people say so. I have read about, and seen, people bundling properties into structs in Ruby and Python, and I have used structs as throwaway classes and for data bundling in Ruby. In Swift I think structs have huge scope as lightweight stand-ins for classes, but I have been warned that a struct is a value type, so copies can take up new space in memory, unlike a class reference, and I need to be a little careful when using them. Swift has its own beautiful way of managing this: it only really copies a struct's data when the data changes, and it is smart enough to figure out which part it needs to reallocate, not all of it every time. Struct and enum values stored in a let are constants, so we can't change their properties without recreating the value, and a method that modifies a struct's own properties must be declared in a special way, with mutating func.

I really liked the way I can pin down the order of types in Swift tuples; I missed this sort of declaration while working in Python, where I wanted some guarantee that a function's return type would stay fixed.

let x: (String, Int, Double) = ("hello world", -1, 0.2)

Recently I had a great fight with Python and its Unicode handling to get a job done properly; we had to add special comments at the top of the script. Swift is a lot more programmer-friendly in that regard: Swift strings are full Unicode.

I guess at some point I will have to use Objective-C code, as the tutor mentioned the NSObject class: to use some of the Objective-C APIs I would need to inherit from NSObject. There are many other NS classes, but I don't think they would be very interesting to talk about, so I am deliberately ignoring them.

CentOS postgis setup: You need JSON-C for ST_GeomFromGeoJSON

I was actually struggling with this error on our CentOS server all day, and this is how I fixed it. I could not recover everything from my command-line history, but this is what I could find out. Hope it will be helpful; I am sure it will be helpful to me while cleaning up my server for production. So cheers!

All this started with this error message on my centos server:

npm-2 Unhandled rejection SequelizeDatabaseError: You need JSON-C for ST_GeomFromGeoJSON
npm-2     at Query.formatError (/home/centos/jobcue.com/node_modules/sequelize/lib/dialects/postgres/query.js:357:14)
npm-2     at null. (/home/centos/jobcue.com/node_modules/sequelize/lib/dialects/postgres/query.js:88:19)
npm-2     at emitOne (events.js:77:13)
npm-2     at emit (events.js:169:7)
npm-2     at Query.handleError (/home/centos/jobcue.com/node_modules/pg/lib/query.js:108:8)
npm-2     at null. (/home/centos/jobcue.com/node_modules/pg/lib/client.js:171:26)
npm-2     at emitOne (events.js:77:13)
npm-2     at emit (events.js:169:7)
npm-2     at Socket. (/home/centos/jobcue.com/node_modules/pg/lib/connection.js:109:12)
npm-2     at emitOne (events.js:77:13)
npm-2     at Socket.emit (events.js:169:7)
npm-2     at readableAddChunk (_stream_readable.js:153:18)
npm-2     at Socket.Readable.push (_stream_readable.js:111:10)
npm-2     at TCP.onread (net.js:531:20)

When I tried to install json-c on server it was like:

sudo yum install json-c
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.eecs.wsu.edu
 * epel: s3-mirror-us-west-2.fedoraproject.org
 * extras: linux.mirrors.es.net
 * updates: mirror.raystedman.net
Package json-c-0.11-4.el7_0.x86_64 already installed and latest version
Nothing to do

Then I started panicking. After 5-6 hours of yum battles I figured out a solution that looks like the following:

Install some dependencies at first:

yum install geos-devel.x86_64
yum install proj-devel.x86_64
yum install gdal-devel.x86_64
yum install libxml2-devel.x86_64
yum install json-c-devel.x86_64

yum install postgresql92-devel
sudo yum install postgresql-server 

sudo yum install geos geos-devel
wget http://download.osgeo.org/proj/proj-4.8.0.tar.gz
gzip -d proj-4.8.0.tar.gz
tar -xvf proj-4.8.0.tar
cd proj-4.8.0
./configure
make
sudo make install

I needed to install gdal:

sudo rpm -Uvh http://elgis.argeo.org/repos/6/elgis-release-6-6_0.noarch.rpm
sudo yum install -y gdal
./configure
make
make install

Obviously I needed to install json c:

sudo yum install json-c-devel

I needed to know where it is located:

rpm -ql json-c json-c-devel

for me it was at:

/usr/include/*

Now it is time to build our PostGIS like this:

wget http://download.osgeo.org/postgis/source/postgis-2.2.1.tar.gz
tar xvzf postgis-2.2.1.tar.gz
cd postgis-2.2.1
./configure --with-jsonc=/usr/include

make
make install
sudo make install