How to run a simple load test on Geth Node
In this post, I am going to share a simple code snippet that I have used to run a load test against a Geth node. To get things done, I am using the locust Python library to generate load, and the web3 library to connect to the Geth node. The code looks up 10 addresses during the initialization step. After initialization, it spawns a predefined number of users, each of which repeatedly queries the balance of one of those 10 addresses.
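Under the hood, web3's balance lookup is a plain JSON-RPC call to the node, which is easy to see using nothing but the standard library. This is a hedged sketch, not the locust code from the post: the zero address below is a placeholder, and only the request payload is built, not sent.

```python
import json

# Geth exposes a JSON-RPC endpoint (by default on http://localhost:8545).
# web3's eth.get_balance(address) boils down to an "eth_getBalance" call;
# building the payload by hand shows what each simulated user sends.
def balance_request(address, request_id=1):
    return {
        "jsonrpc": "2.0",
        "method": "eth_getBalance",
        "params": [address, "latest"],
        "id": request_id,
    }

# A placeholder address, standing in for one of the 10 found at init time.
payload = balance_request("0x0000000000000000000000000000000000000000")
print(json.dumps(payload))
```

In the actual load test, locust would POST this payload (via web3) once per simulated user iteration and record the response time.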
A dirty patch to fix Django annotate related group by year/month/day related bug
Recently I was working on a Django project with a team, where we wanted to run some group-by queries for analytical data representation. As we already know, Django does not support group-by directly, but there are ways to achieve it using Django's values and annotate functions.
(Model.objects
    .annotate(year=ExtractYear('timestamp'))
    .values('year')
    .annotate(ycount=Count('id')))
It was supposed to return a QuerySet containing a count of the entries created in each year. Instead, it was returning a QuerySet containing individual rows.
As the first step of my investigation, I logged the SQL generated by this queryset, and it looked like this:
SELECT EXTRACT(YEAR FROM `tablename`.`timestamp`) AS `year`, COUNT(`tablename`.`id`) AS `ycount` FROM `tablename` GROUP BY EXTRACT(YEAR FROM `tablename`.`timestamp`), `tablename`.`timestamp`
The SQL query that I wanted my ORM to create was:
SELECT EXTRACT(YEAR FROM `tablename`.`timestamp`) AS `year`, COUNT(`tablename`.`id`) AS `ycount` FROM `tablename` GROUP BY EXTRACT(YEAR FROM `tablename`.`timestamp`)
The difference was subtle: Django was grouping by two fields, and that extra timestamp column was the reason behind the unintended result.
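The effect of that extra GROUP BY column is easy to reproduce outside Django. The following sketch uses sqlite3 from the standard library (so strftime stands in for MySQL's EXTRACT(YEAR FROM ...)) with a made-up table, to show how also grouping by the raw timestamp splits every row into its own group:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (id INTEGER PRIMARY KEY, timestamp TEXT)")
conn.executemany(
    "INSERT INTO tablename (timestamp) VALUES (?)",
    [("2020-01-01",), ("2020-06-15",), ("2021-03-10",)],
)

# Grouping by the extracted year only: one row per year, counts aggregated.
good = conn.execute(
    "SELECT strftime('%Y', timestamp) AS year, COUNT(id) AS ycount "
    "FROM tablename GROUP BY strftime('%Y', timestamp)"
).fetchall()

# Grouping by the year AND the raw timestamp (what Django generated):
# every distinct timestamp becomes its own group, so counts are all 1.
bad = conn.execute(
    "SELECT strftime('%Y', timestamp) AS year, COUNT(id) AS ycount "
    "FROM tablename GROUP BY strftime('%Y', timestamp), timestamp"
).fetchall()

print(sorted(good))  # [('2020', 2), ('2021', 1)]
print(sorted(bad))   # [('2020', 1), ('2020', 1), ('2021', 1)]
```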
How could we bypass this bug in Django, given that we had no way to group the timestamps? The solution I had in mind was: while running the query, what if I temporarily replace the value of timestamp at runtime? Since neither values nor F allows replacing the value of a field, I had to rely on the extra function that comes with Django.
(Model.objects
    .annotate(year=ExtractYear('timestamp'))
    .values('year')
    .extra(select={'timestamp': 'year'})
    .annotate(ycount=Count('id')))
This produced the following SQL:
SELECT EXTRACT(YEAR FROM `tablename`.`timestamp`) AS `year`, COUNT(`tablename`.`id`) AS `ycount` FROM `tablename` GROUP BY EXTRACT(YEAR FROM `tablename`.`timestamp`), (year)
It is probably not the ideal solution, but it got things done until the Django team fixes the underlying problem. If you have a better solution in mind, I would love to talk about it and implement it.
Writing a k8s controller in GoLang: delete pods on secret change
The motivation behind writing this was to explore how custom controllers work in Kubernetes. As I was doing it, I felt like building something that solves a real problem. Practically every deployment we use has some form of secret mounted into it. It is common practice to rotate those secrets every now and then, but one problem we face is that after we change a Kubernetes secret, the change is not reflected in the pods immediately. That is not a flaw: the idea is that the secrets change along with a new deployment of the application. But for people like me, who want to see the changes immediately, it can be annoying sometimes. One way to solve the problem is to kill the pods associated with the deployment one by one; as the pods are recreated, they pick up the latest secret instead of the old one. Usually developers use the kubectl command to delete the pods, but in this blog I am going to write a custom controller in Go.
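The core decision the controller has to make is: given the pods in a namespace, which ones mount the secret that just changed? The sketch below illustrates just that selection logic in Python (the post's controller is in Go); the dict shapes mirror the Kubernetes pod spec (spec.volumes[].secret.secretName), and in a real controller this data would come from the API server and the matched pods would then be deleted via the API.

```python
# Find the names of pods that mount the given secret as a volume, so the
# controller can delete them and let their deployment recreate them with
# the fresh secret. Pods are plain dicts shaped like Kubernetes pod specs.
def pods_mounting_secret(pods, secret_name):
    matched = []
    for pod in pods:
        for volume in pod.get("spec", {}).get("volumes", []):
            if volume.get("secret", {}).get("secretName") == secret_name:
                matched.append(pod["metadata"]["name"])
                break  # one matching volume is enough
    return matched

# Hypothetical pods: web-1 mounts the secret, web-2 does not.
pods = [
    {"metadata": {"name": "web-1"},
     "spec": {"volumes": [{"secret": {"secretName": "api-creds"}}]}},
    {"metadata": {"name": "web-2"},
     "spec": {"volumes": [{"emptyDir": {}}]}},
]
print(pods_mounting_secret(pods, "api-creds"))  # ['web-1']
```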
Shallow diving k8s components: etcd
Etcd is a highly available key-value store that holds all the data necessary for running a Kubernetes cluster. The first time I learned about etcd, I asked myself: why? There are so many production-ready key-value databases out there. Why did the Kubernetes team choose etcd? What am I missing? That led me to learn more about etcd. Etcd is perfect for Kubernetes for at least two reasons. One is that it is robust by nature: it makes sure the data is consistent across the cluster, and it is highly available. The other is that it has a feature called watch, which allows an observer to subscribe to changes on a particular piece of data. That goes perfectly with Kubernetes's design paradigm.
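The watch idea itself is simple enough to show with a toy, in-process key-value store (this is an illustration of the concept, not etcd's real API): observers register a callback on a key and get notified on every write, which is essentially how Kubernetes components react to state changes.

```python
# A toy watchable key-value store: put() notifies every watcher of the
# key with the old and new values, mimicking the spirit of etcd's watch.
class WatchableStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}

    def watch(self, key, callback):
        # Subscribe to future changes of a key.
        self._watchers.setdefault(key, []).append(callback)

    def put(self, key, value):
        old = self._data.get(key)
        self._data[key] = value
        for cb in self._watchers.get(key, []):
            cb(key, old, value)

events = []
store = WatchableStore()
store.watch("/pods/web-1", lambda k, old, new: events.append((k, old, new)))
store.put("/pods/web-1", "Running")
store.put("/pods/web-1", "Terminating")
print(events)
# [('/pods/web-1', None, 'Running'), ('/pods/web-1', 'Running', 'Terminating')]
```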
Collecting Docker and syslogs using SSL-enabled Filebeat with Open Distro ELK
docker-compose.yml
version: '3'
services:
  oelk-node1:
    image: amazon/opendistro-for-elasticsearch:0.9.0
    container_name: oelk-node1
    environment:
      - cluster.name=oelk-cluster
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - opendistro_security.ssl.http.enabled=false
      - path.repo=/usr/share/elasticsearch/backup
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - oelk-data1:/usr/share/elasticsearch/data
      - /var/log/elasticsearchbkup:/usr/share/elasticsearch/backup
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - oelk-net
  oelk-node2:
    image: amazon/opendistro-for-elasticsearch:0.9.0
    container_name: oelk-node2
    environment:
      - cluster.name=oelk-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.zen.ping.unicast.hosts=oelk-node1
      - opendistro_security.ssl.http.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - oelk-data2:/usr/share/elasticsearch/data
    networks:
      - oelk-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:0.9.0
    container_name: oelk-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://oelk-node1:9200
      ELASTICSEARCH_HOSTS: https://oelk-node1:9200
    networks:
      - oelk-net
  logstash:
    image: docker.elastic.co/logstash/logstash:6.7.1
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
      - "./certs:/etc/certs"
    ports:
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - oelk-net
    depends_on:
      - oelk-node1
      - oelk-node2
  filebeat:
    hostname: filebeat
    build:
      context: filebeat
      dockerfile: Dockerfile
    volumes:
      - "/var/lib/docker/containers:/usr/share/dockerlogs/data:ro"
      - "/var/logs:/usr/share/syslogs:ro"
      - "/var/log/syslog:/var/log/syslog.log:ro"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./certs:/etc/certs"
    networks:
      - oelk-net
    depends_on:
      - logstash
volumes:
  oelk-data1:
  oelk-data2:
networks:
  oelk-net:
pipeline/logstash.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/certs/ca.crt"]
    ssl_certificate => "/etc/certs/logstash.crt"
    ssl_key => "/etc/certs/logstash.key"
    ssl_verify_mode => "force_peer"
  }
  # http {
  #   port => 5044
  # }
}
filter {
  # if [docker][image] =~ /^logstash/ {
  #   drop { }
  # }
  mutate {
    rename => ["host", "server"]
    convert => {"server" => "string"} # this may not be necessary, but added just in case
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => "oelk-node1:9200"
    user => admin
    password => admin
  }
}
filebeat/Dockerfile
FROM docker.elastic.co/beats/filebeat:6.7.1
#FROM docker-logs-elk/filebeat:1.0.0

# Copy our custom configuration file
COPY config/filebeat.yml /usr/share/filebeat/filebeat.yml

USER root
# Create a directory to map volume with all docker log files
#RUN mkdir /usr/share/filebeat/dockerlogs
RUN chown -R root /usr/share/filebeat/filebeat.yml
RUN chmod -R go-w /usr/share/filebeat/filebeat.yml
filebeat.yml
filebeat.inputs:
  - type: docker
    combine_partial: true
    containers:
      path: "/usr/share/dockerlogs/data"
      stream: "stdout"
      ids:
        - "*"
  # - type: log
  #   # Change to true to enable this input configuration.
  #   enabled: true
  #   # Paths that should be crawled and fetched. Glob based paths.
  #   paths:
  #     - /var/log/syslog.log

# filebeat.prospectors:
#   - type: log
#     enabled: true
#     paths:
#       - '/usr/share/dockerlogs/data/*/*-json.log'
#     json.message_key: log
#     json.keys_under_root: true
#     processors:
#       - add_docker_metadata: ~

output:
  logstash:
    hosts: ["logstash:5044"]
    ssl.certificate_authorities: ["/etc/certs/ca.crt"]
    ssl.certificate: "/etc/certs/beat.crt"
    ssl.key: "/etc/certs/beat.key"
How to add flask-admin to a Blueprint?
Those who work only with closed-source tools may not appreciate the freedom we have with open-source ones. In real life our requirements change over time, and the tools we use also grow, covering things their authors did not have in mind when they started the project. As developers we want our tools to do different things, and from person to person and project to project we have different senses of beauty and different code-organization philosophies. In computer science in general, we try to map our problem onto a known solution we have already solved before. So when you have access to the source code of your tool, you can dig into it, extend or alter its functionality, and shape a solution that matches your situation.
For example, after a couple of years working with the Python Flask framework, I realized how much it has grown over time: it does pretty much everything Django is capable of, and is arguably better because of its modularity and flexibility. For the project I am working on, I use Flask, flask-admin for the administrative panel, and Flask blueprints to separate the different components of the project. Flask-admin is not very comfortable or easy to attach to a blueprint. That actually makes sense, because it provides an admin panel, and an admin panel should be attached to the main app rather than a sub-app like a blueprint. But I had a different use case: I wanted to add my custom views to the admin panel, and I did not want them in my app.py; I wanted them in my controller. Every other class architecture I had in mind would cause a circular dependency, which always sends me into a panic. I may not be a very neat, clean, and tidy person in my personal life, but I try to keep my code pretty and tidy, and that is what made me dig into the source code of these libraries during office hours to write this. Enough talk; if Linus Torvalds ever visits my blog, he is going to get really mad at me for talking too much. So here you go, the code I am using that satisfies my need:
# admin_blueprint.py
from flask import Blueprint
from flask_admin import Admin


class AdminBlueprint(Blueprint):
    views = None

    def __init__(self, *args, **kargs):
        self.views = []
        super(AdminBlueprint, self).__init__(
            'admin2', __name__, url_prefix='/admin2',
            static_folder='static', static_url_path='/static/admin')

    def add_view(self, view):
        # Collect views now; attach them to a real Admin at register time.
        self.views.append(view)

    def register(self, app, options, first_registration=False):
        admin = Admin(app, name='microblog', template_mode='adminlte')
        for v in self.views:
            admin.add_view(v)
        return super(AdminBlueprint, self).register(app, options, first_registration)
# app/admin/controllers.py
from flask_admin.contrib.sqla import ModelView

from admin_blueprint import AdminBlueprint
from common.models import MyModel, db

app = AdminBlueprint('admin2', __name__, url_prefix='/admin2',
                     static_folder='static', static_url_path='/static/admin')
app.add_view(ModelView(MyModel, db.session))
# app/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

# Define the WSGI application object
app = Flask(__name__, template_folder="../templates", static_folder="../templates")

from app.api.controllers import app as api
from app.frontend.controllers import app as frontend
from app.admin.controllers import app as admin

# Register blueprint(s)
app.register_blueprint(api)
app.register_blueprint(frontend)
app.register_blueprint(admin)

# This replaces the following code that I had:
# from flask_admin import Admin
# from flask_admin.contrib.sqla import ModelView
# from common.models import *
# admin = Admin(app, name='microblog', template_mode='adminlte')
# admin.add_view(ModelView(MyModel, db.session))
My First ever attempt at Digital Painting
This is my first ever attempt at “Digital Painting(!)”. Someday I want to make something better than this . . . 😀
*.VMG to *.DOC Decoder
I have written a decoder that converts *.vmg files to *.doc.
I think this will help you decode *.VMG files to *.DOC, since it has been reported that ABC Amber Nokia Converter sometimes fails to convert this type of file, and I have no other software for this kind of task. See if it helps!
Caution: I suspect it may not work 90% of the time, but the rest of the time IT WILL WORK FINE! VMG To DOC Decoder .jar
Deshi Example of Recursive drawing
Happy Birthday Bangladesh.
Though I am a little late, check out the code below: our Smriti Soudho can be a local example of recursion. (Well, maybe it is nothing impressive, but I wasted a lot of time making it, so I wanted to share it among friends, LOL 😀 ):
import java.awt.*;
import javax.swing.*;

@SuppressWarnings("serial")
public class smrityShoudho extends JPanel {

    static JFrame frame;

    // Recursively draws a line, then shifts the next one down and toward
    // the given horizontal direction until it leaves the frame.
    public void paintLine(Graphics g, int x1, int y1, int x2, int y2, int dir) {
        g.drawLine(x1, y1, x2, y2);
        if (x2 < 0 || x2 > frame.getWidth() || y2 < 0 || y2 > frame.getHeight())
            return;
        paintLine(g, x1, y1 + frame.getHeight() / 8,
                  x2 - frame.getWidth() / 14 * dir, y2, dir);
    }

    public void paint(Graphics g) {
        // Draw two mirrored fans of lines starting from the top centre.
        paintLine(g, frame.getWidth() / 2, 0, frame.getWidth() / 2, frame.getHeight(), 1);
        paintLine(g, frame.getWidth() / 2, 0, frame.getWidth() / 2, frame.getHeight(), -1);
    }

    public static void main(String args[]) {
        frame = new JFrame();
        frame.add(new smrityShoudho());
        frame.setSize(700, 400);
        frame.setVisible(true);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }
}
Those who do not know about Bangladesh and this monument should check the following links:
http://en.wikipedia.org/wiki/Jatiyo_Smriti_Soudho
http://en.wikipedia.org/wiki/Bangladesh_Liberation_War
Jatiyo Sriti Soudho (Bengali: জাতীয় স্মৃতি সৌধ Jatio Sriti Shoudho), or National Martyrs’ Memorial, is a monument in Bangladesh. It is the symbol of the valour and the sacrifice of the 3 million lives lost in the Bangladesh Liberation War of 1971, which brought the independence of Bangladesh from Pakistani rule. The monument is located in Savar, about 35 km north-west of the capital, Dhaka. It was designed by Syed Mainul Hossain. The main monument is composed of seven isosceles triangular planes, each varying in height and base: the highest one has the smallest base, while the broadest base has the lowest height. The planes are folded at the middle and placed one after another, and the highest point of the structure reaches 150 feet. This unique arrangement of the planes has created a structure that seems to change its configuration when viewed from different angles. The architect used concrete for the monument, while all the other structures and pavements of the complex are made of red bricks; the use of different materials has added to the gravity of the monument. The seven planes symbolize seven decades, seven heroes, and seven great leaders, and where the seven planes meet at a single point at the top, the one country: BANGLADESH!
Thanks for your toleration. 😛