Protobuf: Migrating from proto2.6 to proto3

Uninstall protobuf-compiler
To remove just the protobuf-compiler package itself from Ubuntu:

sudo apt-get remove protobuf-compiler
Uninstall protobuf-compiler and its dependent packages
To remove the protobuf-compiler package and any dependent packages that are no longer needed from Ubuntu:

sudo apt-get remove --auto-remove protobuf-compiler
Purging protobuf-compiler
If you also want to delete the configuration and/or data files of protobuf-compiler from Ubuntu:

sudo apt-get purge protobuf-compiler

To delete the configuration and/or data files of protobuf-compiler and its dependencies from Ubuntu Trusty, execute:

sudo apt-get purge --auto-remove protobuf-compiler

Once uninstalled, follow the steps described below to install proto3:

curl -OL https://github.com/google/protobuf/releases/download/v3.2.0/protoc-3.2.0-linux-x86_64.zip

unzip protoc-3.2.0-linux-x86_64.zip -d protoc3 

Note: run 'sudo apt install unzip' if the unzip program is not already installed.

sudo mv protoc3/bin/* /usr/local/bin/

sudo mv protoc3/include/* /usr/local/include/

To check where the protobuf compiler is installed and which version it is:

user@LT-201:~/Downloads$ which protoc
/usr/local/bin/protoc
user@LT-201:~/Downloads$ protoc --version
libprotoc 3.2.0
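As a quick sanity check that the new compiler works (a minimal sketch; test.proto and the Person message are illustrative, not part of the original steps), note that proto3 requires an explicit syntax declaration and drops the proto2 required/optional field labels:

cat > test.proto <<'EOF'
syntax = "proto3";

message Person {
  string name = 1;
  int32 id = 2;
}
EOF

protoc --cpp_out=. test.proto    # should produce test.pb.h and test.pb.cc without errors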

VS Code for Angular – Setup

Step 1: Install VS Code

Step 2: Install debugger-for-chrome extension

Step 3:

Press F1 to open the Command Palette in VS Code and execute the "Tasks: Configure Task Runner" command; this will generate a sample tasks.json file in the .vscode directory.

The .vscode folder holds the launch.json and tasks.json files.

Ensure that launch.json has "sourceMaps": true,

Example:
"configurations": [
    {
        "type": "chrome",
        "request": "launch",
        "name": "Launch Chrome",
        "url": "http://localhost:4200",
        "sourceMaps": true,
        "webRoot": "${workspaceRoot}"
    }
]
The tasks.json file should contain:
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "tsc",
    "isShellCommand": true,
    "args": ["-p", "."],
    "showOutput": "silent",
    "problemMatcher": "$tsc"
}
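Note that newer VS Code releases use version 2.0.0 of the tasks schema; a roughly equivalent configuration under that schema (a sketch, not taken from the original setup) would be:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "tsc build",
            "type": "shell",
            "command": "tsc",
            "args": ["-p", "."],
            "problemMatcher": "$tsc"
        }
    ]
}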

Note: if the .vscode folder does not exist, create it manually in the root directory of the project and add launch.json manually.
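For example, from the project root:

mkdir -p .vscode
# then add .vscode/launch.json and .vscode/tasks.json with the contents shown above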

MySQL Remote access denied – Host is not allowed to connect to this MySQL server

At times, when the MySQL server resides on a different machine and you try to access it remotely, you may get an "access denied" or "Host is not allowed to connect" error.

To resolve the issue,

First, ensure that the MySQL server is configured to accept remote connections from the user by verifying my.cnf.

Locate my.cnf at:

vim /etc/mysql/my.cnf

At times the configuration file may instead be located at /etc/mysql/mysql.conf.d/mysqld.cnf.

Comment out the following lines (so that they look like this):
#bind-address           = 127.0.0.1
#skip-networking

Restart the MySQL Server
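For example, on Ubuntu (the service name assumes the stock mysql-server package):

sudo systemctl restart mysql
# or, on older releases using upstart/sysvinit:
sudo service mysql restart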


Give permissions to the user who is trying to connect to the MySQL server:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'USERNAME'@'%' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;
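The grant above opens the account to every database from any host; a narrower grant is usually safer (mydb and the 192.168.1.% subnet below are placeholders). Also note that IDENTIFIED BY inside GRANT works on MySQL 5.x, while MySQL 8.0 requires a separate CREATE USER statement first.

mysql> GRANT ALL PRIVILEGES ON mydb.* TO 'USERNAME'@'192.168.1.%' IDENTIFIED BY 'PASSWORD';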

Verify that the privileges have been assigned
SELECT * from information_schema.user_privileges where grantee like "'USERNAME'%";

Finally, flush the privileges for the changes to take effect.
FLUSH PRIVILEGES;

Just in case privileges were granted by mistake, revoke them from the user:

mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'USERNAME'@'%';
The following will revoke all privileges and the GRANT OPTION for USERNAME from a particular IP:

mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'USERNAME'@'1.2.3.4';
It's better to check the information_schema.user_privileges table after running the REVOKE command.

MySQL Configuration file in Workbench

A fresh installation of MySQL Workbench on Ubuntu at times does not reflect the MySQL configuration file being used by the server. To set it up, we first need to know which configuration files are read at startup.

Following are the steps to determine which configuration file is used.

Determine where the mysql process was started from:

user@Sys-201:/etc/mysql/conf.d$ ps aux | grep mysql
mysql    22272  0.1  1.2 1509844 198076 ?      Ssl  14:51   0:01 /usr/sbin/mysqld
user     22312  0.0  0.0  12548  3024 ?        S    14:51   0:00 /bin/bash /usr/bin/mysql-workbench
user     22315  0.0  0.0   4508   860 ?        S    14:51   0:00 /bin/sh /usr/bin/catchsegv /usr/lib/mysql-workbench/mysql-workbench-bin
user     22317  1.5  1.0 1506344 164356 ?      SLl  14:51   0:11 /usr/lib/mysql-workbench/mysql-workbench-bin
user     22514  0.0  0.0  14224   932 pts/1    S+   15:04   0:00 grep --color=auto mysql

Next to the mysqld row, the process arguments should indicate the configuration file that mysqld is using when it starts.

If no configuration file is shown, double-check which mysqld binary is running by issuing:
user@Sys-201:/etc/mysql/conf.d$ which mysqld
/usr/sbin/mysqld

If the binary above matches the one listed in the process output, issue the following command to see the order in which the configuration files are loaded:
user@Sys-201:/etc/mysql/conf.d$ mysqld --verbose --help | grep -A 1 "Default options"
mysqld: Can't change dir to '/var/lib/mysql/' (Errcode: 13 - Permission denied)
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf
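Alternatively, assuming the same mysqld binary, the options it would actually pick up from those files can be printed with:

mysqld --print-defaults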

 

Once you know for certain which configuration file is used, it can be associated with MySQL Workbench as follows:

Database -> Manage Connections -> select the connection in the left-hand pane -> the right-hand pane is enabled; select the System Profile tab -> specify the config file under Configuration File.

 

MySQL Metrics

If you can’t measure it, you can’t improve it.

The biggest question when it comes to measuring is where to start. Here we lay down a few metrics that help in setting things up to measure MySQL performance. This is a building block, not a comprehensive list of metrics for measuring MySQL performance.

Performance is captured in terms of response times: the quicker the response time, the better the performance. Although performance could also be improved by adding resources, we will not delve into resources here; instead, we profile to find the areas where performance could be improved.

Ensure that the MySQL slow query log is enabled; check the variable by issuing:

SHOW VARIABLES LIKE '%slow_query_log%';
SET GLOBAL slow_query_log = 'ON';

Validate that the slow query log is enabled by issuing the same SHOW VARIABLES command; it should be ON now.

Also look up where the slow queries are logged by issuing:

SHOW VARIABLES LIKE '%slow_query_log_file%';

If you want to change where the slow query log is saved, issue:

SET GLOBAL slow_query_log_file = '/var/logs/mysql-slow.log';
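It is also worth checking the threshold that decides what counts as a slow query; the default long_query_time is 10 seconds, and the 1-second value below is only an illustrative choice:

SHOW VARIABLES LIKE 'long_query_time';
SET GLOBAL long_query_time = 1;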

With this we are set to start with pt-query-digest, which profiles the slow queries. This helps us dig deep into where the time is being spent, in execution and in waiting, while a query runs.

user@Sys-201:~$ pt-query-digest --limit=100% /var/lib/mysql/Sys-201-slow.log > ptqd1.out

If you don't see any information in the specified output file, you probably need to run the command as root.
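pt-query-digest ships with Percona Toolkit; if the command is not available, on Ubuntu it can usually be installed with:

sudo apt-get install percona-toolkit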

https://poormansprofiler.org/

MySQL does not need SQL

https://www.olindata.com/en/blog/2014/08/analysing-slow-mysql-queries-pt-query-digest

Profiling your slow queries using pt-query-digest and some love from Percona Server

https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl

http://www.php-trivandrum.org/open-php-myprofiler/

Calculate Mysql Memory Usage – Quick Stored Procedure

https://www.percona.com/blog/2006/05/17/mysql-server-memory-usage/

http://mysql.rjweb.org/doc.php/memory

http://20bits.com/article/10-tips-for-optimizing-mysql-queries-that-dont-suck

MySQL Optimizations

Partitioning

MySQL partitioning is a concept with two primary contexts: horizontal partitioning and vertical partitioning.

Partitioning of relational data usually refers to decomposing your tables either row-wise (horizontally) or column-wise (vertically).

Vertical partitioning, aka row splitting (the row is split by its columns), uses the same splitting techniques as database normalization, but the term (vertical/horizontal) data partitioning usually refers to a physical optimization, whereas normalization is an optimization at the conceptual level.

This is usually preferred when a table contains rarely used data; moving that data into another table reduces the amount of data touched when running queries.

Horizontal partitioning, or sharding, replicates (copies) the schema and then divides the data across instances based on a shard key.

Vertical partitioning involves dividing up the schema (and the data goes along for the ride).

Horizontal Partitioning in a database

All fields are kept. Example: table Employees has id, name, geographical location, email, designation, phone.

Example 1: Keep all the fields and distribute the records across multiple machines, say ids 1–100000 on one machine, 100000–200000 on another, and so on.

Example 2: Keep separate databases per region, e.g. Asia Pacific, North America.

Key: picking a set of rows based on a criterion (see the sketch below).
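As a minimal sketch of horizontal partitioning within a single MySQL server, the built-in range partitioning can split the Employees example by id (the table and column names below simply mirror the example above and are not from the original post; sharding across separate machines additionally requires application- or proxy-level routing):

CREATE TABLE employees (
    id INT NOT NULL,
    name VARCHAR(100),
    location VARCHAR(100),
    email VARCHAR(100),
    designation VARCHAR(50),
    phone VARCHAR(20)
)
PARTITION BY RANGE (id) (
    PARTITION p0 VALUES LESS THAN (100000),
    PARTITION p1 VALUES LESS THAN (200000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);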

Vertical Partitioning in a database

It is similar to normalization, where the same table is divided into multiple tables and joined back together when required.

Example: id, name, and designation are put in one table, and phone and email, which may not be frequently accessed, are put in another.

Key: picking a set of columns based on a criterion (see the sketch below).
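A corresponding sketch of vertical partitioning for the same example (again, the table and column names are illustrative): the frequently used columns stay in one table, the rarely accessed contact details move to another, and a join brings them back together only when needed.

CREATE TABLE employee_core (
    id INT PRIMARY KEY,
    name VARCHAR(100),
    designation VARCHAR(50)
);

CREATE TABLE employee_contact (
    employee_id INT PRIMARY KEY,
    phone VARCHAR(20),
    email VARCHAR(100),
    FOREIGN KEY (employee_id) REFERENCES employee_core(id)
);

-- Join only when the rarely used columns are actually needed:
SELECT c.id, c.name, ct.email
FROM employee_core c
JOIN employee_contact ct ON ct.employee_id = c.id;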

  • Horizontal/Vertical Scaling is different from partitioning

Horizontal Scaling:

is about adding more machines to improve the responsiveness and availability of any system, including a database. The idea is to distribute the workload across multiple machines.

Vertical Scaling:

is about adding more capability, in the form of CPU and memory, to the existing machine or machines to improve the responsiveness and availability of any system, including a database. In a virtual machine setup, this can be configured virtually instead of adding real physical machines.

https://www.nylas.com/blog/growing-up-with-mysql/

MySQL does not need SQL

https://medium.com/@jeeyoungk/how-sharding-works-b4dec46b3f6

http://highscalability.com/blog/2013/4/15/scaling-pinterest-from-0-to-10s-of-billions-of-page-views-a.html

 http://softwareas.com/horizontally-scaling-databases-mysqlpostgres-sharding/

 https://github.com/evanelias/jetpants

Elias_percona_live_sc_2013

Massively Sharded MySQL at Tumblr Presentation

RDS_WhitePaper

https://noc.wikimedia.org/

http://blog.maxindelicato.com/2008/12/scalability-strategies-primer-database-sharding.html

https://www.percona.com/blog/2016/08/30/mysql-sharding-with-proxysql/

https://www.percona.com/blog/2009/08/06/why-you-dont-want-to-shard/

https://blog.asana.com/2015/04/sharding-is-bitter-medicine/

http://project-voldemort.com

vitess.io

http://www.craigkerstiens.com/2012/11/30/sharding-your-database/

https://www.percona.com/blog/2017/01/30/mysql-sharding-models-for-saas-applications/