The Google Structured Data Testing Tool
If the /usr/include folder has been deleted from Ubuntu, it can be restored back to normal as follows:
Log in to a console (use CTRL+ALT+F1 to switch to one) and type out the following command; it reinstalls every package that owns a file under /usr/include, and everything should be fine.
sudo apt-get install --reinstall $(dpkg -S /usr/include/*|cut -d':' -f1|tr -d ','|tr '\n' ' ')
To remove just the protobuf-compiler package itself from Ubuntu:
sudo apt-get remove protobuf-compiler
Uninstall protobuf-compiler and its dependent packages
To remove the protobuf-compiler package and any other dependent packages which are no longer needed from Ubuntu:
sudo apt-get remove --auto-remove protobuf-compiler
If you also want to delete the configuration and/or data files of protobuf-compiler from Ubuntu:
sudo apt-get purge protobuf-compiler
To delete the configuration and/or data files of protobuf-compiler and its dependencies from Ubuntu Trusty, execute:
sudo apt-get purge --auto-remove protobuf-compiler
Once uninstalled, follow the steps described below to install proto3:
curl -OL https://github.com/google/protobuf/releases/download/v3.2.0/protoc-3.2.0-linux-x86_64.zip
unzip protoc-3.2.0-linux-x86_64.zip -d protoc3
Note: run the command 'sudo apt install unzip' if the program 'unzip' is currently not installed.
sudo mv protoc3/bin/* /usr/local/bin/
sudo mv protoc3/include/* /usr/local/include/
To check where the protobuf compiler is installed, or the version of the protobuf compiler:
user@LT-201:~/Downloads$ which protoc
user@LT-201:~/Downloads$ protoc --version
Steps I took to fix this problem, in case someone else encounters it:
Step 1: Install VS Code
Step 2: Install debugger-for-chrome extension
Press F1 to open the command palette in VS Code and execute "Tasks: Configure Task Runner Command"; this will generate a sample tasks.json file in the .vscode directory.
.vscode folder has launch.json and tasks.json files
Ensure that launch.json has "sourceMaps": true,
Note: if the .vscode folder does not exist, create it manually in the root directory of the project and add launch.json manually.
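For reference, a minimal launch.json for the debugger-for-chrome extension might look like the following; the name, url, and webRoot values here are assumptions for a local dev server and will differ per project:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch Chrome against localhost",
      "type": "chrome",
      "request": "launch",
      "url": "http://localhost:8080",
      "webRoot": "${workspaceFolder}",
      "sourceMaps": true
    }
  ]
}
```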
At times, when the MySQL server resides on a different machine and you want to access MySQL remotely, you get an "access denied" error.
To resolve the issue,
First, ensure that the MySQL server is configured to accept remote connections from the user by verifying my.cnf.
Locate my.cnf at /etc/mysql/my.cnf (at times the configuration file can also be located at /etc/mysql/mysql.conf.d/mysqld.cnf) and comment out the lines:
#bind-address = 127.0.0.1
#skip-networking
Restart the MySQL server.
Give permissions to the user who is trying to connect to the MySQL server:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'USERNAME'@'%' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;
Verify that the privileges have been assigned:
SELECT * FROM information_schema.user_privileges WHERE grantee LIKE "'USERNAME'%";
Finally, flush the privileges for them to take effect:
FLUSH PRIVILEGES;
Just in case privileges have been granted by mistake, revoke them from the user:
mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'USERNAME'@'%';
The following will revoke all options for USERNAME from a particular IP:
mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'USERNAME'@'220.127.116.11';
It is better to check the information_schema.user_privileges table after running a REVOKE command.
A fresh installation of MySQL Workbench on Ubuntu at times does not reflect the MySQL configuration file being used by the server. To set it up, we first need to know which configuration files are used at start-up.
Following are the steps to determine which configuration file is used.
Determine the mysql process and where it was initiated from:
user@Sys-201:/etc/mysql/conf.d$ ps aux | grep mysql
mysql 22272 0.1 1.2 1509844 198076 ? Ssl 14:51 0:01 /usr/sbin/mysqld
user 22312 0.0 0.0 12548 3024 ? S 14:51 0:00 /bin/bash /usr/bin/mysql-workbench
user 22315 0.0 0.0 4508 860 ? S 14:51 0:00 /bin/sh /usr/bin/catchsegv /usr/lib/mysql-workbench/mysql-workbench-bin
user 22317 1.5 1.0 1506344 164356 ? SLl 14:51 0:11 /usr/lib/mysql-workbench/mysql-workbench-bin
user 22514 0.0 0.0 14224 932 pts/1 S+ 15:04 0:00 grep --color=auto mysql
Beside the mysqld row, the command column should indicate the configuration file which mysqld is using when it starts up.
If no file is found, double-check which instance of mysqld is running by issuing:
user@Sys-201:/etc/mysql/conf.d$ which mysqld
If the one above matches the one listed, issue the following command to detect the order in which the configuration files are loaded:
user@Sys-201:/etc/mysql/conf.d$ mysqld --verbose --help | grep -A 1 "Default options"
mysqld: Can't change dir to '/var/lib/mysql/' (Errcode: 13 - Permission denied)
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf
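A quick sketch to check which of the three default option files from that list actually exist on the current host (the paths are the ones mysqld reported above):

```shell
# Report, for each default option file, whether it exists on this host
for f in /etc/my.cnf /etc/mysql/my.cnf "$HOME/.my.cnf"; do
  if [ -f "$f" ]; then
    echo "present: $f"
  else
    echo "absent:  $f"
  fi
done
```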
Once you know for certain which configuration file is used, it can be associated with MySQL Workbench as follows:
Database -> Manage Connections -> select the connection in the left-hand pane -> in the now-enabled right-hand pane, select the System Profile tab -> Configuration File, and specify the config file.
If you can’t measure it, you can’t improve it.
So the biggest question when it comes to measuring is where to start. Here we lay down a few metrics which will help us set things up to measure MySQL performance. This is a building block, not a comprehensive list of metrics for measuring MySQL performance.
Performance is captured in terms of response times: the quicker the response time, the better the performance. Improving performance can also be approached by adding resources, but we will not delve into resources here; instead, we will profile where the time goes to identify the areas where performance can be improved.
Ensure that the MySQL slow query log is enabled; check the variable by issuing the command
SHOW VARIABLES LIKE '%slow_query_log%';
SET GLOBAL slow_query_log = 'ON';
Validate whether the slow query log is enabled by issuing the same command; it should be ON now.
Also look up where the slow queries are logged by issuing the command
SHOW VARIABLES LIKE '%slow_query_log_file%';
Just in case you want to change where the query log is saved, issue the command
SET GLOBAL slow_query_log_file = '/var/logs/mysql-slow.log';
With this we are set to start off with pt-query-digest, which profiles the slow queries. It helps us dig deep into where the time is being spent, in terms of both execution and waiting, while a query is running.
user@Sys-201:~$ pt-query-digest --limit=100% /var/lib/mysql/Sys-201-slow.log > ptqd1.out
Just in case you don't get to see any information in the specified output file, you probably need to run it as root.
MySQL partitioning is a concept with two primary contexts: horizontal partitioning and vertical partitioning.
Partitioning relational data usually refers to decomposing your tables, breaking them up either row-wise (horizontally) or column-wise (vertically).
Vertical partitioning, aka row splitting, uses the same splitting techniques as database normalization, but usually the term (vertical / horizontal) data partitioning refers to a physical optimization whereas normalization is an optimization on the conceptual level.
This is usually preferred when one table contains rarely used data: you move the rarely used data into another table, thus reducing the overall row size scanned when running queries.
"Horizontal partitioning", or sharding, is replicating [copying] the schema, and then dividing the data based on a shard key.
"Vertical partitioning" involves dividing up the schema (and the data goes along for the ride).
Ex: Table Employees has id, name, geographical location, email, designation, phone.
Ex 1: Keeping all the fields and distributing the records across multiple machines, say ids 1-100000 on one machine, 100001-200000 on the next, and so on over multiple machines.
Ex 2: Keeping separate databases for regions, e.g. Asia Pacific, North America.
Key: picking a set of rows based on a criterion.
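As a sketch of Ex 1, MySQL's built-in RANGE partitioning can split such a table by id on a single server (the column list is abbreviated from the example; distributing partitions across physically separate machines is true sharding and is handled at the application or proxy layer):

```sql
CREATE TABLE Employees (
    id INT NOT NULL,
    name VARCHAR(100),
    email VARCHAR(100),
    PRIMARY KEY (id)
)
PARTITION BY RANGE (id) (
    PARTITION p0 VALUES LESS THAN (100001),   -- ids 1-100000
    PARTITION p1 VALUES LESS THAN (200001),   -- ids 100001-200000
    PARTITION p2 VALUES LESS THAN MAXVALUE    -- everything else
);
```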
It is similar to normalization, where the same table is divided into multiple tables and used with joins if required.
Ex: id, name, designation are put in one table, and phone and email, which may not be frequently accessed, are put in another.
Key: picking a set of columns based on a criterion.
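The vertical split described above could be sketched as two tables joined on id (the table names here are made up for illustration):

```sql
-- Frequently accessed columns stay in the core table
CREATE TABLE EmployeeCore (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(100),
    designation VARCHAR(100)
);
-- Rarely accessed columns move to a companion table, joined on id
CREATE TABLE EmployeeContact (
    id INT NOT NULL PRIMARY KEY,
    phone VARCHAR(20),
    email VARCHAR(100)
);
```

A SELECT ... FROM EmployeeCore JOIN EmployeeContact USING (id) brings a full row back together when the rarely used columns are needed.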
Scaling out is about adding more machines to enable improved responsiveness and availability of any system, including a database. The idea is to distribute the workload across multiple machines.
Scaling up is about adding more capability, in the form of CPU and memory, to an existing machine or machines, to enable improved responsiveness and availability of any system, including a database. In a virtual machine setup this can be configured virtually instead of adding real physical machines.