It can be interesting to have a proxy in front of your database servers. Even if your setup involves a single server, this proxy can be used for query caching or query monitoring, for example.

In this article series I’ll talk about ProxySQL, which I discovered a few weeks ago, why it’s a powerful tool for many use cases and why - if you are looking for a “query routing” solution in front of your database servers - you should take a look at it.

First of all, here is a link to the nice and beautiful ProxySQL documentation.

If you are running a Debian server, it’s easy to install and works out of the box; thanks to the developers for providing apt repositories.

Installation on a Debian system

  • For the sake of this article, I’ll deploy ProxySQL on a standalone Debian server.
Hostname IP
psql01 192.168.56.1

Following this documentation

apt install -y --no-install-recommends lsb-release wget apt-transport-https ca-certificates gnupg
wget -O - 'https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/repo_pub_key' | apt-key add - 
echo deb https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/$(lsb_release -sc)/ ./ | tee /etc/apt/sources.list.d/proxysql.list
apt update
apt install proxysql

By default, ProxySQL listens on 127.0.0.1:6032 for the administrative shell, and on 0.0.0.0:6033 for incoming MySQL queries. You can configure ProxySQL using the configuration file shipped in /etc (proxysql.cnf by default), or load your configuration directly into the internal database through the admin interface.

I prefer to use the latter: it lets you configure ProxySQL, load the configuration to runtime and, once you are happy with it, save it to disk. This small feature makes a big difference. Imagine, for example, that you delete your MySQL query rules, load them to runtime and your application stops working. Thanks to the three-step configuration, your empty query rule set was never saved to disk, so you only have to restart ProxySQL and the rules on disk will be restored to runtime.

So to summarize, there are 3 steps during the configuration:

  • Write your rules in the admin shell
  • Load the rules to runtime
  • Save the rules to disk, so they are loaded again after the next ProxySQL restart (see the quick reference below).
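Each configuration area has its own pair of statements for steps 2 and 3. As a quick reference (these are the standard admin statements; use whichever matches the tables you changed):

-- step 2: push the in-memory configuration to runtime
LOAD MYSQL VARIABLES TO RUNTIME;
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL USERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
-- step 3: persist the in-memory configuration to disk
SAVE MYSQL VARIABLES TO DISK;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL USERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;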

The first configuration steps

As we saw previously, ProxySQL listens on two different ports:

  • 6032: administrative shell
  • 6033: incoming MySQL connections (it’s 3306 read from right to left 😉)

How do I connect to this thing?

If you want access to the administrative shell, you need to install a MySQL client. On Debian 11 you can do it with:

apt update
apt install mariadb-client --no-install-recommends

For testing purposes you can then do:

mysql -u admin -padmin -h 127.0.0.1 -P6032

This should lead you to a MySQL-like shell. You can exit with exit or Ctrl + D.

I usually add an alias to make the connection easier and avoid typing the full command each time:

alias proxysqlshell='mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt ProxySQL\> '

Now you have a cool and clean MySQL prompt when you type proxysqlshell in your terminal (don’t forget to reload/reconnect your shell so the alias is picked up).

So, what’s next ?

One of the first things to do, according to the ProxySQL documentation, is to configure a monitoring user on your ProxySQL nodes. ProxySQL uses this user to monitor the backend servers, so it must also exist on your MySQL servers with the proper privileges.

Below are the statements to configure this user on all ProxySQL servers:

root@psql01:~# proxysqlshell
ProxySQL>UPDATE global_variables SET variable_value='psql-monitoring' WHERE variable_name='mysql-monitor_username';
ProxySQL>UPDATE global_variables SET variable_value='psql-monitoring-password' WHERE variable_name='mysql-monitor_password';
ProxySQL>LOAD MYSQL VARIABLES TO RUNTIME;
ProxySQL>SAVE MYSQL VARIABLES TO DISK;

So here we configured psql-monitoring as the monitoring user, and psql-monitoring-password as its password. Note the LOAD ... TO RUNTIME and SAVE ... TO DISK statements, which are two different but important steps.
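If you want to double-check what was applied, a simple read-only query against the global_variables table shows the current values (nothing is modified here):

SELECT variable_name, variable_value
FROM global_variables
WHERE variable_name LIKE 'mysql-monitor_%';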

Now you have to connect to your MySQL backend servers and create this monitoring user. If you have a multi-server setup (especially master/slave), create this user ONLY on the master: the replication mechanism will do its job 😉

Here is an example:

mysql> create user 'psql-monitoring'@'192.168.56.%' identified by 'psql-monitoring-password';
mysql> grant usage on *.* to 'psql-monitoring'@'192.168.56.%';
mysql> flush privileges;

Here I used the MySQL % wildcard in the host part. The wildcard usage is fine for my lab needs; in a production environment I would probably make it more restrictive. Limiting the host to 192.168.56.% is not the safest configuration, but it’s better than % alone.

What about my applications?

For this first article, we keep it simple.

We have two databases located on two different servers:

App        Db name        IP             MySQL user            MySQL password
Nextcloud  nextcloud_db   192.168.56.3   nextcloud_sql_admin   nextcloud_sql_pass
GLPI       glpi_db        192.168.56.4   glpi_sql_admin        glpi_sql_pass

The first thing to do is to add the database servers to our ProxySQL:

insert into mysql_servers (hostgroup_id,hostname,port,comment) values (20,'192.168.56.3',3306,'Nextcloud db server');
insert into mysql_servers (hostgroup_id,hostname,port,comment) values (30,'192.168.56.4',3306,'Glpi db server');
load mysql servers to runtime;
save mysql servers to disk;
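Once the servers are loaded to runtime, you can verify them and check that the monitoring user configured earlier actually reaches them; the runtime_mysql_servers table and the monitor schema are standard parts of the admin interface, though exact columns may vary slightly between ProxySQL versions:

-- servers as seen by the running configuration
SELECT hostgroup_id, hostname, port, status, comment FROM runtime_mysql_servers;
-- latest connection attempts made by the monitor module
SELECT hostname, port, time_start_us, connect_error
FROM monitor.mysql_server_connect_log
ORDER BY time_start_us DESC LIMIT 5;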

A new notion appears in the statements above: the hostgroup. The hostgroup is a very important setting in ProxySQL; it sits at the heart of the routing process. In a single master / multiple slaves scenario, a basic usage is to put the master in a dedicated hostgroup and all the slaves in another one.
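For illustration only (the IP addresses and hostgroup numbers below are hypothetical, not part of this lab), such a master/slave layout would look like this:

-- hostgroup 10: writer (master), hostgroup 11: readers (slaves)
insert into mysql_servers (hostgroup_id,hostname,port,comment) values (10,'192.168.56.10',3306,'master');
insert into mysql_servers (hostgroup_id,hostname,port,comment) values (11,'192.168.56.11',3306,'slave 1');
insert into mysql_servers (hostgroup_id,hostname,port,comment) values (11,'192.168.56.12',3306,'slave 2');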

For the moment we just add each backend server to its own hostgroup, to keep things easy to understand.

Now we need to add the users allowed to connect from the client applications to the backend MySQL servers through ProxySQL. Each user must exist on the backend server, and we have to declare it on the ProxySQL side as well:

insert into mysql_users (username,password,active,default_hostgroup) values ('nextcloud_sql_admin','nextcloud_sql_pass',1,20);
insert into mysql_users (username,password,active,default_hostgroup) values ('glpi_sql_admin','glpi_sql_pass',1,30);
load mysql users to runtime;
save mysql users to disk;

Now all queries sent to ProxySQL with these users:

  • nextcloud_sql_admin will be routed to hostgroup 20 (192.168.56.3, nextcloud_db)
  • glpi_sql_admin will be routed to hostgroup 30 (192.168.56.4, glpi_db)
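You can confirm this mapping from the admin shell; the runtime_mysql_users table reflects the users actually in use (a read-only check):

SELECT username, default_hostgroup, active FROM runtime_mysql_users;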

Yes, but my apps talk to tcp/3306, not to tcp/6033

By default ProxySQL listens for MySQL traffic on tcp/6033, but the MySQL default port is tcp/3306, and sometimes you can’t or don’t want to change the destination port in your application. The simplest solution is to change the mysql-interfaces variable from 0.0.0.0:6033 to 0.0.0.0:3306. Note that mysql-interfaces cannot be changed at runtime: you only save it to disk, and a ProxySQL restart is required for it to take effect.

update global_variables set variable_value='0.0.0.0:3306' where variable_name='mysql-interfaces';
save mysql variables to disk;
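To apply the change, restart ProxySQL and check the listener (assuming a systemd-based Debian install; ss is part of the iproute2 package):

systemctl restart proxysql
# the MySQL-facing listener should now be on port 3306
ss -lntp | grep proxysql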

Now you can connect to your ProxySQL with your MySQL credentials:

mysql -u nextcloud_sql_admin -p -h 192.168.56.1

ProxySQL can now route your queries automatically based on username.
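If you want to confirm that the queries really land on the right backend, the stats_mysql_connection_pool table in the admin interface keeps per-hostgroup counters (column names may differ slightly between versions):

SELECT hostgroup, srv_host, srv_port, status, Queries
FROM stats_mysql_connection_pool;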

Enjoy, you just installed your first ProxySQL server 😉