To solve our problem we are running a hybrid cloud: a few of our services run on AWS, and the rest run on-premises in our country, where our users are and where AWS does not provide service. To make this work, we need a local database replica with read capability.
First we need to create a replication user on the Aurora master:
CREATE USER 'replica'@'%' IDENTIFIED BY 'slavepass';
GRANT REPLICATION SLAVE ON *.* TO 'replica'@'%';
Then create a new DB cluster parameter group and set binlog_format to MIXED.
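You can do this from the RDS console, or script it with the AWS CLI. A rough sketch follows; the group name my-binlog-group and the parameter group family (which depends on your Aurora engine version) are placeholders to adjust:

aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-binlog-group \
    --db-parameter-group-family aurora-mysql5.7 \
    --description "Cluster parameter group with binlog enabled"

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-binlog-group \
    --parameters "ParameterName=binlog_format,ParameterValue=MIXED,ApplyMethod=pending-reboot"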
Modify the Aurora cluster to use the custom parameter group, then restart the DB to apply the changes. Now if you run the following command, you will see the binlog file name and position:
SHOW MASTER STATUS;
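The output will look something like this (the file name and position below are illustrative; note down whatever your cluster reports, since we will need both values for CHANGE MASTER TO later):

File: mysql-bin-changelog.000002
Position: 120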
Now we need to dump the master's data to a SQL file so that we can feed it to our slave database:
mysqldump --single-transaction --routines --triggers --events -h XXX.azhxxxxxx2zkqxh3j.us-east-1.rds.amazonaws.com -u bhuvi --password='xxx' my_db_name > my_db_name.sql
Depending on your database size, this can be gigabytes or even terabytes of data, so it will take time to download.
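If bandwidth is tight, one option is to compress the dump in flight with gzip; the same command, sketched with the endpoint redacted as above:

mysqldump --single-transaction --routines --triggers --events -h XXX.azhxxxxxx2zkqxh3j.us-east-1.rds.amazonaws.com -u bhuvi --password='xxx' my_db_name | gzip > my_db_name.sql.gz

On the slave side, decompress it with gunzip before importing (or pipe zcat output straight into mysql).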
Run the following to find out which configuration file your MySQL server reads:
mysqld --help --verbose | grep my.cnf
For me it is /usr/local/etc/my.cnf
vi /usr/local/etc/my.cnf
and set server-id (it must be unique within the replication topology):
[mysqld]
server-id = 2
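The new server-id only takes effect after MySQL restarts. How you restart depends on your install; on a systemd-based Linux host it is typically:

sudo systemctl restart mysql

while on macOS with Homebrew it would be brew services restart mysql.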
Now let's import the dump into our local MySQL:
mysql -u root --password='xxx' my_db_name < my_db_name.sql
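If the import complains that the database does not exist, create it first; a one-liner assuming the same root credentials:

mysql -u root --password='xxx' -e "CREATE DATABASE IF NOT EXISTS my_db_name;"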
Now we need to let our slave database know who its master is. Use the replica user and password created in the first step, and the log file and position from SHOW MASTER STATUS:
CHANGE MASTER TO
  MASTER_HOST = 'RDS END Point name',
  MASTER_PORT = 3306,
  MASTER_USER = '',
  MASTER_PASSWORD = '',
  MASTER_LOG_FILE = '',
  MASTER_LOG_POS = ;
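For example, with the replica user from the first step, the RDS endpoint from the mysqldump command, and the illustrative binlog coordinates shown earlier (substitute your own values):

CHANGE MASTER TO
  MASTER_HOST = 'XXX.azhxxxxxx2zkqxh3j.us-east-1.rds.amazonaws.com',
  MASTER_PORT = 3306,
  MASTER_USER = 'replica',
  MASTER_PASSWORD = 'slavepass',
  MASTER_LOG_FILE = 'mysql-bin-changelog.000002',
  MASTER_LOG_POS = 120;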
Now we need to start the slave.
START SLAVE;
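To verify that replication is running, check the slave status:

SHOW SLAVE STATUS\G

Both Slave_IO_Running and Slave_SQL_Running should say Yes, and Seconds_Behind_Master should drop toward 0 as the slave catches up.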