Setting Up Asynchronous MySQL Replication: A Step-by-Step Guide
MySQL replication is a fundamental feature for achieving high availability, scalability, and robust backup strategies. Asynchronous replication, the most common type, ensures that data written to the primary server (Master) is eventually copied to one or more secondary servers (Slaves), without the Master waiting for the Slave to confirm the transaction.
This comprehensive guide provides a detailed, step-by-step tutorial for configuring a standard Master-Slave asynchronous replication setup using MySQL. We will cover the necessary server configuration adjustments, user setup, and the critical steps for initializing the data synchronization.
Prerequisites and Overview
Before starting the configuration, ensure you have:
- Two running MySQL servers (Server A: Master, Server B: Slave).
- Network connectivity between the two servers (usually TCP port 3306 must be open).
- Root or administrative access to configure both MySQL instances and modify the my.cnf or my.ini configuration files.
For the purpose of this guide, we assume the Master Server IP is 192.168.1.100 and the Slave Server IP is 192.168.1.101.
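Before touching any MySQL configuration, it is worth confirming that the Slave can actually reach the Master on port 3306. A minimal check, assuming netcat (nc) is available on the Slave:
# Run on the Slave server's shell; a "succeeded" or "open" message means the port is reachable
nc -zv 192.168.1.100 3306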
Phase 1: Configuring the Master Server
The Master server must be configured to record all data modification events in a binary log file, which the Slave will read from.
Step 1: Edit the Master Configuration File (my.cnf)
Locate the MySQL configuration file (typically /etc/mysql/my.cnf or /etc/my.cnf) and add or modify the following directives within the [mysqld] section.
[mysqld]
# 1. Unique ID for this server (must be greater than 0)
server-id=1
# 2. Enable binary logging
log-bin=mysql-bin
# 3. List of databases to replicate (optional, but recommended)
# binlog-do-db=mydatabase
# 4. Optional: Ensure connection uses TCP/IP, useful for testing
# bind-address=0.0.0.0
Note: The server-id must be unique across all servers participating in the replication topology.
Step 2: Restart MySQL and Verify Binary Logging
After saving the configuration file, restart the MySQL service on the Master server.
# Debian/Ubuntu
sudo systemctl restart mysql
# RHEL/CentOS
sudo systemctl restart mysqld
Log into the MySQL command line interface and verify that binary logging is active:
SHOW VARIABLES LIKE 'log_bin';
-- Value should be ON
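While you are in the CLI, it is also worth confirming the server ID and that a binary log file has actually been created (an optional sanity check):
SHOW VARIABLES LIKE 'server_id';
-- Value should be 1
SHOW BINARY LOGS;
-- Should list at least one file, e.g. mysql-bin.000001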
Step 3: Create the Replication User
Replication requires a dedicated user account on the Master server with specific privileges that the Slave will use to connect and retrieve the binary logs. Ensure this user can connect remotely from the Slave's IP address (192.168.1.101).
CREATE USER 'repl_user'@'192.168.1.101' IDENTIFIED BY 'secure_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'192.168.1.101';
FLUSH PRIVILEGES;
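Optionally, you can verify both the firewall path and the new credentials in one step by logging in from the Slave; this assumes the mysql client is already installed on the Slave server:
# Run on the Slave server's shell; a successful "1" result confirms connectivity and credentials
mysql -h 192.168.1.100 -u repl_user -p -e "SELECT 1;"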
Step 4: Record the Current Master Status
Before proceeding, we must establish the exact position (File and Position) in the binary log where the Slave should begin reading. This step is critical for synchronization. Note that the read lock below only lasts as long as the client session that issued it, so keep that session open until the snapshot in Step 6 has been taken.
FLUSH TABLES WITH READ LOCK; -- Temporarily halt writes
SHOW MASTER STATUS;
-- IMPORTANT: Note down these two values:
-- File: mysql-bin.000001
-- Position: 1234
-- DO NOT UNLOCK TABLES YET IF YOU ARE TAKING AN INITIAL SNAPSHOT (Step 6)
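If you prefer to capture these coordinates non-interactively, they can also be read from a second terminal on the Master while the locking session above stays open, for example:
# Run on the Master server's shell (second terminal); keeps a copy of the output for later
mysql -u root -p -e "SHOW MASTER STATUS\G" | tee master_status.txt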
Phase 2: Configuring the Slave Server
Step 5: Edit the Slave Configuration File (my.cnf)
Configure the Slave server with a unique ID and optional settings.
[mysqld]
# Unique ID for this server (must be different from Master)
server-id=2
# Optional: Recommended for safety
read_only=1
# Optional: Enable relay logging
relay_log=mysql-relay-bin
Restart the MySQL service on the Slave server after saving the changes.
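After the restart, a quick check from the Slave's MySQL CLI confirms the new settings took effect:
SHOW VARIABLES LIKE 'server_id';
-- Value should be 2
SHOW VARIABLES LIKE 'read_only';
-- Value should be ON if you enabled it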
Step 6: Initial Data Transfer (Snapshot)
If the Slave server is empty, you must populate it with the Master's current data structure and content. This initial snapshot must be taken while the Master tables are locked (from Step 4).
From the Master server, run the mysqldump command. The --master-data=2 flag records the Master's binary log coordinates at the start of the dump by writing the corresponding CHANGE MASTER TO statement into the dump file as a comment, which simplifies Step 7.
# Run on Master server console/shell
mysqldump -u root -p --all-databases --master-data=2 --single-transaction > master_dump.sql
# Now, return to the Master MySQL CLI and release the lock
UNLOCK TABLES;
Transfer master_dump.sql to the Slave server and import it:
# Run on Slave server console/shell
mysql -u root -p < master_dump.sql
Best Practice: Using --master-data=2 is highly recommended, as it automates capturing the correct binary log position right at the start of the dump.
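Because --master-data=2 writes the statement as a comment, it is not executed during the import; you can read the exact coordinates straight from the dump file instead of relying on your notes from Step 4:
# Run wherever master_dump.sql is located; prints the commented-out statement (your values will differ)
grep -m 1 "CHANGE MASTER TO" master_dump.sql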
Phase 3: Initiating Replication
Step 7: Define the Master Connection
On the Slave server's MySQL command line, execute the CHANGE MASTER TO command, substituting the values noted in Step 4 and the user created in Step 3.
CHANGE MASTER TO
MASTER_HOST='192.168.1.100',
MASTER_USER='repl_user',
MASTER_PASSWORD='secure_password',
MASTER_LOG_FILE='mysql-bin.000001', -- The File recorded in Step 4
MASTER_LOG_POS=1234; -- The Position recorded in Step 4
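For reference, MySQL 8.0.23 and later accept the same step spelled with the newer terminology, CHANGE REPLICATION SOURCE TO; the sketch below mirrors the statement above and assumes the same placeholder values.
CHANGE REPLICATION SOURCE TO
SOURCE_HOST='192.168.1.100',
SOURCE_USER='repl_user',
SOURCE_PASSWORD='secure_password',
SOURCE_LOG_FILE='mysql-bin.000001', -- The File recorded in Step 4
SOURCE_LOG_POS=1234; -- The Position recorded in Step 4
-- Note: if repl_user authenticates with caching_sha2_password (the 8.0 default) over a
-- non-TLS connection, you may also need GET_SOURCE_PUBLIC_KEY=1 or an SSL-enabled link.
On those versions, START SLAVE and SHOW SLAVE STATUS in Step 8 likewise have the aliases START REPLICA and SHOW REPLICA STATUS.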
Step 8: Start Replication and Verification
After defining the connection parameters, start the Slave's replication threads.
START SLAVE;
Verify that the replication threads are running and communicating correctly using the SHOW SLAVE STATUS command:
SHOW SLAVE STATUS\G
Examine the output for the following critical fields:
| Field | Expected Value | Description |
|---|---|---|
| Slave_IO_Running | Yes | Slave is successfully connecting to the Master. |
| Slave_SQL_Running | Yes | Slave is applying transactions to its database. |
| Seconds_Behind_Master | 0 or a low number | Indicates replication latency; should quickly drop to 0. |
If either Slave_IO_Running or Slave_SQL_Running shows No, examine the Last_IO_Error or Last_SQL_Error fields for troubleshooting clues (e.g., firewall issues, incorrect credentials, duplicate keys).
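Once both threads report Yes, a quick end-to-end check is to write something trivial on the Master and confirm it appears on the Slave. This is an optional smoke test and assumes no binlog-do-db filter excludes the hypothetical repl_test schema:
-- Run on the Master
CREATE DATABASE repl_test;
CREATE TABLE repl_test.ping (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO repl_test.ping VALUES (1, 'hello from master');
-- Run on the Slave a moment later; the row should be present
SELECT * FROM repl_test.ping;
-- Clean up on the Master when finished (the DROP replicates as well)
DROP DATABASE repl_test;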
Troubleshooting and Maintenance Tips
Handling Replication Errors
If the Slave encounters an error (e.g., attempting to insert a duplicate primary key), the Slave_SQL_Running thread will stop. You can usually bypass minor, non-critical errors using:
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
Warning: Use SQL_SLAVE_SKIP_COUNTER judiciously. Skipping transactions can lead to data divergence (inconsistency) between the Master and Slave.
Checking for Consistency
While asynchronous replication is efficient, it does not guarantee immediate consistency. For high-stakes environments, utilize tools like Percona Toolkit's pt-table-checksum to periodically check for data drift between the Master and Slave.
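A minimal sketch of such a check, assuming Percona Toolkit is installed on the Master and treating the host, user, and password shown as placeholders for your own environment:
# Run on the Master server's shell; checksums are written to the percona.checksums table
# and any tables that differ on the Slave are reported
pt-table-checksum --replicate=percona.checksums h=192.168.1.100,u=root,p=secure_password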
Managing Binary Logs
Binary logs consume disk space over time. Configure log expiration on the Master to prevent disk overuse:
[mysqld]
# MySQL 5.7 and earlier: remove binary logs older than 7 days
expire_logs_days=7
# MySQL 8.0 and later: expire_logs_days is deprecated; use the equivalent setting in seconds
# binlog_expire_logs_seconds=604800   # 7 days
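Logs can also be purged manually from the Master's CLI; before doing so, check SHOW SLAVE STATUS on each Slave to be sure it has already read past the files you are about to remove.
-- Run on the Master; removes binary log files whose contents predate the given point in time
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;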