This article explains how to set up partitioned tables in the MySQL database behind Zabbix, so that old history and trend data can be removed by dropping partitions instead of slow row-by-row deletes.
Enable per-table tablespaces:
innodb_file_per_table=1
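The setting belongs in the [mysqld] section of my.cnf and only affects tables created (or rebuilt) after it takes effect, so enable it before importing the Zabbix schema if possible. A minimal sketch, assuming the usual config path:

# /etc/my.cnf (path may vary by system)
[mysqld]
innodb_file_per_table = 1

You can confirm it is active with SHOW VARIABLES LIKE 'innodb_file_per_table'; tables that already live in the shared tablespace stay there until rebuilt, e.g. with ALTER TABLE ... ENGINE=InnoDB.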
Modify the table indexes:
Skip this step on Zabbix 3.2 and later.
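Background for the two ALTERs below: MySQL requires every unique key of a partitioned table, including the primary key, to contain the partitioning column, and these tables will be partitioned on clock. In pre-3.2 schemas, history_text and history_log have a primary key on id that does not include clock, so it is replaced with ordinary indexes. You can inspect the current indexes first with standard MySQL commands:

mysql> SHOW INDEX FROM history_text;
mysql> SHOW CREATE TABLE history_log\G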
mysql> Alter table history_text drop primary key, add index (id), drop index history_text_2, add index history_text_2 (itemid, id);
Query OK, 0 rows affected (0.49 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> Alter table history_log drop primary key, add index (id), drop index history_log_2, add index history_log_2 (itemid, id);
Query OK, 0 rows affected (2.71 sec)
Records: 0 Duplicates: 0 Warnings: 0
Create the stored procedures:
Stored procedure that adds a partition:
DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
    DECLARE RETROWS INT;
    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

    IF RETROWS = 0 THEN
        SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
        SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
        PREPARE STMT FROM @sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;
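For illustration, a one-off manual call (the partition name and boundary here are made-up values) would look like the line below; the procedure only adds the partition if no existing partition already has a boundary at or beyond CLOCK:

mysql> CALL partition_create('zabbix', 'history', 'p201811100000', UNIX_TIMESTAMP('2018-11-11 00:00:00'));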
Stored procedure that drops old partitions:
DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE drop_part_name VARCHAR(16);
    DECLARE myCursor CURSOR FOR
        SELECT partition_name
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
    SET @drop_partitions = "";

    OPEN myCursor;
    read_loop: LOOP
        FETCH myCursor INTO drop_part_name;
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
    END LOOP;

    IF @drop_partitions != "" THEN
        SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
        PREPARE STMT FROM @full_sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
    ELSE
        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
    END IF;
END$$
DELIMITER ;
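A manual call, with a made-up cut-off, would drop every partition of zabbix.history whose name encodes a date earlier than 2018-11-01. Note that the procedure compares the numeric part of partition names such as p201811010000, not the clock values themselves:

mysql> CALL partition_drop('zabbix', 'history', 201811010000);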
Stored procedure that maintains the partitions:
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
    DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE OLD_PARTITION_NAME VARCHAR(16);
    DECLARE LESS_THAN_TIMESTAMP INT;
    DECLARE CUR_TIME INT;

    CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
    SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

    SET @__interval = 1;
    create_loop: LOOP
        IF @__interval > CREATE_NEXT_INTERVALS THEN
            LEAVE create_loop;
        END IF;
        SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
        SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
        IF (PARTITION_NAME != OLD_PARTITION_NAME) THEN
            CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
        END IF;
        SET @__interval = @__interval + 1;
        SET OLD_PARTITION_NAME = PARTITION_NAME;
    END LOOP;

    SET OLDER_THAN_PARTITION_DATE = DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
    CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;
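After a maintenance run you can check the resulting partitions directly in information_schema; partition_description holds each VALUES LESS THAN boundary as a Unix timestamp:

mysql> SELECT partition_name, partition_description FROM information_schema.partitions WHERE table_schema = 'zabbix' AND table_name = 'history';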
Stored procedure that checks whether the table is partitioned and creates the first partition:
DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE RETROWS INT(11);
    DECLARE FUTURE_TIMESTAMP TIMESTAMP;

    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

    IF RETROWS = 1 THEN
        SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
        SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

        -- Create the partitioning query
        SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
        SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

        -- Run the partitioning query
        PREPARE STMT FROM @__PARTITION_SQL;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;
Save the four stored procedures above into one SQL file, partition_call.sql, then load it:
mysql -uzabbix -p'zabbix' zabbix < partition_call.sql
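To confirm the procedures were created:

mysql> SHOW PROCEDURE STATUS WHERE Db = 'zabbix';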
Using the stored procedures:
mysql> CALL partition_maintenance('<zabbix_db_name>', '<table_name>', <days_to_keep_data>, <hourly_interval>, <num_future_intervals_to_create>);
zabbix_db_name: the database name
table_name: the table name
days_to_keep_data: how many days of data to keep
hourly_interval: how often, in hours, a new partition is created
num_future_intervals_to_create: how many partitions to create in this run
For example:
mysql> CALL partition_maintenance('zabbix', 'history', 7, 24, 7);
In this example the history table keeps at most 7 days of data, a new partition is created every 24 hours, and 7 partitions are created in this run.
A single wrapper procedure that calls them all:
DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
    CALL partition_maintenance(SCHEMA_NAME, 'history', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_log', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_str', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_text', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 28, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
Save this statement as a SQL file, partition_all.sql, and import it the same way:
mysql -uzabbix -p'zabbix' zabbix < partition_all.sql
Call it from a scheduled task once a day:
Note:
If the data volume is large, the first run will take a long time, so execute it with nohup (my run took about 15 hours, and Zabbix could not work normally during that period: collected agent data was not displayed, but no data was lost).
nohup time mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');" &> /tmp/file.txt &
From then on you only need to call this stored procedure, once a day:
mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
As a crontab entry:
# crontab -e
0 1 * * * /data/tools/mysql/bin/mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
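Passing -p on the command line triggers the "insecure" warning shown in the sample run below. One way around it (a sketch; the path and credentials are the ones used in this article) is to keep the credentials in a client option file and drop them from the crontab line:

# ~/.my.cnf for the crontab user, chmod 600
[client]
user=zabbix
password=zabbix

0 1 * * * /data/tools/mysql/bin/mysql zabbix -e "CALL partition_maintenance_all('zabbix');"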
Sample run:
mysql> CALL partition_maintenance('zabbix', 'history', 28, 24, 14);
+-----------------------------------------------------------+
| msg |
+-----------------------------------------------------------+
| partition_create(zabbix,history,p201404160000,1397718000) |
+-----------------------------------------------------------+
1 row in set (0.39 sec)
+-----------------------------------------------------------+
| msg |
+-----------------------------------------------------------+
| partition_create(zabbix,history,p201404170000,1397804400) |
+-----------------------------------------------------------+
1 row in set (0.51 sec)
mysql> CALL partition_maintenance_all('zabbix');
+----------------+--------------------+
| table | partitions_deleted |
+----------------+--------------------+
| zabbix.history | N/A |
+----------------+--------------------+
1 row in set (0.01 sec)
....
....
....
+--------------------+--------------------+
| table | partitions_deleted |
+--------------------+--------------------+
| zabbix.trends_uint | N/A |
+--------------------+--------------------+
1 row in set (22.85 sec)
Query OK, 0 rows affected, 1 warning (22.85 sec)
[root@hk-zabbix ~]# mysql -uzabbix -p'zabbix' zabbix -e "CALL partition_maintenance_all('zabbix');"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------------------------------------------------------+
| msg |
+-----------------------------------------------------------+
| partition_create(zabbix,history,p201811080000,1541692800) |
+-----------------------------------------------------------+
+-----------------------------------------------------------+
| msg |
+-----------------------------------------------------------+
| partition_create(zabbix,history,p201811090000,1541779200) |
Disable housekeeping: once partitioning is in place, expired history and trend data is removed by dropping whole partitions, so disable Zabbix's internal housekeeping for history and trends in the frontend (Administration → General → Housekeeping); otherwise the housekeeper keeps issuing slow row-by-row DELETEs against the partitioned tables.
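On Zabbix 2.2 and later the housekeeping switches are also stored in the config table, so you can verify the frontend change from SQL; the column names below are from the 3.x schema and may differ in other versions (0 = disabled):

mysql> SELECT hk_history_mode, hk_trends_mode FROM config;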
Done.