This article demonstrates, with worked examples, how to resize a Ceph RBD image online. The steps are brief and clearly organized, and should help clear up how the procedure works in practice.
ceph rbd online resize
Before resizing
[root@mon0 ceph]# rbd create myrbd/rbd1 -s 1024 --image-format=2
[root@mon0 ceph]# rbd ls myrbd
rbd1
[root@mon0 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.12ce6b8b4567
        format: 2
        features: layering
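The numbers in the rbd info output are consistent with each other: with order 22, each RADOS object is 2^22 bytes = 4 MB, so a 1024 MB image is striped across 1024 / 4 = 256 objects.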
Resizing
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 2048
Resizing image: 100% complete...done.
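Going the other way is deliberately harder: rbd resize refuses to shrink an image unless you pass --allow-shrink, because anything stored past the new size is discarded. A minimal sketch, reusing the image above:

# Shrink back to 1 GB; --allow-shrink is required because data beyond the
# new size is lost. Only do this if the filesystem (if any) has already
# been shrunk, or the data is expendable.
rbd resize myrbd/rbd1 -s 1024 --allow-shrink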
As long as rbd1 has not been formatted and mounted, running rbd resize on its own is enough. If rbd1 has already been formatted and mounted, a few extra steps are needed:
[root@mon0 ceph]# rbd map myrbd/rbd1
[root@mon0 ceph]# rbd showmapped
id pool  image    snap device
0  test  test.img -    /dev/rbd0
1  myrbd rbd1     -    /dev/rbd1
[root@mon0 ceph]# mkfs.xfs /dev/rbd1
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@mon0 ceph]# mount /dev/rbd1 /mnt
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 4096
Resizing image: 100% complete...done.
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# xfs_growfs /mnt
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 1048576
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       4.0G   33M  4.0G   1% /mnt
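Note that xfs_growfs is specific to XFS. If the image had been formatted with ext4 instead, the filesystem-level step would be resize2fs. A minimal sketch, assuming the same mapping at /dev/rbd1 and mount point /mnt (the 8 GB target is just an illustrative value):

rbd resize myrbd/rbd1 -s 8192   # grow the image to 8 GB
resize2fs /dev/rbd1             # grow a mounted ext4 filesystem online to fill the device
df -h /mnt                      # confirm the new capacity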
There is one more case: rbd1 is already attached to a VM:
virsh domblklist myvm
rbd resize myrbd/rbd1
# the new size has to be propagated to the guest with virsh blockresize
virsh blockresize --domain myvm --path vdb --size 100G
rbd info myrbd/rbd1
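As in the bare-metal case, virsh blockresize only enlarges the virtual disk the guest sees; the filesystem on it still has to be grown from inside the VM. A minimal sketch, assuming the disk appears as /dev/vdb in the guest and carries an XFS filesystem mounted at /data (a hypothetical mount point):

# run inside the guest after virsh blockresize has completed
lsblk /dev/vdb        # the guest should now report the enlarged size
xfs_growfs /data      # grow XFS mounted at /data; for ext4 use: resize2fs /dev/vdb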