
How to use Ceph as the backend storage for OpenStack Pike

2023-06-04 22:01


This post explains how to use Ceph as the backend storage for OpenStack Pike, walking step by step through the Cinder, Glance, and nova-compute configuration.

Node layout
10.1.1.1 controller
10.1.1.2 compute
10.1.1.3 middleware
10.1.1.4 network
10.1.1.5 compute2
10.1.1.6 compute3
10.1.1.7 cinder
##Distributed storage
The backend storage is Ceph, with mon_host = 10.1.1.2,10.1.1.5,10.1.1.6
##Create the database, service, and endpoints for cinder
mysql -u root -p
create database cinder;
grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';


The admin credentials file:
cat admin-openrc
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=123456
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://controller:35357/v3
source admin-openrc
Create the cinder user:
openstack user create --domain default --password-prompt cinder
Grant the cinder user the admin role on the service project:
openstack role add --project service --user cinder admin
Create the services:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://cinder:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://cinder:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://cinder:8776/v2/%\(tenant_id\)s


openstack endpoint create --region RegionOne volumev3 public http://cinder:8776/v3/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://cinder:8776/v3/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://cinder:8776/v3/%\(tenant_id\)s
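The six endpoint-create calls above differ only in API version and interface, so a loop can generate them. This sketch only prints the commands; paste its output into a shell with admin-openrc sourced to actually run them:

```shell
# Print the six "openstack endpoint create" commands, one per line.
for svc in volumev2 volumev3; do
  api=${svc#volume}                      # volumev2 -> v2, volumev3 -> v3
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $svc $iface http://cinder:8776/$api/%(tenant_id)s"
  done
done > /tmp/endpoint_cmds.txt
cat /tmp/endpoint_cmds.txt
```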


Create the Ceph pools
Run the following commands on a Ceph node:
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
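To confirm the pools were created, compare `ceph osd pool ls` against the expected names. The sketch below substitutes sample output for the live command so it runs anywhere; on a monitor node, set `actual=$(ceph osd pool ls)` instead. (On Ceph Luminous or newer you would also tag each pool with `ceph osd pool application enable <pool> rbd`.)

```shell
# Stand-in for: actual=$(ceph osd pool ls)
actual="volumes
images
vms"
for p in volumes images vms; do
  if echo "$actual" | grep -qx "$p"; then
    echo "pool $p: ok"
  else
    echo "pool $p: MISSING"
  fi
done > /tmp/pool_check.txt
cat /tmp/pool_check.txt
```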


Ceph client authorization
Because the backend storage is Ceph, the Ceph clients must be authorized so they can access the corresponding pools. Glance, Cinder, and nova-compute all use Ceph.
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rwx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=images'


ceph auth list
client.cinder
key: AQDQEWdaNU9YGBAAcEhKd6KQKHN9HeFIIS4+fw==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rwx pool=images
client.glance
key: AQD4EWdaTdZjJhAAuj8CvNY59evhiGtEa9wLzw==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children,allow rwx pool=images


Create the /etc/ceph directory on the controller, cinder, and compute nodes
Then push the authorization keyrings to those nodes:
ceph auth get-or-create client.glance |ssh controller sudo tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder |ssh cinder sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring
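The five distribution commands above follow one node-to-client pattern, so they can be generated from a small table. This sketch only prints the commands (the mapping is taken from the lines above); run its output from the Ceph admin node:

```shell
# node -> ceph client mapping, taken from the commands above
printf '%s\n' \
  "controller glance" \
  "cinder cinder" \
  "compute cinder" \
  "compute2 cinder" \
  "compute3 cinder" |
while read -r node client; do
  echo "ceph auth get-or-create client.$client | ssh $node sudo tee /etc/ceph/ceph.client.$client.keyring"
done > /tmp/keyring_cmds.txt
cat /tmp/keyring_cmds.txt
```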
Give ceph.client.glance.keyring to the glance user and ceph.client.cinder.keyring to the cinder user:
chown glance.glance /etc/ceph/ceph.client.glance.keyring
chown cinder.cinder /etc/ceph/ceph.client.cinder.keyring
Copy the Ceph configuration file /etc/ceph/ceph.conf to the /etc/ceph directory on the glance, cinder, and compute nodes.
Install and configure the components
cinder node
yum install -y openstack-cinder python-ceph ceph-common python-rbd
The /etc/ceph directory must contain the following files:
[root@cinder ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder  64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root   root   263 Jan 26 15:53 ceph.conf


cp /etc/cinder/cinder.conf{,.bak}
>/etc/cinder/cinder.conf


cat /etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@middleware
log_dir = /var/log/cinder
enabled_backends = ceph


[database]
connection = mysql+pymysql://cinder:123456@middleware/cinder


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456


[oslo_concurrency]
lock_path = /var/lib/cinder/tmp




[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a

Restart the cinder services
systemctl restart openstack-cinder-api.service
systemctl restart openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service
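After the restart, `openstack volume service list` should show cinder-volume on the ceph backend in state `up`. The check below runs against sample output so it is self-contained; on the controller, feed it the live command instead (`openstack volume service list -f value -c Binary -c Host -c State`):

```shell
# Sample stand-in for the live "openstack volume service list" output
sample="cinder-scheduler cinder up
cinder-volume cinder@ceph up"
echo "$sample" |
awk '$1 == "cinder-volume" && $3 == "up" { print "cinder-volume: up on " $2 }' > /tmp/cinder_check.txt
cat /tmp/cinder_check.txt
```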




glance node
Install the Ceph client:
yum install -y python-ceph ceph-common python-rbd
The /etc/ceph directory must contain the following files:
[root@controller ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 glance glance  64 Jan 23 19:31 ceph.client.glance.keyring
-rw-r--r-- 1 root   root   416 Jan 24 10:32 ceph.conf
Ceph-related configuration in /etc/glance/glance-api.conf:
[DEFAULT]
#enable image locations and take advantage of copy-on-write cloning for images
show_image_direct_url = true
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Restart the glance service
systemctl restart openstack-glance-api.service


compute node
Install the Ceph client:
yum install -y python-ceph ceph-common python-rbd
Generate a UUID with uuidgen; it must match the rbd_secret_uuid set in /etc/cinder/cinder.conf:
f85def47-c1ac-46fe-a1d5-c0139c46d91a
Create the secret file:
cat secret.xml 
<secret ephemeral='no' private='no'>
  <uuid>f85def47-c1ac-46fe-a1d5-c0139c46d91a</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
Define the secret and set its value:
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret f85def47-c1ac-46fe-a1d5-c0139c46d91a --base64 $(cat ceph.client.cinder.keyring |awk '/key/{print $3}')
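The awk in the secret-set-value command pulls the base64 key out of the keyring file. A self-contained check of that extraction, using a sample keyring with the same shape as /etc/ceph/ceph.client.cinder.keyring and the client.cinder key listed earlier:

```shell
# Build a sample keyring identical in shape to the real one
cat > /tmp/sample.keyring <<'EOF'
[client.cinder]
	key = AQDQEWdaNU9YGBAAcEhKd6KQKHN9HeFIIS4+fw==
EOF
# Same extraction as in the virsh secret-set-value command above
awk '/key/{print $3}' /tmp/sample.keyring
```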


virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 f85def47-c1ac-46fe-a1d5-c0139c46d91a  ceph client.cinder secret


Configure /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
cpu_mode = none
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
The /etc/ceph directory must contain the following files:
[root@compute ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder  64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root   root   263 Jan 26 15:53 ceph.conf
Restart the nova-compute service
systemctl restart openstack-nova-compute.service
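With all three services backed by Ceph, a short smoke test ties everything together: create a volume and boot an instance, then check that the corresponding RBD images appear in the pools. The sketch only prints the sequence (the flavor, image, and network names are examples, not taken from this deployment); run its output on the controller with admin-openrc sourced:

```shell
# Print a smoke-test command sequence (test-vol, cirros, demo-net are example names)
cat <<'EOF' > /tmp/smoke_test.txt
openstack volume create --size 1 test-vol
rbd -p volumes ls    # the new volume appears as volume-<uuid>
openstack server create --flavor m1.tiny --image cirros --network demo-net test-vm
rbd -p vms ls        # the instance disk appears as <uuid>_disk
EOF
cat /tmp/smoke_test.txt
```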

That covers how to use Ceph as the backend storage for OpenStack Pike. Thanks for reading, and I hope it helps.
