Continued from the previous part.
Compute service:
Install and configure the controller node:
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler
At this point one package is missing: python-pygments has to be downloaded and installed manually; see the sketch below.
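A minimal sketch of the manual install, assuming the python-pygments RPM has already been downloaded into the current directory (the exact file name depends on where you fetch it from):
# yum localinstall python-pygments-*.noarch.rpm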
1. Source the admin credentials to gain access to admin-only CLI commands:
# . admin-openrc
2. To create the service credentials, complete these steps:
Create the nova user:
openstack user create --domain default \
--password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova \
--description "OpenStack Compute" compute
Create the Compute service API endpoints:
# openstack endpoint create --region RegionOne \
> compute public http://172.25.33.10:8774/v2.1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne compute internal http://172.25.33.10:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field | Value |
+--------------+---------------------------------------------+
| enabled | True |
| id | 44b3adb6ce2348908abbf4d3f9a52f2b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a394a2c40c144d6fb9db567a1105c44a |
| service_name | nova |
| service_type | compute |
| url | http://172.25.33.10:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+
# openstack endpoint create --region RegionOne compute admin http://172.25.33.10:8774/v2.1/%\(tenant_id\)s
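To double-check that the public, internal, and admin Compute endpoints were all registered, you can list them (the --service filter is available in recent python-openstackclient releases; a plain openstack endpoint list works too):
# openstack endpoint list --service compute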
Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
1. In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:
[DEFAULT]
enabled_apis = osapi_compute,metadata
In the ``[api_database]`` and ``[database]`` sections, configure database access:
[api_database]
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
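These connection strings assume that the nova and nova_api databases already exist and that the nova database user's password is nova (normally prepared in the previous part). If they do not exist yet, a minimal sketch of the required SQL:
$ mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';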
In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure RabbitMQ message queue access:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = rabbit
In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
In the ``[DEFAULT]`` section, configure ``my_ip`` to use the management interface IP address of the controller node (172.25.33.10 in this environment):
[DEFAULT]
my_ip = 172.25.33.10
In the ``[DEFAULT]`` section, enable support for the Networking service:
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall by using the ``nova.virt.firewall.NoopFirewallDriver``.
In the ``[vnc]`` section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
In the ``[glance]`` section, configure the location of the Image service API (the controller node in this environment):
[glance]
api_servers = http://172.25.33.10:9292
In the ``[oslo_concurrency]`` section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Populate the Compute databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
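As a quick sanity check that the sync populated the schema, you can confirm the databases now contain tables (assuming the nova/nova credentials from the connection strings above):
# mysql -u nova -pnova -h 172.25.33.10 -e 'SHOW TABLES;' nova_api | head
# mysql -u nova -pnova -h 172.25.33.10 -e 'SHOW TABLES;' nova | head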
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# grep ^[a-Z] /etc/nova/nova.conf
rpc_backend = rabbit
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 172.25.33.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
debug=true
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
api_servers = http://172.25.33.10:9292
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
lock_path = /var/lib/nova/tmp
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
Install and configure the compute node:
minion2: 172.25.33.11
Install the packages:
# yum install openstack-nova-compute
Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure RabbitMQ message queue access:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on the compute node; here:
my_ip = 172.25.33.11
In the ``[DEFAULT]`` section, enable support for the Networking service:
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall by using the nova.virt.firewall.NoopFirewallDriver.
In the ``[vnc]`` section, enable and configure remote console access:
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://172.25.33.10:6080/vnc_auto.html
In the ``[glance]`` section, configure the location of the Image service API:
[glance]
api_servers = http://172.25.33.10:9292
In the ``[oslo_concurrency]`` section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Configuration missing from the official documentation; without it the service reports errors such as: oslo_service.service [-] Error starting thread.
or PlacementNotConfigured: This compute is not configured to talk to the placement service:
[placement]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
os_region_name = RegionOne
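After restarting nova-compute (below), one way to confirm the placement errors are gone is to grep the compute log; this assumes the default log location /var/log/nova/nova-compute.log:
# grep -i placement /var/log/nova/nova-compute.log | tail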
Finalize the installation
1. Determine whether your compute node supports hardware acceleration for virtual machines:
# egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration and no additional configuration is needed.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM:
# egrep -c '(vmx|svm)' /proc/cpuinfo
0
Edit the ``[libvirt]`` section of the /etc/nova/nova.conf file as follows:
[libvirt]
virt_type = qemu
2. Start the Compute service and its dependencies, and configure them to start automatically when the system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Verify operation: perform these steps on the controller node (172.25.33.10).
Source the admin credentials to gain access to admin-only CLI commands:
# . admin-openrc
List the service components to verify that each process launched and registered successfully:
# openstack compute service list
+----+------------------+----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                 | Zone     | Status  | State | Updated At                 |
+----+------------------+----------------------+----------+---------+-------+----------------------------+
| 1  | nova-conductor   | server10.example     | internal | enabled | up    | 2017-04-04T14:07:49.000000 |
| 2  | nova-scheduler   | server10.example     | internal | enabled | up    | 2017-04-04T14:07:51.000000 |
| 3  | nova-consoleauth | server10.example     | internal | enabled | up    | 2017-04-04T14:07:50.000000 |
| 6  | nova-compute     | server11.example.com | nova     | enabled | up    | 2017-04-04T14:07:51.000000 |
+----+------------------+----------------------+----------+---------+-------+----------------------------+
Networking service:
Controller node:
OpenStack Networking (neutron) manages all of the networking facets of the Virtual Networking Infrastructure (VNI) and the access-layer aspects of the Physical Networking Infrastructure (PNI) in an OpenStack environment. OpenStack Networking lets tenants create advanced virtual network topologies that can include services such as a firewall, a load balancer, and a virtual private network (VPN).
Configuration:
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. To create the service credentials, complete these steps:
Create the ``neutron`` user:
openstack user create --domain default --password-prompt neutron
Add the ``admin`` role to the ``neutron`` user:
openstack role add --project service --user neutron admin
Create the ``neutron`` service entity:
# openstack service create --name neutron \
> --description "OpenStack Networking" network
Create the Networking service API endpoints:
# openstack endpoint create --region RegionOne \
> network public http://172.25.33.10:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0092457b66b84d869d710e84c715219c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a33565b8fdfa4531963fdbb74245d960 |
| service_name | neutron |
| service_type | network |
| url | http://172.25.33.10:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne network internal http://172.25.33.10:9696
# openstack endpoint create --region RegionOne network admin http://172.25.33.10:9696
This deployment uses the provider (public) network, i.e. option 1:
Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses; only the ``admin`` or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service (private) networks. The ``demo`` or another unprivileged user can manage their own self-service networks, including routers that connect self-service and provider networks. Floating IP addresses additionally let instances on self-service networks reach external networks such as the Internet.
yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
Configure the server component
The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-in.
Edit the ``/etc/neutron/neutron.conf`` file and complete the following actions:
In the ``[database]`` section, configure database access:
[database]
connection = mysql+pymysql://neutron:neutron@172.25.33.10/neutron
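As with the nova databases, this connection string assumes a neutron database and a neutron user with password neutron already exist; if not, a minimal sketch:
$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';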
In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins (the provider-network option needs no routing service or overlapping IP addresses):
[DEFAULT]
core_plugin = ml2
service_plugins =
In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure RabbitMQ message queue access:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
In the ``[oslo_concurrency]`` section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the following actions:
In the ``[ml2]`` section, enable flat and VLAN networks:
[ml2]
type_drivers = flat,vlan
In the ``[ml2]`` section, disable self-service (private) networks:
[ml2]
tenant_network_types =
In the ``[ml2]`` section, enable the Linux bridge mechanism:
[ml2]
mechanism_drivers = linuxbridge
In the ``[ml2]`` section, enable the port security extension driver:
[ml2]
extension_drivers = port_security
In the ``[ml2_type_flat]`` section, configure the provider virtual network as a flat network:
[ml2_type_flat]
flat_networks = provider
In the ``[securitygroup]`` section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
enable_ipset = True
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.
Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following actions:
In the ``[linux_bridge]`` section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = public:eth0
Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying provider physical network interface (eth0 here); if you are unsure of the name, see the check below.
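If you are unsure which interface name to use, list the host's interfaces and pick the one that carries the 172.25.33.0/24 provider/management network:
# ip addr show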
In the ``[vxlan]`` section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = False
In the ``[securitygroup]`` section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following actions:
In the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the network:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure the metadata agent
Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following actions:
In the ``[DEFAULT]`` section, configure the metadata host and the shared secret:
[DEFAULT]
nova_metadata_ip = 172.25.33.10
metadata_proxy_shared_secret = redhat
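Here redhat is used as the shared secret for simplicity; in a real deployment you would typically generate a random value (and reuse the same value in nova.conf below), for example:
# openssl rand -hex 10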
Configure the Compute service to use the Networking service
Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
In the ``[neutron]`` section, configure access parameters, enable the metadata proxy, and set the shared secret:
[neutron]
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = redhat
Finalize the installation
The Networking service initialization scripts expect a symbolic link ``/etc/neutron/plugin.ini`` pointing to the ML2 plug-in configuration file ``/etc/neutron/plugins/ml2/ml2_conf.ini``. If this symlink does not exist, create it with the following command:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
The sync has succeeded if the final line of output is OK.
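If you want to double-check the migration state afterwards, neutron-db-manage (an Alembic wrapper) also exposes a current subcommand; a sketch using the same config files:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron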
Restart the Compute API service:
# systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start when the system boots:
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
For networking option 2, also enable the layer-3 service and configure it to start when the system boots:
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Compute node:
# yum install openstack-neutron-linuxbridge ebtables ipset
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Edit the ``/etc/neutron/neutron.conf`` file and complete the following actions:
In the ``[database]`` section, comment out any ``connection`` options, because compute nodes do not directly access the database.
In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure RabbitMQ message queue access:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
In the ``[oslo_concurrency]`` section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Provider (public) network option (the configuration can be copied over from minion1):
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.
Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following actions:
In the ``[linux_bridge]`` section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = public:eth0
In the ``[vxlan]`` section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = False
In the ``[securitygroup]`` section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
In the ``[neutron]`` section, configure access parameters:
[neutron]
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the Compute service:
# systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and configure it to start when the system boots:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verify operation (run these commands on the controller node).
List the loaded extensions to verify successful launch of the neutron-server process:
# neutron ext-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+---------------------------+--------------------------------------------------+
| alias | name |
+---------------------------+--------------------------------------------------+
| default-subnetpools | Default Subnetpools |
| availability_zone | Availability Zone |
| network_availability_zone | Network Availability Zone |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| net-mtu | Network MTU |
| network-ip-availability | Network IP Availability |
| quotas | Quota management support |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| subnet-service-types | Subnet service types |
| standard-attr-timestamp | Resource timestamps |
| service-type | Neutron Service Type Management |
| tag-ext | Tag support for resources: subnet, subnetpool, |
| | port, router |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| standard-attr-revisions | Resource revision numbers |
| pagination | Pagination support |
| sorting | Sorting support |
| security-group | security-group |
| rbac-policies | RBAC Policies |
| standard-attr-description | standard-attr-description |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| project-id | project_id field enabled |
+---------------------------+--------------------------------------------------+
List the agents to verify that the neutron agents launched successfully:
# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                 | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
| 0d135b32-f115-4d2f-8296-27c6590ca08c | DHCP agent         | server10.example     | nova              | :-)   | True           | neutron-dhcp-agent        |
| 6c603475-571a-4bde-a414-b65319388508 | Metadata agent     | server10.example     |                   | :-)   | True           | neutron-metadata-agent    |
| b8667984-0d75-47bf-958b-c886244ff1f7 | Linux bridge agent | server11.example.com |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
Overview of the configuration files:
Controller node:
# cat /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins =
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[database]
connection = mysql+pymysql://neutron:neutron@172.25.33.10/neutron
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# grep ^[a-Z] /etc/nova/nova.conf
rpc_backend = rabbit
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 172.25.33.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
api_servers = http://172.25.33.10:9292
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = redhat    (note: this secret is reused by the metadata agent below)
lock_path = /var/lib/nova/tmp
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[root@server10 ~]# grep ^[a-Z] /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
flat_networks = provider
enable_ipset = True
[root@server10 ~]# grep ^[a-Z] /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = public:eth0
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_vxlan = False
[root@server10 ~]# grep ^[a-Z] /etc/neutron/dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[root@server10 ~]# grep ^[a-Z] /etc/neutron/metadata_agent.ini
nova_metadata_ip = 172.25.33.10
metadata_proxy_shared_secret = redhat    (the same metadata secret configured above)
Compute node:
# grep ^[a-Z] /etc/neutron/neutron.conf
rpc_backend = rabbit
auth_strategy = keystone
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
lock_path = /var/lib/neutron/tmp
# grep ^[a-Z] /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = public:eth0
enable_vxlan = False
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# grep ^[a-Z] /etc/nova/nova.conf
rpc_backend = rabbit
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 172.25.33.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
api_servers = http://172.25.33.10:9292
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
virt_type = qemu
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
lock_path = /var/lib/nova/tmp
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
os_region_name = RegionOne
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 172.25.33.11
novncproxy_base_url = http://172.25.33.10:6080/vnc_auto.html
Note: every password is set to the same string as its service name.
172.25.33.10 is the controller node.
172.25.33.11 is the compute node.
At this point the base services are in place and an instance can be created:
----------
Create virtual networks
---------
Provider (public) network:
Create the provider network:
1. On the controller node, source the admin credentials to gain access to admin-only CLI commands:
source admin-openrc
2. Create the network:
# neutron net-create --shared --provider:physical_network provider \
> --provider:network_type flat public
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-04-09T11:35:39Z |
| description | |
| id | 876887d3-2cf3-4253-9804-346f180b6077 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1500 |
| name | public |
| port_security_enabled | True |
| project_id | 7f1f3eae73dc439da7f53c15c634c4e7 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | |
| revision_number | 3 |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 7f1f3eae73dc439da7f53c15c634c4e7 |
| updated_at | 2017-04-09T11:35:39Z |
+---------------------------+--------------------------------------+
The ``--shared`` option allows all projects to use the virtual network.
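Since the neutron CLI is deprecated (see the warning above), the same network can also be created with the openstack client; a roughly equivalent sketch:
# openstack network create --share \
  --provider-physical-network provider \
  --provider-network-type flat public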
View the network's CIDR:
# neutron net-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+----------------------+--------+----------------------+-----------------------+
| id | name | tenant_id | subnets |
+----------------------+--------+----------------------+-----------------------+
| 876887d3-2cf3-4253-9 | public | 7f1f3eae73dc439da7f5 | 6428d4dd-e15d-48b0 |
| 804-346f180b6077 | | 3c15c634c4e7 | -995e-45df957f4735 |
| | | | 172.25.33.0/24 |
+----------------------+--------+----------------------+-----------------------+
3. Create a subnet on the network:
# neutron subnet-create --name provider --allocation-pool start=172.25.33.100,end=172.25.33.200 --dns-nameserver 114.114.114.114 --gateway 172.25.33.250 public 172.25.33.0/24
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "172.25.33.100", "end": "172.25.33.200"} |
| cidr | 172.25.33.0/24 |
| created_at | 2017-04-09T11:40:38Z |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 172.25.33.250 |
| host_routes | |
| id | 6428d4dd-e15d-48b0-995e-45df957f4735 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider |
| network_id | 876887d3-2cf3-4253-9804-346f180b6077 |
| project_id | 7f1f3eae73dc439da7f53c15c634c4e7 |
| revision_number | 2 |
| service_types | |
| subnetpool_id | |
| tags | |
| tenant_id | 7f1f3eae73dc439da7f53c15c634c4e7 |
| updated_at | 2017-04-09T11:40:38Z |
+-------------------+----------------------------------------------------+
Replace ``PROVIDER_NETWORK_CIDR`` with the subnet CIDR of the provider physical network, i.e. the subnet listed above.
Replace DNS_RESOLVER with the IP address of a DNS resolver. In most cases, you can use one from the host's ``/etc/resolv.conf`` file.
Replace ``PUBLIC_NETWORK_GATEWAY`` with the gateway of the provider network; the gateway IP typically ends in ".1". The host machine's IP can also be used.
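For reference, a roughly equivalent openstack-client form of the subnet command above (same values):
# openstack subnet create --network public \
  --allocation-pool start=172.25.33.100,end=172.25.33.200 \
  --dns-nameserver 114.114.114.114 --gateway 172.25.33.250 \
  --subnet-range 172.25.33.0/24 provider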
Create the m1.nano flavor
The smallest default flavor requires 512 MB of memory per instance. For environments whose compute nodes have less than 4 GB of memory, we recommend creating the ``m1.nano`` flavor, which only needs 64 MB per instance. Use the ``m1.nano`` flavor with the CirrOS image for testing purposes only.
# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
Generate a key pair
Most cloud images support public key authentication rather than conventional password authentication. Before launching an instance, you must add a public key to the Compute service.
Source the ``demo`` project credentials:
$ . demo-openrc
Generate a key pair and add the public key:
$ ssh-keygen -q -N ""
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 7f:a9:fd:62:e4:2b:87:84:27:f1:ce:d4:c1:89:f3:b8 |
| name | mykey |
| user_id | 251ad20a4d754dc4a104a3f5b8159142 |
+-------------+-------------------------------------------------+
Verify that the key pair was added:
# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 7f:a9:fd:62:e4:2b:87:84:27:f1:ce:d4:c1:89:f3:b8 |
+-------+-------------------------------------------------+
Add security group rules
By default, the ``default`` security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH).
Add rules to the default security group.
Permit ICMP (ping):
# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-04-09T11:46:06Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 5a168a4b-7e2a-40ee-8302-d19fbb7dda6d |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 45a1b89bc5de479e8d3e04eae314ee88 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | eb93c9e4-c2fd-45fc-806c-d1640ac3bf2e |
| updated_at | 2017-04-09T11:46:06Z |
+-------------------+--------------------------------------+
Permit secure shell (SSH) access:
[root@server10 ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-04-09T11:46:34Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 26a91aee-5cd7-4c4d-acc6-104b7be0bc59 |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 45a1b89bc5de479e8d3e04eae314ee88 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | eb93c9e4-c2fd-45fc-806c-d1640ac3bf2e |
| updated_at | 2017-04-09T11:46:34Z |
+-------------------+--------------------------------------+
Launch an instance on the provider network
A flavor specifies the virtual resource allocation for an instance, including processor, memory, and storage.
List the available flavors:
# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
At this point a "cannot allocate memory" error occurred because the virtual machine had been given too little memory.
List available images:
# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 2ed41322-bbd2-45b0-8560-35af76041798 | cirros | active |
+--------------------------------------+--------+--------+
List available networks:
# openstack network list
+----------------------------------+--------+----------------------------------+
| ID | Name | Subnets |
+----------------------------------+--------+----------------------------------+
| 876887d3-2cf3-4253-9804-346f180b | public | 6428d4dd-e15d-48b0-995e- |
| 6077 | | 45df957f4735 |
+----------------------------------+--------+----------------------------------+
This instance uses the ``public`` provider network. You must reference this network by ID rather than by name.
List available security groups:
# openstack security group list
+----------------------------+---------+------------------------+---------+
| ID | Name | Description | Project |
+----------------------------+---------+------------------------+---------+
| eb93c9e4-c2fd-45fc-806c- | default | Default security group | |
| d1640ac3bf2e | | | |
+----------------------------+---------+------------------------+---------+
Create the instance
Launch the instance:
Replace ``PUBLIC_NET_ID`` with the ID of the ``public`` provider network (here passed via --nic net-id=):
# openstack server create --flavor m1.nano --image cirros --nic net-id=876887d3-2cf3-4253-9804-346f180b6077 --security-group default --key-name mykey public-instance
+-----------------------------+-----------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | nJ5gwMuEG4vN |
| config_drive | |
| created | 2017-04-09T12:11:15Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 9ddc6c6b-4847-47ae-91de-8cd7a607c212 |
| image                       | cirros (2ed41322-bbd2-45b0-8560-35af76041798) |
| key_name | mykey |
| name | public-instance |
| progress | 0 |
| project_id | 45a1b89bc5de479e8d3e04eae314ee88 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-04-09T12:11:16Z |
| user_id | 251ad20a4d754dc4a104a3f5b8159142 |
| volumes_attached | |
+-----------------------------+-----------------------------------------------+
Check the status of the instance:
# openstack server list
+--------------------------------------+-----------------+--------+----------+------------+
| ID                                   | Name            | Status | Networks | Image Name |
+--------------------------------------+-----------------+--------+----------+------------+
| 9ddc6c6b-4847-47ae-91de-8cd7a607c212 | public-instance | BUILD  |          | cirros     |
+--------------------------------------+-----------------+--------+----------+------------+
When the build process completes successfully, the status changes from ``BUILD`` to ``ACTIVE``.
Access the instance using the virtual console
Obtain a Virtual Network Computing (VNC) session URL for your instance and access it from a web browser:
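For example, for the public-instance created above (the URL it prints can be opened in a browser):
# openstack console url show public-instance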