Architecture
Lab environment
| Role | Hostname | Interfaces | OS |
| Controller Node | controller.qween.com | management eth0: 192.168.22.128, external eth1: 192.168.36.130 | CentOS 6.8 |
| Compute Node | compute1.qween.com | management eth0: 192.168.22.129, tunnel eth1: 10.0.10.100 | CentOS 6.8 |
| Network Node | network.qween.com | management eth0: 192.168.22.131, tunnel eth1: 10.0.10.110, external eth2: 192.168.36.133 | CentOS 6.8 |
| Block Storage Node | stor1.qween.com | management eth0: 192.168.22.132, external eth1: 192.168.36.135 | CentOS 6.8 |
Preparation on all nodes:
- Time is synchronized by cron: */10 * * * * /usr/sbin/ntpdate 202.120.2.101 &> /dev/null
- The NetworkManager service is disabled: chkconfig NetworkManager off
- Firewall rules have been flushed and saved
- Host names resolve via /etc/hosts:
  192.168.22.128 controller.qween.com controller
  192.168.22.129 compute1.qween.com compute1
  192.168.22.131 network.qween.com network
  192.168.22.132 stor1.qween.com stor1
(Official documentation: https://docs.openstack.org)
1. Add an SNAT rule
[root@controller ~]# iptables -t nat -A POSTROUTING -s 192.168.22.0/24 -j SNAT --to-source 192.168.36.130
[root@controller ~]# service iptables save // save the rule
[root@controller ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1 // enable forwarding between interfaces
[root@controller ~]# sysctl -p
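A quick sanity check (a sketch; the addresses are the ones used above):
[root@controller ~]# sysctl net.ipv4.ip_forward // should report 1
[root@controller ~]# iptables -t nat -vnL POSTROUTING // the SNAT rule should appear here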
2. Install and initialize the MySQL server
[root@controller ~]# yum install mariadb-galera-server
[root@controller ~]# service mysqld start
[root@controller ~]# mysql
[root@controller ~]# vim /etc/my.cnf
[mysqld]
datadir = /mydata/data
default-storage-engine = innodb
innodb_file_per_table = ON
character-set-server = utf8
skip_name_resolve = ON
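The new settings only take effect after a restart; one way to confirm them (a sketch):
[root@controller ~]# service mysqld restart
[root@controller ~]# mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';" // expect ON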
3. Install and configure the Identity service (Keystone)
3.1 Install
[root@controller ~]# yum install openstack-utils openstack-keystone python-keystoneclient -y
3.2 Grant database access
[root@controller ~]# mysql
> CREATE DATABASE keystone;
> GRANT ALL ON keystone.* TO 'keystone'@'192.168.22.%' IDENTIFIED BY 'keystone';
> FLUSH PRIVILEGES;
> exit;
3.3 Sync the database by running keystone-manage db_sync as the keystone user
[root@controller ~]# su -s /bin/sh -c 'keystone-manage db_sync' keystone
[root@controller ~]# mysql
> use keystone;
> SHOW TABLES; // tables present means initialization succeeded
3.4 Edit the configuration file
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:keystone@192.168.22.128/keystone
The command above is equivalent to:
# vim /etc/keystone/keystone.conf
[database]
connection = mysql://keystone:keystone@192.168.22.128/keystone
// i.e. connect over the mysql protocol to the database named keystone on 192.168.22.128, as user keystone with password keystone
3.5 Configure the admin token
[root@controller ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[root@controller ~]# echo $ADMIN_TOKEN > .admin_token.rc
[root@controller ~]# vim /etc/keystone/keystone.conf
admin_token=82051964278b344ebf28 // use the value generated above ($ADMIN_TOKEN)
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
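Instead of editing keystone.conf by hand, the same change can be scripted with openstack-config (from the openstack-utils package installed in 3.1); a sketch reusing the $ADMIN_TOKEN generated above:
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN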
3.6 Create the local PKI (the certificate infrastructure OpenStack uses)
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# chown -R keystone.keystone /etc/keystone/ssl
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl
3.7 Start the service
[root@controller ~]# service openstack-keystone start
Starting keystone: [ OK ]
[root@controller ~]# chkconfig openstack-keystone on
[root@controller ~]# ss -tnlp | grep keystone-all
LISTEN 0 128 *:35357 *:*
LISTEN 0 128 *:5000 *:*
[root@controller ~]# tail /var/log/keystone/keystone.log // check for error messages
3.8 Create users, roles, and tenants
Create the admin user
[root@controller ~]# keystone user-create --name=admin --pass=admin --email=admin@qween.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | admin@qween.com |
| enabled | True |
| id | 2338be9fb49cbcc6cb0ebe160d54028a |
| name | admin |
| username | admin |
+----------+----------------------------------+
[root@controller ~]# keystone help user-create
[root@controller ~]# keystone user-list // list users
Create the admin role
[root@controller ~]# keystone role-create --name=admin
Create the admin tenant
[root@controller ~]# keystone tenant-create --name=admin --description="Admin Tenant"
Associate the user, role, and tenant
[root@controller ~]# keystone user-role-add --user admin --tenant admin --role admin
[root@controller ~]# keystone user-role-add --user admin --role _member_ --tenant admin
[root@controller ~]# keystone user-role-list --user admin --tenant admin // show the roles held by the admin user
Create an ordinary user
[root@controller ~]# keystone user-create --name=demo --pass=demo --email=demo@qween.com
[root@controller ~]# keystone tenant-create --name=demo --description="Demo Tenant"
[root@controller ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo
Create a service tenant for later use
[root@controller ~]# keystone tenant-create --name=service --description="Service Tenant"
3.9 Register Keystone as an API endpoint
[root@controller ~]# keystone service-create --name=keystone --type=identity \
> --description="OpenStack Identity"
Add an endpoint (service access point) for the service just created
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
> --publicurl=http://controller:5000/v2.0 \   // public access point
> --internalurl=http://controller:5000/v2.0 \
> --adminurl=http://controller:35357/v2.0     // admin access point
[root@controller ~]# keystone endpoint-list
Switch authentication to user/password credentials
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
[root@controller ~]# keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 token-get
[root@controller ~]# vim ~/.admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# . ~/.admin-openrc.sh
Verify that it works
[root@controller ~]# keystone user-list
4. Install and configure the Image service (Glance)
4.1 Install
[root@controller ~]# yum install openstack-glance python-glanceclient -y
4.2 Edit the configuration files
[root@controller ~]# cd /etc/glance
[root@controller glance]# cp glance-api.conf{,.bak}
[root@controller glance]# cp glance-registry.conf{,.bak}
[root@controller glance]# vim glance-api.conf
[database]
connection = mysql://glance:glance@192.168.22.128/glance
[root@controller glance]# vim glance-registry.conf
[database]
connection = mysql://glance:glance@192.168.22.128/glance
4.3 Create and initialize the database
[root@controller ~]# mysql
> CREATE DATABASE glance CHARACTER SET utf8;
> GRANT ALL ON glance.* TO 'glance'@'192.168.22.%' IDENTIFIED BY 'glance'; // the password must match the 'glance' used in the connection URL above
> FLUSH PRIVILEGES;
> exit;
[root@controller ~]# su -s /bin/sh -c 'glance-manage db_sync' glance
[root@controller ~]# mysql
> use glance;
> SHOW TABLES;
[root@controller ~]# tail /var/log/glance/api.log
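Optionally verify that the glance account can actually reach its database over the management network (a quick check; the password is the 'glance' granted above):
[root@controller ~]# mysql -h 192.168.22.128 -u glance -pglance -e 'SHOW TABLES FROM glance;'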
4.4 Create the glance user
[root@controller ~]# keystone user-create --name=glance --pass=glance --email=glance@qween.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | glance@qween.com |
| enabled | True |
| id | 1ddd3b916c7559c5570d1b0f46c5478f |
| name | glance |
| username | glance |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=glance --tenant=service --role=admin
[root@controller ~]# keystone user-role-list --user=glance --tenant=service
4.5 Configure Glance to authenticate through the Identity service
[root@controller ~]# vim /etc/glance/glance-api.conf
[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance
auth_uri=http://controller:5000
[paste_deploy]
flavor=keystone
[root@controller ~]# vim /etc/glance/glance-registry.conf
[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance
auth_uri=http://controller:5000
[paste_deploy]
flavor=keystone
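As in 3.4, these edits can also be scripted with openstack-config instead of vim; a sketch for glance-api.conf (repeat with glance-registry.conf):
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone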
4.6 Register the glance service with Keystone
[root@controller ~]# keystone service-create --name=glance --type=image \ > --description="OpenStack Image Service"
[root@controller ~]# keystone endpoint-create \ > --service-id=$(keystone service-list | awk '/ image / {print $2}') \ > --publicurl=http://controller:9292 \ > --internalurl=http://controller:9292 \ > --adminurl=http://controller:9292
4.7 Start the services
[root@controller ~]# service openstack-glance-api start
[root@controller ~]# service openstack-glance-registry start
[root@controller ~]# chkconfig openstack-glance-api on
[root@controller ~]# chkconfig openstack-glance-registry on
[root@controller ~]# ss -tnl | grep -E '9292|9191' // glance-api listens on 9292, glance-registry on 9191
4.8 Create and upload an image file
[root@controller ~]# qemu-img info cirros-no_cloud-0.3.0-x86_64-disk.img // show the image file's format information
[root@controller ~]# glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 \
> --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-x86_64-disk.img
--disk-format: disk image format (aki, vhd, vmdk, raw, qcow2, vdi, iso)
--container-format: image container format (ari, aki, bare, ovf)
[root@controller ~]# glance image-list
[root@controller ~]# glance image-show cirros-0.3.0-x86_64 // show details of the specified image
[root@controller ~]# glance help image-download // download an image file
[root@controller ~]# glance help image-delete // delete an image file
5. Install and configure the Compute service (Nova)
5.1 Compute controller services
5.1.1 Install and start qpid
[root@controller ~]# yum install qpid-cpp-server -y
[root@controller ~]# vim /etc/qpidd.conf
auth=no
[root@controller ~]# service qpidd start
Starting Qpid AMQP daemon: [ OK ]
[root@controller ~]# chkconfig qpidd on
[root@controller ~]# ss -tnl | grep 5672
LISTEN 0 10 :::5672
LISTEN 0 10 *:5672
5.1.2 Install the Compute service packages
[root@controller ~]# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \ > openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \ > python-novaclient
5.1.3 Configure the nova service
[root@controller ~]# mysql
> CREATE DATABASE nova CHARACTER SET 'utf8';
> GRANT ALL ON nova.* TO 'nova'@'192.168.22.%' IDENTIFIED BY 'nova';
> FLUSH PRIVILEGES;
> exit;
[root@controller ~]# cp /etc/nova/nova.conf{,.bak}
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend=qpid // point nova at the qpid message queue service
qpid_hostname=192.168.22.128
[database]
connection = mysql://nova:nova@192.168.22.128/nova
Set my_ip, vncserver_listen, and vncserver_proxyclient_address to the node's "management network" interface address:
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
my_ip=192.168.22.128
vncserver_listen=192.168.22.128
vncserver_proxyclient_address=192.168.22.128
[root@controller ~]# su -s /bin/bash -c "nova-manage db sync" nova
[root@controller ~]# mysql
> use nova;
> SHOW TABLES;
[root@controller ~]# tail /var/log/nova/nova-manage.log
5.1.4 Create the nova user
[root@controller ~]# keystone user-create --name=nova --pass=nova --email=nova@qween.com
[root@controller ~]# keystone user-role-add --user=nova --tenant=service --role=admin
[root@controller ~]# keystone user-role-list --tenant=service --user=nova
5.1.5 Configure nova to call the Keystone API
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
auth_strategy=keystone // authenticate through Keystone
[keystone_authtoken]
auth_uri=http://controller:5000 // public-facing endpoint
auth_host=controller // authentication host
auth_protocol=http
auth_port=35357 // admin port used for authentication
admin_user=nova
admin_tenant_name=service
admin_password=nova
5.1.6 Register the nova service with Keystone
[root@controller ~]# keystone service-create --name=nova --type=compute --description="OpenStack Compute"
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
> --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
> --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
> --adminurl=http://controller:8774/v2/%\(tenant_id\)s
5.1.7 Start the services
[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy; do service openstack-nova-$svc start; chkconfig openstack-nova-$svc on; done
[root@controller ~]# netstat -tnlp | grep 8774
[root@controller ~]# tail /var/log/nova/api.log
[root@controller ~]# nova help
[root@controller ~]# nova image-list
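To confirm the controller-side services registered correctly, list them (nova-cert, nova-consoleauth, nova-scheduler, and nova-conductor should show State 'up'):
[root@controller ~]# nova service-list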
5.2 Compute node (hypervisor)
5.2.1 Check whether the compute node supports hardware virtualization
[root@compute1 ~]# grep -E -i --color=auto "(vmx|svm)" /proc/cpuinfo
5.2.2 Install and configure compute
[root@compute1 ~]# yum install openstack-nova-compute
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend=qpid // point nova at the qpid message queue service
qpid_hostname=192.168.22.128
auth_strategy=keystone
connection=mysql://nova:nova@192.168.22.128/nova
glance_host=controller // host running the Glance service
[keystone_authtoken]
auth_uri=http://controller:5000 // public-facing endpoint
auth_host=controller // authentication host
auth_protocol=http
auth_port=35357 // admin port used for authentication
admin_user=nova
admin_tenant_name=service
admin_password=nova
Adjust the network parameters:
my_ip=192.168.22.129
vnc_enabled=True
vncserver_listen=0.0.0.0 // listen on all addresses
vncserver_proxyclient_address=192.168.22.129 // proxy client address
Point novncproxy_base_url at the controller node:
novncproxy_base_url=http://controller:6080/vnc_auto.html
vif_plugging_timeout=10 // timeout for plugging virtual network interfaces
vif_plugging_is_fatal=False // boot even if the VIF cannot be plugged (test environments only)
virt_type=kvm // virtualization type (use qemu if kvm is unsupported)
5.2.3 Start the services
[root@compute1 ~]# service libvirtd start
[root@compute1 ~]# lsmod | grep kvm
kvm_intel
kvm
[root@compute1 ~]# service messagebus start // start the message bus service
[root@compute1 ~]# service openstack-nova-compute start
[root@compute1 ~]# netstat -tnlp
[root@compute1 ~]# chkconfig libvirtd on
[root@compute1 ~]# chkconfig messagebus on
[root@compute1 ~]# chkconfig openstack-nova-compute on
5.2.4 Verify that the new compute node is usable
[root@controller ~]# nova hypervisor-list +----+---------------------+ | ID | Hypervisor hostname | +----+---------------------+ | 1 | compute1.qween.com | +----+---------------------+
6. Install and configure the Networking service (Neutron)
6.1 Controller node
6.1.1 Create the neutron database
[root@controller ~]# mysql
> CREATE DATABASE neutron;
> GRANT ALL ON neutron.* TO 'neutron'@'192.168.22.%' IDENTIFIED BY 'neutron';
> FLUSH PRIVILEGES;
> exit;
6.1.2 Add the neutron user in Keystone and grant it the admin role
[root@controller ~]# keystone user-create --name neutron --pass neutron --email neutron@qween.com
[root@controller ~]# keystone user-role-add --user neutron --tenant service --role admin
[root@controller ~]# keystone user-role-list --user neutron --tenant service
6.1.3 Add the neutron service and its access endpoints
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
[root@controller ~]# keystone endpoint-create \ > --service-id $(keystone service-list | awk '/ network / {print $2}') \ > --publicurl http://controller:9696 \ > --adminurl http://controller:9696 \ > --internalurl http://controller:9696
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:9696 |
| id | 41307aad4b2144c79a4da6322e4ce8a6 |
| internalurl | http://controller:9696 |
| publicurl | http://controller:9696 |
| region | regionOne |
| service_id | 4edd459c11b5c0b379f821801a4e4082 |
+-------------+----------------------------------+
6.1.4 Install and configure the neutron server
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
[root@controller ~]# rpm -ql openstack-neutron
[root@controller ~]# vim /etc/neutron/neutron.conf
connection = mysql://neutron:neutron@192.168.22.128:3306/neutron // database connection URL for neutron
auth_strategy = keystone
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
Configure the message queue service used by the neutron server:
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.22.128 // qpid runs on the controller
6.1.5 Configure the neutron server to notify the Compute service of network topology changes
[root@controller ~]# vim /etc/neutron/neutron.conf
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 4edd459c11b5c0b379f821801a4e4082 // the service tenant's ID, obtained with keystone tenant-list
nova_admin_password = nova
nova_admin_auth_url = http://controller:35357/v2.0
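The service tenant's ID can be pulled out non-interactively, e.g.:
[root@controller ~]# keystone tenant-list | awk '/ service / {print $2}'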
6.1.6 Configure the Modular Layer 2 (ML2) plug-in and related services
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000 // usable tunnel ID range
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver // firewall driver
enable_security_group = True
6.1.7 Configure the Compute service to use Networking
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = neutron
neutron_admin_auth_url = http://controller:35357/v2.0 // authentication endpoint
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver // Linux network interface driver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron // security-group API
6.1.8 Finish the installation and start the services
Create the plug-in symlink
[root@controller ~]# cd /etc/neutron
[root@controller neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
The Networking init scripts find the active plug-in through the symlink /etc/neutron/plugin.ini
[root@controller neutron]# ls
Restart the compute services
[root@controller ~]# for svc in api scheduler conductor; do service openstack-nova-${svc} restart;done
Start the neutron server
[root@controller ~]# service neutron-server start
[root@controller ~]# chkconfig neutron-server on
[root@controller ~]# tail /var/log/neutron/server.log | grep -i 'ERROR'
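A quick way to confirm neutron-server is answering API requests is to list its loaded extensions:
[root@controller ~]# neutron ext-list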
6.2 Network node
6.2.1 Configure kernel network parameters
[root@network ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-ip6tables = 1 // let iptables rules apply to bridged traffic
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
[root@network ~]# sysctl -p
6.2.2 Install the packages (make sure most of them come from the OpenStack repository)
[root@network ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
6.2.3 Edit the configuration files
Configure the connection to Keystone
[root@network ~]# cp /etc/neutron/neutron.conf{,.bak}
[root@network ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
Configure the message queue service it uses
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.22.128
core_plugin = ml2 // core plug-in
service_plugins = router // service plug-ins
6.2.4 Configure the Layer-3 (L3) agent
[root@network ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True // allow the use of network namespaces
verbose = True // helpful for troubleshooting
6.2.5 Configure the DHCP agent
[root@network ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf // have neutron's DHCP service use a custom dnsmasq configuration file
[root@network ~]# vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454 // DHCP option 26 is the interface MTU; 1454 leaves room for the GRE encapsulation overhead
6.2.6 Configure the metadata agent
[root@network ~]# cp /etc/neutron/metadata_agent.ini{,.bak}
[root@network ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
verbose = True
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET // shared secret for the metadata proxy
6.2.7 On the controller node, run the following
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATA_SECRET
[root@controller ~]# service openstack-nova-api restart
6.2.8 Configure the ML2 plug-in parameters
[root@network ~]# ifconfig
[root@network ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
IPADDR=10.0.10.110
NETMASK=255.255.255.0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
[root@network ~]# ifdown eth1
[root@network ~]# ifup eth1
[root@network ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 10.0.10.110 // tunnel interface address
tunnel_type = gre
enable_tunneling = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
6.2.9 Start the Open vSwitch service
[root@network ~]# service openvswitch start
[root@network ~]# chkconfig openvswitch on
[root@network ~]# ovs-vsctl add-br br-int // add the integration bridge
[root@network ~]# ovs-vsctl add-br br-ex // add the external bridge
[root@network ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
[root@network ~]# ovs-vsctl add-port br-ex eth2 // attach the external network interface to the external bridge; eth2 is the physical external interface
[root@network ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex // set the bridge-id of br-ex to br-ex
[root@network ~]# ethtool -K eth2 gro off // disable generic receive offload (GRO)
[root@network ~]# ifconfig br-ex 192.168.36.133/24
[root@network ~]# route add default gw 192.168.36.1
[root@network ~]# cd /etc/neutron/
[root@network neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@network ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@network ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@network ~]# for svc in openvswitch-agent l3-agent dhcp-agent metadata-agent; \
> do service neutron-${svc} start; chkconfig neutron-${svc} on; done
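Back on the controller, all four agents on the network node should now report as alive (:-) in the output):
[root@controller ~]# neutron agent-list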
6.3 Compute node
6.3.1 Adjust kernel network parameters
[root@compute1 ~]# vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
[root@compute1 ~]# sysctl -p
6.3.2 Install the packages
[root@compute1 ~]# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
6.3.3 Edit the configuration files
Configure the connection to Keystone
[root@compute1 ~]# cp /etc/neutron/neutron.conf{,.bak}
[root@compute1 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
Configure the message queue service it uses
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.22.128
core_plugin = ml2 // core plug-in
service_plugins = router // service plug-ins
6.3.4 Configure the ML2 plug-in parameters
[root@compute1 ~]# ifconfig
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 10.0.10.100 // tunnel interface address
tunnel_type = gre
enable_tunneling = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[root@compute1 ~]# ping 10.0.10.110 // verify the tunnel network reaches the network node
6.3.5 Start the Open vSwitch service
[root@compute1 ~]# service openvswitch start
[root@compute1 ~]# chkconfig openvswitch on
[root@compute1 ~]# ovs-vsctl add-br br-int
6.3.6 Configure Compute to use the Networking service
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = neutron
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
6.3.7 Start the services
[root@compute1 ~]# cd /etc/neutron/
[root@compute1 neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@compute1 ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@compute1 ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@compute1 ~]# service openstack-nova-compute restart
[root@compute1 ~]# service neutron-openvswitch-agent start
[root@compute1 ~]# chkconfig neutron-openvswitch-agent on
6.4 Create the networks
6.4.1 Create the external network on the controller
[root@controller ~]# . ~/.admin-openrc.sh // load the admin credentials
[root@controller ~]# neutron net-create ext-net --shared --router:external=True
ext-net // the external network
--shared // shareable
--router:external=True // the network may serve as the external gateway for routers
6.4.2 Create a subnet on it
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet \
> --allocation-pool start=192.168.36.200,end=192.168.36.220 \  // pool of (public) addresses to hand out
> --disable-dhcp --gateway 192.168.36.1 192.168.36.0/24        // DHCP disabled; the CIDR must contain the allocation pool
[root@controller ~]# keystone tenant-list
[root@controller ~]# keystone user-list
6.4.3 Create the tenant network
[root@controller ~]# cp .admin-openrc.sh .demo-os.sh
[root@controller ~]# vim .demo-os.sh
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:5000/v2.0
[root@controller ~]# . .demo-os.sh
[root@controller ~]# export // confirm the OS_* variables are set
[root@controller ~]# neutron net-create demo-net // a layer-2 network
The tenant network gives instances an internal channel to reach one another; this mechanism also isolates the networks of different tenants from each other.
6.4.4 Create a subnet on demo-net
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet \
> --gateway 192.168.30.254 192.168.30.0/24
6.4.5 Create a router for demo-net and attach it to the external network and to demo-net
[root@controller ~]# neutron router-create demo-router
[root@controller ~]# neutron router-gateway-set demo-router ext-net
[root@controller ~]# neutron router-port-list demo-router
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
[root@controller ~]# neutron router-port-list demo-router
On the network node (CentOS 6 needs the updated iproute package for network namespace support):
[root@network ~]# yum update iproute
[root@network ~]# ip netns list
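The qrouter-/qdhcp- namespaces listed by ip netns correspond to the router and DHCP server just created; commands can be run inside them for troubleshooting. A sketch (the UUID placeholder stands for whatever ip netns list printed):
[root@network ~]# ip netns exec qrouter-<ROUTER_UUID> ip addr show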
7. Install and configure the Dashboard (Horizon)
7.1 Install the packages
[root@controller ~]# yum install memcached python-memcached mod_wsgi openstack-dashboard
7.2 Start memcached
[root@controller ~]# service memcached start
[root@controller ~]# chkconfig memcached on
7.3 Configure the dashboard
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller" // point at the controller node
CACHES = { // use the local memcached as the session cache
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '192.168.22.128:11211',
    }
}
ALLOWED_HOSTS = ['*', 'localhost'] // hosts allowed to access the dashboard
TIME_ZONE = "Asia/Shanghai" // set the time zone
7.4 Start the service
[root@controller ~]# service httpd start
[root@controller ~]# chkconfig httpd on
7.5 Test access
Browse to http://192.168.22.128/dashboard
8. Launch an instance
8.1 Generate a key pair and inject the SSH public key
[root@controller ~]# nova hypervisor-list
[root@controller ~]# ssh-keygen
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
[root@controller ~]# nova keypair-list // list key pairs
+----------+-------------------------------------------------+
| Name | Fingerprint |
+----------+-------------------------------------------------+
| demo-key | e1:2d:63:d2:36:ed:57:2c:8c:15:2f:09:26:96:6c:81 |
+----------+-------------------------------------------------+
8.2 Launch an instance
[root@controller ~]# nova flavor-list // list the available flavors
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Create a flavor for testing
[root@controller ~]# nova flavor-create --is-public true m1.cirros 6 128 1 1
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6 | m1.cirros | 128 | 1 | 0 | | 1 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
List all available image files
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 6a820f7e-ddc6-40c8-cf3h-27297f2673a3 | cirros-0.3.0-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
List all available networks
[root@controller ~]# neutron net-list
List the security groups
[root@controller ~]# nova secgroup-list
Show the rules inside a specified security group
[root@controller ~]# nova secgroup-list-rules default
Boot
[root@controller ~]# nova boot --flavor m1.cirros --image cirros-0.3.0-x86_64 --nic net-id=$(neutron net-list | awk '/ demo-net / {print $2}') \
> --security-group default --key-name demo-key demo-0001
If the instance fails to start, check the logs:
[root@controller ~]# tail /var/log/nova/api.log
[root@compute1 ~]# tail /var/log/nova/compute.log // on the compute node
After fixing the problem, boot again:
[root@controller ~]# nova boot --flavor m1.cirros --image cirros-0.3.0-x86_64 --nic net-id=$(neutron net-list | awk '/ demo-net / {print $2}') \
> --security-group default --key-name demo-key demo-0001
[root@controller ~]# nova list // check the instance status
[root@controller ~]# iptables -t nat -A POSTROUTING -s 192.168.22.0/24 -j SNAT --to-source 192.168.36.128
On the compute node
[root@compute1 ~]# virsh list
instance-00000006
[root@compute1 ~]# virsh console instance-00000006 // attach to the console
[root@compute1 ~]# ss -tnl | grep 5900
[root@compute1 ~]# yum install tigervnc
[root@compute1 ~]# vncviewer :5900 // connect to the instance console
For a connectivity test, ping in turn the virtual internal gateway, the virtual external gateway, and the real external gateway.
Add a security group rule
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
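Before associating a floating IP, one has to be allocated from the ext-net pool, and TCP/22 should be opened if you want to SSH in; a sketch:
[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
[root@controller ~]# nova floating-ip-create ext-net // allocates an address such as 192.168.36.215 from the pool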
Associate the floating IP with the target instance
[root@controller ~]# nova floating-ip-associate demo-0001 192.168.36.215
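If everything is wired correctly, the instance is now reachable from the external network; CirrOS images ship a 'cirros' user, and with the key injected above no password is needed:
[root@controller ~]# ping -c 3 192.168.36.215
[root@controller ~]# ssh cirros@192.168.36.215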
9. Block Storage service (Cinder)
9.1 Controller node
9.1.1 Install
[root@controller ~]# yum install openstack-cinder
9.1.2 Create and initialize the cinder database
[root@controller ~]# mysql
> CREATE DATABASE cinder;
> GRANT ALL ON cinder.* TO 'cinder'@'192.168.22.%' IDENTIFIED BY 'cinder';
> FLUSH PRIVILEGES;
> exit;
[root@controller ~]# su -s /bin/bash -c "cinder-manage db sync" cinder
[root@controller ~]# mysql
> SHOW TABLES FROM cinder;
9.1.3 Create the cinder user
[root@controller ~]# keystone user-create --name=cinder --pass=cinder --email=cinder@qween.com
[root@controller ~]# keystone user-role-add --user=cinder --tenant=service --role=admin
[root@controller ~]# keystone user-role-list --user=cinder --tenant=service
9.1.4 Edit the cinder configuration file
[root@controller ~]# cp /etc/cinder/cinder.conf{,.bak}
[root@controller ~]# vim /etc/cinder/cinder.conf
[database]
connection=mysql://cinder:cinder@controller/cinder // database connection URL
[DEFAULT]
auth_strategy=keystone
rpc_backend=qpid // use the qpid message queue
qpid_hostname=controller
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=cinder
admin_tenant_name=service
admin_password=cinder
9.1.5 Register the cinder services with Keystone
[root@controller ~]# keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
[root@controller ~]# keystone endpoint-create \ > --service-id=$(keystone service-list | awk '/ volume / {print $2}') \ > --publicurl=http://controller:8776/v1/%\(tenant_id\)s \ > --internalurl=http://controller:8776/v1/%\(tenant_id\)s \ > --adminurl=http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]# keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
[root@controller ~]# keystone endpoint-create \ > --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \ > --publicurl=http://controller:8776/v2/%\(tenant_id\)s \ > --internalurl=http://controller:8776/v2/%\(tenant_id\)s \ > --adminurl=http://controller:8776/v2/%\(tenant_id\)s
9.1.6 Start the services
[root@controller ~]# service openstack-cinder-api start
[root@controller ~]# service openstack-cinder-api restart // restart to confirm it really starts
[root@controller ~]# service openstack-cinder-scheduler start
[root@controller ~]# service openstack-cinder-scheduler restart
[root@controller ~]# chkconfig openstack-cinder-api on
[root@controller ~]# chkconfig openstack-cinder-scheduler on
9.2 Block Storage node
9.2.1 Prepare the volume group
[root@stor1 ~]# pvcreate /dev/sdb // create the physical volume
Physical volume "/dev/sdb" successfully created
[root@stor1 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
9.2.2 Install and configure the cinder storage service
[root@stor1 ~]# yum install openstack-cinder scsi-target-utils
[root@stor1 ~]# vim /etc/cinder/cinder.conf
[database]
connection=mysql://cinder:cinder@controller/cinder
[DEFAULT]
auth_strategy=keystone
rpc_backend=qpid // use the qpid message queue
qpid_hostname=controller
my_ip=192.168.22.132 // interface this node's cinder-volume service binds to
glance_host=controller // node running the Glance service
volumes_dir=/etc/cinder/volumes // directory for the volume definition files
iscsi_helper=tgtadm // use scsi-target as the iSCSI helper
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=cinder
admin_tenant_name=service
admin_password=cinder
Configure scsi-target:
[root@stor1 ~]# vim /etc/tgt/targets.conf
include /etc/cinder/volumes/*
9.2.3 Start the services
Note: in the Icehouse openstack-cinder package from the Fedora EPEL repository, the openstack-cinder-volume service first reads distconfig=/usr/share/cinder/cinder-dist.conf, and that file contains errors. Starting with it in place leaves newly created volumes unable to attach to instances, so stop the init script (/etc/rc.d/init.d/openstack-cinder-volume) from reading that file, or delete it.
[root@stor1 ~]# service openstack-cinder-volume start
[root@stor1 ~]# service openstack-cinder-volume restart
[root@stor1 ~]# service tgtd start
Starting SCSI target daemon: [ OK ]
[root@stor1 ~]# chkconfig openstack-cinder-volume on
[root@stor1 ~]# chkconfig tgtd on
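tgtd listens on the standard iSCSI port; a quick check (the target list stays empty until a volume is attached):
[root@stor1 ~]# ss -tnl | grep 3260
[root@stor1 ~]# tgtadm --lld iscsi --mode target --op show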
9.2.4 Test by creating a volume
On the controller node, create a 2 GB volume named testVolume:
[root@controller ~]# cinder create --display-name testVolume 2
[root@controller ~]# cinder list // list all volumes
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 355d03a8-7r56-7h63-9oi5-2426343f07a2 | available | testVolume | 2 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@controller ~]# nova volume-attach test-0002 355d03a8-7r56-7h63-9oi5-2426343f07a2 // attach the volume to the specified instance
[root@controller ~]# cinder list
[root@controller ~]# nova help volume-detach // detach a cloud disk
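Inside the guest, the attached volume typically shows up as /dev/vdb (virtio); a minimal usage sketch for a full Linux guest (CirrOS only carries a minimal toolset):
$ sudo mkfs.ext4 /dev/vdb
$ sudo mkdir /mnt/data && sudo mount /dev/vdb /mnt/data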