Installation Tutorial
Table of Contents
1. Basic environment preparation (all nodes)
- 1.1 Prerequisites and host planning
- 1.2 Configure yum repositories
- 1.3 Disable selinux and firewalld
- 1.4 Configure iptables
- 1.5 Configure hostname resolution
- 1.6 Install openstack packages
- 1.7 Install NTP service (to be continued)
2. Install base software (controller)
- 2.1 Install mariadb
- 2.2 Install rabbitmq
- 2.3 Install memcached
- 2.4 Install etcd
3. Install openstack services
- 3.1 Install keystone (controller)
- 3.2 Install glance (controller)
- 3.3 Install nova
- 3.3.1 Control node (controller)
- 3.3.2 Compute node (compute)
- 3.4 Install neutron
- 3.4.1 Control node (controller)
- 3.4.2 Compute node (compute)
- 3.5 Install cinder
- 3.5.1 Control node (controller)
- 3.5.2 cinder-volume node (cinder-volume)
- 3.6 Install horizon
4. Replace code
- 4.1 Overwrite the code (all nodes)
- 4.2 Restart nova and neutron services
- 4.2.1 Control node (controller)
- 4.2.2 Compute node (compute)
- 4.3 Update the neutron database (controller)
- 4.4 Configure spice access (compute)
- 4.5 Create a VM to verify (dashboard)
5. Integrate with ceph
- 5.1 Create storage pools and grant permissions (on the ceph side)
- 5.2 Install ceph components
- 5.3 glance
- 5.4 cinder
- 5.5 nova (compute)
1. Basic environment preparation (all nodes)
1.1 Prerequisites and host planning
Prepare two hosts: one to serve as the controller, the other as the compute and cinder-volume node.
- 192.168.90.97: controller
- 192.168.90.98: nova-compute, cinder-volume
1.2 Configure yum repositories
to be continued
1.3 Disable selinux and firewalld
Run the following commands on every host:
$ setenforce 0
$ sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
$ systemctl stop firewalld && systemctl disable firewalld
1.4 Configure iptables
Run the following commands on every host:
$ echo "net.bridge.bridge-nf-call-iptables = 1" > /usr/lib/sysctl.d/00-system.conf
$ echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /usr/lib/sysctl.d/00-system.conf1.5 配置主机名解析
1.5 Configure hostname resolution
Run the following command on every host:
$ echo "192.168.90.97 controller" >> /etc/hosts1.6 安装openstack软件包
1.6 Install openstack packages
Run the following commands on every host:
$ yum -y upgrade
$ yum -y install python-openstackclient
$ yum -y install openstack-selinux
1.7 Install NTP service
to be continued
2. Install base software (controller)
2.1 Install mariadb
2.1.1 Run the following command to install mariadb
$ yum -y install mariadb mariadb-server python2-PyMySQL
2.1.2 Create and edit the file /etc/my.cnf.d/openstack.cnf. Its content can be found at config -> mariadb -> openstack.cnf; remember to replace the variables inside.
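For reference, a minimal openstack.cnf usually looks like the sketch below, following the upstream install guide; the bind-address is assumed to be the controller's management IP:
[mysqld]
bind-address = 192.168.90.97
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8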
2.1.3 Start mariadb
$ systemctl start mariadb.service && systemctl enable mariadb.service
2.1.4 Run the mysql_secure_installation script to initialize the database service and set a password for the database root account (123456 here):
$ mysql_secure_installation
2.2 Install rabbitmq
2.2.1 Install
$ yum -y install rabbitmq-server
2.2.2 Start it
$ systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
2.2.3 Add the openstack user; remember to replace 123456 below with a suitable password
$ rabbitmqctl add_user openstack 123456
2.2.4 Grant permissions to the openstack user (the three ".*" patterns grant configure, write, and read permissions respectively)
$ rabbitmqctl set_permissions openstack ".*" ".*" ".*"
2.3 Install memcached
2.3.1 Install
$ yum -y install memcached python-memcached
2.3.2 Modify the configuration file /etc/sysconfig/memcached: overwrite it with the content of config -> memcached -> memcached, and remember to replace the variables inside.
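For reference, the change usually amounts to making memcached listen on the controller's name in addition to localhost; a sketch, assuming the stock package defaults:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"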
2.3.3 Start the service
$ systemctl enable memcached.service && systemctl start memcached.service
2.4 Install etcd
to be continued
3. Install openstack services
3.1 Install keystone
3.1.1
to be continued
3.2 Install glance
(to be continued)
Download an image and upload it
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
$ openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Then check whether the image was uploaded successfully
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 09a155a7-0c3d-4756-9dbc-55f8b18dfbc8 | cirros | active |
+--------------------------------------+--------+--------+
3.3 Install nova
3.3.1 Control node
(to be continued)
After the installation, run the following command to check whether nova's scheduler, conductor, consoleauth, and other related services are up
$ openstack compute service list
+----+------------------+----------+----------+---------+-------+----------------------------+
| ID | Binary           | Host     | Zone     | Status  | State | Updated At                 |
+----+------------------+----------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | dcos-162 | internal | enabled | up    | 2019-04-14T02:51:20.000000 |
|  2 | nova-conductor   | dcos-162 | internal | enabled | up    | 2019-04-14T02:51:21.000000 |
|  3 | nova-scheduler   | dcos-162 | internal | enabled | up    | 2019-04-14T02:51:21.000000 |
+----+------------------+----------+----------+---------+-------+----------------------------+
3.5 Install cinder
3.5.1 Control node
3.5.1.1 Create the cinder database and user and grant privileges
$ mysql -uroot -p${MYSQL_ROOT_PASS}
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '${MYSQL_NORMAL_USER_PASS}';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '${MYSQL_NORMAL_USER_PASS}';
3.5.1.2 Create the cinder user in keystone and grant it the admin role
$ openstack user create --domain default --password ${KEYSTONE_NORMAL_USER_PASS} cinder
$ openstack role add --project service --user cinder admin
3.5.1.3 Create endpoints for the cinder service
$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
$ openstack endpoint create --region RegionOne volumev2 admin http://controller:${CINDER_PORT}/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 public http://controller:${CINDER_PORT}/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 internal http://controller:${CINDER_PORT}/v2/%\(project_id\)s
$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
$ openstack endpoint create --region RegionOne volumev3 admin http://controller:${CINDER_PORT}/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 public http://controller:${CINDER_PORT}/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 internal http://controller:${CINDER_PORT}/v3/%\(project_id\)s
3.5.1.4 Install cinder-api and cinder-scheduler
$ yum -y install openstack-cinder
3.5.1.5 Modify the configuration file /etc/cinder/cinder.conf: replace its entire content with config -> cinder -> cinder.conf.controller, and remember to change the variables inside.
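The essential settings in that file are roughly the following. This is only a sketch based on the upstream Queens install guide; the ${RABBITMQ_PASS} variable name is an assumption made up to match this document's conventions:
[DEFAULT]
transport_url = rabbit://openstack:${RABBITMQ_PASS}@controller
auth_strategy = keystone
my_ip = 192.168.90.97
[database]
connection = mysql+pymysql://cinder:${MYSQL_NORMAL_USER_PASS}@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = ${KEYSTONE_NORMAL_USER_PASS}
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp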
3.5.1.6 Initialize the cinder database
$ su -s /bin/sh -c "cinder-manage db sync" cinder
3.5.1.7 Start the cinder-api and cinder-scheduler services
$ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
$ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
At this point, running openstack volume service list should show cinder-api and cinder-scheduler in the running state.
3.5.2 cinder-volume node
The cinder-volume node only needs cinder-volume installed, and the service should not be started yet: cinder-volume must be wired to a specific storage backend such as ceph or lvm, and the configuration differs per backend. So in this step we only run the following command to install it:
$ yum -y install openstack-cinder targetcli python-keystone
4. Replace code
4.1 Overwrite the code (all nodes)
Obtain the source tar.gz package. After unpacking it you will find a patch.sh script; run it as root to replace the code on that node. Note that the code must be replaced on every node.
4.2 Restart nova and neutron services
4.2.1 Control node
$ systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-consoleauth openstack-nova-novncproxy
$ systemctl restart neutron-server neutron-dhcp-agent neutron-l3-agent neutron-linuxbridge-agent neutron-metadata-agent
4.2.2 Compute node
$ systemctl restart openstack-nova-compute neutron-linuxbridge-agent
4.3 Update the neutron database (control node)
4.3.1 Run the following command to add the new column to the neutron database
$ neutron-db-manage revision -m "add auth_policy in securitygroup rule"
  Running revision for neutron ...
  Generating /usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/queens/expand/bb2b98644efc_add_auth_policy_in_securitygroup_rule.py ... done
  OK
  Running revision for neutron ...
  Generating /usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/queens/contract/9282902007a9_add_auth_policy_in_securitygroup_rule.py ... done
  OK
This command prints the messages above and generates two files; the one under expand is used in the next step.
4.3.2 Edit the generated expand file: comment out the original upgrade function and add the following content.
The full path of the file is usually /usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/queens/expand/xxxxxxxxxx_add_auth_policy_in_securitygroup_rule.py
def upgrade():
    op.add_column('securitygrouprules',
                  sa.Column('auth_policy', sa.String(50),
                            server_default='ALLOW', nullable=False))
4.3.3 Update the database
$ neutron-db-manage upgrade heads
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 5c85685d616d -> 9282902007a9, add auth_policy in securitygroup rule
INFO  [alembic.runtime.migration] Running upgrade 594422d373ee -> bb2b98644efc, add auth_policy in securitygroup rule
  OK
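A quick way to confirm that the column was actually added (assuming the neutron database is named neutron):
$ mysql -uroot -p${MYSQL_ROOT_PASS} neutron -e "desc securitygrouprules;" | grep auth_policy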
4.4 Configure spice access (compute node)
4.4.1 nova.conf
Edit the nova configuration file to disable vnc and switch the remote console to spice.
$ vim /etc/nova/nova.conf
[DEFAULT]
vnc_enabled = false
[vnc]
enabled = false
[spice]
enabled = true
agent_enabled = true
keymap = en-us
server_listen = 0.0.0.0
4.4.2 Certificates
Create the directory /etc/pki/libvirt-spice/, then download the three certificate files from http://10.142.233.68:8050/home/cloud-desk/tls/ and place them in that directory (libvirt expects them to be named ca-cert.pem, server-cert.pem, and server-key.pem).
Configure /etc/libvirt/qemu.conf:
spice_tls = 1
spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
4.4.3 Restart the nova-compute service
$ systemctl restart openstack-nova-compute
4.5 Create a VM to verify
Create a VM through the dashboard (it still uses a local disk at this point) and connect to it through the spice port.
5. Integrate with ceph
First generate a uuid; it will be needed in many places later.
$ uuidgen
75745520-953f-493b-8d19-6383f644087f
5.1 Create storage pools and grant permissions (on the ceph side)
$ ceph osd pool create vms 128 128
$ ceph osd pool set vms size 3
$ ceph osd pool create volumes 128 128
$ ceph osd pool set volumes size 3
$ ceph osd pool create images 128 128
$ ceph osd pool set images size 3
$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring
$ ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=images, allow rwx pool=vms' -o /etc/ceph/ceph.client.cinder.keyring
$ ceph auth get-or-create client.vms mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=images, allow rwx pool=vms' -o /etc/ceph/ceph.client.vms.keyring
$ ceph auth get-or-create client.backups mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' -o /etc/ceph/ceph.client.backups.keyring
Note that the cinder user is also granted access to the vms pool, because the cinder user will later be used to operate on the vms pool as well.
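The granted capabilities can be reviewed at any time, for example:
$ ceph auth get client.cinder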
5.2 Install ceph packages (all nodes)
$ yum -y install ceph-common python-rbd
5.3 glance
5.3.1 Edit the /etc/glance/glance-api.conf file and change the [glance_store] section as follows:
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
To enable copy-on-write cloning of images, also add the following under the [DEFAULT] section:
show_image_direct_url = true
5.3.2 Copy ceph.client.glance.keyring to /etc/ceph/
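For example, copied from a ceph host (ceph-node below is a placeholder) and handed to the glance user, as the ceph-openstack integration guide suggests:
$ scp ceph-node:/etc/ceph/ceph.client.glance.keyring /etc/ceph/
$ chown glance:glance /etc/ceph/ceph.client.glance.keyring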
5.3.3 Restart the glance services and test uploading an image
$ systemctl restart openstack-glance-api.service openstack-glance-registry.service
5.4 cinder
5.4.1 controller
5.4.1.1 Configure the compute service to use block storage
Edit /etc/nova/nova.conf and add the following to the [cinder] section:
[cinder]
os_region_name = RegionOne
Then restart nova-api:
$ systemctl restart openstack-nova-api
5.4.1.2 Create volume types
$ cinder type-create ceph-vm
$ cinder type-key ceph-vm set volume_backend_name=ceph-vm
$ cinder type-create ceph-data
$ cinder type-key ceph-data set volume_backend_name=ceph-data
$ cinder extra-specs-list
5.4.2 cinder-volume
In step 3.5.2 we only installed cinder-volume; we did not change its configuration file or start the service. Now edit /etc/cinder/cinder.conf: replace its entire content with config -> cinder -> cinder.conf.volume, and remember to replace the variables inside. The ceph-related parts of that file presumably look like the sketch below.
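A sketch, assuming the ceph-vm backend maps to the vms pool and ceph-data to the volumes pool; the volume_backend_name values must match the type keys created in 5.4.1.2, and {UUID} is the value generated at the start of section 5:
[DEFAULT]
enabled_backends = ceph-vm,ceph-data
[ceph-vm]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-vm
rbd_pool = vms
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = {UUID}
[ceph-data]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-data
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = {UUID}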
Then restart the cinder-volume service:
$ systemctl restart openstack-cinder-volume
5.5 nova (compute)
5.5.1 Configure libvirt to access ceph
Create a secret.xml file, where 75745520-953f-493b-8d19-6383f644087f is the UUID generated at the start of section 5:
<secret ephemeral='no' private='no'>
  <uuid>75745520-953f-493b-8d19-6383f644087f</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
Then run the following command to define the secret:
$ virsh secret-define --file secret.xml
Then set a value for the secret. Here 75745520-953f-493b-8d19-6383f644087f is the UUID generated at the start of section 5, and AQAYHS9cbK65LhAAYe774kLwXiNtUOz611QAvQ== is the key from /etc/ceph/ceph.client.cinder.keyring:
$ cat /etc/ceph/ceph.client.cinder.keyring
$ virsh secret-set-value --secret 75745520-953f-493b-8d19-6383f644087f --base64 AQAYHS9cbK65LhAAYe774kLwXiNtUOz611QAvQ==
Then run the following command to verify:
$ virsh secret-list
5.5.2 Configure nova-compute
Edit /etc/nova/nova.conf and add the following to the [libvirt] section; remember to replace {UUID} with the value generated at the start of section 5:
[libvirt]
virt_type = kvm
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = {UUID}
disk_cachemodes="network=writeback"
hw_disk_discard = unmap
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
5.5.3 Restart libvirtd and nova-compute
$ systemctl restart libvirtd openstack-nova-compute
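As a final end-to-end check, create a volume and a VM again and confirm that the corresponding rbd images appear in the pools (run on the ceph side, or anywhere with admin credentials):
$ rbd ls vms
$ rbd ls volumes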