Detailed Steps for Deploying an OpenStack Environment

2023-05-16

【OpenStack Environment Configuration】

Virtual machine resource information
1. Controller node ct
CPU: 2 cores / 2 threads, with CPU virtualization enabled
Memory: 8 GB
Disk: 300 GB + 1024 GB (Ceph block storage)
Dual NICs: VM1 (LAN) 192.168.100.11, NAT 192.168.200.150
OS: CentOS 7.6 (1810), minimal install

2. Compute node c1
CPU: 2 cores / 2 threads, with CPU virtualization enabled
Memory: 8 GB; Disk: 300 GB + 1024 GB (Ceph block storage)
Dual NICs: VM1 (LAN) 192.168.100.12, NAT 192.168.200.151
OS: CentOS 7.6 (1810), minimal install

3. Compute node c2
CPU: 2 cores / 2 threads, with CPU virtualization enabled
Memory: 8 GB; Disk: 300 GB + 1024 GB (Ceph block storage)
Dual NICs: VM1 (LAN) 192.168.100.13, NAT 192.168.200.152
OS: CentOS 7.6 (1810), minimal install
PS: 6 GB is the minimum amount of memory.

Note: on the installer boot screen, press Tab to edit the boot line and append net.ifnames=0 biosdevname=0, so that the NICs are created with traditional names (eth0, eth1).
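If a system was already installed without this option, the same naming can be applied afterwards. A minimal sketch (my own addition, assuming a BIOS-boot CentOS 7 with the standard GRUB2 layout):

vim /etc/default/grub
# append to the GRUB_CMDLINE_LINUX line:  net.ifnames=0 biosdevname=0
grub2-mkconfig -o /boot/grub2/grub.cfg    # regenerate the GRUB configuration
reboot                                    # the NICs come back up as eth0 / eth1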

【Deployment Outline】
1. Configure the operating system and the OpenStack runtime environment
2. Configure the OpenStack platform base services (RabbitMQ, MariaDB, Memcached, Apache)
3. Configure the OpenStack Keystone component
4. Configure the OpenStack Glance component
5. Configure the Placement service
6. Configure the OpenStack Nova component
7. Configure the OpenStack Neutron component
8. Configure the OpenStack Dashboard component
9. Configure the OpenStack Cinder component
10. Common cloud instance operations

Resource plan

Hostname | Memory | Disk (GB) | NICs                                        | OS
CT       | 8 GB   | 300 + 300 | VM1: 192.168.100.11 / NAT: 192.168.200.150 | CentOS 7.6
C1       | 8 GB   | 300 + 300 | VM1: 192.168.100.12 / NAT: 192.168.200.151 | CentOS 7.6
C2       | 8 GB   | 300 + 300 | VM1: 192.168.100.13 / NAT: 192.168.200.152 | CentOS 7.6

【Basic Environment Configuration】
Configuration items (all nodes):
1. Set the hostname and configure the NICs and the host-to-address mapping

hostnamectl set-hostname ct  # on the other two compute nodes, set c1 and c2 respectively

[root@ct ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0    # repeat on c1 and c2 with their own IP addresses
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=93878b36-7b85-47d6-8f52-51e5adf2e236
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.200.150
NETMASK=255.255.255.0
GATEWAY=192.168.200.2
[root@ct ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth1
UUID=e3169c79-441b-425b-b221-8a74693a9c5c
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.100.11
NETMASK=255.255.255.0
#GATEWAY=192.168.100.1     # temporarily commented out; enable it when deploying OpenStack, otherwise the node cannot reach the public network

[root@ct yum.repos.d]# vi /etc/resolv.conf    # add DNS servers
nameserver 8.8.8.8
nameserver 114.114.114.114
[root@ct ~]# systemctl restart network	# restart the network service

 Configure /etc/hosts (on all three nodes)
[root@ct ~]# vi /etc/hosts
192.168.100.11  ct
192.168.100.12  c1
192.168.100.13  c2
PS: these are the LAN IP addresses, i.e. the VM1 addresses.
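Optional check (not part of the original steps): confirm that the names resolve and the nodes reach each other over the LAN.

[root@ct ~]# ping -c 2 c1    # should answer from 192.168.100.12
[root@ct ~]# ping -c 2 c2    # should answer from 192.168.100.13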

2. Firewall and SELinux (all nodes)

[root@c2 ~]# systemctl stop firewalld
[root@c2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@c2 ~]# setenforce 0
[root@c2 ~]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled        ### change this value to disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
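A quick verification sketch (my own addition): confirm the firewall will stay off and SELinux is relaxed.

[root@c2 ~]# systemctl is-enabled firewalld    # expect "disabled"
[root@c2 ~]# getenforce                        # "Permissive" now, "Disabled" after the next reboot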

3. Passwordless SSH (set up mutual key-based login among the three nodes)

[root@ct ~]# ssh-keygen -t rsa
[root@ct ~]# ssh-copy-id c1
[root@ct ~]# ssh-copy-id c2

[root@c1 ~]# ssh-keygen -t rsa
[root@c1 ~]# ssh-copy-id ct
[root@c1 ~]# ssh-copy-id c2

[root@c2 ~]# ssh-keygen -t rsa
[root@c2 ~]# ssh-copy-id c1
[root@c2 ~]# ssh-copy-id ct
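Optional check (not in the original): after the keys have been exchanged, each node should be able to run a remote command on the others without a password prompt. The host list below is for ct; adjust it on c1 and c2.

[root@ct ~]# for host in c1 c2; do ssh $host hostname; done    # expect c1 and c2 printed with no password prompt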

4. Base dependency packages (install on all three nodes)

yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre  pcre-devel expat-devel cmake  bzip2 lrzsz 

expat is a C development library for XML parsing; re-run the yum command if needed to make sure no package failed to install.

yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils
#Installs the OpenStack Train repository package, together with the OpenStack client, openstack-selinux and openstack-utils
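Optional verification (my own addition): confirm the Train repository is enabled and the client is installed. The exact repo id and client version mentioned in the comments are assumptions and may differ slightly.

[root@ct ~]# yum repolist enabled | grep -i openstack    # a repo such as centos-openstack-train should be listed
[root@ct ~]# openstack --version                         # prints the python-openstackclient version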

5. Time synchronization and a periodic scheduled task

 Controller node (ct) configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth1   # the VM1 (LAN) NIC
# set or confirm the following parameters
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.11
NETMASK=255.255.255.0
#GATEWAY=192.168.100.1

vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
IPV4_ROUTE_METRIC=90				### route metric: gives the NAT NIC priority for the default route; this line must be added
ONBOOT=yes
IPADDR=192.168.200.150
NETMASK=255.255.255.0
GATEWAY=192.168.200.2

systemctl restart network		# restart the network service
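Optional check (not in the original): with the metric set, the preferred default route should go through the NAT gateway, and the node should still reach the Internet.

[root@ct ~]# ip route show default     # expect something like: default via 192.168.200.2 dev eth0 ... metric 90
[root@ct ~]# ping -c 2 www.aliyun.com  # confirms outbound access still works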

● 【Time synchronization on the controller node ct】

ct -> syncs with the Aliyun NTP server
c1, c2 -> sync with ct
[root@ct ~]# yum install chrony -y
[root@ct ~]# vim /etc/chrony.conf 
server 0.centos.pool.ntp.org iburst						### comment out
server 1.centos.pool.ntp.org iburst						### comment out
server 2.centos.pool.ntp.org iburst						### comment out
server 3.centos.pool.ntp.org iburst						### comment out
server ntp6.aliyun.com iburst							### add the Aliyun NTP server as the source
allow 192.168.100.0/24							### allow hosts in 192.168.100.0/24 to sync time from this node

[root@ct ~]# systemctl enable chronyd
[root@ct ~]# systemctl restart chronyd

● Use the chronyc sources command to check the synchronization status

[root@ct ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6     7     1   -915us[  -13ms] +/-   25ms     

● 【Time synchronization on the compute nodes c1 and c2】

#The following is done on c1; configure c2 the same way
[root@c1 ~]# vi /etc/chrony.conf 
server 0.centos.pool.ntp.org iburst						### comment out
server 1.centos.pool.ntp.org iburst						### comment out
server 2.centos.pool.ntp.org iburst						### comment out
server 3.centos.pool.ntp.org iburst						### comment out
server ct iburst								### use the controller node ct as the time source

[root@c1 ~]# systemctl enable chronyd.service					### enable the time sync service at boot
[root@c1 ~]# systemctl restart chronyd.service					### restart the time sync service
[root@c2 ~]# chronyc sources 
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? ct                            3   6     3     1  +4324us[+4324us] +/-   25ms

● Set up a periodic scheduled task

[root@c1 ~]# crontab -e							### add a cron job that logs the sync status every 2 minutes
*/2 * * * * /usr/bin/chronyc sources >>/var/log/chronyc.log

[root@ct ~]# crontab -l      # list the scheduled tasks
*/2 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log
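Optional verification (my own addition): besides the cron log, chrony can report the current sync state directly.

[root@c1 ~]# chronyc tracking    # the Reference ID should point at ct; check the Last offset value
[root@c1 ~]# timedatectl         # shows "NTP synchronized: yes" once the first sync has completed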

【System Environment Configuration】
Configure the services (controller node):
I. Install and configure MariaDB

[root@ct ~]# yum -y install mariadb mariadb-server python2-PyMySQL
#python2-PyMySQL provides the module the OpenStack control plane needs to connect to MySQL; without it the services cannot reach the database. It is installed on the controller only.
[root@ct ~]# yum -y install libibverbs	

● Add a MySQL drop-in configuration file with the following content

[root@ct ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.100.11			#controller node LAN address
default-storage-engine = innodb 		#default storage engine
innodb_file_per_table = on 				#one tablespace file per table
max_connections = 4096 				    #maximum number of connections
collation-server = utf8_general_ci 		#default collation
character-set-server = utf8				#default character set

● Enable at boot and start the service

[root@ct my.cnf.d]# systemctl enable mariadb
[root@ct my.cnf.d]# systemctl start mariadb

● Run the MariaDB security configuration script

[root@ct my.cnf.d]# mysql_secure_installation
Enter current password for root (enter for none): 			#press Enter
OK, successfully used password, moving on...
Set root password? [Y/n] Y
Remove anonymous users? [Y/n] Y
 ... Success!
Disallow root login remotely? [Y/n] N			#whether to forbid remote root login; N keeps it allowed
 ... skipping.
Remove test database and access to it? [Y/n] Y  #remove the test database
Reload privilege tables now? [Y/n] Y 	
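Optional check (not part of the original script): log in and confirm that the drop-in settings took effect and that mysqld is bound to the LAN address.

[root@ct ~]# mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW VARIABLES LIKE 'character_set_server';"
[root@ct ~]# ss -natp | grep 3306    # mysqld should be listening on 192.168.100.11:3306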

II. Install RabbitMQ
Every instance-creation command is sent by the control plane to RabbitMQ, and the compute nodes listen on RabbitMQ for these messages.

[root@ct ~]# yum -y install rabbitmq-server
 Configure the service: start RabbitMQ and enable it at boot.
[root@ct ~]# systemctl enable rabbitmq-server.service
[root@ct ~]# systemctl start rabbitmq-server.service

 Create a message-queue user that the controller and compute nodes use to authenticate with RabbitMQ
[root@ct ~]# rabbitmqctl add_user openstack RABBIT_PASS

 Grant the openstack user permissions (regular expressions for configure, write and read)
[root@ct ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
#Check the two ports 5672 and 25672 (5672 is the default RabbitMQ port, 25672 is used by inter-node and CLI tool communication)
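Optional verification sketch (my own addition): confirm the user exists and the permissions are in place before moving on.

[root@ct ~]# rabbitmqctl list_users          # openstack should appear alongside guest
[root@ct ~]# rabbitmqctl list_permissions    # openstack should show ".*" ".*" ".*" on the default vhost /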

 Optional configuration:
 List the RabbitMQ plugins
[root@ct ~]# rabbitmq-plugins list
 Enable the RabbitMQ web management UI plugin; it listens on port 15672
[root@ct ~]# rabbitmq-plugins enable rabbitmq_management

 Check the ports (25672, 5672, 15672)
[root@ct my.cnf.d]# ss -natp | grep 5672
LISTEN     0      128          *:25672                    *:*                   users:(("beam.smp",pid=24596,fd=46))
LISTEN     0      128         :::5672                    :::*                   users:(("beam.smp",pid=24596,fd=55))

● From a browser on the Windows host, open 192.168.200.150:15672
The default username and password are both guest

III. Install memcached
● Purpose:
memcached stores session information; the identity service (Keystone) uses it to cache tokens. When you log in to the OpenStack dashboard, the session data that is generated is stored in memcached.
● Steps:
● Install Memcached

[root@ct ~]# yum install -y memcached python-memcached
#python-memcached is the Python client library the OpenStack services use to talk to memcached
 Edit the Memcached configuration file
[root@ct ~]# vim /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,ct"     # append ,ct to the listen list

[root@ct ~]# systemctl enable memcached
[root@ct ~]# systemctl start memcached
[root@ct ~]# netstat -nautp | grep 11211
tcp        0      0 192.168.100.11:11211    0.0.0.0:*               LISTEN      21572/memcached     
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      21572/memcached     
tcp6       0      0 ::1:11211               :::*                    LISTEN      21572/memcached    
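Optional check (not in the original steps): memcached answers the stats command over a raw TCP connection. This assumes the nc tool is available; on a minimal install it comes from the nmap-ncat package.

[root@ct ~]# yum -y install nmap-ncat                              # only if nc is missing
[root@ct ~]# printf 'stats\nquit\n' | nc 127.0.0.1 11211 | head    # prints STAT lines such as uptime and curr_connections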

● Install etcd

[root@ct ~]# yum -y install etcd

● Edit the etcd configuration file

[root@ct ~]# cd /etc/etcd/
[root@ct etcd]# ls
etcd.conf
[root@ct etcd]# vim etcd.conf 
--------------------------------------------------------------------------------------
#Delete the existing content and add the following
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.11:2379"	
ETCD_NAME="ct"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.11:2379"
ETCD_INITIAL_CLUSTER="ct=http://192.168.100.11:2380"	
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
----------------------------------------------------------------------------------------

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"      #data directory
ETCD_LISTEN_PEER_URLS="http://192.168.100.11:2380"      #URL for listening to other etcd members (port 2380, peer-to-peer cluster communication; hostnames are not valid here)
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.11:2379"	 #address that serves client requests (port 2379, the client communication port)
ETCD_NAME="ct"	  #this node's name within the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.11:2380"   #peer URL advertised by this member; port 2380 is used for cluster communication
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.11:2379"         #client URL advertised by this member
ETCD_INITIAL_CLUSTER="ct=http://192.168.100.11:2380"	          #initial cluster membership
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"		#unique token identifying the cluster
ETCD_INITIAL_CLUSTER_STATE="new"   #initial cluster state: new bootstraps a static cluster; existing means this etcd service will try to join an already-running cluster

#Enable at boot, start the service, and check the ports
[root@ct ~]# systemctl enable etcd.service
[root@ct ~]# systemctl start etcd.service
[root@ct ~]# netstat -anutp |grep 2379
tcp        0      0 192.168.100.11:2379     0.0.0.0:*               LISTEN      21954/etcd          
tcp        0      0 192.168.100.11:2379     192.168.100.11:55076    ESTABLISHED 21954/etcd          
tcp        0      0 192.168.100.11:55076    192.168.100.11:2379     ESTABLISHED 21954/etcd       
[root@ct ~]# netstat -anutp |grep 2380
tcp        0      0 192.168.100.11:2380     0.0.0.0:*               LISTEN      21954/etcd      
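Optional verification sketch (my own addition): etcdctl can confirm that the single-node cluster is healthy. The commands below use the v2 API that the CentOS 7 etcd package defaults to.

[root@ct ~]# etcdctl --endpoints=http://192.168.100.11:2379 cluster-health    # expect "cluster is healthy"
[root@ct ~]# etcdctl --endpoints=http://192.168.100.11:2379 member list       # shows the single member ct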

Install the OpenStack packages on c1 and c2
yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils