CentOS 7.6安装OpenStack Stein版本

2023-05-16

文章目录

  • 一、前提
    • 1.1设置四节点
    • 1.2网络平台架构
    • 1.3准备环境(所有节点)
      • 1.3.1设置hosts
      • 1.3.2设置主机名
      • 1.3.3关闭 firewalld
      • 1.3.4关闭SELinux
      • 1.3.5设置静态IP
      • 1.3.6自定义yum源
      • 1.3.7安装预装包
      • 1.3.8 编写Openstack admin 许可(在controller节点使用)
  • 二、Controller节点
    • 2.1需安装
    • 2.2配置网卡信息
      • 2.2.1网卡一
      • 2.2.2网卡二
    • 2.3配置时间同步,chrony
    • 2.4安装mariadb数据库
      • 2.4.1安装包
      • 2.4.2配置
      • 2.4.3启动服务
      • 2.4.4运行安全权限脚本
    • 2.5安装rabbitmq消息队列服务
      • 2.5.1安装所需包
      • 2.5.2启动服务并加入开机启动
      • 2.5.3添加openstack用户,允许进行配置,写入和读取访问 openstack
    • 2.6安装Memcached缓存令牌
      • 2.6.1安装所需包
      • 2.6.2配置
      • 2.6.3启动服务
    • 2.7安装 etcd 集群
      • 2.7.1 安装所需包
      • 2.7.2 配置
      • 2.7.3启动服务
      • 2.7.4查看集群健康
    • 2.8安装 Keystone
      • 2.8.1安装Keystone数据库
      • 2.8.2安装所需包
      • 2.8.3 配置
      • 2.8.4填充Identity服务数据库
      • 2.8.5 初始化Fernet密钥存储库
      • 2.8.6 引导身份服务
      • 2.8.7 配置Apache HTTP服务器
      • 2.8.8创建链接
      • 2.8.9启动服务并加入开机启动
      • 2.8.10创建域,项目,用户和角色
    • 2.9安装 glance
      • 2.9.1 安装数据库
      • 2.9.2创建服务凭据
        • 2.9.2.1创建glance用户
        • 2.9.2.2将admin角色添加到glance用户和 service项目
        • 2.9.2.3创建glance服务实体
        • 2.9.2.4创建Image服务API端点
      • 2.9.3安装所需包
      • 2.9.4配置
      • 2.9.5填充Image服务数据库
      • 2.9.6启动服务并加入开机启动
      • 2.9.7验证glance 镜像服务
    • 2.10安装 nova
      • 2.10.1安装Nova数据库
      • 2.10.2创建Compute服务凭据
      • 2.10.3创建Compute API服务端点
      • 2.10.4创建Placement服务凭据和服务端点
      • 2.10.5安装所需包
      • 2.10.6配置
      • 2.10.7重启http服务
      • 2.10.8填充nova-api数据库
      • 2.10.9注册cell0数据库
      • 2.10.10创建cell1单元格
      • 2.10.11填充Nova数据库
      • 2.10.12验证nova cell0和cell1是否正确注册
      • 2.10.13启动服务
    • 2.11安装Neutron
      • 2.11.1配置IP转发:
        • 2.11.1.1安装网桥模块
        • 2.11.1.2配置
        • 2.11.1.3启动模块
      • 2.11.2创建neutron数据库
      • 2.11.3创建neutron用户
      • 2.11.4将admin角色添加到neutron用户
      • 2.11.5创建neutron服务实体
      • 2.11.6创建网络服务API端点
      • 2.11.7安装所需包
      • 2.11.8配置
        • 2.11.8.1配置neutron.conf
        • 2.11.8.2配置ML2
        • 2.11.8.3配置nova.conf
        • 2.11.8.4配置metadata_agent
      • 2.11.9 ML2创建链接指向ML2插件配置文件
      • 2.11.10填充数据库
      • 2.11.11启动服务
    • 2.12安装Horizon
      • 2.12.1安装所需包
      • 2.12.2配置
      • 2.12.3启动服务
    • 2.13安装Cinder
      • 2.13.1创建数据库
      • 2.13.2创建cinder用户
      • 2.13.3将admin角色添加到cinder用户
      • 2.13.4创建cinderv2和cinderv3服务实体
      • 2.13.5创建Block Storage服务API端点
      • 2.13.6安装所需包
      • 2.13.7配置
      • 2.13.8填充块存储数据库
      • 2.13.9启动服务
  • 三、Compute节点
    • 3.1需安装
    • 3.2配置网卡信息
    • 3.3安装 nova
      • 3.3.1安装所需包
      • 3.3.2配置
      • 3.3.3检查虚拟机是否支持虚拟化
      • 3.3.4启动服务
      • 3.3.5添加compute节点信息入controller节点 cell 数据库 (controller节点执行)
    • 3.4安装 neutron
      • 3.4.1配置IP转发
      • 3.4.2安装相应包
      • 3.4.3配置
        • 3.4.3.1配置neutron.conf
        • 3.4.3.2配置ML2插件
        • 3.4.3.3配置openvswitch_agent
        • 3.4.3.4配置nova.conf
        • 3.4.3.5启动服务
  • 四、Network节点
    • 4.1需安装
    • 4.2安装 neutron
      • 4.2.1安装所需包
      • 4.2.2配置IP转发
      • 4.2.3创建虚拟网桥
      • 4.2.4配置网卡信息
        • 4.2.4.1网卡一
        • 4.2.4.2网卡二
        • 4.2.4.3网桥
      • 4.2.5配置
        • 4.2.5.1配置neutron.conf
        • 4.2.5.2配置ML2插件
        • 4.2.5.3配置L3代理
        • 4.2.5.4配置DHCP
        • 4.2.5.5配置openvswitch
        • 4.2.5.6配置metadata
      • 4.2.6启动服务
      • 4.2.7验证网络服务是否正常启动(在controller节点执行)
  • 五、Block-Storage节点
    • 5.1需安装
    • 5.2配置网卡信息
    • 5.3安装Cinder
      • 5.3.1安装LVM服务
        • 5.3.1.1安装所需包
        • 5.3.1.2启动服务
        • 5.3.1.3创建LVM卷
      • 5.3.2安装cinder包
      • 5.3.3修改配置文件
      • 5.3.4启动cinder服务并加入开机启动

一、前提

使用虚拟机版本 : VMware workstation 15

提示:如需转载切记标记来源 CSDN :Stein10010

详细说明见官网:Stein

1.1设置四节点

Controller: 两块网卡 Host-Only(提供租户API接口172.16.30.10)
			Host-Only (172.16.20.110) 2.5G内存 10G硬盘

Compute:一块网卡Host-Only (172.16.20.120) 5G内存 20G硬盘
		   实例虚拟机运行在计算节点 需分配足够多内存供虚机使用
		   
Network-Node:两块网卡 NAT(设置浮动IP,172.16.10.10) 
			  Host-Only(172.16.20.130) 512MB 内存 5G硬盘

Block-Storage: 一块网卡Host-Only (172.16.20.140) 512MB内存 
				两块硬盘 /sda 5G    /sdb 100G

1.2网络平台架构

平台四大架构

1.3准备环境(所有节点)

为四台主机添加hosts解析,设置各自主机名,关闭firewalld和SELinux,并设置静态IP。

1.3.1设置hosts

# cp /etc/hosts /etc/hosts.bak

# vi /etc/hosts
172.16.20.110 controller
172.16.20.120 compute
172.16.20.130 network-node
172.16.20.140 block-storage

1.3.2设置主机名

# hostnamectl set-hostname XXX

1.3.3关闭 firewalld

# systemctl stop firewalld
# systemctl disable firewalld

1.3.4关闭SELinux

# vi /etc/selinux/config
SELINUX=disabled
# reboot

1.3.5设置静态IP

# vi /etc/sysconfig/network-scripts/ifcfg-ens3X

1.3.6自定义yum源

# cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# vi /etc/yum.repos.d/openstack-stein.repo
[openstack-stein]
name=openstack-stein
baseurl=https://mirrors.aliyun.com/centos/7/cloud/x86_64/openstack-stein/
enabled=1
gpgcheck=0
[qemu-kvm]
name=qemu-kvm
baseurl=https://mirrors.aliyun.com/centos/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0

1.3.7安装预装包

# yum install -y python-openstackclient openstack-selinux chrony

1.3.8 编写Openstack admin 许可(在controller节点使用)

# vi ~/admin-openrc.sh
export OS_AUTH_TYPE=password
export OS_USERNAME=admin
export OS_PASSWORD=123123
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
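
说明:该文件需等2.8节Keystone引导完成后才能真正生效。届时可按如下方式加载并做简单检查(验证示意,其中 openstack token issue 需Keystone服务可用后执行):
# source ~/admin-openrc.sh
# env | grep OS_
# openstack token issue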

二、Controller节点

2.1需安装

Chrony(NTP,时间同步) mariadb(数据库) rabbitmq (Message queue消息队列)
Memcached(cache tokens,缓存令牌) Etcd(集群)
Keystone (认证服务Identity service) Glance (Image service 镜像服务)
Nova (Compute service 计算服务 nova-api, nova-conductor, nova-novncproxy, nova-scheduler, nova-console, openstack-nova-placement-api)
Neutron (Networking service 网络服务 neutron, neutron-ml2, neutronclient)
Horizon (Dashboard web管理服务)

2.2配置网卡信息

2.2.1网卡一

# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

IPADDR=172.16.20.110
NETMASK=255.255.255.0

2.2.2网卡二

# vi /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=172.16.30.10
NETMASK=255.255.255.0
# systemctl restart network

2.3配置时间同步,chrony

# vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server cn.ntp.org.cn iburst

allow 172.16.20.0/24
# systemctl enable chronyd.service
# systemctl start chronyd.service
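
可用以下命令确认时间源同步状态(验证示意):
# chronyc sources -v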

2.4安装mariadb数据库

2.4.1安装包

# yum install mariadb mariadb-server python2-PyMySQL -y

2.4.2配置

# cp /etc/my.cnf.d/openstack.cnf /etc/my.cnf.d/openstack.cnf.bak
注:如不存在该文件则新建
# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 172.16.20.110

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

2.4.3启动服务

# systemctl enable mariadb.service
# systemctl start mariadb.service

2.4.4运行安全权限脚本

# mysql_secure_installation
Set root password? [Y/n] y

New password: ## 此处为root用户密码,这里设为123123

Re-enter new password:

Remove anonymous users? [Y/n] y

Disallow root login remotely? [Y/n] n

Remove test database and access to it? [Y/n] y

Reload privilege tables now? [Y/n] y
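
可用以下命令确认root密码已生效、数据库可正常登录(验证示意,密码以本文的123123为例):
# mysql -u root -p123123 -e "SHOW DATABASES;"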

2.5安装rabbitmq消息队列服务

2.5.1安装所需包

# yum install -y rabbitmq-server

2.5.2启动服务并加入开机启动

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

2.5.3添加openstack用户,允许进行配置,写入和读取访问 openstack

# rabbitmqctl add_user openstack 123123    //这里为队列认证密码,设为123123
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
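
可用以下命令确认用户及权限已生效(验证示意):
# rabbitmqctl list_users
# rabbitmqctl list_permissions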

2.6安装Memcached缓存令牌

2.6.1安装所需包

# yum install memcached python-memcached -y

2.6.2配置

# cp /etc/sysconfig/memcached /etc/sysconfig/memcached.bak
# vi /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"

2.6.3启动服务

# systemctl enable memcached.service
# systemctl start memcached.service
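
可用以下命令确认memcached已监听11211端口(验证示意):
# ss -tnlp | grep 11211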

2.7安装 etcd 集群

2.7.1 安装所需包

# yum install etcd -y

2.7.2 配置

# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
# vi /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.20.110:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.20.110:2379,http://127.0.0.1:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.20.110:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.20.110:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.16.20.110:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

2.7.3启动服务

# systemctl enable etcd
# systemctl start etcd

2.7.4查看集群健康

# etcdctl cluster-health
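
可再写入并读取一个测试键,确认etcd可正常读写(验证示意,键名为示例):
# etcdctl set /openstack/test ok
# etcdctl get /openstack/test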

2.8安装 Keystone

2.8.1安装Keystone数据库

# mysql -u root -p123123
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123123';    //这里将keystone访问数据库密码设置为123123
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123123';
MariaDB [(none)]>exit

2.8.2安装所需包

# yum install openstack-keystone httpd mod_wsgi -y

2.8.3 配置

# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:123123@controller/keystone
//123123 数据库设置keystone的密码
[token]
provider = fernet

2.8.4填充Identity服务数据库

# su -s /bin/sh -c "keystone-manage db_sync" keystone

2.8.5 初始化Fernet密钥存储库

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

2.8.6 引导身份服务

# keystone-manage bootstrap --bootstrap-password 123123 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
// 123123为 admin-openrc.sh 中的admin密码

2.8.7 配置Apache HTTP服务器

# cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
# vi /etc/httpd/conf/httpd.conf
ServerName controller

2.8.8创建链接

# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

2.8.9启动服务并加入开机启动

# systemctl enable httpd.service
# systemctl start httpd.service

2.8.10创建域,项目,用户和角色

# source admin-openrc.sh
# openstack domain create --description "An Example Domain" example
# openstack project create --domain default --description "Service Project" service
# openstack project create --domain default --description "Demo Project" myproject
# openstack user create --domain default --password-prompt myuser
# openstack role create myrole
# openstack role add --project myproject --user myuser myrole
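
完成后可用以下命令验证Keystone能正常签发令牌(验证示意):
# source ~/admin-openrc.sh
# openstack token issue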

2.9安装 glance

2.9.1 安装数据库

# mysql -u root -p123123
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123123';    //这里将glance访问数据库密码设置为123123

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123123';
MariaDB [(none)]>exit

2.9.2创建服务凭据

# source admin-openrc.sh

2.9.2.1创建glance用户

# openstack user create --domain default --password-prompt glance
User Password:123123
Repeat User Password:123123

2.9.2.2将admin角色添加到glance用户和 service项目

# openstack role add --project service --user glance admin

2.9.2.3创建glance服务实体

# openstack service create --name glance --description "OpenStack Image" image

2.9.2.4创建Image服务API端点

# openstack endpoint create --region RegionOne image public http://controller:9292
# openstack endpoint create --region RegionOne image internal http://controller:9292
# openstack endpoint create --region RegionOne image admin http://controller:9292

2.9.3安装所需包

# yum install openstack-glance -y

2.9.4配置

# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123123@controller/glance
//123123 数据库设置glance的密码

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123123 //keystone设置glance的密码

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
# cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:123123@controller/glance
//123123 数据库设置glance的密码

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123123 //keystone设置glance的密码

[paste_deploy]
flavor = keystone

2.9.5填充Image服务数据库

# su -s /bin/sh -c "glance-manage db_sync" glance

2.9.6启动服务并加入开机启动

# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service

2.9.7验证glance 镜像服务

# source admin-openrc.sh
# curl -O http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
# openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
# openstack image list

2.10安装 nova

2.10.1安装Nova数据库

# mysql -u root -p123123
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123123';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123123';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123123';

MariaDB [(none)]>exit

2.10.2创建Compute服务凭据

# source admin-openrc.sh
# openstack user create --domain default --password-prompt nova
User Password:123123
Repeat User Password:123123
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute

2.10.3创建Compute API服务端点

# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

2.10.4创建Placement服务凭据和服务端点

# openstack user create --domain default --password-prompt placement
User Password:123123
Repeat User Password:123123
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://controller:8778
# openstack endpoint create --region RegionOne placement internal http://controller:8778
# openstack endpoint create --region RegionOne placement admin http://controller:8778

2.10.5安装所需包

# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler \
  openstack-nova-placement-api -y

2.10.6配置

# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123123@controller
my_ip = 172.16.20.110 //controller节点管理网络IP
use_neutron = true
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:123123@controller/nova_api

[database]
connection = mysql+pymysql://nova:123123@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123123 //controller nova连接keystone密码为123123

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123123 //controller placement连接keystone密码为123123
# cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak
# vi /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

2.10.7重启http服务

# systemctl restart httpd

2.10.8填充nova-api数据库

# su -s /bin/sh -c "nova-manage api_db sync" nova

2.10.9注册cell0数据库

# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

2.10.10创建cell1单元格

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

2.10.11填充Nova数据库

# su -s /bin/sh -c "nova-manage db sync" nova

2.10.12验证nova cell0和cell1是否正确注册

# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

2.10.13启动服务

# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
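
可用以下命令确认各nova服务已注册且状态为up(验证示意):
# source ~/admin-openrc.sh
# openstack compute service list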

2.11安装Neutron

2.11.1配置IP转发:

2.11.1.1安装网桥模块

# modprobe bridge
# modprobe br_netfilter

2.11.1.2配置

# cp /etc/sysctl.conf /etc/sysctl.conf.bak
# vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

2.11.1.3启动模块

# sysctl -p

2.11.2创建neutron数据库

# mysql -u root -p123123
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123123';
MariaDB [(none)]> exit

2.11.3创建neutron用户

# source admin-openrc.sh
# openstack user create --domain default --password-prompt neutron

2.11.4将admin角色添加到neutron用户

# openstack role add --project service --user neutron admin

2.11.5创建neutron服务实体

# openstack service create --name neutron --description "OpenStack Networking" network

2.11.6创建网络服务API端点

# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696

2.11.7安装所需包

# yum install openstack-neutron openstack-neutron-ml2 ebtables -y

2.11.8配置

2.11.8.1配置neutron.conf

# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# vi /etc/neutron/neutron.conf
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123123

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123123@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:123123@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123123

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

2.11.8.2配置ML2

# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = physnet

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
#enable_ipset = true
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

2.11.8.3配置nova.conf

# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123123 //neutron 连接nova 密码为123123
service_metadata_proxy = true
metadata_proxy_shared_secret = 123123

2.11.8.4配置metadata_agent

# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123123

2.11.9 ML2创建链接指向ML2插件配置文件

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

2.11.10填充数据库

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

2.11.11启动服务

# systemctl restart openstack-nova-api.service
# systemctl enable neutron-server.service neutron-metadata-agent.service
# systemctl start neutron-server.service neutron-metadata-agent.service
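
可用以下命令确认neutron-server已正常加载网络扩展(验证示意):
# source ~/admin-openrc.sh
# openstack extension list --network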

2.12安装Horizon

2.12.1安装所需包

# yum install openstack-dashboard -y

2.12.2配置

# cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak
# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', 'two.example.com']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

TIME_ZONE = "Asia/Shanghai"    # 按所在时区填写,此处以Asia/Shanghai为例
# cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.bak

# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}

2.12.3启动服务

# systemctl restart httpd.service memcached.service
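
随后可在浏览器访问 http://controller/dashboard 登录(域Default,用户admin,密码123123)。也可先用curl做简单探测(验证示意,CentOS打包的Horizon默认路径一般为/dashboard):
# curl -sI http://controller/dashboard | head -n 1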

2.13安装Cinder

2.13.1创建数据库

# mysql -u root -p123123
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123123';
MariaDB [(none)]>exit

2.13.2创建cinder用户

# source admin-openrc.sh
# openstack user create --domain default --password-prompt cinder

2.13.3将admin角色添加到cinder用户

# openstack role add --project service --user cinder admin

2.13.4创建cinderv2和cinderv3服务实体

# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

2.13.5创建Block Storage服务API端点

# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s

# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s

# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s

# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

2.13.6安装所需包

# yum install openstack-cinder -y

2.13.7配置

# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123123@controller
auth_strategy = keystone
my_ip = 172.16.20.110 //controller节点管理网络IP

[database]
connection = mysql+pymysql://cinder:123123@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123123

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

2.13.8填充块存储数据库

# su -s /bin/sh -c "cinder-manage db sync" cinder

2.13.9启动服务

# systemctl restart openstack-nova-api.service
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
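
可用以下命令确认cinder-scheduler已注册(验证示意,cinder-volume需待Block-Storage节点配置完成后才会出现):
# source ~/admin-openrc.sh
# openstack volume service list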

三、Compute节点

3.1需安装

Chrony(NTP,时间同步) Nova (Compute service 计算服务 nova-compute)
Neutron (Networking service 网络服务 openvswitch,neutron-l2-agent)

3.2配置网卡信息

# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

IPADDR=172.16.20.120
NETMASK=255.255.255.0
# systemctl restart network

3.3安装 nova

3.3.1安装所需包

# yum install openstack-nova-compute -y

3.3.2配置

# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123123@controller
//连接rabbit密码为123123
my_ip = 172.16.20.120
use_neutron = true
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123123 //compute nova连接keystone密码为123123

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://172.16.30.10:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123123 //compute nova 连接placement密码为123123

3.3.3检查虚拟机是否支持虚拟化

# egrep -c '(vmx|svm)' /proc/cpuinfo

仅当结果为0(即CPU不支持硬件加速虚拟化)时,将libvirt的虚拟化类型改为qemu:
# vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu

3.3.4启动服务

# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

3.3.5添加compute节点信息入controller节点 cell 数据库 (controller节点执行)

# source admin-openrc.sh
# openstack compute service list --service nova-compute

3.3.5.1发现compute节点主机

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

每当向openstack集群新增compute节点后,都需要在controller节点手动执行一次上述discover_hosts命令。
在nova.conf中加入以下配置可定期自动发现新主机:
# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
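
配置完成后,可在controller节点确认compute节点已被发现并注册为hypervisor(验证示意):
# source ~/admin-openrc.sh
# openstack hypervisor list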

3.4安装 neutron

3.4.1配置IP转发

# modprobe bridge
# modprobe br_netfilter
# cp /etc/sysctl.conf /etc/sysctl.conf.bak
# vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# sysctl -p

3.4.2安装相应包

# yum install openstack-neutron openstack-neutron-ml2 neutron-openvswitch-agent -y

3.4.3配置

3.4.3.1配置neutron.conf

# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# vi /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:123123@controller

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123123

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

3.4.3.2配置ML2插件

# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = physnet

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true

3.4.3.3配置openvswitch_agent

# cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak
# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
local_ip = 172.16.20.120

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

3.4.3.4配置nova.conf

# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123123

3.4.3.5启动服务

# systemctl enable openvswitch.service neutron-openvswitch-agent.service
# systemctl start openvswitch.service neutron-openvswitch-agent.service
# systemctl restart openstack-nova-compute.service
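
可在controller节点确认compute节点的Open vSwitch agent已注册(验证示意):
# source ~/admin-openrc.sh
# openstack network agent list | grep compute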

四、Network节点

4.1需安装

Chrony(NTP,时间同步) Neutron (Networking service 网络服务 openvswitch,neutron-l2-agent,neutron-l3-agent,metadata-agent)

4.2安装 neutron

4.2.1安装所需包

# yum install openstack-neutron openstack-neutron-ml2 neutron-openvswitch-agent -y

4.2.2配置IP转发

# modprobe bridge
# modprobe br_netfilter
# cp /etc/sysctl.conf /etc/sysctl.conf.bak
# vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# sysctl -p

4.2.3创建虚拟网桥

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex ensXX    //ensXX为用作NAT/浮动IP的网卡,本文环境中为ens33

4.2.4配置网卡信息

4.2.4.1网卡一

# cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens33.bak
# vi /etc/sysconfig/network-scripts/ifcfg-ens33 (用做NAT,浮动IP的网卡)
TYPE=OVSPort
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

4.2.4.2网卡二

# cp /etc/sysconfig/network-scripts/ifcfg-ens34 /etc/sysconfig/network-scripts/ifcfg-ens34.bak
# vi /etc/sysconfig/network-scripts/ifcfg-ens34 (Host-Only网卡)
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=172.16.20.130
PREFIX=24

4.2.4.3网桥

创建网桥 br-ex 配置文件 ifcfg-br-ex:
# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
TYPE=OVSBridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
NAME=br-ex
DEVICE=br-ex
DEVICETYPE=ovs
ONBOOT=yes
IPADDR=172.16.10.10
NETMASK=255.255.255.0
GATEWAY=172.16.10.2
DNS1=172.16.10.2

4.2.5配置

4.2.5.1配置neutron.conf

# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# vi /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:123123@controller

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123123
[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

4.2.5.2配置ML2插件

# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = physnet

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
#enable_ipset = True

4.2.5.3配置L3代理

# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
debug = false

4.2.5.4配置DHCP

# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

4.2.5.5配置openvswitch

# cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak
# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
local_ip = 172.16.20.130
bridge_mappings = physnet:br-ex

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

4.2.5.6配置metadata

# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123123

4.2.6启动服务

# systemctl enable openvswitch.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
# systemctl start openvswitch.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

4.2.7验证网络服务是否正常启动(在controller节点执行)

# source admin-openrc.sh
# openstack network agent list
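
若各agent状态均为UP,可进一步创建一个外部(provider)网络做连通性测试。以下仅为示意:物理网络名physnet与本文ML2配置一致,地址池范围为假设值,请按实际NAT网段(172.16.10.0/24)调整:
# openstack network create --external --share \
  --provider-physical-network physnet \
  --provider-network-type flat provider
# openstack subnet create --network provider \
  --allocation-pool start=172.16.10.100,end=172.16.10.200 \
  --gateway 172.16.10.2 --dns-nameserver 172.16.10.2 \
  --subnet-range 172.16.10.0/24 provider-subnet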

五、Block-Storage节点

5.1需安装

Chrony(NTP,时间同步) Cinder(Block-Storage 块存储服务)

5.2配置网卡信息

# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

IPADDR=172.16.20.140
NETMASK=255.255.255.0
# systemctl restart network

5.3安装Cinder

5.3.1安装LVM服务

5.3.1.1安装所需包

# yum install lvm2 device-mapper-persistent-data -y

5.3.1.2启动服务

# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service

5.3.1.3创建LVM卷

# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
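
可用以下命令确认物理卷和卷组创建成功(验证示意):
# pvs
# vgs cinder-volumes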

# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
# vi /etc/lvm/lvm.conf
devices {
        filter = [ "a/sdb/", "r/.*/" ]
}

5.3.2安装cinder包

# yum install openstack-cinder targetcli python-keystone

5.3.3修改配置文件

# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123123@controller
auth_strategy = keystone
my_ip = 172.16.20.140
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:123123@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123123

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

5.3.4启动cinder服务并加入开机启动

# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
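
完成后可在controller节点确认cinder-volume@lvm服务为up,并尝试创建测试卷(验证示意,卷名test-vol为示例):
# source ~/admin-openrc.sh
# openstack volume service list
# openstack volume create --size 1 test-vol
# openstack volume list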
