Note: install the English-language Linux system, not the Chinese version!

When modifying configuration files, there is no need to delete the existing settings; simply append the new ones below them.


This article is a working log of an OpenStack deployment.

OpenStack is an open-source cloud computing management platform, a combination of a series of open-source software projects. It was jointly developed and launched by NASA and Rackspace, and is released under the Apache License (a free software license published by the Apache Software Foundation).

OpenStack provides scalable, elastic cloud computing services for private and public clouds. The project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized.

Configuring the Virtual Machines and Basic Services

Note: for the first few steps, both virtual machines get exactly the same configuration (I only demonstrate on one of them!)

Creating the virtual machines

In the VM's processor settings, check the virtualization option!!!

Create two virtual machines, controller and compute, with IP addresses 192.168.145.150 and 192.168.145.151, as detailed in the following table:

Hostname      IP
controller    192.168.145.150
compute       192.168.145.151

Hardware configuration (identical on both machines):

Hardware             Spec
Memory               4 GB
Processor (CPU)      2 cores
Hard disk (SCSI)     60 GB
Network adapter      NAT
Network adapter 2    Host-only

After the OS is installed, configure the network.

Note: English system, not Chinese; virtualization enabled.

Network configuration

Note: configure the other machine the same way (with its own IP)! To save time, it is not demonstrated here.
vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"

IPADDR="192.168.145.150"
NETMASK="255.255.255.0"
GATEWAY="192.168.145.2"
DNS1="114.114.114.114"

After configuring, restart the network service:

systemctl restart network
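
To confirm the new settings took effect, check the interface and the gateway (a quick sanity check using the values configured above):

ip addr show ens33
ping -c 4 192.168.145.2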

With the network configured, next turn off the security layers.

First, disable SELinux:

vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing  ## change this to disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 

setenforce 0

// Note:
// setenforce applies the change immediately (lasts until the next reboot):
//        setenforce 1 = enforcing
//        setenforce 0 = permissive (log warnings only, do not enforce)

Disable the firewalld firewall

Check the firewall status:

systemctl status firewalld.service

// it is running by default

Stop and disable it:

systemctl stop firewalld; systemctl disable firewalld

Then check the status again:

systemctl status firewalld.service

It is now disabled!


Changing the hostnames

hostnamectl set-hostname <hostname>
hostnamectl set-hostname controller    # on the first node
hostnamectl set-hostname compute       # on the second node

Log out and back in after the change!


Editing hosts

Add the entries on both virtual machines.

vi /etc/hosts

Append one entry per line, as IP, a space, then the hostname:

192.168.145.150 controller
192.168.145.151 compute
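
To verify that name resolution works, ping each node by name:

ping -c 2 controller
ping -c 2 compute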


Time synchronization

yum -y install chrony

After installation, edit the configuration file.

Configuration file on controller:

vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ntp3.aliyun.com iburst

# The server above is Alibaba Cloud's NTP server; keep only this one server line

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
allow all
# IPs allowed to query this NTP server
# Serve time even if not synchronized to a time source.
local stratum 10
# stratum level advertised for the unsynchronized local clock
# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

Configuration file on compute:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server controller iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow all

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
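
After editing the configuration on both nodes, enable and restart chronyd, then verify synchronization (standard chrony commands; on compute, controller should appear as the source):

systemctl enable chronyd; systemctl restart chronyd
chronyc sources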

OpenStack Initial Configuration

Install the OpenStack packages; both nodes need them.

Switching repositories

Switch to the official repository.

Install centos-release-openstack-train:

yum install centos-release-openstack-train -y

Install the appropriate OpenStack client and the openstack-selinux package, which automatically manages security policies for OpenStack services:

yum install python-openstackclient openstack-selinux -y

Note: from here on, install only on the controller node (hostname controller).

Installing the SQL database

yum install mariadb mariadb-server python2-PyMySQL -y

Create and edit the /etc/my.cnf.d/openstack.cnf file:

vi /etc/my.cnf.d/openstack.cnf

Add the following:

[mysqld]
bind-address = 192.168.145.150   # the controller's management IP

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the database service and configure it to start at boot:

systemctl enable mariadb.service; systemctl start mariadb.service

Run the mysql_secure_installation script to secure the database service. In particular, choose a suitable password for the database root account:

mysql_secure_installation
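
The script is interactive; a typical run for this lab (root password 123, which is referenced later) answers roughly as follows (prompts paraphrased from the MariaDB script):

Enter current password for root:              (press Enter, none is set yet)
Set root password? [Y/n]                      Y   (set it to 123)
Remove anonymous users? [Y/n]                 Y
Disallow root login remotely? [Y/n]           Y
Remove test database and access to it? [Y/n]  Y
Reload privilege tables now? [Y/n]            Y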

Installing the message queue

Install the package:

yum install rabbitmq-server -y

Start the message queue service and configure it to start at boot:

systemctl enable rabbitmq-server.service; systemctl start rabbitmq-server.service

Add the openstack user:

rabbitmqctl add_user <username> <password>
rabbitmqctl add_user openstack openstack123

The password is set to openstack123; remember it!

Grant the openstack user configure, write, and read permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
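
Optionally verify the user and its permissions (standard rabbitmqctl queries):

rabbitmqctl list_users
rabbitmqctl list_permissions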

Enable the web management UI:

rabbitmq-plugins list
// list available plugins
rabbitmq-plugins enable rabbitmq_management rabbitmq_management_agent

Accessing RabbitMQ

ss -tnl
// check the listening ports

Browse to IP:port, for example:

http://192.168.145.150:15672/

The default username and password are both guest.

Installing Memcached

The Identity service authentication mechanism uses Memcached to cache tokens. The memcached service typically runs on the controller node.

Install and configure the components:

yum install memcached python-memcached -y

Edit the /etc/sysconfig/memcached file:

vi /etc/sysconfig/memcached

Change the existing line OPTIONS="-l 127.0.0.1,::1" to OPTIONS="-l 127.0.0.1,::1,controller"

This configures the service to use the management IP address of the controller node, so that other nodes can access it over the management network.

Start the Memcached service and configure it to start at boot:

systemctl enable memcached.service; systemctl start memcached.service
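
To confirm memcached is now listening on the controller address (default port 11211):

ss -tnl | grep 11211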

Keystone

This section describes how to install and configure the OpenStack Identity service, code-named keystone, on the controller node. For scalability, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.

Before you install and configure the Identity service, you must create a database.

Prerequisites

Use the database access client to connect to the database server as the root user.

The password was set earlier; here it is 123.

mysql -u root -p

Create the keystone database:

CREATE DATABASE keystone;

Grant proper access to the keystone database:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone123';
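
The official installation guide also grants access for local connections; adding the equivalent 'localhost' grant does no harm:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone123';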

Exit the database access client.

Installing the packages

If you hit dependency errors during installation, see the separate article on that issue.

Run the following command to install the packages:

yum install openstack-keystone httpd mod_wsgi -y

Configure database access

Edit the /etc/keystone/keystone.conf file.

In the [database] section, configure database access.

A quick way to jump to the section (a vi search):

/\[database]

connection = mysql+pymysql://keystone:keystone123@controller/keystone

In the [token] section, configure the Fernet token provider:

/\[token]

provider = fernet

Synchronize the database

Populate the Identity service database:

su -s /bin/sh -c "keystone-manage db_sync" keystone

Let's verify that the database was populated.
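
A quick check, using the keystone credentials created above:

mysql -u keystone -pkeystone123 keystone -e "SHOW TABLES;"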

Initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service (important):

keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Configure the Apache HTTP server

Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:

vim /etc/httpd/conf/httpd.conf
ServerName controller

Create a symbolic link to the /usr/share/keystone/wsgi-keystone.conf file:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start the Apache HTTP service and configure it to start at boot:

systemctl enable httpd.service; systemctl start httpd.service

Configure the administrative account by setting environment variables:

vim admin.sh
#!/bin/bash
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

source admin.sh
openstack endpoint list

Creating domains, projects, users, and roles

The Identity service provides authentication services for each OpenStack service, using a combination of domains, projects, users, and roles.

Create a new domain:

openstack domain create --description "An Example Domain" example

Create the service project:

openstack project create --domain default --description "Service Project" service

Regular (non-admin) tasks should use an unprivileged project and user; create the myproject project and myuser user.

Create the myproject project:

openstack project create --domain default --description "Demo Project" myproject

Create the myuser user:

openstack user create --domain default --password-prompt myuser

Create the myrole role:

openstack role create myrole

Add the myrole role to the myproject project and myuser user:

openstack role add --project myproject --user myuser myrole

Verifying the Identity service

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

unset OS_AUTH_URL OS_PASSWORD

As the admin user, request an authentication token:

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

The password is admin.

As the myuser user, request an authentication token.

The password is myuser.

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue

Creating OpenStack client environment scripts

Create and edit the admin-op script:

vim admin-op.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create and edit the myuser script:

vim myuser.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

source admin-op.sh
openstack token issue

source myuser.sh
openstack token issue

Load the admin-op script to populate the environment variables with the location of the Identity service and the admin project and user credentials:

source admin-op.sh

Request an authentication token:

openstack token issue

Glance

Create the database

Use the database access client to connect to the database server as the root user:

mysql -u root -p

Create the glance database:

CREATE DATABASE glance;

Grant proper access to the glance database:

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance123';

Create the service credentials

Create the glance user:

openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and service project:

openstack role add --project service --user glance admin

Create the glance service entity:

openstack service create --name glance --description "OpenStack Image" image

Create the Image service API endpoints:

openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292

Install and configure the components

Install the packages:

yum install openstack-glance -y

Edit the /etc/glance/glance-api.conf file.

In the [database] section, configure database access:

connection = mysql+pymysql://glance:glance123@controller/glance

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:

[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]

flavor = keystone

In the [glance_store] section, configure the local file system store and the location of image files:

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Populate the Image service database:

su -s /bin/sh -c "glance-manage db_sync" glance

Start the service and configure it to start at boot:

systemctl enable openstack-glance-api.service; systemctl start openstack-glance-api.service

Verify operation

Download a test image (you can also download it in a browser and then upload it to the node):

http://download.cirros-cloud.net
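
For example, with wget (assuming the 0.4.0 image is still published under that path):

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img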

Upload the image to the Image service so that all projects can access it:

glance image-create --name "cirros4" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public

Confirm the upload and verify the image attributes:

glance image-list

Placement

Create the database

Use the database access client to connect to the database server as the root user:

mysql -u root -p

Create the placement database:

CREATE DATABASE placement;

Grant proper access to the database:

GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement123';

Configure the user and endpoints

Create a placement user:

openstack user create --domain default --password-prompt placement

Add the placement user to the service project with the admin role:

openstack role add --project service --user placement admin

Create the Placement API entry in the service catalog:

openstack service create --name placement \
  --description "Placement API" placement

Create the Placement API service endpoints:

openstack endpoint create --region RegionOne \
  placement public http://controller:8778

openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
openstack endpoint create --region RegionOne \
  placement admin http://controller:8778

Install and configure the components

Install the package:

yum install openstack-placement-api -y

Edit the /etc/placement/placement.conf file:

vim /etc/placement/placement.conf

In the [placement_database] section, configure database access:

[placement_database]
connection = mysql+pymysql://placement:placement123@controller/placement

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement

Populate the placement database:

su -s /bin/sh -c "placement-manage db sync" placement

Add httpd configuration: bug fix

Edit the /etc/httpd/conf.d/00-placement-api.conf file to grant access to the Placement API under Apache 2.4:

vim /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
   Require all granted
</IfVersion>
<IfVersion < 2.4>
   Order allow,deny
   Allow from all
</IfVersion>
</Directory>

Restart the httpd service:

systemctl restart httpd

Verify operation
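
A standard status check for the Placement service:

placement-status upgrade check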

Nova (OpenStack Compute)

Note: this service is installed and configured on both the controller node (controller) and the compute node (compute).

Controller node (controller)

Create the databases

Use the database access client to connect to the database server as the root user:

mysql -u root -p

Create the nova_api, nova, and nova_cell0 databases:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

Grant proper access to the databases:

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova123';

Create the Compute service credentials

Create the nova user:

openstack user create --domain default --password-prompt nova

Add the admin role to the nova user:

openstack role add --project service --user nova admin

Create the nova service entity:

openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute API service endpoints:

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Install and configure the components

Install the packages:

yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

Edit the /etc/nova/nova.conf file and complete the following steps:

vim /etc/nova/nova.conf

In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
enabled_apis = osapi_compute,metadata

In the [api_database] and [database] sections, configure database access:

[api_database]
connection = mysql+pymysql://nova:nova123@controller/nova_api

[database]
connection = mysql+pymysql://nova:nova123@controller/nova

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
transport_url = rabbit://openstack:openstack123@controller:5672/

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:

[DEFAULT]
# ...
my_ip = 192.168.145.150

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip

In the [glance] section, configure the location of the Image service API:

[glance]
# ...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

In the [placement] section, configure access to the Placement service:

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

Populate the nova_api database:

su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify that nova cell0 and cell1 are registered correctly:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Finalize installation

Start the Compute services and configure them to start at boot:

systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service; systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node (compute)

Install and configure the components

Install the package:

yum install openstack-nova-compute -y

Edit the /etc/nova/nova.conf file and complete the following steps:

vim /etc/nova/nova.conf

In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack123@controller

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
# ...
my_ip = 192.168.145.151

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [vnc] section, enable and configure remote console access:

[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

In the [glance] section, configure the location of the Image service API:

[glance]
# ...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement API:

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

Finalize installation

Determine whether your compute node supports hardware acceleration for virtual machines:

egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns 0, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:

[libvirt]
# ...
virt_type = qemu

Start the Compute service and its dependencies, and configure them to start automatically at boot:

systemctl enable libvirtd.service openstack-nova-compute.service; systemctl start libvirtd.service openstack-nova-compute.service

Add the compute node to the cell database

Run the following commands on the controller node.

Source the admin credentials to enable admin-only CLI commands, then confirm the compute host is in the database:

openstack compute service list --service nova-compute

Discover the compute hosts:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

If you have many compute nodes, running the discovery command for each one by hand is tedious; the configuration below runs host discovery (the command above) automatically every 300 seconds.

Perform this on the controller node:

vim /etc/nova/nova.conf

When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set an appropriate interval in /etc/nova/nova.conf:

[scheduler]
discover_hosts_in_cells_interval = 300

Create a script to restart the nova services:

vim restart-nova.sh
#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
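
Make it executable and run it whenever nova.conf changes on the controller:

chmod +x restart-nova.sh
./restart-nova.sh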

Neutron (the hard part)

Note: this service is installed and configured on both the controller node (controller) and the compute node (compute).

Install and configure the controller node

Create the database

Use the database access client to connect to the database server as the root user:

mysql -u root -p

Create the neutron database:

CREATE DATABASE neutron;

Grant proper access to the neutron database:

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';

Create the service credentials

Create the neutron user:

openstack user create --domain default --password-prompt neutron

Add the admin role to the neutron user:

openstack role add --project service --user neutron admin

Create the neutron service entity:

openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints:

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

Provider network

Install the components:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Configure the server component

Edit the /etc/neutron/neutron.conf file and complete the following steps:

vim /etc/neutron/neutron.conf

In the [database] section, configure database access:

[database]
connection = mysql+pymysql://neutron:neutron123@controller/neutron

In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins:

[DEFAULT]
# ...
core_plugin = ml2
service_plugins =

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack123@controller

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:

[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

You may find there is no [nova] section in the file; press Shift+G to jump to the end and add it there.

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following steps:

vim /etc/neutron/plugins/ml2/ml2_conf.ini

Delete the entire contents of the file with dG (in vi); like the file above, the shipped version is missing sections.

Paste the following contents into the file:

---- start of ml2_conf.ini contents ----
[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file  paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[ml2]

#
# From neutron.ml2
#

# List of network type driver entrypoints to be loaded from the
# neutron.ml2.type_drivers namespace. (list value)
#type_drivers = local,flat,vlan,gre,vxlan,geneve

# Ordered list of network_types to allocate as tenant networks. The default
# value 'local' is useful for single-box testing but provides no connectivity
# between hosts. (list value)
#tenant_network_types = local

# An ordered list of networking mechanism driver entrypoints to be loaded from
# the neutron.ml2.mechanism_drivers namespace. (list value)
#mechanism_drivers =

# An ordered list of extension driver entrypoints to be loaded from the
# neutron.ml2.extension_drivers namespace. For example: extension_drivers =
# port_security,qos (list value)
#extension_drivers =

# Maximum size of an IP packet (MTU) that can traverse the underlying physical
# network infrastructure without fragmentation when using an overlay/tunnel
# protocol. This option allows specifying a physical network MTU value that
# differs from the default global_physnet_mtu value. (integer value)
#path_mtu = 0

# A list of mappings of physical networks to MTU values. The format of the
# mapping is <physnet>:<mtu val>. This mapping allows specifying a physical
# network MTU value that differs from the default global_physnet_mtu value.
# (list value)
#physical_network_mtus =

# Default network type for external networks when no provider attributes are
# specified. By default it is None, which means that if provider attributes are
# not specified while creating external networks then they will have the same
# type as tenant networks. Allowed values for external_network_type config
# option depend on the network type values configured in type_drivers config
# option. (string value)
#external_network_type = <None>

# IP version of all overlay (tunnel) network endpoints. Use a value of 4 for
# IPv4 or 6 for IPv6. (integer value)
#overlay_ip_version = 4


[ml2_type_flat]

#
# From neutron.ml2
#

# List of physical_network names with which flat networks can be created. Use
# default '*' to allow flat networks with arbitrary physical_network names. Use
# an empty list to disable flat networks. (list value)
#flat_networks = *


[ml2_type_geneve]

#
# From neutron.ml2
#

# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of
# Geneve VNI IDs that are available for tenant network allocation (list value)
#vni_ranges =

# Geneve encapsulation header size is dynamic, this value is used to calculate
# the maximum MTU for the driver. This is the sum of the sizes of the outer ETH
# + IP + UDP + GENEVE header sizes. The default size for this field is 50,
# which is the size of the Geneve header without any additional option headers.
# (integer value)
#max_header_size = 30


[ml2_type_gre]

#
# From neutron.ml2
#

# Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE
# tunnel IDs that are available for tenant network allocation (list value)
#tunnel_id_ranges =


[ml2_type_vlan]

#
# From neutron.ml2
#

# List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network>
# specifying physical_network names usable for VLAN provider and tenant
# networks, as well as ranges of VLAN tags on each available for allocation to
# tenant networks. (list value)
#network_vlan_ranges =


[ml2_type_vxlan]

#
# From neutron.ml2
#

# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of
# VXLAN VNI IDs that are available for tenant network allocation (list value)
#vni_ranges =

# Multicast group for VXLAN. When configured, will enable sending all broadcast
# traffic to this multicast group. When left unconfigured, will disable
# multicast VXLAN mode. (string value)
#vxlan_group = <None>


[securitygroup]

#
# From neutron.ml2
#

# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>

# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true

# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true
---- end of ml2_conf.ini contents ----

In the [ml2] section, enable flat and VLAN networks:

[ml2]
type_drivers = flat,vlan

In the [ml2] section, disable self-service networks:

[ml2]
tenant_network_types =

In the [ml2] section, enable the Linux bridge mechanism:

[ml2]
mechanism_drivers = linuxbridge

Note: after you configure the ML2 plug-in, removing values from the type_drivers option can lead to database inconsistency.

In the [ml2] section, enable the port security extension driver:

[ml2]
extension_drivers = port_security

In the [ml2_type_flat] section, configure the provider virtual network as a flat network:

[ml2_type_flat]
flat_networks = extnet

In the [securitygroup] section, enable ipset to improve the efficiency of security group rules:

[securitygroup]
enable_ipset = true

Configure the Linux bridge agent

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following steps:

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

Delete the entire contents of the file with dG (in vi); like the file above, the shipped version is missing sections.

Paste the following contents into the file:

---- start of linuxbridge_agent.ini contents ----
[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file  paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[agent]

#
# From neutron.ml2.linuxbridge.agent
#

# The number of seconds the agent will wait between polling for local device
# changes. (integer value)
#polling_interval = 2

# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
# value is set to 0, rpc timeout won't be changed (integer value)
#quitting_rpc_timeout = 10

# DEPRECATED: Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate. Note: This
# prevents the VMs attached to this agent from spoofing, it doesn't protect
# them from other devices which have the capability to spoof (e.g. bare metal
# or VMs attached to agents without this flag set to True). Spoofing rules will
# not be added to any ports that have port security disabled. For LinuxBridge,
# this requires ebtables. For OVS, it requires a version that supports matching
# ARP headers. This option will be removed in Ocata so the only way to disable
# protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true

# Extensions list to use (list value)
#extensions =


[linux_bridge]

#
# From neutron.ml2.linuxbridge.agent
#

# Comma-separated list of <physical_network>:<physical_interface> tuples
# mapping physical network names to the agent's node-specific physical network
# interfaces to be used for flat and VLAN networks. All physical networks
# listed in network_vlan_ranges on the server should have mappings to
# appropriate interfaces on each agent. (list value)
#physical_interface_mappings =

# List of <physical_network>:<physical_bridge> (list value)
#bridge_mappings =


[securitygroup]

#
# From neutron.ml2.linuxbridge.agent
#

# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>

# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true

# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true


[vxlan]

#
# From neutron.ml2.linuxbridge.agent
#

# Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin
# using linuxbridge mechanism driver (boolean value)
#enable_vxlan = true

# TTL for vxlan interface protocol packets. (integer value)
#ttl = <None>

# TOS for vxlan interface protocol packets. (integer value)
#tos = <None>

# Multicast group(s) for vxlan interface. A range of group addresses may be
# specified by using CIDR notation. Specifying a range allows different VNIs to
# use different group addresses, reducing or eliminating spurious broadcast
# traffic to the tunnel endpoints. To reserve a unique group for each possible
# (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on
# all the agents. (string value)
#vxlan_group = 224.0.0.1

# IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or
# IPv6 address that resides on one of the host network interfaces. The IP
# version of this value must match the value of the 'overlay_ip_version' option
# in the ML2 plug-in configuration file on the neutron server node(s). (IP
# address value)
#local_ip = <None>

# Extension to use alongside ml2 plugin's l2population mechanism driver. It
# enables the plugin to populate VXLAN forwarding table. (boolean value)
#l2_population = false

# Enable local ARP responder which provides local responses instead of
# performing ARP broadcast into the overlay. Enabling local ARP responder is
# not fully compatible with the allowed-address-pairs extension. (boolean
# value)
#arp_responder = false
---- end of linuxbridge_agent.ini contents ----

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]
physical_interface_mappings = extnet:ens33

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = false

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Ensure that your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

vim /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

modprobe br_netfilter

sysctl -p
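
To confirm the values took effect:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables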

Configure the DHCP agent

Edit the /etc/neutron/dhcp_agent.ini file and complete the following steps:

vim /etc/neutron/dhcp_agent.ini

In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent

Edit the /etc/neutron/metadata_agent.ini file and complete the following steps:

vim /etc/neutron/metadata_agent.ini

In the [DEFAULT] section, configure the metadata host and the shared secret:

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = Ahui123

Edit the /etc/nova/nova.conf file and perform the following steps:

vim /etc/nova/nova.conf

In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = Ahui123

Finalize installation

Create the symbolic link

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

systemctl restart openstack-nova-api.service

Start the Networking services and configure them to start at boot:

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service; systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Install and configure the compute node

Install the components:

yum install openstack-neutron-linuxbridge ebtables ipset -y

Configure the common component

Edit the /etc/neutron/neutron.conf file and complete the following steps:

vim /etc/neutron/neutron.conf

In the [database] section, comment out any connection options, because compute nodes do not directly access the database.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
transport_url = rabbit://openstack:openstack123@controller

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Provider network

Configure the Linux bridge agent

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following steps:

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

The file may not contain the three sections below; add them at the end (press Shift+G to jump there).

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]
physical_interface_mappings = extnet:ens33

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = false

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Ensure that your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
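
As on the controller node, load the bridge filter module and apply the settings (same commands as in the controller section):

modprobe br_netfilter
sysctl -p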

Configure the Compute service to use the Networking service

Edit the /etc/nova/nova.conf file and complete the following steps:

vim /etc/nova/nova.conf

In the [neutron] section, configure access parameters:

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Finalize installation

Restart the Compute service:

systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and configure it to start at boot:

systemctl enable neutron-linuxbridge-agent.service; systemctl start neutron-linuxbridge-agent.service

Verify on the controller node

List the agents to verify that the neutron agents launched successfully:

openstack network agent list

Creating an Instance

Provider network

Create the network:

openstack network create  --share --external --provider-physical-network extnet --provider-network-type flat flat-extnet

Create the subnet:

openstack subnet create --network flat-extnet --allocation-pool start=192.168.145.160,end=192.168.145.210 --dns-nameserver 114.114.114.114 --gateway 192.168.145.2 --subnet-range 192.168.145.0/24 flat-subnet

Create the m1.nano flavor

The smallest default flavor consumes 512 MB of memory per instance. For environments whose compute nodes have less than 4 GB of memory, we recommend creating the m1.nano flavor, which requires only 64 MB per instance. Use this flavor only with the CirrOS image, for testing purposes.

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

Generate a key pair

Generate a key pair and add the public key:

ssh-keygen -q -N ""

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

Verify the key pair:

openstack keypair list

Add security group rules

Add rules to the default security group.

Permit ICMP (ping):

openstack security group rule create --proto icmp default

Permit secure shell (SSH) access:

openstack security group rule create --proto tcp --dst-port 22 default
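
You can list the rules to confirm both were added:

openstack security group rule list default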

Launch an instance on the provider network

List the available flavors:

openstack flavor list

List the images:

openstack image list

List the networks:

openstack network list

List the available security groups:

openstack security group list

Launch the instance (replace the net-id value with the network UUID shown by openstack network list):

openstack server create --flavor m1.nano --image cirros4 --nic net-id=f8dbc64a-1468-4aa1-a3e1-78229f1d1fc1 --security-group default --key-name mykey vm1

Check the status of the instance:

openstack server list

Access the instance using a URL

Obtain a Virtual Network Computing (VNC) session URL for the instance and access it from a web browser:

openstack console url show vm1

On the compute node: bug fix

Check which virtualization types the server supports:

virsh capabilities

Then set the machine type and CPU mode in the [libvirt] section of nova.conf:

vim /etc/nova/nova.conf
[libvirt]
hw_machine_type = x86_64=pc-i440fx-rhel7.2.0
cpu_mode = host-passthrough

Restart the nova services:

systemctl restart openstack-nova-*

Verify access

Verify access to the provider physical network gateway:

ping -c 4 192.168.145.2

Verify access to the Internet:

ping -c 4 thtown.cn

Dashboard [END]

Note the configuration file format: do not use Tab characters (use spaces).

Install and configure the components

Install the package:

yum install openstack-dashboard -y

Edit the /etc/openstack-dashboard/local_settings file and complete the following steps:

vim /etc/openstack-dashboard/local_settings

Use / in vim to search; if a setting already exists, replace it, otherwise add it.

Configure the dashboard to use OpenStack services on the controller node:

OPENSTACK_HOST = "controller"

Allow your hosts to access the dashboard:

ALLOWED_HOSTS = ['*']

Configure the memcached session storage service:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

Enable the Identity API version 3:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Enable support for domains:

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Configure the API versions:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

Configure Default as the default domain for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

Configure user as the default role for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

If you chose networking option 1 (provider networks), disable support for layer-3 networking services:

OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

Configure the time zone:

TIME_ZONE = "Asia/Shanghai"

Configure openstack-dashboard.conf:

vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}

Finalize the installation:

systemctl restart httpd.service memcached.service

If the dashboard now returns an error page, don't panic!

Fix it by setting WEBROOT and restarting the services:

vim /etc/openstack-dashboard/local_settings
WEBROOT = '/dashboard'
systemctl restart httpd.service memcached.service

HOSTS configuration

Because we access the dashboard via the hostname controller, edit the hosts file in the C:\Windows\System32\drivers\etc folder on your Windows workstation and add:

192.168.145.150 controller

Then browse to http://controller/dashboard and log in (domain Default, user admin, password admin).

Creating an instance from the Dashboard

Click Instances, then Launch Instance.

Fill in the Details tab and pick a flavor on the Flavor tab.

Click the arrow to allocate the image as the boot Source.

Once the instance shows an IP address, ping it to verify.


About bugs

If the (Linux) operating system locale is Chinese, encoding errors can occur.

Switch it to English:

localectl  set-locale LANG=en_US.UTF-8

Rename the Chinese user directories to English ones:

export LANG=en_US.UTF-8
xdg-user-dirs-gtk-update

A dialog pops up asking whether to switch the Chinese directories to English names.

Choose to replace them.

Change the system encoding

Use the following command to check the system locale:

locale

Setting the system encoding to en_US.UTF-8 resolves the problem.

Edit the /etc/locale.conf file.

If all went as expected, the localectl command above already changed it!

vim /etc/locale.conf

After editing, run source /etc/locale.conf and the garbled-text problem is resolved:

source /etc/locale.conf

HTTPD errors

Check whether you applied the configuration from the section "Add httpd configuration: bug fix"!

Cannot get into the system when accessing an instance via the console URL

Check whether you applied the configuration from the section "On the compute node: bug fix"!

If you found this article useful, feel free to leave a tip.