Keystone authenticates and authorizes all OpenStack services. It is also the endpoint catalog for every service.
Glance stores and retrieves virtual machine disk images, potentially from multiple locations.
Nova manages the entire lifecycle of virtual machines: creation, running, suspension, scheduling, shutdown, destruction, and so on. It is the component that does the real work: it accepts commands from the Dashboard and carries out the corresponding actions. Nova itself is not hypervisor software, so it must be paired with a hypervisor (such as KVM, Xen, or Hyper-V).
Neutron connects the other OpenStack services to networks and to each other.
Swift is a highly fault-tolerant object storage service that stores and retrieves unstructured data objects through a RESTful API.
Cinder provides persistent block storage accessed through a self-service API.
Heat (orchestration) can, for example, launch 10 cloud instances, each running a different script, to bring up services automatically.
Node | Hostname | Memory | IP | Role | CPU | Disk |
---|---|---|---|---|---|---|
Controller node | controller | more than 3 GB | 172.16.1.160 | management | virtualization enabled | 50 GB |
Compute node | compute1 | more than 1 GB | 172.16.1.161 | runs virtual machines | virtualization enabled | 50 GB |
[root@controller ~]# cd /etc/yum.repos.d
[root@controller yum.repos.d]# mkdir backup && mv C* backup
[root@controller yum.repos.d]# wget https://mirrors.aliyun.com/repo/Centos-7.repo
[root@controller yum.repos.d]# yum repolist all
[root@controller ~]# systemctl stop firewalld.service; systemctl disable firewalld.service
[root@controller ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
[root@controller ~]# setenforce 0
[root@controller ~]# reboot
[root@controller ~]# getenforce
Disabled
[root@controller ~]# yum install chrony -y
# Controller node
[root@controller ~]# vim /etc/chrony.conf
……
server ntp6.aliyun.com iburst
……
allow 172.16.1.0/24 # allow this subnet to sync; "allow all" permits everyone
local stratum 10
[root@controller ~]# systemctl restart chronyd && systemctl enable chronyd
# Compute node
[root@computer1 ~]# yum install chrony ntpdate -y # chrony provides chronyd, which is configured below
[root@computer1 ~]# vim /etc/chrony.conf
……
server 172.16.1.160 iburst
[root@computer1 ~]# systemctl restart chronyd && systemctl enable chronyd
[root@computer1 ~]# ntpdate 172.16.1.160
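To confirm time synchronization is working, query chronyd on the compute node (a quick sanity check; exact output varies):
[root@computer1 ~]# chronyc sources
# the 172.16.1.160 entry should eventually be marked '^*' (currently selected source)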
[root@controller ~]# yum install centos-release-openstack-train -y
[root@controller ~]# yum install python-openstackclient -y
[root@controller ~]# yum install openstack-selinux -y
# Install on the controller node
[root@controller ~]# yum -y install mariadb mariadb-server python2-PyMySQL # python2-PyMySQL is the Python MySQL client module
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 172.16.1.160
default-storage-engine = innodb # default storage engine
innodb_file_per_table # one tablespace file per table
max_connections = 4096 # maximum number of connections
collation-server = utf8_general_ci
character-set-server = utf8 # default character set: UTF-8
[root@controller ~]# systemctl enable mariadb.service && systemctl start mariadb.service
[root@controller ~]# mysql_secure_installation
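mysql_secure_installation is interactive; a typical hardening pass (these answers are a suggestion, adjust to your environment) looks like:
# Set root password? Y  (this guide later uses it as the -p password)
# Remove anonymous users? Y
# Disallow root login remotely? Y
# Remove test database and access to it? Y
# Reload privilege tables now? Y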
# Install on the controller node
[root@controller ~]# yum install rabbitmq-server -y
[root@controller ~]# systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS ## add the openstack user (all OpenStack services will use this account for the message queue)
You may replace RABBIT_PASS with a password of your choice; keeping it as-is is recommended, otherwise every later reference must be changed as well.
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*" ## give the openstack user configure, write, and read permissions
# Enable the RabbitMQ management plugin for easier monitoring later (optional)
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management ## after this, the management plugin listens on port 15672
[root@controller ~]# netstat -altnp | grep 5672
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 2112/beam.smp
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 2112/beam.smp
tcp 0 0 172.16.1.160:37231 172.16.1.160:25672 TIME_WAIT -
tcp6 0 0 :::5672 :::* LISTEN 2112/beam.smp
# Access
IP:15672
# Default credentials
user: guest
password: guest
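You can also verify the user and its permissions from the CLI:
[root@controller ~]# rabbitmqctl list_users        # should list openstack and guest
[root@controller ~]# rabbitmqctl list_permissions  # openstack should show ".*" ".*" ".*" on the / vhost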
[root@controller ~]# yum install memcached python-memcached -y
[root@controller ~]# sed -i 's/127.0.0.1/172.16.1.160/g' /etc/sysconfig/memcached
[root@controller ~]# systemctl enable memcached.service && systemctl start memcached.service
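Confirm memcached is listening on the controller's management address:
[root@controller ~]# netstat -tlnp | grep 11211
# expect 172.16.1.160:11211 owned by the memcached process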
Before you configure the OpenStack Identity service, you must create a database and an administration token.
Create the database:
[root@controller ~]# mysql -u root -p
Create the keystone database:
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
Grant proper access to the keystone database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'172.16.1.160' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Replace KEYSTONE_DBPASS with a suitable password.
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
[root@controller ~]# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf_bak
[root@controller ~]# egrep -v '^$|#' /etc/keystone/keystone.conf_bak > /etc/keystone/keystone.conf
Edit /etc/keystone/keystone.conf and complete the following actions:
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
Replace KEYSTONE_DBPASS with the password you chose for the database.
In the [token] section, configure the Fernet token provider:
[token]
...
provider = fernet
Note: Keystone token providers: UUID, PKI, Fernet;
# each is just a different way of generating a random token string
Check:
[root@controller ~]# md5sum /etc/keystone/keystone.conf
f6d8563afb1def91c1b6a884cef72f11 /etc/keystone/keystone.conf
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
su: switch user
-s: the shell to use
-c: the command to run
keystone: the user to switch to
# Meaning: switch to the keystone user and run the <keystone-manage db_sync> command under /bin/sh
mysql -u root -ppassword keystone -e "show tables;"
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
# Replace ADMIN_PASS with a suitable password for the admin user
Edit /etc/httpd/conf/httpd.conf and set the ServerName option to the controller node (around line 95):
[root@controller ~]# echo 'ServerName controller' >> /etc/httpd/conf/httpd.conf # avoids a slow httpd startup
Create /etc/httpd/conf.d/wsgi-keystone.conf as a link to the /usr/share/keystone/wsgi-keystone.conf file:
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller ~]# systemctl enable httpd.service;systemctl restart httpd.service
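At this point the Identity API should answer on port 5000; a quick check (the response is a JSON document describing the v3 API):
[root@controller ~]# curl http://controller:5000/v3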
[root@controller ~]# export OS_TOKEN=ADMIN_TOKEN # set the authentication token
[root@controller ~]# export OS_URL=http://controller:5000/v3 # set the endpoint URL
[root@controller ~]# export OS_IDENTITY_API_VERSION=3 # set the Identity API version
[root@controller keystone]# env | grep OS
HOSTNAME=controller
OS_IDENTITY_API_VERSION=3
OS_TOKEN=ADMIN_TOKEN
OS_URL=http://controller:5000/v3
## Note: the bootstrap step already created the default domain, the admin project, and the admin user and role; strictly speaking only the service project still needs to be created
1. Create the default domain:
[root@controller keystone]# openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True |
| id | 5d191921be13447ba77bd25aeaad3c01 |
| name | default |
| tags | [] |
+-------------+----------------------------------+
2. Create an administrative project, user, and role for administrative operations in your environment:
(1) Create the admin project:
[root@controller keystone]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | 5d191921be13447ba77bd25aeaad3c01 |
| enabled | True |
| id | 664c99b0582f452a9cd04b6847912e41 |
| is_domain | False |
| name | admin |
| parent_id | 5d191921be13447ba77bd25aeaad3c01 |
| tags | [] |
+-------------+----------------------------------+
(2) Create the admin user (the official guide's --password-prompt is replaced here with an explicit ADMIN_PASS):
[root@controller keystone]# openstack user create --domain default --password ADMIN_PASS admin
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 5d191921be13447ba77bd25aeaad3c01 |
| enabled | True |
| id | b8ee9f1c2b8640718f9628db33aad5f4 |
| name | admin |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
(3) Create the admin role:
[root@controller keystone]# openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 70c1f94f2edf4f6e9bdba9b7c3191a15 |
| name | admin |
+-----------+----------------------------------+
(4) Add the admin role to the admin project and user:
[root@controller keystone]# openstack role add --project admin --user admin admin
(5) This guide uses a service project that contains a unique user for each service you add to your environment. Create the service project:
[root@controller keystone]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | 5d191921be13447ba77bd25aeaad3c01 |
| enabled | True |
| id | b2e04f3a01eb4994a2990d9e75f8de11 |
| is_domain | False |
| name | service |
| parent_id | 5d191921be13447ba77bd25aeaad3c01 |
| tags | [] |
+-------------+----------------------------------+
[root@controller ~]# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# Load the environment variables
[root@controller ~]# source admin-openrc
# Load them automatically at every login
[root@controller ~]# echo 'source admin-openrc' >> /root/.bashrc
# Log out (and back in) to pick it up
[root@controller ~]# logout
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2022-09-26T08:42:45+0000 |
| id | gAAAAABjMVf1pipTbv_5WNZHh3yhsAjfhRlFV8SQING_Ra_NT382uAUTOnYo1m0-VJMms8tP_ieSCCpavejPMqHphmj7Mvxw0jYjWwXHTY8lV69UeJt5SJPqCwtJ0wZJqlQkVzkicZI_QqXO3UyvBTTAvv19X5Q6GzXnhJMkk0rJ09CtrM1fPJI |
| project_id | 664c99b0582f452a9cd04b6847912e41 |
| user_id | b8ee9f1c2b8640718f9628db33aad5f4 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
The OpenStack Image service (Glance) is central to IaaS, as shown in the conceptual architecture (https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/common/get_started_image_service.html#id1). It accepts API requests for disk or server images, and metadata definitions from end users or OpenStack Compute components. It also supports storing disk or server images on various repository types, including OpenStack Object Storage.
A number of periodic processes run on the OpenStack Image service to support caching. Replication services ensure consistency and availability throughout the cluster. Other periodic processes include auditors, updaters, and reapers.
The OpenStack Image service includes the following components:
glance-api
Accepts Image API calls for image discovery, retrieval, and storage.
glance-registry
Stores, processes, and retrieves metadata about images. Metadata includes items such as size and type.
Before you install and configure the Image service, you must create a database, service credentials, and API endpoints.
(1) Create the database
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'172.16.1.160' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Replace GLANCE_DBPASS with a suitable password.
(2) Create the user and associate the role
(I) Create the glance user:
[root@controller ~]# openstack user create --domain default --password GLANCE_PASS glance
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 5d191921be13447ba77bd25aeaad3c01 |
| enabled | True |
| id | 821acc687c24458c9c643d5150fd266d |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
(II) Add the admin role to the glance user and service project:
[root@controller ~]# openstack role add --project service --user glance admin
(3) Create the service entity and register the API endpoints
(I) Create the glance service entity:
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 3bf4df103e5e42c5b507d67ed97921e8 |
| name | glance |
| type | image |
+-------------+----------------------------------+
(II) Create the Image service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 8e2877750e0b4398aa54628f5039ad65 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3bf4df103e5e42c5b507d67ed97921e8 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3a1041640a1b4206aa094376c84d4148 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3bf4df103e5e42c5b507d67ed97921e8 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a6b14642d90e467da07da6b68e8ebeae |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3bf4df103e5e42c5b507d67ed97921e8 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
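As a sanity check, the three image endpoints should now appear in the catalog (the --service filter is supported by the openstack client):
[root@controller ~]# openstack endpoint list --service image
# expect public, internal, and admin rows pointing at http://controller:9292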
(1) Install the package
[root@controller ~]# yum install openstack-glance -y
(2) Edit /etc/glance/glance-api.conf and complete the following actions:
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf_bak
[root@controller ~]# egrep -v "^$|#" /etc/glance/glance-api.conf_bak > /etc/glance/glance-api.conf
[root@controller ~]# vim /etc/glance/glance-api.conf
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Replace GLANCE_DBPASS with the password you chose for the Image service database.
In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Replace GLANCE_PASS with the password you chose for the glance user in the Identity service.
In the [glance_store] section, configure the local file system store and the location of image files:
[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
(3) Edit /etc/glance/glance-registry.conf and complete the following actions:
[root@controller ~]# cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf_bak
[root@controller ~]# egrep -v '^$|#' /etc/glance/glance-registry.conf_bak > /etc/glance/glance-registry.conf
[root@controller ~]# vim /etc/glance/glance-registry.conf
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Replace GLANCE_PASS with the password you chose for the glance user in the Identity service.
(4) Populate the Image service database:
su -s /bin/sh -c "glance-manage db_sync" glance
Verify:
mysql -u root -ppassword glance -e "show tables;"
[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service
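A quick port check (assuming the default ports: glance-api on 9292, glance-registry on 9191):
[root@controller ~]# netstat -tlnp | grep -E '9292|9191'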
(1) Download a source image:
[root@controller ~]# wget http://cdit-support.thundersoft.com/download/System_ISO/ubuntu18.04/ubuntu-18.04.5-live-server-amd64.iso
(2) Upload the image to the Image service using the QCOW2 disk format and bare container format, and make it public so that all projects can access it:
[root@controller ~]# openstack image create "ubuntu18.04-server" --file ubuntu-18.04.5-live-server-amd64.iso --disk-format qcow2 --container-format bare --public
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | fcd77cd8aa585da4061655045f3f0511 |
| container_format | bare |
| created_at | 2022-09-27T07:11:16Z |
| disk_format | qcow2 |
| file | /v2/images/348a05ba-8b07-41bd-8b90-d1af41e5783b/file |
| id | 348a05ba-8b07-41bd-8b90-d1af41e5783b |
| min_disk | 0 |
| min_ram | 0 |
| name | ubuntu18.04-server |
| owner | 664c99b0582f452a9cd04b6847912e41 |
| properties | os_hash_algo='sha512', os_hash_value='5320be1a41792ec35ac05cdd7f5203c4fa6406dcfd7ca4a79042aa73c5803596e66962a01aabb35b8e64a2e37f19f7510bffabdd4955cff040e8522ff5e1ec1e', os_hidden='False' |
| protected | False |
| schema | /v2/schemas/image |
| size | 990904320 |
| status | active |
| tags | |
| updated_at | 2022-09-27T07:11:28Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
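Verify the upload by listing images; the new image should show a status of active:
[root@controller ~]# openstack image list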
In OpenStack, Placement tracks and monitors the usage of resources of all kinds: compute, storage, network, and so on. It was introduced into the Nova tree in the 14.0.0 Newton release and extracted into its own placement repository in the 19.0.0 Stein release, making it a standalone component. The Placement service provides a REST API stack and data model used to track the inventory and usage of the different resource types offered by resource providers. A resource provider can be a compute node, a shared storage pool, an IP pool, and so on. For example, creating an instance consumes CPU and memory on a compute node, storage on a storage node, and an IP address on a network node; each consumed resource type is tracked as a resource class. Placement ships with a set of standard resource classes (such as DISK_GB, MEMORY_MB, and VCPU) and also lets you define custom resource classes as needed.
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'172.16.1.160' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.000 sec)
Replace PLACEMENT_DBPASS with a suitable password.
# Create the placement user
[root@controller ~]# openstack user create --domain default --password PLACEMENT_PASS placement
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 3aa59a7790734af593f1e0f0bb544860 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
# Add the placement user to the service project with the admin role
[root@controller ~]# openstack role add --project service --user placement admin
# Create the placement service entity
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | dbb9e71a38584bedbda1ff318c38bdb2 |
| name | placement |
| type | placement |
+-------------+----------------------------------+
# Create the Placement API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b73f0c0eba1741fa8c20c3656278f3a9 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dbb9e71a38584bedbda1ff318c38bdb2 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 495bbf7aa8364210be166a39375d8121 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dbb9e71a38584bedbda1ff318c38bdb2 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 68a8c3a4e3b94a9eb30921b01170982a |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dbb9e71a38584bedbda1ff318c38bdb2 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# yum install openstack-placement-api -y
Edit /etc/placement/placement.conf and complete the following actions:
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf_bak
[root@controller ~]# egrep -v "^$|#" /etc/placement/placement.conf_bak > /etc/placement/placement.conf
[root@controller ~]# vim /etc/placement/placement.conf
In the [placement_database] section, configure database access:
[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
[root@controller ~]# mysql -uroot -ppassword placement -e "show tables;"
# Ignore the output
[root@controller ~]# systemctl restart httpd
[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
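The Placement API can also be probed directly; the root URL returns a JSON document listing the supported versions:
[root@controller ~]# curl http://controller:8778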
(1) Create the databases
[root@controller ~]# mysql -u root -p
# Create the nova_api, nova, and nova_cell0 databases:
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
# Grant proper access to the databases:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'172.16.1.160' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'172.16.1.160' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'172.16.1.160' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Replace NOVA_DBPASS with a suitable password.
(2) Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
(3) Create the user and associate the role
# Create the nova user:
[root@controller ~]# openstack user create --domain default --password NOVA_PASS nova
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | f3e5c9439bf84f8cbb3a975eb852eb69 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
# Add the admin role to the nova user:
[root@controller ~]# openstack role add --project service --user nova admin
# Create the nova service entity:
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | ec3a89fcb73a4e0b9cf83f05372f78e8 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
# List the existing endpoints
[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| 0a489b615f244b14ba1cbbeb915a5eb4 | RegionOne | glance | image | True | admin | http://controller:9292 |
| 1161fbff859d4ffe96756ed17e975267 | RegionOne | glance | image | True | internal | http://controller:9292 |
| 495bbf7aa8364210be166a39375d8121 | RegionOne | placement | placement | True | internal | http://controller:8778 |
| 68a8c3a4e3b94a9eb30921b01170982a | RegionOne | placement | placement | True | admin | http://controller:8778 |
| 81e21f9e5069492086f7368ae983b640 | RegionOne | glance | image | True | public | http://controller:9292 |
| a91af59e7fa448698db106f1c9d9178c | RegionOne | keystone | identity | True | internal | http://controller:5000/v3/ |
| b73f0c0eba1741fa8c20c3656278f3a9 | RegionOne | placement | placement | True | public | http://controller:8778 |
| b792451b1fc344bbb541f8a4a5b67b50 | RegionOne | keystone | identity | True | public | http://controller:5000/v3/ |
| d2fb160ceac747d1afef1c348db92812 | RegionOne | keystone | identity | True | admin | http://controller:5000/v3/ |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
# Create the Compute API service endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 72aac47247aa4c62b719241487389fb4 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ec3a89fcb73a4e0b9cf83f05372f78e8 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a60b87e2f3c24cc3b2e078d114a7296a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ec3a89fcb73a4e0b9cf83f05372f78e8 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 561b92669bb7487195d74956dd3e14b8 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ec3a89fcb73a4e0b9cf83f05372f78e8 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
(1) Install the packages
[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
openstack-nova-api: accepts and responds to all Compute API requests; manages instance lifecycles
openstack-nova-conductor: mediates database access (updates instance state in the database)
openstack-nova-novncproxy: web-based VNC proxy for operating instances directly
openstack-nova-scheduler: the scheduler
(2) Edit /etc/nova/nova.conf and complete the following actions
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf_bak
[root@controller ~]# egrep -v "^$|#" /etc/nova/nova.conf_bak > /etc/nova/nova.conf
[root@controller ~]# vim /etc/nova/nova.conf
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections, configure database access:
[api_database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
Replace NOVA_DBPASS with the password you chose for the Compute databases.
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
...
auth_strategy = keystone
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
In the [DEFAULT] section, set my_ip to the IP address of the management interface on the controller node:
[DEFAULT]
...
my_ip = 172.16.1.160
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
...
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
In the [glance] section, configure the location of the Image service API:
[glance]
...
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
In the [placement] section, configure access to the Placement service:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
(3) Populate the Compute databases
# Populate the nova-api database
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
# Register the cell0 database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# Populate the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
Note: ignore the output; you can log into the database to confirm the tables were created.
# Verify that nova cell0 and cell1 are registered correctly:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
| cell1 | b3252455-b9dd-427d-9395-da68baeda7c5 | rabbit://openstack:****@controller:5672/ | mysql+pymysql://nova:****@controller/nova | False |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
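A quick port check on the controller (assuming the default ports: nova-api on 8774/8775, the noVNC proxy on 6080):
[root@controller ~]# netstat -tlnp | grep -E '8774|8775|6080'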
(1) Install the package
[root@computer1 ~]# yum install openstack-nova-compute -y
(2) Edit /etc/nova/nova.conf and complete the following actions
[root@compute1 ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf_bak
[root@compute1 yum.repos.d]# egrep -v "^$|#" /etc/nova/nova.conf_bak > /etc/nova/nova.conf
[root@compute1 ~]# vim /etc/nova/nova.conf
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
In the [DEFAULT] section, set the my_ip option:
[DEFAULT]
# ...
my_ip = 172.16.1.161
Set my_ip to the IP address of the management network interface on the compute node (MANAGEMENT_INTERFACE_IP_ADDRESS in the official guide, typically 10.0.0.31 for the first node of the example architecture; here it is 172.16.1.161).
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://172.16.1.160:6080/vnc_auto.html
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the Placement API:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
# Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service. Comment out any other options in the [placement] section.
(1) Determine whether your compute node supports hardware acceleration for virtual machines:
[root@computer1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
2
If this command returns a value of one or greater, your compute node supports hardware acceleration and requires no additional configuration.
If this command returns zero, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM by editing the [libvirt] section of /etc/nova/nova.conf as follows:
[libvirt]
...
virt_type = qemu
(2) Start the Compute service and its dependencies, and configure them to start automatically at boot
[root@computer1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@computer1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
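Before the scheduler can place instances on the new compute node, the node must be mapped into a cell. On the controller, discover it (per the official Train guide; re-run this whenever you add compute nodes, or set discover_hosts_in_cells_interval in the [scheduler] section of nova.conf to automate it):
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova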
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+----------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2022-10-10T09:18:59.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2022-10-10T09:19:03.000000 |
| 7 | nova-compute | compute1 | nova | enabled | up | 2022-10-10T09:19:04.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
# List API endpoints in the Identity service to verify connectivity:
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | |
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | |
| nova | compute | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | |
+-----------+-----------+-----------------------------------------+
OpenStack Networking is built on an SDN (Software Defined Networking) component, Neutron. SDN is a pluggable architecture that supports plugging in switches, firewalls, load balancers, and so on, all defined in software, which allows fine-grained control of the entire cloud infrastructure. In this plan, the ens33 interface serves as the external network (in OpenStack terminology the external network is often called the provider network) and doubles as the management network for easier test access; in production the two should be separated. ens37 serves as the tenant (VXLAN) network, and ens38 as the Ceph cluster network.
OpenStack networking can be deployed with either OVS or Linux bridge. This guide uses Linux bridge mode; the two deployments are broadly similar.
Services to start on the controller node: neutron-server.service, neutron-linuxbridge-agent.service, neutron-dhcp-agent.service, neutron-metadata-agent.service, neutron-l3-agent.service.
(1) Create the database and grant access
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'172.16.1.160' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)
Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suitable password:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
(2) Source the admin credentials to gain access to admin-only CLI commands
[root@controller ~]# . admin-openrc ## reload the environment variables
(3) Create the service credentials
[root@controller ~]# openstack user create --domain default --password NEUTRON_PASS neutron
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 81c68ee930884059835110c1b31b305c |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
[root@controller ~]# openstack role add --project service --user neutron admin
Create the neutron service entity:
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | cac6a744698445a88092f67521973bc3 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 040aa02925db4ec2b9d3ee43d94352e2 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | cac6a744698445a88092f67521973bc3 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 92ccf1b6b687466493c2f05911510368 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | cac6a744698445a88092f67521973bc3 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled | True |
| id | 6a0ddef76c81405f99500f70739ce945 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | cac6a744698445a88092f67521973bc3 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
1. Install the components
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
openstack-neutron-linuxbridge: the Linux bridge agent, used to create bridge NICs
ebtables: firewall rules for bridged traffic
2. Configure the server component
Edit /etc/neutron/neutron.conf and complete the following actions:
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/neutron.conf_bak > /etc/neutron/neutron.conf
[root@controller ~]# vim /etc/neutron/neutron.conf
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
In the [DEFAULT] section, enable the ML2 plug-in and disable additional plug-ins:
[DEFAULT]
...
core_plugin = ml2
service_plugins =
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
3. Configure the Modular Layer 2 (ML2) plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/plugins/ml2/ml2_conf.ini_bak > /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
In the [ml2] section, enable flat and VLAN networks:
[ml2]
...
type_drivers = flat,vlan
In the [ml2] section, disable self-service (private) networks:
[ml2]
...
tenant_network_types =
In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
...
mechanism_drivers = linuxbridge
In the [ml2] section, enable the port security extension driver:
[ml2]
...
extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
...
flat_networks = provider
In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
...
enable_ipset = True
4. Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security group rules.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini_bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface; here it is ens33.
In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = False
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
5. Edit the kernel configuration file /etc/sysctl.conf so the kernel supports bridge filtering
[root@controller ~]# vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@controller ~]# modprobe br_netfilter
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@controller ~]# sed -i '$amodprobe br_netfilter' /etc/rc.local
[root@controller ~]# chmod +x /etc/rc.d/rc.local
6. Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini and complete the following actions
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/dhcp_agent.ini_bak > /etc/neutron/dhcp_agent.ini
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach the metadata service over the network:
[DEFAULT]
...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
1. Install the components
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
openstack-neutron-linuxbridge: the Linux bridge agent, used to create bridge NICs
ebtables: firewall rules for bridged traffic
2. Configure the server component
Edit /etc/neutron/neutron.conf and complete the following actions:
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/neutron.conf_bak > /etc/neutron/neutron.conf
[root@controller ~]# vim /etc/neutron/neutron.conf
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
In the [DEFAULT] section, enable the ML2 plug-in, the router service plug-in, and overlapping IP addresses:
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
3. Configure the Modular Layer 2 (ML2) plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/plugins/ml2/ml2_conf.ini_bak > /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
...
type_drivers = flat,vlan,vxlan
In the [ml2] section, enable VXLAN self-service networks:
[ml2]
...
tenant_network_types = vxlan
In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
...
mechanism_drivers = linuxbridge,l2population
In the [ml2] section, enable the port security extension driver:
[ml2]
...
extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
...
flat_networks = provider
In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
...
enable_ipset = True
4. Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security group rules.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini_bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface; here it is ens33.
In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes, so set OVERLAY_INTERFACE_IP_ADDRESS to the controller node's management IP address (10.10.10.1).
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
5. Edit the kernel configuration file /etc/sysctl.conf so the kernel supports bridge filtering
[root@controller ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@controller ~]# modprobe br_netfilter
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@controller ~]# sed -i '$amodprobe br_netfilter' /etc/rc.local
[root@controller ~]# chmod +x /etc/rc.d/rc.local
6. Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
Edit /etc/neutron/l3_agent.ini and complete the following actions:
[root@controller neutron]# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini_bak
[root@controller neutron]# egrep -v "^$|#" /etc/neutron/l3_agent.ini_bak > /etc/neutron/l3_agent.ini
[root@controller neutron]# vim /etc/neutron/l3_agent.ini
In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge
7. Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini and complete the following actions
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/dhcp_agent.ini_bak > /etc/neutron/dhcp_agent.ini
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach the metadata service over the network:
[DEFAULT]
...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Purpose: the metadata agent provides configuration information, such as credentials, to instances.
Edit /etc/neutron/metadata_agent.ini and complete the following actions
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini_bak
[root@controller ~]# egrep -v "^$|#" /etc/neutron/metadata_agent.ini_bak > /etc/neutron/metadata_agent.ini
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace METADATA_SECRET with a suitable secret for the metadata proxy.
Next, configure Compute to use Networking: edit /etc/nova/nova.conf and complete the following actions
[root@controller ~]# vim /etc/nova/nova.conf
In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:
[neutron]
...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
5. Finalize the installation
(1) The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the symlink does not exist, create it with the following command:
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(2) Populate the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(3) Restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
(4) Start the Networking services and configure them to start at boot
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-l3-agent.service
[root@controller ~]# systemctl enable neutron-l3-agent.service
[root@computer1 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Edit /etc/neutron/neutron.conf and complete the following actions
[root@computer1 ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf_bak
[root@computer1 ~]# egrep -v "^$|#" /etc/neutron/neutron.conf_bak > /etc/neutron/neutron.conf
[root@computer1 ~]# vim /etc/neutron/neutron.conf
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
Since this file is configured the same way as on the controller node, simply copy it over to the compute node:
[root@computer1 ~]# scp -r root@controller:/etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Edit the kernel configuration file /etc/sysctl.conf so the kernel supports bridge filtering
[root@computer1 ~]# vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@computer1 ~]# modprobe br_netfilter
[root@computer1 ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@computer1 ~]# sed -i '$amodprobe br_netfilter' /etc/rc.local
[root@computer1 ~]# chmod +x /etc/rc.d/rc.local
1. Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
(1) Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions
[root@compute1 ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini_bak
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See the host networking section for details.
In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes, so set OVERLAY_INTERFACE_IP_ADDRESS to the compute node's management IP address. See the host networking section for details.
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
2. Edit the kernel configuration file /etc/sysctl.conf so the kernel supports bridge filtering (same settings as above)
[root@compute1 ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Configure Compute to use Networking: edit /etc/nova/nova.conf and complete the following actions
[root@computer1 ~]# vim /etc/nova/nova.conf
In the [neutron] section, configure access parameters:
[neutron]
...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
[root@computer1 ~]# systemctl restart openstack-nova-compute.service
[root@computer1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@computer1 ~]# systemctl start neutron-linuxbridge-agent.service
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 02199446-fe60-4f56-a0f0-3ea6827f6891 | Linux bridge agent | compute1 | None | :-) | UP | neutron-linuxbridge-agent |
| 1d8812ec-4237-4d75-937c-40a9fac82c65 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| 2cd47568-54a1-4bea-b2fa-bb1d1b2fe935 | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| 533aa260-78f3-4391-b14c-4a1639eda135 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| fd1c7b5c-21ad-4e47-967d-4625e66c3962 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
1. Source the admin credentials to gain access to admin-only CLI commands
[root@controller ~]# . admin-openrc
2. Create the provider network
[root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2022-10-17T10:13:25Z |
| description | |
| dns_domain | None |
| id | 3ae54d14-14e6-48a2-ab7d-10329ce9bb93 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='2c9db0df0c9d4543816a07cec1e4d5d5', project.name='admin', region_name='', zone= |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 2c9db0df0c9d4543816a07cec1e4d5d5 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2022-10-17T10:13:25Z |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
3. Create a subnet on the network
[root@controller ~]# openstack subnet create --network provider --allocation-pool start=172.16.1.220,end=172.16.1.240 --dns-nameserver 192.168.87.8 --gateway 172.16.1.2 --subnet-range 172.16.1.0/24 provider
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools | 172.16.1.220-172.16.1.240 |
| cidr | 172.16.1.0/24 |
| created_at | 2022-10-17T10:16:10Z |
| description | |
| dns_nameservers | 192.168.87.8 |
| enable_dhcp | True |
| gateway_ip | 172.16.1.2 |
| host_routes | |
| id | eba80af0-8b35-4b5d-9e61-4a524579f631 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='2c9db0df0c9d4543816a07cec1e4d5d5', project.name='admin', region_name='', zone= |
| name | provider |
| network_id | 3ae54d14-14e6-48a2-ab7d-10329ce9bb93 |
| prefix_length | None |
| project_id | 2c9db0df0c9d4543816a07cec1e4d5d5 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2022-10-17T10:16:10Z |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
1. Source the admin credentials to gain access to admin-only CLI commands:
[root@controller ~]# . admin-openrc
2. Create the self-service network:
[root@controller ~]# openstack network create selfservice
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2022-10-17T09:51:04Z |
| description | |
| dns_domain | None |
| id | 7bf4d5b8-7190-4e05-b3cb-201dae570c1d |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='2c9db0df0c9d4543816a07cec1e4d5d5', project.name='admin', region_name='', zone= |
| mtu | 1450 |
| name | selfservice |
| port_security_enabled | True |
| project_id | 2c9db0df0c9d4543816a07cec1e4d5d5 |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 1 |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2022-10-17T09:51:04Z |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
3. Create a subnet on the self-service network (note: the sample output below was captured from a run that used 10.10.10.0/24):
[root@controller ~]# openstack subnet create --network selfservice --dns-nameserver 114.114.114.114 --gateway 192.168.1.1 --subnet-range 192.168.1.0/24 selfservice
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools | 10.10.10.1-10.10.10.253 |
| cidr | 10.10.10.0/24 |
| created_at | 2022-10-17T10:03:05Z |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 10.10.10.254 |
| host_routes | |
| id | d5898751-981c-40e2-8e1b-bbe9812cdbf6 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='2c9db0df0c9d4543816a07cec1e4d5d5', project.name='admin', region_name='', zone= |
| name | selfservice |
| network_id | 7bf4d5b8-7190-4e05-b3cb-201dae570c1d |
| prefix_length | None |
| project_id | 2c9db0df0c9d4543816a07cec1e4d5d5 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2022-10-17T10:03:05Z |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
4. Create a router:
[root@controller ~]# openstack router create router
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2022-10-17T10:04:49Z |
| description | |
| distributed | False |
| external_gateway_info | null |
| flavor_id | None |
| ha | False |
| id | 33c7f8f1-7798-49fc-a3d5-83786a70819b |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='2c9db0df0c9d4543816a07cec1e4d5d5', project.name='admin', region_name='', zone= |
| name | router |
| project_id | 2c9db0df0c9d4543816a07cec1e4d5d5 |
| revision_number | 1 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2022-10-17T10:04:49Z |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
5. Add the self-service subnet as an interface on the router:
[root@controller ~]# openstack router add subnet router selfservice
6. Set a gateway on the provider network for the router:
[root@controller ~]# openstack router set router --external-gateway provider
1. List the network namespaces. You should see one qrouter namespace and two qdhcp namespaces:
[root@controller ~]# ip netns
qdhcp-3ae54d14-14e6-48a2-ab7d-10329ce9bb93 (id: 2)
qrouter-33c7f8f1-7798-49fc-a3d5-83786a70819b (id: 1)
qdhcp-7bf4d5b8-7190-4e05-b3cb-201dae570c1d (id: 0)
2. List the ports on the router to determine the gateway IP address on the provider network:
[root@controller ~]# neutron router-port-list router
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------+----------------------------------+-------------------+-------------------------------------------------------------------------------------+
| id | name | tenant_id | mac_address | fixed_ips |
+--------------------------------------+------+----------------------------------+-------------------+-------------------------------------------------------------------------------------+
| 90943cea-ee27-42a3-9b4f-6b6a8f4c70ab | | 2c9db0df0c9d4543816a07cec1e4d5d5 | fa:16:3e:f3:31:ec | {"subnet_id": "d5898751-981c-40e2-8e1b-bbe9812cdbf6", "ip_address": "10.10.10.254"} |
| ae5858c5-03a1-4a51-9d8e-71d0d74b7900 | | | fa:16:3e:3b:60:2f | {"subnet_id": "eba80af0-8b35-4b5d-9e61-4a524579f631", "ip_address": "172.16.1.236"} |
+--------------------------------------+------+----------------------------------+-------------------+-------------------------------------------------------------------------------------+
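As a quick connectivity check, the router's gateway IP from the port listing above should answer ping from the physical network, for example from the controller:
[root@controller ~]# ping -c 4 172.16.1.236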
Create a small flavor for test instances:
[root@controller ~]# openstack flavor create --id 0 --vcpus 2 --ram 512 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 512 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |
+----------------------------+---------+
# Generate and add a key pair
[root@controller ~]# ssh-keygen -q -N ""
[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 78:21:ea:39:d0:e0:a0:12:26:55:5e:50:62:cb:f4:78 |
| name | mykey |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+-------------+-------------------------------------------------+
# Verify that the key pair was added
[root@controller ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 78:21:ea:39:d0:e0:a0:12:26:55:5e:50:62:cb:f4:78 |
+-------+-------------------------------------------------+
Add rules to the default security group.
Permit ICMP (ping):
[root@controller ~]# openstack security group rule create --proto icmp default
Permit secure shell (SSH) access:
[root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default
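Verify that the rules were added:
[root@controller ~]# openstack security group rule list default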
Before launching an instance, list the available flavors, images, security groups, and networks:
[root@controller ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 512 | 1 | 0 | 2 | True |
+----+---------+-----+------+-----------+-------+-----------+
[root@controller ~]# openstack image list
+--------------------------------------+-----------------------+--------+
| ID | Name | Status |
+--------------------------------------+-----------------------+--------+
| d8e30d01-3b95-4ec7-9b22-785cd0076ae4 | cirros | active |
| f09fe2d3-5a6e-4169-926b-cb13cd5e6018 | ubuntu-18.04-server | active |
| cae84d10-2034-4f8b-8ae0-3d0115d90a68 | ubuntu2004-01Snapshot | active |
| 2ada7482-a406-487e-a9b3-d7bd235fe29f | vm5 | active |
+--------------------------------------+-----------------------+--------+
[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 9fa1a67a-1d4e-41ca-a86a-b0ed03a06c37 | default | Default security group | 2c9db0df0c9d4543816a07cec1e4d5d5 | [] |
| a3258f5d-039b-4ece-ba94-aa95a2ea82f4 | default | Default security group | 9c18512ba8d241619aef8a8018d25587 | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
[root@controller ~]# openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-------------+--------------------------------------+
| 3ae54d14-14e6-48a2-ab7d-10329ce9bb93 | provider | eba80af0-8b35-4b5d-9e61-4a524579f631 |
| 7bf4d5b8-7190-4e05-b3cb-201dae570c1d | selfservice | d5898751-981c-40e2-8e1b-bbe9812cdbf6 |
+--------------------------------------+-------------+--------------------------------------+
Launch the instance, replacing the net-id value with the ID of the selfservice network from the listing above:
[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=7bf4d5b8-7190-4e05-b3cb-201dae570c1d --security-group default --key-name mykey vm1
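While the instance builds, you can poll its state; it should move from BUILD to ACTIVE:
[root@controller ~]# openstack server show vm1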
Create a floating IP address on the provider network:
[root@controller ~]# openstack floating ip create provider
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2022-10-24T09:43:01Z |
| description | |
| dns_domain | None |
| dns_name | None |
| fixed_ip_address | None |
| floating_ip_address | 172.16.1.248 |
| floating_network_id | 2f08b499-2a73-452f-929d-f973d18de441 |
| id | e8b9d356-c9f5-424e-8508-c6292fba4a9c |
| location | Munch({'project': Munch({'domain_name': 'default', 'domain_id': None, 'name': 'admin', 'id': u'3950f79d4f1d45d6bd5a1ff91485e316'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| name | 172.16.1.248 |
| port_details | None |
| port_id | None |
| project_id | 3950f79d4f1d45d6bd5a1ff91485e316 |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2022-10-24T09:43:01Z |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Associate the floating IP address with the instance:
[root@controller ~]# openstack server add floating ip vm1 172.16.1.248
[root@controller ~]# openstack server list
+--------------------------------------+------+--------+-----------------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------+--------+-----------------------------------------+--------+---------+
| 589a6161-1b58-4154-9a71-a21d7459df6a | vm1 | ACTIVE | selfservice=192.168.1.211, 172.16.1.248 | cirros | m1.nano |
+--------------------------------------+------+--------+-----------------------------------------+--------+---------+
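With the floating IP attached, the instance should answer ping and accept SSH, thanks to the ICMP and SSH rules added to the default security group earlier (cirros is the default user of the cirros image):
[root@controller ~]# ping -c 4 172.16.1.248
[root@controller ~]# ssh cirros@172.16.1.248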
If the instance instead fails to build and ends up in ERROR state:
[root@controller ~]# openstack server list
+--------------------------------------+------+--------+----------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------+--------+----------+--------+---------+
| 859f51b1-57a7-42d6-8dcf-59d461ebb961 | vm1 | ERROR | | cirros | m1.nano |
+--------------------------------------+------+--------+----------+--------+---------+
Check the logs on the controller node.
Problem 1:
[root@controller ~]# vim /var/log/nova/nova-conductor.log
2022-10-24 16:47:01.651 8282 ERROR nova.conductor.manager
2022-10-24 16:47:01.651 8282 ERROR nova.conductor.manager NoValidHost: No valid host was found.
2022-10-24 16:47:01.651 8282 ERROR nova.conductor.manager
2022-10-24 16:47:01.651 8282 ERROR nova.conductor.manager
2022-10-24 16:47:01.902 8282 WARNING nova.scheduler.utils [req-f3d6bd8b-6491-4c13-b787-df91e5461282 f88044dc66354fce937046e3b76732a2 3950f79d4f1d45d6bd5a1ff91485e316 - default default] Failed to compute_task_build_instances: No valid host was found.
The error NoValidHost: No valid host was found. appears.
Solution:
[root@controller ~]# vim /etc/httpd/conf.d/00-placement-api.conf
Add the following:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
[root@controller ~]# systemctl restart httpd
Problem 2:
2022-10-24 17:01:42.259 8281 ERROR nova.conductor.manager [req-afff2d67-b078-42d2-8e5a-ae8a3a115358 f88044dc66354fce937046e3b76732a2 3950f79d4f1d45d6bd5a1ff91485e316 - default default] No host-to-cell mapping found for selected host compute1. Setup is incomplete.: HostMappingNotFound: Host 'compute1' is not mapped to any cell
2022-10-24 17:01:42.353 8281 WARNING nova.scheduler.utils [req-afff2d67-b078-42d2-8e5a-ae8a3a115358 f88044dc66354fce937046e3b76732a2 3950f79d4f1d45d6bd5a1ff91485e316 - default default] Failed to compute_task_build_instances: Host 'compute1' is not mapped to any cell: HostMappingNotFound: Host 'compute1' is not mapped to any cell
2022-10-24 17:01:42.355 8281 WARNING nova.scheduler.utils [req-afff2d67-b078-42d2-8e5a-ae8a3a115358 f88044dc66354fce937046e3b76732a2 3950f79d4f1d45d6bd5a1ff91485e316 - default default] [instance: ff5689f6-903b-4b82-898b-f34c334e6038] Setting instance to ERROR state.: HostMappingNotFound: Host 'compute1' is not mapped to any cell
The error HostMappingNotFound: Host 'compute1' is not mapped to any cell appears.
Solution:
Delete the instance and run:
[root@controller ~]# nova-manage cell_v2 discover_hosts --verbose
Then create the instance again.
If the problem still occurs after recreating, on the compute node:
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
vif_plugging_timeout = 10
vif_plugging_is_fatal = False
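Optionally, to avoid re-running host discovery by hand for every new compute node, nova-scheduler can discover hosts periodically (set on the controller; the interval is in seconds):
[root@controller ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300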
1. Install the packages:
[root@controller ~]# yum install openstack-dashboard -y
2. Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:
[root@controller ~]# cp /etc/openstack-dashboard/local_settings{,_bak}
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
Configure the dashboard to use OpenStack services on the controller node:
OPENSTACK_HOST = "controller"
Allow all hosts to access the dashboard:
ALLOWED_HOSTS = ['*', ]
Configure the memcached session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
Configure default as the default domain for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
Configure the default role for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
You may replace ``TIME_ZONE`` with an appropriate time zone identifier; for more information, refer to the list of time zones.
Add the following line:
WEBROOT = '/dashboard'
1. Restart the web server and session storage services:
[root@controller ~]# systemctl restart httpd.service memcached.service
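The dashboard should now respond; a quick check from the controller (you can also browse to http://controller/dashboard and log in with the admin credentials):
[root@controller ~]# curl -sI http://controller/dashboard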
If needed, relaunch the test instance (substitute a net-id from openstack network list in your environment):
openstack server create --flavor m1.nano --image cirros --nic net-id=0a2a4727-ce1c-4b1a-955a-b59a72463cbb --security-group default --key-name mykey vm1
1. Create the database and grant privileges:
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'172.16.1.160' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
2. Source the admin credentials:
[root@controller ~]# . admin-openrc
3. Create a cinder user:
[root@controller ~]# openstack user create --domain default --password CINDER_PASS cinder
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 02643ad738044a3ea2df8b7e4b780beb |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
4. Add the admin role to the cinder user:
[root@controller ~]# openstack role add --project service --user cinder admin
5. Create the cinderv2 and cinderv3 service entities:
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | c5239e643a67471cbd686daaa7717ac0 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 48fd786fd97b499aa7b3e20745b733a1 |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+
6. Create the Block Storage service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | b13c66d0f8334356b2236efcbd208b3c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c5239e643a67471cbd686daaa7717ac0 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | f7329c2c1fa349cb80673aa801ba9c3c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c5239e643a67471cbd686daaa7717ac0 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 45f42ed528e941f1aa0c7946f534185a |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c5239e643a67471cbd686daaa7717ac0 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | fb92c7708562494f8000806887b1e1c6 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 48fd786fd97b499aa7b3e20745b733a1 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 726b7ad809f845f2b5b25c3b0c8a9dfc |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 48fd786fd97b499aa7b3e20745b733a1 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 5375ae13ced049cebfb32f6d33aa3bbb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 48fd786fd97b499aa7b3e20745b733a1 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
1. Install the packages:
[root@controller ~]# yum install openstack-cinder -y
2. Edit the /etc/cinder/cinder.conf file and complete the following actions:
[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf_bak
[root@controller ~]# egrep -v "^$|#" /etc/cinder/cinder.conf_bak > /etc/cinder/cinder.conf
[root@controller ~]# vim /etc/cinder/cinder.conf
In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
In the [DEFAULT] section, set the my_ip option to the management interface IP address of the controller node:
[DEFAULT]
# ...
my_ip = 172.16.1.160
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
3. Populate the Block Storage database:
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
4. Verify the database:
[root@controller ~]# mysql -ucinder -pCINDER_DBPASS -e "use cinder;show tables;"
+----------------------------+
| Tables_in_cinder |
+----------------------------+
| attachment_specs |
| backup_metadata |
| backups |
| cgsnapshots |
| clusters |
| consistencygroups |
| driver_initiator_data |
| encryption |
| group_snapshots |
| group_type_projects |
| group_type_specs |
| group_types |
| group_volume_type_mapping |
| groups |
| image_volume_cache_entries |
| messages |
| migrate_version |
| quality_of_service_specs |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| services |
| snapshot_metadata |
| snapshots |
| transfers |
| volume_admin_metadata |
| volume_attachment |
| volume_glance_metadata |
| volume_metadata |
| volume_type_extra_specs |
| volume_type_projects |
| volume_types |
| volumes |
| workers |
+----------------------------+
1. Edit the /etc/nova/nova.conf file on the controller and add the following to it:
[root@controller ~]# vim /etc/nova/nova.conf
Append the following at the end of the file:
[cinder]
os_region_name = RegionOne
1. Restart the Compute API service:
[root@controller ~]# systemctl restart openstack-nova-api.service
2. Start the Block Storage services and configure them to start when the system boots:
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
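Verify that the Block Storage control services are up (cinder-scheduler should show as up; cinder-volume appears only after the storage backend is configured below):
[root@controller ~]# openstack volume service list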
To back Glance, Cinder, and Nova with Ceph, first push SSH keys and repo files from the ceph admin node to the OpenStack nodes:
[root@ceph_node1 cluster]# ssh-copy-id controller
[root@ceph_node1 cluster]# ssh-copy-id compute1
[root@ceph_node1 cluster]# cd /etc/yum.repos.d/
[root@ceph_node1 cluster]# scp ceph.repo epel.repo controller:/etc/yum.repos.d/
[root@ceph_node1 cluster]# scp ceph.repo epel.repo compute1:/etc/yum.repos.d/
[root@controller ~]# yum -y install ceph ceph-radosgw
[root@compute1 ~]# yum -y install ceph ceph-radosgw
for i in {ceph_node1,ceph_node2,ceph_node3}
do ssh ${i} "\
systemctl restart ceph-mon.target
"
done
[root@ceph_node1 cluster]# ceph osd pool create volumes 128
[root@ceph_node1 cluster]# ceph osd pool create images 32
[root@ceph_node1 cluster]# ceph osd pool create backups 128
[root@ceph_node1 cluster]# ceph osd pool create vms 128
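Depending on your Ceph release (Luminous and later), newly created RBD pools should also be initialized before use:
[root@ceph_node1 cluster]# rbd pool init volumes
[root@ceph_node1 cluster]# rbd pool init images
[root@ceph_node1 cluster]# rbd pool init backups
[root@ceph_node1 cluster]# rbd pool init vms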
1. Create a client.cinder user with rwx permission on the volumes and vms pools and rx permission on the images pool:
[root@ceph_node1 cluster]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms,allow rx pool=images'
[client.cinder]
key = AQCqg0djCgapGxAA0KMSpfaJIDM9ZniBORzndw==
2. Create a client.glance user with rwx permission on the images pool:
[root@ceph_node1 cluster]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
key = AQApg0djhjjFFRAAdbbJIvJjsuChC9o8t4Exeg==
3. Create a client.cinder-backup user with rwx permission on the backups pool:
[root@ceph_node1 cluster]# ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups'
[client.cinder-backup]
key = AQBjg0djNyjTFxAA9MGOQFHMz/Kfp7NApFZXhA==
4. Save the glance keyring to controller (the node where the glance service runs):
[root@ceph_node1 cluster]# ceph auth get-or-create client.glance | ssh controller tee /etc/ceph/ceph.client.glance.keyring
[root@ceph_node1 cluster]# ssh controller chown glance:glance /etc/ceph/ceph.client.glance.keyring
5. Save the cinder keyring to the controller, compute, and storage nodes:
[root@ceph_node1 cluster]# ceph auth get-or-create client.cinder | ssh controller tee /etc/ceph/ceph.client.cinder.keyring
[root@ceph_node1 cluster]# ssh controller chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[root@ceph_node1 cluster]# ceph auth get-or-create client.cinder | ssh compute1 tee /etc/ceph/ceph.client.cinder.keyring
[root@ceph_node1 cluster]# ssh compute1 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
## With many nodes, you can use the following loop instead
for h in controller compute1
do
ceph auth get-or-create client.cinder-backup | ssh $h tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh $h chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-key client.cinder | ssh $h tee client.cinder.key
done
## Do not use the admin key directly: the various service users must be able to read the keyring files, so set the owner and group accordingly, otherwise they will lack permission. (Alternatively you could change ceph.client.admin.keyring to mode 775 so the cinder/glance users can read it, but this is not recommended.)
Add the secret to libvirt on the compute node (compute1); generate it on compute1 first.
1. Generate a UUID:
[root@compute1 ceph]# UUID=$(uuidgen)
[root@compute1 ceph]# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>${UUID}</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
[root@compute1 ceph]# cat secret.xml
<secret ephemeral='no' private='no'>
<uuid>3807e089-07ce-4846-a898-9b91b50552d0</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
2. Add the secret to libvirt:
[root@compute1 ceph]# virsh secret-define --file secret.xml
Secret 3807e089-07ce-4846-a898-9b91b50552d0 created
[root@compute1 ceph]# virsh secret-set-value --secret ${UUID} --base64 $(cat /etc/ceph/ceph.client.cinder.keyring | grep key | awk -F ' ' '{ print $3 }')
Secret value set
Note: save the UUID generated here; it is needed later in the Cinder and Nova configuration. In this example the UUID is 3807e089-07ce-4846-a898-9b91b50552d0.
If you added the secret incorrectly and need to delete it, run:
# virsh secret-undefine aa22a048-147f-44d1-8571-8c394d924299
If the controller node is also used as a compute node, add the secret there as well.
3. View the secret just added:
[root@compute1 ceph]# virsh secret-list
UUID Usage
--------------------------------------------------------------------------------
3807e089-07ce-4846-a898-9b91b50552d0 ceph client.cinder secret
[root@compute1 ceph]# scp secret.xml controller:/etc/ceph/
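If the controller is also used as a compute node, as noted above, define the same secret there with the copied file (a minimal sketch; the UUID is the one generated on compute1):
[root@controller ~]# virsh secret-define --file /etc/ceph/secret.xml
[root@controller ~]# virsh secret-set-value --secret 3807e089-07ce-4846-a898-9b91b50552d0 --base64 $(awk '$1=="key"{print $3}' /etc/ceph/ceph.client.cinder.keyring)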
1. On the OpenStack controller node, modify the Glance configuration:
[root@controller ceph]# vim /etc/glance/glance-api.conf
Add the following to the [DEFAULT] section:
[DEFAULT]
show_image_direct_url = True
Add the following to the [glance_store] section, and comment out the original stores and default_store entries:
[glance_store]
#stores = file,http
#default_store = file
filesystem_store_datadir = /var/lib/glance/images/
stores = rbd,file,http
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
# rbd_store_user = admin # use admin directly only when no glance user was created in ceph; in that case also change the permissions of /etc/ceph/ceph.client.admin.keyring to 775
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# An image stored in ceph is split into multiple objects. This sets the size of a single object in MB; the value should be a power of two (default 8). Do not make chunk_size too large, since uploads fail when the image is smaller than chunk_size.
rbd_store_chunk_size = 8
Add the following to the [paste_deploy] section, to avoid caching images under /var/lib/glance/image-cache:
[paste_deploy]
flavor = keystone
Restart the glance-api service on the controller node:
[root@controller ceph]# systemctl restart openstack-glance-api.service
Verify image upload (the sample output below was captured from an earlier ubuntu-18.04-server upload):
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# openstack image create "cirros-0.4.0-x86_64" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | fcd77cd8aa585da4061655045f3f0511 |
| container_format | bare |
| created_at | 2022-10-14T01:08:41Z |
| disk_format | qcow2 |
| file | /v2/images/f09fe2d3-5a6e-4169-926b-cb13cd5e6018/file |
| id | f09fe2d3-5a6e-4169-926b-cb13cd5e6018 |
| min_disk | 0 |
| min_ram | 0 |
| name | ubuntu-18.04-server |
| owner | 2c9db0df0c9d4543816a07cec1e4d5d5 |
| properties | direct_url='rbd://c473ecd2-8e18-475c-835b-9bd1c661bc9f/images/f09fe2d3-5a6e-4169-926b-cb13cd5e6018/snap', os_hash_algo='sha512', os_hash_value='5320be1a41792ec35ac05cdd7f5203c4fa6406dcfd7ca4a79042aa73c5803596e66962a01aabb35b8e64a2e37f19f7510bffabdd4955cff040e8522ff5e1ec1e', os_hidden='False' |
| protected | False |
| schema | /v2/schemas/image |
| size | 990904320 |
| status | active |
| tags | |
| updated_at | 2022-10-14T01:09:30Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]# cd /etc/ceph/
[root@controller ceph]# openstack image list
[root@ceph_node1 ceph]# rbd info images/f09fe2d3-5a6e-4169-926b-cb13cd5e6018
rbd image 'f09fe2d3-5a6e-4169-926b-cb13cd5e6018':
size 945 MiB in 119 objects
order 23 (8 MiB objects)
snapshot_count: 1
id: 7da096317d665
block_name_prefix: rbd_data.7da096317d665
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Fri Oct 14 09:08:44 2022
access_timestamp: Fri Oct 14 09:08:44 2022
modify_timestamp: Fri Oct 14 09:08:44 2022
[root@controller ceph]# systemctl start openstack-cinder-volume.service
[root@controller ceph]# systemctl enable openstack-cinder-volume.service
1. On the Cinder controller node (controller), edit the configuration file /etc/cinder/cinder.conf:
[root@controller ~]# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, set the default volume type to ceph:
[DEFAULT]
#default_volume_type = hdd
default_volume_type = ceph
Changing default_volume_type inserts a new record named ceph into the volume_types table in the cinder database.
Restart the cinder-api and cinder-scheduler services on the controller node:
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
2. On the Cinder storage nodes (compute01, compute02), edit the configuration file /etc/cinder/cinder.conf:
[root@compute1 ~]# yum install openstack-cinder targetcli python-keystone -y
[root@compute1 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf_bak
[root@compute1 ~]# egrep -v "^$|#" /etc/cinder/cinder.conf_bak > /etc/cinder/cinder.conf
[root@compute1 ~]# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, comment out the existing enabled_backends:
[DEFAULT]
#enabled_backends = lvm
enabled_backends = ceph,lvm
Append a [ceph] section at the end of the file:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
# An rbd snapshot is a fast copy of metadata only, with no actual data copy. When creating a new volume from a snapshot you may not want it to depend on the source snapshot; enabling this option copies the snapshot data when the new volume is created, so the volume is independent of the source snapshot. Default: false.
rbd_flatten_volume_from_snapshot = false
# Maximum number of nested clone layers; 0 disables cloning. Lowering this value does not affect existing volumes whose clone depth already exceeds the new value.
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
# This must be the libvirt secret UUID saved when configuring the Ceph secret earlier (3807e089-07ce-4846-a898-9b91b50552d0 in this example); take care not to copy a trailing space after it
rbd_secret_uuid = 3807e089-07ce-4846-a898-9b91b50552d0
3. On the Cinder nodes (compute01, compute02), restart the cinder-volume service:
[root@compute1 ~]# systemctl start openstack-cinder-volume.service
[root@compute1 ~]# systemctl enable openstack-cinder-volume.service
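As a sanity check (the volume name here is illustrative), create a small volume and confirm it lands in the ceph volumes pool:
[root@controller ~]# openstack volume create --size 1 test-ceph-vol
[root@ceph_node1 cluster]# rbd ls volumes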
cinder-backup backs up volumes to another storage system. Currently supported backup backends are Swift, Ceph, and IBM Tivoli Storage Manager (TSM). The default is Swift, which requires the swift component to be configured.
1. On the storage nodes (compute01, compute02), edit the configuration file /etc/cinder/cinder.conf:
[root@compute1 ~]# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, add the following:
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 4194304
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
2. In the dashboard configuration (/etc/openstack-dashboard/local_settings), add the following to OPENSTACK_CINDER_FEATURES:
[root@controller cinder]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_CINDER_FEATURES = {
'enable_backup': True,
}
Start the cinder-backup service and enable it at boot:
[root@controller cinder]# systemctl enable openstack-cinder-backup.service
[root@controller cinder]# systemctl restart openstack-cinder-backup.service
Restart the httpd service:
[root@controller cinder]# systemctl restart httpd
[root@compute1 ~]# systemctl enable openstack-cinder-backup.service
[root@compute1 ~]# systemctl restart openstack-cinder-backup.service
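To verify the backup path end to end (names illustrative, reusing the test volume from the earlier sanity check), create a backup and confirm it appears in the backups pool:
[root@controller ~]# openstack volume backup create --name test-backup test-ceph-vol
[root@ceph_node1 cluster]# rbd ls backups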
1. On the Nova compute nodes (compute01, compute02), edit the configuration file /etc/nova/nova.conf:
[root@compute1 ~]# vim /etc/nova/nova.conf
Modify and add the following:
[DEFAULT]
# Enable live migration
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
[libvirt]
# If the compute node is itself a virtual machine, you may need to set virt_type = qemu, otherwise new instances can hang at "GRUB Loading stage2" after creation
# virt_type = qemu
inject_partition=-2
virt_type = kvm
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
disk_cachemodes="network=writeback"
rbd_user = cinder
# This uuid is the same one used in cinder.conf
rbd_secret_uuid = 3807e089-07ce-4846-a898-9b91b50552d0
2. Restart the nova-compute service on each compute node:
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
Note: integrating nova with ceph allows instances to boot from ceph volumes. When cinder attaches or detaches block devices, the libvirt process needs permission to access the ceph cluster.
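After restarting, boot a test instance and confirm its disk is created in the vms pool rather than on local disk:
[root@ceph_node1 cluster]# rbd ls vms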
# You can build a base image in a VM on the compute node
[root@compute1 ~]# sudo qemu-img create -f qcow2 /home/soft/ubuntu18_01/ubuntu18_01.qcow2 100G ## the size is flexible; openstack can grow the disk when creating instances
[root@compute1 ~]# sudo virt-install --virt-type kvm --name ubuntu18_01 --vcpus=2 --ram 4096 --disk /home/soft/ubuntu18_01/ubuntu18_01.qcow2,format=qcow2,bus=virtio --network default,model=virtio --graphics vnc,listen=0.0.0.0,password=123456,port=5920 --noautoconsole --os-type=linux --os-variant=ubuntu18.04 --cdrom=/home/soft/ubuntu-18.04.5-live-server-amd64.iso
# Connect with a VNC client and install the operating system
yum install -y acpid && systemctl enable acpid
# acpid is a userspace daemon that handles power events, e.g. forwarding kernel power events to applications and telling them to exit safely, preventing data corruption from abnormal exits
# When the kernel crashes, the crash output usually flashes by at boot, and since the system is down you cannot log in remotely (e.g. over ssh) to read it. By configuring grub we can redirect these logs to the serial console, where the error messages can be read for analysis and troubleshooting. Edit /etc/default/grub and set GRUB_CMDLINE_LINUX as follows:
vim /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto console=tty0 console=ttyS0,115200n8"
# cloud-init is a script that runs the first time a VM boots. It pulls configuration from the metadata service and initializes the instance: setting the hostname, initializing passwords, injecting keys, and so on. Install cloud-init manually and enable it at boot:
yum install -y cloud-init && systemctl enable cloud-init
# For Linux images, cloud-init is responsible for instance initialization. It is very capable, and its behavior can be customized flexibly via its configuration file, /etc/cloud/cloud.cfg. A few example scenarios:
#1. To let root log in to instances directly (root login is disabled by default), set:
disable_root: 0
#2. To allow login with an ssh password (by default only private-key login is allowed), set:
ssh_pwauth: 1
#3. To allow changing the instance hostname (by default cloud-init resets the hostname to its initial value on every reboot), delete or comment out the following two entries in the cloud_init_modules list:
- set_hostname
- update_hostname
# cloud-init performs its initialization on every instance boot. To change the initialization behavior of all instances, edit the image's /etc/cloud/cloud.cfg; to change only one instance, edit that instance's /etc/cloud/cloud.cfg directly.
Set Passwords (optional):
chpasswd:
list: |
centos:openstack
expire: False
# If expire is specified and set to false, the global password setting is used as the password for all the listed user accounts. If expire is specified and set to true, the user passwords expire immediately, preventing continued use of the default system password.
# The image was built with a fixed root partition size (10GB, say). So that instances can automatically grow the root partition to the root disk size of the flavor, install growpart (called growroot in older versions) and complete the following:
yum install -y epel-release
yum install -y cloud-utils-growpart
rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} dracut -f /boot/initramfs-{}.img {}
# Run the following commands on the host to remove host-specific information such as MAC addresses.
virt-sysprep -d centos7
virt-sparsify --compress $kvname.qcow2 $kvname-.qcow2
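After sysprep and sparsify, the compacted image can be uploaded to Glance using the same image-create syntax as earlier (run wherever the OpenStack client and admin credentials are available; the image name here is illustrative):
openstack image create "ubuntu18_01" --file $kvname-.qcow2 --disk-format qcow2 --container-format bare --public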