Kubernetes
Kubernetes is a container cluster management system that Google open-sourced in 2014, commonly abbreviated K8s.
Evolution of container cluster management:
1. mesos + zookeeper + marathon architecture
2. docker + swarm container cluster management
3. kubernetes open-source framework —> secondary development against its API; the name means "helmsman" (written in Go)
K8s is used to deploy, scale, and manage containerized applications.
K8s provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and a range of related features.
Deployment management: stateless and stateful workloads —> handled by controllers.
Service discovery: etcd —> effectively a distributed database (automatic service discovery). (In production, etcd starts at a minimum of three nodes.)
Kubernetes aims to make deploying containerized applications simple and efficient.
Official site: http://www.kubernetes.io
Self-healing
Restarts failed containers when a node fails, and replaces or redeploys them to maintain the expected replica count; kills containers that fail health checks and withholds client requests from containers until they are ready, ensuring online services are not interrupted.
Elastic scaling
Scales application instances up and down quickly via commands, the UI, or automatically based on CPU usage, keeping the application highly available under peak concurrency and reclaiming resources during off-peak periods so the service runs at minimal cost.
Automated rollouts and rollbacks
K8s updates applications with a rolling update strategy, replacing one Pod at a time instead of deleting them all at once; if a problem appears during the update, the change is rolled back so the upgrade does not disrupt the business.
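As a concrete illustration, the whole rolling-update-and-rollback flow can be driven from kubectl. A minimal sketch, assuming a Deployment named nginx already exists; the image tag is a placeholder:

# Trigger a rolling update by swapping the container image
kubectl set image deployment/nginx nginx=nginx:1.16
# Watch the rollout replace one Pod at a time
kubectl rollout status deployment/nginx
# If the new version misbehaves, revert to the previous revision
kubectl rollout undo deployment/nginx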
Service discovery and load balancing
K8s gives a set of containers a unified access entry point (one entry for administrators, one for clients), consisting of an internal IP address and a DNS name, and load-balances across all associated containers, so users never need to track container IPs.
Secret and configuration management
Manages secrets and application configuration without exposing sensitive data inside images, improving the security of sensitive data; commonly used configuration can also be stored in K8s for applications to consume.
Storage orchestration
Mounts external storage systems (local storage, public cloud such as AWS, or network storage such as NFS, GlusterFS, Ceph) as part of the cluster's resources, greatly improving storage flexibility.
Batch processing
Provides one-off tasks and scheduled tasks, covering batch data processing and analytics scenarios.
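A one-off task maps to a Job object. A minimal sketch, assuming a working cluster; the pi name, perl image, and command are illustrative only:

# Create a Job that computes pi to 100 digits and exits
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never
EOF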
Three machines: a master control node, plus node1 and node2
kubectl: the command-line tool administrators operate with
Two nodes: serve the actual workloads
Two entry points: one for client access, one for administrator access
Two ways to manage resources: kubectl commands and YAML files
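Both styles reach the same end state; imperative kubectl commands are quick for ad-hoc work, while YAML files can be version-controlled. A minimal sketch, assuming a working cluster (the nginx Deployment and file name are illustrative):

# Imperative: drive resources directly with kubectl subcommands
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
# Declarative: describe the desired state in a YAML file and apply it
kubectl apply -f nginx-deployment.yaml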
Pod
The smallest deployable unit
A collection of one or more containers
Containers within a Pod share a network namespace
Pods are ephemeral
Controllers
ReplicaSet: ensures the expected number of Pod replicas
Deployment: stateless application deployment
StatefulSet: stateful application deployment
DaemonSet: ensures every Node runs a copy of a given Pod
Job: one-off tasks
CronJob: scheduled tasks
Higher-level objects that deploy and manage Pods
Service
Prevents losing track of Pods
Defines an access policy for a group of Pods
Label: a tag attached to a resource, used to associate, query, and filter objects
Namespaces: logically isolate objects
Annotations: free-form notes attached to objects
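Labels and namespaces combine naturally when filtering resources. A quick sketch (the dev namespace and the app=nginx label are illustrative):

# Create a namespace, then list only the Pods in it carrying a given label
kubectl create namespace dev
kubectl get pods -n dev -l app=nginx
# Show the labels attached to every Pod in the current namespace
kubectl get pods --show-labels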
etcd:ca.pem server.pem server-key.pem
flannel:ca.pem server.pem server-key.pem
kube-apiserver:ca.pem server.pem server-key.pem
kubelet:ca.pem ca-key.pem
kube-proxy:ca.pem kube-proxy.pem kube-proxy-key.pem
kubectl:ca.pem admin.pem admin-key.pem
etcd has the following characteristics:
Fully replicated: every node in the cluster holds the complete data set
Highly available: etcd is built to ride out single points of hardware failure and network problems
Consistent: every read returns the most recent write across the hosts
Simple: ships a well-defined, user-facing API (gRPC)
Fast: benchmarked at 10,000 writes per second
Reliable: uses the Raft algorithm for a strongly consistent, highly available data store
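The etcdctl client (etcd v3.3 defaults to the v2 API used throughout this walkthrough) makes the replication property easy to poke at. A sketch, run on the master where a plaintext 127.0.0.1 listener is configured later in this setup; the key and value are arbitrary:

# Write a key on one member...
/opt/etcd/bin/etcdctl set /demo/message "hello"
# ...and the fully replicated value can be read back from any member
/opt/etcd/bin/etcdctl get /demo/message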
The flannel network component
Overlay Network: a virtual network mode layered on top of the underlying base network, in which hosts are connected by virtual links.
VXLAN: encapsulates the original packet inside UDP, wraps it with the base network's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers the data to the target address.
Flannel: one kind of overlay network. It likewise encapsulates the source packet inside another network packet for routing, forwarding, and communication; it currently supports UDP, VXLAN, AWS VPC, GCE routing, and other data-forwarding backends.
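Once flannel is up (configured later in this walkthrough), the VXLAN machinery is visible from any node. A sketch of inspection commands, assuming the certificate paths used in this deployment:

# Show the VXLAN tunnel device flannel creates (VNI, local IP, UDP port)
ip -d link show flannel.1
# List the per-node subnet leases flannel records in etcd
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.20.10:2379" ls /coreos.com/network/subnets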
Master components are responsible for managing the Kubernetes cluster. They manage the lifecycle of Pods, the basic unit of deployment inside a Kubernetes cluster.
Node components are the worker machines in Kubernetes, managed by the master. Nodes can be virtual machines (VMs) or physical machines; Kubernetes runs well on both.
Every node contains the components necessary to run Pods.
On all nodes, flush the firewall rules and disable kernel protection (SELinux):
iptables -F
setenforce 0
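Note that iptables -F and setenforce 0 only hold until the next reboot. A sketch of making the relaxed settings persistent, assuming CentOS 7 with firewalld:

# Stop firewalld now and keep it from returning at boot
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux permanently (takes full effect after a reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config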
Part 1: Environment preparation
Official download page: https://github.com/kubernetes/kubernetes/releases?after=v1.13.1
Part 2: K8s deployment
Environment
master:192.168.20.10 kube-apiserver kube-controller-manager kube-scheduler etcd
node1:192.168.20.20 kubelet kube-proxy docker flannel etcd
node2:192.168.20.30 kubelet kube-proxy docker flannel etcd
// Make the certificates
// On the master
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# su
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# ls    // dragged in from the host machine
etcd-cert.sh etcd.sh
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
// Download the certificate-generation tools
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master etcd-cert]# bash cfssl.sh    // downloads the official cfssl packages
Or:
[root@master etcd-cert]# ls    // packages downloaded elsewhere and dragged in
cfssl cfssl-certinfo cfssljson etcd-cert etcd.sh
[root@master etcd-cert]# mv cfssl* /usr/local/bin/
[root@master etcd-cert]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master etcd-cert]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
// Start making the certificates
// cfssl generates the certificates; cfssljson turns the JSON passed to it into certificate files;
// cfssl-certinfo displays certificate information
// Define the CA configuration
[root@master etcd-cert]# cat > ca-config.json <<EOF
> {
> "signing": {
> "default": {
> "expiry": "87600h"
> },
> "profiles": {
> "www": {
> "expiry": "87600h",
> "usages": [
> "signing",
> "key encipherment",
> "server auth",
> "client auth"
> ]
> }
> }
> }
> }
> EOF
// Define the CA signing request
[root@master etcd-cert]# cat > ca-csr.json <<EOF
> {
> "CN": "etcd CA",
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "Beijing",
> "ST": "Beijing"
> }
> ]
> }
> EOF
// Generate the CA certificate: produces ca-key.pem and ca.pem
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/28 16:11:29 [INFO] generating a new CA key and certificate from CSR
2020/09/28 16:11:29 [INFO] generate received request
2020/09/28 16:11:29 [INFO] received CSR
2020/09/28 16:11:29 [INFO] generating key: rsa-2048
2020/09/28 16:11:30 [INFO] encoded CSR
2020/09/28 16:11:30 [INFO] signed certificate with serial number 307109152987071081700641248999918396111229161596
// Specify the three etcd nodes covered by inter-node communication verification
[root@master etcd-cert]# cat > server-csr.json <<EOF
> {
> "CN": "etcd",
> "hosts": [
> "192.168.20.10", //master地址
> "192.168.20.20", //node1地址
> "192.168.20.30" //node2地址
> ],
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [ //名字要和上面定義的一樣
> {
> "C": "CN",
> "L": "BeiJing",
> "ST": "BeiJing"
> }
> ]
> }
> EOF
// Generate the etcd server certificates: produces server-key.pem and server.pem
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/09/28 16:12:52 [INFO] generate received request
2020/09/28 16:12:52 [INFO] received CSR
2020/09/28 16:12:52 [INFO] generating key: rsa-2048
2020/09/28 16:12:52 [INFO] encoded CSR
2020/09/28 16:12:52 [INFO] signed certificate with serial number 538862372957746116117729195241060280056748061751
2020/09/28 16:12:52 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
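This cfssl WARNING is common for cluster-internal certificates and is generally harmless here. To double-check what was actually issued, the cfssl-certinfo tool mentioned earlier can decode any of the generated files:

# Inspect the subject, SANs, validity period, and issuer of the server cert
cfssl-certinfo -cert server.pem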
// etcd binary releases: https://github.com/etcd-io/etcd/releases
// Drag in flannel-v0.10.0-linux-amd64.tar.gz, etcd-v3.3.10-linux-amd64.tar.gz, and kubernetes-server-linux-amd64.tar.gz
[root@master etcd-cert]# ls
ca-config.json etcd-cert.sh server-csr.json
ca.csr etcd-v3.3.10-linux-amd64.tar.gz server-key.pem
ca-csr.json flannel-v0.10.0-linux-amd64.tar.gz server.pem
ca-key.pem kubernetes-server-linux-amd64.tar.gz
ca.pem server.csr
[root@master etcd-cert]# mv *.tar.gz ../
[root@master etcd-cert]# cd ..
[root@master k8s]# ls
cfssl.sh etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
// Unpack etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
// Create the working directories for config files, binaries, and certificates
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
// Copy the certificates over
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
// This blocks, waiting for the other nodes to join
[root@master k8s]# bash etcd.sh etcd01 192.168.20.10 etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
// Open a second session and you will find the etcd process is running
[root@master k8s]# ps -ef | grep etcd
root 22521 1 2 16:41 ? 00:00:01 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.10:2380 --listen-client-urls=https://192.168.20.10:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.20.10:2379 --initial-advertise-peer-urls=https://192.168.20.10:2380 --initial-cluster=etcd01=https://192.168.20.10:2380,etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 22534 22330 0 16:42 pts/2 00:00:00 grep --color=auto etcd
// Copy the certificates to the nodes
[root@master k8s]# scp -r /opt/etcd/ root@192.168.20.20:/opt/
The authenticity of host '192.168.20.20 (192.168.20.20)' can't be established.
ECDSA key fingerprint is SHA256:M+6YSK2hm7e8JY4G1qYmT0X1UmIr280vvpa+1rW8IBc.
ECDSA key fingerprint is MD5:bd:01:e2:85:f0:b0:36:8c:49:64:08:30:6c:2d:a4:37.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.20.20' (ECDSA) to the list of known hosts.
root@192.168.20.20's password:
etcd 100% 509 237.4KB/s 00:00
etcd 100% 18MB 71.0MB/s 00:00
etcdctl 100% 15MB 81.1MB/s 00:00
ca-key.pem 100% 1679 1.0MB/s 00:00
ca.pem 100% 1265 1.4MB/s 00:00
server-key.pem 100% 1675 1.4MB/s 00:00
server.pem 100% 1338 1.8MB/s 00:00
[root@master k8s]# scp -r /opt/etcd/ root@192.168.20.30:/opt/
The authenticity of host '192.168.20.30 (192.168.20.30)' can't be established.
ECDSA key fingerprint is SHA256:YI9QBe63U8Cgwvdpz0mTaUAPrBP7p0NRMbrujvLhYm8.
ECDSA key fingerprint is MD5:2a:d0:1b:eb:fb:50:3f:a4:f4:f0:a0:59:9b:97:e5:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.20.30' (ECDSA) to the list of known hosts.
root@192.168.20.30's password:
etcd 100% 509 335.8KB/s 00:00
etcd 100% 18MB 81.8MB/s 00:00
etcdctl 100% 15MB 75.6MB/s 00:00
ca-key.pem 100% 1679 351.1KB/s 00:00
ca.pem 100% 1265 316.3KB/s 00:00
server-key.pem 100% 1675 1.2MB/s 00:00
server.pem 100% 1338 805.8KB/s 00:00
// Copy the etcd systemd unit to the nodes
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.20.20:/usr/lib/systemd/system/
root@192.168.20.20's password:
etcd.service 100% 923 283.8KB/s 00:00
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.20.30:/usr/lib/systemd/system/
root@192.168.20.30's password:
etcd.service
// Modify the config on node1
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# su
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" //名字改成etcd02
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.20:2380" // change to this node's own address
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.20:2379" // change to this node's own address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.20:2380" // change to this node's own address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.20:2379" // change to this node's own address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.10:2380,etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
// Modify the config on node2
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# su
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03" //名字改成etcd03
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.30:2380" // change to this node's own address
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.30:2379" // change to this node's own address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.30:2380" // change to this node's own address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.30:2379" // change to this node's own address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.10:2380,etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
// Start the services
[root@master k8s]# bash etcd.sh etcd01 192.168.20.10 etcd02=https://192.168.20.20:2380,etcd03=https://192.168.20.30:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@node1 ~]# systemctl start etcd.service
[root@node2 ~]# systemctl start etcd.service
// Check the status
[root@master k8s]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since 一 2020-09-28 17:23:02 CST; 55s ago
Main PID: 78752 (etcd)
Tasks: 13
CGroup: /system.slice/etcd.service
└─78752 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.10...
...
[root@node1 ~]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2020-09-28 17:22:50 CST; 2min 14s ago
Main PID: 22277 (etcd)
Tasks: 13
CGroup: /system.slice/etcd.service
└─22277 /opt/etcd/bin/etcd --name=etcd02 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.2...
...
[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2020-09-28 17:22:53 CST; 2min 16s ago
Main PID: 22366 (etcd)
Tasks: 14
CGroup: /system.slice/etcd.service
└─22366 /opt/etcd/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.20.3...
...
// Check cluster health
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379" cluster-health
member 350b6ab68923a8a2 is healthy: got healthy result from https://192.168.20.20:2379
member 51ae3f86f3783687 is healthy: got healthy result from https://192.168.20.10:2379
member c05141f45e08d8ff is healthy: got healthy result from https://192.168.20.30:2379
cluster is healthy
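Beyond cluster-health, etcdctl can list the members and show which one is currently the Raft leader. A quick sketch reusing the same TLS flags:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379" member list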
// 3. Docker engine deployment: install the Docker engine on all node machines
Install Docker on node1 and node2 (one common approach is sketched below).
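A sketch assuming CentOS 7 and Docker's upstream yum repository; other installation methods work equally well:

# On node1 and node2: install Docker CE from the upstream repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker
systemctl start docker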
// 4. Flannel network configuration
// Write the allocated subnet range into etcd for flannel to use
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
// Check what was written
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
// Copy flannel to all node machines (it only needs to be deployed on the nodes)
[root@master etcd-cert]# cd ..
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.20.20:/root
root@192.168.20.20's password:
flannel-v0.10.0-linux-amd64.tar.gz 100% 9479KB 55.8MB/s 00:00
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.20.30:/root
root@192.168.20.30's password:
flannel-v0.10.0-linux-amd64.tar.gz 100% 9479KB 35.8MB/s 00:00
// Perform on every node (only node1 is shown here)
// Unpack
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
// Create the k8s working directory
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
// Write the flannel startup script
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
// Enable the flannel network
[root@node1 ~]# bash flannel.sh https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
// Configure Docker to use flannel
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.39.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.39.1/24 --ip-masq=false --mtu=1450"
// Restart the Docker service
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
// Inspect the flannel network
[root@node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.39.1 netmask 255.255.255.0 broadcast 172.17.39.255
ether 02:42:9e:5b:d5:d1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.20.20 netmask 255.255.255.0 broadcast 192.168.20.255
inet6 fe80::f0c9:c17f:3e56:9bf5 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:ac:fe:ba txqueuelen 1000 (Ethernet)
RX packets 361114 bytes 227105212 (216.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 261561 bytes 29704749 (28.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.39.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::2062:10ff:fe72:d64d prefixlen 64 scopeid 0x20<link>
ether 22:62:10:72:d6:4d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 38 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 810 bytes 55986 (54.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 810 bytes 55986 (54.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:7e:c1:42 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
// Test: ping a container on the other node's docker0 subnet to prove flannel provides routing
[root@node1 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
75f829a71a1c: Pull complete
Digest: sha256:19a79828ca2e505eaee0ff38c2f3fd9901f4826737295157cc5212b7a372cd2b
Status: Downloaded newer image for centos:7
[root@2ab5e936498a /]# yum install net-tools -y
[root@2ab5e936498a /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.39.2 netmask 255.255.255.0 broadcast 172.17.39.255
ether 02:42:ac:11:27:02 txqueuelen 0 (Ethernet)
RX packets 16290 bytes 12483008 (11.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7815 bytes 425422 (415.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node2 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
75f829a71a1c: Pull complete
Digest: sha256:19a79828ca2e505eaee0ff38c2f3fd9901f4826737295157cc5212b7a372cd2b
Status: Downloaded newer image for centos:7
[root@c72893bc9690 /]# yum install net-tools -y
[root@c72893bc9690 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.63.2 netmask 255.255.255.0 broadcast 172.17.63.255
ether 02:42:ac:11:3f:02 txqueuelen 0 (Ethernet)
RX packets 16264 bytes 12482650 (11.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7783 bytes 423626 (413.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@2ab5e936498a /]# ping 172.17.63.2
PING 172.17.63.2 (172.17.63.2) 56(84) bytes of data.
64 bytes from 172.17.63.2: icmp_seq=1 ttl=62 time=2.55 ms
64 bytes from 172.17.63.2: icmp_seq=2 ttl=62 time=4.69 ms
64 bytes from 172.17.63.2: icmp_seq=3 ttl=62 time=0.383 ms
^C
--- 172.17.63.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 0.383/2.542/4.695/1.761 ms
[root@c72893bc9690 /]# ping 172.17.39.2
PING 172.17.39.2 (172.17.39.2) 56(84) bytes of data.
64 bytes from 172.17.39.2: icmp_seq=1 ttl=62 time=2.02 ms
64 bytes from 172.17.39.2: icmp_seq=2 ttl=62 time=0.917 ms
64 bytes from 172.17.39.2: icmp_seq=3 ttl=62 time=0.751 ms
^C
--- 172.17.39.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.751/1.231/2.027/0.567 ms
Deploying the master components
// On the master: generate the certificates for the api-server
Drag the master.zip archive into the /root/k8s directory
[root@master k8s]# ls
cfssl.sh etcd-v3.3.10-linux-amd64 kubernetes-server-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz master.zip
etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
[root@master k8s]# unzip master.zip
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
Drag in the k8s-cert.sh script
[root@master k8s-cert]# ls
k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.20.10", //master1
"192.168.20.40", //master2
"192.168.20.111", //vip
"192.168.20.50", //lb (master)
"192.168.20.60", //lb (backup)
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
// Generate the k8s certificates
[root@master k8s-cert]# bash k8s-cert.sh
2020/09/29 15:20:27 [INFO] generating a new CA key and certificate from CSR
2020/09/29 15:20:27 [INFO] generate received request
2020/09/29 15:20:27 [INFO] received CSR
2020/09/29 15:20:27 [INFO] generating key: rsa-2048
2020/09/29 15:20:28 [INFO] encoded CSR
2020/09/29 15:20:28 [INFO] signed certificate with serial number 572092143940477158442975741908760581653757414586
2020/09/29 15:20:28 [INFO] generate received request
2020/09/29 15:20:28 [INFO] received CSR
2020/09/29 15:20:28 [INFO] generating key: rsa-2048
2020/09/29 15:20:28 [INFO] encoded CSR
2020/09/29 15:20:28 [INFO] signed certificate with serial number 645411198364777330575133409297661007151065267201
2020/09/29 15:20:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/09/29 15:20:28 [INFO] generate received request
2020/09/29 15:20:28 [INFO] received CSR
2020/09/29 15:20:28 [INFO] generating key: rsa-2048
2020/09/29 15:20:28 [INFO] encoded CSR
2020/09/29 15:20:28 [INFO] signed certificate with serial number 382185722811839684332683631495065868107644288788
2020/09/29 15:20:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/09/29 15:20:28 [INFO] generate received request
2020/09/29 15:20:28 [INFO] received CSR
2020/09/29 15:20:28 [INFO] generating key: rsa-2048
2020/09/29 15:20:29 [INFO] encoded CSR
2020/09/29 15:20:29 [INFO] signed certificate with serial number 54367561030861349163097338268655276544563898262
2020/09/29 15:20:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls *.pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
// Unpack the kubernetes tarball
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
// Copy the key binaries into place
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# ls
apiextensions-apiserver kube-controller-manager.tar
cloud-controller-manager kubectl
cloud-controller-manager.docker_tag kubelet
cloud-controller-manager.tar kube-proxy
hyperkube kube-proxy.docker_tag
kubeadm kube-proxy.tar
kube-apiserver kube-scheduler
kube-apiserver.docker_tag kube-scheduler.docker_tag
kube-apiserver.tar kube-scheduler.tar
kube-controller-manager mounter
kube-controller-manager.docker_tag
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s/
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '    // generate a random token
7c0a6952689f0769225e08a5d1f705b2
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
7c0a6952689f0769225e08a5d1f705b2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"    // token, user name, uid, group
// With the binaries, token, and certificates in place, start the apiserver
[root@master k8s]# bash apiserver.sh 192.168.20.10 https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
// Check that the process started successfully
[root@master k8s]# ps aux | grep kube-apiserver
// Inspect the generated configuration file
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379 \
--bind-address=192.168.20.10 \
--secure-port=6443 \
--advertise-address=192.168.20.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
// The HTTPS port being listened on
[root@master k8s]# netstat -ntap | grep 6443
tcp 0 0 192.168.20.10:6443 0.0.0.0:* LISTEN 80347/kube-apiserve
tcp 0 0 192.168.20.10:6443 192.168.20.10:58068 ESTABLISHED 80347/kube-apiserve
tcp 0 0 192.168.20.10:58068 192.168.20.10:6443 ESTABLISHED 80347/kube-apiserve
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 80347/kube-apiserve
// Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# chmod +x controller-manager.sh
// Start the controller-manager
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
// Check the master component status
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
Deploying the node components
// On the master
// Copy kubelet and kube-proxy over to the nodes
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.20.20:/opt/kubernetes/bin/
root@192.168.20.20's password:
kubelet 100% 168MB 60.4MB/s 00:02
kube-proxy 100% 48MB 59.5MB/s 00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.20.30:/opt/kubernetes/bin/
root@192.168.20.30's password:
kubelet 100% 168MB 96.8MB/s 00:01
kube-proxy 100% 48MB 96.0MB/s 00:00
// On the node machines (copy node.zip into /root)
[root@node1 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip 公共 視訊 檔案 音樂
flannel.sh initial-setup-ks.cfg README.md 模板 圖片 下載 桌面
[root@node1 ~]# unzip node.zip
Archive: node.zip
inflating: proxy.sh
inflating: kubelet.sh
// On the master
[root@master bin]# cd /root/k8s
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig
// Copy the kubeconfig.sh file in and rename it
[root@master kubeconfig]# ls
kubeconfig.sh
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
Delete the following section:
# Create the TLS bootstrapping token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
// Retrieve the token (copy the value generated earlier)
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
7c0a6952689f0769225e08a5d1f705b2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
// Edit the script to use that token ID
[root@master kubeconfig]# vim kubeconfig
#----------------------
APISERVER=$1
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=7c0a6952689f0769225e08a5d1f705b2 \ // paste the token copied above
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
// Set the environment variable
[root@master kubeconfig]# vim /etc/profile
Append after the last line:
export PATH=$PATH:/opt/kubernetes/bin/
[root@master kubeconfig]# source /etc/profile
[root@master kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
// Generate the bootstrap.kubeconfig and kube-proxy.kubeconfig files
[root@master kubeconfig]# bash kubeconfig 192.168.20.10 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
// Copy the kubeconfig files to the nodes
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.20.20:/opt/kubernetes/cfg/
root@192.168.20.20's password:
bootstrap.kubeconfig 100% 2167 1.4MB/s 00:00
kube-proxy.kubeconfig 100% 6273 1.2MB/s 00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.20.30:/opt/kubernetes/cfg/
root@192.168.20.30's password:
bootstrap.kubeconfig 100% 2167 1.2MB/s 00:00
kube-proxy.kubeconfig 100% 6273 4.9MB/s 00:00
// Create the bootstrap role binding, granting permission to connect to the apiserver and request certificate signing
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
// On node1
[root@node1 ~]# bash kubelet.sh 192.168.20.20
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
// Verify the kubelet service started
[root@node1 ~]# ps -aux | grep kube
root 79703 0.1 0.4 399640 18088 ? Ssl 14:23 0:22 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.20.10:2379,https://192.168.20.20:2379,https://192.168.20.30:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 97400 1.0 1.1 534300 42548 ? Ssl 17:30 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.20.20 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 97465 0.0 0.0 112728 984 pts/2 S+ 17:31 0:00 grep --color=auto kube
// On the master
// Check for the certificate request from node1
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 2m1s kubelet-bootstrap Pending
[root@master kubeconfig]# kubectl certificate approve node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY
certificatesigningrequest.certificates.k8s.io/node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY approved
// kubectl certificate:
//   approve: approve a certificate signing request
//   deny: deny a certificate signing request
// Check the certificate status again
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 3m11s kubelet-bootstrap Approved,Issued
// Pending: waiting for the cluster to issue the node a certificate; Approved,Issued: the node has been admitted to the cluster
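When several nodes bootstrap at once, approving each CSR by name gets tedious. A hedged one-liner that approves every request currently listed (review the list first, since this signs everything outstanding):

kubectl get csr -o name | xargs kubectl certificate approve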
// List the cluster nodes: node1 has joined successfully
[root@master kubeconfig]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.20.20 Ready <none> 68s v1.12.3
// On node1: start the kube-proxy service
[root@node1 ~]# bash proxy.sh 192.168.20.20
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since 二 2020-09-29 17:35:57 CST; 1min 24s ago
Main PID: 98700 (kube-proxy)
Tasks: 0
Memory: 7.8M
CGroup: /system.slice/kube-proxy.service
‣ 98700 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-...
9月 29 17:37:12 node1 kube-proxy[98700]: I0929 17:37:12.263279 98700 config.go:14...te
9月 29 17:37:13 node1 kube-proxy[98700]: I0929 17:37:13.437384 98700 config.go:14...te
9月 29 17:37:14 node1 kube-proxy[98700]: I0929 17:37:14.277055 98700 config.go:14...te
9月 29 17:37:15 node1 kube-proxy[98700]: I0929 17:37:15.451517 98700 config.go:14...te
9月 29 17:37:16 node1 kube-proxy[98700]: I0929 17:37:16.287927 98700 config.go:14...te
9月 29 17:37:17 node1 kube-proxy[98700]: I0929 17:37:17.464773 98700 config.go:14...te
9月 29 17:37:18 node1 kube-proxy[98700]: I0929 17:37:18.296889 98700 config.go:14...te
9月 29 17:37:19 node1 kube-proxy[98700]: I0929 17:37:19.474728 98700 config.go:14...te
9月 29 17:37:20 node1 kube-proxy[98700]: I0929 17:37:20.308835 98700 config.go:14...te
9月 29 17:37:21 node1 kube-proxy[98700]: I0929 17:37:21.489116 98700 config.go:14...te
Hint: Some lines were ellipsized, use -l to show in full.
// Deploying node2
// On node1
// Simply copy the existing /opt/kubernetes directory to the other node and modify it there
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.20.30:/opt/
The authenticity of host '192.168.20.30 (192.168.20.30)' can't be established.
ECDSA key fingerprint is SHA256:YI9QBe63U8Cgwvdpz0mTaUAPrBP7p0NRMbrujvLhYm8.
ECDSA key fingerprint is MD5:2a:d0:1b:eb:fb:50:3f:a4:f4:f0:a0:59:9b:97:e5:55.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.20.30' (ECDSA) to the list of known hosts.
root@192.168.20.30's password:
flanneld 100% 235 209.8KB/s 00:00
bootstrap.kubeconfig 100% 2167 1.7MB/s 00:00
kube-proxy.kubeconfig 100% 6273 5.1MB/s 00:00
kubelet 100% 377 257.4KB/s 00:00
kubelet.config 100% 267 75.5KB/s 00:00
kubelet.kubeconfig 100% 2296 2.1MB/s 00:00
kube-proxy 100% 189 167.6KB/s 00:00
mk-docker-opts.sh 100% 2139 1.6MB/s 00:00
scp: /opt//kubernetes/bin/flanneld: Text file busy
kubelet 100% 168MB 106.7MB/s 00:01
kube-proxy 100% 48MB 113.7MB/s 00:00
kubelet.crt 100% 2185 2.1MB/s 00:00
kubelet.key 100% 1675 646.4KB/s 00:00
kubelet-client-2020-09-29-17-33-26.pem 100% 1273 273.7KB/s 00:00
kubelet-client-current.pem 100% 1273 304.1KB/s 00:00
// Copy the kubelet and kube-proxy service units to node2
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.20.30:/usr/lib/systemd/system/
root@192.168.20.30's password:
kubelet.service 100% 264 136.6KB/s 00:00
kube-proxy.service 100% 231 143.8KB/s 00:00
// On node2: make the modifications
// First delete the copied certificates; node2 will request its own shortly
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# rm -rf *
// Modify the config files: kubelet, kubelet.config, and kube-proxy
[root@node2 ssl]# cd ../cfg
[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.30 \ // change to this node's own address
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.20.30 // change to this node's own address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
[root@node2 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.30 \ // change to this node's own address
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
// Start the services
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
// On the master: check for the new request
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 41m kubelet-bootstrap Approved,Issued
node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE 117s kubelet-bootstrap Pending // copy this request name
// Approve it so the node can join the cluster
[root@master kubeconfig]# kubectl certificate approve node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE
certificatesigningrequest.certificates.k8s.io/node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE approved
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5VZZo63-AZcdMyqaRZ6IiQbdprnkWP7GyBWqDGfIAwY 42m kubelet-bootstrap Approved,Issued
node-csr-rfpP-a8Z8anqv5yxrR-cdcpO98QHjo7EAkqUXPElscE 2m45s kubelet-bootstrap Approved,Issued
// List the nodes in the cluster
[root@master kubeconfig]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.20.20 Ready <none> 39m v1.12.3
192.168.20.30 Ready <none> 12s v1.12.3
[root@node2 cfg]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since 二 2020-09-29 18:11:15 CST; 16min ago
Main PID: 99461 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
‣ 99461 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.20.30 --cluster-cidr=10.0.0.0/24 --p...
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.360327 99461 iptables.go:327] running iptables-save [-t filter]
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.362421 99461 iptables.go:327] running iptables-save [-t nat]
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.435050 99461 proxier.go:1472] Bind addr 10.0.0.1
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.458144 99461 iptables.go:391] running iptables-restore [-w 5 --noflush --counters]
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.461366 99461 proxier.go:672] syncProxyRules took 101.094914ms
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.461402 99461 bounded_frequency_runner.go:221] sync-runner: ran, next poss... in 30s
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.903731 99461 config.go:141] Calling handler.OnEndpointsUpdate
9月 29 18:27:19 node2 kube-proxy[99461]: I0929 18:27:19.932189 99461 config.go:141] Calling handler.OnEndpointsUpdate
9月 29 18:27:21 node2 kube-proxy[99461]: I0929 18:27:21.917556 99461 config.go:141] Calling handler.OnEndpointsUpdate
9月 29 18:27:21 node2 kube-proxy[99461]: I0929 18:27:21.941538 99461 config.go:141] Calling handler.OnEndpointsUpdate
Hint: Some lines were ellipsized, use -l to show in full.
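With both nodes Ready, a final smoke test confirms the cluster schedules workloads end to end. A sketch using the v1.12-era kubectl run syntax; the nginx deployment is illustrative:

# Start two nginx replicas and check which nodes they land on
kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide
# Expose them via a NodePort service and note the assigned port
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx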