Server OS version | Docker version | CPU architecture |
---|---|---|
CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | x86_64 |
The figure below illustrates how software deployment has evolved: the traditional deployment era, the virtualized deployment era, and the container deployment era.
Traditional deployment era:
Early on, organizations ran applications directly on physical servers. There was no way to limit the resources an application could use on a physical server, which led to resource-allocation problems. For example, if several applications ran on the same physical server, one application could consume most of the resources and degrade the performance of the others. One solution was to run each application on its own physical server, but then resources left idle by an under-utilized application could not be given to other applications, and maintaining many physical servers was expensive.
Virtualized deployment era:
Virtualization was therefore introduced. It lets you run multiple virtual machines (VMs) on a single physical server's CPU. Virtualization isolates applications from one another inside separate VMs and provides a degree of security, because one application's data cannot be freely accessed by another.
Virtualization makes better use of a physical server's resources. Because applications can easily be added or updated, it also offers better scalability and lowers hardware costs, among other benefits. With virtualization, a set of physical resources can be presented as a cluster of disposable virtual machines.
Each VM is a complete machine that runs all of its components, including its own operating system, on top of virtualized hardware.
Container deployment era:
Containers are similar to VMs, but their relaxed isolation lets them share the host operating system (OS). Containers are therefore considered more lightweight than VMs. Like a VM, each container has its own filesystem, CPU share, memory, process space, and so on. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Containers have become popular because of the many benefits they provide, for example: agile application creation and deployment, continuous integration, consistency across development, testing, and production environments, and more efficient resource utilization.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. It has a large, rapidly growing ecosystem, and its services, support, and tools are widely available.
The name Kubernetes comes from Greek, meaning "helmsman" or "pilot". The abbreviation k8s comes from the eight letters between the "k" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes builds on more than a decade of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.
Kubernetes provides features such as service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, self-healing, and secret and configuration management.
The Kubernetes cluster architecture is as follows:
Kubernetes has two types of nodes: master nodes and worker nodes. The master node is also called the control plane. The control plane consists of several components that make global decisions for the cluster, such as scheduling, and that detect and respond to cluster events, for example starting a new pod when a Deployment's replicas field is not satisfied.
Control-plane components can run on any node in the cluster. For simplicity, however, setup scripts typically start all control-plane components on the same machine and do not run user containers on that machine.
The control-plane components are kube-apiserver, etcd, kube-scheduler, and kube-controller-manager.
Node components run on every node; they maintain the running Pods and provide the Kubernetes runtime environment.
The node components are kubelet, kube-proxy, and the container runtime (Docker in this article).
Cluster layout used in this article: k8scloude1 is the master node, k8scloude2 and k8scloude3 are the worker nodes.
Server | OS version | CPU architecture | Processes | Role |
---|---|---|---|---|
k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico | k8s master node |
k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node |
k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node |
Start by setting up the basic environment on the nodes. All three nodes need the same configuration; k8scloude1 is used as the example here.
Set the hostname first.
[root@localhost ~]# vim /etc/hostname
[root@localhost ~]# cat /etc/hostname
k8scloude1
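As a side note (not one of the original steps), the same result can be achieved in a single command with hostnamectl; run the matching command on each node:
hostnamectl set-hostname k8scloude1   #on k8scloude1; use k8scloude2/k8scloude3 on the other nodes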
Configure the node's IP address (optional)
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
[root@k8scloude1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
DNS1=114.114.114.114
IPADDR=192.168.110.130
NETMASK=255.255.255.0
GATEWAY=192.168.110.2
ZONE=trusted
Restart the network
[root@localhost ~]# service network restart
Restarting network (via systemctl): [ 確定 ]
[root@localhost ~]# systemctl restart NetworkManager
After the machine reboots, the hostname becomes k8scloude1. Test whether the machine can reach the network.
[root@k8scloude1 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=25.9 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=26.7 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=128 time=26.4 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 25.960/26.393/26.724/0.320 ms
Configure the IP-to-hostname mappings
[root@k8scloude1 ~]# vim /etc/hosts
[root@k8scloude1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.110.130 k8scloude1
192.168.110.129 k8scloude2
192.168.110.128 k8scloude3
#Copy /etc/hosts to the other two nodes
[root@k8scloude1 ~]# scp /etc/hosts 192.168.110.129:/etc/hosts
[root@k8scloude1 ~]# scp /etc/hosts 192.168.110.128:/etc/hosts
#If the other two nodes can be pinged, the mapping works
[root@k8scloude1 ~]# ping k8scloude1
PING k8scloude1 (192.168.110.130) 56(84) bytes of data.
64 bytes from k8scloude1 (192.168.110.130): icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from k8scloude1 (192.168.110.130): icmp_seq=2 ttl=64 time=0.053 ms
^C
--- k8scloude1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.044/0.048/0.053/0.008 ms
[root@k8scloude1 ~]# ping k8scloude2
PING k8scloude2 (192.168.110.129) 56(84) bytes of data.
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=1 ttl=64 time=0.297 ms
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=3 ttl=64 time=0.254 ms
^C
--- k8scloude2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.254/0.536/1.057/0.368 ms
[root@k8scloude1 ~]# ping k8scloude3
PING k8scloude3 (192.168.110.128) 56(84) bytes of data.
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=1 ttl=64 time=0.285 ms
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=2 ttl=64 time=0.513 ms
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=3 ttl=64 time=0.390 ms
^C
--- k8scloude3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.285/0.396/0.513/0.093 ms
Disable console blanking (optional)
[root@k8scloude1 ~]# setterm -blank 0
Download new yum repository files
[root@k8scloude1 ~]# rm -rf /etc/yum.repos.d/* ;wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
--2022-01-07 17:07:28-- ftp://ftp.rhce.cc/k8s/*
=> 「/etc/yum.repos.d/.listing」
正在解析主機 ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41
正在連線 ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... 已連線。
正在以 anonymous 登入 ... 登入成功!
==> SYST ... 完成。 ==> PWD ... 完成。
......
100%[=======================================================================================================================================================================>] 276 --.-K/s 用時 0s
2022-01-07 17:07:29 (81.9 MB/s) - 「/etc/yum.repos.d/k8s.repo」 已儲存 [276]
#The new repo files are as follows
[root@k8scloude1 ~]# ls /etc/yum.repos.d/
CentOS-Base.repo docker-ce.repo epel.repo k8s.repo
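The exact contents of the downloaded repo files are not shown above. For reference only, a kubernetes yum repo definition typically looks roughly like the sketch below (this example points at the Aliyun mirror and is an assumption, not the file fetched from ftp.rhce.cc):
#illustrative /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0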
Disable SELinux by setting SELINUX=disabled
[root@k8scloude1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@k8scloude1 ~]# getenforce
Disabled
[root@k8scloude1 ~]# setenforce 0
setenforce: SELinux is disabled
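If SELinux is still enabled on a freshly installed node, a quick sketch for switching it off is:
#persistently disable SELinux (takes effect after the next reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
#stop enforcing for the current boot; prints a notice if SELinux is already disabled
setenforce 0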
Configure the firewall to allow all packets through
[root@k8scloude1 ~]# firewall-cmd --set-default-zone=trusted
Warning: ZONE_ALREADY_SET: trusted
success
[root@k8scloude1 ~]# firewall-cmd --get-default-zone
trusted
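Setting the default zone to trusted lets all traffic through. If you would rather not open everything, an alternative (a sketch, not what this article does) is to open only the ports kubeadm warns about later, 6443 for the API server and 10250 for the kubelet; a complete cluster needs additional ports (for example etcd and the CNI plugin), which is why the article simply trusts everything:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload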
The Linux swapoff command disables the system swap area.
Note: if swap is not turned off, kubeadm reports the following error during initialization: "[ERROR Swap]: running with swap on is not supported. Please disable swap"
[root@k8scloude1 ~]# swapoff -a ;sed -i '/swap/d' /etc/fstab
[root@k8scloude1 ~]# cat /etc/fstab
# /etc/fstab
# Created by anaconda on Thu Oct 18 23:09:54 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=9875fa5e-2eea-4fcc-a83e-5528c7d0f6a5 / xfs defaults 0 0
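You can double-check that swap really is off; a quick sketch:
free -m       #the Swap line should show 0 total and 0 used
swapon -s     #prints nothing when no swap device is active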
Kubernetes is a container orchestration tool and needs a container runtime underneath, so install Docker on all three nodes; k8scloude1 is again used as the example.
Install Docker
[root@k8scloude1 ~]# yum -y install docker-ce
已載入外掛:fastestmirror
base | 3.6 kB 00:00:00
......
已安裝:
docker-ce.x86_64 3:20.10.12-3.el7
......
完畢!
Enable Docker to start on boot and start it now
[root@k8scloude1 ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since 六 2022-01-08 22:10:38 CST; 18s ago
Docs: https://docs.docker.com
Main PID: 1377 (dockerd)
Memory: 30.8M
CGroup: /system.slice/docker.service
└─1377 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Check the Docker version
[root@k8scloude1 ~]# docker --version
Docker version 20.10.12, build e91ed57
Configure a Docker registry mirror
[root@k8scloude1 ~]# cat > /etc/docker/daemon.json <<EOF
> {
> "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
> }
> EOF
[root@k8scloude1 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
}
Restart Docker
[root@k8scloude1 ~]# systemctl restart docker
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since 六 2022-01-08 22:17:45 CST; 8s ago
Docs: https://docs.docker.com
Main PID: 1529 (dockerd)
Memory: 32.4M
CGroup: /system.slice/docker.service
└─1529 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
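kubeadm's preflight checks later warn that Docker is using the cgroupfs cgroup driver and recommend systemd. The cluster in this article still comes up with cgroupfs, but if you want to follow the recommendation, daemon.json can also set the driver; a sketch that keeps the registry mirror configured above:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker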
Configure the kernel so that traffic crossing Linux bridges is passed to iptables, and enable IPv4 forwarding
[root@k8scloude1 ~]# cat <<EOF> /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
#Apply the settings
[root@k8scloude1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
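The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded; if sysctl -p complains that they are missing, load the module first (a sketch):
modprobe br_netfilter
#load it automatically on every boot
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf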
Install kubelet, kubeadm, and kubectl on all three nodes:
#From the yum man page, --disableexcludes=repoid disables the excludes defined for the given repo
#--disableexcludes=kubernetes therefore lifts the package excludes defined for the kubernetes repo so the pinned kubelet/kubeadm/kubectl packages can be installed
[root@k8scloude1 ~]# yum -y install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 --disableexcludes=kubernetes
已載入外掛:fastestmirror
Loading mirror speeds from cached hostfile
正在解決依賴關係
--> 正在檢查事務
---> 軟體包 kubeadm.x86_64.0.1.21.0-0 將被 安裝
......
已安裝:
kubeadm.x86_64 0:1.21.0-0 kubectl.x86_64 0:1.21.0-0 kubelet.x86_64 0:1.21.0-0
......
完畢!
Enable kubelet to start on boot and start it now
[root@k8scloude1 ~]# systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
#kubelet cannot start successfully yet
[root@k8scloude1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since 六 2022-01-08 22:35:33 CST; 3s ago
Docs: https://kubernetes.io/docs/
Process: 1722 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 1722 (code=exited, status=1/FAILURE)
1月 08 22:35:33 k8scloude1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
1月 08 22:35:33 k8scloude1 systemd[1]: Unit kubelet.service entered failed state.
1月 08 22:35:33 k8scloude1 systemd[1]: kubelet.service failed.
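This failure is expected: kubelet has nothing to run until kubeadm init (or kubeadm join) writes /var/lib/kubelet/config.yaml, so systemd keeps restarting it. If you want to see the underlying error, the kubelet journal can be inspected (a sketch):
journalctl -xeu kubelet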
Check which kubeadm versions are available
[root@k8scloude2 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
已載入外掛:fastestmirror
Loading mirror speeds from cached hostfile
已安裝的軟體包
kubeadm.x86_64 1.21.0-0 @kubernetes
可安裝的軟體包
kubeadm.x86_64 1.6.0-0 kubernetes
kubeadm.x86_64 1.6.1-0 kubernetes
kubeadm.x86_64 1.6.2-0 kubernetes
......
kubeadm.x86_64 1.23.0-0 kubernetes
kubeadm.x86_64 1.23.1-0
kubeadm init: initialize the Kubernetes control plane on the master node k8scloude1
#Run the kubeadm initialization
#--image-repository registry.aliyuncs.com/google_containers: use the Aliyun image registry, otherwise some images cannot be pulled
#--kubernetes-version=v1.21.0: specify the Kubernetes version
#--pod-network-cidr=10.244.0.0/16: specify the pod network CIDR
#The init below fails: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 cannot be pulled because the coredns image was renamed to coredns/coredns; pulling coredns manually fixes it
#CoreDNS is an open-source DNS server written in Go
[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
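Before re-running the init, you can also list (and optionally pre-pull) exactly which images this kubeadm version expects, using the same flags as the init command; a sketch (note that pre-pulling would hit the same coredns naming problem described below):
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0
#kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0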
Manually pull the coredns image
[root@k8scloude1 ~]# docker pull coredns/coredns:1.8.0
1.8.0: Pulling from coredns/coredns
c6568d217a00: Pull complete
5984b6d55edf: Pull complete
Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Status: Downloaded newer image for coredns/coredns:1.8.0
docker.io/coredns/coredns:1.8.0
The coredns image needs to be re-tagged, otherwise kubeadm will not recognize it
[root@k8scloude1 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
#Remove the coredns/coredns:1.8.0 tag
[root@k8scloude1 ~]# docker rmi coredns/coredns:1.8.0
k8scloude1 now has the 7 required images; if even one image is missing, kubeadm init cannot succeed
[root@k8scloude1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.21.0 4d217480042e 9 months ago 126MB
registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.21.0 09708983cc37 9 months ago 120MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.21.0 62ad3129eca8 9 months ago 50.6MB
registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
registry.aliyuncs.com/google_containers/coredns/coredns v1.8.0 296a6d5035e2 14 months ago 42.5MB
registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 16 months ago 253MB
Run kubeadm init again
[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 65.002757 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nta3x4.3e54l2dqtmj9tlry
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry \
--discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
Create the directory and config file as the output instructs
[root@k8scloude1 ~]# mkdir -p $HOME/.kube
[root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
The master node is now visible
[root@k8scloude1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8scloude1 NotReady control-plane,master 5m54s v1.21.0
Next, join the other two worker nodes to the k8s cluster.
kubeadm init printed the line: kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8 . Running this command on the two worker nodes joins them to the cluster.
If the join token has been forgotten, the following command prints a fresh join command:
[root@k8scloude1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
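Existing tokens can also be listed, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate; a sketch using the pipeline documented for kubeadm:
kubeadm token list
#recompute the sha256 hash used by --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'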
Run the join command on the other two nodes
[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8scloude3 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the node status on k8scloude1: both worker nodes have joined the cluster
[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 NotReady control-plane,master 8m43s v1.21.0
k8scloude2 NotReady <none> 28s v1.21.0
k8scloude3 NotReady <none> 25s v1.21.0
After joining the cluster, the worker nodes have two additional images
[root@k8scloude2 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
[root@k8scloude3 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
The cluster now has one master node and two worker nodes, but all three are in the NotReady state. The reason is that no CNI network plugin is installed; a CNI plugin is required for inter-node pod networking. The most common CNI plugins are calico and flannel. The difference is that flannel does not support complex network policies, while calico does. Because Kubernetes network policies (NetworkPolicy) will be configured later, this article uses calico as the CNI plugin.
Now download the calico.yaml file from the official site:
Official site: https://projectcalico.docs.tigera.io/about/about-calico
Search for calico.yaml directly in the site's search box
and find the command for downloading calico.yaml.
Download the calico.yaml file
[root@k8scloude1 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 212k 100 212k 0 0 44222 0 0:00:04 0:00:04 --:--:-- 55704
[root@k8scloude1 ~]# ls
calico.yaml
Check which calico images are needed; these four images must be pulled on every node. k8scloude1 is shown as the example.
[root@k8scloude1 ~]# grep image calico.yaml
image: docker.io/calico/cni:v3.21.2
image: docker.io/calico/cni:v3.21.2
image: docker.io/calico/pod2daemon-flexvol:v3.21.2
image: docker.io/calico/node:v3.21.2
image: docker.io/calico/kube-controllers:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/cni:v3.21.2
v3.21.2: Pulling from calico/cni
Digest: sha256:ce618d26e7976c40958ea92d40666946d5c997cd2f084b6a794916dc9e28061b
Status: Image is up to date for calico/cni:v3.21.2
docker.io/calico/cni:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/pod2daemon-flexvol:v3.21.2
v3.21.2: Pulling from calico/pod2daemon-flexvol
Digest: sha256:b034c7c886e697735a5f24e52940d6d19e5f0cb5bf7caafd92ddbc7745cfd01e
Status: Image is up to date for calico/pod2daemon-flexvol:v3.21.2
docker.io/calico/pod2daemon-flexvol:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/node:v3.21.2
v3.21.2: Pulling from calico/node
Digest: sha256:6912fe45eb85f166de65e2c56937ffb58c935187a84e794fe21e06de6322a4d0
Status: Image is up to date for calico/node:v3.21.2
docker.io/calico/node:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/kube-controllers:v3.21.2
v3.21.2: Pulling from calico/kube-controllers
d6a693444ed1: Pull complete
a5399680e995: Pull complete
8f0eb4c2bcba: Pull complete
52fe18e41b06: Pull complete
2f8d3f9f1a40: Pull complete
bc94a7e3e934: Pull complete
55bf7cf53020: Pull complete
Digest: sha256:1f4fcdcd9d295342775977b574c3124530a4b8adf4782f3603a46272125f01bf
Status: Downloaded newer image for calico/kube-controllers:v3.21.2
docker.io/calico/kube-controllers:v3.21.2
#The four relevant images are as follows
[root@k8scloude1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
calico/node v3.21.2 f1bca4d4ced2 4 weeks ago 214MB
calico/pod2daemon-flexvol v3.21.2 7778dd57e506 5 weeks ago 21.3MB
calico/cni v3.21.2 4c5c32530391 5 weeks ago 239MB
calico/kube-controllers v3.21.2 b20652406028 5 weeks ago 132MB
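Rather than pulling the four images one by one on every node, a small loop can read them straight out of calico.yaml; a sketch (run it on each node where calico.yaml is present):
#pull every image referenced in calico.yaml
for img in $(awk '/image:/ {print $2}' calico.yaml | sort -u); do
  docker pull "$img"
done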
Edit the calico.yaml file: the CALICO_IPV4POOL_CIDR value must match the pod network CIDR given to kubeadm init, and the YAML indentation must line up, otherwise applying the file fails.
[root@k8scloude1 ~]# vim calico.yaml
[root@k8scloude1 ~]# cat calico.yaml | egrep "CALICO_IPV4POOL_CIDR|"10.244""
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
If this is not intuitive, see the screenshot of the edited calico.yaml file.
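For reference, the edit can also be scripted; the sketch below assumes the stock calico.yaml ships the block commented out as "# - name: CALICO_IPV4POOL_CIDR" / "#   value: "192.168.0.0/16"", which may differ between calico versions:
#uncomment the CIDR block and point it at the kubeadm pod network
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml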
Apply the calico.yaml file
[root@k8scloude1 ~]# kubectl apply -f calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
All three nodes are now in the Ready state
[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 53m v1.21.0
k8scloude2 Ready <none> 45m v1.21.0
k8scloude3 Ready <none> 45m v1.21.0
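To confirm the network plugin itself is healthy, you can also check that the calico and coredns pods in kube-system reach the Running state; a sketch:
kubectl get pods -n kube-system -o wide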
Check kubectl's shell-completion command
[root@k8scloude1 ~]# kubectl --help | grep bash
completion Output shell completion code for the specified shell (bash or zsh)
Add source <(kubectl completion bash) to /etc/profile and make it take effect
[root@k8scloude1 ~]# cat /etc/profile | head -2
# /etc/profile
source <(kubectl completion bash)
[root@k8scloude1 ~]# source /etc/profile
kubectl commands can now be completed with the Tab key
[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 59m v1.21.0
k8scloude2 Ready <none> 51m v1.21.0
k8scloude3 Ready <none> 51m v1.21.0
#Note: the bash-completion-2.1-6.el7.noarch package is required, otherwise commands cannot be auto-completed
[root@k8scloude1 ~]# rpm -qa | grep bash
bash-completion-2.1-6.el7.noarch
bash-4.2.46-30.el7.x86_64
bash-doc-4.2.46-30.el7.x86_64
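If bash-completion is not installed on your system, it can be added with yum before sourcing the profile; a sketch:
yum -y install bash-completion
source /etc/profile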
With that, the Kubernetes (k8s) cluster deployment is complete!