I have prepared three virtual machines here: one will be the master and two will be nodes, all running Ubuntu 20.04. Unless explicitly noted otherwise, every step below must be performed on all three machines.
Until now the container runtime we used was mostly Docker, but Docker does not implement the Kubernetes CRI; a kubelet component called dockershim did the translation. That component was removed in Kubernetes v1.24, so here we choose containerd as the container runtime. Note that containerd can still pull and run Docker images. If you insist on using Docker, the dockershim that Kubernetes dropped lives on as cri-dockerd, maintained on the Docker side, so you can try that, but it is not recommended since it just wastes resources.
apt install -y containerd
Generate the default configuration:
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
Set the systemd cgroup driver:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
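A quick check that the sed actually flipped the flag (the key should now read true):
grep SystemdCgroup /etc/containerd/config.toml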
Configure registry mirrors and change the pause image.
For reasons everyone knows, the default registries are hard to reach from mainland China. I use the NetEase Docker mirror here; you can use others, e.g. the Aliyun mirror.
sed -i 's|config_path = ""|config_path = "/etc/containerd/certs.d/"|g' /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io
cat >/etc/containerd/certs.d/docker.io/hosts.toml <<EOF
server = "https://docker.io"
[host."http://hub-mirror.c.163.com"]
capabilities = ["pull","resolve"]
[host."https://docker.mirrors.ustc.edu.cn"]
capabilities = ["pull","resolve"]
[host."https://registry-1.docker.io"]
capabilities = ["pull","resolve","push"]
EOF
Edit the containerd config and point the sandbox (pause) image at the Aliyun mirror:
vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
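If you prefer a non-interactive edit, a sed that matches the key regardless of the current default value should work just as well:
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"|' /etc/containerd/config.toml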
Start containerd:
systemctl daemon-reload
systemctl enable containerd
systemctl start containerd
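Sanity-check that the daemon is up and the CLI can talk to it (ctr ships with containerd):
systemctl status containerd --no-pager
ctr version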
Here we test with nerdctl. nerdctl is an enhanced command-line tool provided by the containerd project: https://github.com/containerd/nerdctl
Download it:
wget https://ghproxy.com/https://github.com/containerd/nerdctl/releases/download/v0.23.0/nerdctl-0.23.0-linux-amd64.tar.gz
tar xzvf nerdctl-0.23.0-linux-amd64.tar.gz -C /usr/local/bin
nerdctl --debug pull busybox
DEBU[0000] verification process skipped
DEBU[0000] Found hosts dir "/etc/containerd/certs.d"
DEBU[0000] Ignoring hosts dir "/etc/docker/certs.d" error="stat /etc/docker/certs.d: no such file or directory"
DEBU[0000] The image will be unpacked for platform {"amd64" "linux" "" [] ""}, snapshotter "overlayfs".
DEBU[0000] fetching image="docker.io/library/busybox:latest"
DEBU[0000] loading host directory dir=/etc/containerd/certs.d/docker.io
DEBU[0000] resolving host=hub-mirror.c.163.com
DEBU[0000] do request host=hub-mirror.c.163.com request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/1.6.0+unknown request.method=HEAD url="http://hub-mirror.c.163.com/v2/library/busybox/manifests/latest?ns=docker.io"
Seeing host=hub-mirror.c.163.com in the output means the mirror configuration works.
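As a further smoke test you can run a container. Note that a plain nerdctl run needs the CNI plugins for networking, which we have not installed here, so pass --network none:
nerdctl run --rm --network none busybox echo ok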
Turn off the firewall.
# Check the firewall status
ufw status
# If it is active, disable it
ufw disable
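Disabling ufw is the lazy route. If you would rather keep it on, you can open the well-known Kubernetes ports instead; a sketch for this lab setup (flannel uses VXLAN on 8472/udp):
# control plane
ufw allow 6443/tcp       # kube-apiserver
ufw allow 2379:2380/tcp  # etcd
ufw allow 10250/tcp      # kubelet
ufw allow 10257/tcp      # kube-controller-manager
ufw allow 10259/tcp      # kube-scheduler
# workers
ufw allow 10250/tcp        # kubelet
ufw allow 30000:32767/tcp  # NodePort services
# all machines
ufw allow 8472/udp         # flannel VXLAN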
Synchronize the system time:
apt install -y ntpdate
ntpdate time.windows.com
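ntpdate is a one-shot sync. On Ubuntu 20.04 you can instead let systemd-timesyncd keep the clock in sync continuously:
timedatectl set-ntp true
timedatectl status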
Disable swap:
# Permanent; takes effect after a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Temporary; does not survive a reboot
swapoff -a
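Verify that swap is really gone (the Swap line should show all zeros):
free -h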
Configure the kernel parameters Kubernetes needs:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Load the br_netfilter module
modprobe br_netfilter
# Check that it is loaded
lsmod | grep br_netfilter
# Apply the sysctl settings
sysctl --system
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
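Confirm the values took effect:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward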
In Kubernetes, a Service can be proxied in two modes: one based on iptables and one based on ipvs. ipvs performs better than iptables, but using it requires manually loading the ipvs kernel modules (a note on actually switching kube-proxy to ipvs follows after the commands below).
apt install -y ipset ipvsadm
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and check that the modules loaded. Note that on kernels 4.19 and later (Ubuntu 20.04 ships 5.4), nf_conntrack_ipv4 was merged into nf_conntrack, so grep for the latter:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
# Re-check at any time with
lsmod | grep -e ip_vs -e nf_conntrack
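Two caveats. First, nothing on Ubuntu runs /etc/sysconfig/modules/*.modules at boot (that is a RHEL convention), so to make the modules persist across reboots also register them with systemd-modules-load:
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
Second, loading the modules only makes ipvs available; kube-proxy still defaults to iptables. One way to switch it once the cluster is up (see kubeadm init below):
# set mode: "ipvs" in the kube-proxy configmap
kubectl -n kube-system edit configmap kube-proxy
# restart the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# ipvsadm -Ln should then list the service virtual servers
ipvsadm -Ln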
Set the hostname:
hostnamectl set-hostname <hostname>
The three machines respectively:
# 192.168.56.100
hostnamectl set-hostname k8s-master
# 192.168.56.101
hostnamectl set-hostname k8s-node1
# 192.168.56.102
hostnamectl set-hostname k8s-node2
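So the machines can resolve each other by name, it helps to append the three entries to /etc/hosts on all three machines (using the IPs above):
cat >> /etc/hosts <<EOF
192.168.56.100 k8s-master
192.168.56.101 k8s-node1
192.168.56.102 k8s-node2
EOF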
Install the HTTPS transport tools:
apt install -y apt-transport-https ca-certificates curl
Download the Aliyun GPG key.
Why Aliyun's instead of the official Kubernetes one? You know why.
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:
apt update
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
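apt install without a version pulls whatever is newest in the Aliyun repo, which may be newer than the v1.25.1 this walkthrough uses. To pin an exact version instead (assuming the repo still carries it):
apt install -y kubelet=1.25.1-00 kubeadm=1.25.1-00 kubectl=1.25.1-00
apt-mark hold kubelet kubeadm kubectl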
List the images kubeadm will need:
kubeadm config images list
registry.k8s.io/kube-apiserver:v1.25.1
registry.k8s.io/kube-controller-manager:v1.25.1
registry.k8s.io/kube-scheduler:v1.25.1
registry.k8s.io/kube-proxy:v1.25.1
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
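Since registry.k8s.io is often unreachable, you can pre-pull everything from the Aliyun mirror before running init:
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.25.1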
Initialize the control plane (run this on the master only):
kubeadm init \
--apiserver-advertise-address=192.168.56.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.1 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
Output like the following means it succeeded:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.100:6443 --token 0ce9pe.e2jmgubd88d94xad \
--discovery-token-ca-cert-hash sha256:f87d5a4f64a5c7f29fa86a2d32f4af976aef960eb0b23d443fef943f17726f6c
Run the commands from the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Following the output, run the join command on both nodes to join them to the cluster:
kubeadm join 192.168.56.100:6443 --token 0ce9pe.e2jmgubd88d94xad \
    --discovery-token-ca-cert-hash sha256:f87d5a4f64a5c7f29fa86a2d32f4af976aef960eb0b23d443fef943f17726f6c
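The bootstrap token expires after 24 hours by default. If the join fails because of that, generate a fresh, complete join command on the master:
kubeadm token create --print-join-command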
Install the flannel CNI plugin (on the master):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This URL may be unreachable from some networks; if the apply fails, download kube-flannel.yml by other means and then apply the local file.
Check that all three nodes are Ready:
kubectl get node
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   1h    v1.25.1
k8s-node1    Ready    <none>          1h    v1.25.1
k8s-node2    Ready    <none>          1h    v1.25.1
kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-c676cc86f-dqs4c 1/1 Running 0 1h
coredns-c676cc86f-wkclg 1/1 Running 0 1h
etcd-k8s-master 1/1 Running 0 1h
kube-apiserver-k8s-master 1/1 Running 0 1h
kube-controller-manager-k8s-master 1/1 Running 0 1h
kube-proxy-6rwfl 1/1 Running 0 1h
kube-proxy-8tv7x 1/1 Running 0 1h
kube-proxy-dd92k 1/1 Running 0 1h
kube-scheduler-k8s-master 1/1 Running 0 1h
kubectl get pod -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-42k74 1/1 Running 0 1h
kube-flannel-ds-l62tq 1/1 Running 0 1h
kube-flannel-ds-qfh95 1/1 Running 0 1h
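As a final smoke test, deploy something and expose it through a NodePort:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc
# then curl http://<any-node-ip>:<nodeport>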