Deploying a Kubernetes (K8s) Cluster on openEuler

2022-12-30 12:00:30

Preface

I need to use openEuler for work. The official openEuler documentation for deploying a K8s cluster is fairly involved, and there is little related material online, so this article collects the steps for deploying a Kubernetes 1.20.2 cluster on openEuler 22.03, worked out through hands-on practice and testing.
This article is for learning and reference only; do not use it directly in a production environment.

1. Installation Prerequisites

Before starting, the machines for the Kubernetes cluster must meet the following requirements:

  • Operating system: openEuler 22.03
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access, for pulling images

1.1 Server Plan

Hostname            Role         IP address       Specs
openEuler.master01  Master node  192.168.123.208  2 CPU cores, 4 GB RAM, 40 GB disk
openEuler.node01    Node         192.168.123.167  2 CPU cores, 4 GB RAM, 40 GB disk
openEuler.node02    Node         192.168.123.213  2 CPU cores, 4 GB RAM, 40 GB disk

1.2 Server Environment Setup

  1. Set the hostnames
# run on master01
hostnamectl set-hostname openEuler.master01
# run on node01
hostnamectl set-hostname openEuler.node01
# run on node02
hostnamectl set-hostname openEuler.node02
  2. Configure host mappings
vim /etc/hosts

192.168.123.208 openEuler.master01
192.168.123.167 openEuler.node01
192.168.123.213 openEuler.node02
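Editing with vim works, but since the same three lines go on all three machines, a scripted, idempotent append is less error-prone. A sketch using the same IPs and hostnames as above:

```shell
# Append each mapping to /etc/hosts only if that exact line is not already there
for entry in "192.168.123.208 openEuler.master01" \
             "192.168.123.167 openEuler.node01" \
             "192.168.123.213 openEuler.node02"; do
  grep -qxF "$entry" /etc/hosts || echo "$entry" >> /etc/hosts
done
```

Running it a second time adds nothing, so it is safe to include in a provisioning script.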
  3. Disable swap
# temporarily disable the swap partition
swapoff -a
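Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, the usual approach is to comment out the swap entry in /etc/fstab; a sketch (keep a backup of the file first):

```shell
# Comment out any swap lines in /etc/fstab so swap stays off after reboot
if [ -f /etc/fstab ]; then
  cp /etc/fstab /etc/fstab.bak
  sed -i '/\sswap\s/s/^#*/#/' /etc/fstab
fi
```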
  4. Disable the firewall
# stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
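One preparation step not covered above but commonly required by kubeadm's preflight checks is enabling the bridge-netfilter and IP forwarding sysctls. This is an assumption based on standard kubeadm prerequisites rather than something verified on openEuler specifically, so treat it as a sketch:

```shell
# Persist the kernel settings Kubernetes networking typically relies on
mkdir -p /etc/sysctl.d
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# On the real host, load the bridge module and apply the settings:
#   modprobe br_netfilter && sysctl --system
```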

2. Kubernetes Cluster Installation

2.1 Master Node Installation

2.1.1 Install Docker

# install docker
dnf install -y docker
# enable and start docker
systemctl enable docker && systemctl start docker
# check the docker version
docker --version

2.1.2 Install and Configure the Kubernetes Components

# install kubeadm, kubelet and kubernetes-master
dnf install -y kubernetes-kubeadm kubernetes-kubelet kubernetes-master
# install conntrack (a k8s dependency)
dnf install -y conntrack
# enable kubelet at boot and start it
systemctl enable kubelet.service && systemctl start kubelet.service

# Initialize Kubernetes. Replace apiserver-advertise-address with the actual master node IP in your environment; this article uses 192.168.123.208
kubeadm init --apiserver-advertise-address=192.168.123.208 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.2 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
# Option reference:
# --apiserver-advertise-address: the IP address the apiserver advertises to other components; normally the master node's IP on the cluster-internal network (0.0.0.0 means all addresses available on the node)
# --image-repository: the image registry to pull from; set to the aliyun mirror to speed up downloads
# --kubernetes-version: the version of the Kubernetes components
# --pod-network-cidr: the Pod network address range, in CIDR notation
# --service-cidr: the Service network address range, in CIDR notation

When you see output like the following, the installation succeeded.

Save the kubeadm join information:

kubeadm join 192.168.123.208:6443 --token 9b3zg3.w9428fz00d993pwo --discovery-token-ca-cert-hash sha256:0287bffb9cc2c10f9ad53dcdc052462cae5ebef63cecb8d53ff689fb6e358b9e
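The long sha256 value is a hash of the cluster CA's public key. If the join command gets lost, the hash can be recomputed from the CA certificate using the standard recipe from the kubeadm documentation (the path in the usage comment is where kubeadm writes the certificate on the master):

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

If the token itself has expired (the default TTL is 24 hours), running `kubeadm token create --print-join-command` on the master prints a complete fresh join command.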

2.1.3 Configure Kubectl

# set the environment variable
vi /etc/profile

# append at the bottom of the file
export KUBECONFIG=/etc/kubernetes/admin.conf
# apply the change
source /etc/profile

# Check the master node status. It will report NotReady at this point; continue with step 2.1.4
kubectl get nodes
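Exporting KUBECONFIG in /etc/profile works; the alternative that `kubeadm init` itself suggests in its output is a per-user kubeconfig, which also works for non-root users. A sketch:

```shell
# Install an admin kubeconfig for the current user
setup_kubeconfig() {
  mkdir -p "$HOME/.kube"
  cp "$1" "$HOME/.kube/config"
  chown "$(id -u):$(id -g)" "$HOME/.kube/config"
}
# On the master: setup_kubeconfig /etc/kubernetes/admin.conf
```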

2.1.4 Install the Network Plugin

# With the containerd runtime, the CNI binaries are installed under /usr/libexec/cni by default,
# while flannel reads them from /opt/cni/bin
# copy them over
mkdir -p /opt/cni/bin
cp /usr/libexec/cni/* /opt/cni/bin/

# Pick whichever of the following two methods fits your situation
# 1. Installation method when the server cannot reach github
# Download the kube-flannel.yml file (over a proxy if necessary) and place it at /opt/yaml/kube-flannel.yml
# kube-flannel.yml file link: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f /opt/yaml/kube-flannel.yml
# Check the master node status; it should now report Ready
kubectl get nodes

# 2. Installation method when the server can reach github
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Check the master node status; it should report Ready, and the master node installation is complete
kubectl get nodes

Appendix: the kube-flannel.yml file as downloaded on 2022-12-29, which you can save and use manually

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.2 #for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.2 #for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.2 #for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

2.2 Node Installation (both nodes)

2.2.1 Install Docker

# install docker
dnf install -y docker
# enable and start docker
systemctl enable docker && systemctl start docker
# check the docker version
docker --version

2.2.2 Install and Configure the Kubernetes Components

# install kubeadm, kubelet and kubernetes-node
dnf install -y kubernetes-kubeadm kubernetes-kubelet kubernetes-node

# enable kubelet at boot and start it
systemctl enable kubelet.service && systemctl start kubelet.service

# With the containerd runtime, the CNI binaries are under /usr/libexec/cni by default
# copy them to the standard CNI path
mkdir -p /opt/cni/bin
cp /usr/libexec/cni/* /opt/cni/bin/

# Join the cluster using the token created on the master node; copy this command from the kubeadm init output in step 2.1.2
kubeadm join 192.168.123.208:6443 --token 9b3zg3.w9428fz00d993pwo --discovery-token-ca-cert-hash sha256:0287bffb9cc2c10f9ad53dcdc052462cae5ebef63cecb8d53ff689fb6e358b9e

When you see output like the following, the installation succeeded:

Go back to the master node and check the status; after a short wait all three nodes will report Ready

# check the status on the master node; after a short wait all three nodes report Ready
kubectl get nodes

3. Testing the Kubernetes Cluster

  1. Create a pod in the cluster and verify that it runs normally
# run on the master node
# create an nginx deployment
kubectl create deployment nginx --image=nginx
# expose it on an external port
kubectl expose deployment nginx --port=80 --type=NodePort
# check whether nginx is running
kubectl get pod,svc

# Nginx is reachable from every node
192.168.123.208:30116
192.168.123.167:30116
192.168.123.213:30116
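The port 30116 comes from this particular run: --type=NodePort allocates a port at random from the 30000-32767 range, so a script should read it back rather than hard-code it. A sketch (the service name nginx matches the deployment created above):

```shell
# Check that a port falls in the default NodePort range (30000-32767)
is_nodeport() { [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; }

# On the master, read the allocated port back from the service:
#   port=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
#   is_nodeport "$port" && curl -s "http://192.168.123.208:$port" | head -n 5
```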

  2. Scale the nginx replicas
# scale to 3 replicas
kubectl scale deployment nginx --replicas=3
# check the pod status
kubectl get pods

If it looks like the figure below, scaling succeeded.


If you found this article helpful, feel free to comment, share and like it~
For more interesting and practical content, follow the WeChat official account 「嵐山茶館」 (Lanshan Teahouse).