Building Your Own HA Application with Kubernetes Resource Locks

2022-06-29 06:00:52

Background

In the previous chapter, we took a deep dive into how leader election works in Kubernetes. Below we walk through an example of a highly available application built on the election mechanism that Kubernetes provides.

Before reading on, you should already be familiar with a few concepts:

  • leader election mechanism
  • RBAC
  • Pod runtime mechanism

Implementation

Code implementation

If all we do is use the lock that Kubernetes provides, the implementation only takes a few lines of code.

package main

import (
	"context"
	"flag"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func buildConfig(kubeconfig string) (*rest.Config, error) {
	if kubeconfig != "" {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		return cfg, nil
	}

	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	klog.InitFlags(nil)

	var kubeconfig string
	var leaseLockName string
	var leaseLockNamespace string
	var id string
	// Initialize the client-side command-line flags
	flag.StringVar(&kubeconfig, "kubeconfig", "", "absolute path to the kubeconfig file")
	flag.StringVar(&id, "id", "", "the holder identity name")
	flag.StringVar(&leaseLockName, "lease-lock-name", "", "the lease lock resource name")
	flag.StringVar(&leaseLockNamespace, "lease-lock-namespace", "", "the lease lock resource namespace")
	flag.Parse()

	if leaseLockName == "" {
		klog.Fatal("unable to get lease lock resource name (missing lease-lock-name flag).")
	}
	if leaseLockNamespace == "" {
		klog.Fatal("unable to get lease lock resource namespace (missing lease-lock-namespace flag).")
	}
	config, err := buildConfig(kubeconfig)
	if err != nil {
		klog.Fatal(err)
	}
	client := clientset.NewForConfigOrDie(config)

	run := func(ctx context.Context) {
		// The business logic goes here; since this is just an experiment, we simply print a message
		klog.Info("Controller loop...")

		for {
			fmt.Println("I am leader, I was working.")
			time.Sleep(time.Second * 5)
		}
	}

	// use a Go context so we can tell the leaderelection code when we
	// want to step down
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Watch for OS interrupt signals
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, os.Interrupt, syscall.SIGTERM)
	go func() {
		<-ch
		klog.Info("Received termination, signaling shutdown")
		cancel()
	}()

	// Create the resource lock
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      leaseLockName,
			Namespace: leaseLockNamespace,
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: id,
		},
	}

	// Start the leader election loop
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   60 * time.Second,
		RenewDeadline:   15 * time.Second,
		RetryPeriod:     5 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Business logic executed once this instance is elected leader
				run(ctx)
			},
			OnStoppedLeading: func() {
				// we can do cleanup here
				klog.Infof("leader lost: %s", id)
				os.Exit(0)
			},
			OnNewLeader: func(identity string) { // called whenever the client observes a new leader
				if identity == id {
					return
				}
				klog.Infof("new leader elected: %s", identity)
			},
		},
	})
}
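
A brief note on the three timing parameters in LeaderElectionConfig: LeaseDuration (60s here) is how long non-leader candidates wait after the last observed renewal before they try to take over the lock; RenewDeadline (15s) is how long the acting leader keeps retrying to renew the lease before giving up leadership; RetryPeriod (5s) is the interval between each client's retry attempts. client-go requires LeaseDuration to be greater than RenewDeadline, and RenewDeadline to be greater than RetryPeriod multiplied by an internal jitter factor (1.2).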

Note: this example defaults to in-cluster mode, but the Lease lock is not limited to it: a program deployed as a plain binary outside the cluster can join the election by passing the -kubeconfig flag shown above. client-go also ships other resource lock types (such as the endpoints-based locks, which have been deprecated in newer versions).
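
If you do want a different lock type, client-go also exposes a generic factory. Below is a minimal sketch that reuses the client, leaseLockNamespace, leaseLockName, and id variables from main above; which lock-type constants are still available depends on your client-go version, since the endpoints- and configmap-based locks have been progressively deprecated and removed in favor of leases.

// Sketch: build the lock through the generic factory instead of
// instantiating resourcelock.LeaseLock directly; the election loop
// (leaderelection.RunOrDie) stays exactly the same.
lock, err := resourcelock.New(
	resourcelock.LeasesResourceLock, // e.g. resourcelock.EndpointsLeasesResourceLock on older client-go
	leaseLockNamespace,
	leaseLockName,
	client.CoreV1(),
	client.CoordinationV1(),
	resourcelock.ResourceLockConfig{Identity: id},
)
if err != nil {
	klog.Fatal(err)
}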

Building the image

The image has already been built and pushed to Docker Hub as cylonchau/leaderelection:v0.0.2, so if you only want to study how the election works, you can skip this step.

FROM golang:alpine AS builder
LABEL maintainer="cylon"
WORKDIR /election
COPY . /election
ENV GOPROXY https://goproxy.cn,direct
RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o elector main.go

FROM alpine AS runner
WORKDIR /go/elector
COPY --from=builder /election/elector .
VOLUME ["/election"]
ENTRYPOINT ["./elector"]
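
Assuming the Dockerfile above sits next to main.go in the project root, the image can be built and pushed with something like the following (the tag simply matches the one referenced in the manifest below):

$ docker build -t cylonchau/leaderelection:v0.0.2 .
$ docker push cylonchau/leaderelection:v0.0.2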

Preparing the manifests

By default, a Pod running in Kubernetes has no permission to access in-cluster resources; in particular, the default service account cannot access the coordination API. We therefore need to create a dedicated ServiceAccount and bind the corresponding RBAC permissions to it. Once this ServiceAccount is set in the manifest, every Pod of the Deployment is allowed to operate on the coordination lock.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-leaderelection
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leaderelection
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leaderelection
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leaderelection
subjects:
  - kind: ServiceAccount
    name: sa-leaderelection
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: leaderelection
  name: leaderelection
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: leaderelection
  template:
    metadata:
      labels:
        app: leaderelection
    spec:
      containers:
        - name: elector
          image: cylonchau/leaderelection:v0.0.2
          imagePullPolicy: IfNotPresent
          command: ["./elector"]
          args:
            - "-id=$(POD_NAME)"
            - "-lease-lock-name=test"
            - "-lease-lock-namespace=default"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
      serviceAccountName: sa-leaderelection

Running in the cluster

After applying the manifests, once the Pods are up you can see that a Lease has been created:

$ kubectl get lease
NAME   HOLDER                            AGE
test   leaderelection-5644c5f84f-frs5n   1s


$ kubectl describe lease
Name:         test
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  coordination.k8s.io/v1
Kind:         Lease
Metadata:
  Creation Timestamp:  2022-06-28T16:39:45Z
  Managed Fields:
    API Version:  coordination.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:acquireTime:
        f:holderIdentity:
        f:leaseDurationSeconds:
        f:leaseTransitions:
        f:renewTime:
    Manager:         elector
    Operation:       Update
    Time:            2022-06-28T16:39:45Z
  Resource Version:  131693
  Self Link:         /apis/coordination.k8s.io/v1/namespaces/default/leases/test
  UID:               bef2b164-a117-44bd-bad3-3e651c94c97b
Spec:
  Acquire Time:            2022-06-28T16:39:45.931873Z
  Holder Identity:         leaderelection-5644c5f84f-frs5n
  Lease Duration Seconds:  60
  Lease Transitions:       0
  Renew Time:              2022-06-28T16:39:55.963537Z
Events:                    <none>

From the holder information you can find the corresponding Pod (the program sets the holder identity to the Pod's name); that Pod is the one actually doing the work.
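
To verify this, the holder identity recorded in the Lease can be resolved back to a Pod and its logs inspected; a quick sketch (the jsonpath expression simply extracts spec.holderIdentity):

$ kubectl get lease test -o jsonpath='{.spec.holderIdentity}'
leaderelection-5644c5f84f-frs5n

$ kubectl logs leaderelection-5644c5f84f-frs5n
...
I am leader, I was working.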

As the example above shows, this is a leader election scheme built on top of a Kubernetes cluster. It may not be the most elegant solution, but it is a simple one: you get a highly available application by leveraging the cluster itself, without deploying additional components or writing a large amount of code.