Redis is an open-source, BSD-licensed, non-relational (NoSQL) database written in C, released in 2009 by the Italian developer Salvatore Sanfilippo. It is an in-memory store and currently one of the most popular key-value databases; it essentially makes memory available as a shared service over the network. Memcached offers similar functionality, but compared with Memcached, Redis adds easy scalability, high performance, and data persistence. Its main use cases are:
- Session sharing: commonly used in web clusters to share sessions across multiple Tomcat or PHP web servers.
- Message queue: log caching for ELK, and publish/subscribe for some business systems.
- Counters: access leaderboards, product view counts, and other count-related statistics.
- Cache: results of data queries, e-commerce product details, news content, and so on.
Unlike Memcached, Redis supports persistence: in-memory data can be saved to disk, and after the Redis service or the server restarts, the data can be restored from the backup file into memory and used again.
Because the Redis data (mainly the Redis snapshots) lives on the storage system, it survives even if the redis pod dies: with a single-node Redis deployed on k8s, when the redis pod crashes, k8s recreates the pod, mounts the original pvc into it, and Redis loads the snapshot, so a pod failure does not lose data.
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# ll
total 1784
drwxr-xr-x 2 root root 4096 Jun 5 15:22 ./
drwxr-xr-x 11 root root 4096 Aug 9 2022 ../
-rw-r--r-- 1 root root 717 Jun 5 15:20 Dockerfile
-rwxr-xr-x 1 root root 235 Jun 5 15:21 build-command.sh*
-rw-r--r-- 1 root root 1740967 Jun 22 2021 redis-4.0.14.tar.gz
-rw-r--r-- 1 root root 58783 Jun 22 2021 redis.conf
-rwxr-xr-x 1 root root 84 Jun 5 15:21 run_redis.sh*
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat Dockerfile
#Redis Image
# Import the custom CentOS base image
FROM harbor.ik8s.cc/baseimages/magedu-centos-base:7.9.2009
# Add the redis source tarball to /usr/local/src
ADD redis-4.0.14.tar.gz /usr/local/src
# Compile and install redis
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server /usr/sbin/ && mkdir -pv /data/redis-data
# Add the redis configuration file
ADD redis.conf /usr/local/redis/redis.conf
# Expose the redis service port
EXPOSE 6379
#ADD run_redis.sh /usr/local/redis/run_redis.sh
#CMD ["/usr/local/redis/run_redis.sh"]
# Add the startup script
ADD run_redis.sh /usr/local/redis/entrypoint.sh
# Start redis
ENTRYPOINT ["/usr/local/redis/entrypoint.sh"]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat build-command.sh
#!/bin/bash
TAG=$1
#docker build -t harbor.ik8s.cc/magedu/redis:${TAG} .
#sleep 3
#docker push harbor.ik8s.cc/magedu/redis:${TAG}
nerdctl build -t harbor.ik8s.cc/magedu/redis:${TAG} .
nerdctl push harbor.ik8s.cc/magedu/redis:${TAG}
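The script takes the image tag as its only argument; a typical invocation (the tag value here is just an example) is:

bash build-command.sh v4.0.14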
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat run_redis.sh
#!/bin/bash
# Start redis with the configuration file baked into the image
/usr/sbin/redis-server /usr/local/redis/redis.conf
# redis.conf sets daemonize yes, so keep a foreground process (PID 1) alive inside the pod with tail -f
tail -f /etc/hosts
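As an aside, a minimal alternative sketch (not what this image does) keeps redis itself in the foreground, so the container exits, and k8s restarts it, whenever redis dies:

#!/bin/bash
# assumes daemonize is overridden to no so redis-server stays in the foreground as PID 1
exec /usr/sbin/redis-server /usr/local/redis/redis.conf --daemonize no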
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# grep -v '^#\|^$' redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis#
If the redis image can be run as a container, and a remote host can connect to it and read and write data, then the image we built is fine;
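A minimal sketch of such a check (the host value is a placeholder; the password is the requirepass value from redis.conf above):

# run the image and exercise it from a remote host
nerdctl run -d -p 6379:6379 harbor.ik8s.cc/magedu/redis:v4.0.14
redis-cli -h <container-host-ip> -p 6379 -a 123456 set testkey testvalue
redis-cli -h <container-host-ip> -p 6379 -a 123456 get testkey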
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis-datadir-1
mkdir: created directory '/data/k8sdata/magedu/redis-datadir-1'
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
/data/k8sdata/mysite *(rw,no_root_squash)
/data/k8sdata/magedu/images *(rw,no_root_squash)
/data/k8sdata/magedu/static *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis-datadir-1 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 192.168.0.42
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
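A sketch of applying and verifying the pair (file names as listed above):

kubectl apply -f redis-persistentvolume.yaml -f redis-persistentvolumeclaim.yaml
kubectl get pv redis-datadir-pv-1
kubectl get pvc redis-datadir-pvc-1 -n magedu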
root@k8s-master01:~/k8s-data/yaml/magedu/redis# cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.ik8s.cc/magedu/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/data/redis-data/"
              name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: magedu
spec:
  type: NodePort
  ports:
    - name: http
      port: 6379
      targetPort: 6379
      nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
root@k8s-master01:~/k8s-data/yaml/magedu/redis#
The error above says the node port is out of range, because the allowed range was fixed when the k8s cluster was initialized.
Edit /etc/systemd/system/kube-apiserver.service and change the value of its --service-node-port-range option; the other two master nodes need the same change.
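A sketch of the relevant unit-file fragment (the binary path and the chosen range are assumptions; the range just has to cover nodePort 36379):

# /etc/systemd/system/kube-apiserver.service (fragment; all other flags unchanged)
ExecStart=/usr/local/bin/kube-apiserver \
  --service-node-port-range=30000-42767 \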
root@k8s-master01:~# systemctl daemon-reload
root@k8s-master01:~# systemctl restart kube-apiserver.service
root@k8s-master01:~#
Deploy redis again
root@harbor:~# ll /data/k8sdata/magedu/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Jun 5 16:29 ./
drwxr-xr-x 8 root root 4096 Jun 5 15:53 ../
-rw-r--r-- 1 root root 116 Jun 5 16:29 dump.rdb
root@harbor:~#
As the listing shows, after we wrote data into redis, redis noticed the key changes within the configured save window and took a snapshot; because the redis data directory is an NFS share mounted through the pv/pvc, the snapshot file is visible in the corresponding directory on the NFS server.
The redis pod rebuilt by k8s still holds the previous pod's data, which shows that k8s mounted the previous pod's pvc into the rebuilt pod.
A redis cluster is somewhat more involved than a standalone redis. Here too we keep the redis cluster data on the storage system through pv/pvc, but unlike a standalone instance, a redis cluster runs CRC16 over every key it stores and takes the result modulo 16384; the resulting number is the hash slot the key lives in. The 16384 slots are divided evenly among all the master nodes of the cluster, so each master holds a share of the cluster's data. That creates a problem: if a master goes down, the data in its slots becomes unavailable. To avoid a master being a single point of failure, we give each master a dedicated slave that backs it up; if the master goes down, its slave takes over and keeps serving the cluster, which makes the redis cluster masters highly available. As shown in the figure above, we use a 3-master/3-slave redis cluster: redis0, 1 and 2 are the masters, and redis3, 4 and 5 are the slaves backing up the data of redis0, 1 and 2 respectively. All six pods store their data on the storage system through the k8s cluster's pv/pvc. The slot mapping itself can be inspected directly, as the sketch below shows.
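Once the cluster built below is running, a quick check of which slot a key hashes to (the key name is just an example) looks like:

# HASH_SLOT = CRC16(key) mod 16384
kubectl exec -it redis-0 -n magedu -- redis-cli cluster keyslot user:1001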
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis{0,1,2,3,4,5}
mkdir: created directory '/data/k8sdata/magedu/redis0'
mkdir: created directory '/data/k8sdata/magedu/redis1'
mkdir: created directory '/data/k8sdata/magedu/redis2'
mkdir: created directory '/data/k8sdata/magedu/redis3'
mkdir: created directory '/data/k8sdata/magedu/redis4'
mkdir: created directory '/data/k8sdata/magedu/redis5'
root@harbor:~# tail -6 /etc/exports
/data/k8sdata/magedu/redis0 *(rw,no_root_squash)
/data/k8sdata/magedu/redis1 *(rw,no_root_squash)
/data/k8sdata/magedu/redis2 *(rw,no_root_squash)
/data/k8sdata/magedu/redis3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis4 *(rw,no_root_squash)
/data/k8sdata/magedu/redis5 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [18]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis0".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [19]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis1".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [20]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis2".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [21]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis3".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [22]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis4".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [23]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis5".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis5
exporting *:/data/k8sdata/magedu/redis4
exporting *:/data/k8sdata/magedu/redis3
exporting *:/data/k8sdata/magedu/redis2
exporting *:/data/k8sdata/magedu/redis1
exporting *:/data/k8sdata/magedu/redis0
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat pv/redis-cluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis5
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl create cm redis-conf --from-file=./redis.conf -n magedu
configmap/redis-conf created
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get cm -n magedu
NAME DATA AGE
kube-root-ca.crt 1 35h
redis-conf 1 6s
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl describe cm redis-conf -n magedu
Name: redis-conf
Namespace: magedu
Labels: <none>
Annotations: <none>
Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
BinaryData
====
Events: <none>
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: magedu
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
    - name: redis
      port: 6379
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: magedu
  labels:
    app: redis
spec:
  type: NodePort
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
    - name: redis-access
      protocol: TCP
      port: 6379
      targetPort: 6379
      nodePort: 36379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: magedu
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - redis
                topologyKey: kubernetes.io/hostname
      containers:
        - name: redis
          image: redis:4.0.14
          command:
            - "redis-server"
          args:
            - "/etc/redis/redis.conf"
            - "--protected-mode"
            - "no"
          resources:
            requests:
              cpu: "500m"
              memory: "500Mi"
          ports:
            - containerPort: 6379
              name: redis
              protocol: TCP
            - containerPort: 16379
              name: cluster
              protocol: TCP
          volumeMounts:
            - name: conf
              mountPath: /etc/redis
            - name: data
              mountPath: /var/lib/redis
      volumes:
        - name: conf
          configMap:
            name: redis-conf
            items:
              - key: redis.conf
                path: redis.conf
  volumeClaimTemplates:
    - metadata:
        name: data
        namespace: magedu
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
The manifest above uses a StatefulSet controller to create 6 pod replicas. Each replica uses the configuration file from the configmap as its redis configuration, and a volumeClaimTemplate tells each pod to automatically bind a pv and create a pvc in the magedu namespace; as long as the k8s cluster has free pvs, every pod creates a pvc from the template in that namespace. We could also provision pvcs through a storage class or create them in advance, but with a StatefulSet controller the volumeClaimTemplate is the usual way to have pods create their own pvcs (provided enough pvs are available).
Apply the manifest to deploy the redis cluster
With a StatefulSet controller, the pod name is <controller name>-<ordinal>, and a pvc created from the template is named <template name>-<pod name>, i.e. <template name>-<controller name>-<ordinal>; the names expected here are sketched below.
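Following that scheme, the claims this deployment should produce (template data, controller redis, 6 replicas) can be listed with:

kubectl get pvc -n magedu
# expected: data-redis-0 through data-redis-5, one bound claim per pod redis-0 .. redis-5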
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n magedu bash
If you don't see a command prompt, try pressing enter.
root@ubuntu1804:/#
root@ubuntu1804:/# apt update
# install the necessary tools
root@ubuntu1804:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools
# upgrade pip
root@ubuntu1804:/# pip install --upgrade pip
# install redis-trib, the redis cluster initialization tool, with pip
root@ubuntu1804:/# pip install redis-trib==0.5.1
root@ubuntu1804:/#
root@ubuntu1804:/# redis-trib.py create \
`dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
`dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
`dig +short redis-2.redis.magedu.svc.cluster.local`:6379
Pods created by a StatefulSet on k8s keep fixed names, so to initialize the redis cluster we can simply resolve the redis pod names, which always point at the current pod IP addresses. When initializing a redis cluster on traditional virtual or physical machines we could use IP addresses directly, because there the addresses are fixed; on k8s a pod's IP address is not fixed, hence the stable DNS names above.
root@ubuntu1804:/# redis-trib.py replicate \
--master-addr `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
--slave-addr `dig +short redis-3.redis.magedu.svc.cluster.local`:6379
root@ubuntu1804:/# redis-trib.py replicate \
--master-addr `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
--slave-addr `dig +short redis-4.redis.magedu.svc.cluster.local`:6379
root@ubuntu1804:/# redis-trib.py replicate \
--master-addr `dig +short redis-2.redis.magedu.svc.cluster.local`:6379 \
--slave-addr `dig +short redis-5.redis.magedu.svc.cluster.local`:6379
The cluster node list records each master's node id, and every slave entry is followed by the id of its master, indicating which master's data that slave backs up; this can be checked as below.
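A sketch of that check from any cluster pod:

kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes
# every line starts with a node id; slave lines additionally carry the node id of the master they replicate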
127.0.0.1:6379> info
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:165c932261a105d7
redis_mode:cluster
os:Linux 5.15.0-73-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:1
run_id:aa8ef00d843b4f622374dbb643cf27cdbd4d5ba3
tcp_port:6379
uptime_in_seconds:4303
uptime_in_days:0
hz:10
lru_clock:8272053
executable:/data/redis-server
config_file:/etc/redis/redis.conf
# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:2642336
used_memory_human:2.52M
used_memory_rss:5353472
used_memory_rss_human:5.11M
used_memory_peak:2682248
used_memory_peak_human:2.56M
used_memory_peak_perc:98.51%
used_memory_overhead:2559936
used_memory_startup:1444856
used_memory_dataset:82400
used_memory_dataset_perc:6.88%
total_system_memory:16740012032
total_system_memory_human:15.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:2.03
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1685992849
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:245760
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
aof_current_size:0
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats
total_connections_received:7
total_commands_processed:17223
instantaneous_ops_per_sec:1
total_net_input_bytes:1530962
total_net_output_bytes:108793
instantaneous_input_kbps:0.04
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:1
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:853
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
# Replication
role:master
connected_slaves:1
slave0:ip=10.200.155.175,port=6379,state=online,offset=1120,lag=1
master_replid:60381a28fee40b44c409e53eeef49215a9d3b0ff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1120
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1120
# CPU
used_cpu_sys:12.50
used_cpu_user:7.51
used_cpu_sys_children:0.01
used_cpu_user_children:0.00
# Cluster
cluster_enabled:1
# Keyspace
127.0.0.1:6379>
When connecting to a redis cluster master by hand to read and write data, there is one catch: if the key we write, after CRC16 modulo 16384, maps to a slot that does not live on the current node, redis replies with a redirect telling us where the key should be written. The screenshot above shows the redis cluster reading and writing data normally; a sketch of the redirect behaviour follows.
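The redirect can be reproduced with redis-cli (the key name is an example; the reported slot and address vary):

# a key whose slot lives elsewhere yields a redirect such as: (error) MOVED <slot> <ip>:6379
kubectl exec -it redis-0 -n magedu -- redis-cli set somekey somevalue
# with -c, redis-cli follows MOVED redirects automatically
kubectl exec -it redis-0 -n magedu -- redis-cli -c set somekey somevalue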
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis-client-test.py
#!/usr/bin/env python
#coding:utf-8
#Author:Zhang ShiJie
#python 2.7/3.8
#pip install redis-py-cluster
import sys, time
from rediscluster import RedisCluster

def init_redis():
    startup_nodes = [
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
    ]
    try:
        # supply the password here if one is set on the cluster
        conn = RedisCluster(startup_nodes=startup_nodes,
                            decode_responses=True, password='')
        print('connected successfully', conn)
        #conn.set("key-cluster","value-cluster")
        for i in range(100):
            conn.set("key%s" % i, "value%s" % i)
            time.sleep(0.1)
            data = conn.get("key%s" % i)
            print(data)
        #return conn
    except Exception as e:
        print("connect error ", str(e))
        sys.exit(1)

init_redis()
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
Run the script to write data into the redis cluster
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# python redis-client-test.py
Traceback (most recent call last):
File "/root/k8s-data/yaml/magedu/redis-cluster/redis-client-test.py", line 8, in <module>
from rediscluster import RedisCluster
ModuleNotFoundError: No module named 'rediscluster'
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
The error says the rediscluster module cannot be found; the fix is to install the redis-py-cluster module with pip;
Install the redis-py-cluster module
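The module is published on PyPI as redis-py-cluster:

pip install redis-py-cluster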
Run the script again to read and write data against the redis cluster
Connect to the redis pods: was the data written correctly?
The screenshots above show that each of the three redis cluster master pods holds a share of the keys rather than all of them, which confirms that the python script wrote the data into the redis cluster correctly.
Can the data be read from a slave node?
The screenshot above shows that the slave node does not serve the reads;
Read the data from that slave's master instead
These checks show that, by default, only the masters of a redis cluster serve reads and writes; a slave only keeps a backup of its master's data and redirects client requests, as the sketch below illustrates.
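For completeness: a cluster replica will serve reads if the client explicitly opts in with the READONLY command on its connection; this is standard redis cluster behaviour rather than anything configured above. A sketch:

kubectl exec -it redis-3 -n magedu -- redis-cli
# inside the prompt:
#   readonly        <- opt this connection in to replica reads
#   get key1        <- now answered locally instead of redirected to the master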
root@k8s-node01:~# nerdctl tag redis:4.0.14 harbor.ik8s.cc/redis-cluster/redis:4.0.14
root@k8s-node01:~# nerdctl push harbor.ik8s.cc/redis-cluster/redis:4.0.14
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625)
WARN[0000] skipping verifying HTTPS certs for "harbor.ik8s.cc"
index-sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:5bd4fe08813b057df2ae55003a75c39d80a4aea9f1a0fbc0fbd7024edf555786: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:191c4017dcdd3370f871a4c6e7e1d55c7d9abed2bebf3005fb3e7d12161262b8: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.4 s total: 8.5 Ki (6.1 KiB/s)
root@k8s-node01:~#
Switching the image to the copy in the local harbor and adjusting the pull policy makes it convenient for us to test redis cluster high availability; a sketch of the change follows.
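A sketch of the modified container spec (the pull-policy value is an assumption; IfNotPresent lets the pod restart from the locally cached image even when harbor is down):

# fragment of redis.yaml after the change
      containers:
        - name: redis
          image: harbor.ik8s.cc/redis-cluster/redis:4.0.14
          imagePullPolicy: IfNotPresent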
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl apply -f redis.yaml
service/redis unchanged
service/redis-access unchanged
statefulset.apps/redis configured
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
This amounts to an update of the redis cluster; the cluster relationships survive it, because the cluster state is kept on the remote storage.
Unlike before, redis-0 is now a slave and redis-3 has become a master. The screenshots above also show that after a redis cluster pod is rebuilt on k8s (and its IP address changes), the cluster topology does not change; each master/slave pair only ever swaps roles between its own two pods, which is exactly the high availability we want.
root@harbor:~# systemctl stop harbor
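The failover can then be exercised by hand; a sketch (redis-3 is a master at this point):

# delete a master pod to simulate a master failure
kubectl delete pod redis-3 -n magedu
# watch the former slave get promoted and, after the pod is rebuilt, its rejoin as a slave
kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes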
As shown, after we delete redis-3 (equivalent to its master going down), the corresponding slave is promoted to master;
after redis-3 is deleted again, the pod is rebuilt normally and reaches the running state;
and once redis-3 is back, it automatically rejoins the cluster as a slave of redis-0;