This article is based on Docker 20.10.12 and CentOS 7.4.
Server OS version | Calico version | Docker version | Kubernetes (k8s) cluster version | CPU architecture |
---|---|---|---|---|
CentOS Linux release 7.4.1708 (Core) | v2.6.12 | Docker version 20.10.12 | v1.21.9 | x86_64 |
etcd cluster architecture: etcd1 is the leader; etcd2 and etcd3 are followers.
Server | OS version | CPU architecture | Processes | Role |
---|---|---|---|---|
etcd1/192.168.110.133 | CentOS Linux release 7.4.1708 (Core) | x86_64 | etcd | leader |
etcd2/192.168.110.131 | CentOS Linux release 7.4.1708 (Core) | x86_64 | etcd | follower |
etcd3/192.168.110.132 | CentOS Linux release 7.4.1708 (Core) | x86_64 | etcd | follower |
Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.
Server | OS version | CPU architecture | Processes | Role |
---|---|---|---|---|
k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico | k8s master node |
k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node |
k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node |
Communication between containers is a crucial part of any Kubernetes cluster. To connect containers across hosts, a CNI network plugin is required. This article introduces the concept of CNI network plugins, compares several common ones, and walks through using Calico to interconnect Docker containers across hosts.
Calico stores its state in etcd, so an etcd cluster is required. For installing and deploying an etcd cluster, see the blog post 《Kubernetes後臺資料庫etcd:安裝部署etcd叢集,資料備份與恢復》.
Inspecting Calico in a Kubernetes (k8s) environment assumes you already have a working Kubernetes cluster; for installation and deployment, see the blog post 《Centos7 安裝部署Kubernetes(k8s)叢集》: https://www.cnblogs.com/renshengdezheli/p/16686769.html.
CNI (Container Network Interface) is an open-source project maintained by the Linux Foundation that provides network connectivity for containers. In Kubernetes, CNI network plugins provide networking for Pods.
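To make the concept concrete: a CNI plugin is configured by a JSON file placed on each node, conventionally under /etc/cni/net.d/. The sketch below uses the reference bridge plugin from the CNI project, with a made-up file name and subnet purely for illustration; Calico installs its own, richer configuration there:

```bash
# Illustrative CNI config (hypothetical file name and subnet); Calico's real
# config is generated by its installer and looks different.
cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
```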
The mainstream CNI network plugins today include Flannel, Calico, Weave Net, and Canal. Let's compare them.
CNI plugin | Pros | Cons | Network policy support |
---|---|---|---|
Flannel | Simple to deploy, good performance | Higher network-layer latency | No |
Calico | Excellent performance, BGP-based routing, supports network policy | More complex to configure | Yes |
Weave Net | Feature-rich, cross-platform | Lower performance, prone to network deadlocks | Yes |
Canal | Combines the strengths of Flannel and Calico; supports multiple network modes for different needs | Relatively tedious to deploy and configure | Yes |
In short, each CNI plugin has its own strengths and limitations; choose according to your actual requirements.
Calico is a CNI network plugin based on IP routing that uses the BGP protocol to interconnect container networks efficiently. In Calico, every container gets a unique IP address that is reachable at the network layer, and packets are routed directly to the destination container.
Calico manages the container network through routing tables. Every host runs a Calico agent, which watches the Kubernetes API server to learn the IP addresses and state of all containers in the cluster. When one container sends traffic to another, Calico looks up the routing table, finds the appropriate path, and forwards the packet to the destination container.
The problem to solve: allow Docker container c1 on physical machine A to access Docker container c2 on physical machine B!
Option 1: map container c1 to a port on machine A and container c2 to a port on machine B, then reach each container through its host's port. This works, but it is cumbersome. Is there a better way?
Option 2: use a network plugin. Here we use the Calico network plugin.
Because Calico stores its state in etcd, an etcd cluster is required.
Check the health of the etcd cluster:
[root@etcd1 ~]# etcdctl cluster-health
member 341a3c460c1c993a is healthy: got healthy result from http://192.168.110.131:2379
member 4679fe0fcb37326d is healthy: got healthy result from http://192.168.110.132:2379
member ab23bcc86cf3190b is healthy: got healthy result from http://192.168.110.133:2379
cluster is healthy
List the etcd cluster members; etcd133 is the leader:
[root@etcd1 ~]# etcdctl member list
341a3c460c1c993a: name=etcd131 peerURLs=http://192.168.110.131:2380 clientURLs=http://192.168.110.131:2379,http://localhost:2379 isLeader=false
4679fe0fcb37326d: name=etcd132 peerURLs=http://192.168.110.132:2380 clientURLs=http://192.168.110.132:2379,http://localhost:2379 isLeader=false
ab23bcc86cf3190b: name=etcd133 peerURLs=http://192.168.110.133:2380 clientURLs=http://192.168.110.133:2379,http://localhost:2379 isLeader=true
etcd currently contains no data:
[root@etcd1 ~]# etcdctl ls /
Install Docker on all three nodes so we can run containers:
[root@etcd1 ~]# yum -y install docker-ce
[root@etcd2 ~]# yum -y install docker-ce
[root@etcd3 ~]# yum -y install docker-ce
Next, modify Docker's startup parameters so Docker uses etcd as its cluster store. As the status output shows, Docker's unit file is /usr/lib/systemd/system/docker.service:
[root@etcd1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://docs.docker.com
Add the startup parameter --cluster-store=etcd://192.168.110.133:2379 to the ExecStart line:
[root@etcd1 ~]# vim /usr/lib/systemd/system/docker.service
[root@etcd1 ~]# grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.133:2379 -H fd:// --containerd=/run/containerd/containerd.sock
Reload the systemd configuration and restart Docker:
[root@etcd1 ~]# systemctl daemon-reload ;systemctl restart docker
The parameter is now in effect: /usr/bin/dockerd --cluster-store=etcd://192.168.110.133:2379 -H fd:// --containerd=/run/containerd/containerd.sock. With this in place, etcd stores Docker's cluster-store data.
[root@etcd1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since 三 2022-02-16 15:39:50 CST; 39s ago
Docs: https://docs.docker.com
Main PID: 1390 (dockerd)
Memory: 30.8M
CGroup: /system.slice/docker.service
└─1390 /usr/bin/dockerd --cluster-store=etcd://192.168.110.133:2379 -H fd:// --containerd=/run/containerd/containerd.sock
Do the same on the other two nodes, changing the etcd IP to the local node's address:
[root@etcd2 ~]# vim /usr/lib/systemd/system/docker.service
[root@etcd2 ~]# grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.131:2379 -H fd:// --containerd=/run/containerd/containerd.sock
[root@etcd2 ~]# systemctl daemon-reload ;systemctl restart docker
[root@etcd2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since 三 2022-02-16 15:39:57 CST; 41s ago
Docs: https://docs.docker.com
Main PID: 1348 (dockerd)
Memory: 32.4M
CGroup: /system.slice/docker.service
└─1348 /usr/bin/dockerd --cluster-store=etcd://192.168.110.131:2379 -H fd:// --containerd=/run/containerd/containerd.sock
[root@etcd3 ~]# vim /usr/lib/systemd/system/docker.service
[root@etcd3 ~]# grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.132:2379 -H fd:// --containerd=/run/containerd/containerd.sock
[root@etcd3 ~]# systemctl daemon-reload ;systemctl restart docker
[root@etcd3 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since 三 2022-02-16 15:39:59 CST; 41s ago
Docs: https://docs.docker.com
Main PID: 1355 (dockerd)
Memory: 34.7M
CGroup: /system.slice/docker.service
└─1355 /usr/bin/dockerd --cluster-store=etcd://192.168.110.132:2379 -H fd:// --containerd=/run/containerd/containerd.sock
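The manual edits above can also be scripted. A hedged one-liner, assuming the stock ExecStart line shipped by the docker-ce package (adjust the etcd IP for each node):

```bash
# Splice --cluster-store into ExecStart, then restart Docker
sed -i 's|^ExecStart=/usr/bin/dockerd|ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.110.131:2379|' \
    /usr/lib/systemd/system/docker.service
systemctl daemon-reload && systemctl restart docker
```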
Create the /etc/calico directory and the calicoctl configuration file; all three nodes need this:
[root@etcd1 ~]# mkdir /etc/calico
[root@etcd1 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
> datastoreType: "etcdv2"
> etcdEndpoints: "http://192.168.110.133:2379"
> EOF
# The calicoctl configuration file is now in place
[root@etcd1 ~]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
datastoreType: "etcdv2"
etcdEndpoints: "http://192.168.110.133:2379"
[root@etcd2 ~]# mkdir /etc/calico
[root@etcd2 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
> datastoreType: "etcdv2"
> etcdEndpoints: "http://192.168.110.131:2379"
> EOF
[root@etcd2 ~]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
datastoreType: "etcdv2"
etcdEndpoints: "http://192.168.110.131:2379"
[root@etcd3 ~]# mkdir /etc/calico
[root@etcd3 ~]# cat > /etc/calico/calicoctl.cfg <<EOF
> apiVersion: v1
> kind: calicoApiConfig
> metadata:
> spec:
> datastoreType: "etcdv2"
> etcdEndpoints: "http://192.168.110.132:2379"
> EOF
[root@etcd3 ~]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
datastoreType: "etcdv2"
etcdEndpoints: "http://192.168.110.132:2379"
Create a directory to hold the Calico image and tools:
[root@etcd1 ~]# mkdir etcd-calico
[root@etcd1 ~]# cd etcd-calico/
calicoctl is the Calico command-line tool, and calico-node-v2.tar is the calico-node image archive:
[root@etcd1 etcd-calico]# ls
calicoctl calico-node-v2.tar
The other two nodes need these two files as well:
[root@etcd1 etcd-calico]# scp ./* etcd2:/root/etcd-calico/
root@etcd2's password:
calicoctl 100% 31MB 98.1MB/s 00:00
calico-node-v2.tar 100% 269MB 29.9MB/s 00:09
[root@etcd1 etcd-calico]# scp ./* etcd3:/root/etcd-calico/
root@etcd3's password:
calicoctl 100% 31MB 96.3MB/s 00:00
calico-node-v2.tar 100% 269MB 67.3MB/s 00:04
Make calicoctl executable and move it into the PATH:
[root@etcd1 etcd-calico]# chmod +x calicoctl
[root@etcd1 etcd-calico]# mv calicoctl /bin/
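A quick sanity check that the binary runs (calicoctl v1.x ships a version subcommand; the exact output format varies by release):

```bash
# Should print the calicoctl client version
calicoctl version
```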
Load the image:
[root@etcd1 etcd-calico]# docker load -i calico-node-v2.tar
df64d3292fd6: Loading layer [==================================================>] 4.672MB/4.672MB
d6f0e85be2d0: Loading layer [==================================================>] 8.676MB/8.676MB
c9818c503193: Loading layer [==================================================>] 250.9kB/250.9kB
1f748fca5871: Loading layer [==================================================>] 4.666MB/4.666MB
714c5990d9e8: Loading layer [==================================================>] 263.9MB/263.9MB
Loaded image: quay.io/calico/node:v2.6.12
The other two nodes follow the same steps:
[root@etcd2 ~]# mkdir etcd-calico
[root@etcd2 ~]# cd etcd-calico/
[root@etcd2 etcd-calico]# pwd
/root/etcd-calico
[root@etcd2 etcd-calico]# ls
calicoctl calico-node-v2.tar
[root@etcd2 etcd-calico]# chmod +x calicoctl
[root@etcd2 etcd-calico]# mv calicoctl /bin/
[root@etcd2 etcd-calico]# docker load -i calico-node-v2.tar
[root@etcd3 ~]# mkdir etcd-calico
[root@etcd3 ~]# cd etcd-calico/
[root@etcd3 etcd-calico]# ls
calicoctl calico-node-v2.tar
[root@etcd3 etcd-calico]# chmod +x calicoctl
[root@etcd3 etcd-calico]# mv calicoctl /bin/
[root@etcd3 etcd-calico]# docker load -i calico-node-v2.tar
Start the Calico node on all three machines:
[root@etcd1 etcd-calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
Running command to load modules: modprobe -a xt_set ip6_tables
......
Running the following command to start calico-node:
docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=etcd1 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e ETCD_ENDPOINTS=http://192.168.110.133:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.12
Image may take a short time to download if it is not available locally.
Container started, checking progress logs.
2022-02-16 08:00:06.363 [INFO][9] startup.go 173: Early log level set to info
......
2022-02-16 08:00:06.536 [INFO][14] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully
A calico-node container is now running on each node:
[root@etcd1 etcd-calico]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac7d48a378b6 quay.io/calico/node:v2.6.12 "start_runit" 57 seconds ago Up 56 seconds calico-node
Start Calico node on the other two nodes as well:
[root@etcd2 etcd-calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
[root@etcd2 etcd-calico]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bc99f286802f quay.io/calico/node:v2.6.12 "start_runit" About a minute ago Up About a minute calico-node
[root@etcd3 etcd-calico]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
[root@etcd3 etcd-calico]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
07ba9ccdcd4d quay.io/calico/node:v2.6.12 "start_runit" About a minute ago Up About a minute calico-node
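Since every calico-node persists its state to the shared etcd cluster, the etcd tree is no longer empty. A quick peek (Calico's etcdv2 datastore writes under /calico; the exact key layout varies by Calico version):

```bash
# List Calico's top-level keys in etcd
etcdctl ls /calico
```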
Because all nodes share the same etcd datastore, each node can see the other hosts as BGP peers:
[root@etcd1 etcd-calico]# calicoctl node status
Calico process is running.
IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.110.131 | node-to-node mesh | up | 08:00:13 | Established |
| 192.168.110.132 | node-to-node mesh | up | 08:00:14 | Established |
+-----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
[root@etcd2 etcd-calico]# calicoctl node status
Calico process is running.
IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.110.133 | node-to-node mesh | up | 08:00:13 | Established |
| 192.168.110.132 | node-to-node mesh | up | 08:00:14 | Established |
+-----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
[root@etcd3 etcd-calico]# calicoctl node status
Calico process is running.
IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.110.133 | node-to-node mesh | up | 08:00:15 | Established |
| 192.168.110.131 | node-to-node mesh | up | 08:00:15 | Established |
+-----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
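The 192.168.x.x addresses that containers receive later come from Calico's default IP pool (typically 192.168.0.0/16 in Calico v2.x). It can be listed with calicoctl, using the v1.x resource name:

```bash
# Show the configured IP pool(s)
calicoctl get ipPool
```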
List the existing Docker networks:
[root@etcd1 etcd-calico]# docker network list
NETWORK ID NAME DRIVER SCOPE
2db83772936d bridge bridge local
3c0a5a224b09 host host local
422becf3aa3b none null local
Create a network of the calico type: --driver calico selects Calico's libnetwork (CNM) driver, and --ipam-driver calico-ipam lets Calico's IPAM driver manage IP assignment:
[root@etcd1 etcd-calico]# docker network create --driver calico --ipam-driver calico-ipam calnet1
735f15b514db3a7310a7f3ef0734a6cd6b966753dc8cf0f7847305e0ba9fe51f
Because calico networks are global in scope, etcd synchronizes calnet1 to all hosts:
[root@etcd1 etcd-calico]# docker network list
NETWORK ID NAME DRIVER SCOPE
2db83772936d bridge bridge local
735f15b514db calnet1 calico global
3c0a5a224b09 host host local
422becf3aa3b none null local
[root@etcd2 etcd-calico]# docker network list
NETWORK ID NAME DRIVER SCOPE
df0044c9f6f6 bridge bridge local
735f15b514db calnet1 calico global
03b08fa135f8 host host local
c19501b7ea7b none null local
[root@etcd3 etcd-calico]# docker network list
NETWORK ID NAME DRIVER SCOPE
331a6b638487 bridge bridge local
735f15b514db calnet1 calico global
08f90f4840c1 host host local
0d2160ce7298 none null local
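docker network inspect confirms the driver and scope of calnet1; a formatted query keeps the output short:

```bash
# Confirm calnet1 is a global-scope network backed by the calico driver
docker network inspect -f 'driver={{.Driver}} scope={{.Scope}}' calnet1
# expected: driver=calico scope=global
```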
Pull the busybox image on all three nodes to create test containers:
[root@etcd1 etcd-calico]# docker pull busybox
[root@etcd2 etcd-calico]# docker pull busybox
[root@etcd3 etcd-calico]# docker pull busybox
Create one container on each node, attached to the calnet1 network:
[root@etcd1 etcd-calico]# docker run --name c1 --net calnet1 -itd busybox
73359e36becf9859e073ebce9370b83ac36754f40356e53b82a1e2a8cd7b0066
[root@etcd2 etcd-calico]# docker run --name c2 --net calnet1 -itd busybox
28d27f3effb0ea15e6f5e6cca9e8982c68d24f459978098967842242478b6d8b
[root@etcd3 etcd-calico]# docker run --name c3 --net calnet1 -itd busybox
995241af841f2da4f69c7c3cfa2ce0766de49e7b43ec327f5dc8d57ff7838b62
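Each container's Calico-assigned address can also be read without entering the container, via docker inspect's Go template (network name as created above):

```bash
# Print c1's IP address on the calnet1 network
docker inspect -f '{{(index .NetworkSettings.Networks "calnet1").IPAddress}}' c1
```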
Enter container c1 and inspect its network interfaces:
[root@etcd1 etcd-calico]# docker exec -it c1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.36.192/32 scope global cali0
valid_lft forever preferred_lft forever
/ # exit
Every container created on a host gets a matching virtual NIC on the physical machine. Note the indices: the if4 in cali5aa980fa781@if4 refers to interface index 4 inside the container, while the 5 in cali0@if5 refers to interface index 5 on the physical machine. In other words, the container's virtual NIC cali0 and the host's cali5aa980fa781 form a veth pair.
[root@etcd1 etcd-calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:1e:33:3e brd ff:ff:ff:ff:ff:ff
inet 192.168.110.133/24 brd 192.168.110.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe1e:333e/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:8b:19:bc:63 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: cali5aa980fa781@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 9a:3d:aa:d2:bc:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::983d:aaff:fed2:bca2/64 scope link
valid_lft forever preferred_lft forever
The other two nodes look similar:
[root@etcd2 etcd-calico]# docker exec -it c2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
......
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.57.64/32 scope global cali0
valid_lft forever preferred_lft forever
/ # exit
[root@etcd2 etcd-calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
......
valid_lft forever preferred_lft forever
5: cali2e3a79a8486@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether ce:2a:7a:5f:4e:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::cc2a:7aff:fe5f:4e83/64 scope link
valid_lft forever preferred_lft forever
[root@etcd3 etcd-calico]# docker exec -it c3 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
......
4: cali0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.175.64/32 scope global cali0
valid_lft forever preferred_lft forever
/ # exit
[root@etcd3 etcd-calico]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
......
5: califd96a41066a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 2e:ca:96:03:96:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::2cca:96ff:fe03:9683/64 scope link
valid_lft forever preferred_lft forever
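One way to verify the veth pairing described above (interface names taken from the etcd1 session; iflink is a standard sysfs attribute holding the peer's ifindex):

```bash
# Inside c1: peer ifindex of cali0 -- should print 5
docker exec c1 cat /sys/class/net/cali0/iflink
# On etcd1: ifindex 5 is the host-side cali5aa980fa781
ip link show cali5aa980fa781
```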
Use route -n to inspect the routing tables:
On etcd1, the entry 192.168.57.64 192.168.110.131 255.255.255.192 UG 0 0 0 ens32 means packets destined for the 192.168.57.64/26 subnet (where c2 lives) are forwarded out ens32 to 192.168.110.131;
On etcd2, the entry 192.168.57.64 0.0.0.0 255.255.255.255 UH 0 0 0 cali2e3a79a8486 means a packet whose destination is exactly 192.168.57.64 is handed to the cali2e3a79a8486 interface.
Since cali2e3a79a8486 and the container's cali0 are a veth pair, traffic from container c1 reaches container c2, and the other containers work the same way. Calico in effect builds a "tunnel" out of plain routes, letting container c1 on physical machine A reach container c2 on physical machine B. Here is etcd1's table:
[root@etcd1 etcd-calico]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.110.2 0.0.0.0 UG 0 0 0 ens32
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 ens32
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.36.192 0.0.0.0 255.255.255.255 UH 0 0 0 cali5aa980fa781
192.168.36.192 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.57.64 192.168.110.131 255.255.255.192 UG 0 0 0 ens32
192.168.110.0 0.0.0.0 255.255.255.0 U 0 0 0 ens32
192.168.175.64 192.168.110.132 255.255.255.192 UG 0 0 0 ens32
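Before pinging, you can ask the kernel which route it would choose for c2's address; ip route get prints the selected next hop:

```bash
# Which path does etcd1 take to reach c2?
ip route get 192.168.57.64
# expected: 192.168.57.64 via 192.168.110.131 dev ens32 ...
```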
From inside container c1, container c2 is reachable via ping:
[root@etcd1 etcd-calico]# docker exec -it c1 sh
/ # ping 192.168.57.64
PING 192.168.57.64 (192.168.57.64): 56 data bytes
64 bytes from 192.168.57.64: seq=0 ttl=62 time=0.578 ms
64 bytes from 192.168.57.64: seq=1 ttl=62 time=0.641 ms
64 bytes from 192.168.57.64: seq=2 ttl=62 time=0.543 ms
^C
--- 192.168.57.64 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.543/0.587/0.641 ms
/ # exit
From the physical machine itself, however, container c2 cannot be pinged:
[root@etcd1 etcd-calico]# ping 192.168.57.64
PING 192.168.57.64 (192.168.57.64) 56(84) bytes of data.
^C
--- 192.168.57.64 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5000ms
Look at the routing inside the container: all traffic leaves via cali0, regardless of destination:
[root@etcd1 etcd-calico]# docker exec c1 ip route
default via 169.254.1.1 dev cali0
169.254.1.1 dev cali0 scope link
Now look at etcd1's routes: packets destined for 192.168.36.192 go out cali5aa980fa781 (the virtual NIC newly created on etcd1), and packets for the 192.168.57.64/26 subnet leave ens32 toward 192.168.110.131. Every host knows which host each container lives on, so the routes are programmed dynamically:
[root@etcd1 etcd-calico]# ip route
default via 192.168.110.2 dev ens32
169.254.0.0/16 dev ens32 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.36.192 dev cali5aa980fa781 scope link
blackhole 192.168.36.192/26 proto bird
192.168.57.64/26 via 192.168.110.131 dev ens32 proto bird
192.168.110.0/24 dev ens32 proto kernel scope link src 192.168.110.133
192.168.175.64/26 via 192.168.110.132 dev ens32 proto bird
In a Kubernetes environment, every node runs calico-node, and Calico's data is stored in etcd:
[root@k8scloude1 ~]# kubectl get pod -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6b9fbfff44-4jzkj 1/1 Running 55 38d 10.244.251.210 k8scloude3 <none> <none>
calico-node-bdlgm 1/1 Running 27 38d 192.168.110.130 k8scloude1 <none> <none>
calico-node-hx8bk 1/1 Running 27 38d 192.168.110.128 k8scloude3 <none> <none>
calico-node-nsbfs 1/1 Running 27 38d 192.168.110.129 k8scloude2 <none> <none>
coredns-545d6fc579-7wm95 1/1 Running 27 38d 10.244.158.121 k8scloude1 <none> <none>
coredns-545d6fc579-87q8j 1/1 Running 27 38d 10.244.158.122 k8scloude1 <none> <none>
etcd-k8scloude1 1/1 Running 27 38d 192.168.110.130 k8scloude1 <none> <none>
kube-apiserver-k8scloude1 1/1 Running 18 27d 192.168.110.130 k8scloude1 <none> <none>
kube-controller-manager-k8scloude1 1/1 Running 29 38d 192.168.110.130 k8scloude1 <none> <none>
kube-proxy-599xh 1/1 Running 27 38d 192.168.110.128 k8scloude3 <none> <none>
kube-proxy-lpj8z 1/1 Running 27 38d 192.168.110.129 k8scloude2 <none> <none>
kube-proxy-zxlk9 1/1 Running 27 38d 192.168.110.130 k8scloude1 <none> <none>
kube-scheduler-k8scloude1 1/1 Running 29 38d 192.168.110.130 k8scloude1 <none> <none>
metrics-server-bcfb98c76-n4fnb 1/1 Running 26 30d 10.244.251.196 k8scloude3 <none> <none>
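Those calico-node pods are managed by a DaemonSet, which guarantees exactly one per node; a quick check (the DaemonSet name calico-node matches the standard Calico manifests):

```bash
# One calico-node pod desired and ready per node
kubectl get daemonset calico-node -n kube-system
```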
This article introduced the concept of CNI network plugins, compared several common ones, and walked through using Calico to interconnect Docker containers across hosts. With Calico, we can easily build an efficient container network in a Kubernetes cluster, improving the reliability and scalability of our applications.