kubernetes + istio is currently among the most powerful and easiest-to-use service mesh stacks. To use kubernetes + istio, you first need a kubernetes cluster. There are many ways to build one; automating the setup with ansible is particularly convenient and reliable.
Server list
VIP 192.168.2.111
HOST | ROLE | IP | CPU | MEMORY |
---|---|---|---|---|
k8s-lvs-01 | LVS MASTER | 192.168.2.58 | 2C | 4G |
k8s-lvs-02 | LVS BACKUP | 192.168.2.233 | 2C | 4G |
k8s-main-01 | K8S MASTER | 192.168.2.85 | 4C | 8G |
k8s-main-02 | K8S MASTER | 192.168.2.155 | 4C | 8G |
k8s-main-03 | K8S MASTER | 192.168.2.254 | 4C | 8G |
k8s-node-01 | K8S WORKER | 192.168.2.110 | 4C | 8G |
k8s-node-02 | K8S WORKER | 192.168.2.214 | 4C | 8G |
k8s-node-03 | K8S WORKER | 192.168.2.36 | 4C | 8G |
GitHub: https://github.com/ansible/ansible
Before installing, update the apt package index:
sudo apt-get update
Install ansible:
sudo apt-get install ansible
To set up the cluster with password-based SSH in ansible, sshpass must be installed; if you are not using password authentication, it can be skipped:
sudo apt-get install sshpass
If apt cannot find ansible or sshpass, configure the http://mirrors.aliyun.com/ubuntu source and install again; see https://developer.aliyun.com/mirror/ubuntu/ for reference.
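As a sketch of how the mirror switch can be done with sed (the release name `jammy` and the default `archive.ubuntu.com` entries are assumptions; check your own /etc/apt/sources.list first), run against a sample copy:

```shell
# Hypothetical illustration: rewrite archive.ubuntu.com to the Aliyun mirror.
# Operates on a sample copy so nothing on the real system is touched.
printf 'deb http://archive.ubuntu.com/ubuntu jammy main restricted\n' > /tmp/sources.list.sample
sed -i 's#http://archive.ubuntu.com/ubuntu#http://mirrors.aliyun.com/ubuntu#g' /tmp/sources.list.sample
cat /tmp/sources.list.sample
# For the real file:
# sudo sed -i.bak 's#http://archive.ubuntu.com/ubuntu#http://mirrors.aliyun.com/ubuntu#g' /etc/apt/sources.list
# sudo apt-get update
```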
After installing Xcode from the App Store, run the following command:
xcode-select --install
Install homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Install ansible:
brew install --verbose ansible
Installing sshpass requires the baocang/delicious tap:
brew tap baocang/delicious
Install sshpass:
brew install --verbose sshpass
sshpass is open source; the baocang/delicious tap builds and installs it from source.
Edit hosts.ini with the file editor you are used to, and enter the following:
[all:vars]
kubernetes_vip=192.168.2.111
keepalived_master_ip=192.168.2.58
[lvs]
k8s-lvs-01 ansible_host=192.168.2.58 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
k8s-lvs-02 ansible_host=192.168.2.233 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
[main]
k8s-main-01 ansible_host=192.168.2.85 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
[masters]
k8s-main-02 ansible_host=192.168.2.155 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
k8s-main-03 ansible_host=192.168.2.254 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
[workers]
k8s-node-01 ansible_host=192.168.2.110 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
k8s-node-02 ansible_host=192.168.2.214 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
k8s-node-03 ansible_host=192.168.2.36 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
[kubernetes:children]
main
masters
workers
If a server's SSH port is 22, ansible_ssh_port can be omitted.
The content above groups the servers with [lvs], [main], [masters], and [workers]; each host line has this format:
- IP address
- SSH port
- SSH user name
- password used when logging in over SSH
- password used when running sudo

[kubernetes:children] merges the servers of the [main], [masters], and [workers] groups into a group named kubernetes. There is also an implicit all group that contains every server in the list.
[all:vars] defines variables: here kubernetes_vip stores the VIP, and keepalived_master_ip stores the IP of the keepalived MASTER node.
The following ansible playbook creates a file named .test.txt on every server. Later snippets no longer specify a file name; the complete, runnable configuration is collected at the end of this article.
File name: demo-anisble-playbook.yml
---
- name: Demo
hosts: lvs
become: yes
tasks:
- name: Write current user to file named .test.txt
shell: |
echo `whoami` > .test.txt
Then run the following command:
ansible-playbook -i hosts.ini demo-anisble-playbook.yml
You should get output like this:
PLAY [Demo] *********************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [k8s-lvs-02]
ok: [k8s-lvs-01]
TASK [Write current user to file named .test.txt] *******************************************
changed: [k8s-lvs-01]
changed: [k8s-lvs-02]
PLAY RECAP **********************************************************************************
k8s-lvs-01 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
k8s-lvs-02 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
After the run completes, each of the two lvs servers has a file named .test.txt containing root, because become: yes makes tasks run via sudo by default.
To clean up, change ``echo `whoami` > .test.txt`` to `rm -rf .test.txt` and run the playbook once more.
A VIP (Virtual IP Address) is an IP address in a network that is not bound directly to one specific network interface. Across many network services and applications, a VIP is typically used to implement high availability, load balancing, and failover.
Linux Virtual Server (LVS) offers several working modes, each with its own use cases and way of handling network traffic.
Each mode has its own application scenarios and strengths. Which one to choose depends on your specific requirements: whether the source IP address must be preserved, the network topology, and performance considerations. DR and TUN usually perform better, because they reduce the traffic load on the LVS server itself, but they may require more adjustments to the network configuration. NAT and Masquerade modes, by contrast, are easier to set up, but can cost performance and do not preserve the original source IP address.
Linux Virtual Server (LVS) also provides multiple load-balancing algorithms that decide how incoming requests are distributed across the backend servers. Each has its own characteristics and suits different scenarios.
Choose the algorithm that best fits your application and server capacity. For example, if the servers are roughly equal, round robin or weighted round robin is a good choice; if they differ, consider weighted least connections. For applications that need session persistence, a hash-based algorithm may be more suitable.
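For intuition, here is a toy sketch (not LVS code) of how weighted round-robin apportions one dispatch cycle across two hypothetical servers A and B with weights 3 and 1 — A receives three requests for every one that B receives:

```shell
# Build one weighted round-robin dispatch cycle: each server appears
# in the schedule as many times as its weight.
servers="A:3 B:1"
schedule=""
for entry in $servers; do
  name=${entry%%:*}       # part before the colon: server name
  weight=${entry##*:}     # part after the colon: weight
  i=0
  while [ "$i" -lt "$weight" ]; do
    schedule="${schedule}${name}"
    i=$((i+1))
  done
done
echo "$schedule"   # AAAB
```

Real LVS schedulers interleave the picks more smoothly, but the per-cycle ratio is the same.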
Both lvs servers need the ipvsadm and keepalived components installed: ipvsadm is used to manage and inspect ipvs rules, while keepalived manages the VIP, generates ipvs rules, and performs health checks.
In the resources directory next to the k8s-setup.yml file, create a keepalived.conf.j2 file.
If the machines have no hostnames, you can replace ansible_hostname with a check on ansible_host or ansible_default_ipv4.address to decide by IP instead.
File name: resources/keepalived.conf.j2
vrrp_instance VI_1 {
state {{ 'MASTER' if ansible_host == keepalived_master_ip else 'BACKUP' }}
interface ens160
virtual_router_id 51
priority {{ 255 if ansible_host == keepalived_master_ip else 254 }}
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
{{ kubernetes_vip }}/24
}
}
# masters with port 6443
virtual_server {{ kubernetes_vip }} 6443 {
delay_loop 6
lb_algo wlc
lb_kind DR
persistence_timeout 360
protocol TCP
{% for host in groups['main'] %}
# {{ host }}
real_server {{ hostvars[host]['ansible_host'] }} 6443 {
weight 1
SSL_GET {
url {
path /livez?verbose
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
{% endfor %}
{% for host in groups['masters'] %}
# {{ host }}
real_server {{ hostvars[host]['ansible_host'] }} 6443 {
weight 1
SSL_GET {
url {
path /livez?verbose
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
{% endfor %}
}
# workers with port 80
virtual_server {{ kubernetes_vip }} 80 {
delay_loop 6
lb_algo wlc
lb_kind DR
persistence_timeout 7200
protocol TCP
{% for host in groups['workers'] %}
# {{ host }}
real_server {{ hostvars[host]['ansible_host'] }} 80 {
weight 1
TCP_CHECK {
connect_timeout 10
connect_port 80
}
}
{% endfor %}
}
# workers with port 443
virtual_server {{ kubernetes_vip }} 443 {
delay_loop 6
lb_algo wlc
lb_kind DR
persistence_timeout 7200
protocol TCP
{% for host in groups['workers'] %}
# {{ host }}
real_server {{ hostvars[host]['ansible_host'] }} 443 {
weight 1
TCP_CHECK {
connect_timeout 10
connect_port 443
}
}
{% endfor %}
}
- vrrp_instance defines an instance named VI_1, set to MASTER on k8s-lvs-01 and to BACKUP on the other machines
- interface ens160 is the name of the network interface on the current machine; you can find it with ip addr or ifconfig — it is the one carrying the node's IP address. Initially there is a loopback lo interface plus one other interface named something like ens160 or en0
- advert_int 1 sets the interval between Keepalived's VRRP advertisements to 1 second
- priority should generally be lower on BACKUP nodes than on the MASTER node
- virtual_ipaddress holds the VIP (Virtual IP)
- virtual_server defines a virtual server, which fronts multiple real servers (real_server)
- lb_algo wlc selects the load-balancing algorithm
- lb_kind DR selects the Direct Routing mode for forwarding traffic
- statements such as {% for host in groups['masters'] %} are written flush-left to avoid indentation errors in the rendered output
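To find the right name for the `interface` line, one approach is to parse the interface that holds the default route. The snippet below runs the parsing against a captured sample `ip route` line so it is self-contained; the live command is in the trailing comment:

```shell
# Extract the interface name after the "dev" keyword on the default route.
sample_route='default via 192.168.2.1 dev ens160 proto dhcp metric 100'
iface=$(printf '%s\n' "$sample_route" |
  awk '/^default/ {for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1)}')
echo "$iface"   # ens160
# On a live host:
# ip route | awk '/^default/ {for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1)}'
```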
---
- name: Setup Load Balancer with IPVS and Keepalived
hosts: lvs
become: yes
tasks:
# Upgrade all installed packages to their latest versions
- name: Upgrade all installed apt packages
apt:
upgrade: 'yes'
update_cache: yes
cache_valid_time: 3600 # Cache is considered valid for 1 hour
# Install IP Virtual Server (IPVS) administration utility
- name: Install ipvsadm for IPVS management
apt:
name: ipvsadm
state: present
# Install keepalived for high availability
- name: Install Keepalived for load balancing
apt:
name: keepalived
state: present
# Deploy keepalived configuration from a Jinja2 template
- name: Deploy keepalived configuration file
template:
src: resources/keepalived.conf.j2
dest: /etc/keepalived/keepalived.conf
# Restart keepalived to apply changes
- name: Restart Keepalived service
service:
name: keepalived
state: restarted
---
- name: Install kubernetes packages and containerd.io
hosts: kubernetes
become: yes
tasks:
# Upgrade all installed packages to their latest versions
- name: Upgrade all installed apt packages
apt:
upgrade: 'yes'
update_cache: yes
cache_valid_time: 3600 # Cache is considered valid for 1 hour
# Install required packages for Kubernetes and Docker setup
- name: Install prerequisites for Kubernetes and Docker
apt:
name:
- ca-certificates
- curl
- gnupg
update_cache: yes
cache_valid_time: 3600
# Ensure the keyring directory exists for storing GPG keys
- name: Create /etc/apt/keyrings directory for GPG keys
file:
path: /etc/apt/keyrings
state: directory
mode: '0755'
# Add Docker's official GPG key
- name: Add official Docker GPG key to keyring
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
keyring: /etc/apt/keyrings/docker.gpg
state: present
# Add Docker's apt repository
- name: Add Docker repository to apt sources
apt_repository:
# repo: "deb [arch={{ ansible_architecture }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
repo: "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
filename: docker
update_cache: yes
notify: Update apt cache
# Add Kubernetes' GPG key
- name: Add Kubernetes GPG key to keyring
apt_key:
url: https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key
keyring: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
state: present
# Add Kubernetes' apt repository
- name: Add Kubernetes repository to apt sources
lineinfile:
path: /etc/apt/sources.list.d/kubernetes.list
line: 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /'
create: yes
notify: Update apt cache
# Install Kubernetes packages
- name: Install Kubernetes packages (kubelet, kubeadm, kubectl) and containerd.io
apt:
name:
- kubelet
- kubeadm
- kubectl
- containerd.io
state: present
# Hold the installed packages to prevent automatic updates
- name: Hold Kubernetes packages and containerd.io
dpkg_selections:
name: "{{ item }}"
selection: hold
loop:
- kubelet
- kubeadm
- kubectl
- containerd.io
handlers:
# Handler to update apt cache when notified
- name: Update apt cache
apt:
update_cache: yes
If you need to pin the repo's arch, it can be specified as follows (docker as the example):
repo: "deb [arch={{ ansible_architecture }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
---
- name: Configure Kubernetes prerequisites
hosts: kubernetes
become: yes # to run tasks that require sudo
tasks:
- name: Load Kernel Modules
copy:
content: |
overlay
br_netfilter
dest: /etc/modules-load.d/k8s.conf
notify: Load Modules
- name: Set Sysctl Parameters
copy:
content: |
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
dest: /etc/sysctl.d/k8s.conf
notify: Apply Sysctl
handlers:
- name: Load Modules
modprobe:
name: "{{ item }}"
state: present
loop:
- overlay
- br_netfilter
- name: Apply Sysctl
command: sysctl --system
containerd actually also uses one of the images that k8s uses, just possibly at a lower version; it can be switched to the same version, so before configuring containerd we preload the images k8s needs.
---
- name: Prefetch kubernetes images
hosts: kubernetes
become: true
tasks:
- name: Get kubeadm version
command: kubeadm version -o short
register: kubeadm_version
- name: List Kubernetes images for the specific kubeadm version
command: "kubeadm config images list --kubernetes-version={{ kubeadm_version.stdout }}"
register: kubernetes_images
- name: Pull and retag Kubernetes images from Aliyun registry
block:
- name: List old images in k8s.io namespace
command: ctr -n k8s.io images list -q
register: old_images_list
- name: Pull Kubernetes image from Aliyun
command: "ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }}"
loop: "{{ kubernetes_images.stdout_lines }}"
when: item not in old_images_list.stdout
loop_control:
label: "{{ item }}"
- name: Retag Kubernetes image
command: "ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }} {{ item }}"
loop: "{{ kubernetes_images.stdout_lines }}"
when: item not in old_images_list.stdout
loop_control:
label: "{{ item }}"
- name: List new images in k8s.io namespace
command: ctr -n k8s.io images list -q
register: new_images_list
- name: Remove images from Aliyun registry
command: "ctr -n k8s.io images remove {{ item }}"
loop: "{{ new_images_list.stdout_lines }}"
when: item.startswith('registry.aliyuncs.com/google_containers')
loop_control:
label: "{{ item }}"
# # Optional: Remove old SHA256 tags if necessary
# - name: Remove old SHA256 tags
# command: "ctr -n k8s.io images remove {{ item }}"
# loop: "{{ new_images_list.stdout_lines }}"
# when: item.startswith('sha256:')
# loop_control:
# label: "{{ item }}"
- name: Restart containerd service
service:
name: containerd
state: restarted
Once this has been run through ansible, all image preloading is done. Alternatively, this step can be skipped by overriding imageRepository: registry.k8s.io in the kubeadm init configuration file.
The commented-out tasks delete the images whose names start with sha256:; it is recommended not to delete them.
ctr -n k8s.io images ls -q
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/pause:3.9
sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
SHA256 tags play an important role in Docker and container tooling, mainly around image integrity and version control. Their role, and the trade-offs of deleting them:
- Role of SHA256 tags: they pin an image's exact content, independent of mutable name tags (such as latest).
- Benefit of deleting them: a tidier image list, with images identified by version tags (such as v1.0.0) instead.
- Drawback of deleting them: version control and security auditing become harder.
Overall, whether to delete SHA256 tags depends on your specific needs and management preferences. In production it is usually recommended to keep them for version control and security auditing. In development or test environments, if the number of tags makes management cumbersome, you can consider removing them.
containerd's default configuration file disables the cri plugin and sets SystemdCgroup = false. Before starting to configure kubernetes, the cri plugin must be enabled and SystemdCgroup = true must be set.
SystemdCgroup = true is a configuration option typically found in container runtime configuration files, especially in Docker- or Kubernetes-related setups. It concerns cgroup (control group) management in Linux, specifically cgroup v2. Setting SystemdCgroup = true in the runtime configuration tells the container runtime (such as Docker or containerd) to use Systemd to manage the containers' cgroups.
In a Kubernetes environment this setting usually also appears in the Kubelet configuration, keeping container resource management consistent with the rest of the system, which matters for efficient resource usage and stable operation.
Enable the cri plugin by emptying the list in disabled_plugins = ["cri"]; set SystemdCgroup = true so that Systemd manages the containers' cgroups; and change sandbox_image so that it matches what k8s uses.
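These edits can also be sketched with sed on a sample fragment of `containerd config default` output (the pause:3.8 value is a made-up stand-in; indentation survives because only the matched text is replaced):

```shell
# Sample of the three relevant lines from `containerd config default`.
cat > /tmp/containerd-config-sample.toml <<'EOF'
disabled_plugins = ["cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
            SystemdCgroup = false
EOF
# Enable cri, pin the pause image k8s expects, switch to the systemd cgroup driver.
sed -i \
  -e 's/^disabled_plugins = \["cri"\]/disabled_plugins = []/' \
  -e 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.9"#' \
  -e 's/SystemdCgroup = false/SystemdCgroup = true/' \
  /tmp/containerd-config-sample.toml
cat /tmp/containerd-config-sample.toml
```

The playbook below achieves the same result with lineinfile tasks, which is more idiomatic in Ansible.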
---
- name: Configure containerd
hosts: kubernetes
become: true
tasks:
- name: Get Kubernetes images list
command: kubeadm config images list
register: kubernetes_images
- name: Set pause image variable
set_fact:
pause_image: "{{ kubernetes_images.stdout_lines | select('match', '^registry.k8s.io/pause:') | first }}"
- name: Generate default containerd config
command: containerd config default
register: containerd_config
changed_when: false
- name: Write containerd config to file
copy:
dest: /etc/containerd/config.toml
content: "{{ containerd_config.stdout }}"
mode: '0644'
- name: Replace 'sandbox_image' and 'SystemdCgroup' in containerd config
lineinfile:
path: /etc/containerd/config.toml
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- { regexp: '^\s*sandbox_image\s*=.*$', line: ' sandbox_image = "{{ pause_image }}"' }
- { regexp: 'SystemdCgroup =.*', line: ' SystemdCgroup = true' }
- name: Restart containerd service
service:
name: containerd
state: restarted
When initializing a k8s cluster with kubeadm, only one master node needs to be initialized; the remaining master and worker nodes can all join with the kubeadm join command, so the main master node is initialized first.
The token used by kubeadm init: kubeadm's default token is abcdef.0123456789abcdef, and its format must match "[a-z0-9]{6}.[a-z0-9]{16}". A token can be generated with the following command:
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6; echo -n '.'; LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
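A quick sanity check of the generated token against the required format:

```shell
# Generate a token and verify it matches [a-z0-9]{6}.[a-z0-9]{16}.
token=$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6; printf '.'; \
        LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16)
echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format OK"
```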
File name: resources/kubeadm-init.yaml.j2
Obtain the default configuration with the kubeadm config print init-defaults command, then modify it to the following:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: {{ token }}
ttl: 0s
usages:
- signing
- authentication
description: "kubeadm bootstrap token"
groups:
- system:bootstrappers:kubeadm:default-node-token
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: {{ ansible_host }}
bindPort: 6443
nodeRegistration:
criSocket: unix:///var/run/containerd/containerd.sock
imagePullPolicy: IfNotPresent
name: k8s-main-01
taints: null
---
apiServer:
certSANs:
- {{ kubernetes_vip }}
- {{ ansible_host }}
{% for host in groups['masters'] %}
- {{ hostvars[host]['ansible_host'] }}
{% endfor %}
- k8s-main-01
- k8s-main-02
- k8s-main-03
- kubernetes.cluster
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.4
controlPlaneEndpoint: "{{ kubernetes_vip }}:6443"
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/12
serviceSubnet: 10.96.0.0/12
scheduler: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
---
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ttl: 0s means the token never expires. The recommended practice is to set a reasonable expiry and then obtain tokens with kubeadm token create --print-join-command.
The content produced by the shell script is captured via register, and in the Generate kubeadm config file task a variable named token receives the content from stdout.
How register works: when a task uses the register keyword, Ansible captures that task's output and stores it in the variable you specify.
The ansible task configuration is as follows:
---
- name: Initialize Kubernetes Cluster on Main Master
hosts: main
become: true
tasks:
- name: Generate Kubernetes init token
shell: >
LC_CTYPE=C tr -dc 'a-z' </dev/urandom | head -c 1;
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 5;
echo -n '.';
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
register: k8s_init_token
- name: Generate kubeadm config file
template:
src: resources/kubeadm-init.yaml.j2
dest: kubeadm-init.yaml
vars:
token: "{{ k8s_init_token.stdout }}"
The configuration above generates a file named kubeadm-init.yaml in the /home/ansible directory; the user name is the one set via ansible_ssh_user for the servers of the main group in hosts.ini. You can log in to the server and inspect the contents of kubeadm-init.yaml.
For how to download the resources/cilium-linux-amd64.tar.gz file, see
https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/
When installing cilium-cli on Linux, download it like this:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
# Verify the sha256 on Linux:
# sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
# Verify the sha256 on a Mac:
# shasum -a 256 -c cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
When installing cilium-cli on a Mac, download it like this:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum
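What the checksum verification does, demonstrated end to end on a stand-in file (the file name and contents here are made up; Linux's sha256sum is used, with shasum -a 256 as the macOS equivalent):

```shell
# Create a stand-in archive, record its digest, then verify it the same
# way the downloaded .sha256sum file is checked.
cd /tmp
printf 'stand-in archive contents' > cilium-demo.tar.gz
sha256sum cilium-demo.tar.gz > cilium-demo.tar.gz.sha256sum
sha256sum --check cilium-demo.tar.gz.sha256sum   # prints "cilium-demo.tar.gz: OK"
```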
This builds on the configuration above.
It is recommended to put it in the same file as the later steps that join the other masters and workers, so that the kubeadm join commands can directly use the variables set via set_fact.
The configuration below also saves the join commands needed by masters and workers to .master_join_command.txt and .worker_join_command.txt; when adding nodes individually, or running kubeadm join manually later, the commands stored there can be used.
The configuration below installs cilium-cli directly on the k8s main master; put the downloaded file at the path resources/cilium-linux-amd64.tar.gz:
---
- name: Initialize Kubernetes Cluster on Main Master
hosts: main
become: true
tasks:
- name: Check if IP address is already present
shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
register: ip_check
ignore_errors: yes
failed_when: false
changed_when: false
- name: Debug print ip_check result
debug:
msg: "{{ ip_check }}"
- name: Add IP address to loopback interface
command:
cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
when: ip_check.rc != 0
- name: Generate Kubernetes init token
shell: >
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6;
echo -n '.';
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
register: k8s_init_token
- name: Generate kubeadm config file
template:
src: resources/kubeadm-init.yaml.j2
dest: kubeadm-init.yaml
vars:
token: "{{ k8s_init_token.stdout }}"
- name: Initialize the Kubernetes cluster using kubeadm
command:
cmd: kubeadm init --v=5 --skip-phases=addon/kube-proxy --upload-certs --config kubeadm-init.yaml
register: kubeadm_init
- name: Set fact for master join command
set_fact:
master_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*--control-plane.*', multiline=True) }}"
cacheable: yes
run_once: true
- name: Set fact for worker join command
set_fact:
worker_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*sha256:[a-z0-9]{64}', multiline=True) }}"
cacheable: yes
run_once: true
# - name: Create the target directory if it doesn't exist
# file:
# path: ~/.kube
# state: directory
# owner: "{{ ansible_user_id }}"
# group: "{{ ansible_user_id }}"
# mode: '0755'
# when: not ansible_check_mode # This ensures it only runs when not in check mode
# - name: Copy kube admin config to ansible user directory
# copy:
# src: /etc/kubernetes/admin.conf
# dest: ~/.kube/config
# remote_src: yes
# owner: "{{ ansible_user_id }}"
# group: "{{ ansible_user_id }}"
# mode: '0644'
- name: Write master join command to .master_join_command.txt
copy:
content: "{{ master_join_command }}"
dest: ".master_join_command.txt"
mode: '0664'
delegate_to: localhost
- name: Append worker join command to .worker_join_command.txt
lineinfile:
path: ".worker_join_command.txt"
line: "{{ worker_join_command }}"
create: yes
delegate_to: localhost
- name: Install cilium on Main Master
hosts: main
become: true
tasks:
- name: Ensure tar is installed (Debian/Ubuntu)
apt:
name: tar
state: present
when: ansible_os_family == "Debian"
- name: Check for Cilium binary in /usr/local/bin
stat:
path: /usr/local/bin/cilium
register: cilium_binary
- name: Transfer and Extract Cilium
unarchive:
src: resources/cilium-linux-amd64.tar.gz
dest: /usr/local/bin
remote_src: no
when: not cilium_binary.stat.exists
- name: Install cilium to the Kubernetes cluster
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command:
cmd: cilium install --version 1.14.4 --set kubeProxyReplacement=true
- name: Wait for Kubernetes cluster to become ready
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command: kubectl get nodes
register: kubectl_output
until: kubectl_output.stdout.find("Ready") != -1
retries: 20
delay: 30
If you want to keep using kube-proxy, remove the --skip-phases=addon/kube-proxy argument from the kubeadm init command and the --set kubeProxyReplacement=true option from the cilium install command.
---
- name: Join Masters to the Cluster
hosts: masters
become: true
tasks:
- name: Joining master node to the Kubernetes cluster
shell:
cmd: "{{ hostvars['k8s-main-01']['master_join_command'] }}"
ignore_errors: yes
- name: Wait for node to become ready
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command: kubectl get nodes
register: kubectl_output
until: kubectl_output.stdout.find("NotReady") == -1
retries: 20
delay: 30
- name: Check if IP address is already present
shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
register: ip_check
ignore_errors: yes
failed_when: false
changed_when: false
- name: Debug print ip_check result
debug:
msg: "{{ ip_check }}"
- name: Add IP address to loopback interface
command:
cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
when: ip_check.rc != 0
---
- name: Join Worker Nodes to the Cluster
hosts: workers
become: true
tasks:
- name: Joining master node to the Kubernetes cluster
shell:
cmd: "{{ hostvars['k8s-main-01']['worker_join_command'] }}"
ignore_errors: yes
- name: Check if IP address is already present
shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
register: ip_check
ignore_errors: yes
failed_when: false
changed_when: false
- name: Debug print ip_check result
debug:
msg: "{{ ip_check }}"
- name: Add IP address to loopback interface
command:
cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
when: ip_check.rc != 0
Running this single configuration file, together with the three files in the resources directory, accomplishes all of the operations above.
Files it depends on:
---
- name: Setup Load Balancer with IPVS and Keepalived
hosts: lvs
become: yes
tasks:
# Upgrade all installed packages to their latest versions
- name: Upgrade all installed apt packages
apt:
upgrade: 'yes'
update_cache: yes
cache_valid_time: 3600 # Cache is considered valid for 1 hour
# Install IP Virtual Server (IPVS) administration utility
- name: Install ipvsadm for IPVS management
apt:
name: ipvsadm
state: present
# Install keepalived for high availability
- name: Install Keepalived for load balancing
apt:
name: keepalived
state: present
# Deploy keepalived configuration from a Jinja2 template
- name: Deploy keepalived configuration file
template:
src: resources/keepalived.conf.j2
dest: /etc/keepalived/keepalived.conf
# Restart keepalived to apply changes
- name: Restart Keepalived service
service:
name: keepalived
state: restarted
- name: Install kubernetes packages and containerd.io
hosts: kubernetes
become: yes
tasks:
# Upgrade all installed packages to their latest versions
- name: Upgrade all installed apt packages
apt:
upgrade: 'yes'
update_cache: yes
cache_valid_time: 3600 # Cache is considered valid for 1 hour
# Install required packages for Kubernetes and Docker setup
- name: Install prerequisites for Kubernetes and Docker
apt:
name:
- ca-certificates
- curl
- gnupg
update_cache: yes
cache_valid_time: 3600
# Ensure the keyring directory exists for storing GPG keys
- name: Create /etc/apt/keyrings directory for GPG keys
file:
path: /etc/apt/keyrings
state: directory
mode: '0755'
# Add Docker's official GPG key
- name: Add official Docker GPG key to keyring
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
keyring: /etc/apt/keyrings/docker.gpg
state: present
# Add Docker's apt repository
- name: Add Docker repository to apt sources
apt_repository:
# repo: "deb [arch={{ ansible_architecture }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
repo: "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
filename: docker
update_cache: yes
notify: Update apt cache
# Add Kubernetes' GPG key
- name: Add Kubernetes GPG key to keyring
apt_key:
url: https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key
keyring: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
state: present
# Add Kubernetes' apt repository
- name: Add Kubernetes repository to apt sources
lineinfile:
path: /etc/apt/sources.list.d/kubernetes.list
line: 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /'
create: yes
notify: Update apt cache
# Install Kubernetes packages
- name: Install Kubernetes packages (kubelet, kubeadm, kubectl) and containerd.io
apt:
name:
- kubelet
- kubeadm
- kubectl
- containerd.io
state: present
# Hold the installed packages to prevent automatic updates
- name: Hold Kubernetes packages and containerd.io
dpkg_selections:
name: "{{ item }}"
selection: hold
loop:
- kubelet
- kubeadm
- kubectl
- containerd.io
handlers:
# Handler to update apt cache when notified
- name: Update apt cache
apt:
update_cache: yes
- name: Configure Kubernetes prerequisites
hosts: kubernetes
become: yes # to run tasks that require sudo
tasks:
- name: Load Kernel Modules
copy:
content: |
overlay
br_netfilter
dest: /etc/modules-load.d/k8s.conf
notify: Load Modules
- name: Set Sysctl Parameters
copy:
content: |
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
dest: /etc/sysctl.d/k8s.conf
notify: Apply Sysctl
handlers:
- name: Load Modules
modprobe:
name: "{{ item }}"
state: present
loop:
- overlay
- br_netfilter
- name: Apply Sysctl
command: sysctl --system
- name: Prefetch kubernetes images
hosts: kubernetes
become: true
tasks:
- name: Get kubeadm version
command: kubeadm version -o short
register: kubeadm_version
- name: List Kubernetes images for the specific kubeadm version
command: "kubeadm config images list --kubernetes-version={{ kubeadm_version.stdout }}"
register: kubernetes_images
- name: Pull and retag Kubernetes images from Aliyun registry
block:
- name: List old images in k8s.io namespace
command: ctr -n k8s.io images list -q
register: old_images_list
- name: Pull Kubernetes image from Aliyun
command: "ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }}"
loop: "{{ kubernetes_images.stdout_lines }}"
when: item not in old_images_list.stdout
loop_control:
label: "{{ item }}"
- name: Retag Kubernetes image
command: "ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }} {{ item }}"
loop: "{{ kubernetes_images.stdout_lines }}"
when: item not in old_images_list.stdout
loop_control:
label: "{{ item }}"
- name: List new images in k8s.io namespace
command: ctr -n k8s.io images list -q
register: new_images_list
- name: Remove images from Aliyun registry
command: "ctr -n k8s.io images remove {{ item }}"
loop: "{{ new_images_list.stdout_lines }}"
when: item.startswith('registry.aliyuncs.com/google_containers')
loop_control:
label: "{{ item }}"
# # Optional: Remove old SHA256 tags if necessary
# - name: Remove old SHA256 tags
# command: "ctr -n k8s.io images remove {{ item }}"
# loop: "{{ new_images_list.stdout_lines }}"
# when: item.startswith('sha256:')
# loop_control:
# label: "{{ item }}"
- name: Configure containerd
hosts: kubernetes
become: true
tasks:
- name: Get Kubernetes images list
command: kubeadm config images list
register: kubernetes_images
- name: Set pause image variable
set_fact:
pause_image: "{{ kubernetes_images.stdout_lines | select('match', '^registry.k8s.io/pause:') | first }}"
- name: Generate default containerd config
command: containerd config default
register: containerd_config
changed_when: false
- name: Write containerd config to file
copy:
dest: /etc/containerd/config.toml
content: "{{ containerd_config.stdout }}"
mode: '0644'
- name: Replace 'sandbox_image' and 'SystemdCgroup' in containerd config
lineinfile:
path: /etc/containerd/config.toml
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- { regexp: '^\s*sandbox_image\s*=.*$', line: ' sandbox_image = "{{ pause_image }}"' }
- { regexp: 'SystemdCgroup =.*', line: ' SystemdCgroup = true' }
- name: Restart containerd service
service:
name: containerd
state: restarted
- name: Initialize Kubernetes Cluster on Main Master
hosts: main
become: true
tasks:
- name: Check if IP address is already present
shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
register: ip_check
ignore_errors: yes
failed_when: false
changed_when: false
- name: Debug print ip_check result
debug:
msg: "{{ ip_check }}"
- name: Add IP address to loopback interface
command:
cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
when: ip_check.rc != 0
- name: Generate Kubernetes init token
shell: >
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6;
echo -n '.';
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
register: k8s_init_token
- name: Generate kubeadm config file
template:
src: resources/kubeadm-init.yaml.j2
dest: kubeadm-init.yaml
vars:
token: "{{ k8s_init_token.stdout }}"
- name: Initialize the Kubernetes cluster using kubeadm
command:
cmd: kubeadm init --v=5 --skip-phases=addon/kube-proxy --config kubeadm-init.yaml --upload-certs
register: kubeadm_init
- name: Set fact for master join command
set_fact:
master_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*--control-plane', multiline=True) }}"
cacheable: yes
run_once: true
- name: Set fact for worker join command
set_fact:
worker_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*sha256:[a-z0-9]{64}', multiline=True) }}"
cacheable: yes
run_once: true
# - name: Create the target directory if it doesn't exist
# file:
# path: ~/.kube
# state: directory
# owner: "{{ ansible_user_id }}"
# group: "{{ ansible_user_id }}"
# mode: '0755'
# when: not ansible_check_mode # This ensures it only runs when not in check mode
# - name: Copy kube admin config to ansible user directory
# copy:
# src: /etc/kubernetes/admin.conf
# dest: ~/.kube/config
# remote_src: yes
# owner: "{{ ansible_user_id }}"
# group: "{{ ansible_user_id }}"
# mode: '0644'
- name: Write master join command to .master_join_command.txt
copy:
content: "{{ master_join_command }}"
dest: ".master_join_command.txt"
mode: '0664'
      delegate_to: localhost
      become: false
- name: Append worker join command to .worker_join_command.txt
lineinfile:
path: ".worker_join_command.txt"
line: "{{ worker_join_command }}"
create: yes
      delegate_to: localhost
      become: false
- name: Install cilium on Main Master
hosts: main
become: true
tasks:
- name: Ensure tar is installed (Debian/Ubuntu)
apt:
name: tar
state: present
when: ansible_os_family == "Debian"
- name: Check for Cilium binary in /usr/local/bin
stat:
path: /usr/local/bin/cilium
register: cilium_binary
- name: Transfer and Extract Cilium
unarchive:
src: resources/cilium-linux-amd64.tar.gz
dest: /usr/local/bin
remote_src: no
when: not cilium_binary.stat.exists
- name: Install cilium to the Kubernetes cluster
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command:
cmd: cilium install --version 1.14.4 --set kubeProxyReplacement=true
- name: Wait for Kubernetes cluster to become ready
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command: kubectl get nodes
register: kubectl_output
until: kubectl_output.stdout.find("Ready") != -1
retries: 20
delay: 30
- name: Join Masters to the Cluster
hosts: masters
become: true
tasks:
- name: Joining master node to the Kubernetes cluster
shell:
cmd: "{{ hostvars['k8s-main-01']['master_join_command'] }}"
ignore_errors: yes
- name: Join Worker Nodes to the Cluster
hosts: workers
become: true
tasks:
    - name: Joining worker node to the Kubernetes cluster
shell:
cmd: "{{ hostvars['k8s-main-01']['worker_join_command'] }}"
ignore_errors: yes
---
- name: Setup Load Balancer with IPVS and Keepalived
hosts: lvs
become: yes
tasks:
# Upgrade all installed packages to their latest versions
- name: Upgrade all installed apt packages
apt:
upgrade: 'yes'
update_cache: yes
cache_valid_time: 3600 # Cache is considered valid for 1 hour
# Install IP Virtual Server (IPVS) administration utility
- name: Install ipvsadm for IPVS management
apt:
name: ipvsadm
state: present
# Install keepalived for high availability
- name: Install Keepalived for load balancing
apt:
name: keepalived
state: present
# Deploy keepalived configuration from a Jinja2 template
- name: Deploy keepalived configuration file
template:
src: resources/keepalived.conf.j2
dest: /etc/keepalived/keepalived.conf
# Restart keepalived to apply changes
- name: Restart Keepalived service
service:
name: keepalived
state: restarted
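# NOTE: resources/keepalived.conf.j2 is referenced above but not shown here.
# The fragment below is a minimal sketch of what such a template might look
# like -- every interface name, priority, and weight is an assumption to be
# adapted. It advertises the VIP over VRRP and forwards TCP/6443 to the
# control-plane nodes via IPVS in DR mode (DR mode is also why the playbook
# later binds the VIP to the loopback interface of every real server):
#
#   vrrp_instance VI_1 {
#       state {{ 'MASTER' if ansible_default_ipv4.address == keepalived_master_ip else 'BACKUP' }}
#       interface eth0
#       virtual_router_id 51
#       priority {{ 150 if ansible_default_ipv4.address == keepalived_master_ip else 100 }}
#       virtual_ipaddress {
#           {{ kubernetes_vip }}
#       }
#   }
#
#   virtual_server {{ kubernetes_vip }} 6443 {
#       lb_algo rr
#       lb_kind DR
#       protocol TCP
#       real_server 192.168.2.85 6443 {
#           weight 1
#           TCP_CHECK { connect_timeout 3 }
#       }
#       real_server 192.168.2.155 6443 {
#           weight 1
#           TCP_CHECK { connect_timeout 3 }
#       }
#       real_server 192.168.2.254 6443 {
#           weight 1
#           TCP_CHECK { connect_timeout 3 }
#       }
#   }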
- name: Install kubernetes packages and containerd.io
hosts: kubernetes
become: yes
tasks:
# Upgrade all installed packages to their latest versions
- name: Upgrade all installed apt packages
apt:
upgrade: 'yes'
update_cache: yes
cache_valid_time: 3600 # Cache is considered valid for 1 hour
# Install required packages for Kubernetes and Docker setup
- name: Install prerequisites for Kubernetes and Docker
apt:
name:
- ca-certificates
- curl
- gnupg
update_cache: yes
cache_valid_time: 3600
# Ensure the keyring directory exists for storing GPG keys
- name: Create /etc/apt/keyrings directory for GPG keys
file:
path: /etc/apt/keyrings
state: directory
mode: '0755'
# Add Docker's official GPG key
- name: Add official Docker GPG key to keyring
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
keyring: /etc/apt/keyrings/docker.gpg
state: present
# Add Docker's apt repository
- name: Add Docker repository to apt sources
apt_repository:
# repo: "deb [arch={{ ansible_architecture }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
repo: "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
filename: docker
update_cache: yes
notify: Update apt cache
# Add Kubernetes' GPG key
- name: Add Kubernetes GPG key to keyring
apt_key:
url: https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key
keyring: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
state: present
# Add Kubernetes' apt repository
- name: Add Kubernetes repository to apt sources
lineinfile:
path: /etc/apt/sources.list.d/kubernetes.list
line: 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /'
create: yes
notify: Update apt cache
# Install Kubernetes packages
- name: Install Kubernetes packages (kubelet, kubeadm, kubectl) and containerd.io
apt:
name:
- kubelet
- kubeadm
- kubectl
- containerd.io
state: present
# Hold the installed packages to prevent automatic updates
- name: Hold Kubernetes packages and containerd.io
dpkg_selections:
name: "{{ item }}"
selection: hold
loop:
- kubelet
- kubeadm
- kubectl
- containerd.io
handlers:
# Handler to update apt cache when notified
- name: Update apt cache
apt:
update_cache: yes
- name: Configure Kubernetes prerequisites
hosts: kubernetes
become: yes # to run tasks that require sudo
tasks:
- name: Load Kernel Modules
copy:
content: |
overlay
br_netfilter
dest: /etc/modules-load.d/k8s.conf
notify: Load Modules
- name: Set Sysctl Parameters
copy:
content: |
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
dest: /etc/sysctl.d/k8s.conf
notify: Apply Sysctl
handlers:
- name: Load Modules
modprobe:
name: "{{ item }}"
state: present
loop:
- overlay
- br_netfilter
- name: Apply Sysctl
command: sysctl --system
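# Optional manual check (not part of the playbook): after this play runs, the
# modules and sysctl values can be verified on any node with, e.g.:
#   lsmod | grep -E 'overlay|br_netfilter'
#   sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward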
- name: Prefetch kubernetes images
hosts: kubernetes
become: true
tasks:
- name: Get kubeadm version
command: kubeadm version -o short
register: kubeadm_version
- name: List Kubernetes images for the specific kubeadm version
command: "kubeadm config images list --kubernetes-version={{ kubeadm_version.stdout }}"
register: kubernetes_images
- name: Pull and retag Kubernetes images from Aliyun registry
block:
- name: List old images in k8s.io namespace
command: ctr -n k8s.io images list -q
register: old_images_list
- name: Pull Kubernetes image from Aliyun
command: "ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }}"
loop: "{{ kubernetes_images.stdout_lines }}"
when: item not in old_images_list.stdout
loop_control:
label: "{{ item }}"
- name: Retag Kubernetes image
command: "ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }} {{ item }}"
loop: "{{ kubernetes_images.stdout_lines }}"
when: item not in old_images_list.stdout
loop_control:
label: "{{ item }}"
- name: List new images in k8s.io namespace
command: ctr -n k8s.io images list -q
register: new_images_list
- name: Remove images from Aliyun registry
command: "ctr -n k8s.io images remove {{ item }}"
loop: "{{ new_images_list.stdout_lines }}"
when: item.startswith('registry.aliyuncs.com/google_containers')
loop_control:
label: "{{ item }}"
# # Optional: Remove old SHA256 tags if necessary
# - name: Remove old SHA256 tags
# command: "ctr -n k8s.io images remove {{ item }}"
# loop: "{{ new_images_list.stdout_lines }}"
# when: item.startswith('sha256:')
# loop_control:
# label: "{{ item }}"
- name: Restart containerd service
service:
name: containerd
state: restarted
- name: Configure containerd
hosts: kubernetes
become: true
tasks:
- name: Get Kubernetes images list
command: kubeadm config images list
register: kubernetes_images
- name: Set pause image variable
set_fact:
        pause_image: "{{ kubernetes_images.stdout_lines | select('match', '^registry\\.k8s\\.io/pause:') | first }}"
- name: Generate default containerd config
command: containerd config default
register: containerd_config
changed_when: false
- name: Write containerd config to file
copy:
dest: /etc/containerd/config.toml
content: "{{ containerd_config.stdout }}"
mode: '0644'
- name: Replace 'sandbox_image' and 'SystemdCgroup' in containerd config
lineinfile:
path: /etc/containerd/config.toml
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- { regexp: '^\s*sandbox_image\s*=.*$', line: ' sandbox_image = "{{ pause_image }}"' }
- { regexp: 'SystemdCgroup =.*', line: ' SystemdCgroup = true' }
- name: Restart containerd service
service:
name: containerd
state: restarted
- name: Initialize Kubernetes Cluster on Main Master
hosts: main
become: true
tasks:
- name: Check if IP address is already present
shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
register: ip_check
      failed_when: false
changed_when: false
- name: Debug print ip_check result
debug:
msg: "{{ ip_check }}"
- name: Add IP address to loopback interface
command:
cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
when: ip_check.rc != 0
- name: Generate Kubernetes init token
shell: >
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6;
echo -n '.';
LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
register: k8s_init_token
- name: Generate kubeadm config file
template:
src: resources/kubeadm-init.yaml.j2
dest: kubeadm-init.yaml
vars:
token: "{{ k8s_init_token.stdout }}"
- name: Initialize the Kubernetes cluster using kubeadm
command:
cmd: kubeadm init --v=5 --skip-phases=addon/kube-proxy --upload-certs --config kubeadm-init.yaml
register: kubeadm_init
- name: Set fact for master join command
set_fact:
master_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*--control-plane.*', multiline=True) }}"
cacheable: yes
run_once: true
- name: Set fact for worker join command
set_fact:
worker_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*sha256:[a-z0-9]{64}', multiline=True) }}"
cacheable: yes
run_once: true
# - name: Create the target directory if it doesn't exist
# file:
# path: ~/.kube
# state: directory
# owner: "{{ ansible_user_id }}"
# group: "{{ ansible_user_id }}"
# mode: '0755'
# when: not ansible_check_mode # This ensures it only runs when not in check mode
# - name: Copy kube admin config to ansible user directory
# copy:
# src: /etc/kubernetes/admin.conf
# dest: ~/.kube/config
# remote_src: yes
# owner: "{{ ansible_user_id }}"
# group: "{{ ansible_user_id }}"
# mode: '0644'
- name: Write master join command to .master_join_command.txt
copy:
content: "{{ master_join_command }}"
dest: ".master_join_command.txt"
mode: '0664'
      delegate_to: localhost
      become: false
- name: Append worker join command to .worker_join_command.txt
lineinfile:
path: ".worker_join_command.txt"
line: "{{ worker_join_command }}"
create: yes
      delegate_to: localhost
      become: false
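# NOTE: resources/kubeadm-init.yaml.j2 is referenced above but not shown here.
# The fragment below is a minimal sketch of what such a template might contain
# -- treat every field as an assumption to adapt. It wires in the generated
# token (which matches kubeadm's required [a-z0-9]{6}.[a-z0-9]{16} format) and
# sets the VIP as the control-plane endpoint so all nodes reach the API server
# through the load balancer:
#
#   apiVersion: kubeadm.k8s.io/v1beta3
#   kind: InitConfiguration
#   bootstrapTokens:
#     - token: "{{ token }}"
#   ---
#   apiVersion: kubeadm.k8s.io/v1beta3
#   kind: ClusterConfiguration
#   controlPlaneEndpoint: "{{ kubernetes_vip }}:6443"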
- name: Install cilium on Main Master
hosts: main
become: true
tasks:
- name: Ensure tar is installed (Debian/Ubuntu)
apt:
name: tar
state: present
when: ansible_os_family == "Debian"
- name: Check for Cilium binary in /usr/local/bin
stat:
path: /usr/local/bin/cilium
register: cilium_binary
- name: Transfer and Extract Cilium
unarchive:
src: resources/cilium-linux-amd64.tar.gz
dest: /usr/local/bin
remote_src: no
when: not cilium_binary.stat.exists
- name: Install cilium to the Kubernetes cluster
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command:
cmd: cilium install --version 1.14.4 --set kubeProxyReplacement=true
- name: Wait for Kubernetes cluster to become ready
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command: kubectl get nodes
register: kubectl_output
until: kubectl_output.stdout.find("Ready") != -1
retries: 20
delay: 30
- name: Join Masters to the Cluster
hosts: masters
become: true
tasks:
- name: Joining master node to the Kubernetes cluster
shell:
cmd: "{{ hostvars['k8s-main-01']['master_join_command'] }}"
ignore_errors: yes
- name: Wait for node to become ready
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
command: kubectl get nodes
register: kubectl_output
until: kubectl_output.stdout.find("NotReady") == -1
retries: 20
delay: 30
- name: Check if IP address is already present
shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
register: ip_check
      failed_when: false
changed_when: false
- name: Debug print ip_check result
debug:
msg: "{{ ip_check }}"
- name: Add IP address to loopback interface
command:
cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
when: ip_check.rc != 0
- name: Join Worker Nodes to the Cluster
hosts: workers
become: true
tasks:
    - name: Joining worker node to the Kubernetes cluster
shell:
cmd: "{{ hostvars['k8s-main-01']['worker_join_command'] }}"
ignore_errors: yes
- name: Check if IP address is already present
shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
register: ip_check
      failed_when: false
changed_when: false
- name: Debug print ip_check result
debug:
msg: "{{ ip_check }}"
- name: Add IP address to loopback interface
command:
cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
when: ip_check.rc != 0
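# To run the playbook (a sketch -- the filename cluster.yaml is an assumption;
# use whatever name you saved this file under, next to the hosts.ini from the
# inventory section above):
#   ansible-playbook -i hosts.ini cluster.yaml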