The previous article covered the Nginx-proxy-based Kubernetes Ingress Nginx 【Ingress Nginx, the Gateway of the Cloud-Native Era】. This time I'll introduce the Envoy-based Emissary Ingress.
Envoy is a high-performance network proxy open-sourced by Lyft and later donated to the CNCF, where it has since graduated. Compared with classic proxies such as Nginx and HAProxy, Envoy offers rich observability and flexible extensibility, and it introduced a dynamic configuration scheme based on the xDS API. Envoy also ships a large number of out-of-the-box Filters to cover all kinds of traffic-governance scenarios.
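To make the xDS idea concrete, here is a hedged sketch of an Envoy v3 bootstrap (the node ID and `xds-server.internal` address are illustrative, not from this article's setup): the listeners and clusters are fetched at runtime from a management server over gRPC instead of being written statically.

```yaml
# Illustrative Envoy bootstrap: listener (LDS) and cluster (CDS) config is
# pulled dynamically over the xDS gRPC API; only the xDS server itself is static.
node:
  id: edge-proxy-1
  cluster: edge
dynamic_resources:
  lds_config:                 # listeners come from the management server
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        - envoy_grpc:
            cluster_name: xds_cluster
  cds_config:                 # clusters come from the management server
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        - envoy_grpc:
            cluster_name: xds_cluster
static_resources:
  clusters:
    - name: xds_cluster       # the only static piece: how to reach the xDS server
      type: STRICT_DNS
      connect_timeout: 1s
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: xds-server.internal
                      port_value: 18000
```

Gateways like Emissary build on exactly this mechanism: they translate Kubernetes CRDs into xDS resources and push them into Envoy with no reload.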
Differences between the Envoy and Nginx proxies
- Envoy has better HTTP/2 support than Nginx: it speaks HTTP/2 on both upstream and downstream connections, while Nginx only supports HTTP/2 on downstream connections.
- Envoy's advanced load-balancing features are free; the equivalent features in Nginx require the commercial Nginx Plus.
- Envoy supports hot updates, while Nginx requires a reload after configuration changes.
- Envoy is closer to Service Mesh usage patterns, while Nginx is closer to the habits of traditional services.
Envoy has two typical modes of operation. In the first, it acts as a central proxy for the cluster's north-south traffic; in this mode Envoy usually serves as the data plane underlying a load balancer or API gateway. For example, Ambassador (now called Emissary) and Gloo are emerging open-source gateways built on Envoy. In the second mode, Envoy runs as a sidecar next to a business process: when a request reaches the service, the traffic is intercepted by the sidecar Envoy and then forwarded to the business process. Istio and Linkerd are the typical representatives.
Today's topic is Emissary Ingress (formerly Ambassador), a gateway that proxies north-south traffic. Emissary-ingress is already a CNCF incubating project and last year gained official support from the top service-mesh projects Linkerd and Istio. For integration details, refer to the documentation:
https://www.getambassador.io/docs/emissary/latest/about/alternatives/
https://www.getambassador.io/docs/emissary/latest/about/faq/#why-emissary-ingress
It has all the common cloud-native gateway features, such as traffic management, rate limiting, circuit breaking, canary releases, and authentication; see the list below.
Reference: https://github.com/emissary-ingress/emissary
- Manage ingress traffic with load balancing, support for multiple protocols (gRPC and HTTP/2, TCP, and web sockets), and Kubernetes integration
- Manage changes to routing with an easy to use declarative policy engine and self-service configuration, via Kubernetes CRDs or annotations
- Secure microservices with authentication, rate limiting, and TLS
- Ensure high availability with sticky sessions, rate limiting, and circuit breaking
- Leverage observability with integrations with Grafana, Prometheus, and Datadog, and comprehensive metrics support
- Enable progressive delivery with canary releases
- Connect service meshes including Consul, Linkerd, and Istio
- Knative serverless integration
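As one example of the declarative routing from the list above, a canary release is configured with a second Mapping carrying a `weight` (the service names below are illustrative, not from this article's setup):

```yaml
# Illustrative canary: ~90% of /api/ traffic to the stable service, ~10% to the canary.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: api-stable
spec:
  hostname: "*"
  prefix: /api/
  service: api-stable.default:80
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: api-canary
spec:
  hostname: "*"
  prefix: /api/
  service: api-canary.default:80
  weight: 10   # this Mapping receives roughly 10% of matching requests
```

Removing the `weight` field (or raising it to 100) completes the rollout without touching the stable Mapping.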
Starting with emissary-ingress 2.1, the CRDs were removed from the Helm chart, so now you first have to apply the CRDs manually:

```shell
kubectl apply -f https://app.getambassador.io/yaml/emissary/3.2.0/emissary-crds.yaml
```
So I made a Helm chart dedicated to installing the CRDs; otherwise the whole installation flow cannot be automated.
If you are not familiar with Helm charts, see my introductory article 【Helm, the Package Management Tool of the Kubernetes Era】.
```hcl
resource "helm_release" "emissary_crds" {
  name             = "emissary-crds"
  create_namespace = true # create emissary default namespace `emissary-system`
  namespace        = local.emissary_ns
  chart            = "../common/helm/repos/emissary-crds-8.2.0.tgz"
}
```
The CRDs are installed in the `emissary-system` namespace by default, and I don't recommend changing that namespace. If you want to install multiple Emissary ingresses in different namespaces, they can all share this one set of CRDs.
```hcl
# Install Emissary-ingress from the chart repository
resource "helm_release" "emissary_ingress" {
  name             = "emissary-ingress"
  repository       = "https://app.getambassador.io"
  chart            = "emissary-ingress"
  version          = local.chart_version
  create_namespace = true
  namespace        = local.emissary_ns

  values = [
    templatefile("${local.common_yaml_d}/emissary-ingress-template.yaml", local.emissary_ingress_map)
  ]

  depends_on = [
    helm_release.emissary_crds
  ]
}
```
The last part is another self-made chart, dedicated to the configuration:
```hcl
# Installs Host/Listener/Mapping/TLSContext from a local custom chart.
# The chart can also be uploaded to a bucket or a public GitHub repo and installed from a URL,
# e.g. [Publish to a GCS bucket](https://github.com/hayorov/helm-gcs)
resource "helm_release" "emissary_config" {
  name      = "emissary-config"
  namespace = local.emissary_ns
  chart     = "../common/helm/repos/emissary-config-8.2.0.tgz"

  values = [
    templatefile("${local.common_yaml_d}/emissary-listeners-template.yaml", local.emissary_listeners_map),
    local.emissary_config_yaml
  ]

  depends_on = [
    helm_release.emissary_ingress
  ]
}
```
The `locals` variables:
```hcl
locals {
  project_id     = "global-sre-dev"
  cluster_name   = "sre-gke"
  cluster_region = "us-central1"
  emissary_ns    = "emissary"
  chart_version  = "8.2.0"
  common_yaml_d  = "../common/helm/yamls"
  ambassador_id  = "ambassador"

  emissary_ingress_map = {
    ambassadorID          = local.ambassador_id
    loadBalancerIP        = "35.232.98.249" # prepare a static IP first instead of using an ephemeral one
    replicaCount          = 2
    minReplicas           = 2
    maxReplicas           = 3
    canaryEnabled         = false   # set to true in Prod
    logLevel              = "error" # valid log levels are error, warn/warning, info, debug, and trace
    endpointEnable        = true
    endpointName          = "my-resolver"
    diagnosticsEnable     = false
    clusterRequestTimeout = 120000  # milliseconds
  }

  emissary_listeners_map = {
    ambassadorID     = local.ambassador_id
    listenersEnabled = true # custom listeners
  }
}
```
The config file:
```hcl
locals {
  emissary_config_yaml = <<-EOT
    hosts:
      - name: my-host-dev
        spec:
          ambassador_id:
            - ${local.ambassador_id}
          hostname: '*.wadexu.cloud'
          requestPolicy:
            insecure:
              action: Redirect
          tlsContext:
            name: my-tls-context
          tlsSecret:
            name: tls-secret
            namespace: secret
    mappings:
      - name: my-nginx-mapping
        spec:
          ambassador_id:
            - ${local.ambassador_id}
          hostname: dev.wadexu.cloud
          prefix: /
          service: my-nginx.nginx:80
    tlscontexts:
      - name: my-tls-context
        spec:
          ambassador_id:
            - ${local.ambassador_id}
          hosts:
            - "*.wadexu.cloud"
          min_tls_version: v1.2
  EOT
}
```
For the complete code, please refer to my repo.

Create the TLS secret referenced in the config above:

```shell
kubectl create secret -n secret tls tls-secret \
  --key ./xxx.key \
  --cert ./xxx.pem
```
Install from local. (Optional) If you want to learn how to automate the Terraform installation, see 【Atlantis, the Automation Tool for Deploying Terraform Infrastructure Code】.

```shell
cd terraform_helm_install/dev
terraform init
terraform plan
terraform apply
```
Install result:

```
% helm list -n emissary-system
NAME            NAMESPACE        REVISION  UPDATED                               STATUS    CHART                APP VERSION
emissary-crds   emissary-system  1         2022-10-20 10:09:30.72553 +0800 CST   deployed  emissary-crds-8.2.0  3.2.0

% helm list -n emissary
NAME              NAMESPACE  REVISION  UPDATED                                STATUS    CHART                   APP VERSION
emissary-config   emissary   1         2022-10-20 10:31:24.819555 +0800 CST   deployed  emissary-config-8.2.0   3.2.0
emissary-ingress  emissary   1         2022-10-20 10:29:33.705888 +0800 CST   deployed  emissary-ingress-8.2.0  3.2.0
```
See my quick start for reference.
If you are not familiar with Kustomize, see my article 【Kustomize, a Kubernetes Application Orchestration and Management Tool You Shouldn't Miss】.
This example shows multiple Emissary instances deployed in one cluster.
When installing multiple Emissary instances in one cluster, you must set `ambassador_id` and replace the ClusterRoleBinding name; otherwise the resources will conflict.
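As a sketch of how `ambassador_id` scoping works (the names `emissary-public` and `api.example.com` are hypothetical, not from this article's setup): each install runs with its own `AMBASSADOR_ID`, and every routing resource declares which install it belongs to, so the two Emissary instances never pick up each other's config.

```yaml
# Hypothetical sketch: a Mapping scoped to one of two Emissary installs.
# Only the deployment whose AMBASSADOR_ID is "emissary-public" will load it;
# the private install ignores it entirely.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: public-api
  namespace: emissary-public
spec:
  ambassador_id:
    - emissary-public        # matches the public deployment's AMBASSADOR_ID
  hostname: api.example.com  # assumed hostname for illustration
  prefix: /
  service: my-api.default:80
```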
Test locally:

```shell
# apply CRDs first
kustomize build emissary-ingress-init/sre-mgmt-dev > ~/init.yaml
kubectl apply -f ~/init.yaml

# deploy the first, public Emissary (allow list = all, facing the internet)
kustomize build emissary-ingress-public/sre-mgmt-dev > ~/emissary_deploy1.yaml
kubectl apply -f ~/emissary_deploy1.yaml

# deploy the second, private Emissary with a restricted allow list
kustomize build emissary-ingress-private/sre-mgmt-dev > ~/emissary_deploy2.yaml
kubectl apply -f ~/emissary_deploy2.yaml
```
To install the Kustomize resources via Terraform, please refer to my repo. For example:

```hcl
module "example_custom_manifests" {
  source  = "kbst.xyz/catalog/custom-manifests/kustomization"
  version = "0.3.0"

  configuration_base_key = "default"
  configuration = {
    default = {
      resources = [
        "${path.root}/../../infra/emissary-ingress-init/sre-mgmt-dev"
      ]
      common_labels = {
        "env" = "dev"
      }
    }
  }
}
```
Create an nginx service to test with:

```shell
helm install my-nginx bitnami/nginx --set service.type="ClusterIP" -n nginx --create-namespace
```
Test with curl:

```
% curl https://dev.wadexu.cloud
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Troubleshooting

1. This error means there is a problem with the tls-secret; make sure it was created correctly:

```
error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
```
2. Connection refused: the most likely cause is that the Listeners are not configured properly.

```
curl: (7) Failed to connect to dev.wadexu.cloud port 443 after 255 ms: Connection refused
```
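Without a Listener, Emissary does not bind the port at all, which produces exactly this connection-refused symptom. A minimal HTTPS Listener looks roughly like this (the metadata names and namespace are illustrative; adjust them to your own install):

```yaml
# Illustrative minimal HTTPS Listener; without one, nothing listens on 8443.
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: https-listener
  namespace: emissary
spec:
  ambassador_id:
    - ambassador        # must match the install's AMBASSADOR_ID
  port: 8443            # container port behind the Service's port 443
  protocol: HTTPS
  securityModel: XFP    # decide security from the X-Forwarded-Proto header
  hostBinding:
    namespace:
      from: ALL         # associate Hosts from all namespaces
```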
3. The CRDs were not created:

```
│ Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "my-resolver" namespace: "emissary-system" from "": no matches for kind "KubernetesEndpointResolver" in version "getambassador.io/v2"
│ ensure CRDs are installed first, resource mapping not found for name: "ambassador" namespace: "emissary-system" from "": no matches for kind "Module" in version "getambassador.io/v2"
│ ensure CRDs are installed first]
```