k3s

I. Introduction

K3s is a lightweight Kubernetes distribution, heavily optimized for edge computing, IoT, and similar scenarios.

  • CNCF-certified Kubernetes distribution
  • Supports x86_64, ARM64, and ARMv7 platforms
  • A single process bundles the Kubernetes master components, kubelet, and containerd

K3s adds the following enhancements:

  • Packaged as a single binary
    • The K8s components (kube-apiserver, kube-controller-manager, and so on) are compiled into one binary, so starting that single file quickly brings up all of them.
  • Uses sqlite3 as the default storage backend
    • etcd3, MySQL, and PostgreSQL are also supported as storage backends.
  • Secure by default
    • K3s has built-in certificate management (certificates are valid for one year by default) plus automatic rotation: if K3s is restarted with fewer than 90 days of validity remaining, the certificates are renewed for another year.
  • Powerful "batteries-included" functionality
    • Even for services the binary itself does not ship, you can drop manifests into a designated directory and K3s will start them at boot, or use them to replace default components.
  • All K8s control-plane components are wrapped in a single binary and process
    • Because everything is in one binary, only one process runs at startup. You only have to manage that single process, while still retaining the ability to operate complex clusters.
  • Minimal external dependencies
    • A reasonably recent Linux kernel is all that is required (kernel plus cgroup mounts).

The name: the goal was a Kubernetes at half the memory footprint of K8s, and something half as big as a 10-letter word is a 5-letter word, which by the same convention abbreviates to K3s.

  • Lifecycle
  • Release cadence
    • After a new upstream K8s release, K3s usually follows within about a week
    • latest/stable/testing releases can be fetched via the release channels URL (https://update.k3s.io/v1-release/channels, see below)
    • The default install uses the stable channel; the installed version can be checked from the command line (see the example below)
  • Version naming
    • v1.20.4+k3s1: v1.20.4 is the K8s version, k3s1 is the K3s patch revision
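
A minimal sketch of both checks, assuming the channel server answers with a redirect to the pinned release:

# Installed version
k3s --version

# Release currently pinned by the stable channel
curl -sI https://update.k3s.io/v1-release/channels/stable | grep -i location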

II. Single-Node Deployment

1. Online install

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_SELINUX_WARN=false INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC="--write-kubeconfig ~/.kube/config  --write-kubeconfig-mode 644 "  sh - 

# The join token is stored at /var/lib/rancher/k3s/server/node-token
cat /var/lib/rancher/k3s/server/node-token



# Join an agent node (substitute your own server IP and token)
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://172.16.1.164:6443 K3S_TOKEN=K105b756f7f9ce7b0d7f776405cc197e74fa5002701ac76dc63edd57dce8cd272ba::server:a61905854a82cc9280a3ac57ccd649d5 sh -
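
Once the agent is up, confirm from the server side that it registered:

kubectl get nodes -o wide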

2. Air-gapped (offline) install

1) Download the k3s binary, images, and install script

Binary: https://github.com/rancher/k3s/releases

Images: https://github.com/k3s-io/k3s/releases

Install script: https://github.com/k3s-io/k3s/blob/master/install.sh

RHEL-family systems also need the k3s-selinux policy: https://github.com/k3s-io/k3s-selinux/releases

The following uses Rocky 9 on arm64 as an example.

# download
wget https://github.com/k3s-io/k3s/releases/download/v1.28.4%2Bk3s2/k3s-arm64
wget https://github.com/k3s-io/k3s-selinux/releases/download/v1.4.stable.1/k3s-selinux-1.4-1.el9.noarch.rpm
wget https://github.com/k3s-io/k3s/releases/download/v1.28.4%2Bk3s2/k3s-airgap-images-arm64.tar.gz

# wget https://get.k3s.io -O ./install.sh
# wget http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh
wget https://raw.githubusercontent.com/k3s-io/k3s/master/install.sh


# set k3s (the downloaded file is named k3s-arm64; install it as "k3s")
cp ./k3s-arm64 /usr/local/bin/k3s
chmod 755 /usr/local/bin/k3s


# set images ($ARCH must match the downloaded tarball; arm64 in this example)
ARCH=arm64
gzip -d k3s-airgap-images-$ARCH.tar.gz
mkdir -p /var/lib/rancher/k3s/agent/images/
cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/

# install the selinux policy and relax host settings
yum -y localinstall k3s-selinux-1.4-1.el9.noarch.rpm
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -ri '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config



# install k3s
# note: --docker requires Docker on the host; drop the flag to use the bundled containerd
INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_MIRROR=cn  INSTALL_K3S_SELINUX_WARN=true INSTALL_K3S_SKIP_SELINUX_RPM=true INSTALL_K3S_EXEC="--write-kubeconfig ~/.kube/config  --write-kubeconfig-mode 644 --docker "  ./install.sh



# Join additional nodes
# The join token is stored at /var/lib/rancher/k3s/server/node-token
cat /var/lib/rancher/k3s/server/node-token

INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_MIRROR=cn INSTALL_K3S_SELINUX_WARN=false K3S_URL=https://172.16.1.5:6443 K3S_TOKEN=K101e11731af882398bc6757080f0d8e4b46f8cdb2a17a2edfce54d7971cb3411d1::server:81ee583304cc2b38bf15190298403d34 ./install.sh
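
Two sanity checks after an air-gapped install; k3s check-config validates kernel and host prerequisites:

systemctl status k3s --no-pager
k3s check-config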

III. Advanced Configuration

1. Setting node hostnames

Hostnames must be unique within a cluster. If two nodes would share a hostname, set a distinct one when joining, using either of the forms below (you can also rename the host itself first, as sketched after these examples).

# Specify a hostname for the node via the K3S_NODE_NAME variable
curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    K3S_NODE_NAME="k3s2" INSTALL_K3S_MIRROR=cn \
    K3S_URL=https://192.168.64.3:6443 K3S_TOKEN=xxx sh -

# Or pass it as the --node-name flag
curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn K3S_URL=https://192.168.64.3:6443 \
    K3S_TOKEN=xxx sh -s - --node-name k3s2
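
If you prefer to fix the OS hostname itself before joining, hostnamectl works on any systemd distro (the name is illustrative):

hostnamectl set-hostname k3s2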

2. Available environment variables

The environment variables recognized by the install script:

INSTALL_K3S_SKIP_DOWNLOAD: If set to "true", neither the K3s hash nor the binary is downloaded.
INSTALL_K3S_SYMLINK: By default, symlinks for the kubectl, crictl, and ctr binaries are created if the commands are not already on the path; "skip" creates no symlinks and "force" overwrites existing files.
INSTALL_K3S_SKIP_ENABLE: If set to "true", the K3s service is neither enabled nor started.
INSTALL_K3S_SKIP_START: If set to "true", the K3s service is not started.
INSTALL_K3S_VERSION: Version of K3s to download from GitHub; if unspecified, the "stable" channel is used.
INSTALL_K3S_BIN_DIR: Directory for the K3s binary, symlinks, and uninstall script; defaults to /usr/local/bin.
INSTALL_K3S_BIN_DIR_READ_ONLY: If set to "true", no files are written to INSTALL_K3S_BIN_DIR; implies INSTALL_K3S_SKIP_DOWNLOAD=true.
INSTALL_K3S_SYSTEMD_DIR: Directory for the systemd service and environment files; defaults to /etc/systemd/system.
INSTALL_K3S_EXEC: Command with flags to launch K3s with in the service. If no command is specified, it defaults to "agent" when K3S_URL is set and to "server" otherwise.
INSTALL_K3S_NAME: Name of the systemd service to create; defaults to "k3s" when running a server and "k3s-agent" when running an agent. A specified name is prefixed with "k3s-".
INSTALL_K3S_TYPE: systemd service type to create; if unspecified, the K3s exec command is used.
INSTALL_K3S_SELINUX_WARN: If set to "true", the install continues even when no k3s-selinux policy is found.
INSTALL_K3S_SKIP_SELINUX_RPM: If set to "true", automatic installation of the k3s RPM is skipped.
INSTALL_K3S_CHANNEL_URL: Channel URL used to resolve the K3s download URL; defaults to https://update.k3s.io/v1-release/channels.
INSTALL_K3S_CHANNEL: Channel used to resolve the K3s download URL; defaults to "stable". Options: stable, latest, testing.
K3S_CONFIG_FILE: Location of the config file; defaults to /etc/rancher/k3s/config.yaml.
K3S_TOKEN: Shared secret used to join a server or agent to the cluster.
K3S_TOKEN_FILE: Path to a file containing the cluster secret/token.
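
A hypothetical combination of these variables: pin the version that the air-gapped section downloads and skip the immediate service start:

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_VERSION="v1.28.4+k3s2" \
    INSTALL_K3S_SKIP_START=true \
    sh -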

3. Install-time parameters

# Use docker as the container runtime
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC="--docker" sh -

# Point K3s at an existing container runtime (CRI endpoint)
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC="--container-runtime-endpoint containerd" \
    sh -
    
# Set the private registry configuration file
# Default config file: /etc/rancher/k3s/registries.yaml
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC="--private-registry xxx" \
    sh -
# Installing K3s on a multi-NIC host
# By default the NIC that holds the default gateway is used
$ route -n

# K3s server
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC="--node-ip=192.168.100.100" \
    sh -

# K3s agent
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    K3S_URL=https://192.168.99.211:6443 K3S_TOKEN=xxx \
    INSTALL_K3S_EXEC="--node-ip=192.168.100.100" \
    sh -
# --tls-san
# Adds extra hostnames or IPs as Subject Alternative Names in the TLS certificate,
# e.g. to control and operate a remote cluster over its public IP,
# or to keep a public address when running multiple servers behind a load balancer
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC="--tls-san 1.1.1.1"  \
    sh -

# Inspect the generated serving certificate
$ kubectl get secret k3s-serving -n kube-system -o yaml

# Then copy the kubeconfig from the public master node to operate the cluster locally
$ scp ci@1.1.1.1:/etc/rancher/k3s/k3s.yaml ~/.kube/config
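
The kubeconfig K3s writes points at 127.0.0.1, so after copying it, rewrite the server address to the public IP used above (1.1.1.1 in this example):

$ sed -i 's/127.0.0.1/1.1.1.1/' ~/.kube/config
$ kubectl get nodes
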
# Tune service flags (raise the node's maximum Pod count)
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--kubelet-arg=max-pods=200' \
    sh -

# Tune service flags (use ipvs as the kube-proxy scheduling mode)
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--kube-proxy-arg=proxy-mode=ipvs' \
    sh -

# Tune service flags (adjust the NodePort service port range)
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--kube-apiserver-arg=service-node-port-range=40000-50000' \
    sh -

# Component-argument pass-through flags:
#   --kubelet-arg         -> kubelet
#   --kube-apiserver-arg  -> kube-apiserver
#   --kube-proxy-arg      -> kube-proxy (e.g. --kube-proxy-arg=proxy-mode=ipvs)
# --data-dir
# Change the K3s data directory
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--data-dir=/opt/k3s-data' \
    sh -
# Disable packaged components
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--disable traefik' \
    sh -

# Add services of your own by dropping manifests here
$ ls /var/lib/rancher/k3s/server/manifests
$ kubectl get pods -A | grep traefik
# Add node labels and taints
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--node-label foo=bar --node-label hello=world --node-taint key1=value1:NoExecute' \
    sh -

# Check the result
$ kubectl describe nodes
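
Filtered checks for the label and taint just applied:

$ kubectl get nodes --show-labels | grep foo=bar
$ kubectl describe nodes | grep -A1 Taints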

K3s Server/Agent - datastore options

# Specify the datastore endpoint (data source name)
# Flag: --datastore-endpoint value
# Environment variable: K3S_DATASTORE_ENDPOINT
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--datastore-endpoint etcd' \
    sh -
    
# Snapshot interval, in cron syntax
# --etcd-snapshot-schedule-cron value
# (the inner quotes keep the five cron fields as a single argument)
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--etcd-snapshot-schedule-cron "* */5 * * *"' \
    sh -
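
Once scheduled snapshots exist (this assumes the embedded-etcd setup from the HA section, and a reasonably recent k3s for the ls subcommand), they can be listed and taken on demand:

$ k3s etcd-snapshot ls
$ k3s etcd-snapshot save --name manual-snap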

4. Network options

By default, K3s runs flannel as the CNI with VXLAN as the default backend; both the CNI and the backend can be changed via flags. To enable encryption, use the IPSec or WireGuard options below.

# Network configuration after a default K3s install
$ sudo cat /var/lib/rancher/k3s/agent/etc/flannel/net-conf.json
{
    "Network": "10.42.0.0/16",
    "EnableIPv6": false,
    "EnableIPv4": true,
    "IPv6Network": "::/0",
    "Backend": {
        "Type": "vxlan"
    }
}
CLI flag and value: Description

--flannel-backend=vxlan: Use the VXLAN backend (default)
--flannel-backend=host-gw: Use the host-gw backend
--flannel-backend=ipsec: Use the IPSEC backend; encrypts network traffic
--flannel-backend=wireguard: Use the WireGuard backend; encrypts network traffic

1) Configuring flannel options

Flannel's default backend can be changed with install-time flags; editing the generated config file directly does not persist, since a restart overwrites it. The example below switches to host-gw mode.

# Server node
# Use host-gw as the flannel backend
# (this mode installs routes with the peer host's IP as the gateway; multi-server case)
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn \
    INSTALL_K3S_EXEC='--flannel-backend=host-gw' \
    sh -

# Worker node
$ curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | \
    INSTALL_K3S_MIRROR=cn K3S_URL=https://192.168.100.100:6443 \
    K3S_TOKEN=xxx sh -
# Resulting route information (output truncated to the relevant lines)
$ route -n
0.0.0.0         172.16.64.1     0.0.0.0         UG    100    0        0 enp0s2
10.42.1.0       172.16.64.9     255.255.255.0   UG    0      0        0 enp0s2

# Network configuration after the change
$ sudo cat /var/lib/rancher/k3s/agent/etc/flannel/net-conf.json
{
    "Network": "10.42.0.0/16",
    "Backend": {
        "Type": "host-gw"
    }
}

5. Registry mirror configuration

K3s uses containerd as its container runtime by default, so registry mirrors configured for docker have no effect. The K3s registry configuration file has two top-level sections: mirrors and configs.

  • mirrors is a directive that defines the names and endpoints of private registries
  • The configs section defines the TLS and credential configuration for each mirror
  • For each mirror you can define auth and/or tls

The K3s registry configuration lives at /etc/rancher/k3s/registries.yaml. On startup, K3s checks whether this file exists under /etc/rancher/k3s/ and, if so, instructs containerd to use the registries defined in it. To use a private registry, create this file as root on every node that pulls from it.

Note that server nodes are schedulable by default. If you have not put taints on the server nodes, workloads will run on them, so be sure to create registries.yaml on every server node as well.

containerd uses a concept similar to K8s Services and Endpoints: the mirror name acts as the access name and resolves to one or more endpoints. A mirror entry behaves like a reverse proxy, forwarding client requests to the backend registries configured as its endpoints. The mirror name can be chosen freely but must be a valid IP or domain name. Multiple endpoints may be configured; requests resolve to the first endpoint by default and fail over to the second if the first returns nothing, and so on.

# /etc/rancher/k3s/registries.yaml
# Multiple mirror endpoints can be set per registry name
# Credentials and certificates per mirror go under configs
mirrors:
  "172.31.6.200:5000":
    endpoint:
      - "http://172.31.6.200:5000"
      - "http://x.x.x.x:5000"
      - "http://y.y.y.y:5000"
  "rancher.ksd.top:5000":
    endpoint:
      - "http://172.31.6.200:5000"
  "docker.io":
    endpoint:
      - "https://fogjl973.mirror.aliyuncs.com"
      - "https://registry-1.docker.io"

configs:
  "172.31.6.200:5000":
    auth:
      username: admin
      password: Harbor@12345
    tls:
      cert_file: /home/ubuntu/harbor2.escapelife.site.cert
      key_file: /home/ubuntu/harbor2.escapelife.site.key
      ca_file: /home/ubuntu/ca.crt
# Both names pull from the same backing registry
$ sudo systemctl restart k3s.service
$ sudo crictl pull 172.31.6.200:5000/library/alpine
$ sudo crictl pull rancher.ksd.top:5000/library/alpine

Next, the TLS configuration variants.

# Certificate issued by a trusted CA
$ cat >> /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  "harbor.escapelife.site":
    endpoint:
      - "https://harbor.escapelife.site"
configs:
  "harbor.escapelife.site":
    auth:
      username: admin
      password: Harbor@12345
EOF

$ sudo systemctl restart k3s
# Self-signed certificate
$ cat >> /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  "harbor2.escapelife.site":
    endpoint:
      - "https://harbor2.escapelife.site"
configs:
  "harbor2.escapelife.site":
    auth:
      username: admin
      password: Harbor@12345
    tls:
      cert_file: /home/ubuntu/harbor2.escapelife.site.cert
      key_file:  /home/ubuntu/harbor2.escapelife.site.key
      ca_file:   /home/ubuntu/ca.crt
EOF

$ sudo systemctl restart k3s
# Without TLS
$ cat >> /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  "docker.io":
    endpoint:
      - "https://fogjl973.mirror.aliyuncs.com"
      - "https://registry-1.docker.io"
EOF

$ sudo systemctl restart k3s

K3s generates containerd's config at /var/lib/rancher/k3s/agent/etc/containerd/config.toml. For advanced settings, create a file named config.toml.tmpl in the same directory; it is used in place of the default.
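
A minimal way to work with that, assuming you start from the generated file: inspect what K3s rendered, copy it as the template, edit it, and restart.

# See where the mirror entries landed in the generated config
$ grep -A3 mirrors /var/lib/rancher/k3s/agent/etc/containerd/config.toml

# Seed the template from the generated file; k3s renders config.toml
# from config.toml.tmpl on the next restart
$ cd /var/lib/rancher/k3s/agent/etc/containerd/
$ cp config.toml config.toml.tmpl
$ sudo systemctl restart k3s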

# Working example using public mirror endpoints
# mkdir -p /etc/rancher/k3s
cat > /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  docker.io:
    endpoint:
      - "https://docker.mirrors.sjtug.sjtu.edu.cn"
      - "https://docker.nju.edu.cn"
  quay.io:
    endpoint:
      - "https://quay.nju.edu.cn"
  gcr.io:
    endpoint:
      - "https://gcr.nju.edu.cn"
  ghcr.io:
    endpoint:
      - "https://ghcr.nju.edu.cn"
  nvcr.io:
    endpoint:
      - "https://ngc.nju.edu.cn"
EOF

systemctl restart k3s
# Full example
$ cat >> /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  "harbor.escapelife.site":
     endpoint:
     - "https://harbor.escapelife.site"
  "harbor2.escapelife.site":
     endpoint:
     - "https://harbor2.escapelife.site"
  "172.31.19.227:5000":
     endpoint:
     - "http://172.31.19.227:5000"
  "docker.io":
     endpoint:
     - "https://fogjl973.mirror.aliyuncs.com"
     - "https://registry-1.docker.io"

configs:
  "harbor.escapelife.site":
     auth:
       username: admin
       password: Harbor@12345

  "harbor2.escapelife.site":
     auth:
       username: admin
       password: Harbor@12345
     tls:
       cert_file: /home/ubuntu/harbor2.escapelife.site.cert
       key_file:  /home/ubuntu/harbor2.escapelife.site.key
       ca_file:   /home/ubuntu/ca.crt
EOF

6. Certificate management

Reference: https://blog.starudream.cn/2023/07/21/k3s-client-cert-extend/

By default, K3s certificates expire after 12 months. If a certificate has already expired or has fewer than 90 days remaining, it is rotated when K3s restarts.

# Check the expiry date of each K3s certificate
$ for i in `ls /var/lib/rancher/k3s/server/tls/*.crt`; \
  do \
    echo $i;\
    openssl x509 -enddate -noout -in $i; \
  done

# Move the system clock to within 90 days of expiry (or past it) to force rotation
$ timedatectl set-ntp no
$ date -s 20220807

# Restart the K3s service
$ service k3s restart

By default, k3s issues its root CA certificates for ten years and client certificates for one year.

Because client certificates would otherwise need frequent re-signing, their validity can be extended through a k3s environment variable.

Create the file /etc/default/k3s with the following content:

CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS="3650"

The variable takes effect when the k3s server re-signs certificates; alternatively, set it before installation.

# Rotate the certificates (stop the k3s service first)
systemctl stop k3s
k3s certificate rotate

# Start K3s
systemctl start k3s
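
To confirm the extended validity took effect, check a re-signed client certificate (client-admin.crt is one of the files under the tls directory listed earlier):

openssl x509 -enddate -noout -in /var/lib/rancher/k3s/server/tls/client-admin.crt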

IV. Multi-Master HA Production Deployment (Online)

HA + kube-vip

1. Kernel parameters

[root@k8s-m1 ~]#  cat <<EOF > /etc/sysctl.d/k8s.conf
# https://github.com/moby/moby/issues/31208
# ipvsadm -l --timeout
# Fix long-connection timeouts in ipvs mode; keepalive must be below 900s
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
# Let iptables see bridged traffic (required by kube-proxy)
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches=89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
EOF

Load the kernel parameters (sysctl -p alone reads only /etc/sysctl.conf, so load the whole sysctl.d tree):

[root@k8s-m1 ~]# sysctl --system

IPVS configuration

[root@k8s-m1 ~]# yum install ipset ipvsadm -y
[root@k8s-m1 ~]# cat >/etc/modules-load.d/ipvs.conf<<EOF
ip_vs
# scheduling algorithm: least connection
ip_vs_lc
# scheduling algorithm: weighted least connection
ip_vs_wlc
# scheduling algorithm: round robin
ip_vs_rr
# scheduling algorithm: weighted round robin
ip_vs_wrr
# source hashing scheduling algorithm
ip_vs_sh
nf_conntrack
br_netfilter
EOF

[root@k8s-m1 ~]# systemctl restart systemd-modules-load.service

Check that the modules loaded:

[root@k8s-m1 ~]# lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter

Disable firewalld and SELinux:

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -ri '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config

Switch the package repos to a local mirror

sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.ustc.edu.cn/rocky|g' \
    -i.bak \
    /etc/yum.repos.d/rocky-extras.repo \
    /etc/yum.repos.d/rocky.repo

Self-signed CA [optional]

mkdir -p /var/lib/rancher/k3s/server/tls
cd /var/lib/rancher/k3s/server/tls
openssl genrsa -out client-ca.key 2048
openssl genrsa -out server-ca.key 2048
openssl genrsa -out request-header-ca.key 2048
openssl req -x509 -new -nodes -key client-ca.key -sha256 -days 3650 -out client-ca.crt -addext keyUsage=critical,digitalSignature,keyEncipherment,keyCertSign -subj '/CN=k3s-client-ca'
openssl req -x509 -new -nodes -key server-ca.key -sha256 -days 3650 -out server-ca.crt -addext keyUsage=critical,digitalSignature,keyEncipherment,keyCertSign -subj '/CN=k3s-server-ca'
openssl req -x509 -new -nodes -key request-header-ca.key -sha256 -days 3650 -out request-header-ca.crt -addext keyUsage=critical,digitalSignature,keyEncipherment,keyCertSign -subj '/CN=k3s-request-header-ca'

Set the validity period for certificates k3s re-signs

echo "CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS=3650" > /etc/default/k3s
# or
#echo "CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS=3650" > /etc/sysconfig/k3s

2. Installing kube-vip

Procedure: first generate a kube-vip manifest to deploy into the K3s cluster, then bring up the HA K3s cluster. On startup, K3s automatically applies the kube-vip manifest, and kube-vip then provides a highly available control plane through the virtual IP.

# Create the manifests directory
mkdir -p /var/lib/rancher/k3s/server/manifests/

# Fetch the kube-vip RBAC manifest
# kube-vip runs as a DaemonSet under K3s; the RBAC resources ensure the ServiceAccount exists and is bound with the permissions it needs to talk to the API server.

curl https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml

Generate the kube-vip DaemonSet manifest

export VIP=172.16.1.222 # virtual IP used to reach the control plane
export INTERFACE=ens160 # NIC name on the control-plane hosts
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")  # fetch the latest kube-vip version
alias kube-vip="docker run --network host --rm ghcr.io/kube-vip/kube-vip:$KVVERSION" # alias for running kube-vip via docker

# Render the kube-vip manifest
kube-vip manifest daemonset \
    --interface $INTERFACE \
    --address $VIP \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection > /var/lib/rancher/k3s/server/manifests/kube-vip.yaml

Resulting file contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-vip
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: system:kube-vip-role
rules:
  - apiGroups: [""]
    resources: ["services", "services/status", "nodes", "endpoints"]
    verbs: ["list","get","watch", "update"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["list", "get", "watch", "update", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-vip-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-vip-role
subjects:
- kind: ServiceAccount
  name: kube-vip
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: kube-vip-ds
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
      containers:
      - args:
        - manager
        env:
        - name: vip_arp
          value: "true"
        - name: port
          value: "6443"
        - name: vip_interface
          value: ens160
        - name: vip_cidr
          value: "32"
        - name: cp_enable
          value: "true"
        - name: cp_namespace
          value: kube-system
        - name: vip_ddns
          value: "false"
        - name: svc_enable
          value: "true"
        - name: vip_leaderelection
          value: "true"
        - name: vip_leaseduration
          value: "5"
        - name: vip_renewdeadline
          value: "3"
        - name: vip_retryperiod
          value: "1"
        - name: address
          value: 172.16.1.222
        # image: ghcr.io/kube-vip/kube-vip:v0.6.0
        image: ghcr.nju.edu.cn/kube-vip/kube-vip:v0.6.0
        imagePullPolicy: Always
        name: kube-vip
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_TIME
      hostNetwork: true
      serviceAccountName: kube-vip
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
  updateStrategy: {}
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0

3. Installing the HA K3s cluster

K3s supports several HA installation modes. This example uses embedded etcd, giving the cluster three control-plane nodes, whose API endpoint kube-vip then makes highly available.

Pass --tls-san when installing so that K3s includes the kube-vip virtual IP in the API server certificate.

On the master nodes

First server

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC=" --cluster-init --tls-san 172.16.1.222 --disable=traefik --disable servicelb  --kube-proxy-arg proxy-mode=ipvs --write-kubeconfig ~/.kube/config  --write-kubeconfig-mode 644 "  sh - 

# Read the token
cat /var/lib/rancher/k3s/server/token

Other servers

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn  K3S_TOKEN=K107f47359a4c5125056975c2bb8e4bee90a099366efa61e207092782f19f209abd::server:83a5baabf15d41e9cdf1de4a84b5367f INSTALL_K3S_EXEC=" --server https://172.16.1.164:6443   --tls-san 172.16.1.222 --disable=traefik --disable servicelb  --kube-proxy-arg proxy-mode=ipvs --write-kubeconfig ~/.kube/config  --write-kubeconfig-mode 644 "  sh -

Agents

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC="agent --server https://k3s.example.com --token mypassword" sh -s -
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_TOKEN=K107f47359a4c5125056975c2bb8e4bee90a099366efa61e207092782f19f209abd::server:83a5baabf15d41e9cdf1de4a84b5367f sh -s - agent --server https://172.16.1.222:6443 --with-node-id 172.16.1.165
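
Two quick checks that the VIP works (interface and addresses follow the values used above):

# On the current kube-vip leader the VIP should be bound to the NIC
ip addr show ens160 | grep 172.16.1.222

# The API server should also answer through the VIP
kubectl --kubeconfig ~/.kube/config --server https://172.16.1.222:6443 get nodes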

4. kube-vip-cloud-provider for LoadBalancer IP ranges [optional]

https://kube-vip.io/docs/usage/cloud-provider/#install-the-kube-vip-cloud-provider

wget  https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml

kubectl apply -f kube-vip-cloud-controller.yaml

File contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-vip-cloud-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: system:kube-vip-cloud-controller-role
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update", "list", "put"]
  - apiGroups: [""]
    resources: ["configmaps", "endpoints","events","services/status", "leases"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["nodes", "services"]
    verbs: ["list","get","watch","update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-vip-cloud-controller-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-vip-cloud-controller-role
subjects:
- kind: ServiceAccount
  name: kube-vip-cloud-controller
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-vip-cloud-provider
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kube-vip
      component: kube-vip-cloud-provider
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kube-vip
        component: kube-vip-cloud-provider
    spec:
      containers:
      - command:
        - /kube-vip-cloud-provider
        - --leader-elect-resource-name=kube-vip-cloud-controller
        image: ghcr.io/kube-vip/kube-vip-cloud-provider:v0.0.7
        name: kube-vip-cloud-provider
        imagePullPolicy: Always
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      serviceAccountName: kube-vip-cloud-controller
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 10
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
          - weight: 10
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists

Create the corresponding IP address range:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  #cidr-default: 192.168.0.200/29                      # CIDR-based IP range for use in the default Namespace
  #range-development: 192.168.0.210-192.168.0.219      # Range-based IP range for use in the development Namespace
  #cidr-finance: 192.168.0.220/29,192.168.0.230/29     # Multiple CIDR-based ranges for use in the finance Namespace
  #cidr-global: 192.168.0.240/29                       # CIDR-based range which can be used in any Namespace
  range-global: 172.16.1.230-172.16.1.239           # Range-based IP range which can be used in any Namespace

LB test

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer    # choose the LoadBalancer type
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
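
Assuming the two manifests above are saved together as myapp.yaml (file name illustrative), the Service should receive an EXTERNAL-IP from the range-global pool (172.16.1.230-172.16.1.239):

kubectl apply -f myapp.yaml
kubectl get svc myapp
# then hit the allocated address
curl http://<EXTERNAL-IP>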

V. Multi-Master HA Production Deployment (Air-Gapped)

# download
wget https://github.com/k3s-io/k3s/releases/download/v1.28.4%2Bk3s2/k3s-arm64
wget https://github.com/k3s-io/k3s/releases/download/v1.28.4%2Bk3s2/k3s-airgap-images-arm64.tar.gz

# Download the RPMs (--downloadonly fetches the package set, including dependencies, into ./k3s-rpm)
wget https://github.com/k3s-io/k3s-selinux/releases/download/v1.4.stable.1/k3s-selinux-1.4-1.el9.noarch.rpm
yum -y localinstall k3s-selinux-1.4-1.el9.noarch.rpm --downloadonly  --downloaddir=./k3s-rpm

yum -y install  ipvsadm ipset --downloadonly  --downloaddir=./k3s-rpm


# Download the install script
# wget https://get.k3s.io -O ./install.sh
# wget http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh
wget https://raw.githubusercontent.com/k3s-io/k3s/master/install.sh



# Distribute to every node
# set k3s (the downloaded file is named k3s-arm64; install it as "k3s")
cp ./k3s-arm64 /usr/local/bin/k3s
chmod 755 /usr/local/bin/k3s


# set images ($ARCH must match the downloaded tarball; arm64 here)
ARCH=arm64
gzip -d k3s-airgap-images-$ARCH.tar.gz
mkdir -p /var/lib/rancher/k3s/agent/images/
cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -ri '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config



# Kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
# https://github.com/moby/moby/issues/31208
# ipvsadm -l --timeout
# Fix long-connection timeouts in ipvs mode; keepalive must be below 900s
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
# Let iptables see bridged traffic (required by kube-proxy)
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches=89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
EOF

sysctl --system

cat >/etc/modules-load.d/ipvs.conf<<EOF
ip_vs
# scheduling algorithm: least connection
ip_vs_lc
# scheduling algorithm: weighted least connection
ip_vs_wlc
# scheduling algorithm: round robin
ip_vs_rr
# scheduling algorithm: weighted round robin
ip_vs_wrr
# source hashing scheduling algorithm
ip_vs_sh
nf_conntrack
br_netfilter
EOF

systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter

# Set the validity period for re-signed certificates
echo "CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS=3650" > /etc/default/k3s

# Configure registry mirrors
mkdir -p /etc/rancher/k3s
cat > /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
  docker.io:
    endpoint:
      - "https://docker.mirrors.sjtug.sjtu.edu.cn"
      - "https://docker.nju.edu.cn"
  quay.io:
    endpoint:
      - "https://quay.nju.edu.cn"
  gcr.io:
    endpoint:
      - "https://gcr.nju.edu.cn"
  ghcr.io:
    endpoint:
      - "https://ghcr.nju.edu.cn"
  nvcr.io:
    endpoint:
      - "https://ngc.nju.edu.cn"
EOF

First server

# Configure kube-vip: place the manifest in the auto-deploy directory
mkdir -p /var/lib/rancher/k3s/server/manifests/

cat > /var/lib/rancher/k3s/server/manifests/kube-vip.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-vip
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: system:kube-vip-role
rules:
  - apiGroups: [""]
    resources: ["services", "services/status", "nodes", "endpoints"]
    verbs: ["list","get","watch", "update"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["list", "get", "watch", "update", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-vip-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-vip-role
subjects:
- kind: ServiceAccount
  name: kube-vip
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: kube-vip-ds
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
      containers:
      - args:
        - manager
        env:
        - name: vip_arp
          value: "true"
        - name: port
          value: "6443"
        - name: vip_interface
          value: ens160
        - name: vip_cidr
          value: "32"
        - name: cp_enable
          value: "true"
        - name: cp_namespace
          value: kube-system
        - name: vip_ddns
          value: "false"
        - name: svc_enable
          value: "true"
        - name: vip_leaderelection
          value: "true"
        - name: vip_leaseduration
          value: "5"
        - name: vip_renewdeadline
          value: "3"
        - name: vip_retryperiod
          value: "1"
        - name: address
          value: 172.16.1.222
        # image: ghcr.io/kube-vip/kube-vip:v0.6.0
        image: ghcr.nju.edu.cn/kube-vip/kube-vip:v0.6.0
        imagePullPolicy: Always
        name: kube-vip
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_TIME
      hostNetwork: true
      serviceAccountName: kube-vip
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
  updateStrategy: {}
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
EOF


INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC="  --cluster-init --tls-san 172.16.1.222 --disable=traefik --disable servicelb  --kube-proxy-arg proxy-mode=ipvs --write-kubeconfig ~/.kube/config  --write-kubeconfig-mode 644  "  ./install.sh


# Read the token
cat /var/lib/rancher/k3s/server/token

Other servers

INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_MIRROR=cn K3S_TOKEN=K107f47359a4c5125056975c2bb8e4bee90a099366efa61e207092782f19f209abd::server:83a5baabf15d41e9cdf1de4a84b5367f INSTALL_K3S_EXEC=" --server https://172.16.1.164:6443   --tls-san 172.16.1.222 --disable=traefik --disable servicelb  --kube-proxy-arg proxy-mode=ipvs --write-kubeconfig ~/.kube/config  --write-kubeconfig-mode 644 "  ./install.sh

Agents

INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://172.16.1.222:6443 K3S_TOKEN=K107f47359a4c5125056975c2bb8e4bee90a099366efa61e207092782f19f209abd::server:83a5baabf15d41e9cdf1de4a84b5367f  ./install.sh
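
A quick check that the air-gapped agent registered and that the preloaded airgap images are present (crictl is reachable through the k3s binary):

kubectl get nodes
k3s crictl images | head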

kube-vip-cloud-provider setup

cat > kube-vip-cloud-controller.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-vip-cloud-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: system:kube-vip-cloud-controller-role
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update", "list", "put"]
  - apiGroups: [""]
    resources: ["configmaps", "endpoints","events","services/status", "leases"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["nodes", "services"]
    verbs: ["list","get","watch","update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-vip-cloud-controller-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-vip-cloud-controller-role
subjects:
- kind: ServiceAccount
  name: kube-vip-cloud-controller
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-vip-cloud-provider
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kube-vip
      component: kube-vip-cloud-provider
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kube-vip
        component: kube-vip-cloud-provider
    spec:
      containers:
      - command:
        - /kube-vip-cloud-provider
        - --leader-elect-resource-name=kube-vip-cloud-controller
        image: ghcr.io/kube-vip/kube-vip-cloud-provider:v0.0.7
        name: kube-vip-cloud-provider
        imagePullPolicy: Always
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      serviceAccountName: kube-vip-cloud-controller
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 10
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
          - weight: 10
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
EOF

kubectl apply -f kube-vip-cloud-controller.yaml


# Set up the LB IP address pool
cat > LB-ip-pool.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  #cidr-default: 192.168.0.200/29                      # CIDR-based IP range for use in the default Namespace
  #range-development: 192.168.0.210-192.168.0.219      # Range-based IP range for use in the development Namespace
  #cidr-finance: 192.168.0.220/29,192.168.0.230/29     # Multiple CIDR-based ranges for use in the finance Namespace
  #cidr-global: 192.168.0.240/29                       # CIDR-based range which can be used in any Namespace
  range-global: 172.16.1.230-172.16.1.239           # Range-based IP range which can be used in any Namespace
EOF

kubectl apply -f LB-ip-pool.yaml

LB test

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer    # choose the LoadBalancer type
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80