About This Document
This document covers the complete workflow for deploying an offline Kubernetes (K8s) cluster on AlmaLinux, together with the matching offline image-packaging steps on Windows. It includes environment preparation, image pulling/export, cluster installation, network plugin deployment, dashboard setup, and full cluster removal. All scripts and configuration files have been adapted for compatibility and can be reused directly, making it easy to stand up a K8s cluster quickly in an air-gapped environment.
Versions covered: K8s v1.35.1, Flannel v0.28.1, K8s Dashboard v2.7.0
Core environments: Windows (image packaging) + AlmaLinux 9.7 (cluster deployment)
Container runtime: Containerd (natively supported on AlmaLinux)
Network plugin: Flannel
Dashboard: Kubernetes Dashboard (accessed via a fixed NodePort)
Contents
- Prerequisites (Windows + AlmaLinux)
- Pulling and exporting offline images on Windows
- Offline cluster installation on AlmaLinux
- Core component configuration (Flannel/Dashboard)
- Full cluster removal on AlmaLinux
- Common problems and troubleshooting
- Deployment verification and day-to-day usage
1. Prerequisites (Windows + AlmaLinux)
1.1 Windows-side setup (image packaging machine)
1.1.1 Installing and starting Docker Desktop
The Windows machine pulls and packages images through Docker Desktop, which is the core prerequisite:
Download and install: download Docker Desktop for Windows from the Docker website, then run the installer (WSL2/Hyper-V must be enabled; restart the machine after installation).
Start and verify: open Docker Desktop, wait for the whale icon in the system tray to show Running, then run the following in a Windows terminal (CMD/PowerShell):
docker --version
docker info
If neither command reports an error, the Docker environment is working.
Registry mirror (optional, to speed up slow pulls):
Open Docker Desktop → Settings → Docker Engine;
Replace the configuration with the following (fill in your own mirror URL in registry-mirrors; the original used a 1Panel mirror), then click Apply & Restart:
{ "builder": { "gc": { "defaultKeepStorage": "20GB", "enabled": true } }, "registry-mirrors": [""], "experimental": false }
1.1.2 Script preparation
Save every Windows-side packaging script from this document as a .bat file in a single folder (e.g. a new folder named k8s-offline-images); run all image pull/export steps from that folder.
1.2 AlmaLinux-side setup (cluster node)
1.2.1 File preparation
Transfer the following files into a single directory on the AlmaLinux node (e.g. /root/k8s-offline/), using scp/WinSCP/FTP:
- all image packages built on Windows (.tar files);
- the K8s-related RPM packages (containerd.io, kubernetes-cni, cri-tools, kubelet, kubeadm, kubectl, in versions matching K8s v1.35.1);
- all AlmaLinux-side scripts from this document (.sh files);
- the core configuration files (kube-flannel.yml, kubernetes-dashboard.yaml).
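A transfer that silently dropped one of these files only shows up later as an installer failure, so it is worth checking the directory up front. The following is a minimal sketch; the file names are taken from this document, and the check_artifacts helper is illustrative, not part of the installer:

```shell
# check_artifacts DIR: report any missing offline artifact; non-zero return if any are absent
check_artifacts() {
    dir="$1"; missing=0
    for f in k8s-core-images.tar dashboard-images.tar flannel-images.tar \
             kube-flannel.yml kubernetes-dashboard.yaml \
             offline_install_k8s.sh offline_uninstall_k8s.sh; do
        [ -f "$dir/$f" ] || { echo "MISSING: $f"; missing=1; }
    done
    return $missing
}
# Example: check_artifacts /root/k8s-offline && echo "all artifacts present"
```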
1.2.2 Script permissions
Make all AlmaLinux-side .sh scripts executable:
cd /root/k8s-offline/
chmod +x offline_install_k8s.sh offline_uninstall_k8s.sh
2. Pulling and exporting offline images on Windows
On Windows, pull the K8s core images, the Flannel network plugin images, and the Dashboard images, then package them as .tar offline bundles. The scripts below are already adapted for AlmaLinux: the image names match the AlmaLinux install script exactly, so they can be run as-is.
2.1 Pull and export the K8s core + Flannel images
Create a new text file, save it as pull_k8s_flannel.bat, paste in the following, and double-click to run it:
@echo off
set K8S_VERSION=v1.35.1
set PAUSE_VERSION=3.10.1
set ETCD_VERSION=3.6.6-0
set COREDNS_VERSION=v1.13.1
echo =======================================================
echo Pulling K8s %K8S_VERSION% core images...
echo =======================================================
docker pull registry.k8s.io/kube-apiserver:%K8S_VERSION%
docker pull registry.k8s.io/kube-controller-manager:%K8S_VERSION%
docker pull registry.k8s.io/kube-scheduler:%K8S_VERSION%
docker pull registry.k8s.io/kube-proxy:%K8S_VERSION%
docker pull registry.k8s.io/pause:%PAUSE_VERSION%
docker pull registry.k8s.io/etcd:%ETCD_VERSION%
docker pull registry.k8s.io/coredns/coredns:%COREDNS_VERSION%
echo.
echo =======================================================
echo Pulling Flannel images and retagging them to match the official YAML...
echo =======================================================
rem Pull from the generic Docker Hub path
docker pull flannel/flannel:v0.28.1
docker pull flannel/flannel-cni-plugin:v1.9.0-flannel1
rem Key fix: retag to the ghcr.io / flannel-io names the YAML expects
docker tag flannel/flannel:v0.28.1 ghcr.io/flannel-io/flannel:v0.28.1
docker tag flannel/flannel-cni-plugin:v1.9.0-flannel1 docker.io/flannel-io/flannel-cni-plugin:v1.9.0-flannel1
echo.
echo =======================================================
echo Exporting images to k8s-core-images.tar...
echo =======================================================
docker save -o k8s-core-images.tar ^
registry.k8s.io/kube-apiserver:%K8S_VERSION% ^
registry.k8s.io/kube-controller-manager:%K8S_VERSION% ^
registry.k8s.io/kube-scheduler:%K8S_VERSION% ^
registry.k8s.io/kube-proxy:%K8S_VERSION% ^
registry.k8s.io/pause:%PAUSE_VERSION% ^
registry.k8s.io/etcd:%ETCD_VERSION% ^
registry.k8s.io/coredns/coredns:%COREDNS_VERSION% ^
ghcr.io/flannel-io/flannel:v0.28.1 ^
docker.io/flannel-io/flannel-cni-plugin:v1.9.0-flannel1
pause
2.2 Pull and export the Dashboard images
Create a new text file, save it as pull_dashboard.bat, paste in the following, and double-click to run it:
@echo off
set DASH_VERSION=v2.7.0
set SCRAPER_VERSION=v1.0.8
echo =======================================================
echo Downloading the Dashboard offline images...
echo =======================================================
docker pull kubernetesui/dashboard:%DASH_VERSION%
docker pull kubernetesui/metrics-scraper:%SCRAPER_VERSION%
echo.
echo =======================================================
echo Exporting to dashboard-images.tar...
echo =======================================================
docker save -o dashboard-images.tar ^
kubernetesui/dashboard:%DASH_VERSION% ^
kubernetesui/metrics-scraper:%SCRAPER_VERSION%
echo Done!
pause
2.3 Pull and export the Flannel images
Create a new text file, save it as pull_flannel.bat, paste in the following, and double-click to run it:
@echo off
set FLANNEL_VERSION=v0.28.1
set CNI_VERSION=v1.9.0-flannel1
echo =======================================================
echo Downloading the Flannel offline images...
echo =======================================================
docker pull docker.io/flannel/flannel:%FLANNEL_VERSION%
docker pull docker.io/flannel/flannel-cni-plugin:%CNI_VERSION%
echo.
echo =======================================================
echo Exporting to flannel-images.tar...
echo =======================================================
docker save -o flannel-images.tar ^
docker.io/flannel/flannel:%FLANNEL_VERSION% ^
docker.io/flannel/flannel-cni-plugin:%CNI_VERSION%
echo Done!
pause
2.4 Image package summary
After the scripts finish, the Windows script folder contains three offline image packages, all of which must be transferred to the target directory on the AlmaLinux node:
- k8s-core-images.tar: K8s core component images (plus the retagged Flannel images);
- dashboard-images.tar: K8s Dashboard and metrics-scraper images;
- flannel-images.tar: Flannel network plugin images.
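A tar corrupted in transit usually only surfaces halfway through the install, so checksumming the bundles before and after transfer saves debugging time. A sketch, runnable in Git Bash/WSL on the Windows side and natively on the node; the manifest name images.sha256 and both helper functions are illustrative, not part of the scripts above:

```shell
# make_manifest DIR: write a sha256 manifest covering every .tar in DIR
make_manifest() {
    ( cd "$1" && sha256sum *.tar > images.sha256 )
}
# verify_manifest DIR: succeeds only if every listed .tar is intact
verify_manifest() {
    ( cd "$1" && sha256sum -c --quiet images.sha256 )
}
# Run make_manifest in the Windows script folder, copy images.sha256
# together with the tars, then run verify_manifest on the AlmaLinux node.
```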
3. Offline cluster installation on AlmaLinux
The install script offline_install_k8s.sh is fully adapted to AlmaLinux. It performs environment cleanup, RPM installation, Containerd configuration, image import, cluster initialization, and one-shot Flannel/Dashboard deployment, with no manual intervention required.
3.1 The install script (offline_install_k8s.sh)
In the target directory on the AlmaLinux node (e.g. /root/k8s-offline/), create a new text file, save it as offline_install_k8s.sh, and paste in the following:
#!/bin/bash
# Helper: print a green, timestamped log line
function log() {
echo -e "\033[32m[$(date +'%Y-%m-%d %H:%M:%S')] $1\033[0m"
}
# Configuration: read from environment variables, falling back to defaults
K8S_VERSION="${K8S_VERSION:-1.35.1}"
PAUSE_VERSION="${PAUSE_VERSION:-3.10.1}"
POD_CIDR="${POD_CIDR:-10.244.0.0/16}"
DASHBOARD_YAML="${DASHBOARD_YAML:-kubernetes-dashboard.yaml}"
FLANNEL_YAML="${FLANNEL_YAML:-kube-flannel.yml}"
DASHBOARD_PORT="${DASHBOARD_PORT:-30092}"
# Idempotence check: if kubelet is running and a cluster config exists, treat K8s as already installed
if systemctl is-active --quiet kubelet && [ -f "/etc/kubernetes/admin.conf" ]; then
log "Existing Kubernetes installation detected; skipping deployment."
exit 100 # custom exit code 100: already installed
fi
log "1. Environment cleanup and pre-processing"
# Stop services and force-unmount any leftover kubelet mount points
systemctl stop kubelet 2>/dev/null || true
command -v kubeadm >/dev/null 2>&1 && kubeadm reset -f
if [ -d "/var/lib/kubelet" ]; then
mount | grep "/var/lib/kubelet" | awk '{print $3}' | xargs -r umount -l
fi
for dir in /etc/kubernetes /var/lib/kubelet /var/lib/etcd /etc/cni/net.d; do
[ -d "$dir" ] && rm -rf "$dir"
done
ip link delete cni0 2>/dev/null || true
log "1.1 Kernel parameters and modules"
# Load the required kernel modules and enable IP forwarding so K8s networking works
modprobe overlay
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
log "1.2 Disabling firewall and swap"
# Disable firewalld and swap, both of which interfere with cluster initialization
systemctl stop firewalld && systemctl disable firewalld
sed -i '/swap/d' /etc/fstab
swapoff -a
log "2. Installing local RPM packages"
dnf localinstall -y containerd.io-*.rpm kubernetes-cni-*.rpm cri-tools-*.rpm
dnf localinstall -y kubelet-${K8S_VERSION}*.rpm kubeadm-${K8S_VERSION}*.rpm kubectl-${K8S_VERSION}*.rpm
log "3. Configuring the Containerd runtime"
# Generate the default config, then fix the sandbox image and enable the systemd cgroup driver
containerd config default > /etc/containerd/config.toml
sed -i "s|sandbox_image = \".*\"|sandbox_image = \"registry.k8s.io/pause:${PAUSE_VERSION}\"|" /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl enable --now containerd
systemctl restart containerd
log "4. Importing offline images"
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
ctr -n k8s.io images import k8s-core-images.tar
[ -f "flannel-images.tar" ] && ctr -n k8s.io images import flannel-images.tar
[ -f "dashboard-images.tar" ] && ctr -n k8s.io images import dashboard-images.tar
log "5. Running cluster initialization"
# Generate the init configuration and run kubeadm init
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v${K8S_VERSION}
imageRepository: registry.k8s.io
networking:
  podSubnet: ${POD_CIDR}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
kubeadm init --config=kubeadm-config.yaml --ignore-preflight-errors=ImagePull
log "6. Configuring admin access and the network plugin"
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
[ -f "$FLANNEL_YAML" ] && kubectl apply -f "$FLANNEL_YAML"
kubectl taint nodes --all node-role.kubernetes.io/control-plane- || true
log "7. Deploying the Dashboard"
# Install the Dashboard, grant admin-user cluster-admin rights, and expose it via NodePort
if [ -f "$DASHBOARD_YAML" ]; then
kubectl apply -f "$DASHBOARD_YAML"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Note: the JSON patch is wrapped in double quotes so the shell expands ${DASHBOARD_PORT} inside it
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard --type='json' -p "[{\"op\": \"replace\", \"path\": \"/spec/type\", \"value\":\"NodePort\"}, {\"op\": \"replace\", \"path\": \"/spec/ports/0/nodePort\", \"value\":${DASHBOARD_PORT}}]"
log "Dashboard is exposed on port: ${DASHBOARD_PORT}"
log "Login token:"
kubectl -n kubernetes-dashboard create token admin-user
fi
log "8. Verifying the installation"
# Check component health: require a node in the Ready state (-w avoids matching "NotReady")
INSTALL_SUCCESS=0
kubectl get nodes | grep -qw "Ready" || INSTALL_SUCCESS=$?
if [ $INSTALL_SUCCESS -eq 0 ]; then
log "Kubernetes installed successfully."
exit 0
else
log "Installation finished but the node is not Ready; check the network plugin."
exit 1
fi
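All the tunables at the top of the script use the shell's ${VAR:-default} expansion, so each one can be overridden for a single run without editing the file, e.g. POD_CIDR=10.10.0.0/16 ./offline_install_k8s.sh. A minimal sketch of how the pattern behaves (show_cidr is an illustrative function, not part of the script):

```shell
# ${VAR:-default} expands to $VAR when it is set and non-empty, else to the default
show_cidr() { echo "${POD_CIDR:-10.244.0.0/16}"; }
show_cidr                         # falls back to the default
POD_CIDR=10.10.0.0/16 show_cidr   # override for this one invocation
```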
3.2 Running the installation
On the AlmaLinux node, run the following as root to install the cluster:
mkdir -p /root/k8s-offline/
cd /root/k8s-offline/
./offline_install_k8s.sh
Signs of a successful installation
- the script logs that Kubernetes installed successfully;
- a Dashboard login token is printed (save it; it is needed to log in later);
- kubectl get nodes shows the node in the Ready state.
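The Ready check can also be made scriptable by parsing the STATUS column of kubectl get nodes --no-headers. A sketch; the all_nodes_ready helper is illustrative, and statuses such as Ready,SchedulingDisabled would need extra handling:

```shell
# all_nodes_ready: read `kubectl get nodes --no-headers` output on stdin and
# succeed only when the STATUS column of every node is exactly "Ready"
all_nodes_ready() {
    awk '$2 != "Ready" { bad = 1 } END { exit bad }'
}
# Example: kubectl get nodes --no-headers | all_nodes_ready && echo "cluster ready"
```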
4. Core component configuration (Flannel/Dashboard)
The two configuration files below must sit in the same directory as the install script. They are already adapted for AlmaLinux and can be used as-is.
4.1 Flannel network plugin configuration (kube-flannel.yml)
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel-io/flannel-cni-plugin:v1.9.0-flannel1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: ghcr.io/flannel-io/flannel:v0.28.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: ghcr.io/flannel-io/flannel:v0.28.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        - name: CONT_WHEN_CACHE_NOT_READY
          value: "false"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
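A recurring failure mode (see section 6.2) is the Network value above drifting apart from the installer's POD_CIDR. A sketch of an automated guard; the cidr_in_yaml helper is illustrative and assumes the JSON formatting shown in this file:

```shell
# cidr_in_yaml FILE: extract the "Network" value from kube-flannel.yml's net-conf.json
cidr_in_yaml() {
    sed -n 's/.*"Network": "\([^"]*\)".*/\1/p' "$1"
}
# Example guard before deploying:
# [ "$(cidr_in_yaml kube-flannel.yml)" = "${POD_CIDR:-10.244.0.0/16}" ] || echo "CIDR mismatch"
```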
4.2 Kubernetes Dashboard configuration (kubernetes-dashboard.yaml)
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # NodePort is the practical choice in an offline environment
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30092 # fixed access port
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: docker.io/kubernetesui/dashboard:v2.7.0 # fully qualified image name
          imagePullPolicy: IfNotPresent # required offline: always prefer the local image
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane # taint name used by newer releases
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: docker.io/kubernetesui/metrics-scraper:v1.0.8 # fully qualified image name
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
5. Full cluster removal on AlmaLinux
To wipe the K8s cluster from a node, use the offline_uninstall_k8s.sh script. It resets the cluster, cleans up mount points, deletes configuration, removes virtual network interfaces, and uninstalls the packages. By default it keeps the Containerd image store (so the images need not be repackaged); this can be overridden.
5.1 The uninstall script (offline_uninstall_k8s.sh)
#!/bin/bash
# Helper: print a red, timestamped log line (uninstall warnings)
function log() {
echo -e "\033[31m[$(date +'%Y-%m-%d %H:%M:%S')] $1\033[0m"
}
# Configuration: CLEAN_IMAGES defaults to false so cached offline images are not deleted by accident
CLEAN_IMAGES=${CLEAN_IMAGES:-"false"}
log "1. Stopping cluster services..."
# Stop kubelet safely and run the kubeadm reset logic
systemctl stop kubelet 2>/dev/null
if command -v kubeadm >/dev/null 2>&1; then
log "Running kubeadm reset..."
kubeadm reset -f
fi
log "2. Cleaning up mount points..."
# Force-unmount any leftover kubelet-related mounts so directory removal cannot fail
mount | grep "/var/lib/kubelet" | awk '{print $3}' | xargs -r umount -l 2>/dev/null
log "3. Removing core directories and configuration files..."
# Delete the data, log, and network configuration directories K8s created at runtime
for dir in /etc/kubernetes /var/lib/kubelet /var/lib/etcd /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni; do
[ -d "$dir" ] && rm -rf "$dir"
done
rm -f $HOME/.kube/config 2>/dev/null
log "4. Cleaning up network interfaces..."
# Delete the virtual bridges and interfaces created by the CNI plugin
for iface in cni0 flannel.1 kube-ipvs0; do
if ip link show "$iface" >/dev/null 2>&1; then
ip link delete "$iface"
fi
done
log "5. Removing packages..."
# Uninstall the Kubernetes components (the container runtime is handled separately below)
dnf remove -y kubelet kubeadm kubectl cri-tools kubernetes-cni 2>/dev/null
log "6. Handling the container runtime..."
# Stop containerd; CLEAN_IMAGES decides whether the image store is wiped as well
systemctl stop containerd 2>/dev/null
if [ "$CLEAN_IMAGES" = "true" ]; then
log "Deleting the containerd data directory (including all images)..."
rm -rf /var/lib/containerd
else
log "Keeping /var/lib/containerd; image store untouched"
fi
log "7. Flushing firewall rules..."
# Remove the iptables/ipvs rules left behind by kube-proxy to restore normal networking
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear 2>/dev/null
log "====================================================="
log "Uninstall finished; the environment has been cleaned."
log "A reboot is recommended to fully release the virtual network interfaces."
log "====================================================="
exit 0
5.2 Running the uninstall
# Default uninstall (keeps the Containerd image store)
cd /root/k8s-offline/
./offline_uninstall_k8s.sh
# Full uninstall (also deletes all Containerd images; use with care)
CLEAN_IMAGES=true ./offline_uninstall_k8s.sh
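To confirm the cleanup actually took effect, the directories the script removes can be re-checked afterwards. A sketch; check_clean is an illustrative helper (pass / to inspect the real system), not part of the uninstall script:

```shell
# check_clean ROOT: report K8s directories still present under ROOT; non-zero if any remain
check_clean() {
    root="$1"; leftover=0
    for p in etc/kubernetes var/lib/kubelet var/lib/etcd etc/cni/net.d; do
        [ -e "$root/$p" ] && { echo "still present: /$p"; leftover=1; }
    done
    return $leftover
}
# Example on the node itself: check_clean / && echo "uninstall looks clean"
```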
6. Common problems and troubleshooting
6.1 Windows: Docker image pulls time out or fail
- Fix: configure a Docker registry mirror (see section 1.1.1), restart Docker Desktop, and rerun the script.
6.2 AlmaLinux: kubectl get nodes shows NotReady
Cause: the Flannel network plugin failed to deploy, so the Pod network was never set up.
Troubleshooting:
# Check the Flannel Pod status
kubectl get pods -n kube-flannel
# Tail the logs of the failing Pod
kubectl logs -f <flannel-pod-name> -n kube-flannel
Common fix: make sure the Network value in kube-flannel.yml matches the install script's POD_CIDR (both default to 10.244.0.0/16), then redeploy Flannel: kubectl apply -f kube-flannel.yml.
6.3 AlmaLinux: components report missing images after import
- Cause: the image tags do not match what the YAML references;
- Check: ctr -n k8s.io images ls | grep <keyword> (e.g. grep flannel or grep dashboard);
- Fix: make sure the tags produced on Windows match the YAML (the scripts in this document are already aligned, so no manual change should be needed).
6.4 Dashboard unreachable
Cause 1: port blocked (this guide disables the firewall, so normally not an issue):
systemctl stop firewalld && systemctl disable firewalld
Cause 2: the Dashboard Pod is not ready; check with: kubectl get pods -n kubernetes-dashboard
Cause 3: wrong URL; the correct address is https://<node-ip>:30092 (https, not http).
6.5 Lost the Dashboard login token
Regenerate it with:
kubectl -n kubernetes-dashboard create token admin-user
7. Deployment verification and day-to-day usage
7.1 Basic cluster checks
After offline_install_k8s.sh completes, verify the cluster:
# Node status (Ready means healthy)
kubectl get nodes
# Pods in all namespaces (all should be Running or Completed)
kubectl get pods -A
# Cluster info
kubectl cluster-info
7.2 Accessing the Dashboard
- Open a browser and go to https://<node-ip>:30092;
- Choose the Token login method, paste the token saved during installation, and click Sign in;
- You land in the Dashboard, where Pods, Nodes, Services, and other resources can be managed visually.
7.3 Common kubectl commands
# List Pods (in a given namespace)
kubectl get pods -n default
# List Services
kubectl get svc -A
# List Deployments
kubectl get deploy -n default
# Open a shell inside a Pod
kubectl exec -it <pod-name> -n default -- /bin/bash
# Delete a Pod
kubectl delete pod <pod-name> -n default
# Deploy a sample Nginx (to verify the cluster works)
kubectl create deployment nginx --image=nginx:alpine
kubectl expose deployment nginx --port=80 --type=NodePort
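After exposing Nginx, the assigned NodePort appears in the PORT(S) column as 80:3xxxx/TCP. A sketch for extracting that number in scripts; the node_port helper is illustrative, and kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}' does the same job directly:

```shell
# node_port: read a `kubectl get svc --no-headers` line on stdin and
# print the nodePort from its "80:3xxxx/TCP" column
node_port() {
    sed -n 's/.*:\([0-9]\{1,5\}\)\/TCP.*/\1/p'
}
# Example:
# port=$(kubectl get svc nginx --no-headers | node_port)
# curl -sk "http://<node-ip>:$port"
```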
Maintenance notes
Every script and configuration file in this document was tested on AlmaLinux 9.7 and can be reused directly. To move to newer K8s/component versions, update the version variables in the Windows scripts and in the AlmaLinux install script, keeping both sides in sync.
Sharing: this is a complete offline deployment guide; the scripts and configuration files can be bundled and distributed together with it.
Source: "AlmaLinux and Kubernetes combined: an offline deployment guide for v1.35.1 (with Windows image-packaging tips)", https://www.betaflare.com/biancheng/1772305285a3273357.html