K8S Cluster Setup
Published: 2024-1-22   Updated: 2024-1-22   Filed under: linux
Word count: 2652   Reading time: 6 minutes


System Setup

The following deployment is based on CentOS 7.

Note: on some machines the data disk sits at a non-standard location, so create a symlink:

mkdir -p /mnt/vos-rhxyjhz3/data

ln -s /mnt/vos-rhxyjhz3/data/ /data
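The same symlink pattern, sketched under /tmp so it can be tried safely (on the real host the source is /mnt/vos-rhxyjhz3/data and the link is /data):

```shell
# Demo of the symlink setup under /tmp; substitute the real paths on the host.
mkdir -p /tmp/vos-demo/data
ln -sfn /tmp/vos-demo/data /tmp/data   # -f replaces a stale link, -n treats an existing link as a file
readlink -f /tmp/data                  # prints the resolved target
```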

yum install -y vim

Check the OS version:

cat /etc/redhat-release

Add the hosts entries on all machines:

vim /etc/hosts

Add the following:

10.1.104.204 master
10.1.104.205 node1
10.1.104.206 node2
10.1.104.207 node3

Disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0

Persist it by editing /etc/selinux/config and setting:

SELINUX=disabled

Load the br_netfilter module:

modprobe br_netfilter

Create the file and add the following:

vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following command to apply the changes:

sysctl -p /etc/sysctl.d/k8s.conf

Set up IPVS:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install the ipset package:

yum install -y  ipset

Install the management tool ipvsadm:

yum install -y ipvsadm

Synchronize server time:

yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc sources
date

Turn off the swap partition:

swapoff -a

Edit /etc/fstab and comment out the swap mount so it stays off after a reboot; confirm with free -m that swap is disabled. Then adjust the swappiness parameter in /etc/sysctl.d/k8s.conf:
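The fstab edit can also be done with a sed one-liner. A sketch, demonstrated on a sample line so it can be tried anywhere; apply the same expression to /etc/fstab on the host after reviewing the result:

```shell
# Comment out any uncommented swap entry. On the host, run the same sed against /etc/fstab:
#   sed -ri 's@^([^#].*[[:space:]]swap[[:space:]])@#\1@' /etc/fstab
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -ri 's@^([^#].*[[:space:]]swap[[:space:]])@#\1@' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line is now commented out
```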

vim /etc/sysctl.d/k8s.conf

Add the following line:

vm.swappiness=0

Run the following to apply:

sysctl -p /etc/sysctl.d/k8s.conf

Install Docker

yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
 
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

List the available versions and install a specific one:

yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-19.03.11

Configure the Docker registry mirror:

mkdir -p /etc/docker
mkdir -p /data/docker
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "registry-mirrors" : [
    "https://ot2k4d59.mirror.aliyuncs.com/"
  ],
  "default-address-pools": [
        {
            "base": "172.17.0.0/16",
            "size": 24
        }
  ],
  "insecure-registries":[
    "10.1.104.204:1880",
    "http://10.1.104.204:1880"
  ]
}
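Before restarting Docker it is worth validating the file, since a syntax error in daemon.json prevents dockerd from starting at all. A quick check (assuming python3 is available), shown here on a throwaway copy; run the same json.tool command against /etc/docker/daemon.json on the host:

```shell
# Write a minimal example config and validate it with python's JSON parser.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data/docker"
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "valid JSON"
```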

Start Docker:

systemctl start docker
systemctl enable docker

Install Kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl:

yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 --disableexcludes=kubernetes

Check the version:

kubeadm version

Enable kubelet at boot:

systemctl enable --now kubelet

Everything up to this point must be performed on all nodes.

Initialize the Cluster

On the master node, export the default kubeadm initialization config:

kubeadm config print init-defaults > kubeadm.yaml

Adjust the configuration to your needs: change the imageRepository value and set the kube-proxy mode to ipvs. Also note that since we plan to install the flannel network plugin, networking.podSubnet must be set to 10.244.0.0/16.

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.1.104.204  # internal IP of the apiserver (master) node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # switched to the Aliyun mirror
# (alternative: registry.aliyuncs.com/k8sxio)
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod network range; the flannel plugin uses this
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode

Initialize using the config file above:

kubeadm config images pull --config kubeadm.yaml
kubeadm init --config kubeadm.yaml

If an error occurs, run kubeadm reset and initialize again.

Copy the kubeconfig file:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Add Nodes

Copy the $HOME/.kube/config file from the master node to each node. Create the directory on every node first:

mkdir -p $HOME/.kube

Then run on the master:
scp  $HOME/.kube/config root@node1:$HOME/.kube/config
scp  $HOME/.kube/config root@node2:$HOME/.kube/config
scp  $HOME/.kube/config root@node3:$HOME/.kube/config

Run the join command printed when initialization finished:

sudo swapoff -a
kubeadm join 10.1.104.204:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ad6ce20383a3e1d4e7cb54e933074965f22c47cd268a9ed2c80f55c5404f39b7

If the token has expired, generate a new join command:

kubeadm token create --print-join-command
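The --discovery-token-ca-cert-hash value can also be recomputed by hand from the cluster CA certificate (on the master: /etc/kubernetes/pki/ca.crt). A sketch of the pipeline, run here against a throwaway self-signed certificate so it can be tried anywhere:

```shell
# Generate a stand-in CA cert; on the master use /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -days 1 -subj /CN=demo 2>/dev/null
# SHA-256 over the DER-encoded public key, the format kubeadm expects.
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```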

Install the Network Plugin

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check the NIC name:

ip addr
vim kube-flannel.yml
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.13.0
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0  # on multi-NIC machines, specify the internal NIC
......
kubectl apply -f kube-flannel.yml 

If the error "is not an absolute path or is a symlink: unknown" appears, restart Docker on that node:

systemctl restart docker

Optionally label the nodes with role names:

kubectl  label node  i-w500emcu  node-role.kubernetes.io/node1=

kubectl  label node  i-iza0deu5  node-role.kubernetes.io/node2=

kubectl  label node  i-f37syy2z  node-role.kubernetes.io/node3=


kubectl label nodes i-w500emcu ip=10.1.104.205
kubectl label nodes i-iza0deu5 ip=10.1.104.206
kubectl label nodes i-f37syy2z ip=10.1.104.207
kubectl get node --show-labels

Create a permanent (non-expiring) token:

kubeadm token create --print-join-command --ttl 0
kubeadm token list

The K8S installation is now complete.

Install Helm

mkdir -p /data/helm
cd /data/helm
wget https://get.helm.sh/helm-v3.11.3-linux-amd64.tar.gz
tar -zxvf helm-v3.11.3-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
chmod a+x /usr/local/bin/helm

DNS Resolution

Add custom host entries to the CoreDNS config:

kubectl edit cm -n kube-system coredns
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        hosts {
                10.1.160.17 s3-qos.poc.kodo.com
                fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2023-03-16T07:30:57Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "18837468"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: b045827e-1541-461d-b5e6-1cee119d4ddb

NFS Shared Storage

Install NFS on the K8S master node:

yum install -y nfs-utils rpcbind

Create the directory on the master node:

mkdir -p /data/nfs
chmod 777 /data/nfs

Change the owning user and group:

chown -R nfsnobody:nfsnobody /data/nfs

Configure the shared directory:

echo "/data/nfs *(insecure,rw,sync,no_root_squash)" > /etc/exports

Create a shared directory for MySQL:

mkdir -p /data/nfs/mysql

Start the services:

systemctl start rpcbind
systemctl start nfs
systemctl enable rpcbind
systemctl enable nfs

Check that the configuration took effect:

exportfs
showmount -e 10.1.104.204

Client-side setup:

1. Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service

2. Install NFS
yum -y install nfs-utils rpcbind

3. Enable the services at boot
systemctl enable rpcbind
systemctl start rpcbind
systemctl enable nfs
systemctl start nfs
 
4. Check which NFS exports are available
[root@k8s-node1 ~]# showmount -e 10.1.104.204
Export list for 10.1.104.204:
/data/nfs *

5. Mount and verify
[root@k8s-node1 ~]# mkdir -p /data/nfs/ 
[root@k8s-node1 ~]# mount -t nfs 10.1.104.204:/data/nfs /data/nfs
[root@k8s-node1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G  5.4M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G   29G   21G  58% /
/dev/sda1               1014M  209M  806M  21% /boot
/dev/mapper/centos-home  447G   33M  447G   1% /home
tmpfs                    797M     0  797M   0% /run/user/0
10.1.104.204:/data/nfs    50G   49G  1.7G  97% /data/nfs

6. Verify
[root@k8s-node1 ~]# echo 4444 >> /data/nfs/2.log
[root@k8s-node1 ~]# cat /data/nfs/2.log
4444
## verify on the server side
[root@k8s-master ~]# cat /data/nfs/2.log
4444

Edit the NFS provisioner manifests:

vim class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: 'true'
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: fuseim.pri/ifs
reclaimPolicy: Delete
volumeBindingMode: Immediate 
kubectl apply -f  class.yaml
kubectl get storageclass

Note: if this StorageClass is not the default, you can mark one as default (substitute your actual StorageClass name):

#kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f rbac.yaml
vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: gmoney23/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.1.104.204
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.1.104.204
            path: /data/nfs
kubectl apply -f deployment.yaml

Test PVC Persistence

vim test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  namespace: default
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Mi
kubectl apply -f test-claim.yaml
kubectl get pvc 
vim statefulset-nfs.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Mi
kubectl apply -f statefulset-nfs.yaml
kubectl get pod
kubectl get pvc
kubectl get pv

Check the data on the NFS server:

ll /data/nfs

Delete the test resources:

kubectl delete -f test-claim.yaml
kubectl delete -f statefulset-nfs.yaml

Deploy ingress-nginx

Write ingress-nginx.yaml:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
mkdir -p /data/ingress-nginx
cd /data/ingress-nginx
vim ingress-nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container
kubectl apply -f ingress-nginx.yaml

Deploy the NodePort Service

vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 32080  #http
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 32443  #https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
kubectl create -f service-nodeport.yaml
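With the controller and the NodePort Service in place, an Ingress resource routes traffic by host name. A minimal sketch (the host app.example.com and backend Service my-svc are placeholders; this uses the networking.k8s.io/v1beta1 API, which the 0.30.0 controller watches):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com      # placeholder host name
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc  # placeholder backend Service
          servicePort: 80
```

Apply it with kubectl apply -f, then test from outside the cluster with curl -H "Host: app.example.com" http://<node-ip>:32080/.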

Install Kuboard

Kuboard is a cluster management tool.

mkdir -p /data/kuboard
cd /data/kuboard
wget https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f kuboard.yaml
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d) 
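The long pipeline above just locates the kuboard-user secret and base64-decodes its token field. The decoding step in isolation, with a stand-in value in place of the real .data.token:

```shell
# Kubernetes Secrets store values base64-encoded; `base64 -d` recovers the plain token.
token=$(echo 'c2VjcmV0LXRva2Vu' | base64 -d)   # stand-in for the real .data.token value
echo "$token"
```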

Open in a browser:

http://******:32567