
Setting up a local KubeSphere + k3s cluster with k3d

Author: 勇往直前的程序员

Preface

With the widespread adoption of containerization technologies such as Docker and Kubernetes (k8s), it has become essential for software developers to understand and master them. However, k8s itself is heavyweight and resource-hungry, which makes it awkward to install on a personal computer and, to some extent, limits our ability to study and experiment with it.

The experts noticed k8s's weight problem long ago, so the Rancher team built k3s, a CNCF-certified lightweight Kubernetes distribution whose API is compatible with Kubernetes and whose binary is only around 100 MB. Small as it is, it has everything you need, which makes it ideal for learning k8s. Combine it with k3d (k3s in Docker) and it gets even better.

Most k3d tutorials online simply copy the official examples and are long out of date, so they are of limited use. Hence this article, which shares my experience building a local k3s cluster, the problems I ran into, and how I solved them. I hope it helps you.

Software Environment

Windows 11

Docker Desktop for Windows v4.16.3

k3d 5.4.7

k3s v1.24.10+k3s1

KubeSphere v3.3.1

It is best not to swap out the k3d, k3s, or KubeSphere versions above arbitrarily.

Installing Docker on Windows is outside the scope of this article; plenty of guides are available online.

Alternatively, you can install a Linux system with VirtualBox or VMware, install Docker inside the virtual machine, and build the k3s cluster there.

Note: with k3s v1.25.6+k3s1 and KubeSphere v3.3.2, I could not get the DevOps plugin to install, whether I enabled it before or after installation, and the logs offered no useful clues.

Preparing the k3d and k3s Files

  1. From the k3d GitHub releases page, download k3d-windows-amd64.exe, rename the file to k3d.exe, and add its directory to the Path environment variable so that the k3d command can be run directly from a terminal (see the sanity check below).
  2. From the k3s GitHub releases, download the offline (airgap) image archive for your architecture. This works around cluster components failing to start because the docker.io domain is unreachable.
[Screenshot: the k3s GitHub repository]

[Screenshot: k3s offline Docker image files, version v1.24.10+k3s1]
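Once k3d.exe is on your Path, a quick sanity check helps (a sketch; the exact output depends on the binary you downloaded):

# Should print the k3d version and the default k3s version it targets
k3d version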

If GitHub is not reachable, consider searching for k3d and k3s through a GitHub mirror (https://hub.nuaa.cf/) and downloading the corresponding files there.

GitHub - k3d-io/k3d: Little helper to run CNCF's k3s in Docker

GitHub - k3s-io/k3s: Lightweight Kubernetes

Note: k3d needs two Docker images of its own (ghcr.io/k3d-io/k3d-proxy:5.4.7 and ghcr.io/k3d-io/k3d-tools:5.4.7). It is best to download them through a Docker registry mirror first, then rename them with docker tag to the names in the parentheses above; see the sketch below.
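A minimal sketch of that pull-and-retag step. Here <mirror-host> is a placeholder for whatever mirror you use, and the repository path on the mirror is an assumption that may differ:

# Pull through a mirror (hypothetical host/path), then retag to the names k3d expects
docker pull <mirror-host>/k3d-io/k3d-proxy:5.4.7
docker tag <mirror-host>/k3d-io/k3d-proxy:5.4.7 ghcr.io/k3d-io/k3d-proxy:5.4.7
docker pull <mirror-host>/k3d-io/k3d-tools:5.4.7
docker tag <mirror-host>/k3d-io/k3d-tools:5.4.7 ghcr.io/k3d-io/k3d-tools:5.4.7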

Preparing the KubeSphere Installation Files

KubeSphere provides Docker images hosted in China, so you can follow the official minimal-installation tutorial, "Minimal KubeSphere on Kubernetes". Below are the two YAML files used to install KubeSphere.

kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
      - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

# Manually create a kubesphere-devops-worker namespace (with v3.3.2 the installation failed complaining this namespace was missing, so on v3.3.1 I add it up front to see whether it helps)
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-devops-worker

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - edgeruntime.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - application.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'


---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-installer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-installer
  template:
    metadata:
      labels:
        app: ks-installer
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        # Changed to a mirror registry hosted in China
        image: registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
          readOnly: true
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time           

cluster-configuration.yaml

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    # adminPassword: ""     # Custom password of the admin user. If the parameter exists but the value is empty, a random password is generated. If the parameter does not exist, P@88w0rd is used.
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  # Registry address replaced with a mirror hosted in China
  local_registry: "registry.cn-beijing.aliyuncs.com"        # Add your private registry address if it is needed.
  # dev_tag: ""               # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.
  etcd:
    monitoring: false       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: localhost  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort

    # apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi # Redis PVC size.
    # Enable this in advance to avoid the devops installation failing
    openldap:
      enabled: true
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi # Minio PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      GPUMonitoring:     # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
        enabled: false
    gpu:                 # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi  # The volume size of Elasticsearch master nodes.
      #   replicas: 1      # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      logMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: false         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: false         # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  # Enable the devops plugin before installation (on k3s v1.25.6+k3s1 with KubeSphere v3.3.2, enabling devops failed for me whether done before or after installation)
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    # resources: {}
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 512Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: false         # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: false         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1  # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi  # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1          # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                           # GPU monitoring-related plug-in installation.
      nvidia_dcgm_exporter:        # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
        enabled: false             # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
        # resources: {}
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: false # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      # Enable this in advance to avoid the devops installation failing
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: false     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
    istio:  # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false
    kubeedge:        # kubeedge configurations
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
            - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true 
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:        # Provide admission policy and rule management, A validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.
    enabled: false   # Enable or disable Gatekeeper.
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    # image: 'alpine:3.15' # There must be an nsenter program in the image
    timeout: 600         # Container timeout, if set to 0, no timeout will be used. The unit is seconds           
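For reference, these switches can also be flipped after installation: the official KubeSphere workflow is to edit the live ClusterConfiguration object and then watch the installer logs again (a sketch of that path, which this article does not otherwise rely on):

# Edit spec.<component>.enabled (e.g. devops) in the live object, save, then re-check the ks-installer logs
kubectl -n kubesphere-system edit clusterconfiguration ks-installer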

Installing the k3s Cluster

Create a k3s cluster named local (run k3d --help for usage):

k3d cluster create local -i rancher/k3s:v1.24.10-k3s1 --api-port 6443 -p 30880:30880@loadbalancer --agents 1 --servers 1  --registry-config registry.yaml --verbose --trace           

registry.yaml

mirrors:
  # Local private Docker registry
  "192.168.138.1:5000":
    endpoint:
      - http://192.168.138.1:5000
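If cluster components still fail to pull images from docker.io, the airgap archive downloaded earlier can be loaded into the cluster nodes. A sketch, assuming the archive is named k3s-airgap-images-amd64.tar and sits in the current directory:

# Import the offline images into the containerd store of every node in cluster "local"
k3d image import k3s-airgap-images-amd64.tar -c local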

Wait for the k3s cluster to finish initializing (it takes a while), then check with kubectl get pod -n kube-system; output like the example below means initialization is done.

[Screenshot: cluster initialization complete]
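For orientation, a ready kube-system on k3s looks roughly like this (pod-name hashes, ages, and the exact pod set vary with the k3s version):

NAME                           READY   STATUS      RESTARTS   AGE
coredns-...                    1/1     Running     0          2m
local-path-provisioner-...     1/1     Running     0          2m
metrics-server-...             1/1     Running     0          2m
helm-install-traefik-crd-...   0/1     Completed   0          2m
helm-install-traefik-...       0/1     Completed   0          2m
svclb-traefik-...              2/2     Running     0          90s
traefik-...                    1/1     Running     0          90s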

Installing KubeSphere

Save the modified kubesphere-installer.yaml and cluster-configuration.yaml to a local directory (mine is D:\study\kubesphere).

Open a cmd terminal in the D:\study\kubesphere directory and run the following two commands:

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml           
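One common hiccup, not specific to this setup: if the second apply fails with "no matches for kind ClusterConfiguration", the CRD created by the first file has not finished registering yet. Wait a moment, confirm the CRD exists, then re-apply:

kubectl get crd clusterconfigurations.installer.kubesphere.io
kubectl apply -f cluster-configuration.yaml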

Then list the pods with kubectl -n kubesphere-system get pod, find the pod whose name starts with ks-installer- (say ks-installer-xxx), and follow the installation log with:

kubectl -n kubesphere-system -v 8 logs ks-installer-xxx -f           
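An alternative that avoids copying the pod hash: kubectl can resolve a Deployment to one of its pods, so the following works the same in cmd, PowerShell, or bash (verbosity flag omitted):

# Follow the installer log via the deployment instead of the pod name
kubectl -n kubesphere-system logs deploy/ks-installer -f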

Output like the example below means the installation succeeded (it takes quite a while, so be patient).

[Screenshot: KubeSphere installation success log]
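The tail of a successful run looks roughly like this (a paraphrased sample rather than my exact log; the console address and account details come from the installer's summary):

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://<node-ip>:30880
Account: admin
Password: P@88w0rd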

At this point the entire k3s cluster and KubeSphere setup is complete. Enter localhost:30880 in the browser address bar and, barring surprises, you will see the KubeSphere login page (the first visit is a bit slow).

Problems Encountered and Solutions

After entering admin/P@88w0rd, logging in to KubeSphere fails with a message like:

request to http://ks-apiserver/oauth/token failed, reason: connect ECONNREFUSED 10.43.51.184:80

Root-cause analysis:

The message is clear enough: the network is unreachable. At first I suspected that one of the KubeSphere Services had been created incorrectly, so requests could not reach ks-apiserver.

# Inspect the services (ks-apiserver exists)
kubectl -n kubesphere-system get service -o wide
# Inspect the endpoints to confirm the service matched a pod (ks-apiserver maps to 10.42.1.24:9090; nothing abnormal)
kubectl -n kubesphere-system get endpoints -o wide

So the pods and services were fine. Where was the problem, then? With that question in mind, I checked the logs of the k3d-local-server-0 node and found this error:

"Failed to execute iptables-restore" err=<           

The log points to iptables. Since k8s relies on iptables by default for pod-to-pod communication, I opened a terminal on k3d-local-server-0 (via the container list in Docker Desktop, or with docker exec -it) and checked whether the iptables rules contained anything for port 9090:

iptables-save | grep 9090
# The output was empty, so the rule really was missing; the immediate cause was found (I did not dig into the deeper reason)

Solution: restart the cluster with k3d (if one restart does not help, try a few more times):

k3d cluster stop local
k3d cluster start local
# After the restart, open the k3d-local-server-0 terminal again; iptables-save | grep 9090 now returns results

# Check whether the kubesphere-system pods start normally (if some pods look unhealthy, give them more time)
kubectl -n kubesphere-system get pod -o wide
[Screenshot: after a cluster restart, some pods may restart repeatedly for a while]

Open http://localhost:30880 again, enter the account and password, and the KubeSphere console appears.
