Image pull error when initializing the cluster during offline installation #2431

Closed
zhangmingxian opened this issue Oct 16, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@zhangmingxian

What version of KubeKey has the issue?

kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.6", GitCommit:"5cad5b5357e80fee211faed743c8f9d452c13b5b", GitTreeState:"clean", BuildDate:"2024-09-03T07:23:37Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

What is your OS environment?

ubuntu22.04

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: local-test-registry, address: 192.168.10.130, internalAddress: 192.168.10.130, user: hpczz, password: "hpczz"}
  - {name: local-test-master-01, address: 192.168.10.131, internalAddress: 192.168.10.131, user: hpczz, password: "hpczz"}
  - {name: local-test-node-01, address: 192.168.10.132, internalAddress: 192.168.10.132, user: hpczz, password: "hpczz"}
  - {name: local-test-node-02, address: 192.168.10.133, internalAddress: 192.168.10.133, user: hpczz, password: "hpczz"}
  roleGroups:
    etcd:
    - local-test-master-01
    control-plane:
    - local-test-master-01
    worker:
    - local-test-node-01
    - local-test-node-02
    registry:
    - local-test-registry
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.26.15
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "dockerhub.deepln.com":
        username: admin
        password: Harbor12345
        certsPath: "/etc/docker/certs.d/dockerhub.deepln.com"
    privateRegistry: "dockerhub.deepln.com"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: false
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600
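
Note the combination in the config above: containerManager is containerd, while registry.auths.certsPath points at the Docker certificate directory. If containerd on the nodes is not actually configured to trust the Harbor CA at that path, every pull from the self-signed registry fails TLS verification, which is exactly what the log below shows. A minimal sketch of one common workaround, assuming the Harbor CA file already sits at the certsPath shown above (file names and paths are assumptions, not part of the original report):

# Run on every node that pulls from dockerhub.deepln.com (Ubuntu 22.04).
# Adding the Harbor CA to the system trust store lets containerd, a Go
# binary that uses the system roots, verify the registry certificate.
sudo cp /etc/docker/certs.d/dockerhub.deepln.com/ca.crt \
     /usr/local/share/ca-certificates/dockerhub.deepln.com.crt
sudo update-ca-certificates
sudo systemctl restart containerd
# Sanity check through the CRI:
sudo crictl pull dockerhub.deepln.com/kubesphereio/pause:3.9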

A clear and concise description of what happened.

Harbor and the offline artifact were created successfully; the error occurred while creating the cluster. The error log is below.

Relevant log output

[init] Using Kubernetes version: v1.26.15
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
        [WARNING ImagePull]: failed to pull image dockerhub.deepln.com/kubesphereio/kube-apiserver:v1.26.15: output: E1016 18:15:47.031538    4708 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-apiserver:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-apiserver:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-apiserver/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.deepln.com/kubesphereio/kube-apiserver:v1.26.15"
time="2024-10-16T18:15:47+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-apiserver:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-apiserver:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-apiserver/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image dockerhub.deepln.com/kubesphereio/kube-controller-manager:v1.26.15: output: E1016 18:15:47.154125    4733 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-controller-manager:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-controller-manager:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-controller-manager/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.deepln.com/kubesphereio/kube-controller-manager:v1.26.15"
time="2024-10-16T18:15:47+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-controller-manager:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-controller-manager:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-controller-manager/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image dockerhub.deepln.com/kubesphereio/kube-scheduler:v1.26.15: output: E1016 18:15:47.277596    4759 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-scheduler:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-scheduler:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-scheduler/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.deepln.com/kubesphereio/kube-scheduler:v1.26.15"
time="2024-10-16T18:15:47+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-scheduler:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-scheduler:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-scheduler/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image dockerhub.deepln.com/kubesphereio/kube-proxy:v1.26.15: output: E1016 18:15:47.396980    4787 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-proxy:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-proxy:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-proxy/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.deepln.com/kubesphereio/kube-proxy:v1.26.15"
time="2024-10-16T18:15:47+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/kube-proxy:v1.26.15\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/kube-proxy:v1.26.15\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/kube-proxy/manifests/v1.26.15\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image dockerhub.deepln.com/kubesphereio/pause:3.9: output: E1016 18:15:47.571104    4813 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/pause:3.9\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/pause:3.9\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/pause/manifests/3.9\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.deepln.com/kubesphereio/pause:3.9"
time="2024-10-16T18:15:47+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/pause:3.9\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/pause:3.9\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/pause/manifests/3.9\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image dockerhub.deepln.com/kubesphereio/coredns:1.9.3: output: E1016 18:15:47.702678    4839 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/coredns:1.9.3\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/coredns:1.9.3\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/coredns/manifests/1.9.3\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.deepln.com/kubesphereio/coredns:1.9.3"
time="2024-10-16T18:15:47+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.deepln.com/kubesphereio/coredns:1.9.3\": failed to resolve reference \"dockerhub.deepln.com/kubesphereio/coredns:1.9.3\": failed to do request: Head \"https://dockerhub.deepln.com/v2/kubesphereio/coredns/manifests/1.9.3\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local local-test-master-01 local-test-master-01.cluster.local local-test-node-01 local-test-node-01.cluster.local local-test-node-02 local-test-node-02.cluster.local local-test-registry local-test-registry.cluster.local localhost] and IPs [10.233.0.1 192.168.10.131 127.0.0.1 192.168.10.130 192.168.10.132 192.168.10.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
18:19:50 CST stdout: [local-test-master-01]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1016 18:19:50.158311    5370 reset.go:106] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.10.131:6443: connect: connection refused
[preflight] Running pre-flight checks
W1016 18:19:50.158394    5370 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
18:19:50 CST message: [local-test-master-01]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1016 18:15:46.780452    4666 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[... the kubeadm output above is repeated verbatim inside the error message ...]
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
18:19:50 CST retry: [local-test-master-01]
18:23:58 CST stdout: [local-test-master-01]
W1016 18:19:55.556850    5556 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[... the retry produces the same output: identical ImagePull warnings for all six images from dockerhub.deepln.com (x509: certificate signed by unknown authority), followed by the same wait-control-plane timeout ...]
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
18:23:59 CST stdout: [local-test-master-01]
[... the same kubeadm reset output as after the first attempt ...]
18:23:59 CST message: [local-test-master-01]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
[... the same kubeadm output is repeated verbatim inside the error message ...]
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
18:23:59 CST retry: [local-test-master-01]
18:24:19 CST stdout: [local-test-master-01]
W1016 18:24:04.042093    6475 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.26.15
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ExternalEtcdVersion]: Get "https://192.168.10.131:2379/version": dial tcp 192.168.10.131:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
18:24:19 CST stdout: [local-test-master-01]
[... kubeadm reset output, essentially the same as above ...]
18:24:19 CST message: [local-test-master-01]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
[... the same preflight output repeated verbatim ...]
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
18:24:19 CST failed: [local-test-master-01]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [local-test-master-01] [KubeadmInit] exec failed after 3 retries: init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
[... the same preflight output repeated verbatim ...]
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1

Additional information

No response
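
Every retry above fails the same way: containerd cannot verify the certificate served by dockerhub.deepln.com ("x509: certificate signed by unknown authority"). Two illustrative checks, run from any node, to confirm where the trust breaks (both commands are assumptions added for diagnosis, not from the original report; the CA path comes from certsPath in the config above):

# Show who actually signed the certificate the registry serves:
openssl s_client -connect dockerhub.deepln.com:443 \
    -servername dockerhub.deepln.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
# If this handshake succeeds (even with an HTTP 401 body), the CA file
# matches the registry and only the nodes' trust configuration is wrong:
curl --cacert /etc/docker/certs.d/dockerhub.deepln.com/ca.crt \
    https://dockerhub.deepln.com/v2/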

@zhangmingxian zhangmingxian added the bug (Something isn't working) label on Oct 16, 2024
@zhangmingxian zhangmingxian changed the title from "Error when initializing the cluster during offline installation" to "Error when initializing the cluster during offline installation of 1.26.15" on Oct 16, 2024
@zhangmingxian zhangmingxian changed the title from "Error when initializing the cluster during offline installation of 1.26.15" to "Error when initializing the cluster during offline installation" on Oct 16, 2024
@zhangmingxian zhangmingxian changed the title from "Error when initializing the cluster during offline installation" to "Image pull error when initializing the cluster during offline installation" on Oct 17, 2024
@Meizuamy

I ran into the same problem. What turned out to be the cause in your case?

@Meizuamy

@zhangmingxian

@zhangmingxian
Author

I ran into the same problem. What turned out to be the cause in your case?

I've forgotten the exact fix, but two things mattered: the required images must exist in the offline artifact, and Harbor must have the corresponding projects created (a sketch of creating them through the Harbor API follows this list):
kubesphereio
csiplugin
mirrorgooglecontainers
kubesphere
prom
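
A minimal sketch of creating those projects through the Harbor v2 REST API, using the registry address and admin credentials from the config above; the project names come from the list in this comment, and -k skips TLS verification for the self-signed instance:

for project in kubesphereio csiplugin mirrorgooglecontainers kubesphere prom; do
  curl -k -u admin:Harbor12345 -X POST \
       -H "Content-Type: application/json" \
       "https://dockerhub.deepln.com/api/v2.0/projects" \
       -d "{\"project_name\": \"${project}\", \"public\": true}"
done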
