k8s Notes 3 (crictl, ctr, nerdctl)

  • crictl is a command-line tool that follows the CRI interface specification; it is typically used to inspect and manage the container runtime and images on a kubelet node. When a k8s/k3s cluster pulls images on a node through the kubelet, this interface is what gets used, not ctr.
  • ctr is a client tool for containerd. See: containerd/hosts.md at main · containerd/containerd (github.com)
  • ctr -v prints the containerd version, while crictl -v prints the current k8s version; from that alone it is clear that crictl is meant for k8s.
  • Generally, the crictl command only appears on a host after k8s has been installed. ctr, by contrast, has nothing to do with k8s: once the containerd service is installed, the ctr command is available.
  • The ctr client distinguishes three namespaces: k8s.io, moby, and default. Everything done with crictl above lives in the k8s.io namespace, so listing images with ctr requires the -n flag. crictl only ever sees the k8s.io namespace and has no -n flag (see the comparison after this list).
  • crictl has no tag command; tagging has to be done with nerdctl or ctr, and the namespace must be specified, otherwise the kubelet cannot use the image.
  • nerdctl is recommended: its usage matches docker command syntax. GitHub download link: https://github.com/containerd/nerdctl/releases
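
A quick side-by-side of the namespace behaviour described above (verification commands assumed, not part of the original notes):
# ctr images ls                  // default namespace; usually empty on a k8s node
# ctr -n k8s.io images ls        // the namespace the kubelet and crictl use
# crictl images                  // always k8s.io; no -n flag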

Reference: Containerd ctr、crictl、nerdctl 实战 - 阿里云开发者社区 (aliyun.com)

1. crictl pull from a private Harbor registry (tls: false) -----> crictl has no tag/push/import/export functions (ctr does).

*: The configuration file /etc/containerd/config.toml is used by crictl and the kubelet; ctr cannot use this file. ctr does not go through the CRI, so it does not read the plugins."io.containerd.grpc.v1.cri" configuration. Which configuration does ctr use, then? (See: Containerd客户端工具(CLI)介绍ctr,nerdctl,crictl,podman以及docker_Michaelwubo的博客)

# vim /etc/containerd/config.toml
     [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.31.211:30002"]
          endpoint = ["http://192.168.31.211:30002"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.31.211:30002".tls]
          insecure_skip_verify = true
        [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.31.211:30002".auth]
          username = "admin"
          password = "Harbor12345"
# systemctl restart containerd.service
# crictl pull  192.168.31.211:30002/mizy/firefox:v1.1      
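
To confirm the pull landed in the namespace the kubelet uses, it can be checked from both clients (verification commands assumed, not in the original capture):
# crictl images | grep firefox
# ctr -n k8s.io images ls | grep firefox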

2. ctr tag/push test

# ctr  images pull --user admin:Harbor12345     192.168.31.211:30002/mizy/firefox:v1.1
INFO[0000] trying next host                              error="failed to do request: Head \"https://192.168.31.211:30002/v2/mizy/firefox/manifests/v1.1\": http: server gave HTTP response to HTTPS client" host="192.168.31.211:30002"
ctr: failed to resolve reference "192.168.31.211:30002/mizy/firefox:v1.1": failed to do request: Head "https://192.168.31.211:30002/v2/mizy/firefox/manifests/v1.1": http: server gave HTTP response to HTTPS client
---------ctr pull/push with explicit command-line flags-------------------
# ctr --address=/run/containerd/containerd.sock images pull --skip-verify  --user admin:Harbor12345   --plain-http  192.168.31.211:30002/mizy/firefox:v1.1
# ctr --address=/run/containerd/containerd.sock images push --skip-verify  --user admin:Harbor12345   --plain-http  192.168.31.211:30002/mizy/firefox:v1.1
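
The tag step named in this section's title is not captured above; with ctr it would look roughly like this (a sketch: the v1.2 target tag is hypothetical, and the source reference is assumed to already exist locally):
# ctr images tag 192.168.31.211:30002/mizy/firefox:v1.1 192.168.31.211:30002/mizy/firefox:v1.2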
---------ctr pull/push via hosts.toml-------------------
# vim /etc/containerd/config.toml       // this points containerd at the hosts.toml location
[plugins."io.containerd.grpc.v1.cri".registry]
   config_path = "/etc/containerd/certs.d"
# vim /etc/containerd/certs.d/192.168.31.211:30002/hosts.toml
server = "https://docker.io"    # copied from the upstream docker.io example; for this Harbor host the server would normally be the registry itself
[host."http://192.168.31.211:30002"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
***: Note: no systemctl restart containerd is needed here. mizy/busybox:latest must include the :latest tag, otherwise the image is not found.
# ctr images pull --hosts-dir "/etc/containerd/certs.d"   --user admin:Harbor12345  192.168.31.211:30002/mizy/busybox:latest
# ctr images push  --hosts-dir "/etc/containerd/certs.d"  --user admin:Harbor12345  192.168.31.211:30002/mizy/busybox:latest      
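
For comparison, the docker-style nerdctl client recommended earlier covers the same pull/push like this (a sketch; the flags shown are assumptions, not taken from the original session):
# nerdctl -n k8s.io login -u admin -p Harbor12345 --insecure-registry 192.168.31.211:30002
# nerdctl -n k8s.io pull --insecure-registry 192.168.31.211:30002/mizy/busybox:latest
# nerdctl -n k8s.io push --insecure-registry 192.168.31.211:30002/mizy/busybox:latest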

!!! After adding config_path = "/etc/containerd/certs.d" to config.toml, node k8s-master01 went NotReady; commenting the line out brought it back.

# kubectl get node
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   NotReady   <none>   133d   v1.24.0
# k describe node k8s-master01
Events:
  Type     Reason             Age                  From     Message
  ----     ------             ----                 ----     -------
  Normal   NodeReady          3m6s (x2 over 120m)  kubelet  Node k8s-master01 status is now: NodeReady
  Normal   NodeNotReady       104s (x2 over 10m)   kubelet  Node k8s-master01 status is now: NodeNotReady
  Warning  ImageGCFailed      26s (x2 over 5m26s)  kubelet  rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService
  Warning  ContainerGCFailed  21s (x11 over 11m)   kubelet  rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService      
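
The warnings above mean the kubelet could no longer reach the CRI services, i.e. containerd's CRI plugin failed to load under the new config. A quick sanity check after any config.toml change (a suggested diagnostic, not from the original notes):
# systemctl restart containerd.service
# crictl info > /dev/null && echo "CRI plugin OK"    // an error here means the kubelet will go NotReady too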

3. helm reinstall Harbor (change the expose type to ingress)

# helm uninstall my-release
# k get pvc
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-my-release-harbor-redis-0               Bound    pvc-e9a9e8fd-ef2f-4a5d-972a-7dccb24b93b6   1Gi        RWO            rook-ceph-block   16h
data-my-release-harbor-trivy-0               Bound    pvc-0688991b-4bb9-43cb-b4d9-aa0fa8095ef0   5Gi        RWO            rook-ceph-block   16h
database-data-my-release-harbor-database-0   Bound    pvc-2af938a5-3b1d-4a3e-b61a-26621aecb872   1Gi        RWO            rook-ceph-block   16h
# k get pv 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                   STORAGECLASS      REASON   AGE
pvc-0688991b-4bb9-43cb-b4d9-aa0fa8095ef0   5Gi        RWO            Delete           Bound    default/data-my-release-harbor-trivy-0                  rook-ceph-block            16h
pvc-2af938a5-3b1d-4a3e-b61a-26621aecb872   1Gi        RWO            Delete           Bound    default/database-data-my-release-harbor-database-0      rook-ceph-block            16h
pvc-e9a9e8fd-ef2f-4a5d-972a-7dccb24b93b6   1Gi        RWO            Delete           Bound    default/data-my-release-harbor-redis-0                  rook-ceph-block            16h
# vim values.yaml
expose:
  type: ingress 
  tls:
    enabled: false
  ingress:
# The following setting adds spec.ingressClassName: nginx to the generated Ingress (atc-release-harbor-ingress)
    className: "nginx" 
externalURL: http://core.harbor.domain:31068
# helm install atc-release . -n harbor
NAME: atc-release
LAST DEPLOYED: Wed Oct 19 10:32:54 2022
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at http://core.harbor.domain:31068
For more details, please visit https://github.com/goharbor/harbor
# k get ingress -n harbor
NAME                                CLASS   HOSTS                  ADDRESS        PORTS   AGE
atc-release-harbor-ingress          nginx   core.harbor.domain     10.16.27.143   80      103s
atc-release-harbor-ingress-notary   nginx   notary.harbor.domain   10.16.27.143   80      103s    
The atc-release-harbor-ingress Ingress routes to the atc-release-harbor-core Service.
On the Windows host, add the entry "192.168.31.211 core.harbor.domain" to C:\Windows\System32\drivers\etc\hosts.
Opening http://core.harbor.domain/ in a browser then works. Note: all previous images are gone; this procedure resets Harbor to its initial state.
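
To confirm which backend Service the Ingress selects (a verification step assumed here, consistent with the resources above):
# k describe ingress atc-release-harbor-ingress -n harbor
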
[root@k8s-master01 containerd]# k get pvc  -n harbor
NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
atc-release-harbor-chartmuseum                Bound    pvc-e97a3321-7dd7-40a3-ab51-6cd2b942d666   50Gi       RWO            rook-ceph-block   17m
atc-release-harbor-jobservice                 Bound    pvc-a469a2dc-611a-4b52-abd6-da6497e7471c   1Gi        RWO            rook-ceph-block   17m
atc-release-harbor-jobservice-scandata        Bound    pvc-5a41b3e1-d09b-4df1-b7c9-3e76eb8daf62   1Gi        RWO            rook-ceph-block   17m
data-atc-release-harbor-redis-0               Bound    pvc-8e9dc58f-1c49-4377-9593-366e7c54eb5e   1Gi        RWO            rook-ceph-block   19m
data-atc-release-harbor-trivy-0               Bound    pvc-f60a27e4-039f-4bf4-baef-739eee4971bd   5Gi        RWO            rook-ceph-block   19m
database-data-atc-release-harbor-database-0   Bound    pvc-8c010b25-6d09-4751-9d65-7c966a04db73   1Gi        RWO            rook-ceph-block   19m      

4. Problem: PVC atc-release-harbor-registry does not exist (storage requirement 300G)

# k get pod atc-release-harbor-registry-6ddf6fdcd4-bzw6g   -n harbor -oyaml
Status:         Pending 
 registry-data:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   atc-release-harbor-registry
    ReadOnly:    false
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  36m (x13 over 101m)  default-scheduler  0/9 nodes are available: 9 persistentvolumeclaim "atc-release-harbor-registry" not found. preemption: 0/9 nodes are available: 9 Preemption is not helpful for scheduling.
Change the registry size from 200G to 300Gi:
# vim values.yaml
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used (the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "rook-ceph-block"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 300Gi
      annotations: {}
# helm upgrade atc-release . -n harbor
Release "atc-release" has been upgraded. Happy Helming!
NAME: atc-release
LAST DEPLOYED: Wed Oct 19 12:32:40 2022
NAMESPACE: harbor
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at http://core.harbor.domain:31068
For more details, please visit https://github.com/goharbor/harbor 
# k get pvc -n harbor
NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
atc-release-harbor-registry                   Bound    pvc-ceadb130-45a7-4a2a-9ac2-b1a67f2cbc87   300Gi      RWO            rook-ceph-block   7s      
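
With the PVC bound, the registry pod should leave Pending and start; a quick check (assumed verification command):
# k get pod -n harbor | grep registry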

5. After switching Harbor to ingress, crictl pull fails with "x509: certificate is valid for ingress.local, not core.harbor.domain"

# vim /etc/containerd/config.toml
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."core.harbor.domain:31068".tls]
          insecure_skip_verify = true
        [plugins."io.containerd.grpc.v1.cri".registry.configs."core.harbor.domain:31068".auth]
           username = "admin"
           password = "Harbor12345"
      [plugins."io.containerd.grpc.v1.cri".registry.headers]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."core.harbor.domain:31068"]
          endpoint = ["http://core.harbor.domain:31068"]
# crictl pull core.harbor.domain/mizy/firefox:v1.1 
E1019 17:55:38.252096   32417 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"core.harbor.domain/mizy/firefox:v1.1\": failed to resolve reference \"core.harbor.domain/mizy/firefox:v1.1\": failed to do request: Head \"https://core.harbor.domain/v2/mizy/firefox/manifests/v1.1\": x509: certificate is valid for ingress.local, not core.harbor.domain" image="core.harbor.domain/mizy/firefox:v1.1"
FATA[0000] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "core.harbor.domain/mizy/firefox:v1.1": failed to resolve reference "core.harbor.domain/mizy/firefox:v1.1": failed to do request: Head "https://core.harbor.domain/v2/mizy/firefox/manifests/v1.1": x509: certificate is valid for ingress.local, not core.harbor.domain
# ctr --address=/run/containerd/containerd.sock images pull --skip-verify  --user admin:Harbor12345   --plain-http  core.harbor.domain/mizy/firefox:v1.1
INFO[0000] trying next host                              error="failed to authorize: failed to fetch oauth token: Post \"http://core.harbor.domain:31068/service/token\": dial tcp 192.168.31.218:31068: connect: connection refused" host=core.harbor.domain
ctr: failed to resolve reference "core.harbor.domain/mizy/firefox:v1.1": failed to authorize: failed to fetch oauth token: Post "http://core.harbor.domain:31068/service/token": dial tcp 192.168.31.218:31068: connect: connection refused      
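
One detail worth noting about the config above (an observation, not a tested fix): containerd matches registry.configs keys against the host exactly as it appears in the image reference. The pull references core.harbor.domain with no port, so the "core.harbor.domain:31068" entries never match and default HTTPS verification still runs. A sketch of a key that would match:
# vim /etc/containerd/config.toml
      [plugins."io.containerd.grpc.v1.cri".registry.configs."core.harbor.domain".tls]
        insecure_skip_verify = true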

docker fails as well, even with /etc/docker/daemon.json configured:

# vim /etc/docker/daemon.json
  "insecure-registries": [
    "core.harbor.domain:31068"
  ],
# docker push core.harbor.domain/library/alpine:latest
The push refers to repository [core.harbor.domain/library/alpine]
Get "https://core.harbor.domain/v2/": x509: certificate is valid for ingress.local, not core.harbor.domain
[root@mizy ~]# docker login core.harbor.domain
Username: admin
Password: 
Error response from daemon: Get "https://core.harbor.domain/v2/": x509: certificate is valid for ingress.local, not core.harbor.domain      
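
The same host-matching issue shows up here (a hedged reading of the errors above): docker is dialing https://core.harbor.domain on the default port, so an insecure-registries entry carrying :31068 never applies. An entry matching the actual dial target (untested in these notes):
# vim /etc/docker/daemon.json
  "insecure-registries": [
    "core.harbor.domain"
  ],
# systemctl restart docker
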
  • On a node inside the cluster, docker can log in via the Service CLUSTER-IP, but push/pull fail;
  • From outside the cluster, the ingress domain needs a valid certificate; from inside the cluster, the CLUSTER-IP allows login, but push/pull fail;
  • With config.toml configured as above, the crictl pull test fails as well.
Testing from inside a Pod:
bash-5.1# ping core.harbor.domain
ping: core.harbor.domain: Name does not resolve
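
core.harbor.domain exists only in the external hosts file, not in cluster DNS, hence the resolution failure. Inside a Pod, the chart's Services resolve through their cluster DNS names instead (standard k8s Service DNS; namespace harbor as above):
bash-5.1# nslookup atc-release-harbor-core.harbor.svc.cluster.local
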
# vim /etc/docker/daemon.json
  "insecure-registries": [
     "10.16.195.202:80"
  ],
# k get svc -n harbor
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
atc-release-harbor-core            ClusterIP   10.16.83.160    <none>        80/TCP              4h31m
atc-release-harbor-portal          ClusterIP   10.16.195.202   <none>        80/TCP              4h31m
# docker login 10.16.195.202:80
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@k8s-master01 ~]# docker push 10.16.195.202:80/mizy/busybox
Using default tag: latest
The push refers to repository [10.16.195.202:80/mizy/busybox]
0b16ab2571f4: Layer already exists 
error parsing HTTP 405 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx/1.22.0</center>\r\n</body>\r\n</html>\r\n"
# docker pull 10.16.195.202:80/mizy/busybox:latest
Error response from daemon: error unmarshalling content: invalid character '<' looking for beginning of value
# crictl pull 10.16.195.202:80/mizy/firefox:v1.1
E1019 21:08:34.779146   17486 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"10.16.195.202:80/mizy/firefox:v1.1\": failed to unpack image on snapshotter overlayfs: unexpected media type text/html for sha256:045797f8d9961adff707f5da022afc7e40b99ae3a0fbec76aaa7136a7f8760ea: not found" image="10.16.195.202:80/mizy/firefox:v1.1"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "10.16.195.202:80/mizy/firefox:v1.1": failed to unpack image on snapshotter overlayfs: unexpected media type text/html for sha256:045797f8d9961adff707f5da022afc7e40b99ae3a0fbec76aaa7136a7f8760ea: not found
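
10.16.195.202 is atc-release-harbor-portal, which serves the web UI; the registry API (/v2/) is fronted by atc-release-harbor-core (10.16.83.160). Pushes and pulls against the portal get nginx HTML pages back, which is exactly the 405 / "invalid character '<'" output above. A plausible retry against the core Service (an assumption, not recorded in these notes; it also needs a matching insecure-registries entry):
# docker login 10.16.83.160:80
# docker push 10.16.83.160:80/mizy/busybox:latest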