Containerd image management, the containerd runtime's container management, and configuring registry mirrors for containerd

 Containerd image management commands

  • docker manages images with the docker images command
  • standalone containerd manages images with the ctr images command, containerd's own CLI
  • under Kubernetes, containerd images are managed with the crictl images command, the dedicated CLI from the Kubernetes community

1. Command usage

 
  1. [root@ceotos_7][15:36:10][OK] ~
  2. #ctr --help
  3. NAME:
  4. ctr -
  5.         __
  6.   _____/ /______
  7.  / ___/ __/ ___/
  8. / /__/ /_/ /
  9. \___/\__/_/
  10.  
  11. containerd CLI
  12.  
  13.  
  14. USAGE:
  15. ctr [global options] command [command options] [arguments...]
  16.  
  17. VERSION:
  18. v1.6.19
  19.  
  20. DESCRIPTION:
  21.  
  22. ctr is an unsupported debug and administrative client for interacting
  23. with the containerd daemon. Because it is unsupported, the commands,
  24. options, and operations are not guaranteed to be backward compatible or
  25. stable from release to release of the containerd project.
  26.  
  27. COMMANDS:
  28. plugins, plugin provides information about containerd plugins
  29. version print the client and server versions
  30. containers, c, container manage containers
  31. content manage content
  32. events, event display containerd events
  33. images, image, i manage images
  34. leases manage leases
  35. namespaces, namespace, ns manage namespaces
  36. pprof provide golang pprof outputs for containerd
  37. run run a container
  38. snapshots, snapshot manage snapshots
  39. tasks, t, task manage tasks
  40. install install a new package
  41. oci OCI tools
  42. shim interact with a shim directly
  43. help, h Shows a list of commands or help for one command
  44.  
  45. GLOBAL OPTIONS:
  46. --debug enable debug output in logs
  47. --address value, -a value address for containerd's GRPC server (default: "/run/containerd/containerd.sock") [$CONTAINERD_ADDRESS]
  48. --timeout value total timeout for ctr commands (default: 0s)
  49. --connect-timeout value timeout for connecting to containerd (default: 0s)
  50. --namespace value, -n value namespace to use with commands (default: "default") [$CONTAINERD_NAMESPACE]
  51. --help, -h show help
  52. --version, -v print the version
  53.  
 

2. Containerd image management

2.1 Listing images (all five forms below work)

 
  1. [root@ceotos_7][15:36:57][OK] ~
  2. #ctr i ls
  3. REF TYPE DIGEST SIZE PLATFORMS LABELS
  4. [root@ceotos_7][16:09:19][OK] ~
  5. #ctr image ls
  6. REF TYPE DIGEST SIZE PLATFORMS LABELS
  7. [root@ceotos_7][16:09:28][OK] ~
  8. #ctr image list
  9. REF TYPE DIGEST SIZE PLATFORMS LABELS
  10. [root@ceotos_7][16:09:40][OK] ~
  11. #ctr i list
  12. REF TYPE DIGEST SIZE PLATFORMS LABELS
  13. [root@ceotos_7][16:09:46][OK] ~
  14. #ctr images ls
  15. REF TYPE DIGEST SIZE PLATFORMS LABELS
 

Images are namespaced as well (specify the namespace with -n):

 
  1. [root@ceotos_7][16:09:56][OK] ~
  2. #ctr -n k8s.io image ls
  3. REF TYPE DIGEST SIZE PLATFORMS LABELS
  4. docker.io/library/nginx:stable application/vnd.docker.distribution.manifest.list.v2+json sha256:362b3204bf9c7252f41df91924b72f311a93c108e5bcb806854715c0efffd5f7 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
  5. sha256:8c9eabeac475449c72ad457ccbc014788a02dbbc64f24158b0a40fdc5def2dc9 application/vnd.docker.distribution.manifest.list.v2+json sha256:362b3204bf9c7252f41df91924b72f311a93c108e5bcb806854715c0efffd5f7 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
 

2.2 Pulling images

containerd supports OCI-standard images, so official Docker images or images built from a Dockerfile can be used directly.

 
  1. [root@ceotos_7][16:10:06][OK] ~
  2. #ctr images pull --help
  3. NAME:
  4. ctr images pull - pull an image from a remote
  5.  
  6. USAGE:
  7. ctr images pull [command options] [flags] <ref>
  8.  
  9. DESCRIPTION:
  10. Fetch and prepare an image for use in containerd.
  11.  
  12. After pulling an image, it should be ready to use the same reference in a run
  13. command. As part of this process, we do the following:
  14. 1. Fetch all resources into containerd.
  15. 2. Prepare the snapshot filesystem with the pulled resources.
  16. 3. Register metadata for the image.
  17. OPTIONS:
  18. --skip-verify, -k skip SSL certificate validation
  19. --plain-http allow connections using plain HTTP
  20. --user value, -u value user[:password] Registry user and password
  21. --refresh value refresh token for authorization server
  22. --hosts-dir value Custom hosts configuration directory
  23. --tlscacert value path to TLS root CA
  24. --tlscert value path to TLS client certificate
  25. --tlskey value path to TLS client key
  26. --http-dump dump all HTTP request/responses when interacting with container registry
  27. --http-trace enable HTTP tracing for registry interactions
  28. --snapshotter value snapshotter name. Empty value stands for the default value. [$CONTAINERD_SNAPSHOTTER]
  29. --label value labels to attach to the image
  30. --platform value Pull content from a specific platform
  31. --all-platforms pull content and metadata from all platforms
  32. --all-metadata Pull metadata for all platforms
  33. --print-chainid Print the resulting image's chain ID
  34. --max-concurrent-downloads value Set the max concurrent downloads for each pull (default: 0)
 

2.2.1 Pulling a single platform

 
  1. [root@node1 ~]# ctr images pull --platform linux/amd64 docker.io/library/nginx:alpine
  2. docker.io/library/nginx:alpine: resolved |++++++++++++++++++++++++++++++++++++++|
  3. index-sha256:455c39afebd4d98ef26dd70284aa86e6810b0485af5f4f222b19b89758cabf1e: done |++++++++++++++++++++++++++++++++++++++|
  4. manifest-sha256:0f2ab24c6aba5d96fcf6e7a736333f26dca1acf5fa8def4c276f6efc7d56251f: done |++++++++++++++++++++++++++++++++++++++|
  5. layer-sha256:4342b1ab302e894161372b32fe2976899a978bf8ff2241fb1655dc25e6645a34: done |++++++++++++++++++++++++++++++++++++++|
  6. config-sha256:19dd4d73108a1feefc29d299f3727467ac02486c83474fc3979e4a7637291fe6: done |++++++++++++++++++++++++++++++++++++++|
  7. layer-sha256:ca7dd9ec2225f2385955c43b2379305acd51543c28cf1d4e94522b3d94cce3ce: done |++++++++++++++++++++++++++++++++++++++|
  8. layer-sha256:76a48b0f58980a64d28bc3575ae4733eb337f7b82403559122b13d5e2ced3921: done |++++++++++++++++++++++++++++++++++++++|
  9. layer-sha256:2f12a0e7c01d607251a4040fa41518fd2542f3ebab83a6f7817867d0de111c96: done |++++++++++++++++++++++++++++++++++++++|
  10. layer-sha256:1a7b9b9bbef6853211515e42f58be7763749950c244a0c485bb4afd1946e06d7: done |++++++++++++++++++++++++++++++++++++++|
  11. layer-sha256:b704883c57afcf77f6bc48709943bcf808c9e9945d7e04926be41226fa415d33: done |++++++++++++++++++++++++++++++++++++++|
  12. elapsed: 8.6 s total: 7.7 Mi (915.8 KiB/s)
  13. unpacking linux/amd64 sha256:455c39afebd4d98ef26dd70284aa86e6810b0485af5f4f222b19b89758cabf1e...
  14. done: 488.54181ms
  15. [root@node1 ~]# uname -a
  16. Linux node1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
 
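When scripting pulls, the --platform string must match the target architecture. The helper below is a sketch (the function name is our own) mapping common `uname -m` values to containerd platform strings:

```shell
# Sketch: map `uname -m` output to a platform string accepted by
# `ctr images pull --platform`. Covers the common architectures only.
arch_to_platform() {
  case "$1" in
    x86_64)  echo "linux/amd64" ;;
    aarch64) echo "linux/arm64" ;;
    armv7l)  echo "linux/arm/v7" ;;
    ppc64le) echo "linux/ppc64le" ;;
    s390x)   echo "linux/s390x" ;;
    *)       echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

arch_to_platform x86_64
# Usage: ctr images pull --platform "$(arch_to_platform "$(uname -m)")" docker.io/library/nginx:alpine
```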

2.2.2 Pulling all platforms

 
  1. [root@node1 ~]# ctr images pull --all-platforms docker.io/library/nginx:latest
  2. ..........
  3. layer-sha256:5b221a36b4338b09410bbe89507e41d0b7f29bca528624270cdae477a994a020: done |++++++++++++++++++++++++++++++++++++++|
  4. layer-sha256:fcd48e11f0ee5b433a823d2ce982c083cc16daf0de2c64acd8f58f0fee3b4abf: done |++++++++++++++++++++++++++++++++++++++|
  5. layer-sha256:2c61dffb3feda2a72f267842bc181dda76c16a6902616dbf8379f2e2175aa046: done |++++++++++++++++++++++++++++++++++++++|
  6. elapsed: 38.7s total: 395.6 (10.2 MiB/s)
  7. unpacking linux/amd64 sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  8. unpacking linux/arm/v5 sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  9. unpacking linux/arm/v7 sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  10. unpacking linux/arm64/v8 sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  11. unpacking linux/386 sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  12. unpacking linux/mips64le sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  13. unpacking linux/ppc64le sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  14. unpacking linux/s390x sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d...
  15. done: 30.090253019s
 

2.3 Listing all images

 
  1. [root@ceotos_7][16:31:14][OK] ~
  2. #ctr -n k8s.io image ls
  3. REF TYPE DIGEST SIZE PLATFORMS LABELS
  4. docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:207332a7d1d17b884b5a0e94bcf7c0f67f1a518b9bf8da6c2ea72c83eec889b8 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
  5. docker.io/library/nginx:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2 54.3 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
  6. docker.io/library/nginx:stable application/vnd.docker.distribution.manifest.list.v2+json sha256:362b3204bf9c7252f41df91924b72f311a93c108e5bcb806854715c0efffd5f7 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
  7. sha256:2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0 application/vnd.docker.distribution.manifest.list.v2+json sha256:207332a7d1d17b884b5a0e94bcf7c0f67f1a518b9bf8da6c2ea72c83eec889b8 15.9 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
  8. sha256:8c9eabeac475449c72ad457ccbc014788a02dbbc64f24158b0a40fdc5def2dc9 application/vnd.docker.distribution.manifest.list.v2+json sha256:362b3204bf9c7252f41df91924b72f311a93c108e5bcb806854715c0efffd5f7 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
  9. sha256:904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8 application/vnd.docker.distribution.manifest.list.v2+json sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2 54.3 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
 
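Note that every image shows up twice in the k8s.io listing: once under its tag reference and once under its image-ID reference (the sha256:... REF), both pointing at the same DIGEST. To count distinct images, deduplicate on the digest column. A sketch over a captured sample (live usage would pipe `ctr -n k8s.io i ls` instead of the sample variable):

```shell
# Trimmed sample of `ctr -n k8s.io image ls` output: the same digest
# appears under both the tag REF and the image-ID REF.
sample='REF TYPE DIGEST SIZE
docker.io/library/nginx:stable manifest.list.v2+json sha256:362b3204 54.2MiB
sha256:8c9eabea manifest.list.v2+json sha256:362b3204 54.2MiB
docker.io/library/nginx:alpine manifest.list.v2+json sha256:207332a7 15.9MiB'

# Skip the header row, keep the DIGEST column, deduplicate.
echo "$sample" | awk 'NR > 1 { print $3 }' | sort -u
```

This prints two distinct digests for the three listed rows.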

2.4 Mounting an image

 
  1. [root@node1 ~]# ctr images mount docker.io/library/nginx:latest /mnt
  2. sha256:8b811a30cb94c227fb2ae61a2a1ec1e93381dbef06f9ea6b5c06df4f27651fed
  3. /mnt
  4. [root@node1 ~]# ls /mnt
  5. bin boot dev docker-entrypoint.d docker-entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
 

2.5 Unmounting an image

 
  1. [root@node1 ~]# umount /mnt/
  2. [root@node1 ~]# ls /mnt/
 

2.6 Exporting images

Notes:

  • --all-platforms exports the image for all platforms. This applies to version 1.6; version 1.4 did not require the option.

2.6.1 Exporting all platforms

 
  1. [root@node1 ~]# ctr i export --all-platforms nginx.img docker.io/library/nginx:latest
  2. [root@node1 ~]# ls
  3. etc nginx.img opt usr
 

2.6.2 Exporting a single platform

 
  1. [root@node1 ~]# ctr i export --platform linux/amd64 nginx.img docker.io/library/nginx:latest
  2. [root@node1 ~]# ll
  3. total 183296
  4. drwxr-xr-x 4 root root 51 Oct 25 01:43 etc
  5. -rw-r--r-- 1 root root 56853504 Nov 16 10:56 nginx.img
  6. drwxr-xr-x 4 root root 35 Oct 25 01:42 opt
  7. drwxr-xr-x 3 root root 19 Oct 25 01:41 usr
 
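`ctr i export` writes an OCI image layout archive: an index.json, an oci-layout marker file, and content-addressed blobs. To illustrate the expected shape without a running containerd, the sketch below builds a stand-in layout and lists it the same way you would inspect a real nginx.img with `tar -tf`:

```shell
# Stand-in for the structure `ctr i export` produces (per the OCI image
# layout spec); a real export carries actual manifests and layer blobs.
workdir="$(mktemp -d)"
mkdir -p "$workdir/layout/blobs/sha256"
printf '{"imageLayoutVersion":"1.0.0"}\n' > "$workdir/layout/oci-layout"
printf '{"schemaVersion":2,"manifests":[]}\n' > "$workdir/layout/index.json"
tar -C "$workdir/layout" -cf "$workdir/nginx.img" .

# Inspect the archive as you would a real export:
listing="$(tar -tf "$workdir/nginx.img" | sort)"
echo "$listing"
rm -r "$workdir"
```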

2.7 Removing images

Notes:

  • rm, remove, delete, and del can all be used to remove images
  • multiple images can be removed together by appending them to the command
 
  1. [root@node1 ~]# ctr images rm docker.io/library/nginx:alpine
  2. docker.io/library/nginx:alpine
  3. [root@node1 ~]# ctr i ls
  4. REF TYPE DIGEST SIZE PLATFORMS LABELS
  5. docker.io/library/nginx:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
  6. docker.io/library/nginx:stable application/vnd.docker.distribution.manifest.list.v2+json sha256:6f93c7c8b3ecc6ff99a743564c9701278d3f678bbe09d12dd3019bbb3d534f92 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
 

Remove all images:

 
  1. [root@node1 ~]# ctr i rm $(ctr i ls -q)
  2. docker.io/library/mysql:latest
 
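If you derive refs from the full `ctr i ls` table instead of `ctr i ls -q` (say, to filter rows first), keep only the first column; word-splitting whole rows would hand the TYPE and DIGEST fields to `ctr i rm` as bogus references. A sketch over a captured sample:

```shell
# Trimmed sample of `ctr i ls`; live usage: ctr i ls | awk 'NR>1 {print $1}'
table='REF TYPE DIGEST SIZE
docker.io/library/nginx:latest manifest.list.v2+json sha256:d08d96 54.2MiB
docker.io/library/mysql:latest manifest.list.v2+json sha256:25aace 150.0MiB'

# Skip the header row, keep only the REF column.
refs="$(echo "$table" | awk 'NR > 1 { print $1 }')"
echo "$refs"
# Then remove them all at once: ctr i rm $refs
```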

2.8 Importing images

 
  1. [root@node1 ~]# ctr i ls
  2. REF TYPE DIGEST SIZE PLATFORMS LABELS
  3. docker.io/library/nginx:stable application/vnd.docker.distribution.manifest.list.v2+json sha256:6f93c7c8b3ecc6ff99a743564c9701278d3f678bbe09d12dd3019bbb3d534f92 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
  4. [root@node1 ~]# ctr images import nginx.img
  5. unpacking docker.io/library/nginx:latest (sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d)...done
  6. [root@node1 ~]# ctr i ls
  7. REF TYPE DIGEST SIZE PLATFORMS LABELS
  8. docker.io/library/nginx:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
  9. docker.io/library/nginx:stable application/vnd.docker.distribution.manifest.list.v2+json sha256:6f93c7c8b3ecc6ff99a743564c9701278d3f678bbe09d12dd3019bbb3d534f92 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
 

2.9 Retagging images

Syntax:

  • ctr images tag [options] SOURCE_REF TARGET_REF [TARGET_REF ...]  (any number of target tags may follow)
 
  1. [root@node1 ~]# ctr i tag docker.io/library/mysql:latest mysql:latest
  2. mysql:latest
  3. [root@node1 ~]# ctr i ls
  4. REF TYPE DIGEST SIZE PLATFORMS LABELS
  5. docker.io/library/mysql:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:25aace9734db96ae09c24c6a2eeb6db4720c41d493de352eb76007eddf437fbe 150.0 MiB linux/amd64,linux/arm64/v8 -
  6. docker.io/library/nginx:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:d08d964023fe853b491e1f5eb182499653722c58cc4c294f2675f39d7c6a209d 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
  7. docker.io/library/nginx:stable application/vnd.docker.distribution.manifest.list.v2+json sha256:6f93c7c8b3ecc6ff99a743564c9701278d3f678bbe09d12dd3019bbb3d534f92 54.2 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
  8. mysql:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:25aace9734db96ae09c24c6a2eeb6db4720c41d493de352eb76007eddf437fbe 150.0 MiB linux/amd64,linux/arm64/v8
  9. [root@node1 ~]# ctr i tag docker.io/library/mysql:latest mysql:12345 mysql:123
  10. mysql:12345
  11. mysql:123
 

3. Containerd container management

3.1 Listing containers

 
  1. [root@node1 ~]# ctr c ls
  2. CONTAINER IMAGE RUNTIME
  3. [root@node1 ~]# ctr container ls
  4. CONTAINER IMAGE RUNTIME
  5. [root@node1 ~]# ctr containers ls
  6. CONTAINER IMAGE RUNTIME
 

3.2 Listing container processes (tasks)

 
  1. [root@node1 ~]# ctr t ls
  2. TASK PID STATUS
  3. [root@node1 ~]# ctr tasks ls
  4. TASK PID STATUS
  5. [root@node1 ~]# ctr task ls
  6. TASK PID STATUS   
 

3.3 Creating a static container

 
  1. [root@node1 ~]# ctr containers create docker.io/library/nginx:latest nginx
  2. ctr: image "docker.io/library/nginx:latest": not found
  3. [root@node1 ~]# ctr images pull docker.io/library/nginx:latest
  4. docker.io/library/nginx:latest: resolved |++++++++++++++++++++++++++++++++++++++|
  5. index-sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba: done |++++++++++++++++++++++++++++++++++++++|
  6. manifest-sha256:6ad8394ad31b269b563566998fd80a8f259e8decf16e807f8310ecc10c687385: done |++++++++++++++++++++++++++++++++++++++|
  7. layer-sha256:9802a2cfdb8d8504273e75f503a7c9fb4594782653b8252ec3073ae7b850a235: done |++++++++++++++++++++++++++++++++++++++|
  8. config-sha256:88736fe827391462a4db99252117f136b2b25d1d31719006326a437bb40cb12d: done |++++++++++++++++++++++++++++++++++++++|
  9. layer-sha256:a603fa5e3b4127f210503aaa6189abf6286ee5a73deeaab460f8f33ebc6b64e2: done |++++++++++++++++++++++++++++++++++++++|
  10. layer-sha256:c39e1cda007e48da53e4b20c928bcefa9e10958c7461c1ca645b5eed9a2ba029: done |++++++++++++++++++++++++++++++++++++++|
  11. layer-sha256:90cfefba34d7c6a81fe1dfbb4a579998c65ff49092052967f63ddc48f6be85d9: done |++++++++++++++++++++++++++++++++++++++|
  12. layer-sha256:a38226fb7abac764207dffedaee902fdf63c9d4ec076236fb632fe991c4d4b4f: done |++++++++++++++++++++++++++++++++++++++|
  13. layer-sha256:62583498bae6886d90f3b1cbad2ebbeb68b66948161413087ff27b05cb75b994: done |++++++++++++++++++++++++++++++++++++++|
  14. elapsed: 10.7s total: 54.2 M (5.1 MiB/s)
  15. unpacking linux/amd64 sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba...
  16. done: 2.371879085s
  17. [root@node1 ~]# ctr containers create docker.io/library/nginx:latest nginx
  18. [root@node1 ~]# ctr containers ls
  19. CONTAINER IMAGE RUNTIME
  20. nginx docker.io/library/nginx:latest io.containerd.runc.v2
  21. [root@node1 ~]# ctr tasks ls
  22. TASK PID STATUS
 

Notes:

  • After creating a container with ctr container create, the container is not running; it is only a static container. The container object is just a data structure holding the resources and configuration needed to run a container: the namespaces, rootfs, and container configuration have all been initialized, but the user process (nginx in this example) has not started. Use the ctr tasks command to get a running (dynamic) container.
  • ctr requires the image to be present locally when creating a container; otherwise it errors.

3.4 Starting a static container as a dynamic container

 
  1. [root@node1 ]# ctr tasks start nginx
  2. ctr: failed to start shim: failed to resolve runtime path: runtime "io.containerd.runc.v2" binary not installed "containerd-shim-runc-v2": file does not exist: unknown
  3. [root@node1 bin]# cp /root/usr/local/bin/containerd-shim-runc-v2 /usr/bin/
  4. [root@node1 bin]# ls /usr/bin/ | grep containerd-shim-runc-v2
  5. containerd-shim-runc-v2
  6. # Starting a task means a process is now running in the container, i.e. it is a dynamic container.
  7. [root@node1 bin]# ctr tasks start -d nginx
 

3.4.1 Viewing the container's processes on the host

 
  1. # The container's processes exist on the host as ordinary host processes.
  2. [root@node1 bin]# ctr task ls
  3. TASK PID STATUS
  4. nginx 3356 RUNNING
  5. # List the container's processes (these are all host PIDs)
  6. [root@node1 bin]# ctr task ps nginx
  7. PID INFO
  8. 3356 -
  9. 3387 -
  10. 3388 -
  11. # The corresponding process is visible on the host
  12. [root@node1 bin]# ps aux | grep 3356
  13. root 3356 0.0 0.3 8916 3488 ? Ss 17:26 0:00 nginx: master process nginx -g daemon off;
  14. root 3416 0.0 0.0 112824 988 pts/0 S+ 17:29 0:00 grep --color=auto 3356
 

3.5 Entering the container

[root@node1 bin]# ctr tasks exec  --exec-id 2 -t nginx2 /bin/sh

Notes:

  • --exec-id assigns an ID to the exec process; any value works as long as it is unique. The $RANDOM variable can also be used.
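In scripts, a timestamp plus $RANDOM (a bash/ksh feature) is a simple way to produce such an ID; the helper name below is our own:

```shell
# Sketch: generate a unique id for `ctr tasks exec --exec-id`.
# $RANDOM is a bash/ksh feature, not POSIX sh.
new_exec_id() {
  echo "exec-$(date +%s)-$RANDOM"
}

id="$(new_exec_id)"
echo "$id"
# Usage: ctr tasks exec --exec-id "$id" -t nginx2 /bin/sh
```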

3.6 Running a dynamic container

Notes:

  • -d stands for daemon: run in the background
  • --net-host gives the container the host's IP (the equivalent of Docker's host network)
 
  1. [root@node1 vod]# ctr run -d --net-host docker.io/library/nginx:alpine nginx
  2. [root@node1 vod]# ctr t ls
  3. TASK PID STATUS
  4. nginx 3582 RUNNING
 

3.6.1 Entering the container

 
  1. [root@node1 vod]# ctr task exec --exec-id 1 -t nginx /bin/sh
  2. / # ifconfig
  3. ens32 Link encap:Ethernet HWaddr 00:0C:29:DF:7E:67
  4. inet addr:192.168.1.90 Bcast:192.168.1.255 Mask:255.255.255.0
  5. inet6 addr: fe80::8449:8163:c2e:26fb/64 Scope:Link
  6. inet6 addr: fe80::e340:238:62a0:6413/64 Scope:Link
  7. UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  8. RX packets:39979 errors:0 dropped:0 overruns:0 frame:0
  9. TX packets:4733 errors:0 dropped:0 overruns:0 carrier:0
  10. collisions:0 txqueuelen:1000
  11. RX bytes:4738148 (4.5 MiB) TX bytes:496878 (485.2 KiB)
  12.  
  13. lo Link encap:Local Loopback
  14. inet addr:127.0.0.1 Mask:255.0.0.0
  15. inet6 addr: ::1/128 Scope:Host
  16. UP LOOPBACK RUNNING MTU:65536 Metric:1
  17. RX packets:56 errors:0 dropped:0 overruns:0 frame:0
  18. TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
  19. collisions:0 txqueuelen:1
  20. RX bytes:4409 (4.3 KiB) TX bytes:4409 (4.3 KiB)
  21.  
  22. / # curl http://192.168.1.90
  23. <!DOCTYPE html>
  24. <html>
  25. <head>
  26. <title>Welcome to nginx!</title>
  27. <style>
  28. html { color-scheme: light dark; }
  29. body { width: 35em; margin: 0 auto;
  30. font-family: Tahoma, Verdana, Arial, sans-serif; }
  31. </style>
  32. </head>
  33. <body>
  34. <h1>Welcome to nginx!</h1>
  35. <p>If you see this page, the nginx web server is successfully installed and
  36. working. Further configuration is required.</p>
  37.  
  38. <p>For online documentation and support please refer to
  39. <a href="http://nginx.org/">nginx.org</a>.<br/>
  40. Commercial support is available at
  41. <a href="http://nginx.com/">nginx.com</a>.</p>
  42.  
  43. <p><em>Thank you for using nginx.</em></p>
  44. </body>
  45. </html>
  46. # Change the served page
  47. / # echo "nginx" > /usr/share/nginx/html/index.html
  48. / # curl http://192.168.1.90
  49. nginx
  50. / # exit
  51. # The host can access it as well
  52. [root@node1 vod]# curl http://192.168.1.90
  53. nginx 
 

3.7 Pausing a container

 
  1. [root@node1 ~]# ctr t ls
  2. TASK PID STATUS
  3. nginx4 3187 RUNNING
  4. [root@node1 ~]# ctr tasks --help
  5. NAME:
  6. ctr tasks - manage tasks
  7.  
  8. USAGE:
  9. ctr tasks command [command options] [arguments...]
  10.  
  11. COMMANDS:
  12. attach attach to the IO of a running container
  13. checkpoint checkpoint a container
  14. delete, del, remove, rm delete one or more tasks
  15. exec execute additional processes in an existing container
  16. list, ls list tasks
  17. kill signal a container (default: SIGTERM)
  18. pause pause an existing container
  19. ps list processes for container
  20. resume resume a paused container
  21. start start a container that has been created
  22. metrics, metric get a single data point of metrics for a task with the built-in Linux runtime
  23.  
  24. OPTIONS:
  25. --help, -h show help
  26.  
  27. [root@node1 ~]# ctr tasks pause nginx4
  28. [root@node1 ~]# ctr t ls
  29. TASK PID STATUS
  30. nginx4 3187 PAUSED
 

3.8 Resuming a container

 
  1. [root@node1 ~]# ctr task resume nginx4
  2. [root@node1 ~]# ctr t ls
  3. TASK PID STATUS
  4. nginx4 3187 RUNNING
 

3.9 Stopping a container

 
  1. # Killing the process running in the container with the kill command stops the container
  2. [root@node1 ~]# ctr t ls
  3. TASK PID STATUS
  4. nginx4 3187 RUNNING
  5. [root@node1 ~]# ctr task kill nginx4
  6. [root@node1 ~]# ctr t ls
  7. TASK PID STATUS
  8. nginx4 3187 STOPPED
 

3.9.1 Deleting a task

 
  1. # The task must be stopped or deleted first; only then can the container be deleted
  2. [root@node1 ~]# ctr task delete nginx4
  3. [root@node1 ~]# ctr c ls
  4. CONTAINER IMAGE RUNTIME
  5. nginx4 docker.io/library/nginx:alpine io.containerd.runc.v2
 

Note:

  • Check the static container; it still exists in the system.
  • Start it again and the container is restored, as follows:

 

 
  1. [root@node1 ~]# ctr task start -d nginx4
  2. /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
  3. /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
  4. [root@node1 ~]# ctr t ls
  5. TASK PID STATUS
  6. nginx4 3416 RUNNING 
 

3.10 Deleting a container

Note: the task must be stopped before deleting the container, otherwise an error occurs; deleting the container also deletes its task.

 
  1. [root@node1 ~]# ctr t ls
  2. TASK PID STATUS
  3. nginx4 3416 RUNNING
  4. [root@node1 ~]# ctr c ls
  5. CONTAINER IMAGE RUNTIME
  6. nginx4 docker.io/library/nginx:alpine io.containerd.runc.v2
  7. [root@node1 ~]# ctr container rm nginx4
  8. ERRO[0000] failed to delete container "nginx4" error="cannot delete a non stopped container: {running 0 0001-01-01 00:00:00 +0000 UTC}"
  9. ctr: cannot delete a non stopped container: {running 0 0001-01-01 00:00:00 +0000 UTC} # 无法删除未停止的容器
  10. [root@node1 ~]# ctr task kill nginx4
  11. [root@node1 ~]# ctr t ls
  12. TASK PID STATUS
  13. nginx4 3416 STOPPED
  14. [root@node1 ~]# ctr container delete nginx4
  15. [root@node1 ~]# ctr c ls
  16. CONTAINER IMAGE RUNTIME
 

1. Basic container operations

Basic container operations use the ctr containers command; view its help:

 
  1. [root@localhost ~]# ctr containers -h
  2. NAME:
  3. ctr containers - Manage containers
  4.  
  5. USAGE:
  6. ctr containers command [command options] [arguments...]
  7.  
  8. COMMANDS:
  9. create Create container
  10. delete, del, remove, rm Delete one or more existing containers
  11. info Get info about a container
  12. list, ls List containers
  13. label Set and clear labels for a container
  14. checkpoint Checkpoint a container
  15. restore Restore a container from checkpoint
  16.  
  17. OPTIONS:
  18. --help, -h show help
 

2. Creating a static container

[root@localhost ~]# ctr container create docker.io/library/nginx:alpine nginx

nginx specifies the container name. After creating a container with ctr container create, the container is not running; it is only a static container. The container object is just a data structure holding the resources and configuration needed to run a container: the namespaces, rootfs, and container configuration have been initialized, but the user process (nginx in this example) has not started. Use the ctr tasks command to get a running (dynamic) container.

3. Listing containers

 
  1. [root@localhost ~]# ctr container ls
  2. CONTAINER IMAGE RUNTIME
  3. nginx docker.io/library/nginx:alpine io.containerd.runc.v2
 

Add the -q option to list only names:

 
  1. [root@localhost ~]# ctr container ls -q
  2. nginx
 

The command can also be abbreviated:

 
  1. [root@localhost ~]# ctr c ls -q
  2. nginx
 

View a container's detailed configuration, similar to docker inspect:

[root@localhost ~]# ctr container info nginx

4. Deleting a container

 
  1.  
  2. [root@localhost ~]# ctr container rm nginx
  3. [root@localhost ~]# ctr container ls
  4. CONTAINER IMAGE RUNTIME
 

5. Container tasks

The container created above with container create is not running; it is only static. A container object just holds the resources and configuration needed to run a container: the namespaces, rootfs, and container configuration are initialized, but the user process has not started. A container actually runs as a Task. A Task can set up the container's network interface and hook up tools to monitor the container; operating on a container is really operating on its task process.

5.1 Starting a static container as a dynamic container

Task operations are available through the ctr task command. Here we start the container through a Task:

[root@localhost ~]# ctr task start -d nginx

-d is short for --detach: it tells ctr task start to return immediately after starting the task, leaving the task running in the background.

5.2 Viewing container processes

Use task ls to view running container tasks:

 
  1. [root@localhost ~]# ctr task ls
  2. TASK PID STATUS
  3. nginx 22945 RUNNING
 

Viewed with ps, the first PID (23181) is PID 1 inside the container.

 
  1. [root@localhost ~]# ctr task ps nginx
  2. PID INFO
  3. 23181 -
  4. 23208 -
 

On the host, the corresponding PIDs 23181 and 23208 match up:

 
  1. [root@localhost ~]# ps -aux|grep nginx
  2. root 23159 0.0 2.1 722644 20916 ? Sl 13:01 0:00 /usr/local/bin/containerd-shim-runc-v2 -namespace default -id nginx -address /run/containerd/containerd.sock
  3. root 23181 0.0 0.5 8904 5120 ? Ss 13:01 0:00 nginx: master process nginx -g daemon off;
  4. 101 23208 0.0 0.2 9400 2256 ? S 13:01 0:00 nginx: worker process
  5. root 23266 0.0 0.2 112836 2332 pts/3 S+ 13:15 0:00 grep --color=auto nginx
 

5.3 Exec terminal operations

 
  1. [root@localhost ~]# ctr task exec --exec-id 0 -t nginx sh
  2. / # ls
  3. bin docker-entrypoint.d etc lib mnt proc run srv tmp var
  4. dev docker-entrypoint.sh home media opt root sbin sys usr
  5. / # pwd
  6. /
 

Note the --exec-id parameter: it assigns an ID to the exec process; any value works as long as it is unique. The $RANDOM variable can also be used.

5.4 Running a dynamic container

 
  1. [root@localhost ~]# ctr run -d --net-host docker.io/library/nginx:alpine nginx2
  2.  
  3. [root@localhost ~]# ctr c ls
  4. CONTAINER IMAGE RUNTIME
  5. nginx docker.io/library/nginx:alpine io.containerd.runc.v2
  6. nginx2 docker.io/library/nginx:alpine io.containerd.runc.v2
  7.  
  8. [root@localhost ~]# ctr task ls
  9. TASK PID STATUS
  10. nginx 23181 RUNNING
  11. nginx2 23339 RUNNING
 
  • -d stands for daemon: run in the background

  • --net-host gives the container the host's IP (the equivalent of Docker's host network)

5.5 Entering the container

 
  1. [root@localhost ~]# ctr task exec --exec-id 1 -t nginx2 /bin/sh
  2. / # ifconfig
  3. eno16777736 Link encap:Ethernet HWaddr 00:0C:29:AD:FC:E9
  4. inet addr:192.168.36.137 Bcast:192.168.36.255 Mask:255.255.255.0
  5. inet6 addr: fe80::20c:29ff:fead:fce9/64 Scope:Link
  6. UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  7. RX packets:2304427 errors:0 dropped:0 overruns:0 frame:0
  8. TX packets:462774 errors:0 dropped:0 overruns:0 carrier:0
  9. collisions:0 txqueuelen:1000
  10. RX bytes:3259139229 (3.0 GiB) TX bytes:182005861 (173.5 MiB)
  11.  
  12. lo Link encap:Local Loopback
  13. inet addr:127.0.0.1 Mask:255.0.0.0
  14. inet6 addr: ::1/128 Scope:Host
  15. UP LOOPBACK RUNNING MTU:65536 Metric:1
  16. RX packets:8 errors:0 dropped:0 overruns:0 frame:0
  17. TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  18. collisions:0 txqueuelen:1000
  19. RX bytes:696 (696.0 B) TX bytes:696 (696.0 B)
  20.  
  21. / # curl 192.168.36.137
  22. <!DOCTYPE html>
  23. <html>
  24. <head>
  25. <title>Welcome to nginx!</title>
  26. <style>
  27. html { color-scheme: light dark; }
  28. body { width: 35em; margin: 0 auto;
  29. font-family: Tahoma, Verdana, Arial, sans-serif; }
  30. </style>
  31. </head>
  32. <body>
  33. <h1>Welcome to nginx!</h1>
  34. <p>If you see this page, the nginx web server is successfully installed and
  35. working. Further configuration is required.</p>
  36.  
  37. <p>For online documentation and support please refer to
  38. <a href="http://nginx.org/">nginx.org</a>.<br/>
  39. Commercial support is available at
  40. <a href="http://nginx.com/">nginx.com</a>.</p>
  41.  
  42. <p><em>Thank you for using nginx.</em></p>
  43. </body>
  44. </html>
 

6. Pausing a container process

Similar in function to docker pause:

[root@localhost ~]# ctr task pause nginx

After pausing, the container status becomes PAUSED:

 
  1. [root@localhost ~]# ctr task ls
  2. TASK PID STATUS
  3. nginx 22945 PAUSED
 

7. Resuming a container process

Use the resume command to resume the container:

 
  1. [root@localhost ~]# ctr task resume nginx
  2. [root@localhost ~]# ctr task ls
  3. TASK PID STATUS
  4. nginx 22945 RUNNING
 

8. Killing a container process

ctr has no stop subcommand for containers; you can only pause or kill the container process and then delete the container. Kill the container process with task kill:

 
  1. [root@localhost ~]# ctr task kill nginx
  2. [root@localhost ~]# ctr task ls
  3. TASK PID STATUS
  4. nginx 22945 STOPPED
 

9. Deleting the task

After the kill, the container status becomes STOPPED. The Task can then be deleted with task rm:

 
  1. [root@localhost ~]# ctr task rm nginx
  2. [root@localhost ~]# ctr task ls
  3. TASK PID STATUS
 

Only after deleting the task can the container be deleted:

[root@localhost ~]# ctr c rm nginx

10. Viewing container resource usage

Beyond that, we can also retrieve the container's cgroup information: the task metrics command reports the container's memory, CPU, and PID limits and usage.

 
  1. # Restart the container
  2. [root@localhost ~]# ctr task start -d nginx
  3.  
  4. [root@localhost ~]# ctr task metrics nginx
  5. ID TIMESTAMP
  6. nginx seconds:1701925304 nanos:694970440
  7.  
  8. METRIC VALUE
  9. memory.usage_in_bytes 2592768
  10. memory.limit_in_bytes 9223372036854771712
  11. memory.stat.cache 258048
  12. cpuacct.usage 21976291
  13. cpuacct.usage_percpu [21976291 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
  14. pids.current 2
  15. pids.limit 0
 
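The memory.limit_in_bytes value above, 9223372036854771712, is not a real limit: under cgroup v1 it is the "unlimited" sentinel, the maximum signed 64-bit integer rounded down to the 4096-byte page size. A quick shell check reproduces it:

```shell
# int64 max (9223372036854775807) rounded down to a 4 KiB page boundary
# yields the cgroup v1 "no memory limit" sentinel.
echo $(( 9223372036854775807 / 4096 * 4096 ))
```

This prints 9223372036854771712, matching the metrics output above.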

Reference: containerd/config.md at main · containerd/containerd · GitHub

1. Edit config.toml

 
  1. [plugins."io.containerd.grpc.v1.cri".registry]
  2. config_path = "/etc/containerd/certs.d" # directory for per-registry host configuration
  3.  
  4. [plugins."io.containerd.grpc.v1.cri".registry.auths]
  5.  
  6. [plugins."io.containerd.grpc.v1.cri".registry.configs]
  7.  
  8. [plugins."io.containerd.grpc.v1.cri".registry.headers]
  9.  
  10. [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  11.  
 

2. Create the corresponding directory

mkdir /etc/containerd/certs.d/docker.io -pv

3. Configure the mirror

 
  1. cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
  2. server = "https://docker.io"
  3. [host."https://xxxxxxxx.mirror.aliyuncs.com"]
  4. capabilities = ["pull", "resolve"]
  5. EOF
 
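The same mechanism handles private registries: create a directory named after the registry host and add its own hosts.toml. The registry name below is a hypothetical example, and skip_verify is shown only for the self-signed-certificate case:

```toml
# /etc/containerd/certs.d/registry.example.com/hosts.toml  (example host)
server = "https://registry.example.com"

[host."https://registry.example.com"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true  # only for self-signed certificates; prefer a trusted CA
```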

4. Restart containerd

systemctl restart containerd

5. Pull an image again

ctr i pull docker.io/library/mysql:latest

Note: plain ctr does not read the CRI registry settings above; pass --hosts-dir /etc/containerd/certs.d for ctr to honor hosts.toml. Pulls made through CRI (kubelet, crictl) pick up config_path automatically.

https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration

Before learning containerd, it is worth a brief review of Docker's history, because it involves quite a few components. We often hear their names without being clear about what they actually do: libcontainer, runc, containerd, CRI, OCI, and so on.

1. Docker

Since Docker 1.11, containers are no longer started by the Docker Daemon alone; running a container is handled through a set of integrated components such as containerd and runc. Although the Docker Daemon module keeps being refactored, its basic role has not changed much: it remains a client/server architecture in which the daemon talks to the Docker client and manages images and containers. In the current architecture, containerd is responsible for the container lifecycle on a node and exposes a gRPC API upward to the Docker Daemon.

When we create a container today, the Docker Daemon no longer creates it directly; it asks containerd to do so. containerd, in turn, does not operate on the container itself either: it spawns a containerd-shim process and lets that process manage the container. A container process needs a parent to collect its status, keep stdin and other fds open, and so on. If that parent were containerd itself, a containerd crash would take down every container on the host; the containerd-shim layer exists precisely to avoid this problem.

然后创建容器需要做一些 namespaces 和 cgroups 的配置,以及挂载 root 文件系统等操作,这些操作其实已经有了标准的规范,那就是 OCI(开放容器标准),runc 就是它的一个参考实现(Docker 被逼无耐将 libcontainer 捐献出来改名为 runc 的),这个标准其实就是一个文档,主要规定了容器镜像的结构、以及容器需要接收哪些操作指令,比如 create、start、stop、delete 等这些命令。runc 就可以按照这个 OCI 文档来创建一个符合规范的容器,既然是标准肯定就有其他 OCI 实现,比如 Kata、gVisor 这些容器运行时都是符合 OCI 标准的。

所以真正启动容器是通过 containerd-shim 去调用 runc 来启动容器的,runc 启动完容器后本身会直接退出,containerd-shim 则会成为容器进程的父进程, 负责收集容器进程的状态, 上报给 containerd, 并在容器中 pid 为 1 的进程退出后接管容器中的子进程进行清理, 确保不会出现僵尸进程

而 Docker 将容器操作都迁移到 containerd 中去是因为当前做 Swarm,想要进军 PaaS 市场,做了这个架构切分,让 Docker Daemon 专门去负责上层的封装编排,当然后面的结果我们知道 Swarm 在 Kubernetes 面前是惨败,然后 Docker 公司就把 containerd 项目捐献给了 CNCF 基金会,这个也是现在的 Docker 架构。

2. CRI

We know that Kubernetes defines CRI, a container runtime interface — so what exactly is CRI? The answer is also closely tied to Docker's history.

In Kubernetes' early days, Docker was so popular that Kubernetes naturally supported it first, calling the Docker API directly through hard-coded integration. As Docker kept evolving and, under Google's influence, more container runtimes appeared, Google and Red Hat led the introduction of the CRI standard in order to decouple the Kubernetes platform from any specific container runtime (and, not least, to loosen Docker's grip).

CRI (Container Runtime Interface) is essentially a set of interfaces that Kubernetes defines for interacting with container runtimes, so any runtime that implements this interface can be plugged into Kubernetes. However, when the CRI standard was introduced, Kubernetes did not yet have today's dominance, so some container runtimes would not implement the CRI interface themselves — hence the shim. A shim's job is to act as an adapter, mapping a runtime's native interface onto Kubernetes' CRI interface; dockershim is the shim Kubernetes used to adapt Docker to the CRI interface.

The kubelet communicates with the container runtime or shim over gRPC, with the kubelet acting as the client and the CRI shim (which may also be the runtime itself) as the server.

The API defined by CRI (kubernetes/api.proto at release-1.5 · kubernetes/kubernetes · GitHub) consists mainly of two gRPC services, ImageService and RuntimeService. ImageService covers operations such as pulling, listing, and deleting images, while RuntimeService manages the lifecycle of Pods and containers as well as the interactive calls (exec/attach/port-forward). The sockets for these two services are configured with the kubelet flags --container-runtime-endpoint and --image-service-endpoint.

There was one exception here, though, and that is Docker. Given Docker's standing at the time, Kubernetes built dockershim directly into the kubelet, so users of Docker never had to install or configure a separate adapter — a convenience that arguably also lulled the Docker company into complacency.
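To make the two-service split concrete, here is a toy routing table — a hypothetical illustration only, not the real protobuf definition, though the method names mirror actual CRI RPCs:

```python
# Hypothetical sketch of the CRI split into two gRPC services; the real
# interface is defined in protobuf (api.proto), these names mirror it loosely.
IMAGE_SERVICE = {"PullImage", "ListImages", "ImageStatus", "RemoveImage"}
RUNTIME_SERVICE = {"RunPodSandbox", "CreateContainer", "StartContainer",
                   "StopContainer", "RemoveContainer", "Exec", "Attach",
                   "PortForward"}

def service_for(method: str) -> str:
    """Route a CRI call to the service that implements it."""
    if method in IMAGE_SERVICE:
        return "ImageService"
    if method in RUNTIME_SERVICE:
        return "RuntimeService"
    raise KeyError(method)

print(service_for("PullImage"))  # ImageService
print(service_for("Exec"))       # RuntimeService
```

Image management and container/Pod lifecycle are deliberately separate services, which is why the kubelet lets you point each at a different socket.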

With Docker as the runtime, creating a Pod in Kubernetes works like this: the kubelet first calls dockershim over the CRI interface to request a container. The kubelet can be seen as a simple CRI client and dockershim as the server receiving the request — though both run inside the kubelet.

dockershim translates the request into something the Docker Daemon understands and sends it to the Docker Daemon to create a container. From there the usual Docker container-creation flow takes over: the daemon calls containerd, containerd spawns a containerd-shim process, and that process invokes runc to actually create the container.

Looking closely, the call chain with Docker is rather long. For the container operations that actually matter, containerd alone is entirely sufficient; Docker is too complex and heavyweight. Much of Docker's popularity comes from its user-friendly tooling, but Kubernetes does not need any of that, since it drives containers purely through the interface — so it is natural to switch the container runtime to containerd.

Switching to containerd removes the middle layers and the operational experience stays the same, but because the runtime now schedules containers directly, they are invisible to Docker. As a result, the Docker tools you previously used to inspect those containers no longer work.

You can no longer use docker ps or docker inspect to get container information. Since you cannot list the containers, you also cannot fetch their logs, stop them, or even exec into them with docker exec.

You can still pull images, or build them with docker build, but images built or pulled by Docker are invisible to the container runtime and to Kubernetes. To use them in Kubernetes, they have to be pushed to an image registry first.

As the diagram above shows, in containerd 1.0 the CRI adaptation was handled by a separate CRI-Containerd process. containerd originally also had to serve other systems (such as Swarm), so it did not implement CRI directly, and the adaptation work was delegated to the CRI-Containerd shim.

In containerd 1.1 the CRI-Containerd shim was removed and the adaptation logic was integrated into the containerd main process as a plugin, making the call path much cleaner.

Meanwhile the Kubernetes community also built CRI-O, a CRI runtime made specifically for Kubernetes that is directly compatible with both the CRI and OCI specifications.

Both this approach and containerd's are clearly much simpler than the default dockershim, but since most users were accustomed to Docker, dockershim remained the popular choice.

As the CRI approach matured and other runtimes' CRI support became more complete, the Kubernetes community began working on removing dockershim in July 2020: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim. The plan was to split the built-in dockershim code out of the kubelet in 1.20 and mark it as in maintenance mode — dockershim could still be used at that point — with the goal of shipping dockershim-free releases in 1.23/1.24 (the code would remain, but supporting Docker out of the box would require building your own kubelet; after a grace period the built-in dockershim code would be deleted from the kubelet). Does this mean Kubernetes no longer supports Docker? Of course not — only the built-in dockershim is deprecated, and Docker is treated the same as every other runtime, with no special built-in support. If you still want to use Docker directly, the dockershim functionality can be extracted and maintained separately as cri-dockerd, much like the CRI-Containerd shim of containerd 1.0; alternatively, the Docker community could implement the CRI interface inside dockerd itself.

But we also know that dockerd simply calls containerd, and containerd has implemented CRI natively since 1.1, so there is little point in Docker implementing CRI separately. Once Kubernetes stops supporting Docker out of the box, the best option is simply to use containerd as the container runtime directly — it has already been proven in production. So next, let's learn how to use Containerd.

3. Containerd

We know containerd has existed inside Docker Engine for a long time; it has now been split out of Docker Engine as an independent open-source project, with the goal of providing a more open and stable container runtime infrastructure. The extracted containerd gains more features, covering all the needs of container runtime management with stronger support.

containerd is an industry-standard container runtime that emphasizes simplicity, robustness, and portability. containerd can take care of the following:

  • managing the container lifecycle (from creating a container to destroying it)
  • pulling and pushing container images
  • storage management (for image and container data)
  • invoking runc to run containers (interacting with container runtimes such as runc)
  • managing container network interfaces and networking

3.1 Architecture

containerd can run as a daemon on Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision, down to low-level storage, network attachments, and more.

The figure above is containerd's official architecture diagram. As it shows, containerd also uses a C/S architecture: the server exposes a low-level gRPC API over a unix domain socket, and clients manage the containers on the node through this API. Each containerd is responsible for one machine; pulling images, operating on containers (start, stop, and so on), networking, and storage are all done by containerd. Actually running the container is handled by runc — in practice, any OCI-compliant container runtime is supported.

For decoupling, containerd divides the system into different components, each implemented by one or more cooperating modules (the Core part), and every type of module is integrated into containerd as a plugin, with dependencies between plugins. In the figure, each long-dashed box represents a plugin type, including the Service Plugin, Metadata Plugin, GC Plugin, Runtime Plugin, and so on, where the Service Plugin in turn depends on the Metadata, GC, and Runtime Plugins. Each small box represents an individual plugin; for example, the Metadata Plugin depends on the Containers Plugin, Content Plugin, and others. For instance:

  • Content Plugin: provides access to the addressable content in images; all immutable content is stored here.
  • Snapshot Plugin: manages filesystem snapshots of container images; each layer of an image is unpacked into a filesystem snapshot, similar to Docker's graphdriver.

Overall, containerd can be divided into three major parts: Storage, Metadata, and Runtime.

3.2 Installation

The system used here is Linux Mint 20.2. First, install the seccomp dependency:

 
  1. ~ apt-get update
  2. ~ apt-get install libseccomp2 -y
 

Since containerd needs to invoke runc, we also need runc installed. containerd provides an archive that bundles the related dependencies, cri-containerd-cni-${VERSION}.${OS}-${ARCH}.tar.gz, which we can use to install everything directly. First download the latest release from the release page, currently version 1.5.5:

 
  1. ➜ ~ wget https://github.com/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz
  2. # if the download is restricted, this mirror URL can be used to speed it up
  3. # wget https://download.fastgit.org/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz
 

The -t option of tar lists the files contained in the archive:

 
  1. ➜ ~ tar -tf cri-containerd-cni-1.5.5-linux-amd64.tar.gz
  2. etc/
  3. etc/cni/
  4. etc/cni/net.d/
  5. etc/cni/net.d/10-containerd-net.conflist
  6. etc/crictl.yaml
  7. etc/systemd/
  8. etc/systemd/system/
  9. etc/systemd/system/containerd.service
  10. usr/
  11. usr/local/
  12. usr/local/bin/
  13. usr/local/bin/containerd-shim-runc-v2
  14. usr/local/bin/ctr
  15. usr/local/bin/containerd-shim
  16. usr/local/bin/containerd-shim-runc-v1
  17. usr/local/bin/crictl
  18. usr/local/bin/critest
  19. usr/local/bin/containerd
  20. usr/local/sbin/
  21. usr/local/sbin/runc
  22. opt/
  23. opt/cni/
  24. opt/cni/bin/
  25. opt/cni/bin/vlan
  26. opt/cni/bin/host-local
  27. opt/cni/bin/flannel
  28. opt/cni/bin/bridge
  29. opt/cni/bin/host-device
  30. opt/cni/bin/tuning
  31. opt/cni/bin/firewall
  32. opt/cni/bin/bandwidth
  33. opt/cni/bin/ipvlan
  34. opt/cni/bin/sbr
  35. opt/cni/bin/dhcp
  36. opt/cni/bin/portmap
  37. opt/cni/bin/ptp
  38. opt/cni/bin/static
  39. opt/cni/bin/macvlan
  40. opt/cni/bin/loopback
  41. opt/containerd/
  42. opt/containerd/cluster/
  43. opt/containerd/cluster/version
  44. opt/containerd/cluster/gce/
  45. opt/containerd/cluster/gce/cni.template
  46. opt/containerd/cluster/gce/configure.sh
  47. opt/containerd/cluster/gce/cloud-init/
  48. opt/containerd/cluster/gce/cloud-init/master.yaml
  49. opt/containerd/cluster/gce/cloud-init/node.yaml
  50. opt/containerd/cluster/gce/env
 

Extract the archive directly into the system directories:

➜  ~ tar -C / -xzf cri-containerd-cni-1.5.5-linux-amd64.tar.gz

Remember to append /usr/local/bin and /usr/local/sbin to the PATH environment variable in ~/.bashrc:

export PATH=$PATH:/usr/local/bin:/usr/local/sbin

Then run the following command to apply it immediately:

➜  ~ source ~/.bashrc

containerd's default configuration file is /etc/containerd/config.toml; we can generate a default configuration with the following commands:

 
  1. ➜ ~ mkdir /etc/containerd
  2. ➜ ~ containerd config default > /etc/containerd/config.toml
 

The containerd archive we downloaded above contains an etc/systemd/system/containerd.service file, so we can run containerd as a daemon via systemd. Its contents are as follows:

 
  1. ➜ ~ cat /etc/systemd/system/containerd.service
  2. [Unit]
  3. Description=containerd container runtime
  4. Documentation=https://containerd.io
  5. After=network.target local-fs.target
  6.  
  7. [Service]
  8. ExecStartPre=-/sbin/modprobe overlay
  9. ExecStart=/usr/local/bin/containerd
  10.  
  11. Type=notify
  12. Delegate=yes
  13. KillMode=process
  14. Restart=always
  15. RestartSec=5
  16. # Having non-zero Limit*s causes performance problems due to accounting overhead
  17. # in the kernel. We recommend using cgroups to do container-local accounting.
  18. LimitNPROC=infinity
  19. LimitCORE=infinity
  20. LimitNOFILE=1048576
  21. # Comment TasksMax if your systemd version does not supports it.
  22. # Only systemd 226 and above support this version.
  23. TasksMax=infinity
  24. OOMScoreAdjust=-999
  25.  
  26. [Install]
  27. WantedBy=multi-user.target
 

Two parameters here are important:

  • Delegate: allows containerd and the runtime to manage the cgroups of the containers they create themselves. Without it, systemd would move the processes into its own cgroups, preventing containerd from correctly tracking the containers' resource usage.
  • KillMode: controls how the containerd process is killed. By default, systemd looks in the process's cgroup and kills all of containerd's child processes. KillMode can be set to the following values:

    • control-group (default): all child processes in the current control group are killed
    • process: only the main process is killed
    • mixed: the main process receives SIGTERM and the child processes receive SIGKILL
    • none: no process is killed; only the service's stop command is executed

We need to set KillMode to process, which ensures that upgrading or restarting containerd does not kill the running containers.
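Rather than editing the shipped unit file, the two settings can also be pinned in a systemd drop-in — a sketch only; the directory and file name below are conventions, not requirements:

```ini
# /etc/systemd/system/containerd.service.d/override.conf
# Drop-in overriding the shipped unit; keeps local changes upgrade-safe.
[Service]
Delegate=yes
KillMode=process
```

Apply it with systemctl daemon-reload followed by systemctl restart containerd.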

Now we can start containerd by simply running:

➜  ~ systemctl enable containerd --now

Once it is running, we can use containerd's local CLI tool ctr, for example to check the version:

3.3 Configuration

Let's first look at the default configuration file generated above, /etc/containerd/config.toml:

 
  1. disabled_plugins = []
  2. imports = []
  3. oom_score = 0
  4. plugin_dir = ""
  5. required_plugins = []
  6. root = "/var/lib/containerd"
  7. state = "/run/containerd"
  8. version = 2
  9.  
  10. [cgroup]
  11. path = ""
  12.  
  13. [debug]
  14. address = ""
  15. format = ""
  16. gid = 0
  17. level = ""
  18. uid = 0
  19.  
  20. [grpc]
  21. address = "/run/containerd/containerd.sock"
  22. gid = 0
  23. max_recv_message_size = 16777216
  24. max_send_message_size = 16777216
  25. tcp_address = ""
  26. tcp_tls_cert = ""
  27. tcp_tls_key = ""
  28. uid = 0
  29.  
  30. [metrics]
  31. address = ""
  32. grpc_histogram = false
  33.  
  34. [plugins]
  35.  
  36. [plugins."io.containerd.gc.v1.scheduler"]
  37. deletion_threshold = 0
  38. mutation_threshold = 100
  39. pause_threshold = 0.02
  40. schedule_delay = "0s"
  41. startup_delay = "100ms"
  42.  
  43. [plugins."io.containerd.grpc.v1.cri"]
  44. disable_apparmor = false
  45. disable_cgroup = false
  46. disable_hugetlb_controller = true
  47. disable_proc_mount = false
  48. disable_tcp_service = true
  49. enable_selinux = false
  50. enable_tls_streaming = false
  51. ignore_image_defined_volumes = false
  52. max_concurrent_downloads = 3
  53. max_container_log_line_size = 16384
  54. netns_mounts_under_state_dir = false
  55. restrict_oom_score_adj = false
  56. sandbox_image = "k8s.gcr.io/pause:3.5"
  57. selinux_category_range = 1024
  58. stats_collect_period = 10
  59. stream_idle_timeout = "4h0m0s"
  60. stream_server_address = "127.0.0.1"
  61. stream_server_port = "0"
  62. systemd_cgroup = false
  63. tolerate_missing_hugetlb_controller = true
  64. unset_seccomp_profile = ""
  65.  
  66. [plugins."io.containerd.grpc.v1.cri".cni]
  67. bin_dir = "/opt/cni/bin"
  68. conf_dir = "/etc/cni/net.d"
  69. conf_template = ""
  70. max_conf_num = 1
  71.  
  72. [plugins."io.containerd.grpc.v1.cri".containerd]
  73. default_runtime_name = "runc"
  74. disable_snapshot_annotations = true
  75. discard_unpacked_layers = false
  76. no_pivot = false
  77. snapshotter = "overlayfs"
  78.  
  79. [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
  80. base_runtime_spec = ""
  81. container_annotations = []
  82. pod_annotations = []
  83. privileged_without_host_devices = false
  84. runtime_engine = ""
  85. runtime_root = ""
  86. runtime_type = ""
  87.  
  88. [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
  89.  
  90. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
  91.  
  92. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  93. base_runtime_spec = ""
  94. container_annotations = []
  95. pod_annotations = []
  96. privileged_without_host_devices = false
  97. runtime_engine = ""
  98. runtime_root = ""
  99. runtime_type = "io.containerd.runc.v2"
  100.  
  101. [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  102. BinaryName = ""
  103. CriuImagePath = ""
  104. CriuPath = ""
  105. CriuWorkPath = ""
  106. IoGid = 0
  107. IoUid = 0
  108. NoNewKeyring = false
  109. NoPivotRoot = false
  110. Root = ""
  111. ShimCgroup = ""
  112. SystemdCgroup = false
  113.  
  114. [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
  115. base_runtime_spec = ""
  116. container_annotations = []
  117. pod_annotations = []
  118. privileged_without_host_devices = false
  119. runtime_engine = ""
  120. runtime_root = ""
  121. runtime_type = ""
  122.  
  123. [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
  124.  
  125. [plugins."io.containerd.grpc.v1.cri".image_decryption]
  126. key_model = "node"
  127.  
  128. [plugins."io.containerd.grpc.v1.cri".registry]
  129. config_path = ""
  130.  
  131. [plugins."io.containerd.grpc.v1.cri".registry.auths]
  132.  
  133. [plugins."io.containerd.grpc.v1.cri".registry.configs]
  134.  
  135. [plugins."io.containerd.grpc.v1.cri".registry.headers]
  136.  
  137. [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  138.  
  139. [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
  140. tls_cert_file = ""
  141. tls_key_file = ""
  142.  
  143. [plugins."io.containerd.internal.v1.opt"]
  144. path = "/opt/containerd"
  145.  
  146. [plugins."io.containerd.internal.v1.restart"]
  147. interval = "10s"
  148.  
  149. [plugins."io.containerd.metadata.v1.bolt"]
  150. content_sharing_policy = "shared"
  151.  
  152. [plugins."io.containerd.monitor.v1.cgroups"]
  153. no_prometheus = false
  154.  
  155. [plugins."io.containerd.runtime.v1.linux"]
  156. no_shim = false
  157. runtime = "runc"
  158. runtime_root = ""
  159. shim = "containerd-shim"
  160. shim_debug = false
  161.  
  162. [plugins."io.containerd.runtime.v2.task"]
  163. platforms = ["linux/amd64"]
  164.  
  165. [plugins."io.containerd.service.v1.diff-service"]
  166. default = ["walking"]
  167.  
  168. [plugins."io.containerd.snapshotter.v1.aufs"]
  169. root_path = ""
  170.  
  171. [plugins."io.containerd.snapshotter.v1.btrfs"]
  172. root_path = ""
  173.  
  174. [plugins."io.containerd.snapshotter.v1.devmapper"]
  175. async_remove = false
  176. base_image_size = ""
  177. pool_name = ""
  178. root_path = ""
  179.  
  180. [plugins."io.containerd.snapshotter.v1.native"]
  181. root_path = ""
  182.  
  183. [plugins."io.containerd.snapshotter.v1.overlayfs"]
  184. root_path = ""
  185.  
  186. [plugins."io.containerd.snapshotter.v1.zfs"]
  187. root_path = ""
  188.  
  189. [proxy_plugins]
  190.  
  191. [stream_processors]
  192.  
  193. [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
  194. accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
  195. args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
  196. env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
  197. path = "ctd-decoder"
  198. returns = "application/vnd.oci.image.layer.v1.tar"
  199.  
  200. [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
  201. accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
  202. args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
  203. env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
  204. path = "ctd-decoder"
  205. returns = "application/vnd.oci.image.layer.v1.tar+gzip"
  206.  
  207. [timeouts]
  208. "io.containerd.timeout.shim.cleanup" = "5s"
  209. "io.containerd.timeout.shim.load" = "5s"
  210. "io.containerd.timeout.shim.shutdown" = "3s"
  211. "io.containerd.timeout.task.state" = "2s"
  212.  
  213. [ttrpc]
  214. address = ""
  215. gid = 0
  216. uid = 0
 

This configuration file is fairly complex; we can focus on the plugins configuration. Looking closely, every top-level block is named in the form plugins."io.containerd.xxx.vx.xxx", and each represents a plugin, where io.containerd.xxx.vx is the plugin type and the xxx after vx is the plugin ID. We can list the plugins with ctr:

 
  1. ➜ ~ ctr plugin ls
  2. ctr plugin ls
  3. TYPE ID PLATFORMS STATUS
  4. io.containerd.content.v1 content - ok
  5. io.containerd.snapshotter.v1 aufs linux/amd64 ok
  6. io.containerd.snapshotter.v1 btrfs linux/amd64 skip
  7. io.containerd.snapshotter.v1 devmapper linux/amd64 error
  8. io.containerd.snapshotter.v1 native linux/amd64 ok
  9. io.containerd.snapshotter.v1 overlayfs linux/amd64 ok
  10. io.containerd.snapshotter.v1 zfs linux/amd64 skip
  11. io.containerd.metadata.v1 bolt - ok
  12. io.containerd.differ.v1 walking linux/amd64 ok
  13. io.containerd.gc.v1 scheduler - ok
  14. io.containerd.service.v1 introspection-service - ok
  15. io.containerd.service.v1 containers-service - ok
  16. io.containerd.service.v1 content-service - ok
  17. io.containerd.service.v1 diff-service - ok
  18. io.containerd.service.v1 images-service - ok
  19. io.containerd.service.v1 leases-service - ok
  20. io.containerd.service.v1 namespaces-service - ok
  21. io.containerd.service.v1 snapshots-service - ok
  22. io.containerd.runtime.v1 linux linux/amd64 ok
  23. io.containerd.runtime.v2 task linux/amd64 ok
  24. io.containerd.monitor.v1 cgroups linux/amd64 ok
  25. io.containerd.service.v1 tasks-service - ok
  26. io.containerd.internal.v1 restart - ok
  27. io.containerd.grpc.v1 containers - ok
  28. io.containerd.grpc.v1 content - ok
  29. io.containerd.grpc.v1 diff - ok
  30. io.containerd.grpc.v1 events - ok
  31. io.containerd.grpc.v1 healthcheck - ok
  32. io.containerd.grpc.v1 images - ok
  33. io.containerd.grpc.v1 leases - ok
  34. io.containerd.grpc.v1 namespaces - ok
  35. io.containerd.internal.v1 opt - ok
  36. io.containerd.grpc.v1 snapshots - ok
  37. io.containerd.grpc.v1 tasks - ok
  38. io.containerd.grpc.v1 version - ok
  39. io.containerd.grpc.v1 cri linux/amd64 ok
 

The sub-blocks under each top-level block hold that plugin's settings. The cri plugin, for example, is divided into containerd, cni, and registry sections, and under containerd you can configure various runtimes as well as the default runtime. Now, to configure an image mirror, we add registry.mirrors entries under the registry block of the cri plugin:

 
  1. [plugins."io.containerd.grpc.v1.cri".registry]
  2. [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  3. [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  4. endpoint = ["https://bqr1dr1n.mirror.aliyuncs.com"]
  5. [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
  6. endpoint = ["https://registry.aliyuncs.com/k8sxio"]
 
  • registry.mirrors."xxx": the registry to configure a mirror for; registry.mirrors."docker.io", for example, configures a mirror for docker.io.
  • endpoint: the mirror service endpoint; for instance, we can register an Alibaba Cloud image service as the mirror for docker.io.
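The lookup these settings imply can be sketched as follows — a hypothetical illustration, not containerd's actual code — using the two mirrors configured above:

```python
# Sketch of a registry mirror lookup, assuming a dict shaped like the
# registry.mirrors config above; not containerd's actual implementation.
MIRRORS = {
    "docker.io": ["https://bqr1dr1n.mirror.aliyuncs.com"],
    "k8s.gcr.io": ["https://registry.aliyuncs.com/k8sxio"],
}

def endpoints_for(image_ref: str) -> list[str]:
    """Return mirror endpoints for the registry part of an image reference,
    falling back to the registry itself when no mirror is configured."""
    registry = image_ref.split("/", 1)[0]
    return MIRRORS.get(registry, []) + [f"https://{registry}"]

print(endpoints_for("docker.io/library/nginx:alpine"))
# ['https://bqr1dr1n.mirror.aliyuncs.com', 'https://docker.io']
```

The mirror is tried first and the upstream registry remains the fallback, which matches the behavior you observe when a mirror is slow or unreachable.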

The default configuration also includes two storage paths:

 
  1. root = "/var/lib/containerd"
  2. state = "/run/containerd"
 

root holds persistent data — Snapshots, Content, Metadata, and the data of the various plugins, with a separate directory per plugin. containerd itself stores no data; all of its functionality comes from the loaded plugins. state holds runtime-ephemeral data: sockets, pids, mount points, runtime state, and plugin data that does not need to be persisted.
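As an illustration of that layout — an assumption based on the description above, with each plugin getting its own subdirectory under root:

```python
# Illustrative only: the per-plugin directory layout under containerd's root,
# assuming the default paths from the config above.
import os

ROOT = "/var/lib/containerd"

def plugin_dir(plugin_id: str) -> str:
    """Each plugin keeps its persistent data in its own subdirectory of root."""
    return os.path.join(ROOT, plugin_id)

print(plugin_dir("io.containerd.snapshotter.v1.overlayfs"))
# /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs
```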

3.4 Usage

We know the Docker CLI provides many features that enhance the user experience. containerd also ships a corresponding CLI tool, ctr. It is not as full-featured as docker, but the basics of image and container management are all there. Let's briefly walk through using ctr.

Help

Simply running ctr lists all available commands and their usage:

 
  1. ➜ ~ ctr
  2. NAME:
  3. ctr -
  4. __
  5. _____/ /______
  6. / ___/ __/ ___/
  7. / /__/ /_/ /
  8. \___/\__/_/
  9.  
  10. containerd CLI
  11.  
  12.  
  13. USAGE:
  14. ctr [global options] command [command options] [arguments...]
  15.  
  16. VERSION:
  17. v1.5.5
  18.  
  19. DESCRIPTION:
  20.  
  21. ctr is an unsupported debug and administrative client for interacting
  22. with the containerd daemon. Because it is unsupported, the commands,
  23. options, and operations are not guaranteed to be backward compatible or
  24. stable from release to release of the containerd project.
  25.  
  26. COMMANDS:
  27. plugins, plugin provides information about containerd plugins
  28. version print the client and server versions
  29. containers, c, container manage containers
  30. content manage content
  31. events, event display containerd events
  32. images, image, i manage images
  33. leases manage leases
  34. namespaces, namespace, ns manage namespaces
  35. pprof provide golang pprof outputs for containerd
  36. run run a container
  37. snapshots, snapshot manage snapshots
  38. tasks, t, task manage tasks
  39. install install a new package
  40. oci OCI tools
  41. shim interact with a shim directly
  42. help, h Shows a list of commands or help for one command
  43.  
  44. GLOBAL OPTIONS:
  45. --debug enable debug output in logs
  46. --address value, -a value address for containerd's GRPC server (default: "/run/containerd/containerd.sock") [$CONTAINERD_ADDRESS]
  47. --timeout value total timeout for ctr commands (default: 0s)
  48. --connect-timeout value timeout for connecting to containerd (default: 0s)
  49. --namespace value, -n value namespace to use with commands (default: "default") [$CONTAINERD_NAMESPACE]
  50. --help, -h show help
  51. --version, -v print the version
 

3.4.1 Image operations

Pulling images

Use ctr image pull to pull an image, for example the official Docker Hub image nginx:alpine. Note that the image reference must include the docker.io host prefix:

 
  1. ➜ ~ ctr image pull docker.io/library/nginx:alpine
  2. docker.io/library/nginx:alpine: resolved |++++++++++++++++++++++++++++++++++++++|
  3. index-sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: exists |++++++++++++++++++++++++++++++++++++++|
  4. manifest-sha256:ce6ca11a3fa7e0e6b44813901e3289212fc2f327ee8b1366176666e8fb470f24: done |++++++++++++++++++++++++++++++++++++++|
  5. layer-sha256:9a6ac07b84eb50935293bb185d0a8696d03247f74fd7d43ea6161dc0f293f81f: done |++++++++++++++++++++++++++++++++++++++|
  6. layer-sha256:e82f830de071ebcda58148003698f32205b7970b01c58a197ac60d6bb79241b0: done |++++++++++++++++++++++++++++++++++++++|
  7. layer-sha256:d7c9fa7589ae28cd3306b204d5dd9a539612593e35df70f7a1d69ff7548e74cf: done |++++++++++++++++++++++++++++++++++++++|
  8. layer-sha256:bf2b3ee132db5b4c65432e53aca69da4e609c6cb154e0d0e14b2b02259e9c1e3: done |++++++++++++++++++++++++++++++++++++++|
  9. config-sha256:7ce0143dee376bfd2937b499a46fb110bda3c629c195b84b1cf6e19be1a9e23b: done |++++++++++++++++++++++++++++++++++++++|
  10. layer-sha256:3c1eaf69ff492177c34bdbf1735b6f2e5400e417f8f11b98b0da878f4ecad5fb: done |++++++++++++++++++++++++++++++++++++++|
  11. layer-sha256:29291e31a76a7e560b9b7ad3cada56e8c18d50a96cca8a2573e4f4689d7aca77: done |++++++++++++++++++++++++++++++++++++++|
  12. elapsed: 11.9s total: 8.7 Mi (748.1 KiB/s)
  13. unpacking linux/amd64 sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce...
  14. done: 410.86624ms
 

You can also specify a platform with the --platform option. The corresponding push command is ctr image push; for a private registry, pass the repository credentials with --user when pushing.
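The docker CLI silently expands short names like nginx:alpine into the fully qualified reference that ctr requires. A hedged sketch of that defaulting, covering Docker Hub rules only (registry ports are not handled here):

```python
# Hypothetical helper mirroring Docker Hub's reference-defaulting rules,
# i.e. the normalization docker does and ctr does not.
def normalize(ref: str) -> str:
    name, _, tag = ref.partition(":")
    tag = tag or "latest"
    parts = name.split("/")
    # A leading component with a dot is a registry host; "localhost" too.
    # (Registry ports are not handled in this sketch.)
    if "." not in parts[0] and parts[0] != "localhost":
        if len(parts) == 1:
            parts = ["library"] + parts  # official images live under library/
        parts = ["docker.io"] + parts
    return "/".join(parts) + ":" + tag

print(normalize("nginx:alpine"))  # docker.io/library/nginx:alpine
print(normalize("user/app"))      # docker.io/user/app:latest
```

This is why `ctr image pull nginx:alpine` fails while the docker equivalent works: ctr passes the reference through verbatim.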

Listing local images

 
  1. ➜ ~ ctr image ls
  2. REF TYPE DIGEST SIZE PLATFORMS LABELS
  3. docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce 9.5 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -
  4. ➜ ~ ctr image ls -q
  5. docker.io/library/nginx:alpine
 

The -q (--quiet) option prints only the image names.

Checking local images

 
  1. ➜ ~ ctr image check
  2. REF TYPE DIGEST STATUS SIZE UNPACKED
  3. docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce complete (7/7) 9.5 MiB/9.5 MiB true
 

The column to watch is STATUS: complete means the image is fully available.

Retagging

Likewise, we can give an image a new tag:

 
  1. ➜ ~ ctr image tag docker.io/library/nginx:alpine harbor.k8s.local/course/nginx:alpine
  2. harbor.k8s.local/course/nginx:alpine
  3. ➜ ~ ctr image ls -q
  4. docker.io/library/nginx:alpine
  5. harbor.k8s.local/course/nginx:alpine
 

Deleting images

Images that are no longer needed can be deleted with ctr image rm:

 
  1. ➜ ~ ctr image rm harbor.k8s.local/course/nginx:alpine
  2. harbor.k8s.local/course/nginx:alpine
  3. ➜ ~ ctr image ls -q
  4. docker.io/library/nginx:alpine
 

The --sync option deletes the image together with all of its associated resources.

Mounting an image onto a host directory

 
  1. ➜ ~ ctr image mount docker.io/library/nginx:alpine /mnt
  2. sha256:c3554b2d61e3c1cffcaba4b4fa7651c644a3354efaafa2f22cb53542f6c600dc
  3. /mnt
  4. ➜ ~ tree -L 1 /mnt
  5. /mnt
  6. ├── bin
  7. ├── dev
  8. ├── docker-entrypoint.d
  9. ├── docker-entrypoint.sh
  10. ├── etc
  11. ├── home
  12. ├── lib
  13. ├── media
  14. ├── mnt
  15. ├── opt
  16. ├── proc
  17. ├── root
  18. ├── run
  19. ├── sbin
  20. ├── srv
  21. ├── sys
  22. ├── tmp
  23. ├── usr
  24. └── var
  25.  
  26. 18 directories, 1 file
 

Unmounting the image from the host directory

 
  1. ➜ ~ ctr image unmount /mnt
  2. /mnt
 

Exporting an image as an archive

➜  ~ ctr image export nginx.tar.gz docker.io/library/nginx:alpine

Importing an image from an archive

➜  ~ ctr image import nginx.tar.gz
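A quick way to sanity-check an archive before importing it is to look for the top-level index file. This hypothetical helper accepts either an OCI layout (index.json) or the Docker save format (manifest.json); the demo builds a minimal fake archive in memory:

```python
# Sanity-check an image archive: look for an OCI index.json or a
# Docker-format manifest.json at the top level. Hypothetical helper.
import io
import json
import tarfile

def looks_like_image_archive(fileobj) -> bool:
    """Return True if the tar contains an OCI index.json or Docker manifest.json."""
    with tarfile.open(fileobj=fileobj) as tar:
        names = tar.getnames()
    return "index.json" in names or "manifest.json" in names

# Build a minimal fake OCI archive in memory to demonstrate the check.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = json.dumps({"schemaVersion": 2, "manifests": []}).encode()
    info = tarfile.TarInfo("index.json")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)
ok = looks_like_image_archive(buf)
print(ok)  # True
```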

3.4.2 Container operations

Container-related operations are available under ctr container.

Creating a container

➜  ~ ctr container create docker.io/library/nginx:alpine nginx

Listing containers

 
  1. ➜ ~ ctr container ls
  2. CONTAINER IMAGE RUNTIME
  3. nginx docker.io/library/nginx:alpine io.containerd.runc.v2
 

The -q option trims the listing here too:

 
  1. ➜ ~ ctr container ls -q
  2. nginx
 

Viewing a container's detailed configuration

Similar to docker inspect:

 
  1. ➜ ~ ctr container info nginx
  2. {
  3. "ID": "nginx",
  4. "Labels": {
  5. "io.containerd.image.config.stop-signal": "SIGQUIT"
  6. },
  7. "Image": "docker.io/library/nginx:alpine",
  8. "Runtime": {
  9. "Name": "io.containerd.runc.v2",
  10. "Options": {
  11. "type_url": "containerd.runc.v1.Options"
  12. }
  13. },
  14. "SnapshotKey": "nginx",
  15. "Snapshotter": "overlayfs",
  16. "CreatedAt": "2021-08-12T08:23:13.792871558Z",
  17. "UpdatedAt": "2021-08-12T08:23:13.792871558Z",
  18. "Extensions": null,
  19. "Spec": {
  20. ......
 

Deleting a container

 
  1. ➜ ~ ctr container rm nginx
  2. ➜ ~ ctr container ls
  3. CONTAINER IMAGE RUNTIME
 

Besides the rm subcommand, delete or del also removes a container.

3.4.3 Tasks

The container we created above with container create is not running; it is only a static container. A container object merely bundles the resources and configuration data needed to run a container, meaning the namespaces, rootfs, and container configuration have been initialized successfully, but the user process has not yet been started.

A container is actually run by a Task. A Task can set up network interfaces for the container and attach tooling to monitor it, among other things.

Task operations are available under ctr task; here we start the container through a Task:

 
  1. ➜ ~ ctr task start -d nginx
  2. /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
  3. /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
 

After starting the container, task ls shows the running containers:

 
  1. ➜ ~ ctr task ls
  2. TASK PID STATUS
  3. nginx 3630 RUNNING
 

We can also exec into the container to work inside it:

 
  1. ➜ ~ ctr task exec --exec-id 0 -t nginx sh
  2. / #
 

Note that the --exec-id flag is required; the id can be anything, as long as it is unique.
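Since the id only needs to be unique per exec within the task, a wrapper can generate one instead of hand-picking "0" — a hypothetical sketch:

```python
# The --exec-id just needs to be unique per exec within the task; a
# hypothetical wrapper can generate one instead of hand-picking "0".
import shlex
import uuid

def ctr_exec_cmd(task: str, command: str) -> str:
    exec_id = uuid.uuid4().hex  # any unique string is accepted
    return f"ctr task exec --exec-id {exec_id} -t {task} {shlex.quote(command)}"

cmd = ctr_exec_cmd("nginx", "sh")
print(cmd)
```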

Pausing a container, similar to docker pause:

➜  ~ ctr task pause nginx

After pausing, the container state becomes PAUSED:

 
  1. ➜ ~ ctr task ls
  2. TASK PID STATUS
  3. nginx 3630 PAUSED
 

The resume command resumes the container:

 
  1. ➜ ~ ctr task resume nginx
  2. ➜ ~ ctr task ls
  3. TASK PID STATUS
  4. nginx 3630 RUNNING
 

Note that ctr has no way to stop a container; you can only pause or kill it. To kill a container, use task kill:

 
  1. ➜ ~ ctr task kill nginx
  2. ➜ ~ ctr task ls
  3. TASK PID STATUS
  4. nginx 3630 STOPPED
 

After the kill, the container's state becomes STOPPED. A Task can likewise be removed with task rm:

 
  1. ➜ ~ ctr task rm nginx
  2. ➜ ~ ctr task ls
  3. TASK PID STATUS
 

Beyond that, we can also inspect the container's cgroup information: task metrics reports the container's memory, CPU, and PID limits and usage.

 
  1. # restart the container first
  2. ➜ ~ ctr task metrics nginx
  3. ID TIMESTAMP
  4. nginx 2021-08-12 08:50:46.952769941 +0000 UTC
  5.  
  6. METRIC VALUE
  7. memory.usage_in_bytes 8855552
  8. memory.limit_in_bytes 9223372036854771712
  9. memory.stat.cache 0
  10. cpuacct.usage 22467106
  11. cpuacct.usage_percpu [2962708 860891 1163413 1915748 1058868 2888139 6159277 5458062]
  12. pids.current 9
  13. pids.limit 0
 

task ps shows the host PIDs of all processes inside the container:

 
  1. ➜ ~ ctr task ps nginx
  2. PID INFO
  3. 3984 -
  4. 4029 -
  5. 4030 -
  6. 4031 -
  7. 4032 -
  8. 4033 -
  9. 4034 -
  10. 4035 -
  11. 4036 -
  12. ➜ ~ ctr task ls
  13. TASK PID STATUS
  14. nginx 3984 RUNNING
 

The first PID, 3984, is the container's pid-1 process.

3.4.4 Namespaces

Containerd also supports the concept of namespaces. To list them:

 
  1. ➜ ~ ctr ns ls
  2. NAME LABELS
  3. default
 

If none is specified, ctr uses the default namespace. A namespace can likewise be created with ns create:

 
  1. ➜ ~ ctr ns create test
  2. ➜ ~ ctr ns ls
  3. NAME LABELS
  4. default
  5. test
 

remove or rm deletes a namespace:

 
  1. ➜ ~ ctr ns rm test
  2. test
  3. ➜ ~ ctr ns ls
  4. NAME LABELS
  5. default
 

With namespaces in place, resource operations can target a specific namespace. For example, to list the images in the test namespace, add the -n test option to the command:

 
  1. ➜ ~ ctr -n test image ls
  2. REF TYPE DIGEST SIZE PLATFORMS LABELS
 

We know Docker itself also calls containerd under the hood; in fact, the containerd namespace Docker uses is moby, not default, so containers started with docker can be located via ctr -n moby:

➜  ~ ctr -n moby container ls

Likewise, the containerd used by Kubernetes defaults to the k8s.io namespace, so ctr -n k8s.io shows the containers created under Kubernetes. We will cover switching a Kubernetes cluster's container runtime to containerd later.

Reference: 一文搞懂容器运行时 Containerd (CSDN blog)

 

By default, images can no longer be pulled directly from Docker Hub in mainland China, whether by docker or by the Kubernetes runtime (containerd).

Configuring a registry mirror for containerd

vim /etc/containerd/config.toml

The key settings are shown below. Remember to replace https://hbv0b596.mirror.aliyuncs.com with your own Alibaba Cloud mirror accelerator address, which you can look up at https://cr.console.aliyun.com/cn-hangzhou/instances.

       [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint=["https://dockerproxy.com", "https://mirror.baidubce.com","https://ccr.ccs.tencentyun.com","https://docker.m.daocloud.io","https://docker.nju.edu.cn","https://docker.mirrors.ustc.edu.cn","https://registry-1.docker.io", "https://hbv0b596.mirror.aliyuncs.com"]
       [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
        endpoint=["https://dockerproxy.com", "https://mirror.baidubce.com","https://ccr.ccs.tencentyun.com","https://docker.m.daocloud.io","https://docker.nju.edu.cn","https://docker.mirrors.ustc.edu.cn","https://hbv0b596.mirror.aliyuncs.com", "https://k8s.m.daocloud.io", "https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"]

The complete configuration file is as follows:

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1
    
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"
    
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
    
        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
    
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
    
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"
    
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true 
    
      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
    
        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
    
    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"
    
    [plugins."io.containerd.grpc.v1.cri".registry]
    
      [plugins."io.containerd.grpc.v1.cri".registry.auths]
    
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
    
      [plugins."io.containerd.grpc.v1.cri".registry.headers]
    
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://dockerproxy.com", "https://mirror.baidubce.com", "https://ccr.ccs.tencentyun.com", "https://docker.m.daocloud.io", "https://docker.nju.edu.cn", "https://docker.mirrors.ustc.edu.cn", "https://registry-1.docker.io", "https://hbv0b596.mirror.aliyuncs.com"]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
          endpoint = ["https://dockerproxy.com", "https://mirror.baidubce.com", "https://ccr.ccs.tencentyun.com", "https://docker.m.daocloud.io", "https://docker.nju.edu.cn", "https://docker.mirrors.ustc.edu.cn", "https://hbv0b596.mirror.aliyuncs.com", "https://k8s.m.daocloud.io", "https://hub-mirror.c.163.com"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    mount_options = []
    root_path = ""
    sync_remove = false
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0

Reference: https://juejin.cn/post/7312330825206693939

Configuring Docker registry mirrors

Reference: https://blog.csdn.net/easylife206/article/details/133191312

Full configuration:

vim /etc/docker/daemon.json
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": true,
  "features": {
    "buildkit": true
  },
  "insecure-registries": [
    "172.24.86.231"
  ],
  "registry-mirrors": [
    "https://dockerproxy.com",
    "https://mirror.baidubce.com",
    "https://ccr.ccs.tencentyun.com",
    "https://docker.m.daocloud.io",
    "https://docker.nju.edu.cn",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "log-driver":"json-file",
  "log-opts": {
    "max-size":"500m", 
    "max-file":"3"
  }
}
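daemon.json must be strict JSON — a trailing comma or a stray comment will keep dockerd from restarting. A minimal sketch using the standard `json` module; it embeds a trimmed copy of the config above (in practice you would `json.load` the real /etc/docker/daemon.json before restarting the daemon):

```python
import json

# A trimmed copy of the daemon.json from above, embedded for checking.
daemon_json = '''
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://docker.nju.edu.cn"
  ],
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"}
}
'''

# json.loads raises JSONDecodeError on trailing commas, comments,
# or other syntax slips that would stop dockerd from starting.
cfg = json.loads(daemon_json)
print("mirrors:", len(cfg["registry-mirrors"]))
print("log driver:", cfg["log-driver"])
```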


Preface

In some air-gapped environments, you often need to work offline or go through a proxy, for example:

1. Pulling container images through a proxy from registries such as:
   - Docker Hub: docker.io
   - Quay: quay.io
   - GCR: gcr.io
   - GitHub Container Registry: ghcr.io
2. Enterprise environments where external services can only be reached through a proxy.

Most readers know how to configure a proxy for Docker, but since Kubernetes 1.20 began deprecating Docker [1], containerd has gradually become the mainstream CRI. Below we walk through configuring a proxy for containerd.

📝 Notes:

Another scenario that calls for a containerd proxy is using Dragonfly together with containerd [2].

Steps to configure a proxy for containerd

The steps below assume containerd was installed as a systemd service.

containerd's configuration normally lives at /etc/containerd/config.toml and its unit file at /etc/systemd/system/containerd.service. The proxy is configured through environment variables on the service, as follows.

Create or edit the file /etc/systemd/system/containerd.service.d/http-proxy.conf

with the following content:

 
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost"
 


After saving, reload systemd and restart containerd for the change to take effect:

systemctl daemon-reload
systemctl restart containerd.service
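As a sketch of how the drop-in fits together, its content can be rendered from a couple of variables; the proxy address and NO_PROXY entries below are the example values from this article, not defaults:

```python
# Render the systemd drop-in shown above from a few variables.
proxy = "http://127.0.0.1:7890"        # example local proxy address
no_proxy = ["localhost", "127.0.0.0/8"]

dropin = "\n".join([
    "[Service]",
    f'Environment="HTTP_PROXY={proxy}"',
    f'Environment="HTTPS_PROXY={proxy}"',
    f'Environment="NO_PROXY={",".join(no_proxy)}"',
]) + "\n"

# In practice this text would be written to
# /etc/systemd/system/containerd.service.d/http-proxy.conf,
# followed by `systemctl daemon-reload && systemctl restart containerd`.
print(dropin)
```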

Best practice: recommended NO_PROXY settings

When configuring a proxy, be explicit about which traffic should go through it and which should not, to avoid network errors or even business-level outages.

A recommended NO_PROXY configuration:

1. Local addresses and networks: localhost and 127.0.0.1 or 127.0.0.0/8
2. Kubernetes' default domain suffixes: .svc and .cluster.local
3. The Kubernetes node CIDR, or any node networks that should be reached without the proxy: <nodeCIDR>
4. The API server's internal URL: <APIServerInternalURL>
5. The service network: <serviceNetworkCIDRs>
6. (If present) the etcd discovery domain: <etcdDiscoveryDomain>
7. The cluster network: <clusterNetworkCIDRs>
8. Other platform-specific networks (DevOps, Git/artifact repositories, ...): <platformSpecific>
9. Any other custom NO_PROXY exceptions: <REST_OF_CUSTOM_EXCEPTIONS>
10. Common private network ranges:
    - 10.0.0.0/8
    - 172.16.0.0/12
    - 192.168.0.0/16

The final configuration looks like this:

 
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,.ewhisper.cn,<nodeCIDR>,<APIServerInternalURL>,<serviceNetworkCIDRs>,<etcdDiscoveryDomain>,<clusterNetworkCIDRs>,<platformSpecific>,<REST_OF_CUSTOM_EXCEPTIONS>"
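One caveat: NO_PROXY matching semantics vary between tools (Go programs such as containerd follow golang.org/x/net/http/httpproxy rules, and some clients ignore CIDR entries entirely). A rough, illustrative Python sketch of the common exact-and-suffix matching — for intuition only, not any specific implementation:

```python
def bypass_proxy(host: str, no_proxy: str) -> bool:
    """Rough sketch of common NO_PROXY matching: exact host match, or
    suffix match for entries starting with a dot. CIDR entries (e.g.
    10.0.0.0/8) are honored by some tools but not others; they are
    skipped here."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if not entry or "/" in entry:  # skip empty and CIDR entries
            continue
        if entry.startswith("."):
            if host.endswith(entry):
                return True
        elif host == entry or host.endswith("." + entry):
            return True
    return False

no_proxy = "localhost,.svc,.cluster.local,10.0.0.0/8"
print(bypass_proxy("kubernetes.default.svc", no_proxy))  # True
print(bypass_proxy("registry-1.docker.io", no_proxy))    # False
```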
 

🎉🎉🎉

Summary

With Kubernetes 1.20 and later, enterprise air-gapped environments may need containerd configured with a proxy. This article covered how to set it up, along with NO_PROXY best practices for that configuration.

References

[1] Kubernetes deprecates Docker starting with version 1.20: https://ewhisper.cn/posts/36509/
[2] Using Dragonfly together with containerd: https://d7y.io/docs/setup/runtime/containerd/proxy/

posted @ 2025-02-05 15:36  CharyGao