Kubespray: Installing Kubernetes Online from Inside China
Installing Kubernetes with Kubespray
This quickstart helps you install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (experimental), or bare metal.
Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS and Kubernetes cluster configuration management tasks.
Kubespray provides:
- Highly available clusters
- Composable attributes (for example, a choice of network plugins)
- Support for most popular Linux distributions:
- Flatcar Container Linux
- Debian Bullseye、Buster、Jessie、Stretch
- Ubuntu 16.04、18.04、20.04、22.04
- CentOS/RHEL 7、8、9
- Fedora 35、36
- Fedora CoreOS
- openSUSE Leap 15.x/Tumbleweed
- Oracle Linux 7、8、9
- Alma Linux 8、9
- Rocky Linux 8、9
- Kylin Linux Advanced Server V10
- Amazon Linux 2
- Continuous integration tests
To choose the tool that best fits your use case, read this comparison of kubeadm and kops.
Note: an online installation requires a reasonably good network connection; otherwise downloads can take too long and the installation fails with timeouts.
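Before starting, it may help to confirm that the deployment host can reach the DaoCloud mirror endpoints used later in this guide. A minimal reachability check (assuming curl is installed) might look like:
# Print only the HTTP status line for each mirror endpoint
curl -sI https://files.m.daocloud.io | head -n 1
curl -sI https://docker.m.daocloud.io | head -n 1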
Creating a cluster
(1/5) Prepare the environment
| Host | OS version (see the supported distributions above) | CPU/RAM | Disk | IP |
| --- | --- | --- | --- | --- |
| server | Ubuntu 22.04 | 1 core / 1 GB | 40 GB | 10.11.12.100 |
| node1 | Ubuntu 22.04 | 2 cores / 8 GB | 40 GB | 10.11.12.11 |
| node2 | Ubuntu 22.04 | 2 cores / 8 GB | 40 GB | 10.11.12.12 |
| node3 | Ubuntu 22.04 | 2 cores / 8 GB | 40 GB | 10.11.12.13 |
Note: a highly available etcd deployment requires 3 members, so an HA cluster needs at least 3 nodes.
Kubespray also needs a deployment node, which may be any node in the cluster. Here we install Kubespray on the server host (10.11.12.100) and use it as a standalone deployment node outside the cluster.
(2/5) Set up the servers
1. Make sure every server allows root login (root is used throughout this guide; adapting it to a non-root user is left as an exercise).
sudo sed -i '/PermitRootLogin /c PermitRootLogin yes' /etc/ssh/sshd_config
sudo systemctl restart sshd
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
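To confirm the sshd change took effect, one option is to dump the effective configuration with sshd's test mode:
# Should print "permitrootlogin yes"
sudo sshd -T | grep -i permitrootlogin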
2. Install the containerd container runtime (all of the following steps run on the server host).
# nerdctl's CLI is largely compatible with docker; if the download fails, fetch the release from GitHub manually
wget https://github.com/containerd/nerdctl/releases/download/v1.0.0/nerdctl-full-1.0.0-linux-amd64.tar.gz
tar -zxvf nerdctl-full-1.0.0-linux-amd64.tar.gz -C /usr/local
systemctl enable --now containerd buildkit
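A quick sanity check that the runtime came up (assuming the nerdctl-full bundle installed both the containerd and buildkit units):
# Confirm the services are running and the CLI works
systemctl is-active containerd buildkit
nerdctl version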
3. Set up passwordless SSH login to the cluster nodes.
ssh-copy-id root@10.11.12.11
ssh-copy-id root@10.11.12.12
ssh-copy-id root@10.11.12.13
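A short loop can verify that passwordless login works on every node before continuing; BatchMode makes ssh fail instead of prompting for a password:
# Each command should print the node's hostname without any password prompt
for ip in 10.11.12.11 10.11.12.12 10.11.12.13; do
  ssh -o BatchMode=yes root@${ip} hostname
done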
(3/5) Install Kubespray
1. Download the source code.
root@server:~# git clone https://github.com/kubernetes-sigs/kubespray.git
root@server:~# cd kubespray
2. Pull the Kubespray image and start a container.
nerdctl pull quay.io/kubespray/kubespray:v2.20.0
# Wait for the image pull to finish; this may take a while
# Start and enter the container
nerdctl run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.20.0 bash
3. Modify the configuration (run the following inside the container).
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.11.12.11 10.11.12.12 10.11.12.13)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
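The inventory builder writes inventory/mycluster/hosts.yaml. With these three IPs, the generated file should look roughly like the sketch below (the exact layout may vary between Kubespray versions): the first two hosts become control-plane nodes, and all three join the etcd and worker groups, which matches the node roles seen in the verification step later.
all:
  hosts:
    node1:
      ansible_host: 10.11.12.11
      ip: 10.11.12.11
      access_ip: 10.11.12.11
    node2:
      ansible_host: 10.11.12.12
      ip: 10.11.12.12
      access_ip: 10.11.12.12
    node3:
      ansible_host: 10.11.12.13
      ip: 10.11.12.13
      access_ip: 10.11.12.13
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}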
# The key step for installing from inside China: point everything at the DaoCloud mirrors
cp inventory/mycluster/group_vars/all/offline.yml inventory/mycluster/group_vars/all/mirror.yml
sed -i -E '/# .*\{\{ files_repo/s/^# //g' inventory/mycluster/group_vars/all/mirror.yml
tee -a inventory/mycluster/group_vars/all/mirror.yml <<EOF
gcr_image_repo: "gcr.m.daocloud.io"
kube_image_repo: "k8s.m.daocloud.io"
docker_image_repo: "docker.m.daocloud.io"
quay_image_repo: "quay.m.daocloud.io"
github_image_repo: "ghcr.m.daocloud.io"
files_repo: "https://files.m.daocloud.io"
EOF
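To confirm the sed pass actually uncommented the download URLs and that the mirror variables were appended, a quick grep helps:
# The *_download_url lines should now be active and reference {{ files_repo }},
# and the mirror variables added above should appear at the end of the file
grep -E "files_repo|_download_url|_image_repo" inventory/mycluster/group_vars/all/mirror.yml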
4. Optional settings
Edit the configuration file inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml. The steps below take the sample file from the host checkout, edit it there, and copy the result back into the container.
# On the server host, outside the container
cp ~/kubespray/inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml .
vim k8s-cluster.yml
k8s-cluster.yml
# Choose the network plugin; cilium, calico, weave, and flannel are supported. cilium is used here.
kube_network_plugin: cilium
# If your node IPs differ from this guide, make sure these two CIDRs do not conflict with them.
# Service CIDR
kube_service_addresses: 10.233.0.0/18
# Pod CIDR
kube_pods_subnet: 10.233.64.0/18
# docker, crio, and containerd are supported; containerd is recommended.
container_manager: containerd
# Whether to enable Kata Containers
kata_containers_enabled: false
# Cluster name: the default contains a dot, so using it is not recommended; set your own.
cluster_name: my-first-k8s-cluster
# Copy the edited file back into the container; replace [cid] with the Kubespray container's ID
nerdctl cp k8s-cluster.yml [cid]:/kubespray/inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Verify the changes (inside the container)
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
(4/5) Install Kubernetes
# --become-user selects the passwordless-login user on the cluster nodes
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
# Any other user with passwordless login works too
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=ubuntu --private-key /root/.ssh/id_rsa cluster.yml
# Wait for the cluster installation to finish
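If the run aborts early with unreachable hosts, a standard Ansible ad-hoc ping against the same inventory can confirm connectivity before re-running cluster.yml:
# Pre-flight connectivity check against all inventory hosts
ansible -i inventory/mycluster/hosts.yaml all -m ping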
(5/5) Verify the result
root@node1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane 20h v1.24.6
node2 Ready control-plane 20h v1.24.6
node3 Ready <none> 20h v1.24.6
root@node1:~#
root@node1:~# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
scheduler Healthy ok
etcd-1 Healthy {"health":"true","reason":""}
etcd-2 Healthy {"health":"true","reason":""}
root@node1:~#
root@node1:~# kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-45ckm 1/1 Running 1 (57m ago) 20h
kube-system cilium-8lhw6 1/1 Running 1 (57m ago) 20h
kube-system cilium-operator-f6648bc78-2c2nd 1/1 Running 2 (57m ago) 20h
kube-system cilium-operator-f6648bc78-8dqdr 1/1 Running 1 (57m ago) 20h
kube-system cilium-s7q9v 1/1 Running 1 (57m ago) 20h
kube-system coredns-665c4cc98d-h5db4 1/1 Running 1 (57m ago) 20h
kube-system coredns-665c4cc98d-qq669 1/1 Running 1 (57m ago) 20h
kube-system dns-autoscaler-6567c8b74f-sgrj9 1/1 Running 1 (57m ago) 20h
kube-system kube-apiserver-node1 1/1 Running 2 (57m ago) 20h
kube-system kube-apiserver-node2 1/1 Running 2 (57m ago) 20h
kube-system kube-controller-manager-node1 1/1 Running 2 (57m ago) 20h
kube-system kube-controller-manager-node2 1/1 Running 3 (57m ago) 20h
kube-system kube-proxy-58dw2 1/1 Running 1 (57m ago) 20h
kube-system kube-proxy-6cqxx 1/1 Running 1 (57m ago) 20h
kube-system kube-proxy-crz2s 1/1 Running 1 (57m ago) 20h
kube-system kube-scheduler-node1 1/1 Running 2 (57m ago) 20h
kube-system kube-scheduler-node2 1/1 Running 2 (57m ago) 20h
kube-system nginx-proxy-node3 1/1 Running 1 (57m ago) 20h
kube-system nodelocaldns-dktsh 1/1 Running 1 (57m ago) 20h
kube-system nodelocaldns-lrnl6 1/1 Running 1 (57m ago) 20h
kube-system nodelocaldns-xqbzn 1/1 Running 1 (57m ago) 20h
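As an optional smoke test (not part of the original Kubespray flow), a throwaway nginx Deployment pulled through the DaoCloud mirror configured earlier can confirm that scheduling, image pulls, and networking work end to end; the image path below is an assumed mirror path for the Docker Hub nginx image:
# Deploy a test workload through the mirror
kubectl create deployment nginx-test --image=docker.m.daocloud.io/library/nginx:1.23
kubectl rollout status deployment/nginx-test
kubectl get pods -l app=nginx-test -o wide
# Clean up afterwards
kubectl delete deployment nginx-test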
Installation complete.