Setting Up a k8s Cluster
1. Prepare three hosts
master: 192.168.147.151 (also doubles as a node)
node1: 192.168.147.152
node2: 192.168.147.153
Edit the /etc/hosts file on each host so the hostnames resolve.
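For example, /etc/hosts on all three machines would contain entries like these (using the hostnames from this walkthrough):

192.168.147.151 master
192.168.147.152 node1
192.168.147.153 node2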
2. Install and configure the master
(1) Install and configure etcd
etcd is a distributed key-value store used for service discovery and shared configuration.
From the official documentation: "listen-client-urls and listen-peer-urls specify the local addresses etcd server binds to for accepting incoming connections. To listen on a port for all interfaces, specify 0.0.0.0 as the listen IP address."
In other words, listen-client-urls is the local address etcd binds to for client connections; set it to 0.0.0.0 here so it accepts connections on all interfaces.
And: "advertise-client-urls and initial-advertise-peer-urls specify the addresses etcd clients or other etcd members should use to contact the etcd server. The advertise addresses must be reachable from the remote machines. Do not advertise addresses like localhost or 0.0.0.0 for a production setup since these addresses are unreachable from remote machines."
That is, advertise-client-urls is the address clients use to reach the etcd server; it must be reachable from remote machines, so it cannot be localhost or 0.0.0.0.
yum -y install etcd
vi /etc/etcd/etcd.conf

Only these two lines need to change:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.147.151:2379"
systemctl start etcd
systemctl status etcd
systemctl enable etcd
[root@master ~]# ss -tnl
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN     0      128         127.0.0.1:2380               *:*
LISTEN     0      128                 *:22                 *:*
LISTEN     0      100         127.0.0.1:25                 *:*
LISTEN     0      128                :::2379              :::*
LISTEN     0      128                :::22                :::*
LISTEN     0      100               ::1:25                :::*
You can see that etcd is now listening on port 2379.
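As an extra sanity check that etcd is actually serving requests rather than just bound to the port, you can query its health endpoint on the master (a sketch; the exact output wording can vary with the etcd version):

curl http://192.168.147.151:2379/health
# expected: {"health": "true"}
etcdctl cluster-health
# expected to end with: cluster is healthy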
(2) Install Kubernetes
yum -y install kubernetes-master
Edit the config file /etc/kubernetes/apiserver:
[root@master ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    # change the listen address to 0.0.0.0

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"    # uncomment this line

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"    # uncomment this line

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.147.151:2379"    # point at the etcd server

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
Edit /etc/kubernetes/config, the config file shared by the controller-manager, the scheduler, and the other kube services:
[root@master ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.147.151:8080"    # set to the master's IP
(3) Start the services
[root@master ~]# systemctl start kube-apiserver
[root@master ~]# systemctl start kube-controller-manager
[root@master ~]# systemctl start kube-scheduler
[root@master ~]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master ~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master ~]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok
controller-manager   Healthy   ok
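Before moving on to the nodes, it is worth confirming that the apiserver's insecure port is reachable over the network, since every node will talk to it at that address. A minimal check (the version string shown is hypothetical and depends on the packaged release):

curl http://192.168.147.151:8080/version
# returns a JSON blob with the build info, e.g. {"major":"1", "minor":"5", ...}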
3. Set up the nodes (including the master, which doubles as one)
(1) yum -y install kubernetes-node
(2) Edit /etc/kubernetes/config:
KUBE_MASTER="--master=http://192.168.147.151:8080"    # set to the master's IP
(3) Edit /etc/kubernetes/kubelet:
[root@master ~]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.147.151"    # set to this node's IP

# The port for the info server to serve on
KUBELET_PORT="--port=10250"    # uncomment this line

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=master"    # set to this node's hostname or IP

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.147.151:8080"    # set to the master's IP

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
systemctl start kubelet    # starting kubelet also starts docker automatically
systemctl enable kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
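To verify everything on a node came up, something like this should suffice (run on each node):

systemctl is-active docker kubelet kube-proxy
# prints "active" once per service; anything else means that unit failed to start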
(4) Check the nodes
[root@master ~]# kubectl get node
NAME      STATUS    AGE
master    Ready     8m
node1     Ready     2m
node2     Ready     25s
4. Configure the flannel network plugin on the nodes
To let containers on different hosts talk to each other, install the flannel plugin.
(1) Install flannel
yum -y install flannel
(2) Edit the config file
vi /etc/sysconfig/flanneld
[root@master ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.147.151:2379"    # set to the etcd server address

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
(3) Set the network key in etcd
This key defines the subnet pool that flannel hands out to docker containers:

[root@master ~]# etcdctl set /atomic.io/network/config '{ "Network":"172.16.0.0/16" }'
{ "Network":"172.16.0.0/16" }
This step only needs to be done on the host where the etcd server runs, i.e. the master node.
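You can confirm the key was written by reading it back:

etcdctl get /atomic.io/network/config
# expected: { "Network":"172.16.0.0/16" }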
(4) Start flannel and restart docker
[root@master ~]# systemctl start flanneld
[root@master ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@master ~]# systemctl restart docker
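After the restart, flannel should have carved a per-host subnet out of 172.16.0.0/16, and docker0 should sit inside it. A quick way to check (a sketch; the flannel0 device name assumes flannel's default UDP backend, and the actual subnet differs per host):

cat /run/flannel/subnet.env
# e.g. FLANNEL_NETWORK=172.16.0.0/16, FLANNEL_SUBNET=172.16.19.1/24, FLANNEL_MTU=1472
ip a show flannel0
ip a show docker0    # docker0 should now have an address inside FLANNEL_SUBNET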
(5) Test container connectivity across hosts
docker pull busybox
docker run -it busybox
Inside each container, check its IP with ip a, then try pinging containers on the other hosts: the pings fail.
This is because Docker 1.13 and later changed the iptables defaults: the FORWARD chain policy is now DROP, as the listing below shows.
[root@master ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination
iptables -P FORWARD ACCEPT    # run this on all three nodes
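To confirm the policy change took effect on each node:

iptables -L FORWARD -n | head -1
# expected: Chain FORWARD (policy ACCEPT)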
After the change, the containers can reach each other:
[root@master ~]# docker run -it busybox
/ # ping 172.16.58.2
PING 172.16.58.2 (172.16.58.2): 56 data bytes
64 bytes from 172.16.58.2: seq=0 ttl=60 time=2.827 ms
64 bytes from 172.16.58.2: seq=1 ttl=60 time=1.403 ms
^C
--- 172.16.58.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.403/2.115/2.827 ms
/ # ping 172.16.79.2
PING 172.16.79.2 (172.16.79.2): 56 data bytes
64 bytes from 172.16.79.2: seq=0 ttl=60 time=5.576 ms
64 bytes from 172.16.79.2: seq=1 ttl=60 time=14.580 ms
^C
--- 172.16.79.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.576/10.078/14.580 ms
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue
    link/ether 02:42:ac:10:13:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.19.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe10:1302/64 scope link
       valid_lft forever preferred_lft forever
However, the policy resets whenever the host restarts, and reapplying it by hand every time is tedious, so you can bake the rule into docker's systemd unit instead:
vi /usr/lib/systemd/system/docker.service
[root@master ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
# added: open the FORWARD chain once dockerd is up
# (ExecStartPost is used because a Type=notify service may have only one ExecStart)
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target
Run systemctl daemon-reload so systemd picks up the edited unit file. The rule will then be reapplied automatically every time docker starts; since we already set the FORWARD policy by hand above, there is no need to restart docker right now.