K8s Installation and Deployment Notes

Install Docker

Install the yum utilities:

$ yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

Then add the Aliyun Docker CE repository:

$ yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

List the available versions:

$ yum list docker-ce.x86_64 --showduplicates | sort -r

Install Docker:

$ yum install docker-ce docker-ce-cli containerd.io
or pin a specific version:
$ yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io

Start Docker:

service docker start

Configure a registry mirror:

$ vi /etc/docker/daemon.json

{
  "registry-mirrors": ["http://hub.c.163.com/"]
}

Save, then restart Docker:

service docker restart
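A typo in daemon.json will keep the Docker daemon from starting at all, so it can be worth validating the JSON before restarting. A minimal sketch, run here against a temporary copy; on a real host point it at /etc/docker/daemon.json instead:

```shell
# Validate the daemon.json syntax before restarting Docker.
# This sketch writes the example config to a temp file; on a real
# host, check /etc/docker/daemon.json directly.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["http://hub.c.163.com/"]
}
EOF
# python3 -m json.tool exits non-zero on malformed JSON
if python3 -m json.tool < "$cfg" > /dev/null 2>&1; then
  result="daemon.json OK"
else
  result="daemon.json INVALID"
fi
echo "$result"
```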

Uninstall Docker

List the installed Docker packages:

$ yum list installed | grep docker

containerd.io.x86_64                 1.4.3-3.1.el7                  @docker-ce-stable
docker-ce-cli.x86_64                 1:20.10.3-3.el7                @docker-ce-stable

$ yum -y remove docker-ce docker-ce-cli containerd.io

Remove the Docker data directory:

$ rm -rf /var/lib/docker

Install the Kubernetes Tools

$ vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

$ yum install -y kubectl
$ yum install -y kubeadm

kubeadm pulls in kubelet as a dependency; enable it so it starts on boot:

$ systemctl enable kubelet

Synchronize the system time:

$ yum install -y ntpdate

$ ntpdate cn.pool.ntp.org

$ hwclock --systohc
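ntpdate is a one-shot sync; a simple way to keep the clocks in step afterwards is an hourly cron entry. This is a sketch writing to a temp file, not a step from the original notes; on a real host add the line with `crontab -e` instead:

```shell
# Keep the clock synced hourly via cron (sketch against a temp file;
# on a real host, add the line to root's crontab with `crontab -e`).
cronfile=$(mktemp)
echo '0 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1' >> "$cronfile"
cat "$cronfile"
```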

Disable swap:

$ swapoff -a

$ vi /etc/fstab   # comment out the swap line to make this permanent
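The manual fstab edit can also be scripted: comment out any uncommented swap entry. The sketch below runs against a sample temp file; on a real host point it at /etc/fstab (and back it up first):

```shell
# Comment out the swap entry in an fstab-style file.
# Sketch: uses a temp file with sample content; on a real host,
# run the sed against /etc/fstab after backing it up.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Prefix '#' to every non-comment line containing a 'swap' field
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab"
cat "$fstab"
```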

Disable the firewall. Note that `iptables -F` only flushes the current rules; on CentOS 7 it is more reliable to also stop and disable firewalld:

$ systemctl stop firewalld
$ systemctl disable firewalld

$ iptables -F

Set the IP Address

If you are using a VirtualBox VM, set the first NIC to NAT and the second to bridged in the VM settings, then skip the configuration below.

First check the current IP address:

$ ip addr

The second NIC, enp0s3, is the one to configure; edit its settings:

$ vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="6a104c04-1109-4b64-81e1-3e7f9ba045f2"
DEVICE="enp0s3"
ONBOOT="yes"
IPADDR="192.168.141.110"
NETMASK="255.255.255.0"
GATEWAY="192.168.141.254"
DNS1="202.100.199.8"
DNS2="8.8.4.4"

After saving, restart the network service to apply the change:

$ systemctl restart network

Set the Hostname

$ vi /etc/hostname

k8s-master

$ vi /etc/sysconfig/network

HOSTNAME=k8s-master

On the other hosts use k8s-node1 and k8s-node2.
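The nodes should also be able to resolve each other's hostnames; a common companion step (not in the original notes) is adding entries to /etc/hosts on every machine. The node IPs below are assumed examples — substitute your own. Written to a temp file in this sketch; on a real host append to /etc/hosts:

```shell
# Map the cluster hostnames to IPs (sketch against a temp file;
# on a real host, append these lines to /etc/hosts on every node).
# NOTE: the node IPs are assumed examples, not from the original notes.
hosts=$(mktemp)
cat >> "$hosts" <<'EOF'
192.168.21.226 k8s-master
192.168.21.227 k8s-node1
192.168.21.228 k8s-node2
EOF
cat "$hosts"
```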

Set Up the Kubernetes Cluster

Initialize the cluster:

$ kubeadm reset

$ rm -f

$ cd /usr/local/

$ mkdir -p kubernetes/cluster

$ cd kubernetes/cluster

$ kubeadm config print init-defaults > kubeadm.yml

Edit the configuration:

$ vi kubeadm.yml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.21.226 # change to the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # switch to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.1 # match the Kubernetes version you installed
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16" # add the Pod network CIDR (flannel's default)
  serviceSubnet: 10.96.0.0/12
scheduler: {}

List the images that will be required:

$ kubeadm config images list --config kubeadm.yml

Pull the images:

$ kubeadm config images pull --config kubeadm.yml

Initialize the master node:

$ kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log

On success it prints something like:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.21.226:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1a30805571c5d7de92e9b51bbc0a265968969590cb5fcf9dcd7d334a5b6cff9a

Then run the commands from the output; if you are the root user, the first two are enough:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Install the Worker Nodes

Copy the `kubeadm join` command from the output above and run it on each worker node:

$ kubeadm join 192.168.21.226:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1a30805571c5d7de92e9b51bbc0a265968969590cb5fcf9dcd7d334a5b6cff9a

If it fails with the following error:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

run:

$ echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

====

[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

run:

$ echo "1" > /proc/sys/net/ipv4/ip_forward
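Writing into /proc only lasts until the next reboot; the usual way to persist both settings is a sysctl.d drop-in. Written to a temp file in this sketch; on a real host use /etc/sysctl.d/k8s.conf and then run `sysctl --system`:

```shell
# Persist the two kernel settings (sketch against a temp file;
# on a real host write /etc/sysctl.d/k8s.conf and run `sysctl --system`).
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
cat "$conf"
```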

Once the join succeeds, run the following on the master:

$ kubectl get nodes

which returns:

NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   8m46s   v1.20.4
k8s-node1    NotReady   <none>                 71s     v1.20.4
k8s-node2    NotReady   <none>                 79s     v1.20.4

If `kubeadm join` instead hangs and fails with:

[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

reset and retry on that node:

$ kubeadm reset

$ systemctl daemon-reload

$ systemctl restart kubelet

Configure the Network

Install the flannel CNI plugin:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

When it finishes, check the status again; a STATUS of Ready means the node is up:

$ kubectl get nodes

NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   35m   v1.20.4
k8s-node1    Ready      <none>                 25m   v1.20.4
k8s-node2    Ready      <none>                 27m   v1.20.4

Check the component status:

$ kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

If controller-manager and scheduler report Unhealthy after a kubeadm install, check whether their manifests disable the insecure port:

$ vi /etc/kubernetes/manifests/kube-scheduler.yaml
$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml

Remove the `- --port=0` line from each file, then restart the kubelet:

$ systemctl restart kubelet

The component status should now show healthy.

Check the master status:

$ kubectl cluster-info

Check the node status:

$ kubectl get nodes

Run an Nginx Container

Create nginx.yml:

# API version
apiVersion: apps/v1
# Resource type, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
metadata:
  # Name of this Deployment
  name: nginx
spec:
  selector:
    matchLabels:
      # Pod label; a Service's selector must match this
      app: nginx
  # Number of replicas to run
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Container list (an array, so a Pod can run several containers)
      containers:
      # Container name
      - name: nginx
        # Container image
        image: nginx:1.17
        # Pull the image only if it is not present locally
        imagePullPolicy: IfNotPresent
        ports:
        # Container port
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  selector:
    # must match the Deployment's Pod label
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31234

Create the resources:

$ kubectl apply -f nginx.yml

Check the Pod status:

$ kubectl get pods

If the status is stuck at ContainerCreating, inspect the Pod's events:

$ kubectl describe pod nginx

View the container logs:

$ kubectl logs <pod-name>

Recreate the deployment:

$ kubectl delete -f nginx.yml

$ kubectl create -f nginx.yml

Check the deployment:

$ kubectl get deployment

Expose the service (skip this step if the YAML already defines a Service):

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer

Check the service:

$ kubectl get services

$ kubectl describe service nginx

Delete the deployment:

$ kubectl delete deployment nginx

Delete the exposed service:

$ kubectl delete service nginx

posted @ 2021-03-02 17:53 踏雪无痕