Kubernetes Cloud Platform Administration in Practice: Common Commands (Part 12)

I. Getting Cluster Information

Get the cluster version

[root@master ~]# kubectl version --short=true
Client Version: v1.16.7
Server Version: v1.16.7

The Kubernetes cluster, together with deployed add-ons such as CoreDNS, provides a number of different services; before a client can access these services it needs to know their access endpoints.
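
A quick way to see which API groups and resource types the cluster actually serves (a sketch; the exact list varies by cluster and version):

# every API group/version the apiserver exposes
kubectl api-versions
# resource kinds, short names, API groups, and whether they are namespaced
kubectl api-resources -o wide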

Get cluster information

[root@master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'
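
As the output hints, kubectl cluster-info dump collects far more detail; writing it to a directory keeps the output manageable (the path below is just an example):

kubectl cluster-info dump --all-namespaces --output-directory=/tmp/cluster-state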

Get the nodes

[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   6d3h   v1.16.7
node1    Ready    worker   6d3h   v1.16.7
node2    Ready    <none>   6d3h   v1.16.7
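
The ROLES column is derived from node-role.kubernetes.io/* labels, which is why node2 shows <none>. A small sketch of inspecting and labeling it (the label value is arbitrary):

# show the labels node2 currently carries
kubectl get node node2 --show-labels
# add a worker role label so ROLES displays "worker"
kubectl label node node2 node-role.kubernetes.io/worker=worker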

II. Creating Resource Objects

1. Creating resources with a command

[root@master ~]# kubectl run nginx-deloy --image=nginx:1.12 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deloy created
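
As the deprecation notice suggests, the same Deployment can also be created with kubectl create; a rough equivalent of the command above (reusing the nginx-deloy name) would be:

# create the Deployment explicitly instead of relying on kubectl run generators
kubectl create deployment nginx-deloy --image=nginx:1.12
# then scale it to the desired replica count
kubectl scale deployment nginx-deloy --replicas=2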

2. Creating resources from a YAML file

Edit the k8s_pod.yml file

[root@k8s-master ~]# cat k8s_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.128.0:5000/nginx:latest
      ports:
        - containerPort: 80

Start the Pod

[root@k8s-master ~]# kubectl create -f k8s_pod.yml
pod "nginx" created
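
To confirm the Pod started and kept the app=web label from the manifest, a quick check (purely illustrative) is:

kubectl get pod nginx -o wide --show-labels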

3. Declarative configuration

Strictly speaking, kubectl create -f and kubectl run are imperative commands; the truly declarative workflow is built around kubectl apply -f, which lets you edit the file and re-apply the desired state (see the sketch after the commands below).

kubectl create -f nginx_deploy.yml
kubectl run nginx --image=10.0.128.0:5000/nginx:1.13 --replicas=5 
kubectl run nginx --image=10.0.128.0:5000/nginx:1.13 --replicas=5 --record
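
A minimal sketch of that declarative form, reusing the image and replica count from the kubectl run command above (file name nginx_deploy.yml as in the first command):

# nginx_deploy.yml -- Deployment manifest describing the desired state
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: 10.0.128.0:5000/nginx:1.13
          ports:
            - containerPort: 80

Apply it (and re-apply after any edit) with:

kubectl apply -f nginx_deploy.yml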

III. Viewing Resource Objects

Example 1

[root@master ~]# kubectl get namespaces
NAME                           STATUS   AGE
default                        Active   4d
demo-project                   Active   3d21h
ingress-demo                   Active   40h
istio-system                   Active   4d
kube-node-lease                Active   4d
kube-public                    Active   4d
kube-system                    Active   4d
kubesphere-alerting-system     Active   4d
kubesphere-controls-system     Active   4d
kubesphere-devops-system       Active   4d
kubesphere-logging-system      Active   4d
kubesphere-monitoring-system   Active   4d
kubesphere-sample-dev          Active   2d22h
kubesphere-sample-prod         Active   2d22h
kubesphere-system              Active   4d
namespace                      Active   3d19h
openpitrix-system              Active   4d

Example 2

[root@master ~]# kubectl get pods,services
NAME                                  READY   STATUS             RESTARTS   AGE
pod/load-generator-5fb4fb465b-9k9js   1/1     Running            0          3d14h
pod/nginx-deloy-6c8868f55c-j45fr      1/1     Running            0          4m50s
pod/nginx-deloy-6c8868f55c-sq2wz      1/1     Running            0          4m50s
pod/php-apache-695cb9659c-hx6vp       0/1     ImagePullBackOff   0          3d14h

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.233.0.1     <none>        443/TCP   4d
service/php-apache   ClusterIP   10.233.10.49   <none>        80/TCP    3d14h
[root@master ~]# kubectl get pods,services -o wide
NAME                                  READY   STATUS             RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
pod/load-generator-5fb4fb465b-9k9js   1/1     Running            0          3d14h   10.233.90.47    node1   <none>           <none>
pod/nginx-deloy-6c8868f55c-j45fr      1/1     Running            0          4m56s   10.233.90.105   node1   <none>           <none>
pod/nginx-deloy-6c8868f55c-sq2wz      1/1     Running            0          4m56s   10.233.96.78    node2   <none>           <none>
pod/php-apache-695cb9659c-hx6vp       0/1     ImagePullBackOff   0          3d14h   10.233.90.46    node1   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/kubernetes   ClusterIP   10.233.0.1     <none>        443/TCP   4d      <none>
service/php-apache   ClusterIP   10.233.10.49   <none>        80/TCP    3d14h   run=php-apache
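
The nginx-deloy Deployment created earlier has no Service yet; one hedged way to expose it inside the cluster (ports assumed from the nginx defaults) is:

# create a ClusterIP Service that selects the Deployment's Pods
kubectl expose deployment nginx-deloy --port=80 --target-port=80
# verify the Service and its selector
kubectl get service nginx-deloy -o wide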

Get resource objects in a specified Namespace

[root@master ~]# kubectl get pods -l k8s-app -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-79c9854776-89tp2   1/1     Running   0          4d
calico-node-286tw                          1/1     Running   1          4d
calico-node-h2nwg                          1/1     Running   1          4d
calico-node-rzvq7                          1/1     Running   1          4d
coredns-7f9d8dc6c8-pvxqz                   1/1     Running   0          4d
dns-autoscaler-796f4ddddf-twhhf            1/1     Running   0          4d
kube-proxy-9zgrh                           1/1     Running   0          4d
kube-proxy-dbnz2                           1/1     Running   0          4d
kube-proxy-wzq9q                           1/1     Running   0          4d
nodelocaldns-86xf8                         1/1     Running   0          4d
nodelocaldns-prjwk                         1/1     Running   0          4d
nodelocaldns-v629j                         1/1     Running   0          4d

Most resources in the system belong to some Namespace object, and the default namespace is default. To get information about resources in a specific Namespace, specify its name with the -n or --namespace option.
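
A few selection and output options that are often combined with -n (label values here are only illustrative):

# list Pods across every namespace
kubectl get pods --all-namespaces
# equality-based label selector
kubectl get pods -n kube-system -l k8s-app=kube-dns
# custom columns: only the Pod name and the node it runs on
kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName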

IV. Printing Detailed Information About a Resource Object

[root@k8s-master ~]# kubectl describe pod nginx
Name:       nginx
Namespace:  default
Node:       k8s-node1/10.0.128.1
Start Time: Sun, 20 Jan 2019 13:04:51 +0800
Labels:     app=web
Status:     Running
IP:     172.16.10.2
Controllers:    <none>
Containers:
  nginx:
    Container ID:       docker://27d25a2ee0248b103991a27b81e3f244382ebdb642694e2aeb5503c373fdb912
    Image:          10.0.128.0:5000/nginx:latest
    Image ID:           docker-pullable://10.0.128.0:5000/nginx@sha256:e2847e35d4e0e2d459a7696538cbfea42ea2d3b8a1ee8329ba7e68694950afd3
    Port:           80/TCP
    State:          Running
      Started:          Sun, 20 Jan 2019 13:48:30 +0800
    Ready:          True
    Restart Count:      0
    Volume Mounts:      <none>
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True
  Ready     True
  PodScheduled  True
No volumes.
QoS Class:  BestEffort
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  48m       48m     1   {default-scheduler }            Normal      Scheduled   Successfully assigned nginx to k8s-node1
  48m       6m      13  {kubelet k8s-node1}         Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
 
  47m   5m  182 {kubelet k8s-node1}     Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
 
  4m    4m  1   {kubelet k8s-node1} spec.containers{nginx}  Normal  Pulling         pulling image "10.0.128.0:5000/nginx:latest"
  4m    4m  2   {kubelet k8s-node1}             Warning MissingClusterDNS   kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  4m    4m  1   {kubelet k8s-node1} spec.containers{nginx}  Normal  Pulled          Successfully pulled image "10.0.128.0:5000/nginx:latest"
  4m    4m  1   {kubelet k8s-node1} spec.containers{nginx}  Normal  Created         Created container with docker id 27d25a2ee024; Security:[seccomp=unconfined]
  4m    4m  1 
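
The events at the bottom of the describe output can also be queried directly, which helps when a Pod is stuck in ImagePullBackOff; a sketch using the Pod name from this example:

kubectl get events --field-selector involvedObject.name=nginx --sort-by=.metadata.creationTimestamp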

View the resource manifest and current status of the Pod objects in the kube-system namespace that carry the label component=kube-apiserver, output as YAML:

[root@master ~]# kubectl get pods -l component=kube-apiserver -o yaml -n kube-system 
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/config.hash: 7e3e3089b03ec1438ee6f54b9f53c431
      kubernetes.io/config.mirror: 7e3e3089b03ec1438ee6f54b9f53c431
      kubernetes.io/config.seen: "2020-04-24T07:00:14.906135295+08:00"
      kubernetes.io/config.source: file
    creationTimestamp: "2020-04-23T23:00:18Z"
    labels:
      component: kube-apiserver
      tier: control-plane
    name: kube-apiserver-master
    namespace: kube-system
    resourceVersion: "372"
    selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-master
    uid: 21650798-4839-4a14-b49d-f80d83bdf1a9
  spec:
    containers:
    - command:
      - kube-apiserver
      - --advertise-address=192.168.0.13
      - --allow-privileged=true
      - --anonymous-auth=True
      - --apiserver-count=1
      - --authorization-mode=Node,RBAC
      - --bind-address=0.0.0.0
      - --client-ca-file=/etc/kubernetes/ssl/ca.crt
      - --enable-admission-plugins=NodeRestriction
      - --enable-aggregator-routing=False
      - --enable-bootstrap-token-auth=true
      - --endpoint-reconciler-type=lease
      - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
      - --etcd-certfile=/etc/ssl/etcd/ssl/node-master.pem
      - --etcd-keyfile=/etc/ssl/etcd/ssl/node-master-key.pem
      - --etcd-servers=https://192.168.0.13:2379
      - --feature-gates=CSINodeInfo=true,VolumeSnapshotDataSource=true,ExpandCSIVolumes=true,RotateKubeletClientCertificate=true
      - --insecure-port=0
      - --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
      - --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
      - --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
      - --profiling=False
      - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
      - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
      - --request-timeout=1m0s
      - --requestheader-allowed-names=front-proxy-client
      - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
      - --requestheader-extra-headers-prefix=X-Remote-Extra-
      - --requestheader-group-headers=X-Remote-Group
      - --requestheader-username-headers=X-Remote-User
      - --runtime-config=
      - --secure-port=6443
      - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
      - --service-cluster-ip-range=10.233.0.0/18
      - --service-node-port-range=30000-32767
      - --storage-backend=etcd3
      - --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt
      - --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
      image: kubesphere/hyperkube:v1.16.7
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 8
        httpGet:
          host: 192.168.0.13
          path: /healthz
          port: 6443
          scheme: HTTPS
        initialDelaySeconds: 15
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      name: kube-apiserver
      resources:
        requests:
          cpu: 250m
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/ssl/certs
        name: ca-certs
        readOnly: true
      - mountPath: /etc/pki
        name: etc-pki
        readOnly: true
      - mountPath: /etc/pki/ca-trust
        name: etc-pki-ca-trust
        readOnly: true
      - mountPath: /etc/pki/tls
        name: etc-pki-tls
        readOnly: true
      - mountPath: /etc/ssl/etcd/ssl
        name: etcd-certs-0
        readOnly: true
      - mountPath: /etc/kubernetes/ssl
        name: k8s-certs
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    nodeName: master
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      operator: Exists
    volumes:
    - hostPath:
        path: /etc/ssl/certs
        type: DirectoryOrCreate
      name: ca-certs
    - hostPath:
        path: /etc/pki
        type: DirectoryOrCreate
      name: etc-pki
    - hostPath:
        path: /etc/pki/ca-trust
        type: ""
      name: etc-pki-ca-trust
    - hostPath:
        path: /etc/pki/tls
        type: ""
      name: etc-pki-tls
    - hostPath:
        path: /etc/ssl/etcd/ssl
        type: DirectoryOrCreate
      name: etcd-certs-0
    - hostPath:
        path: /etc/kubernetes/ssl
        type: DirectoryOrCreate
      name: k8s-certs
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2020-04-23T23:00:18Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2020-04-23T23:00:18Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2020-04-23T23:00:18Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2020-04-23T23:00:18Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://003b7e68f2a5918de30923950ee3dd47d23cc2dec259f07509cee1c3eadb9c11
      image: kubesphere/hyperkube:v1.16.7
      imageID: docker-pullable://kubesphere/hyperkube@sha256:b4285fd78d62c5bc9ef28dac4a88b2914727ddc8c82a32003d6a2ef2dd0caf3c
      lastState: {}
      name: kube-apiserver
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2020-04-23T22:59:44Z"
    hostIP: 192.168.0.13
    phase: Running
    podIP: 192.168.0.13
    podIPs:
    - ip: 192.168.0.13
    qosClass: Burstable
    startTime: "2020-04-23T23:00:18Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
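
Instead of scrolling through the full manifest, jsonpath can pull out individual fields; a small sketch with the same label selector:

# just the Pod IP of the kube-apiserver Pod
kubectl get pods -l component=kube-apiserver -n kube-system -o jsonpath='{.items[0].status.podIP}'
# just the image the container is running
kubectl get pods -l component=kube-apiserver -n kube-system -o jsonpath='{.items[0].spec.containers[0].image}'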

V. Common Commands for Working with Containers

1. Print a container's log output

[root@master ~]# kubectl logs kube-apiserver-master -n kube-system |head
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0423 22:59:44.568981 1 server.go:623] external host was not specified, using 192.168.0.13
I0423 22:59:44.569260 1 server.go:149] Version: v1.16.7
I0423 22:59:45.311086 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0423 22:59:45.311116 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0423 22:59:45.312164 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0423 22:59:45.312179 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0423 22:59:45.315842 1 client.go:357] parsed scheme: "endpoint"
I0423 22:59:45.315918 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://192.168.0.13:2379 0 <nil>}]
I0423 22:59:45.329792 1 client.go:357] parsed scheme: "endpoint"

With the -f option added, the command keeps streaming the log output of the specified container, much like tail -f.
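
A few log options that are often combined (the values here are only examples):

# follow the stream, starting from the last 20 lines
kubectl logs -f --tail=20 kube-apiserver-master -n kube-system
# only entries from the last hour
kubectl logs --since=1h kube-apiserver-master -n kube-system
# logs of the previous container instance after a restart
kubectl logs --previous kube-apiserver-master -n kube-system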

2. Run a command inside a container

[root@master ~]# kubectl exec nginx-deloy-6c8868f55c-j45fr ls
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var

If the Pod contains more than one container, the target container must be specified with the -c option.
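
A sketch of the multi-container case and of an interactive shell; the Pod name mypod and container name sidecar are placeholders, and /bin/sh is assumed to exist in the image:

# run a command in one specific container of a multi-container Pod
kubectl exec mypod -c sidecar -- ls /
# open an interactive shell in that container
kubectl exec -it mypod -c sidecar -- /bin/sh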

VI. Deleting Resource Objects

[root@k8s-master ~]# kubectl delete pods myweb-9rmf4
pod "myweb-9rmf4" deleted
kubectl delete pods -l app=monitor -n kube-system
kubectl delete pods --all -n kube-public
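
A few other common deletion forms, reusing objects from earlier in this post:

# delete whatever a manifest file created
kubectl delete -f k8s_pod.yml
# deleting the Deployment also removes the Pods it manages
kubectl delete deployment nginx-deloy
# force-delete a Pod stuck in Terminating (use with care)
kubectl delete pod nginx --grace-period=0 --force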