CKA Practice Questions 01

I am preparing for the CKA exam and have collected many practice questions online; this post records them for later reference.

1、Set configuration context $ kubectl config use-context k8s

Monitor the logs of Pod foobar and extract log lines corresponding to error file-not-found.
Write them to /opt/KULM00201/foobar. Question weight: 5%

kubectl logs foobar | grep file-not-found > /opt/KULM00201/foobar
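The grep filter itself can be sanity-checked locally on sample log text (the log lines below are invented for illustration):

```shell
# Hypothetical log lines standing in for the real output of: kubectl logs foobar
logs='INFO  starting server
ERROR file-not-found: /etc/app/config.yaml
INFO  request served
ERROR file-not-found: /var/data/cache.db'

# Same filter as the solution: keep only the matching lines
echo "$logs" | grep file-not-found
```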

2、Set configuration context $ kubectl config use-context k8s

List all PVs sorted by name, saving the full kubectl output to /opt/KUCC0010/my_volumes. Use kubectl's own functionality for sorting the output, and do not manipulate it any further.
Question weight 3%

kubectl get pv --sort-by=.metadata.name > /opt/KUCC0010/my_volumes

3、Set configuration context $ kubectl config use-context k8s

Ensure a single instance of Pod nginx is running on each node of the Kubernetes cluster, where nginx also represents the image name which has to be used. Do not override any taints currently in place.
Use a DaemonSet to complete this task and use ds.kusc00201 as the DaemonSet name. Question weight: 3%
a.

kubectl create deployment ds.kusc00201 --image=nginx --dry-run=client -o yaml > ds.kusc00201.yaml
# then edit the generated file: change kind to DaemonSet and remove the replicas/strategy fields

b.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds.kusc00201
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

4、Set configuration context $ kubectl config use-context k8s

Perform the following tasks
Add an init container to lumpy--koala (which has been defined in spec file /opt/kucc00100/pod-spec-KUCC00100.yaml)
The init container should create an empty file named /workdir/calm.txt
If /workdir/calm.txt is not detected, the Pod should exit
Once the spec file has been updated with the init container definition, the Pod should be created.
Question weight 7%

apiVersion: v1
kind: Pod
metadata:
  name: lumpy--koala
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ["sh", "-c", "if [ ! -f /workdir/calm.txt ]; then exit 1; else sleep 300; fi"]
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir -p /workdir && touch /workdir/calm.txt']
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  volumes:
  - name: workdir
    emptyDir: {}
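A detail worth double-checking: sh requires a space between `!` and `-f` inside `[ ]`; `[ !-f file ]` is a syntax error. The container's file check can be rehearsed locally, using a temp directory as a stand-in for the pod's /workdir volume:

```shell
# Temp directory standing in for the pod's /workdir volume
tmpdir=$(mktemp -d)
touch "$tmpdir/calm.txt"        # what the init container does

# Same logic as the main container command: fail if the file is missing
if [ ! -f "$tmpdir/calm.txt" ]; then status=missing; else status=present; fi
echo "$status"

rm -rf "$tmpdir"
```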

5、Set configuration context $ kubectl config use-context k8s

Create a pod named kucc4 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul
Question weight: 4%

kubectl run kucc4 --image=nginx --dry-run=client -o yaml > kucc4.yaml

# vim kucc4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul 

# after checking the file, create the Pod
kubectl apply -f kucc4.yaml

6、Set configuration context $ kubectl config use-context k8s

Schedule a Pod as follows:
Name: nginx-kusc00101
Image: nginx
Node selector: disk=ssd
Question weight: 2%

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00101
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd

7、Set configuration context $ kubectl config use-context k8s Create a deployment as follows

Name: nginx-app
Using container nginx with version 1.10.2-alpine
The deployment should contain 3 replicas
Next, deploy the app with new version 1.13.0-alpine by performing a rolling update and record that update.
Finally, rollback that update to the previous version 1.10.2-alpine
Question weight: 4%

kubectl create deployment nginx-app --image=nginx:1.10.2-alpine
kubectl scale deployment nginx-app --replicas=3
kubectl set image deployment/nginx-app nginx=nginx:1.13.0-alpine --record
kubectl rollout undo deployment nginx-app

8、Set configuration context $ kubectl config use-context k8s

Create and configure the service front-end-service so it’s accessible through NodePort and routes to the existing pod named front-end
Question weight: 4%

# check the container port of pod/front-end first, then expose it
kubectl expose pod/front-end --name=front-end-service --type=NodePort --port=80 --target-port=80

9、Set configuration context $ kubectl config use-context k8s Create a Pod as follows:

Name: jenkins
Using image: jenkins
In a new Kubernetes namespace named website-frontend
Question weight 3%

kubectl create ns website-frontend
# create the pod
kubectl run jenkins --image=jenkins --namespace=website-frontend
# or via YAML
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
  namespace: website-frontend
  labels:
    app: jenkins
spec:
  containers:
  - name: jenkins
    image: jenkins
    imagePullPolicy: IfNotPresent

10、Set configuration context $ kubectl config use-context k8s Create a deployment spec file that will:

Launch 7 replicas of the redis image with the label: app_env_stage=dev
Deployment name: kual00201
Save a copy of this spec file to /opt/KUAL00201/deploy_spec.yaml (or .json)
When you are done, clean up (delete) any new k8s API objects that you produced during this task
Question weight: 3%

kubectl create deployment kual00201 --image=redis --dry-run=client -o yaml > /opt/KUAL00201/deploy_spec.yaml
# edit the spec: set replicas to 7 and change the label in all three places
# (metadata.labels, spec.selector.matchLabels, spec.template.metadata.labels):
labels:
  app_env_stage: dev
replicas: 7
# --dry-run=client creates nothing; if you do apply the spec, clean up with:
# kubectl delete -f /opt/KUAL00201/deploy_spec.yaml

11、Set configuration context $ kubectl config use-context k8s

Create a file /opt/KUCC00302/kucc00302.txt that lists all pods that implement Service foo in Namespace production.
The format of the file should be one pod name per line.
Question weight: 3%

kubectl get svc foo -n production -o wide
# note the label in the SELECTOR column (assume it is run=foo)
kubectl get pod -n production -l run=foo | awk '{print $1}' | grep -v NAME > /opt/KUCC00302/kucc00302.txt
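The awk/grep post-processing can be checked against sample `kubectl get pod` output (the pod names below are made up):

```shell
# Hypothetical listing standing in for: kubectl get pod -n production -l run=foo
listing='NAME        READY   STATUS    RESTARTS   AGE
foo-1a2b3   1/1     Running   0          3d
foo-9z8y7   1/1     Running   0          3d'

# Same post-processing as the solution: first column, header row dropped
echo "$listing" | awk '{print $1}' | grep -v NAME
```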

12、Set configuration context $ kubectl config use-context k8s Create a Kubernetes Secret as follows:

Name: super-secret
Credential: alice or username:bob
Create a Pod named pod-secrets-via-file using the redis image which mounts a secret named super-secret at /secrets
Create a second Pod named pod-secrets-via-env using the redis image, which exports credential as TOPSECRET (import the credential from the Secret into the pod as an environment variable)
Question weight: 9%

# echo -n 'bob' | base64      
Ym9i
# vim usersecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: super-secret
data:
  username: Ym9i
# vim pod1.yaml  # mount the secret into the pod as a volume
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  containers:
  - name: pod-secrets-via-file
    image: redis
    volumeMounts:
      - name: foo
        mountPath: /secrets
  volumes:
  - name: foo
    secret:
      secretName: super-secret
# vim pod2.yaml  # expose the secret to the pod as an environment variable
apiVersion: v1       
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
  - name: pod-secrets-via-env
    image: redis
    env:
    - name: TOPSECRET
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: username
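The values under a Secret's `data:` field must be base64-encoded; the encode/decode round trip can be verified locally:

```shell
# Encode the credential exactly as done for the Secret manifest
encoded=$(printf '%s' 'bob' | base64)
echo "$encoded"        # Ym9i

# Decoding recovers the original value, as kubelet does when mounting the Secret
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"        # bob
```

Using `printf '%s'` (or `echo -n`) matters: a trailing newline would be encoded into the credential.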

13、Set configuration context $ kubectl config use-context k8s Create a pod as follows:

Name: non-persistent-redis
Container image: redis
Named-volume with name: cache-control
Mount path: /data/redis
It should launch in the pre-prod namespace and the volume MUST NOT be persistent.
A volume that is not persistent means using emptyDir: {}
Question weight: 4%

apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: pre-prod
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache-control
      mountPath: /data/redis
  volumes:
  - name: cache-control
    emptyDir: {}

14、Set configuration context $ kubectl config use-context k8s

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum (count Ready nodes, but exclude any node carrying a NoSchedule taint)
Question weight: 2%

kubectl get nodes   # confirm which nodes are Ready
kubectl describe nodes | grep Taints | grep -v NoSchedule | wc -l > /opt/nodenum
# cross-check: this assumes every node without a NoSchedule taint is also Ready
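The counting pipeline can be exercised on sample `kubectl describe nodes` output (the taint lines below are invented):

```shell
# Hypothetical Taints lines standing in for: kubectl describe nodes | grep Taints
taints='Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>'

# Same counting as the solution: nodes whose taint line contains no NoSchedule
echo "$taints" | grep -v NoSchedule | wc -l
```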

15、Set configuration context $ kubectl config use-context k8s

From the Pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the Pod consuming most CPU to the file /opt/cpu.txt (which already exists)
Question weight: 2%

kubectl top pod -l name=cpu-utilizer -A --sort-by=cpu
# write only the name of the top consumer to the (already existing) file
echo "<pod-name>" > /opt/cpu.txt
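The file should contain just the pod name, not the whole table; the extraction step can be sketched on hypothetical `kubectl top` output (names and values invented):

```shell
# Hypothetical output standing in for: kubectl top pod -l name=cpu-utilizer -A --sort-by=cpu
top='NAMESPACE   NAME             CPU(cores)   MEMORY(bytes)
default     cpu-burner-abc   950m         120Mi
default     cpu-idler-xyz    20m          80Mi'

# The highest consumer is the first data row; keep only its NAME column
echo "$top" | awk 'NR==2 {print $2}'
```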

16、 Set configuration context $ kubectl config use-context k8s Create a deployment as follows

Name: nginx-dns
Exposed via a service: nginx-dns
Ensure that the service & pod are accessible via their respective DNS records
The container(s) within any Pod(s) running as a part of this deployment should use the nginx image
Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to /opt/service.dns and /opt/pod.dns respectively.
Ensure you use the busybox:1.28 image (or earlier) for any testing, as the latest release has an upstream bug which impacts the use of nslookup.
Question weight: 7%

kubectl create deployment nginx-dns --image=nginx
kubectl expose deployment nginx-dns --port=80 --target-port=80
kubectl run test --image=busybox:1.28 --command -- sleep 3600000
# look up the service by name; for the pod, substitute the actual pod IP from kubectl get pod -o wide
kubectl exec -it test -- nslookup nginx-dns > /opt/service.dns
kubectl exec -it test -- nslookup 192.168.196.168 > /opt/pod.dns

17、No configuration context change required for this item

Create a snapshot of the etcd instance running at https://127.0.0.1:2379 saving the snapshot to the file path /data/backup/etcd-snapshot.db
The etcd instance is running etcd version 3.1.10
The following TLS certificates/key are supplied for connecting to the server with etcdctl
CA certificate: /opt/KUCM00302/ca.crt
Client certificate: /opt/KUCM00302/etcd-client.crt
Client key: /opt/KUCM00302/etcd-client.key
Question weight: 7%

Preparation:
kubectl -n kube-system cp etcd-master01:/usr/local/bin/etcdctl /usr/local/bin/etcdctl
# or locate the binary and run it via its absolute path:
find / -name etcdctl

ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
--cacert=/opt/KUCM00302/ca.crt \
--cert=/opt/KUCM00302/etcd-client.crt \
--key=/opt/KUCM00302/etcd-client.key \
snapshot save /data/backup/etcd-snapshot.db

Or run etcdctl directly from the etcd container's filesystem:

ETCDCTL_API=3 /var/lib/docker/overlay2/92f166baeeda8792cdccc209d472b83e164af5629bac4f3411331969126eb739/merged/usr/local/bin/etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save /data/backup/etcd-snapshot.db
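The `ETCDCTL_API=3` prefix sets the variable only for that single command, which is why it must be repeated on every etcdctl invocation. The mechanics can be shown with the variable alone (no etcdctl needed locally):

```shell
# VAR=value cmd exports VAR only into cmd's environment
seen=$(ETCDCTL_API=3 sh -c 'echo "$ETCDCTL_API"')
echo "$seen"                      # 3

# The calling shell itself is unaffected afterwards
echo "${ETCDCTL_API:-unset}"      # unset
```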

18、Set configuration context $ kubectl config use-context ek8s

Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it. (mark the node unschedulable, then evict its pods so they are rescheduled elsewhere)
Question weight: 4%

kubectl get nodes --show-labels | grep name=ek8s-node-1   # find the matching node name (node01 here)
kubectl cordon node01   # mark the node unschedulable
kubectl drain node01 --delete-local-data --force --ignore-daemonsets   # evict the pods on the node

19、Set configuration context $ kubectl config use-context wk8s

A Kubernetes worker node, labelled with name=wk8s-node-0, is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
Hints:
You can ssh to the failed node using $ ssh wk8s-node-0
You can assume elevated privileges on the node with the following command $ sudo -i Question weight: 4%

ssh wk8s-node-0
sudo -i
systemctl restart kubelet.service
systemctl enable kubelet.service   # make the fix permanent across reboots

20、Set configuration context $ kubectl config use-context wk8s

Configure the kubelet systemd managed service, on the node labelled with name=wk8s-node-1, to launch a Pod containing a single container of image nginx named myservice automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.
Hints:
You can ssh to the failed node using $ ssh wk8s-node-1
You can assume elevated privileges on the node with the following command $ sudo -i Question weight: 4%

$ ssh wk8s-node-1
$ sudo -i
$ systemctl status kubelet   # locate the unit file and its config paths
$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# check that --pod-manifest-path=/etc/kubernetes/manifests/ (or staticPodPath in the
# kubelet config file) is set; add it if missing
$ vim /etc/kubernetes/manifests/static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
  - name: myservice
    image: nginx
    
After creating the Pod's YAML file under /etc/kubernetes/manifests:
systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet

21、Set configuration context $ kubectl config use-context ik8s

In this task, you will configure a new Node, ik8s-node-0, to join a Kubernetes cluster as follows:

Configure kubelet for automatic certificate rotation and ensure that both server and client CSRs are automatically approved and signed as appropriate via the use of RBAC.
Ensure that the appropriate cluster-info ConfigMap is created and configured appropriately in the correct namespace so that future Nodes can easily join the cluster
Your bootstrap kubeconfig should be created on the new Node at /etc/kubernetes/bootstrap-kubelet.conf (do not remove this file once your Node has successfully joined the cluster)
The appropriate cluster-wide CA certificate is located on the Node at /etc/kubernetes/pki/ca.crt . You should ensure that any automatically issued certificates are installed to the node at /var/lib/kubelet/pki and that the kubeconfig file for kubelet will be rendered at /etc/kubernetes/kubelet.conf upon successful bootstrapping
Use an additional group for bootstrapping Nodes attempting to join the cluster which should be called system:bootstrappers:cka:default-node-token
Solution should start automatically on boot, with the systemd service unit file for kubelet available at /etc/systemd/system/kubelet.service
To test your solution, create the appropriate resources from the spec file located at /opt/…/kube-flannel.yaml This will create the necessary supporting resources as well as the kube-flannel-ds DaemonSet. You should ensure that this DaemonSet is correctly deployed to the single node in the cluster.

Hints:

kubelet is not configured or running on ik8s-master-0 for this task, and you should not attempt to configure it.
You will make use of TLS bootstrapping to complete this task.
You can obtain the IP address of the Kubernetes API server via the following command $ ssh ik8s-node-0 getent hosts ik8s-master-0
The API server is listening on the usual port, 6443/tcp, and will only serve TLS requests
The kubelet binary is already installed on ik8s-node-0 at /usr/bin/kubelet . You will not need to deploy kube-proxy to the cluster during this task.
You can ssh to the new worker node using $ ssh ik8s-node-0
You can ssh to the master node with the following command $ ssh ik8s-master-0
No further configuration of control plane services running on ik8s-master-0 is required
You can assume elevated privileges on both nodes with the following command $ sudo -i
Docker is already installed and running on ik8s-node-0
Question weight: 8%

22、Set configuration context $ kubectl config use-context bk8s

Given a partially-functioning Kubernetes cluster, identify symptoms of failure on the cluster. Determine the node and the failing service, take actions to bring up the failed service, and restore the health of the cluster. Ensure that any changes are made permanently.

The worker node in this cluster is labelled with name=bk8s-node-0 Hints:

You can ssh to the relevant nodes using $ ssh $(NODE), where $(NODE) is one of bk8s-master-0 or bk8s-node-0. You can assume elevated privileges on any node in the cluster with the following command: $ sudo -i
Question weight: 4%

Troubleshooting approach: 1. kubectl get; 2. restart the failing service; 3. enable it; 4. check that the paths in its configuration exist.

$ kubectl get nodes
$ systemctl restart kube-apiserver   # if the API server runs as a systemd service on the master
$ kubectl get cs
$ systemctl restart kube-controller-manager

23、Set configuration context $ kubectl config use-context hk8s

Create a persistent volume with name app-config, of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config.

Question weight: 3%

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/app-config
  
posted @ 2020-09-18 16:45 Ymei