26 - Implementing RBAC in Kubernetes
I. Built-in Access Control Mechanisms of the API Server
1. How the API Server is accessed
- From outside the cluster: https://IP:Port
  For example: https://10.0.0.231:6443
- From inside the cluster: https://kubernetes.default.svc
  For example: simply access the Service named "kubernetes"
[root@master231 ~]# kubectl get svc kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 6d22h
[root@master231 ~]#
[root@master231 ~]# kubectl describe svc kubernetes | grep Endpoints
Endpoints: 10.0.0.231:6443
2. The API Server has a pluggable access control mechanism built in
Each access control stage has its own dedicated plugin stack.
- Authentication:
  Verifies the legitimacy of the requester's identity, i.e. identifies and authenticates the client.
  Authentication follows "OR" logic: as soon as any one plugin authenticates the request, no further plugins are tried.
  If no plugin succeeds, the request fails, or proceeds under the "anonymous" identity; disabling anonymous access is recommended.
- Authorization:
  Verifies whether the requested operation is permitted, i.e. checks whether the client may operate on the resource object.
  Authorization also follows "OR" logic: once any one plugin grants permission, no further plugins are consulted.
  If no plugin grants permission, the operation is rejected.
- Admission Control:
  Checks whether the request content is compliant. It applies only to "write" requests and is responsible for validating field values and filling in defaults.
  Admission follows "AND" logic: every request passes through all plugins, regardless of individual outcomes.
  It checks the validity of the content before the data is written to etcd, which is why it applies only to "write" operations.
  There are two kinds: validating (verification) and mutating (defaulting or correction).
3. Authentication strategies
- X.509 client certificate authentication:
  Used in mutual-TLS communication. The client presents a certificate signed by a CA the API server trusts; the CA bundle is passed to kube-apiserver at startup via the --client-ca-file option.
  After authentication succeeds, the CN (Common Name) in the client certificate is taken as the user name, and the O (Organization) as the group name.
  In clusters deployed with kubeadm, "/etc/kubernetes/pki/ca.crt" (the CA that issues certificates to the cluster components) is used for client authentication by default.
- Bearer tokens:
  1. Static token file:
     Token entries are kept in a plain-text file that kube-apiserver loads at startup via the --token-auth-file option.
     Changes made to the file after loading take effect only after the process is restarted, so these tokens are effectively long-lived.
     The client attaches the token to the HTTP request with the "Authorization: Bearer TOKEN" header to authenticate.
  2. Bootstrap tokens:
     Generally used when joining nodes to the cluster, especially in scale-out scenarios.
  3. Service Account tokens:
     Enabled directly by kube-apiserver itself; it validates requests using signed bearer tokens.
     The signing key can be specified with the --service-account-key-file option, or defaults to the API server's TLS private key.
     Used to authenticate Pods to the API server, so that in-cluster processes can talk to it.
     Kubernetes can attach a ServiceAccount to a Pod automatically through the ServiceAccount admission controller.
  4. OIDC (OpenID Connect) tokens:
     Conceptually similar to "sign in with WeChat/Alipay"; a self-hosted setup requires configuring an identity provider.
     An OAuth2-based mechanism, usually provided by the underlying IaaS platform.
  5. Webhook tokens:
     Web-based authentication, similar in spirit to the "DingTalk bot" and "WeChat bot" integrations configured earlier.
     A callback mechanism for validating bearer tokens, which makes it possible to plug in external authentication services such as LDAP.
- Authenticating Proxy:
  kube-apiserver derives the user identity from specific HTTP headers of the request; the header names are set through dedicated options.
  kube-apiserver should verify the proxy's identity against a dedicated CA.
- Anonymous requests:
  Disabling anonymous authentication is recommended in production.
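As a concrete illustration of that recommendation, on a kubeadm cluster anonymous access can be switched off by adding the kube-apiserver flag --anonymous-auth=false to the static Pod manifest, sketched below as a fragment:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment only; the rest is unchanged)
spec:
  containers:
  - command:
    - kube-apiserver
    # requests without any credentials now receive 401 instead of being
    # mapped to the "system:anonymous" user
    - --anonymous-auth=false
```

Note that kubeadm's default liveness/readiness probes query /livez and /readyz without credentials, so they may begin to fail once anonymous auth is disabled; verify probe behavior before relying on this in production.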
4. Users in Kubernetes
A "user" is the identity of the party making a request, typically recognized by an identifier such as a user name, a group, a service account, or the anonymous user.
Users in Kubernetes fall roughly into Service Accounts, User Accounts, and Anonymous Accounts.
- Service Account:
  A built-in Kubernetes resource type; the identity used by processes inside Pods when accessing the API server.
  Reference format: "system:serviceaccount:NAMESPACE:SA_NAME"
- User Account:
  The identity used by non-Pod clients accessing the API server, generally a real "person".
  The API server provides no resource type to store these accounts; their information usually lives in external files or authentication systems.
  Identity verification may be performed by the API server itself or delegated to an external authentication service.
  Certificates can be created manually, where the O field denotes the group and the CN field the user name.
- Anonymous Account:
  A user that can be identified neither as a Service Account nor as a User Account.
  Kubernetes refers to such an account as "system:anonymous", i.e. the "anonymous user".
II. Static token file authentication test case
1. Generating tokens
1.1 Method 1
[root@master231 ~]# echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
ea7f20.66675518084d8015
[root@master231 ~]#
[root@master231 ~]# echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
ba63a7.ad4fd7e7e89f178e
[root@master231 ~]#
[root@master231 ~]#
[root@master231 ~]# echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
e358fb.503e8b0e9c8a84c2
1.2 Method 2
[root@master231 ~]# kubeadm token generate
2va7th.thj057b9afbmb144
[root@master231 ~]#
[root@master231 ~]# kubeadm token generate
ms9voh.tpfqctzux2bk2ngn
[root@master231 ~]#
[root@master231 ~]# kubeadm token generate
955iiz.dtsy7809brt2syj8
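Both methods above produce tokens in the same "6 characters, a dot, 16 characters" shape that bootstrap tokens use. A quick sanity check (a sketch; the regex is inferred from the examples above, and static-file tokens are not strictly required to follow it):

```shell
# Generate a token the same way as "Method 1" and check it matches the
# 6+16 character "id.secret" shape (openssl hex output always fits [a-z0-9]).
TOKEN="$(openssl rand -hex 3).$(openssl rand -hex 8)"
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "format OK"
# → format OK
```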
2. Create the CSV file
[root@master231 ~]# cd /etc/kubernetes/pki/
[root@master231 pki]#
[root@master231 pki]# cat token.csv
ea7f20.66675518084d8015,yinzhengjie,10001,k8s
ba63a7.ad4fd7e7e89f178e,jasonyin,10002,k8s
e358fb.503e8b0e9c8a84c2,linux96,10003,k3s
2va7th.thj057b9afbmb144,linux97,10004,k3s
ms9voh.tpfqctzux2bk2ngn,linux98,10005,k3s
955iiz.dtsy7809brt2syj8,linux99,10006,k3s
Note:
The file is in CSV format. Each line defines one user with four fields: token, user name, user ID, and groups; the group field is optional.
Exact format: token,user,uid,"group1,group2,group3"
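Conceptually, static token authentication is just a per-request lookup of the presented token in this file. The lookup can be sketched with awk (sample rows from above, written to an illustrative /tmp path):

```shell
# Recreate two sample rows, then resolve a presented token to its user name,
# mimicking the lookup the static-token authenticator performs on each request.
cat > /tmp/token.csv <<'EOF'
ea7f20.66675518084d8015,yinzhengjie,10001,k8s
ba63a7.ad4fd7e7e89f178e,jasonyin,10002,k8s
EOF
awk -F, -v t='ea7f20.66675518084d8015' '$1 == t { print $2 }' /tmp/token.csv
# → yinzhengjie
```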
3. Modify the kube-apiserver parameters to load the token file
[root@master231 pki]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --token-auth-file=/etc/kubernetes/pki/token.csv
    ...
    volumeMounts:
    ...
    - mountPath: /etc/kubernetes/pki/token.csv
      name: yinzhengjie-static-token-file
      readOnly: true
  ...
  volumes:
  ...
  - hostPath:
      path: /etc/kubernetes/pki/token.csv
      type: File
    name: yinzhengjie-static-token-file
...
[root@master231 pki]# kubectl get pods -n kube-system # wait at least 30s+ for the static Pod to restart
NAME READY STATUS RESTARTS AGE
...
kube-apiserver-master231 1/1 Running 1 (15s ago) 52s
...
4. kubectl authenticating with a token, specifying the API server's CA certificate
[root@worker232 ~]# kubectl --server=https://10.0.0.231:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --token=ea7f20.66675518084d8015 get nodes
Error from server (Forbidden): nodes is forbidden: User "yinzhengjie" cannot list resource "nodes" in API group "" at the cluster scope
[root@worker232 ~]#
[root@worker232 ~]# kubectl --server=https://10.0.0.231:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --token=8fd32c.0868709b9e5786a8 get nodes
Error from server (Forbidden): nodes is forbidden: User "linux96" cannot list resource "nodes" in API group "" at the cluster scope
[root@worker232 ~]#
[root@worker232 ~]# kubectl --server=https://10.0.0.231:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --token=oldboy.yinzhengjiejason get nodes
Error from server (Forbidden): nodes is forbidden: User "system:bootstrap:oldboy" cannot list resource "nodes" in API group "" at the cluster scope
[root@worker232 ~]#
[root@worker232 ~]# kubectl --server=https://10.0.0.231:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --token=newboy.yinzhengjiejason get nodes
error: You must be logged in to the server (Unauthorized) # authentication failed!
[root@worker232 ~]#
[root@worker232 ~]# kubectl --server=https://10.0.0.231:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt get nodes # logging in without a token is treated as the anonymous user
Please enter Username: admin
Please enter Password: Error from server (Forbidden): nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
5. curl with token authentication
[root@worker232 ~]# curl -k https://10.0.0.231:6443 # without credentials, the request is identified as the anonymous user "system:anonymous".
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}[root@worker232 ~]#
[root@worker232 ~]# curl -k -H "Authorization: Bearer 01b202.d5c4210389cbff08" https://10.0.0.231:6443/api/v1/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"yinzhengjie\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}[root@worker232 ~]#
[root@worker232 ~]#
[root@worker232 ~]# curl -k -H "Authorization: Bearer 8fd32c.0868709b9e5786a8" https://10.0.0.231:6443/api/v1/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"linux96\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}[root@worker232 ~]#
[root@worker232 ~]# curl -k -H "Authorization: Bearer dezyan.yinzhengjiejason" https://10.0.0.231:6443/api/v1/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"system:bootstrap:oldboy\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}[root@worker232 ~]#
[root@worker232 ~]#
[root@worker232 ~]# curl -k -H "Authorization: Bearer newboy.yinzhengjiejason" https://10.0.0.231:6443/api/v1/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401 # authentication failed!
}
III. X.509 client certificate authentication test case
1. Create a certificate signing request on the client node
1.1 Create the private key for the signing request
[root@worker233 ~]# openssl genrsa -out jiege.key 2048
[root@worker233 ~]#
[root@worker233 ~]# ll jiege.key
-rw------- 1 root root 1704 Apr 14 10:43 jiege.key
1.2 Create the certificate signing request
[root@worker233 ~]# openssl req -new -key jiege.key -out jiege.csr -subj "/CN=jiege/O=oldboyedu"
[root@worker233 ~]#
[root@worker233 ~]# ll jiege*
-rw-r--r-- 1 root root 911 Apr 14 10:43 jiege.csr
-rw------- 1 root root 1704 Apr 14 10:43 jiege.key
1.3 Base64-encode the certificate signing request
[root@worker233 ~]# cat jiege.csr | base64 | tr -d '\n';echo
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2FUQ0NBVkVDQVFBd0pERU9NQXdHQTFVRUF3d0ZhbWxsWjJVeEVqQVFCZ05WQkFvTUNXOXNaR0p2ZVdWawpkVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFOT2l5Q0hwNkxJd0ZVZ0RoUldFCmtRaUo2Z1VsMnJGd3dQMEZSQW9PVHBnVUVLVmc4NlhBczdNN1hreWpvdzE2clF0djd3OW9DQXNHTEYwTll6ak0Kc3lzR1VzRnYrWEFlTDVhL0xhMzlKRHhnMHRZNFp2R3MvNDlYYkM0dnoxcmJpbWxVNmJGQ1M1RFduOWJSRWJlKwpyQk9oWERKT3czS0tRU09Bc3FiSXA1NVRWb3dxeDY3bk13SXlQVmVkWkRvMWpTbkJidlVZYXFlKzdiVzlNbVBVCmg0VGg4blRDZ2ZWeUwrK1VvWDJVT2lJQTdvQWdaNURRVTZXSXg2TFRUT0pSWG1HNDRBVm53ckhJeUZNWFBKNXcKa2pPRlRTd3FIdGk1L0hsUEZvdk45eHk4MTgrb3QzalNlQ1pIVjdiamlQNEd0UWlsNExZUmZURFJiTUJyc2EvUgpWajBDQXdFQUFhQUFNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUIvMHFIUXJ4RUZtMDF0YldpdDZUV2JMaitQClozQVN1M2FYcmN1aDlJR1Fwd2hselIyLzlqSTNuOEpOaTkxZm1xQUhhUjdsaXUzVEtWdnlTWHBocHVHNGxDNjgKQVpwenhIdndMZ1E5ZWJYSU5BYTRMTkxpTXdNVkpxOUphZjF2bzdqMTRzcHVWbjBFeFczSVZOeG9vODU4YXBxOQpBK2RFK3hwYTQwaEszUmRkR3A5YVdWOStOU1RMNm5zVmFqOUQwTVR2MG1kR0lqdGZmODJhTVZKb3ZTQStDMys0CkpmeGtOb0tSUE40RUZtTW1IbkhFMUZwTnhjWFpMODUvOWZYYVVESXdKVnRuQXYxWFNyMTZRcVNPUDNRWHM2QTUKdnIrZlZVeDdzVHFuYTZJRXQ5cjgrb1VOTVVybkNTOUpraG12Y2FBQUF4WkNVTDZxM2l3dTNNdis0clRtCi0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
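The single-line base64 string is what gets pasted into the CSR manifest's request field. A round-trip check confirms the encoding is lossless (a sketch using a throwaway key and CSR under /tmp; names are illustrative):

```shell
# Create a throwaway key and CSR, encode the CSR as a single base64 line,
# then decode it and compare against the original file byte for byte.
openssl genrsa -out /tmp/demo.key 2048 2>/dev/null
openssl req -new -key /tmp/demo.key -out /tmp/demo.csr -subj "/CN=demo/O=demogroup"
B64=$(base64 < /tmp/demo.csr | tr -d '\n')
echo "$B64" | base64 -d | cmp -s - /tmp/demo.csr && echo "round-trip OK"
# → round-trip OK
```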
2. Sign the certificate on the server side
2.1 Create a CertificateSigningRequest resource manifest for the client
[root@master231 ~]# cat csr-jiege.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: jiege-csr
spec:
# The client's certificate signing request, stored base64-encoded. Paste in the base64 string generated in the previous step.
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2FUQ0NBVkVDQVFBd0pERU9NQXdHQTFVRUF3d0ZhbWxsWjJVeEVqQVFCZ05WQkFvTUNXOXNaR0p2ZVdWawpkVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFOT2l5Q0hwNkxJd0ZVZ0RoUldFCmtRaUo2Z1VsMnJGd3dQMEZSQW9PVHBnVUVLVmc4NlhBczdNN1hreWpvdzE2clF0djd3OW9DQXNHTEYwTll6ak0Kc3lzR1VzRnYrWEFlTDVhL0xhMzlKRHhnMHRZNFp2R3MvNDlYYkM0dnoxcmJpbWxVNmJGQ1M1RFduOWJSRWJlKwpyQk9oWERKT3czS0tRU09Bc3FiSXA1NVRWb3dxeDY3bk13SXlQVmVkWkRvMWpTbkJidlVZYXFlKzdiVzlNbVBVCmg0VGg4blRDZ2ZWeUwrK1VvWDJVT2lJQTdvQWdaNURRVTZXSXg2TFRUT0pSWG1HNDRBVm53ckhJeUZNWFBKNXcKa2pPRlRTd3FIdGk1L0hsUEZvdk45eHk4MTgrb3QzalNlQ1pIVjdiamlQNEd0UWlsNExZUmZURFJiTUJyc2EvUgpWajBDQXdFQUFhQUFNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUIvMHFIUXJ4RUZtMDF0YldpdDZUV2JMaitQClozQVN1M2FYcmN1aDlJR1Fwd2hselIyLzlqSTNuOEpOaTkxZm1xQUhhUjdsaXUzVEtWdnlTWHBocHVHNGxDNjgKQVpwenhIdndMZ1E5ZWJYSU5BYTRMTkxpTXdNVkpxOUphZjF2bzdqMTRzcHVWbjBFeFczSVZOeG9vODU4YXBxOQpBK2RFK3hwYTQwaEszUmRkR3A5YVdWOStOU1RMNm5zVmFqOUQwTVR2MG1kR0lqdGZmODJhTVZKb3ZTQStDMys0CkpmeGtOb0tSUE40RUZtTW1IbkhFMUZwTnhjWFpMODUvOWZYYVVESXdKVnRuQXYxWFNyMTZRcVNPUDNRWHM2QTUKdnIrZlZVeDdzVHFuYTZJRXQ5cjgrb1VOTVVybkNTOUpraG12Y2FBQUF4WkNVTDZxM2l3dTNNdis0clRtCi0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
# The signer requested for the certificate. Only the following three are supported; all are handled by the "csrsigning" controller in kube-controller-manager.
# "kubernetes.io/kube-apiserver-client":
#    Issues client certificates used to authenticate to kube-apiserver.
#    Requests for this signer are never auto-approved by kube-controller-manager.
#
# "kubernetes.io/kube-apiserver-client-kubelet":
#    Issues client certificates that kubelets use to authenticate to kube-apiserver.
#    Requests for this signer can be auto-approved by the "csrapproving" controller in kube-controller-manager.
#
# "kubernetes.io/kubelet-serving":
#    Issues serving certificates for the kubelets' TLS endpoints, which kube-apiserver can connect to securely.
#    Requests for this signer are never auto-approved by kube-controller-manager.
signerName: kubernetes.io/kube-apiserver-client
# Certificate lifetime; here set to 24h (3600*24=86400 seconds).
expirationSeconds: 86400
# The set of key usages requested in the issued certificate.
# Requests for TLS client certificates typically ask for:
#    "digital signature", "key encipherment", "client auth".
# Requests for TLS serving certificates typically ask for:
#    "key encipherment", "digital signature", "server auth".
#
# Valid values: "signing", "digital signature", "content commitment", "key encipherment", "key agreement",
# "data encipherment", "cert sign", "crl sign", "encipher only", "decipher only", "any", "server auth",
# "client auth", "code signing", "email protection", "s/mime", "ipsec end system", "ipsec tunnel", "ipsec user",
# "timestamping", "ocsp signing", "microsoft sgc", "netscape sgc".
usages:
- client auth
[root@master231 ~]#
2.2 Submit the certificate signing request
[root@master231 ~]# kubectl get csr
No resources found
[root@master231 ~]#
[root@master231 ~]# kubectl apply -f csr-jiege.yaml
certificatesigningrequest.certificates.k8s.io/jiege-csr created
[root@master231 ~]#
[root@master231 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
jiege-csr 1s kubernetes.io/kube-apiserver-client kubernetes-admin 24h Pending
2.3 Manually approve the certificate on the server side
[root@master231 ~]# kubectl certificate approve jiege-csr
certificatesigningrequest.certificates.k8s.io/jiege-csr approved
[root@master231 ~]#
[root@master231 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
jiege-csr 68s kubernetes.io/kube-apiserver-client kubernetes-admin 24h Approved,Issued
2.4 Retrieve the issued certificate
[root@master231 ~]# kubectl get csr jiege-csr -o jsonpath='{.status.certificate}' | base64 -d > jiege.crt
[root@master231 ~]#
[root@master231 ~]# ll jiege.crt
-rw-r--r-- 1 root root 1115 Apr 14 10:57 jiege.crt
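It can be worth confirming the subject of the issued certificate, since the API server maps CN to the user name and O to the group. The check is simulated below with a self-signed certificate carrying the same subject (file names under /tmp are illustrative):

```shell
# Create a self-signed certificate with the same subject as the CSR above,
# then print the subject: CN becomes the user name, O the group.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/jiege-demo.key \
  -out /tmp/jiege-demo.crt -days 1 -subj "/CN=jiege/O=oldboyedu" 2>/dev/null
openssl x509 -noout -subject -in /tmp/jiege-demo.crt
```

The same `openssl x509 -noout -subject -in jiege.crt` command works against the real certificate fetched in the previous step.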
2.5 Copy the certificate to the client node for later use
[root@master231 ~]# scp jiege.crt 10.0.0.233:~
3. Client-side verification
3.1 Check the local certificate files
[root@worker233 ~]# ll jiege.*
-rw-r--r-- 1 root root 1115 Apr 14 10:58 jiege.crt
-rw-r--r-- 1 root root 911 Apr 14 10:43 jiege.csr
-rw------- 1 root root 1704 Apr 14 10:43 jiege.key
3.2 Access the API server
[root@worker233 ~]# kubectl -s https://10.0.0.231:6443 --client-key jiege.key --client-certificate jiege.crt --insecure-skip-tls-verify get nodes
Error from server (Forbidden): nodes is forbidden: User "jiege" cannot list resource "nodes" in API group "" at the cluster scope
IV. Components of a kubeconfig
1. kubeconfig overview
kubeconfig is a YAML file that stores authentication information so that clients can load it and authenticate to the API server.
A kubeconfig holds configuration for authenticating to one or more Kubernetes clusters, and lets an administrator switch between configurations as needed.
- clusters:
  The list of Kubernetes cluster access endpoints (API servers).
  In other words, multiple clusters can be defined here.
- users:
  The list of credentials for authenticating to an API server.
  In other words, multiple users can be defined; a credential may be a token or an X.509 certificate.
- contexts:
  The list of contexts, each tying a user to a cluster it can authenticate to.
  In other words, users are associated with their clusters; whichever user is selected, the associated cluster is the one accessed. Multiple contexts may be defined.
- current-context:
  The context used by default.
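Before the following sections build one with kubectl, the four parts can be seen together in a minimal hand-written kubeconfig (all values are placeholders, not a live cluster):

```shell
# Write a minimal kubeconfig containing one cluster, one user (token credential),
# one context tying them together, and a current-context selection.
cat > /tmp/mini-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.0.0.231:6443
  name: myk8s
users:
- name: demo-user
  user:
    token: ea7f20.66675518084d8015
contexts:
- context:
    cluster: myk8s
    user: demo-user
  name: demo-user@myk8s
current-context: demo-user@myk8s
EOF
grep '^current-context:' /tmp/mini-kubeconfig.yaml
# → current-context: demo-user@myk8s
```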
2. View the kubeconfig file's content
[root@master231 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.231:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
[root@master231 ~]#
[root@master231 ~]# kubectl config view --raw
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJUWFsb3k5Q3ltaHN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBME1EY3dNekF3TURSYUZ3MHlOakEwTURjd016QXdNRFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXJrT29uQlNkeEhGZHZSTnYKVW9WbUFUdU1lVDB1T3VUalk0eU9meXY4UElsRGVEeGdtdXp5OXBjK0xzdkNFUXJGRHhSL1hVOW8vZzF3NTJFcwpvSXAvQjdhdzl2anZ1M2FidVBrRS9Kc2xwWi9GdjFMdnNoZE1BYWh6ZkZzVmIxUVMxTjVxcjJBZzhaQXp3SmJJCjlGYXhIMzE2WktwaU1GZW1ubGJMVVVYbG9QeVVjSkdEcGRNa3F1ME8vTDIvbGMvNVBqNkpRZWdrUVNXN1ZHUTgKTkcxR29TcVljekhtZkVZdE14WEF0TVNQMTRFR0pCZjBqMG5sd1R3QU92SkJCZWNmQnRoSU5Zek14d2dNYzFJSApXSnkyU1R0Mkd4VkpybVlYSkpNdU5rNkpmeWlxUklBMzNQQ0FOdS9DcHRtV2FGT2lsZXVFUVhrdy9VajdHMDhECm5YZ2ZHd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JRUlpPY3ZwZDNSTlI1anNuU0JkcGZRdFBQNQp6akFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUnBDMWVUeVI4NXNtZnNJUWozemdzT0NxQWxIcW5Ub2xCNm0wCk14VjdVTGVCVmZoNmg3a3F5cVBzelorczM1MHJxNHpyczI2Qy8xSVBwK1p3MEhvVm9zdmNOSkZvMW0wY2lpUlMKUjVqSXU0Q1Rpd2R0aWZSUUd2SmhmQVFMZmNZd1JlTHJVQUg0YmxYRUZibkorM2FyeHZPQ1B3NThjL2lJTm9XWQpBenlZUElEZHJTSjFCTlZGYkVhSjhYR1ZSYW0rSGRkNHM1bExieGYzWFlPT0o0eWNha29pdWFQN3RUNmw3MXZ2CnAwNS9nOHA3R3NsV1R0cWFFa3JXbW5yUVlXN1Z1M015cWE0M1l4dFFMa2hvVzNad2lseEc1TVo4ZXd1NXdvWlQKQUgrRzB3MkNhbzk4NEVIUFBnL2tQOFVPTGRCZWhjVUgwU2J6YXBBMjJDZ3luN0ozZEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcmtPb25CU2R4SEZkdlJOdlVvVm1BVHVNZVQwdU91VGpZNHlPZnl2OFBJbERlRHhnCm11enk5cGMrTHN2Q0VRckZEeFIvWFU5by9nMXc1MkVzb0lwL0I3YXc5dmp2dTNhYnVQa0UvSnNscFovRnYxTHYKc2hkTUFhaHpmRnNWYjFRUzFONXFyMkFnOFpBendKYkk5RmF4SDMxNlpLcGlNRmVtbmxiTFVVWGxvUHlVY0pHRApwZE1rcXUwTy9MMi9sYy81UGo2SlFlZ2tRU1c3VkdROE5HMUdvU3FZY3pIbWZFWXRNeFhBdE1TUDE0RUdKQmYwCmowbmx3VHdBT3ZKQkJlY2ZCdGhJTll6TXh3Z01jMUlIV0p5MlNUdDJHeFZKcm1ZWEpKTXVOazZKZnlpcVJJQTMKM1BDQU51L0NwdG1XYUZPaWxldUVRWGt3L1VqN0cwOERuWGdmR3dJREFRQUJBb0lCQUhZUGdIdTl1K1VLcU9jZgo4NXVFcE1iUkFTcGlPSi9OMGYvdmlkcStnZlRCU2VSN2d6ZHlzR2cvcnZFbE9pVXhscS9Rd3prRVE2MWFqZE0wCkVuZnhYSDV0VnhiN0wrOWhPNzdsZG10czhPUjBpaFJFcS8rTHFRSzJqUWNDN2xLdU10UGttNEtWTGJ4NlpaVmsKa21CM0d5aXFhZkVwUGJ4aXBZOUFYaDZCckVDVHZ4VGYxUElOcVlkT1JEcjl5S2hFUjZRV2tHTlJzZjZYUFR6MwpRRytMYVRzbERtbW1NL1JickU1V1dlUTJSQlJnWVJjU2hQYmh3cUZGZXhhN2dkVmtRQVFOY21WUW5weHdXcDNCCnZCUWh0MTh6Z2tKbXUwN215aWdjZE9Gak0vdFdTd0ZkSVhZKzBrNHVZNWtmL1dackNRQ0YzUXBrZld6L0pGbEkKNU9VS2VJRUNnWUVBd284d0pTd1BoUTNZWDJDQzgwcWdRNDhiZWlVZFgyN0tjSlRWa0hYSkhheHZEczRvTXo5agpRV0FPaFB2NGdXM0tFYUlnUDN4K3kwa3lzeHFXNVVMdERvVHVyVE45cWQ0L012bVJFZEdjcys0OWNXSkRSTDRTCnZUR2dZQWZvR3hCS21qZjcwR0Zqdlp1VjJtMGl6QTJlNXRubWFpam8xeDRuaGxWc1BCVkJBYVVDZ1lFQTVVdkEKNHNFbkFUQVdBTlRFeVU2R2JXY0JpN0F5KzdYTUkvcGphMmZiRjN1RjRrNTZpZGtTVmNPeTlhUTVVOUZKeWdkWAo4d05CbDdyZldQVGVOd3BBc3RMVkZwd3gvQzRxQ3U4SEE1dXRZSW9wcFRUd3FRWG1pS0tQQVh4bUg2aDNRZElxCnQvL1dnejh2N0E2RTc4V1Q1UmJOZk9XS0lBVlh5UE5oMGo3SlFiOENnWUJCeExtWHR6OC8wU0JWallCMjBjRS8KVlQ4S21VVkduMk1iajVScUV3YjdXdkRuNWxTOGppNzFTSTFmOHZWY2UwcVZqMktyVTJCaFE4czV0RUZTR3IrYgo2dC9yK0w0QUVEcjQ5bGhOMTdmTE16dmQra09YRjFHcVZ2NUp1Q0tFRTR2RWVpeExrc0J1dGd1QUhPaG9aaXBUCkMxSFNqU1c0b2w3bUVEWllVUzc2YVFLQmdRRGt5c2JITzdYZ3NJdHovdG53aUNNSUxOelU5bGFZNUppeVdaZzAKUnFmTmNacHc2cC9JeGtsT1BIeG9NSnBuTVJDd3ZzMGFGV2l3cm0xSHhPV3FBOWYwMXZ4Nm1CWWtMQ2dWU3RZegoybldRTzZ3OFJXdlJLNnNSTVNzQ2I0OHpEWlVabjB5eTFsdkVFQnVRTGhpbGF2OGNlcmxGWTRDRVhQQnYrYkhrCjZITkczd0tCZ0dPekxRZnorMEFoaXJTZTZ
TZllmanQrMkdVSGc3U21UcjZjNm9jTnlpZGNSQks5Q25jcENiOW4KeVZ2SktzSkNuY2FvTCsra2M1aE1YWEJxendEQzNweVlFOWR2UFRiNXFOa1Z3UEJqa0VMcEsyaXhsRUlYRUc1cApJdjVxeVJWTit1QU9PMm5zNWJXQTUwTUpHK1JjSUZrQUphcUR1R1dMWFNZdmdVOVdPREpZCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
V. Hands-on: generating a kubeconfig for static-token users
1. Create a cluster entry
[root@worker232 ~]# kubectl config set-cluster myk8s --embed-certs=true --certificate-authority=/etc/kubernetes/pki/ca.crt --server="https://10.0.0.231:6443" --kubeconfig=./yinzhengjie-k8s.conf
Cluster "myk8s" set.
[root@worker232 ~]#
[root@worker232 ~]# cat ./yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
[root@worker232 ~]#
[root@worker232 ~]# ll yinzhengjie-k8s.conf
-rw------- 1 root root 1663 Apr 14 11:26 yinzhengjie-k8s.conf
2. View the cluster entries
[root@worker232 ~]# kubectl config get-clusters --kubeconfig=./yinzhengjie-k8s.conf
NAME
myk8s
3. View the token authentication file
[root@master231 ~]# cat /etc/kubernetes/pki/token.csv
01b202.d5c4210389cbff08,yinzhengjie,10001,k8s
497804.9fc391f505052952,jasonyin,10002,k8s
8fd32c.0868709b9e5786a8,linux96,10003,k3s
jvt496.ls43vufojf45q73i,linux97,10004,k3s
qo7azt.y27gu4idn5cunudd,linux98,10005,k3s
mic1bd.mx3vohsg05bjk5rr,linux99,10006,k3s
4. Create user entries
[root@worker232 ~]# kubectl config set-credentials yinzhengjie --token="01b202.d5c4210389cbff08" --kubeconfig=./yinzhengjie-k8s.conf
User "yinzhengjie" set.
[root@worker232 ~]#
[root@worker232 ~]# kubectl config set-credentials jasonyin --token="497804.9fc391f505052952" --kubeconfig=./yinzhengjie-k8s.conf
User "jasonyin" set.
[root@worker232 ~]#
[root@worker232 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: jasonyin
user:
token: 497804.9fc391f505052952
- name: yinzhengjie
user:
token: 01b202.d5c4210389cbff08
5. View the user entries
[root@worker232 ~]# kubectl config get-users --kubeconfig=./yinzhengjie-k8s.conf
NAME
jasonyin
yinzhengjie
6. Define contexts
[root@worker232 ~]# kubectl config set-context yinzhengjie@myk8s --user=yinzhengjie --cluster=myk8s --kubeconfig=./yinzhengjie-k8s.conf
Context "yinzhengjie@myk8s" created.
[root@worker232 ~]#
[root@worker232 ~]# kubectl config set-context jasonyin@myk8s --user=jasonyin --cluster=myk8s --kubeconfig=./yinzhengjie-k8s.conf
Context "jasonyin@myk8s" created.
[root@worker232 ~]#
[root@worker232 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jasonyin
name: jasonyin@myk8s
- context:
cluster: myk8s
user: yinzhengjie
name: yinzhengjie@myk8s
current-context: ""
kind: Config
preferences: {}
users:
- name: jasonyin
user:
token: 497804.9fc391f505052952
- name: yinzhengjie
user:
token: 01b202.d5c4210389cbff08
7. List the contexts
[root@worker232 ~]# kubectl config get-contexts --kubeconfig=./yinzhengjie-k8s.conf
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
jasonyin@myk8s myk8s jasonyin
yinzhengjie@myk8s myk8s yinzhengjie
8. Set the current context
[root@worker232 ~]# kubectl config use-context yinzhengjie@myk8s --kubeconfig=./yinzhengjie-k8s.conf
Switched to context "yinzhengjie@myk8s".
[root@worker232 ~]#
[root@worker232 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jasonyin
name: jasonyin@myk8s
- context:
cluster: myk8s
user: yinzhengjie
name: yinzhengjie@myk8s
current-context: yinzhengjie@myk8s
kind: Config
preferences: {}
users:
- name: jasonyin
user:
token: 497804.9fc391f505052952
- name: yinzhengjie
user:
token: 01b202.d5c4210389cbff08
9. View the current context
[root@worker232 ~]# kubectl config current-context --kubeconfig=./yinzhengjie-k8s.conf
yinzhengjie@myk8s
[root@worker232 ~]#
[root@worker232 ~]# kubectl config get-contexts --kubeconfig=./yinzhengjie-k8s.conf
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
jasonyin@myk8s myk8s jasonyin
* yinzhengjie@myk8s myk8s yinzhengjie
[root@worker232 ~]#
10. Print the kubeconfig; by default, certificate data is hidden behind the "REDACTED" or "DATA+OMITTED" placeholders
[root@worker232 ~]# kubectl config view --kubeconfig=./yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jasonyin
name: jasonyin@myk8s
- context:
cluster: myk8s
user: yinzhengjie
name: yinzhengjie@myk8s
current-context: yinzhengjie@myk8s
kind: Config
preferences: {}
users:
- name: jasonyin
user:
token: REDACTED
- name: yinzhengjie
user:
token: REDACTED
[root@worker232 ~]#
[root@worker232 ~]# kubectl config view --kubeconfig=./yinzhengjie-k8s.conf --raw
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jasonyin
name: jasonyin@myk8s
- context:
cluster: myk8s
user: yinzhengjie
name: yinzhengjie@myk8s
current-context: yinzhengjie@myk8s
kind: Config
preferences: {}
users:
- name: jasonyin
user:
token: 497804.9fc391f505052952
- name: yinzhengjie
user:
token: 01b202.d5c4210389cbff08
11. Client authentication test
[root@worker232 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf
Error from server (Forbidden): pods is forbidden: User "yinzhengjie" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker232 ~]#
[root@worker232 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf --context=jasonyin@myk8s
Error from server (Forbidden): pods is forbidden: User "jasonyin" cannot list resource "pods" in API group "" in the namespace "default"
VI. Hands-on: generating a kubeconfig for an X.509 certificate user
1. Prepare the certificate files
[root@worker233 ~]# ll jiege.*
-rw-r--r-- 1 root root 1115 Apr 14 10:58 jiege.crt
-rw-r--r-- 1 root root 911 Apr 14 10:43 jiege.csr
-rw------- 1 root root 1704 Apr 14 10:43 jiege.key
2. Add the certificate user
[root@worker233 ~]# kubectl config set-credentials jiege --client-certificate=/root/jiege.crt --client-key=/root/jiege.key --embed-certs=true --kubeconfig=./yinzhengjie-k8s.conf
User "jiege" set.
[root@worker233 ~]#
[root@worker233 ~]# ll yinzhengjie-k8s.conf
-rw------- 1 root root 3935 Apr 14 11:39 yinzhengjie-k8s.conf
[root@worker233 ~]#
[root@worker233 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: jiege
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDakNDQWZLZ0F3SUJBZ0lSQUtxMEY4YXlpUGlFMkdHUWtpYUN4ZWN3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBME1UUXdNalE0TWpoYUZ3MHlOVEEwTVRVdwpNalE0TWpoYU1DUXhFakFRQmdOVkJBb1RDVzlzWkdKdmVXVmtkVEVPTUF3R0ExVUVBeE1GYW1sbFoyVXdnZ0VpCk1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFRvc2doNmVpeU1CVklBNFVWaEpFSWllb0YKSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZERXTTR6TE1yQmxMQgpiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XMFJHM3Zxd1RvVnd5ClRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjF2VEpqMUllRTRmSjAKd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVEZ6eWVjSkl6aFUwcwpLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBYTdHdjBWWTlBZ01CCkFBR2pSakJFTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SHdZRFZSMGoKQkJnd0ZvQVVFV1RuTDZYZDBUVWVZN0owZ1hhWDBMVHorYzR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQURVbQpSVzRoRm83cjlreEszK1FuaENQL0lzVjNGZXltQkN5WUdUWVJoUlJOTCtEQldadlhTTUxuSkppNXRsZkFNSmNtCnY2MWN4MDY0cDRXM25TSG1aU04rODUySUR1alBwWjRXeTJ1VmIwVXR6MUtkM1RBVmJTNGdWTnVRMEgvaGs1aXEKSm9Zelh0WjdiQU4xSEgyQ3RjMUlpSGlNYzBHV1djcUtQQWtzZmNrTjR2Z2lYUDNZVTRFS1lJdXBtVWV4czBLbApoRXVHNUp3aGtLVStYWFZqNm1CWDdrNnBIT3Z3SG5lNEJDRW1sT2lIYnRXU3ZPd2poUTB1ZEJ6OEFKUWYxYVJjCkkyMW5oK2dCekpDdk5oOUpLVXpkemVMSFpld0g2dzB1YndJdEUvWDV3S3l6UmNwMUpweGZoZm1TZW00elRKbnMKS2JnV3pOUzYvUHp0ak90NWV4az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFRvc2doNmVpeU1CVkkKQTRVVmhKRUlpZW9GSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZApEV000ekxNckJsTEJiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XCjBSRzN2cXdUb1Z3eVRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjEKdlRKajFJZUU0Zkowd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVApGenllY0pJemhVMHNLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBCmE3R3YwVlk5QWdNQkFBRUNnZ0VBTnI0TWRubENyNVN3YklnOGpHeFY5NWQwNlEvNW1aeEl6eW5saDVSYjBBcWcKbzZhSVgzK1ErL09IV051YStZbVo2VE55NnRGR0ExUDlkYlJZemdCazkrUVMwK1phNXgxbndkNkJ1bGVZWCtYTApvNDNEVXhBa3FyYzZURmdoa3FibkRvZmdTdkdUQ2t2NTNGOEg3amRyMjBnSnlSbUdoTUl1UnppcS9XazVza0h6CjFWQzRvdWl1Qk1yTStzcXhOWVNmYnJGK3pXV3R1QW05RzBkejVWRzdKSGRIOUEyMHFCeW5uNkF2VU5zempvdm8KYk9jVDVMenc5eGtOKzRjNnlXd3JWdzRRb3hCUWdUVi9Cd0l3bjlqZnB2eXRqaGp4bW9kVEoxcEJZT0ZMb0Q3WQp1YlVoVHdtL1Q1SmZXT0wyR09nZjNOempYeFlVS056WmhvMXJVMVEzSVFLQmdRRHVoV3NwQmdRY2dGZU9OWEhBCjdBQ1MrQldvWFZTeU1TMWdodUF1cFZmakVWL2ZHa3kvOVMyYkZMUVhPaFMrSCtuZUNlZHdUVzZKVC9rNitxYVkKbkVqaGpMenJsTWY3YUt1QkdFUnpZTmc0S2pUekdlOFViaURRRFE2MlRtMDk1eVhVN0lTSjJnS1Vad0RWY0ROUApVR3lBOWFEMHF4aGp1WkJOVFpwaG94MzhId0tCZ1FEakpRRGpscC9uRVFEMFpScm56WFJVSmc4ZTdFUGF6dVBlCkRSYUlrSjFCSzlTRjlBd0pid2hNNkVwRUxWbjNWSnpSZ2JVSENDdnhhbzB0WTFxaldaN1RocTFQb3I4aXQ1RUQKSlE4VG9UMzkrdDgwR0N4T1lZWC8zUUlHcThKa1lGSGtiekhJek9wK1B0UEJESXNIMkdXRWxKUVVrMWo1bG1pWAptdEorRVV4aUl3S0JnUUMwb2FkZ251UzRMTjJobllteS8wY0VCZ3Bvd1oxbGdPYUxaamthT2k4UGo5WFo0RkhsClFTaXplLzlTWTdMWHROVm9TSG5UeTEvOWJ1b2dwemRJOVhvZ0RYUDR1R2ltVlVNa2RadEpBVHRkZFdFNkJSYlEKa3dJWWJQc0tSdVJsNzhudnNOcENoeTVTOHBwb0NSdGlZbFo1Wndyb256WE9OL1kzQktENGRnNDhJd0tCZ0NzMwpYaHp2Q290WEE5eDc1QXVZWG5xb0p4WldFMjd0RUJPdVg4d3AzNUdIdWs2bUtTZ2VWUEQwL1RSTmdLRjdHcjhOCnM1aWI2R2h0UW1FUlZ5eGZIOFhWQ09KdTczaTJma09mNkdkdXRURythbnNwNGp3amQvQS9aMlJIaDV1N2E3bFAKb3FRMndLSzJaMm1DYm0xV3NiSHc1dCtuVFRWbmR
ZenFxd1BMWE1JTEFvR0FMK21ldGNiejlSSFN1d0NUTW5lRQo0dFFxanBqM3o1OUQwWFR0dytiRG4vYnRBalF1VHJqa2k2Ums2a0E2bG1YYXRKQ2Z3WnVNdTd5L0MyUThUS1hjCjVWcUt1cGNhdnpHTWkzeVJrcmlmSEhpb2V1NGpXNlQyYk1XcDRuUTRoV050cEx1blF5aXNCeGpOZEMzZzBONmEKb2M4eXBOL3ZUVHFGdVB6Q3l2VmxUWEU9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
3. List the users
[root@worker233 ~]# kubectl config get-users --kubeconfig=./yinzhengjie-k8s.conf
NAME
jiege
4. Define a cluster
[root@worker233 ~]# kubectl config set-cluster myk8s --embed-certs=false --certificate-authority=/etc/kubernetes/pki/ca.crt --server="https://10.0.0.231:6443" --kubeconfig=./yinzhengjie-k8s.conf
Cluster "myk8s" set.
[root@worker233 ~]#
[root@worker233 ~]# ll /etc/kubernetes/pki/ca.crt
-rw-r--r-- 1 root root 1099 Apr 10 14:50 /etc/kubernetes/pki/ca.crt
[root@worker233 ~]#
[root@worker233 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority: /etc/kubernetes/pki/ca.crt
server: https://10.0.0.231:6443
name: myk8s
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: jiege
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDakNDQWZLZ0F3SUJBZ0lSQUtxMEY4YXlpUGlFMkdHUWtpYUN4ZWN3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBME1UUXdNalE0TWpoYUZ3MHlOVEEwTVRVdwpNalE0TWpoYU1DUXhFakFRQmdOVkJBb1RDVzlzWkdKdmVXVmtkVEVPTUF3R0ExVUVBeE1GYW1sbFoyVXdnZ0VpCk1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFRvc2doNmVpeU1CVklBNFVWaEpFSWllb0YKSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZERXTTR6TE1yQmxMQgpiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XMFJHM3Zxd1RvVnd5ClRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjF2VEpqMUllRTRmSjAKd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVEZ6eWVjSkl6aFUwcwpLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBYTdHdjBWWTlBZ01CCkFBR2pSakJFTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SHdZRFZSMGoKQkJnd0ZvQVVFV1RuTDZYZDBUVWVZN0owZ1hhWDBMVHorYzR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQURVbQpSVzRoRm83cjlreEszK1FuaENQL0lzVjNGZXltQkN5WUdUWVJoUlJOTCtEQldadlhTTUxuSkppNXRsZkFNSmNtCnY2MWN4MDY0cDRXM25TSG1aU04rODUySUR1alBwWjRXeTJ1VmIwVXR6MUtkM1RBVmJTNGdWTnVRMEgvaGs1aXEKSm9Zelh0WjdiQU4xSEgyQ3RjMUlpSGlNYzBHV1djcUtQQWtzZmNrTjR2Z2lYUDNZVTRFS1lJdXBtVWV4czBLbApoRXVHNUp3aGtLVStYWFZqNm1CWDdrNnBIT3Z3SG5lNEJDRW1sT2lIYnRXU3ZPd2poUTB1ZEJ6OEFKUWYxYVJjCkkyMW5oK2dCekpDdk5oOUpLVXpkemVMSFpld0g2dzB1YndJdEUvWDV3S3l6UmNwMUpweGZoZm1TZW00elRKbnMKS2JnV3pOUzYvUHp0ak90NWV4az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFRvc2doNmVpeU1CVkkKQTRVVmhKRUlpZW9GSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZApEV000ekxNckJsTEJiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XCjBSRzN2cXdUb1Z3eVRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjEKdlRKajFJZUU0Zkowd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVApGenllY0pJemhVMHNLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBCmE3R3YwVlk5QWdNQkFBRUNnZ0VBTnI0TWRubENyNVN3YklnOGpHeFY5NWQwNlEvNW1aeEl6eW5saDVSYjBBcWcKbzZhSVgzK1ErL09IV051YStZbVo2VE55NnRGR0ExUDlkYlJZemdCazkrUVMwK1phNXgxbndkNkJ1bGVZWCtYTApvNDNEVXhBa3FyYzZURmdoa3FibkRvZmdTdkdUQ2t2NTNGOEg3amRyMjBnSnlSbUdoTUl1UnppcS9XazVza0h6CjFWQzRvdWl1Qk1yTStzcXhOWVNmYnJGK3pXV3R1QW05RzBkejVWRzdKSGRIOUEyMHFCeW5uNkF2VU5zempvdm8KYk9jVDVMenc5eGtOKzRjNnlXd3JWdzRRb3hCUWdUVi9Cd0l3bjlqZnB2eXRqaGp4bW9kVEoxcEJZT0ZMb0Q3WQp1YlVoVHdtL1Q1SmZXT0wyR09nZjNOempYeFlVS056WmhvMXJVMVEzSVFLQmdRRHVoV3NwQmdRY2dGZU9OWEhBCjdBQ1MrQldvWFZTeU1TMWdodUF1cFZmakVWL2ZHa3kvOVMyYkZMUVhPaFMrSCtuZUNlZHdUVzZKVC9rNitxYVkKbkVqaGpMenJsTWY3YUt1QkdFUnpZTmc0S2pUekdlOFViaURRRFE2MlRtMDk1eVhVN0lTSjJnS1Vad0RWY0ROUApVR3lBOWFEMHF4aGp1WkJOVFpwaG94MzhId0tCZ1FEakpRRGpscC9uRVFEMFpScm56WFJVSmc4ZTdFUGF6dVBlCkRSYUlrSjFCSzlTRjlBd0pid2hNNkVwRUxWbjNWSnpSZ2JVSENDdnhhbzB0WTFxaldaN1RocTFQb3I4aXQ1RUQKSlE4VG9UMzkrdDgwR0N4T1lZWC8zUUlHcThKa1lGSGtiekhJek9wK1B0UEJESXNIMkdXRWxKUVVrMWo1bG1pWAptdEorRVV4aUl3S0JnUUMwb2FkZ251UzRMTjJobllteS8wY0VCZ3Bvd1oxbGdPYUxaamthT2k4UGo5WFo0RkhsClFTaXplLzlTWTdMWHROVm9TSG5UeTEvOWJ1b2dwemRJOVhvZ0RYUDR1R2ltVlVNa2RadEpBVHRkZFdFNkJSYlEKa3dJWWJQc0tSdVJsNzhudnNOcENoeTVTOHBwb0NSdGlZbFo1Wndyb256WE9OL1kzQktENGRnNDhJd0tCZ0NzMwpYaHp2Q290WEE5eDc1QXVZWG5xb0p4WldFMjd0RUJPdVg4d3AzNUdIdWs2bUtTZ2VWUEQwL1RSTmdLRjdHcjhOCnM1aWI2R2h0UW1FUlZ5eGZIOFhWQ09KdTczaTJma09mNkdkdXRURythbnNwNGp3amQvQS9aMlJIaDV1N2E3bFAKb3FRMndLSzJaMm1DYm0xV3NiSHc1dCtuVFRWbmR
ZenFxd1BMWE1JTEFvR0FMK21ldGNiejlSSFN1d0NUTW5lRQo0dFFxanBqM3o1OUQwWFR0dytiRG4vYnRBalF1VHJqa2k2Ums2a0E2bG1YYXRKQ2Z3WnVNdTd5L0MyUThUS1hjCjVWcUt1cGNhdnpHTWkzeVJrcmlmSEhpb2V1NGpXNlQyYk1XcDRuUTRoV050cEx1blF5aXNCeGpOZEMzZzBONmEKb2M4eXBOL3ZUVHFGdVB6Q3l2VmxUWEU9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
5. Define a context
[root@worker233 ~]# kubectl config set-context jiege@myk8s --user=jiege --cluster=myk8s --kubeconfig=./yinzhengjie-k8s.conf
Context "jiege@myk8s" created.
[root@worker233 ~]#
[root@worker233 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority: /etc/kubernetes/pki/ca.crt
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jiege
name: jiege@myk8s
current-context: ""
kind: Config
preferences: {}
users:
- name: jiege
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDakNDQWZLZ0F3SUJBZ0lSQUtxMEY4YXlpUGlFMkdHUWtpYUN4ZWN3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBME1UUXdNalE0TWpoYUZ3MHlOVEEwTVRVdwpNalE0TWpoYU1DUXhFakFRQmdOVkJBb1RDVzlzWkdKdmVXVmtkVEVPTUF3R0ExVUVBeE1GYW1sbFoyVXdnZ0VpCk1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFRvc2doNmVpeU1CVklBNFVWaEpFSWllb0YKSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZERXTTR6TE1yQmxMQgpiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XMFJHM3Zxd1RvVnd5ClRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjF2VEpqMUllRTRmSjAKd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVEZ6eWVjSkl6aFUwcwpLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBYTdHdjBWWTlBZ01CCkFBR2pSakJFTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SHdZRFZSMGoKQkJnd0ZvQVVFV1RuTDZYZDBUVWVZN0owZ1hhWDBMVHorYzR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQURVbQpSVzRoRm83cjlreEszK1FuaENQL0lzVjNGZXltQkN5WUdUWVJoUlJOTCtEQldadlhTTUxuSkppNXRsZkFNSmNtCnY2MWN4MDY0cDRXM25TSG1aU04rODUySUR1alBwWjRXeTJ1VmIwVXR6MUtkM1RBVmJTNGdWTnVRMEgvaGs1aXEKSm9Zelh0WjdiQU4xSEgyQ3RjMUlpSGlNYzBHV1djcUtQQWtzZmNrTjR2Z2lYUDNZVTRFS1lJdXBtVWV4czBLbApoRXVHNUp3aGtLVStYWFZqNm1CWDdrNnBIT3Z3SG5lNEJDRW1sT2lIYnRXU3ZPd2poUTB1ZEJ6OEFKUWYxYVJjCkkyMW5oK2dCekpDdk5oOUpLVXpkemVMSFpld0g2dzB1YndJdEUvWDV3S3l6UmNwMUpweGZoZm1TZW00elRKbnMKS2JnV3pOUzYvUHp0ak90NWV4az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFRvc2doNmVpeU1CVkkKQTRVVmhKRUlpZW9GSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZApEV000ekxNckJsTEJiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XCjBSRzN2cXdUb1Z3eVRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjEKdlRKajFJZUU0Zkowd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVApGenllY0pJemhVMHNLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBCmE3R3YwVlk5QWdNQkFBRUNnZ0VBTnI0TWRubENyNVN3YklnOGpHeFY5NWQwNlEvNW1aeEl6eW5saDVSYjBBcWcKbzZhSVgzK1ErL09IV051YStZbVo2VE55NnRGR0ExUDlkYlJZemdCazkrUVMwK1phNXgxbndkNkJ1bGVZWCtYTApvNDNEVXhBa3FyYzZURmdoa3FibkRvZmdTdkdUQ2t2NTNGOEg3amRyMjBnSnlSbUdoTUl1UnppcS9XazVza0h6CjFWQzRvdWl1Qk1yTStzcXhOWVNmYnJGK3pXV3R1QW05RzBkejVWRzdKSGRIOUEyMHFCeW5uNkF2VU5zempvdm8KYk9jVDVMenc5eGtOKzRjNnlXd3JWdzRRb3hCUWdUVi9Cd0l3bjlqZnB2eXRqaGp4bW9kVEoxcEJZT0ZMb0Q3WQp1YlVoVHdtL1Q1SmZXT0wyR09nZjNOempYeFlVS056WmhvMXJVMVEzSVFLQmdRRHVoV3NwQmdRY2dGZU9OWEhBCjdBQ1MrQldvWFZTeU1TMWdodUF1cFZmakVWL2ZHa3kvOVMyYkZMUVhPaFMrSCtuZUNlZHdUVzZKVC9rNitxYVkKbkVqaGpMenJsTWY3YUt1QkdFUnpZTmc0S2pUekdlOFViaURRRFE2MlRtMDk1eVhVN0lTSjJnS1Vad0RWY0ROUApVR3lBOWFEMHF4aGp1WkJOVFpwaG94MzhId0tCZ1FEakpRRGpscC9uRVFEMFpScm56WFJVSmc4ZTdFUGF6dVBlCkRSYUlrSjFCSzlTRjlBd0pid2hNNkVwRUxWbjNWSnpSZ2JVSENDdnhhbzB0WTFxaldaN1RocTFQb3I4aXQ1RUQKSlE4VG9UMzkrdDgwR0N4T1lZWC8zUUlHcThKa1lGSGtiekhJek9wK1B0UEJESXNIMkdXRWxKUVVrMWo1bG1pWAptdEorRVV4aUl3S0JnUUMwb2FkZ251UzRMTjJobllteS8wY0VCZ3Bvd1oxbGdPYUxaamthT2k4UGo5WFo0RkhsClFTaXplLzlTWTdMWHROVm9TSG5UeTEvOWJ1b2dwemRJOVhvZ0RYUDR1R2ltVlVNa2RadEpBVHRkZFdFNkJSYlEKa3dJWWJQc0tSdVJsNzhudnNOcENoeTVTOHBwb0NSdGlZbFo1Wndyb256WE9OL1kzQktENGRnNDhJd0tCZ0NzMwpYaHp2Q290WEE5eDc1QXVZWG5xb0p4WldFMjd0RUJPdVg4d3AzNUdIdWs2bUtTZ2VWUEQwL1RSTmdLRjdHcjhOCnM1aWI2R2h0UW1FUlZ5eGZIOFhWQ09KdTczaTJma09mNkdkdXRURythbnNwNGp3amQvQS9aMlJIaDV1N2E3bFAKb3FRMndLSzJaMm1DYm0xV3NiSHc1dCtuVFRWbmR
ZenFxd1BMWE1JTEFvR0FMK21ldGNiejlSSFN1d0NUTW5lRQo0dFFxanBqM3o1OUQwWFR0dytiRG4vYnRBalF1VHJqa2k2Ums2a0E2bG1YYXRKQ2Z3WnVNdTd5L0MyUThUS1hjCjVWcUt1cGNhdnpHTWkzeVJrcmlmSEhpb2V1NGpXNlQyYk1XcDRuUTRoV050cEx1blF5aXNCeGpOZEMzZzBONmEKb2M4eXBOL3ZUVHFGdVB6Q3l2VmxUWEU9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
6. List the contexts
[root@worker233 ~]# kubectl config get-contexts --kubeconfig=./yinzhengjie-k8s.conf
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
jiege@myk8s myk8s jiege
7. View the kubeconfig
[root@worker233 ~]# kubectl --kubeconfig=./yinzhengjie-k8s.conf config view
apiVersion: v1
clusters:
- cluster:
certificate-authority: /etc/kubernetes/pki/ca.crt
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jiege
name: jiege@myk8s
current-context: ""
kind: Config
preferences: {}
users:
- name: jiege
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
8. Client-side test
[root@worker233 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@worker233 ~]#
[root@worker233 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf --context=jiege@myk8s
Error from server (Forbidden): pods is forbidden: User "jiege" cannot list resource "pods" in API group "" in the namespace "default"
9. Set the default context
[root@worker233 ~]# kubectl config use-context jiege@myk8s --kubeconfig=./yinzhengjie-k8s.conf
Switched to context "jiege@myk8s".
[root@worker233 ~]#
[root@worker233 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority: /etc/kubernetes/pki/ca.crt
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jiege
name: jiege@myk8s
current-context: jiege@myk8s
kind: Config
preferences: {}
users:
- name: jiege
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDakNDQWZLZ0F3SUJBZ0lSQUtxMEY4YXlpUGlFMkdHUWtpYUN4ZWN3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBME1UUXdNalE0TWpoYUZ3MHlOVEEwTVRVdwpNalE0TWpoYU1DUXhFakFRQmdOVkJBb1RDVzlzWkdKdmVXVmtkVEVPTUF3R0ExVUVBeE1GYW1sbFoyVXdnZ0VpCk1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFRvc2doNmVpeU1CVklBNFVWaEpFSWllb0YKSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZERXTTR6TE1yQmxMQgpiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XMFJHM3Zxd1RvVnd5ClRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjF2VEpqMUllRTRmSjAKd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVEZ6eWVjSkl6aFUwcwpLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBYTdHdjBWWTlBZ01CCkFBR2pSakJFTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SHdZRFZSMGoKQkJnd0ZvQVVFV1RuTDZYZDBUVWVZN0owZ1hhWDBMVHorYzR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQURVbQpSVzRoRm83cjlreEszK1FuaENQL0lzVjNGZXltQkN5WUdUWVJoUlJOTCtEQldadlhTTUxuSkppNXRsZkFNSmNtCnY2MWN4MDY0cDRXM25TSG1aU04rODUySUR1alBwWjRXeTJ1VmIwVXR6MUtkM1RBVmJTNGdWTnVRMEgvaGs1aXEKSm9Zelh0WjdiQU4xSEgyQ3RjMUlpSGlNYzBHV1djcUtQQWtzZmNrTjR2Z2lYUDNZVTRFS1lJdXBtVWV4czBLbApoRXVHNUp3aGtLVStYWFZqNm1CWDdrNnBIT3Z3SG5lNEJDRW1sT2lIYnRXU3ZPd2poUTB1ZEJ6OEFKUWYxYVJjCkkyMW5oK2dCekpDdk5oOUpLVXpkemVMSFpld0g2dzB1YndJdEUvWDV3S3l6UmNwMUpweGZoZm1TZW00elRKbnMKS2JnV3pOUzYvUHp0ak90NWV4az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFRvc2doNmVpeU1CVkkKQTRVVmhKRUlpZW9GSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZApEV000ekxNckJsTEJiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XCjBSRzN2cXdUb1Z3eVRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjEKdlRKajFJZUU0Zkowd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVApGenllY0pJemhVMHNLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBCmE3R3YwVlk5QWdNQkFBRUNnZ0VBTnI0TWRubENyNVN3YklnOGpHeFY5NWQwNlEvNW1aeEl6eW5saDVSYjBBcWcKbzZhSVgzK1ErL09IV051YStZbVo2VE55NnRGR0ExUDlkYlJZemdCazkrUVMwK1phNXgxbndkNkJ1bGVZWCtYTApvNDNEVXhBa3FyYzZURmdoa3FibkRvZmdTdkdUQ2t2NTNGOEg3amRyMjBnSnlSbUdoTUl1UnppcS9XazVza0h6CjFWQzRvdWl1Qk1yTStzcXhOWVNmYnJGK3pXV3R1QW05RzBkejVWRzdKSGRIOUEyMHFCeW5uNkF2VU5zempvdm8KYk9jVDVMenc5eGtOKzRjNnlXd3JWdzRRb3hCUWdUVi9Cd0l3bjlqZnB2eXRqaGp4bW9kVEoxcEJZT0ZMb0Q3WQp1YlVoVHdtL1Q1SmZXT0wyR09nZjNOempYeFlVS056WmhvMXJVMVEzSVFLQmdRRHVoV3NwQmdRY2dGZU9OWEhBCjdBQ1MrQldvWFZTeU1TMWdodUF1cFZmakVWL2ZHa3kvOVMyYkZMUVhPaFMrSCtuZUNlZHdUVzZKVC9rNitxYVkKbkVqaGpMenJsTWY3YUt1QkdFUnpZTmc0S2pUekdlOFViaURRRFE2MlRtMDk1eVhVN0lTSjJnS1Vad0RWY0ROUApVR3lBOWFEMHF4aGp1WkJOVFpwaG94MzhId0tCZ1FEakpRRGpscC9uRVFEMFpScm56WFJVSmc4ZTdFUGF6dVBlCkRSYUlrSjFCSzlTRjlBd0pid2hNNkVwRUxWbjNWSnpSZ2JVSENDdnhhbzB0WTFxaldaN1RocTFQb3I4aXQ1RUQKSlE4VG9UMzkrdDgwR0N4T1lZWC8zUUlHcThKa1lGSGtiekhJek9wK1B0UEJESXNIMkdXRWxKUVVrMWo1bG1pWAptdEorRVV4aUl3S0JnUUMwb2FkZ251UzRMTjJobllteS8wY0VCZ3Bvd1oxbGdPYUxaamthT2k4UGo5WFo0RkhsClFTaXplLzlTWTdMWHROVm9TSG5UeTEvOWJ1b2dwemRJOVhvZ0RYUDR1R2ltVlVNa2RadEpBVHRkZFdFNkJSYlEKa3dJWWJQc0tSdVJsNzhudnNOcENoeTVTOHBwb0NSdGlZbFo1Wndyb256WE9OL1kzQktENGRnNDhJd0tCZ0NzMwpYaHp2Q290WEE5eDc1QXVZWG5xb0p4WldFMjd0RUJPdVg4d3AzNUdIdWs2bUtTZ2VWUEQwL1RSTmdLRjdHcjhOCnM1aWI2R2h0UW1FUlZ5eGZIOFhWQ09KdTczaTJma09mNkdkdXRURythbnNwNGp3amQvQS9aMlJIaDV1N2E3bFAKb3FRMndLSzJaMm1DYm0xV3NiSHc1dCtuVFRWbmR
ZenFxd1BMWE1JTEFvR0FMK21ldGNiejlSSFN1d0NUTW5lRQo0dFFxanBqM3o1OUQwWFR0dytiRG4vYnRBalF1VHJqa2k2Ums2a0E2bG1YYXRKQ2Z3WnVNdTd5L0MyUThUS1hjCjVWcUt1cGNhdnpHTWkzeVJrcmlmSEhpb2V1NGpXNlQyYk1XcDRuUTRoV050cEx1blF5aXNCeGpOZEMzZzBONmEKb2M4eXBOL3ZUVHFGdVB6Q3l2VmxUWEU9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
[root@worker233 ~]#
10. Test again
[root@worker233 ~]# kubectl config current-context --kubeconfig=./yinzhengjie-k8s.conf
jiege@myk8s
[root@worker233 ~]#
[root@worker233 ~]# kubectl config get-contexts --kubeconfig=./yinzhengjie-k8s.conf
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* jiege@myk8s myk8s jiege
[root@worker233 ~]#
[root@worker233 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf
Error from server (Forbidden): pods is forbidden: User "jiege" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker233 ~]#
VII. Kubernetes authenticates with ServiceAccounts (sa) by default
1. Why Service Accounts are needed
Kubernetes-native applications hosted on Kubernetes usually need to talk to the API Server directly to obtain necessary information.
The API Server must also authenticate these client programs running inside Pod resources; the Service Account is the account type designed specifically for this scenario.
ServiceAccount is one of the standard resource types supported by the API Server:
- 1. ServiceAccount data is stored as a resource object;
- 2. The credentials are stored in a Secret dedicated to the ServiceAccount object (v1.23 and earlier);
- 3. It is namespace-scoped and used exclusively by processes in Pods on the cluster when they access the API Server.
2. How a Pod uses a ServiceAccount
There are generally two ways to use a Service Account on a Pod:
- Automatic assignment:
  ServiceAccounts are usually created automatically by the API Server, and the ServiceAccount admission controller automatically associates one with every Pod created in the cluster.
- Explicit selection:
  In the Pod spec, use serviceAccountName to specify the particular ServiceAccount to use.
Kubernetes automates ServiceAccount handling on Pods with three components: the ServiceAccount Admission Controller, the Token Controller, and the ServiceAccount Controller.
- ServiceAccount Admission Controller:
  An API Server admission plugin, mainly responsible for automating ServiceAccount handling on Pods;
  if a Pod does not specify an sa, it is assigned the namespace's "default" sa.
- Token Controller:
  The component that manages the token for each sa; it is integrated into the controller-manager.
- ServiceAccount Controller:
  Maintains the backing data for ServiceAccounts and ensures every namespace has an sa named "default"; it is integrated into the controller-manager.
Note:
  When special privileges are needed, specify a custom ServiceAccount resource object for the Pod to use.
3. The different implementations of ServiceAccount tokens
Through Kubernetes v1.23, a ServiceAccount stores its sensitive data in a dedicated Secret object:
- 1. The Secret object's type identifier is "kubernetes.io/service-account-token".
- 2. The Secret automatically carries the token used to authenticate to the API Server, also called the ServiceAccount Token.
The implementation of ServiceAccount tokens has changed across versions:
- 1. Kubernetes v1.20 and earlier:
  The system automatically generates the dedicated Secret object and attaches it to Pods via the secret volume plugin.
  The Secret carries a token that never expires (low security: anyone who later obtains the token can log in indefinitely).
- 2. Kubernetes v1.21–v1.23:
  The system still generates the dedicated Secret object automatically, but attaches credentials to Pods via the projected volume plugin.
  The Pod no longer uses the token in the Secret; it is deprecated, and future versions no longer create it.
  Instead, the kubelet requests tokens from the TokenRequest API; by default they are valid for one year and refreshed hourly.
- 3. Kubernetes v1.24+:
  The system no longer auto-generates the dedicated Secret object.
  The kubelet requests tokens from the TokenRequest API; by default they are valid for one year and refreshed hourly.
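On v1.24+, if a long-lived token is still needed (for example, for a system outside the cluster), one can be created manually as a Secret of the service-account-token type; the control plane then populates it. A minimal sketch, assuming a ServiceAccount named yinzhengjie already exists in the default namespace (the Secret name is hypothetical):

```yaml
# Hypothetical manifest: manually create a long-lived token Secret for an
# existing ServiceAccount; the token field is filled in by the control plane.
apiVersion: v1
kind: Secret
metadata:
  name: yinzhengjie-token            # hypothetical name
  namespace: default
  annotations:
    kubernetes.io/service-account.name: yinzhengjie
type: kubernetes.io/service-account-token
```

Such tokens do not expire and should be treated with the same care as the pre-v1.21 permanent tokens described above.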
4. Create an sa and have a Pod reference it
[root@master231 pods]# cat 26-pods-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: yinzhengjie
---
apiVersion: v1
kind: Pod
metadata:
name: oldboyedu-pods-sa
spec:
serviceAccountName: yinzhengjie
containers:
- name: c1
image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
5. Verify that the Pod authenticates with the sa
[root@master231 auth]# kubectl exec -it oldboyedu-pods-sa -- sh
/ # ls -l /var/run/secrets/kubernetes.io/serviceaccount
total 0
lrwxrwxrwx 1 root root 13 Feb 23 04:13 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Feb 23 04:13 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Feb 23 04:13 token -> ..data/token
/ #
/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ #
/ # curl -k -H "Authorization: Bearer ${TOKEN}" https://kubernetes
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:serviceaccount:default:yinzhengjie\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}/ #
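The token mounted above is in JWT format, so its claims can be inspected by decoding the middle (payload) segment. A self-contained sketch with a fabricated sample payload; inside the Pod you would use the real token from /var/run/secrets/kubernetes.io/serviceaccount/token instead (real tokens are base64url-encoded and may need padding added before `base64 -d` accepts them):

```shell
# Build a fabricated JWT-shaped token for illustration (NOT a real token).
SAMPLE_PAYLOAD=$(printf '{"sub":"system:serviceaccount:default:yinzhengjie"}' | base64 | tr -d '\n')
SAMPLE_TOKEN="eyJhbGciOiJSUzI1NiJ9.${SAMPLE_PAYLOAD}.signature"
# Extract the second dot-separated segment and decode it.
DECODED=$(printf '%s' "$SAMPLE_TOKEN" | cut -d. -f2 | base64 -d)
echo "$DECODED"
```

The `sub` claim carries the identity shown in the 403 message above: system:serviceaccount:&lt;namespace&gt;:&lt;sa-name&gt;.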
6. Example: a Pod referencing its ServiceAccount via a projected volume (optional)
As shown in the figure, in Kubernetes v1.21+ a Pod loads the three pieces of data above through the projected volume plugin, using three separate sources:
serviceAccountToken:
  Provides the token that the kubelet requests from the TokenRequest API.
configMap:
  References the Kubernetes CA certificate through the ca.crt key of the kube-root-ca.crt ConfigMap object.
downwardAPI:
  Uses fieldRef to obtain the namespace the current Pod runs in.
Hands-on example:
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-v1 1/1 Running 0 39m 10.100.203.145 worker232 <none> <none>
xiuxian-v2 1/1 Running 0 39m 10.100.140.83 worker233 <none> <none>
[root@master231 ~]#
[root@master231 ~]# kubectl get pods xiuxian-v1 -o yaml
apiVersion: v1
kind: Pod
metadata:
...
name: xiuxian-v1
namespace: default
...
spec:
containers:
- image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
...
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-v929g
readOnly: true
...
serviceAccount: default
serviceAccountName: default
...
volumes:
- name: kube-api-access-v929g
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
...
[root@master231 ~]#
VIII. Enabling authorization modes
Authorization modes are set on the kube-apiserver with the "--authorization-mode" option; multiple modules are separated by commas.
As shown in the figure above, kubeadm-deployed clusters enable Node and RBAC by default.
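For reference, a minimal excerpt of how this typically looks in the kubeadm static Pod manifest; the path and surrounding fields are assumptions to be verified against your own /etc/kubernetes/manifests/kube-apiserver.yaml:

```yaml
# Abridged excerpt (hypothetical surrounding fields) of a kubeadm
# control plane's /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC   # comma-separated list of authorizers
    # ... other flags omitted ...
```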
The authorization framework in the API Server, together with the enabled authorization modules, performs authorization.
Supported authorization modules:
- Node:
  A special-purpose authorizer that authorizes kubelet requests based on the Pods the kubelet is scheduled to run.
- ABAC:
  Grants access to users through policies that combine attributes (resource attributes, user attributes, object and environment attributes, etc.).
- RBAC:
  An authorization method that manages access to computing or network resources based on the roles of individual users within an organization.
- Webhook:
  Used to integrate with authorization mechanisms external to Kubernetes.
Two other special-purpose modules are AlwaysDeny and AlwaysAllow.
Reference:
  https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/
IX. Core RBAC concepts illustrated
- Entity:
  Also called a Subject in RBAC; usually a User, Group, or ServiceAccount.
- Role:
  A container that carries a set of resource operation permissions.
- Resource:
  Also called an Object in RBAC; the target a Subject wants to operate on, e.g. Service, Deployment, ConfigMap, Secret, Pod, and other resources.
  Limited to paths starting with "/api/v1/..." or "/apis/<group>/<version>/...";
  endpoints at any other path are treated as "Non-Resource Requests", e.g. "/api" or "/healthz".
- Actions:
  The specific operations a Subject may perform on an Object; the available actions are defined by Kubernetes.
  - Resource objects:
    - Read-only operations: get, list, watch, etc.
    - Read-write operations: create, update, patch, delete, deletecollection, etc.
  - Non-resource endpoints support only the "get" operation.
- Role Binding:
  Associates a role with an entity, granting the role's operation permissions to that entity.
- Role types:
  - Namespace level: called Role; defines a set of resource operation permissions within a namespace.
  - Namespace and cluster level: called ClusterRole; defines a set of resource operation permissions cluster-wide, covering both cluster-level and namespace-level resource objects.
- Role binding types:
  - Cluster level: called ClusterRoleBinding; associates an entity (User, Group, or ServiceAccount) with a ClusterRole.
  - Namespace level: called RoleBinding; associates an entity with a ClusterRole or a Role.
    Even when a Subject is bound to a ClusterRole through a RoleBinding, the permissions granted to the Subject are scoped down to the RoleBinding's own namespace.
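The scope-downgrade rule above can be sketched as a manifest; the namespace "dev" and user "jasonyin" are illustrative assumptions:

```yaml
# Bind the built-in ClusterRole "view" to a user via a namespaced RoleBinding:
# the user gets read-only access in the "dev" namespace only, not cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jasonyin-view            # hypothetical name
  namespace: dev                 # permissions are confined to this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: jasonyin
```

This pattern lets one ClusterRole definition be reused as a per-namespace permission set instead of duplicating a Role in every namespace.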
X. ClusterRole
When the RBAC authorization module is enabled, the API Server automatically creates a set of ClusterRole and ClusterRoleBinding objects.
Most are prefixed with "system:", but a few user-facing ClusterRoles are not, such as cluster-admin and admin.
They all carry the "kubernetes.io/bootstrapping: rbac-defaults" label by default.
The default ClusterRoles fall roughly into five categories:
- API discovery roles:
  system:basic-user, system:discovery, and system:public-info-viewer.
- User-facing roles:
  cluster-admin, admin, edit, and view.
- Core-component roles:
  system:kube-scheduler, system:volume-scheduler, system:kube-controller-manager, system:node, system:node-proxier, etc.
- Other-component roles:
  system:kube-dns, system:node-bootstrapper, system:node-problem-detector, system:monitoring, etc.
- Built-in-controller roles:
  Roles dedicated to the built-in controllers; see the official documentation for details.
十一、K8S内置的面向用户的集群角色
- cluster-admin:
- 允许用户在目标范围内的任意资源上执行任意操作;使用ClusterRoleBinding关联至用户时,授权操作集群及所有名称空间中任何资源;使用RoleBinding关联至用户时,授权控制其所属名称空间中的所有资源,包括Namespace资源自身,隶属于"system:masters 组"。
- admin
- 管理员权限,主要用于结合RoleBinding为特定名称空间快速授权生成管理员用户,它能够将RoleBinding所属名称空间中的大多数资源的读/写权限授予目标用户,包括创建Role和RoleBinding的能力;但不支持对ResourceQuota及Namespace本身进行操作;
- edit:
- 接近于admin的权限,支持对名称空间内的大多数对象进行读/写操作,包括Secret,但不允许查看或修改Role及RoleBinding;
- view:
- 允许以只读方式访问名称空间中的大多数对象,但不包括Role、RoleBinding和Secret;
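To grant one of these roles cluster-wide to an entire group rather than a single user, a ClusterRoleBinding can reference a Group subject. A minimal sketch; the group name "k8s-readers" is an assumption (for X.509 users, the group comes from the certificate's O field, as noted earlier):

```yaml
# Hypothetical ClusterRoleBinding: give every member of group "k8s-readers"
# read-only ("view") access across all namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-readers-view         # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: k8s-readers              # matched against the client cert's O field
```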
XII. Hands-on: granting a Role to a User
1. Prepare the environment
[root@master231 scheduler]# pwd
/oldboyedu/manifests/scheduler
[root@master231 scheduler]#
[root@master231 scheduler]# ll
total 32
drwxr-xr-x 2 root root 4096 Apr 10 19:49 ./
drwxr-xr-x 5 root root 4096 Apr 10 15:29 ../
-rw-r--r-- 1 root root 428 Apr 10 10:30 01-deploy-nodeSelector.yaml
-rw-r--r-- 1 root root 1045 Apr 10 11:19 02-deploy-nodeSelector-tolerations.yaml
-rw-r--r-- 1 root root 515 Apr 10 11:42 03-deploy-resources.yaml
-rw-r--r-- 1 root root 752 Apr 10 19:10 04-deploy-nodeAffinity.yaml
-rw-r--r-- 1 root root 645 Apr 10 19:35 05-deploy-podAffinity.yaml
-rw-r--r-- 1 root root 654 Apr 10 19:49 06-deploy-podAntiAffinity.yaml
[root@master231 scheduler]#
[root@master231 scheduler]# kubectl apply -f 01-deploy-nodeSelector.yaml
deployment.apps/scheduler-nodeselector created
[root@master231 scheduler]#
[root@master231 scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
scheduler-nodeselector-774bf9875f-4mrjp 1/1 Running 0 4s 10.100.203.144 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-6hwzv 1/1 Running 0 4s 10.100.203.143 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-8mfvr 1/1 Running 0 4s 10.100.203.146 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-ftdrt 1/1 Running 0 4s 10.100.203.141 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-r5ff6 1/1 Running 0 4s 10.100.203.140 worker232 <none> <none>
2. Test before granting authorization
[root@worker233 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf
Error from server (Forbidden): pods is forbidden: User "jiege" cannot list resource "pods" in API group "" in the namespace "default"
3. Create the Role
[root@master231 ~]# kubectl create role reader --resource=po,svc --verb=get,watch,list -o yaml --dry-run=client # output can be saved for declarative management later
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: reader
rules:
- apiGroups:
- ""
resources:
- pods
- services
verbs:
- get
- watch
- list
[root@master231 ~]#
[root@master231 ~]# kubectl create role reader --resource=po,svc --verb=get,watch,list # imperative creation
role.rbac.authorization.k8s.io/reader created
[root@master231 ~]#
[root@master231 ~]# kubectl get role reader
NAME CREATED AT
reader 2025-04-14T07:16:25Z
4. Create the RoleBinding
[root@master231 ~]# kubectl create rolebinding jiege-as-reader --user=jiege --role=reader -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: jiege-as-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: reader
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: jiege
[root@master231 ~]#
[root@master231 ~]# kubectl create rolebinding jiege-as-reader --user=jiege --role=reader
rolebinding.rbac.authorization.k8s.io/jiege-as-reader created
[root@master231 ~]#
[root@master231 ~]# kubectl get rolebindings jiege-as-reader
NAME ROLE AGE
jiege-as-reader Role/reader 13s
5. Verify again after authorization
[root@worker233 ~]# kubectl get pods,svc -o wide --kubeconfig=./yinzhengjie-k8s.conf
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/scheduler-nodeselector-774bf9875f-4mrjp 1/1 Running 0 46s 10.100.203.144 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-6hwzv 1/1 Running 0 46s 10.100.203.143 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-8mfvr 1/1 Running 0 46s 10.100.203.146 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-ftdrt 1/1 Running 0 46s 10.100.203.141 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-r5ff6 1/1 Running 0 46s 10.100.203.140 worker232 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 7d4h <none>
service/yiliao LoadBalancer 10.200.160.162 10.0.0.150 80:30020/TCP 3d3h app=yiliao
[root@worker233 ~]#
[root@worker233 ~]#
[root@worker233 ~]# kubectl get deploy --kubeconfig=./yinzhengjie-k8s.conf
Error from server (Forbidden): deployments.apps is forbidden: User "jiege" cannot list resource "deployments" in API group "apps" in the namespace "default"
[root@worker233 ~]#
6. Modify the permissions
Option 1 (imperative):
[root@master231 ~]# kubectl create role reader --resource=po,svc,deploy --verb=get,watch,list -o yaml --dry-run=client | kubectl apply -f -
Warning: resource roles/reader is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
role.rbac.authorization.k8s.io/reader configured
[root@master231 ~]#
Option 2 (declarative):
[root@master231 ~]# kubectl create role reader --resource=po,svc,deploy --verb=get,watch,list -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: reader
rules:
- apiGroups:
- ""
resources:
- pods
- services
verbs:
- get
- watch
- list
- apiGroups:
- apps
resources:
- deployments
verbs:
- get
- watch
- list
[root@master231 ~]#
[root@master231 ~]# kubectl create role reader --resource=po,svc,deploy --verb=get,watch,list -o yaml --dry-run=client > 01-Role-jiege.yaml
[root@master231 ~]#
[root@master231 ~]# kubectl apply -f 01-Role-jiege.yaml
role.rbac.authorization.k8s.io/reader configured
7. Verify
[root@worker233 ~]# kubectl get deploy --kubeconfig=./yinzhengjie-k8s.conf -n default
NAME READY UP-TO-DATE AVAILABLE AGE
scheduler-nodeselector 5/5 5 5 6m44s
[root@worker233 ~]#
[root@worker233 ~]# kubectl get deploy --kubeconfig=./yinzhengjie-k8s.conf
NAME READY UP-TO-DATE AVAILABLE AGE
scheduler-nodeselector 5/5 5 5 6m45s
[root@worker233 ~]#
[root@worker233 ~]# kubectl get deploy --kubeconfig=./yinzhengjie-k8s.conf -n kube-system
Error from server (Forbidden): deployments.apps is forbidden: User "jiege" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
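The pattern above — reads allowed in default but denied in kube-system — follows from Role semantics: a Role is namespace-scoped, so its grants never reach other namespaces. A minimal sketch of the matching logic in Python (a simplified model with hypothetical helpers, not the real authorizer; it ignores resourceNames, subresources, and nonResourceURLs):

```python
# Simplified sketch of RBAC rule matching for a namespaced Role.
# Hypothetical model: a Role only grants access inside its own namespace.

def rule_allows(rule, api_group, resource, verb):
    """Check one PolicyRule; '*' acts as a wildcard."""
    return (
        (api_group in rule["apiGroups"] or "*" in rule["apiGroups"])
        and (resource in rule["resources"] or "*" in rule["resources"])
        and (verb in rule["verbs"] or "*" in rule["verbs"])
    )

def role_allows(role, role_namespace, api_group, resource, verb, namespace):
    # A Role is namespace-scoped: requests outside its namespace are denied.
    if namespace != role_namespace:
        return False
    return any(rule_allows(r, api_group, resource, verb) for r in role["rules"])

# Rules equivalent to the "reader" Role created above.
reader = {"rules": [
    {"apiGroups": [""], "resources": ["pods", "services"],
     "verbs": ["get", "watch", "list"]},
    {"apiGroups": ["apps"], "resources": ["deployments"],
     "verbs": ["get", "watch", "list"]},
]}

print(role_allows(reader, "default", "apps", "deployments", "list", "default"))      # True
print(role_allows(reader, "default", "apps", "deployments", "list", "kube-system"))  # False
```

The same model explains the first Forbidden error in this section: before `deployments` was added to the rules, no rule matched the apps API group at all.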
XIII. Binding a ClusterRole to a user group
1. Test before granting access
[root@worker232 ~]# cat yinzhengjie-k8s.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jasonyin
name: jasonyin@myk8s
- context:
cluster: myk8s
user: yinzhengjie
name: yinzhengjie@myk8s
current-context: yinzhengjie@myk8s
kind: Config
preferences: {}
users:
- name: jasonyin
user:
token: 497804.9fc391f505052952
- name: yinzhengjie
user:
token: 01b202.d5c4210389cbff08
[root@worker232 ~]#
[root@worker232 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf
Error from server (Forbidden): pods is forbidden: User "yinzhengjie" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker232 ~]#
[root@worker232 ~]# kubectl get pods --kubeconfig=./yinzhengjie-k8s.conf --context=jasonyin@myk8s
Error from server (Forbidden): pods is forbidden: User "jasonyin" cannot list resource "pods" in API group "" in the namespace "default"
[root@worker232 ~]#
2. Create the ClusterRole
[root@master231 manifests]# kubectl create clusterrole reader --resource=deploy,rs,pods --verb=get,watch,list -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: reader
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- watch
- list
- apiGroups:
- apps
resources:
- deployments
- replicasets
verbs:
- get
- watch
- list
[root@master231 manifests]#
[root@master231 manifests]# kubectl create clusterrole reader --resource=deploy,rs,pods --verb=get,watch,list
clusterrole.rbac.authorization.k8s.io/reader created
[root@master231 manifests]#
[root@master231 manifests]# kubectl get clusterrole reader
NAME CREATED AT
reader 2025-04-14T07:44:31Z
[root@master231 manifests]#
3. Bind the ClusterRole to the k8s group
[root@master231 ~]# cat /etc/kubernetes/pki/token.csv
01b202.d5c4210389cbff08,yinzhengjie,10001,k8s
497804.9fc391f505052952,jasonyin,10002,k8s
8fd32c.0868709b9e5786a8,linux96,10003,k3s
jvt496.ls43vufojf45q73i,linux97,10004,k3s
qo7azt.y27gu4idn5cunudd,linux98,10005,k3s
mic1bd.mx3vohsg05bjk5rr,linux99,10006,k3s
[root@master231 ~]#
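Each line of the static token file is `token,user,uid,groups`, where the last field may be a quoted, comma-separated group list. A sketch of how such lines map tokens to identities (hypothetical helper using only the stdlib csv module):

```python
import csv
from io import StringIO

# Two example lines in the static token file format: token,user,uid,"group1,group2"
token_file = """\
01b202.d5c4210389cbff08,yinzhengjie,10001,k8s
8fd32c.0868709b9e5786a8,linux96,10003,k3s
"""

def load_tokens(text):
    """Map each bearer token to the user name and group list it authenticates as."""
    identities = {}
    for row in csv.reader(StringIO(text)):
        token, user, uid = row[0], row[1], row[2]
        # The optional 4th field may hold several groups, e.g. "k8s,dev".
        groups = row[3].split(",") if len(row) > 3 else []
        identities[token] = {"user": user, "uid": uid, "groups": groups}
    return identities

ids = load_tokens(token_file)
print(ids["01b202.d5c4210389cbff08"])  # {'user': 'yinzhengjie', 'uid': '10001', 'groups': ['k8s']}
```

This is why the group a token carries (k8s vs k3s below) decides which bindings apply to it.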
[root@master231 ~]# kubectl create clusterrolebinding k8s-as-reader --clusterrole=reader --group=k8s -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: k8s-as-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: reader
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: k8s
[root@master231 ~]#
[root@master231 ~]# kubectl create clusterrolebinding k8s-as-reader --clusterrole=reader --group=k8s
clusterrolebinding.rbac.authorization.k8s.io/k8s-as-reader created
[root@master231 ~]#
[root@master231 ~]# kubectl get clusterrolebindings k8s-as-reader
NAME ROLE AGE
k8s-as-reader ClusterRole/reader 10s
4. Test via kubeconfig
[root@worker232 ~]# kubectl get deploy,rs,pod -o wide --kubeconfig=./yinzhengjie-k8s.conf --context=jasonyin@myk8s
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/scheduler-nodeselector 5/5 5 5 29m c1 harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v1 apps=xiuxian
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/scheduler-nodeselector-774bf9875f 5 5 5 29m c1 harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v1 apps=xiuxian,pod-template-hash=774bf9875f
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/scheduler-nodeselector-774bf9875f-4mrjp 1/1 Running 0 29m 10.100.203.144 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-6hwzv 1/1 Running 0 29m 10.100.203.143 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-8mfvr 1/1 Running 0 29m 10.100.203.146 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-ftdrt 1/1 Running 0 29m 10.100.203.141 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-r5ff6 1/1 Running 0 29m 10.100.203.140 worker232 <none> <none>
[root@worker232 ~]#
[root@worker232 ~]# kubectl get deploy,rs,pod -o wide --kubeconfig=./yinzhengjie-k8s.conf
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/scheduler-nodeselector 5/5 5 5 29m c1 harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v1 apps=xiuxian
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/scheduler-nodeselector-774bf9875f 5 5 5 29m c1 harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v1 apps=xiuxian,pod-template-hash=774bf9875f
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/scheduler-nodeselector-774bf9875f-4mrjp 1/1 Running 0 29m 10.100.203.144 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-6hwzv 1/1 Running 0 29m 10.100.203.143 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-8mfvr 1/1 Running 0 29m 10.100.203.146 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-ftdrt 1/1 Running 0 29m 10.100.203.141 worker232 <none> <none>
pod/scheduler-nodeselector-774bf9875f-r5ff6 1/1 Running 0 29m 10.100.203.140 worker232 <none> <none>
[root@worker232 ~]#
5. Test via token
[root@worker232 ~]# kubectl --server=https://10.0.0.231:6443 --token=01b202.d5c4210389cbff08 --certificate-authority=/etc/kubernetes/pki/ca.crt get deploy,rs,po -o wide -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
calico-apiserver deployment.apps/calico-apiserver 2/2 2 2 7d4h calico-apiserver docker.io/calico/apiserver:v3.25.2 apiserver=true
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 7d4h calico-kube-controllers docker.io/calico/kube-controllers:v3.25.2 k8s-app=calico-kube-controllers
calico-system deployment.apps/calico-typha 2/2 2 2 7d4h calico-typha docker.io/calico/typha:v3.25.2 k8s-app=calico-typha
default deployment.apps/scheduler-nodeselector 5/5 5 5 31m c1 harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v1 apps=xiuxian
....
metallb-system pod/speaker-srvw8 1/1 Running 1 (7h17m ago) 4d6h 10.0.0.231 master231 <none> <none>
metallb-system pod/speaker-tgwql 1/1 Running 1 (7h17m ago) 4d4h 10.0.0.232 worker232 <none> <none>
metallb-system pod/speaker-zpn5c 1/1 Running 1 (7h17m ago) 4d1h 10.0.0.233 worker233 <none> <none>
tigera-operator pod/tigera-operator-8d497bb9f-bcj5s 1/1 Running 4 (5h44m ago) 4d3h 10.0.0.232 worker232 <none> <none>
[root@worker232 ~]#
[root@worker232 ~]#
[root@worker232 ~]# kubectl --server=https://10.0.0.231:6443 --token=8fd32c.0868709b9e5786a8 --certificate-authority=/etc/kubernetes/pki/ca.crt get deploy,rs,po -o wide -A # As expected: linux96 belongs to the k3s group, not the k8s group, so access is denied!
Error from server (Forbidden): deployments.apps is forbidden: User "linux96" cannot list resource "deployments" in API group "apps" at the cluster scope
Error from server (Forbidden): replicasets.apps is forbidden: User "linux96" cannot list resource "replicasets" in API group "apps" at the cluster scope
Error from server (Forbidden): pods is forbidden: User "linux96" cannot list resource "pods" in API group "" at the cluster scope
[root@worker232 ~]#
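linux96 is rejected because a (Cluster)RoleBinding only applies when one of its subjects matches the requesting user's name or one of the user's groups. A toy sketch of that subject matching (hypothetical helper; ServiceAccount subjects are matched differently and are omitted here):

```python
def binding_applies(binding, user, groups):
    """Does any subject in the binding match this user or one of its groups?"""
    for s in binding["subjects"]:
        if s["kind"] == "User" and s["name"] == user:
            return True
        if s["kind"] == "Group" and s["name"] in groups:
            return True
    return False

# Subjects equivalent to the "k8s-as-reader" ClusterRoleBinding created above.
k8s_as_reader = {"subjects": [{"kind": "Group", "name": "k8s"}]}

print(binding_applies(k8s_as_reader, "yinzhengjie", ["k8s"]))  # True
print(binding_applies(k8s_as_reader, "linux96", ["k3s"]))      # False
```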
6. Update the permissions
[root@worker232 ~]# kubectl --kubeconfig=./yinzhengjie-k8s.conf delete pod/scheduler-nodeselector-774bf9875f-4mrjp # read permission only; delete is not allowed yet
Error from server (Forbidden): pods "scheduler-nodeselector-774bf9875f-4mrjp" is forbidden: User "yinzhengjie" cannot delete resource "pods" in API group "" in the namespace "default"
[root@worker232 ~]#
[root@master231 ~]# kubectl create clusterrole reader --resource=deploy,rs,pods --verb=get,watch,list,delete -o yaml --dry-run=client | kubectl apply -f -
Warning: resource clusterroles/reader is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/reader configured
7. Verify that the delete permission took effect
[root@worker232 ~]# kubectl get pod -o wide --kubeconfig=./yinzhengjie-k8s.conf
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
scheduler-nodeselector-774bf9875f-4mrjp 1/1 Running 0 35m 10.100.203.144 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-6hwzv 1/1 Running 0 35m 10.100.203.143 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-8mfvr 1/1 Running 0 35m 10.100.203.146 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-ftdrt 1/1 Running 0 35m 10.100.203.141 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-r5ff6 1/1 Running 0 35m 10.100.203.140 worker232 <none> <none>
[root@worker232 ~]#
[root@worker232 ~]#
[root@worker232 ~]# kubectl --kubeconfig=./yinzhengjie-k8s.conf delete pods --all
pod "scheduler-nodeselector-774bf9875f-4mrjp" deleted
pod "scheduler-nodeselector-774bf9875f-6hwzv" deleted
pod "scheduler-nodeselector-774bf9875f-8mfvr" deleted
pod "scheduler-nodeselector-774bf9875f-ftdrt" deleted
pod "scheduler-nodeselector-774bf9875f-r5ff6" deleted
[root@worker232 ~]#
[root@worker232 ~]# kubectl get pod -o wide --kubeconfig=./yinzhengjie-k8s.conf
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
scheduler-nodeselector-774bf9875f-cklnv 1/1 Running 0 10s 10.100.203.139 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-h7bmb 1/1 Running 0 10s 10.100.203.136 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-hdv7b 1/1 Running 0 10s 10.100.203.131 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-hqqt4 1/1 Running 0 10s 10.100.203.147 worker232 <none> <none>
scheduler-nodeselector-774bf9875f-sbkg9 1/1 Running 0 10s 10.100.203.133 worker232 <none> <none>
[root@worker232 ~]#
XIV. Binding a ClusterRole to a ServiceAccount
1. Write the manifest
[root@master231 sa]# cat > oldboyedu-sa-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: dezyan
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: xiuxian
spec:
replicas: 1
selector:
matchLabels:
app: xiuxian
template:
metadata:
labels:
app: xiuxian
spec:
serviceAccountName: dezyan
containers:
- image: python:3.9.16-alpine3.16
command:
- tail
- -f
- /etc/hosts
name: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: reader-dezyan
rules:
- apiGroups:
- ""
resources:
- pods
- services
verbs:
- get
- watch
- list
- delete
- apiGroups:
- apps
resources:
- deployments
verbs:
- get
- watch
- list
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: reader-dezyan-bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: reader-dezyan
subjects:
- kind: ServiceAccount
name: dezyan
namespace: default
EOF
2. Create the resources
Image archive (load it if the image cannot be pulled):
wget http://192.168.16.253/Resources/Kubernetes/RBAC/python-v3.9.16.tar.gz
docker load -i python-v3.9.16.tar.gz
[root@master231 sa]# kubectl apply -f oldboyedu-sa-rbac.yaml
serviceaccount/dezyan created
deployment.apps/xiuxian created
clusterrole.rbac.authorization.k8s.io/reader-dezyan created
clusterrolebinding.rbac.authorization.k8s.io/reader-dezyan-bind created
[root@master231 sa]#
[root@master231 sa]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-6dffdd86b-m8f2h 1/1 Running 0 2m3s 10.100.140.78 worker233 <none> <none>
[root@master231 sa]#
3. Install the Python client library
[root@master231 sa]# kubectl exec -it xiuxian-6dffdd86b-m8f2h -- sh
/ #
/ # pip install kubernetes -i https://pypi.tuna.tsinghua.edu.cn/simple/
...
Successfully installed cachetools-5.5.2 certifi-2025.1.31 charset-normalizer-3.4.1 durationpy-0.9 google-auth-2.38.0 idna-3.10 kubernetes-32.0.1 oauthlib-3.2.2 pyasn1-0.6.1 pyasn1-modules-0.4.2 python-dateutil-2.9.0.post0 pyyaml-6.0.2 requests-2.32.3 requests-oauthlib-2.0.0 rsa-4.9 six-1.17.0 urllib3-2.4.0 websocket-client-1.8.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 22.0.4; however, version 25.0.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
/ #
4. Write the Python script
/ # cat > view-k8s-resources.py <<EOF
from kubernetes import client, config

with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f:
    token = f.read()

configuration = client.Configuration()
configuration.host = "https://kubernetes"  # API server address (in-cluster service name)
configuration.ssl_ca_cert = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"  # CA certificate
configuration.verify_ssl = True  # enable TLS certificate verification
configuration.api_key = {"authorization": "Bearer " + token}  # bearer token string
client.Configuration.set_default(configuration)

apps_api = client.AppsV1Api()
core_api = client.CoreV1Api()

try:
    print("###### Deployment list ######")
    # list the names of all Deployments in the default namespace
    for dp in apps_api.list_namespaced_deployment("default").items:
        print(dp.metadata.name)
except Exception:
    print("No permission to access Deployment resources!")

try:
    # list the names of all Pods in the default namespace
    print("###### Pod list ######")
    for po in core_api.list_namespaced_pod("default").items:
        print(po.metadata.name)
except Exception:
    print("No permission to access Pod resources!")
EOF
5. Run the Python script
/ # python3 view-k8s-resources.py
###### Deployment list ######
xiuxian
###### Pod list ######
xiuxian-6dffdd86b-m8f2h
/ #
6. Update the permissions
[root@master231 scheduler]# kubectl edit clusterrole reader-dezyan
...
15 rules:
16 - apiGroups:
17 - ""
18 resources: # note: the "pods" resource has been removed here, so Pod access is revoked
19 - services
20 verbs:
21 - get
22 - watch
23 - list
24 - delete
25 - apiGroups:
26 - apps
27 resources:
28 - deployments
29 verbs:
30 - get
31 - watch
32 - list
33 - delete
7. Test again
/ # python3 view-k8s-resources.py
###### Deployment list ######
xiuxian
###### Pod list ######
No permission to access Pod resources!
/ #
XV. Overview of the four built-in user-facing cluster roles
- cluster-admin:
- Allows any operation on any resource within the target scope. Bound via ClusterRoleBinding, it grants control over the cluster and every resource in all namespaces; bound via RoleBinding, it grants control over all resources in the binding's namespace, including the Namespace object itself. The "system:masters" group holds this role.
- admin:
- Administrator permissions, mainly used with a RoleBinding to quickly create an administrator for a specific namespace. It grants read/write access to most resources in that namespace, including the ability to create Roles and RoleBindings, but does not allow operating on ResourceQuota or the Namespace itself.
- edit:
- Close to admin: read/write access to most objects in the namespace, including Secrets, but it may not view or modify Roles and RoleBindings.
- view:
- Read-only access to most objects in the namespace, excluding Roles, RoleBindings, and Secrets.
[root@master231 ~]# kubectl get clusterrole cluster-admin admin edit view
NAME CREATED AT
cluster-admin 2025-04-07T03:00:12Z
admin 2025-04-07T03:00:12Z
edit 2025-04-07T03:00:12Z
view 2025-04-07T03:00:12Z
[root@master231 ~]#
[root@master231 ~]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: "2025-04-07T03:00:12Z"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "87"
uid: 5fe34722-b36a-4008-bc80-927cde3096bc
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
[root@master231 ~]#
[root@master231 ~]# kubectl get clusterrole cluster-admin -o yaml | wc -l
22
[root@master231 ~]#
[root@master231 ~]# kubectl get clusterrole admin -o yaml | wc -l
290
[root@master231 ~]#
[root@master231 ~]# kubectl get clusterrole edit -o yaml | wc -l
271
[root@master231 ~]#
[root@master231 ~]# kubectl get clusterrole view -o yaml | wc -l
143
[root@master231 ~]#
XVI. kubeconfig loading precedence
1. Using "--kubeconfig"
[root@worker232 ~]# kubectl --kubeconfig=./yinzhengjie-k8s.conf get pods,deploy,rs -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/xiuxian-6dffdd86b-m8f2h 1/1 Running 0 45m 10.100.140.78 worker233 <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/xiuxian 1/1 1 1 45m apps python:3.9.16-alpine3.16 app=xiuxian
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/xiuxian-6dffdd86b 1 1 1 45m apps python:3.9.16-alpine3.16 app=xiuxian,pod-template-hash=6dffdd86b
2. Using the KUBECONFIG environment variable
[root@worker232 ~]# kubectl get pods,deploy,rs -o wide
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@worker232 ~]#
[root@worker232 ~]# export KUBECONFIG=/root/yinzhengjie-k8s.conf
[root@worker232 ~]#
[root@worker232 ~]# ll /root/yinzhengjie-k8s.conf
-rw------- 1 root root 1941 Apr 14 11:33 /root/yinzhengjie-k8s.conf
[root@worker232 ~]#
[root@worker232 ~]# kubectl get pods,deploy,rs -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/xiuxian-6dffdd86b-m8f2h 1/1 Running 0 46m 10.100.140.78 worker233 <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/xiuxian 1/1 1 1 46m apps python:3.9.16-alpine3.16 app=xiuxian
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/xiuxian-6dffdd86b 1 1 1 46m apps python:3.9.16-alpine3.16 app=xiuxian,pod-template-hash=6dffdd86b
3. Loading the default path "~/.kube/config"
[root@worker232 ~]# unset KUBECONFIG
[root@worker232 ~]#
[root@worker232 ~]# kubectl get pods,deploy,rs -o wide
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@worker232 ~]#
[root@worker232 ~]# mv /root/yinzhengjie-k8s.conf /root/.kube/config
[root@worker232 ~]#
[root@worker232 ~]# kubectl get pods,deploy,rs -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/xiuxian-6dffdd86b-m8f2h 1/1 Running 0 48m 10.100.140.78 worker233 <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/xiuxian 1/1 1 1 48m apps python:3.9.16-alpine3.16 app=xiuxian
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/xiuxian-6dffdd86b 1 1 1 48m apps python:3.9.16-alpine3.16 app=xiuxian,pod-template-hash=6dffdd86b
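The three cases above follow a fixed precedence: an explicit `--kubeconfig` flag wins over the `KUBECONFIG` environment variable, which wins over the default `~/.kube/config`. A sketch of that resolution order (hypothetical helper, not kubectl's actual code; real kubectl additionally merges a colon-separated list of paths in `KUBECONFIG`):

```python
import os

def resolve_kubeconfig(flag_value=None, env=None, home="/root"):
    """Return the kubeconfig path kubectl would use, by precedence."""
    env = os.environ if env is None else env
    if flag_value:                      # 1. explicit --kubeconfig flag
        return flag_value
    if env.get("KUBECONFIG"):           # 2. KUBECONFIG environment variable
        return env["KUBECONFIG"]
    return os.path.join(home, ".kube", "config")  # 3. default path

print(resolve_kubeconfig("/tmp/jiege.kubeconfig", {"KUBECONFIG": "/tmp/a"}))  # /tmp/jiege.kubeconfig
print(resolve_kubeconfig(None, {"KUBECONFIG": "/tmp/a"}))                     # /tmp/a
print(resolve_kubeconfig(None, {}))                                           # /root/.kube/config
```

The next section verifies exactly this order against a live cluster.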
XVII. Verifying the kubeconfig loading precedence
1. Prepare the environment
[root@worker232 ~]# cat /tmp/jasonyin.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jasonyin
name: jasonyin@myk8s
current-context: jasonyin@myk8s
kind: Config
preferences: {}
users:
- name: jasonyin
user:
token: 497804.9fc391f505052952
[root@worker232 ~]#
[root@worker232 ~]# cat /tmp/yinzhengjie.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: yinzhengjie
name: yinzhengjie@myk8s
current-context: yinzhengjie@myk8s
kind: Config
preferences: {}
users:
- name: yinzhengjie
user:
token: 01b202.d5c4210389cbff08
[root@worker232 ~]#
[root@worker232 ~]# cat /tmp/jiege.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJMU1EUXdOekF6TURBd05Gb1hEVE0xTURRd05UQXpNREF3TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTl4Cmh0RHhVQVJsUGo0NlFEa1Rwd3dPWnJsN2d1bG5IUzRYN1Y1S1pFN3cyZVZRakJXUmpRMENnSzNjMFFBa3hoT1YKWXl4Y1pSbVg2U3FkRFZOWFBNQVZzSmNUeDd4VkRWNk9DYVQxSjRkZmcxVWNGTTNidXM5R3VMMzBITVBRYVEvaApyN2RrcnkxTUlLaVh3MUU5SkFSc05PMnhnamJBMHJEWlpIOXRRRlpwMlpUa1BNU1AzMG5WTWJvNWh3MHZLUGplCnoxNlB6Q3JwUjJIRkZrc0dXRmI3SnVobHlkWmpDaVQwOFJPY3N5ZERUTVFXZWZBdTNEcUJvMHpOSmtrcVovaVAKWkFFZ29DNXZ2MEg2N0Q4SEJxSzArRmUrZjJCaUs1SGNoYkF1WndwWjNkQ0pMTXVmU3FSWkNVVmFtTW56dWlaRApQTmVJbmdPSCtsMWZReTFad0pzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCRms1eStsM2RFMUhtT3lkSUYybDlDMDgvbk9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmxjZ0l1YUsxSVZydVBTVzk2SwpkTTZ6V294WmJlaVpqTWdpd2Q2R3lSL0JBdjI2QzB5V1piZjFHY3A4TlBISDJLdlhscTliUGpSODZSUkNpRFQ4Ci9VZGlTWVpQejByNnJrcTVCZ2x1Rk5XNlRTTXJyRndEVDlubVh0d0pZdzVQU29sS0JHQjIvaThaVTVwL3FkQUMKZ2Z3bU1sY3NPV3ZFUVV5bTVUYmZiWVU3NStxODJsNjY5ZGpGenh2VHFEWEIvZ0hoK1JvRXVaRTNSdjd5Slc1MwpMbkVhVWZSYjRCcmxGclFrKzlPRXZKMUF5UTE0LzcwTjlhVlJXZVZpTkxyQVdJTTNnajN1WmVHMk5yMXdic1ozCjM3VDF5MSs3TVlRcUpiUWRleUpyUVRyaGNjMXlRWTJIOEpaOXBqOERhNVVpSjlkQ1ZMeEtJSlFMeTV4b0RXaTgKL2hvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.0.0.231:6443
name: myk8s
contexts:
- context:
cluster: myk8s
user: jiege
name: jiege@myk8s
current-context: jiege@myk8s
kind: Config
preferences: {}
users:
- name: jiege
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDakNDQWZLZ0F3SUJBZ0lSQUtxMEY4YXlpUGlFMkdHUWtpYUN4ZWN3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBME1UUXdNalE0TWpoYUZ3MHlOVEEwTVRVdwpNalE0TWpoYU1DUXhFakFRQmdOVkJBb1RDVzlzWkdKdmVXVmtkVEVPTUF3R0ExVUVBeE1GYW1sbFoyVXdnZ0VpCk1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFRvc2doNmVpeU1CVklBNFVWaEpFSWllb0YKSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZERXTTR6TE1yQmxMQgpiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XMFJHM3Zxd1RvVnd5ClRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjF2VEpqMUllRTRmSjAKd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVEZ6eWVjSkl6aFUwcwpLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBYTdHdjBWWTlBZ01CCkFBR2pSakJFTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SHdZRFZSMGoKQkJnd0ZvQVVFV1RuTDZYZDBUVWVZN0owZ1hhWDBMVHorYzR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQURVbQpSVzRoRm83cjlreEszK1FuaENQL0lzVjNGZXltQkN5WUdUWVJoUlJOTCtEQldadlhTTUxuSkppNXRsZkFNSmNtCnY2MWN4MDY0cDRXM25TSG1aU04rODUySUR1alBwWjRXeTJ1VmIwVXR6MUtkM1RBVmJTNGdWTnVRMEgvaGs1aXEKSm9Zelh0WjdiQU4xSEgyQ3RjMUlpSGlNYzBHV1djcUtQQWtzZmNrTjR2Z2lYUDNZVTRFS1lJdXBtVWV4czBLbApoRXVHNUp3aGtLVStYWFZqNm1CWDdrNnBIT3Z3SG5lNEJDRW1sT2lIYnRXU3ZPd2poUTB1ZEJ6OEFKUWYxYVJjCkkyMW5oK2dCekpDdk5oOUpLVXpkemVMSFpld0g2dzB1YndJdEUvWDV3S3l6UmNwMUpweGZoZm1TZW00elRKbnMKS2JnV3pOUzYvUHp0ak90NWV4az0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFRvc2doNmVpeU1CVkkKQTRVVmhKRUlpZW9GSmRxeGNNRDlCVVFLRGs2WUZCQ2xZUE9sd0xPek8xNU1vNk1OZXEwTGIrOFBhQWdMQml4ZApEV000ekxNckJsTEJiL2x3SGkrV3Z5MnQvU1E4WU5MV09HYnhyUCtQVjJ3dUw4OWEyNHBwVk9teFFrdVExcC9XCjBSRzN2cXdUb1Z3eVRzTnlpa0VqZ0xLbXlLZWVVMWFNS3NldTV6TUNNajFYbldRNk5ZMHB3VzcxR0dxbnZ1MjEKdlRKajFJZUU0Zkowd29IMWNpL3ZsS0Y5bERvaUFPNkFJR2VRMEZPbGlNZWkwMHppVVY1aHVPQUZaOEt4eU1oVApGenllY0pJemhVMHNLaDdZdWZ4NVR4YUx6ZmNjdk5mUHFMZDQwbmdtUjFlMjQ0aitCclVJcGVDMkVYMHcwV3pBCmE3R3YwVlk5QWdNQkFBRUNnZ0VBTnI0TWRubENyNVN3YklnOGpHeFY5NWQwNlEvNW1aeEl6eW5saDVSYjBBcWcKbzZhSVgzK1ErL09IV051YStZbVo2VE55NnRGR0ExUDlkYlJZemdCazkrUVMwK1phNXgxbndkNkJ1bGVZWCtYTApvNDNEVXhBa3FyYzZURmdoa3FibkRvZmdTdkdUQ2t2NTNGOEg3amRyMjBnSnlSbUdoTUl1UnppcS9XazVza0h6CjFWQzRvdWl1Qk1yTStzcXhOWVNmYnJGK3pXV3R1QW05RzBkejVWRzdKSGRIOUEyMHFCeW5uNkF2VU5zempvdm8KYk9jVDVMenc5eGtOKzRjNnlXd3JWdzRRb3hCUWdUVi9Cd0l3bjlqZnB2eXRqaGp4bW9kVEoxcEJZT0ZMb0Q3WQp1YlVoVHdtL1Q1SmZXT0wyR09nZjNOempYeFlVS056WmhvMXJVMVEzSVFLQmdRRHVoV3NwQmdRY2dGZU9OWEhBCjdBQ1MrQldvWFZTeU1TMWdodUF1cFZmakVWL2ZHa3kvOVMyYkZMUVhPaFMrSCtuZUNlZHdUVzZKVC9rNitxYVkKbkVqaGpMenJsTWY3YUt1QkdFUnpZTmc0S2pUekdlOFViaURRRFE2MlRtMDk1eVhVN0lTSjJnS1Vad0RWY0ROUApVR3lBOWFEMHF4aGp1WkJOVFpwaG94MzhId0tCZ1FEakpRRGpscC9uRVFEMFpScm56WFJVSmc4ZTdFUGF6dVBlCkRSYUlrSjFCSzlTRjlBd0pid2hNNkVwRUxWbjNWSnpSZ2JVSENDdnhhbzB0WTFxaldaN1RocTFQb3I4aXQ1RUQKSlE4VG9UMzkrdDgwR0N4T1lZWC8zUUlHcThKa1lGSGtiekhJek9wK1B0UEJESXNIMkdXRWxKUVVrMWo1bG1pWAptdEorRVV4aUl3S0JnUUMwb2FkZ251UzRMTjJobllteS8wY0VCZ3Bvd1oxbGdPYUxaamthT2k4UGo5WFo0RkhsClFTaXplLzlTWTdMWHROVm9TSG5UeTEvOWJ1b2dwemRJOVhvZ0RYUDR1R2ltVlVNa2RadEpBVHRkZFdFNkJSYlEKa3dJWWJQc0tSdVJsNzhudnNOcENoeTVTOHBwb0NSdGlZbFo1Wndyb256WE9OL1kzQktENGRnNDhJd0tCZ0NzMwpYaHp2Q290WEE5eDc1QXVZWG5xb0p4WldFMjd0RUJPdVg4d3AzNUdIdWs2bUtTZ2VWUEQwL1RSTmdLRjdHcjhOCnM1aWI2R2h0UW1FUlZ5eGZIOFhWQ09KdTczaTJma09mNkdkdXRURythbnNwNGp3amQvQS9aMlJIaDV1N2E3bFAKb3FRMndLSzJaMm1DYm0xV3NiSHc1dCtuVFRWbmR
ZenFxd1BMWE1JTEFvR0FMK21ldGNiejlSSFN1d0NUTW5lRQo0dFFxanBqM3o1OUQwWFR0dytiRG4vYnRBalF1VHJqa2k2Ums2a0E2bG1YYXRKQ2Z3WnVNdTd5L0MyUThUS1hjCjVWcUt1cGNhdnpHTWkzeVJrcmlmSEhpb2V1NGpXNlQyYk1XcDRuUTRoV050cEx1blF5aXNCeGpOZEMzZzBONmEKb2M4eXBOL3ZUVHFGdVB6Q3l2VmxUWEU9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
[root@worker232 ~]#
[root@worker232 ~]# kubectl config get-contexts --kubeconfig=/tmp/yinzhengjie.kubeconfig
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* yinzhengjie@myk8s myk8s yinzhengjie
[root@worker232 ~]#
[root@worker232 ~]#
[root@worker232 ~]#
[root@worker232 ~]# kubectl config get-contexts --kubeconfig=/tmp/jasonyin.kubeconfig
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* jasonyin@myk8s myk8s jasonyin
[root@worker232 ~]#
[root@worker232 ~]#
[root@worker232 ~]# kubectl config get-contexts --kubeconfig=/tmp/jiege.kubeconfig
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* jiege@myk8s myk8s jiege
[root@worker232 ~]#
2. Verify the precedence
[root@worker232 ~]# mv /tmp/jiege.kubeconfig ~/.kube/config
[root@worker232 ~]#
[root@worker232 ~]# kubectl get rc
Error from server (Forbidden): replicationcontrollers is forbidden: User "jiege" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
[root@worker232 ~]#
[root@worker232 ~]# export KUBECONFIG=/tmp/jasonyin.kubeconfig
[root@worker232 ~]#
[root@worker232 ~]# kubectl get rc
Error from server (Forbidden): replicationcontrollers is forbidden: User "jasonyin" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
[root@worker232 ~]#
[root@worker232 ~]# kubectl get rc --kubeconfig=/tmp/yinzhengjie.kubeconfig
Error from server (Forbidden): replicationcontrollers is forbidden: User "yinzhengjie" cannot list resource "replicationcontrollers" in API group "" in the namespace "default"
[root@worker232 ~]#
XVIII. Verifying the default cluster role behind "/root/.kube/config"
1. Export the client certificate
[root@master231 ~]# kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > /opt/admin.crt
[root@master231 ~]#
[root@master231 ~]# ll /opt/admin.crt
-rw-r--r-- 1 root root 1147 Apr 14 17:06 /opt/admin.crt
[root@master231 ~]#
2. Inspect the certificate
[root@master231 ~]# openssl x509 -noout -text -in /opt/admin.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4731428108118432283 (0x41a968cbd0b29a1b)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Apr 7 03:00:04 2025 GMT
Not After : Apr 7 03:00:06 2026 GMT
Subject: O = system:masters, CN = kubernetes-admin # key field: 'O' is the group, 'CN' is the user name
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ae:43:a8:9c:14:9d:c4:71:5d:bd:13:6f:52:85:
66:01:3b:8c:79:3d:2e:3a:e4:e3:63:8c:8e:7f:2b:
fc:3c:89:43:78:3c:60:9a:ec:f2:f6:97:3e:2e:cb:
c2:11:0a:c5:0f:14:7f:5d:4f:68:fe:0d:70:e7:61:
2c:a0:8a:7f:07:b6:b0:f6:f8:ef:bb:76:9b:b8:f9:
04:fc:9b:25:a5:9f:c5:bf:52:ef:b2:17:4c:01:a8:
73:7c:5b:15:6f:54:12:d4:de:6a:af:60:20:f1:90:
33:c0:96:c8:f4:56:b1:1f:7d:7a:64:aa:62:30:57:
a6:9e:56:cb:51:45:e5:a0:fc:94:70:91:83:a5:d3:
24:aa:ed:0e:fc:bd:bf:95:cf:f9:3e:3e:89:41:e8:
24:41:25:bb:54:64:3c:34:6d:46:a1:2a:98:73:31:
e6:7c:46:2d:33:15:c0:b4:c4:8f:d7:81:06:24:17:
f4:8f:49:e5:c1:3c:00:3a:f2:41:05:e7:1f:06:d8:
48:35:8c:cc:c7:08:0c:73:52:07:58:9c:b6:49:3b:
76:1b:15:49:ae:66:17:24:93:2e:36:4e:89:7f:28:
aa:44:80:37:dc:f0:80:36:ef:c2:a6:d9:96:68:53:
a2:95:eb:84:41:79:30:fd:48:fb:1b:4f:03:9d:78:
1f:1b
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
11:64:E7:2F:A5:DD:D1:35:1E:63:B2:74:81:76:97:D0:B4:F3:F9:CE
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
46:90:b5:79:3c:91:f3:9b:26:7e:c2:10:8f:7c:e0:b0:e0:aa:
02:51:ea:9d:3a:25:07:a9:b4:33:15:7b:50:b7:81:55:f8:7a:
87:b9:2a:ca:a3:ec:cd:9f:ac:df:9d:2b:ab:8c:eb:b3:6e:82:
ff:52:0f:a7:e6:70:d0:7a:15:a2:cb:dc:34:91:68:d6:6d:1c:
8a:24:52:47:98:c8:bb:80:93:8b:07:6d:89:f4:50:1a:f2:61:
7c:04:0b:7d:c6:30:45:e2:eb:50:01:f8:6e:55:c4:15:b9:c9:
fb:76:ab:c6:f3:82:3f:0e:7c:73:f8:88:36:85:98:03:3c:98:
3c:80:dd:ad:22:75:04:d5:45:6c:46:89:f1:71:95:45:a9:be:
1d:d7:78:b3:99:4b:6f:17:f7:5d:83:8e:27:8c:9c:6a:4a:22:
b9:a3:fb:b5:3e:a5:ef:5b:ef:a7:4e:7f:83:ca:7b:1a:c9:56:
4e:da:9a:12:4a:d6:9a:7a:d0:61:6e:d5:bb:73:32:a9:ae:37:
63:1b:50:2e:48:68:5b:76:70:8a:5c:46:e4:c6:7c:7b:0b:b9:
c2:86:53:00:7f:86:d3:0d:82:6a:8f:7c:e0:41:cf:3e:0f:e4:
3f:c5:0e:2d:d0:5e:85:c5:07:d1:26:f3:6a:90:36:d8:28:32:
9f:b2:77:74
[root@master231 ~]#
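When only the identity matters, the full dump is unnecessary: `openssl x509 -subject` prints just the O/CN pair that RBAC evaluates. A self-contained sketch (it generates a throwaway certificate with the same subject layout; the user name `demo-admin` is made up):

```shell
# Generate a throwaway client cert whose subject mimics kubeadm's admin cert:
# O (organization) maps to the Kubernetes group, CN to the user name.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 \
  -subj "/O=system:masters/CN=demo-admin" 2>/dev/null

# Print only the subject -- this is all the authentication layer extracts.
openssl x509 -noout -subject -in /tmp/demo.crt
```

Run against `/opt/admin.crt` instead, the same command prints `O = system:masters, CN = kubernetes-admin`.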
3. The 'system:masters' group is bound to the built-in "cluster-admin" cluster role
[root@master231 ~]# kubectl get clusterrolebindings cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: "2025-04-07T03:00:12Z"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "149"
uid: 55322e36-389a-44d2-9697-841ac569272e
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
[root@master231 ~]#
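The same binding pattern works for custom groups issued in client certificates. A hypothetical fragment (the group name `oldboyedu:readers` is made up) that grants cluster-wide read-only access via the built-in `view` ClusterRole:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: readers-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view      # built-in read-only role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oldboyedu:readers   # hypothetical group, taken from a cert's O field
```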
19. Deploying metrics-server and a troubleshooting case
1. What is metrics-server
metrics-server supplies the monitoring data behind the cluster's "kubectl top" command, and it also enables the HPA (HorizontalPodAutoscaler).
[root@master231 ~]# kubectl top pods
error: Metrics API not available
[root@master231 ~]#
[root@master231 ~]# kubectl top nodes
error: Metrics API not available
[root@master231 ~]#
Deployment documentation:
https://github.com/kubernetes-sigs/metrics-server
2. What is the difference between HPA and VPA?
- HPA:
Automatically increases the Pod replica count when load rises, absorbing traffic spikes and lowering the per-Pod load.
- VPA:
Dynamically adjusts a container's resource limits. For example, a Pod that starts with 200Mi of memory can have its memory raised once usage hits a defined threshold, without adding Pod replicas.
The key difference is that VPA is bounded by a resource ceiling: a Pod is the smallest schedulable unit in Kubernetes and cannot be split across nodes, so vertical scaling is ultimately limited by the resources of a single node.
3. metrics-server essentially scrapes its monitoring data from the kubelet on each node
[root@master231 pki]# pwd
/etc/kubernetes/pki
[root@master231 pki]#
[root@master231 pki]# ll apiserver-kubelet-client.*
-rw-r--r-- 1 root root 1164 Apr 7 11:00 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Apr 7 11:00 apiserver-kubelet-client.key
[root@master231 pki]#
[root@master231 pki]# curl -s -k --key apiserver-kubelet-client.key --cert apiserver-kubelet-client.crt https://10.0.0.231:10250/metrics/resource | wc -l
102
[root@master231 pki]#
[root@master231 pki]# curl -s -k --key apiserver-kubelet-client.key --cert apiserver-kubelet-client.crt https://10.0.0.232:10250/metrics/resource | wc -l
67
[root@master231 pki]#
[root@master231 pki]# curl -s -k --key apiserver-kubelet-client.key --cert apiserver-kubelet-client.crt https://10.0.0.233:10250/metrics/resource | wc -l
57
[root@master231 pki]#
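The kubelet's `/metrics/resource` endpoint returns Prometheus text format, so individual values can be pulled out with standard tools. A self-contained sketch using an illustrative sample (the numbers are made up, not taken from the cluster above):

```shell
# Illustrative sample of kubelet /metrics/resource output (Prometheus text format).
sample='# HELP node_cpu_usage_seconds_total Cumulative cpu time consumed by the node
node_cpu_usage_seconds_total 12345.67 1700000000000
node_memory_working_set_bytes 3.123e+09 1700000000000'

# Extract the node CPU counter (second field of the matching line).
echo "$sample" | awk '/^node_cpu_usage_seconds_total/ {print $2}'   # -> 12345.67
```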
4. Deploy the metrics-server component
4.1 Download the manifest
[root@master231 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml
4.2 Edit the manifest
[root@master231 ~]# vim high-availability-1.21+.yaml
...
114 apiVersion: apps/v1
115 kind: Deployment
116 metadata:
...
144 - args:
145 - --kubelet-insecure-tls # Skip verification of the kubelets' serving-certificate CA; without this flag metrics-server fails with x509 errors.
...
4.3 Apply the manifest
[root@master231 ~]# kubectl apply -f high-availability-1.21+.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
poddisruptionbudget.policy/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master231 ~]#
4.4 Check that the Pods came up
[root@master231 ~]# kubectl get pods -o wide -n kube-system -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
metrics-server-6b4f784878-gwsf5 1/1 Running 0 27s 10.100.203.150 worker232 <none> <none>
metrics-server-6b4f784878-qjvwr 1/1 Running 0 27s 10.100.140.81 worker233 <none> <none>
4.5 Verify that metrics-server works
[root@master231 ~]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master231 136m 6% 2981Mi 78%
worker232 53m 2% 1707Mi 45%
worker233 45m 2% 1507Mi 39%
[root@master231 ~]#
[root@master231 ~]# kubectl top pods -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-6d8c4cb4d-bknzr 1m 11Mi
coredns-6d8c4cb4d-cvp9w 1m 31Mi
etcd-master231 10m 75Mi
kube-apiserver-master231 33m 334Mi
kube-controller-manager-master231 8m 56Mi
kube-proxy-29dbp 4m 19Mi
kube-proxy-hxmzb 7m 18Mi
kube-proxy-k92k2 1m 31Mi
kube-scheduler-master231 2m 17Mi
metrics-server-6b4f784878-gwsf5 2m 17Mi
metrics-server-6b4f784878-qjvwr 2m 17Mi
[root@master231 ~]#
[root@master231 ~]# kubectl top pods -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
calico-apiserver calico-apiserver-64b779ff45-cspxl 4m 28Mi
calico-apiserver calico-apiserver-64b779ff45-fw6pc 3m 29Mi
calico-system calico-kube-controllers-76d5c7cfc-89z7j 3m 16Mi
calico-system calico-node-4cvnj 16m 140Mi
calico-system calico-node-qbxmn 16m 143Mi
calico-system calico-node-scwkd 17m 138Mi
calico-system calico-typha-595f8c6fcb-bhdw6 1m 18Mi
calico-system calico-typha-595f8c6fcb-f2fw6 2m 22Mi
calico-system csi-node-driver-2mzq6 1m 8Mi
calico-system csi-node-driver-7z4hj 1m 8Mi
calico-system csi-node-driver-m66z9 1m 15Mi
default xiuxian-6dffdd86b-m8f2h 1m 33Mi
kube-system coredns-6d8c4cb4d-bknzr 1m 11Mi
kube-system coredns-6d8c4cb4d-cvp9w 1m 31Mi
kube-system etcd-master231 16m 74Mi
kube-system kube-apiserver-master231 35m 334Mi
kube-system kube-controller-manager-master231 9m 57Mi
kube-system kube-proxy-29dbp 4m 19Mi
kube-system kube-proxy-hxmzb 7m 18Mi
kube-system kube-proxy-k92k2 10m 31Mi
kube-system kube-scheduler-master231 2m 17Mi
kube-system metrics-server-6b4f784878-gwsf5 2m 17Mi
kube-system metrics-server-6b4f784878-qjvwr 2m 17Mi
kuboard kuboard-agent-2-6964c46d56-cm589 5m 9Mi
kuboard kuboard-agent-77dd5dcd78-jc4rh 5m 24Mi
kuboard kuboard-etcd-qs5jh 4m 35Mi
kuboard kuboard-v3-685dc9c7b8-2pd2w 36m 353Mi
metallb-system controller-686c7db689-cnj2c 1m 18Mi
metallb-system speaker-srvw8 3m 31Mi
metallb-system speaker-tgwql 3m 17Mi
metallb-system speaker-zpn5c 3m 17Mi
tigera-operator tigera-operator-8d497bb9f-bcj5s 2m 27Mi
20. Horizontal Pod autoscaling (HPA) in practice
1. What is HPA
HPA is a built-in Kubernetes resource; the full name is "HorizontalPodAutoscaler".
It scales Pods horizontally and automatically: replicas are added during business peaks and removed again during quiet periods.
2. HPA walkthrough
2.1 Create the Deployment and HPA
[root@master231 ~]# cat 01-deploy-hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: stress
spec:
replicas: 1
selector:
matchLabels:
app: stress
template:
metadata:
labels:
app: stress
spec:
containers:
- image: jasonyin2020/oldboyedu-linux-tools:v0.1
name: oldboyedu-linux-tools
args:
- tail
- -f
- /etc/hosts
resources:
requests:
cpu: 0.2
memory: 300Mi
limits:
cpu: 0.5
memory: 500Mi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: stress-hpa
spec:
maxReplicas: 5
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: stress
targetCPUUtilizationPercentage: 95
[root@master231 ~]# kubectl apply -f 01-deploy-hpa.yaml
deployment.apps/stress created
horizontalpodautoscaler.autoscaling/stress-hpa created
[root@master231 ~]#
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-tlf8p 0/1 ContainerCreating 0 7s <none> worker233 <none> <none>
[root@master231 ~]#
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-tlf8p 1/1 Running 0 15s 10.100.140.80 worker233 <none> <none>
[root@master231 ~]#
2.2 Verify
[root@master231 ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
stress 2/2 2 2 94s
[root@master231 ~]#
[root@master231 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 0%/95% 2 5 2 98s
[root@master231 ~]#
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-6hm85 1/1 Running 0 85s 10.100.203.154 worker232 <none> <none>
stress-5585b5ccc-tlf8p 1/1 Running 0 100s 10.100.140.80 worker233 <none> <none>
[root@master231 ~]#
Creating the HPA imperatively:
[root@master231 horizontalpodautoscalers]# kubectl autoscale deploy stress --min=2 --max=5 --cpu-percent=95 -o yaml --dry-run=client
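The scaling decisions seen below follow the documented HPA formula desiredReplicas = ceil(currentReplicas × currentMetricValue / targetValue), with the result then clamped to [minReplicas, maxReplicas]. A minimal sketch of that arithmetic (the helper name `hpa_desired` is made up):

```shell
# desiredReplicas = ceil(currentReplicas * currentMetric / target)
hpa_desired() {
  # $1 = current replicas, $2 = current CPU%, $3 = target CPU%
  awk -v r="$1" -v c="$2" -v t="$3" 'BEGIN {
    d = r * c / t
    n = (d == int(d)) ? d : int(d) + 1   # ceil()
    print n
  }'
}

hpa_desired 3 125 95   # 3 replicas at 125% vs a 95% target -> 4
```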
2.3 Stress test
[root@master231 ~]# kubectl exec stress-5585b5ccc-6hm85 -- stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10m
stress: info: [7] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
2.4 Check the Pod replica count
[root@master231 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 125%/95% 2 5 3 4m48s
[root@master231 ~]#
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-6hm85 1/1 Running 0 5m34s 10.100.203.154 worker232 <none> <none>
stress-5585b5ccc-b2wdd 1/1 Running 0 78s 10.100.140.83 worker233 <none> <none>
stress-5585b5ccc-tlf8p 1/1 Running 0 5m49s 10.100.140.80 worker233 <none> <none>
[root@master231 ~]#
[root@master231 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 83%/95% 2 5 3 5m31s
2.5 Stress again
[root@master231 ~]# kubectl exec stress-5585b5ccc-b2wdd -- stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10m
stress: info: [6] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
[root@master231 ~]# kubectl exec stress-5585b5ccc-tlf8p -- stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10m
stress: info: [7] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
2.6 At most 5 Pods are created
[root@master231 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 177%/95% 2 5 3 7m27s
[root@master231 ~]#
[root@master231 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 250%/95% 2 5 5 7m33s
[root@master231 ~]#
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
stress-5585b5ccc-6hm85 1/1 Running 0 7m59s 10.100.203.154 worker232 <none> <none>
stress-5585b5ccc-b2wdd 1/1 Running 0 3m43s 10.100.140.83 worker233 <none> <none>
stress-5585b5ccc-l6d97 1/1 Running 0 58s 10.100.203.149 worker232 <none> <none>
stress-5585b5ccc-sqlzz 1/1 Running 0 58s 10.100.140.82 worker233 <none> <none>
stress-5585b5ccc-tlf8p 1/1 Running 0 8m14s 10.100.140.80 worker233 <none> <none>
[root@master231 ~]#
[root@master231 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
stress-hpa Deployment/stress 150%/95% 2 5 5 8m26s
[root@master231 ~]#
2.7 After the stress test stops
Wait about 5 minutes and the HPA automatically scales back down to 2 Pods.
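That roughly 5-minute delay is the HPA's scale-down stabilization window (300s by default). With the autoscaling/v2 API it can be tuned explicitly; a hypothetical fragment shortening it to 60s:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: stress-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stress
  minReplicas: 2
  maxReplicas: 5
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60   # default is 300
```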
3. Troubleshooting case
[root@master231 ~]# kubectl get pods -o wide -n kube-system -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
metrics-server-6f5b66d8f9-fvbqm 0/1 Running 0 15m 10.100.203.151 worker232 <none> <none>
metrics-server-6f5b66d8f9-n2zxs 0/1 Running 0 15m 10.100.140.77 worker233 <none> <none>
[root@master231 ~]#
[root@master231 ~]# kubectl -n kube-system logs metrics-server-6f5b66d8f9-fvbqm
...
E0414 09:30:03.341444 1 scraper.go:149] "Failed to scrape node" err="Get \"https://10.0.0.233:10250/metrics/resource\": tls: failed to verify certificate: x509: cannot validate certificate for 10.0.0.233 because it doesn't contain any IP SANs" node="worker233"
E0414 09:30:03.352008 1 scraper.go:149] "Failed to scrape node" err="Get \"https://10.0.0.232:10250/metrics/resource\": tls: failed to verify certificate: x509: cannot validate certificate for 10.0.0.232 because it doesn't contain any IP SANs" node="worker232"
E0414 09:30:03.354140 1 scraper.go:149] "Failed to scrape node" err="Get \"https://10.0.0.231:10250/metrics/resource\": tls: failed to verify certificate: x509: cannot validate certificate for 10.0.0.231 because it doesn't contain any IP SANs" node="master231"
Analysis:
Certificate verification fails, so metrics-server cannot scrape data from the kubelets.
Fix:
[root@master231 ~]# vim high-availability-1.21+.yaml
...
114 apiVersion: apps/v1
115 kind: Deployment
116 metadata:
...
144 - args:
145 - --kubelet-insecure-tls # Skip verification of the kubelets' serving-certificate CA; without this flag metrics-server fails with x509 errors.
...
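The error above says the kubelet's serving certificate carries no IP SANs. Whether a certificate covers a given IP can be checked directly with openssl; a self-contained sketch that generates a throwaway serving cert which DOES carry an IP SAN and then inspects it (requires OpenSSL 1.1.1 or newer for -addext/-ext):

```shell
# Generate a throwaway serving cert with an IP SAN for 10.0.0.233.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/kubelet-demo.key -out /tmp/kubelet-demo.crt -days 1 \
  -subj "/CN=kubelet-demo" \
  -addext "subjectAltName=IP:10.0.0.233" 2>/dev/null

# Print the SAN extension; a cert missing this triggers the x509 error above.
openssl x509 -noout -ext subjectAltName -in /tmp/kubelet-demo.crt
```

Running the same `-ext subjectAltName` query against a real kubelet.crt shows whether the node IP is listed.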
21. Project: renewing the Kubernetes cluster certificates (take snapshots first)
1. Precautions
Power the nodes off and take snapshots first; snapshot all cluster nodes at the same time.
2. Renew the master node certificates
2.1 Server-side certificate paths (note there are three sets of CA certificates)
[root@master231 pki]# ll
total 72
drwxr-xr-x 3 root root 4096 Apr 15 10:37 ./
drwxr-xr-x 4 root root 4096 Apr 7 11:00 ../
-rw-r--r-- 1 root root 1285 Apr 7 11:00 apiserver.crt
-rw-r--r-- 1 root root 1155 Apr 7 11:00 apiserver-etcd-client.crt
-rw------- 1 root root 1679 Apr 7 11:00 apiserver-etcd-client.key
-rw------- 1 root root 1679 Apr 7 11:00 apiserver.key
-rw-r--r-- 1 root root 1164 Apr 7 11:00 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Apr 7 11:00 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1099 Apr 7 11:00 ca.crt
-rw------- 1 root root 1679 Apr 7 11:00 ca.key
drwxr-xr-x 2 root root 4096 Apr 7 11:00 etcd/
-rw-r--r-- 1 root root 1115 Apr 7 11:00 front-proxy-ca.crt
-rw------- 1 root root 1675 Apr 7 11:00 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Apr 7 11:00 front-proxy-client.crt
-rw------- 1 root root 1675 Apr 7 11:00 front-proxy-client.key
-rw------- 1 root root 1675 Apr 7 11:00 sa.key
-rw------- 1 root root 451 Apr 7 11:00 sa.pub
-rw-r--r-- 1 root root 257 Apr 14 09:53 token.csv
[root@master231 pki]# ll etcd/
total 40
drwxr-xr-x 2 root root 4096 Apr 7 11:00 ./
drwxr-xr-x 3 root root 4096 Apr 15 10:37 ../
-rw-r--r-- 1 root root 1086 Apr 7 11:00 ca.crt
-rw------- 1 root root 1675 Apr 7 11:00 ca.key
-rw-r--r-- 1 root root 1159 Apr 7 11:00 healthcheck-client.crt
-rw------- 1 root root 1679 Apr 7 11:00 healthcheck-client.key
-rw-r--r-- 1 root root 1200 Apr 7 11:00 peer.crt
-rw------- 1 root root 1675 Apr 7 11:00 peer.key
-rw-r--r-- 1 root root 1200 Apr 7 11:00 server.crt
-rw------- 1 root root 1675 Apr 7 11:00 server.key
2.2 Client-side certificate paths
[root@worker233 ~]# ll /var/lib/kubelet/pki/
total 20
drwxr-xr-x 2 root root 4096 Apr 10 14:50 ./
drwx------ 8 root root 4096 Apr 10 14:50 ../
-rw------- 1 root root 1114 Apr 10 14:50 kubelet-client-2025-04-10-14-50-45.pem
lrwxrwxrwx 1 root root 59 Apr 10 14:50 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2025-04-10-14-50-45.pem
-rw-r--r-- 1 root root 2258 Apr 10 14:50 kubelet.crt
-rw------- 1 root root 1675 Apr 10 14:50 kubelet.key
2.3 Check the expiration of certificates in the kubeadm-managed local PKI
[root@master231 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0415 10:39:11.573452 15962 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Apr 07, 2026 03:00 UTC 357d ca no
apiserver Apr 07, 2026 03:00 UTC 357d ca no
apiserver-etcd-client Apr 07, 2026 03:00 UTC 357d etcd-ca no
apiserver-kubelet-client Apr 07, 2026 03:00 UTC 357d ca no
controller-manager.conf Apr 07, 2026 03:00 UTC 357d ca no
etcd-healthcheck-client Apr 07, 2026 03:00 UTC 357d etcd-ca no
etcd-peer Apr 07, 2026 03:00 UTC 357d etcd-ca no
etcd-server Apr 07, 2026 03:00 UTC 357d etcd-ca no
front-proxy-client Apr 07, 2026 03:00 UTC 357d front-proxy-ca no
scheduler.conf Apr 07, 2026 03:00 UTC 357d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Apr 05, 2035 03:00 UTC 9y no
etcd-ca Apr 05, 2035 03:00 UTC 9y no
front-proxy-ca Apr 05, 2035 03:00 UTC 9y no
Recommended reading:
https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-certs/
2.4 Renew the master node certificates with kubeadm
[root@master231 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0415 10:41:47.338086 18695 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@master231 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0415 10:41:53.035499 18778 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Apr 15, 2026 02:41 UTC 364d ca no
apiserver Apr 15, 2026 02:41 UTC 364d ca no
apiserver-etcd-client Apr 15, 2026 02:41 UTC 364d etcd-ca no
apiserver-kubelet-client Apr 15, 2026 02:41 UTC 364d ca no
controller-manager.conf Apr 15, 2026 02:41 UTC 364d ca no
etcd-healthcheck-client Apr 15, 2026 02:41 UTC 364d etcd-ca no
etcd-peer Apr 15, 2026 02:41 UTC 364d etcd-ca no
etcd-server Apr 15, 2026 02:41 UTC 364d etcd-ca no
front-proxy-client Apr 15, 2026 02:41 UTC 364d front-proxy-ca no
scheduler.conf Apr 15, 2026 02:41 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Apr 05, 2035 03:00 UTC 9y no
etcd-ca Apr 05, 2035 03:00 UTC 9y no
front-proxy-ca Apr 05, 2035 03:00 UTC 9y no
2.5 Check the certificate timestamps again
[root@master231 pki]# ll
total 72
drwxr-xr-x 3 root root 4096 Apr 15 10:37 ./
drwxr-xr-x 4 root root 4096 Apr 7 11:00 ../
-rw-r--r-- 1 root root 1285 Apr 15 10:41 apiserver.crt
-rw-r--r-- 1 root root 1155 Apr 15 10:41 apiserver-etcd-client.crt
-rw------- 1 root root 1679 Apr 15 10:41 apiserver-etcd-client.key
-rw------- 1 root root 1679 Apr 15 10:41 apiserver.key
-rw-r--r-- 1 root root 1164 Apr 15 10:41 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Apr 15 10:41 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1099 Apr 7 11:00 ca.crt
-rw------- 1 root root 1679 Apr 7 11:00 ca.key
drwxr-xr-x 2 root root 4096 Apr 7 11:00 etcd/
-rw-r--r-- 1 root root 1115 Apr 7 11:00 front-proxy-ca.crt
-rw------- 1 root root 1675 Apr 7 11:00 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Apr 15 10:41 front-proxy-client.crt
-rw------- 1 root root 1679 Apr 15 10:41 front-proxy-client.key
-rw------- 1 root root 1675 Apr 7 11:00 sa.key
-rw------- 1 root root 451 Apr 7 11:00 sa.pub
-rw-r--r-- 1 root root 257 Apr 14 09:53 token.csv
[root@master231 pki]#
[root@master231 pki]#
[root@master231 pki]# ll etcd/
total 40
drwxr-xr-x 2 root root 4096 Apr 7 11:00 ./
drwxr-xr-x 3 root root 4096 Apr 15 10:37 ../
-rw-r--r-- 1 root root 1086 Apr 7 11:00 ca.crt
-rw------- 1 root root 1675 Apr 7 11:00 ca.key
-rw-r--r-- 1 root root 1159 Apr 15 10:41 healthcheck-client.crt
-rw------- 1 root root 1675 Apr 15 10:41 healthcheck-client.key
-rw-r--r-- 1 root root 1200 Apr 15 10:41 peer.crt
-rw------- 1 root root 1679 Apr 15 10:41 peer.key
-rw-r--r-- 1 root root 1200 Apr 15 10:41 server.crt
-rw------- 1 root root 1679 Apr 15 10:41 server.key
22. Renewing the worker node certificates
1. Inspect the client certificate files before renewal
[root@worker233 ~]# ll /var/lib/kubelet/pki/
total 20
drwxr-xr-x 2 root root 4096 Apr 10 14:50 ./
drwx------ 8 root root 4096 Apr 10 14:50 ../
-rw------- 1 root root 1114 Apr 10 14:50 kubelet-client-2025-04-10-14-50-45.pem
lrwxrwxrwx 1 root root 59 Apr 10 14:50 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2025-04-10-14-50-45.pem
-rw-r--r-- 1 root root 2258 Apr 10 14:50 kubelet.crt
-rw------- 1 root root 1675 Apr 10 14:50 kubelet.key
[root@worker233 ~]#
[root@worker232 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout
Certificate:
Data:
...
Validity
Not Before: Apr 10 06:45:45 2025 GMT
Not After : Apr 10 06:45:45 2026 GMT
Subject: O = system:nodes, CN = system:node:worker233
2. Use kube-controller-manager to renew the certificates:
Reference:
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/
[root@master231 pki]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
...
spec:
containers:
- command:
- kube-controller-manager
...
# Validity period of signed certificates. Each CSR can request a shorter certificate by setting spec.expirationSeconds.
- --cluster-signing-duration=87600h0m0s
# Enable automatic CSR signing by the controller manager. This is the default and could be omitted, but setting it explicitly guards against the default changing in a future version.
- --feature-gates=RotateKubeletServerCertificate=true
3. Verify that kube-controller-manager restarted successfully.
[root@master231 pki]# kubectl get pods -n kube-system -l component=kube-controller-manager
NAME READY STATUS RESTARTS AGE
kube-controller-manager-master231 1/1 Running 0 36s
[root@master231 pki]#
[root@master231 pki]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
[root@master231 pki]#
4. The kubelet configuration must allow certificate rotation; it is enabled by default, so no change is needed.
[root@worker232 ~]# vim /var/lib/kubelet/config.yaml
...
rotateCertificates: true
5. Change the system time on the worker node (to force the rotation)
On CentOS:
[root@worker232 ~]# date -s "2025-6-4"
[root@worker232 ~]#
[root@worker232 ~]# systemctl restart kubelet
On Ubuntu:
[root@worker232 ~]# timedatectl set-ntp off # disable time synchronization first
[root@worker232 ~]#
[root@worker232 ~]# timedatectl set-time '2026-04-09 15:30:00' # set the clock to one day before the certificate expires
[root@worker232 ~]#
[root@worker232 ~]# date
Wed Jun 4 03:30:02 PM CST 2025
[root@worker232 ~]#
6. Restart kubelet
[root@worker233 ~]# ll /var/lib/kubelet/pki/
total 20
drwxr-xr-x 2 root root 4096 Apr 10 2025 ./
drwx------ 8 root root 4096 Apr 15 2025 ../
-rw------- 1 root root 1114 Apr 10 2025 kubelet-client-2025-04-10-14-50-45.pem
lrwxrwxrwx 1 root root 59 Apr 10 2025 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2025-04-10-14-50-45.pem
-rw-r--r-- 1 root root 2258 Apr 10 2025 kubelet.crt
-rw------- 1 root root 1675 Apr 10 2025 kubelet.key
[root@worker233 ~]#
[root@worker233 ~]#
[root@worker233 ~]# systemctl restart kubelet
[root@worker233 ~]#
[root@worker233 ~]# ll /var/lib/kubelet/pki/
total 24
drwxr-xr-x 2 root root 4096 Apr 9 15:30 ./
drwx------ 8 root root 4096 Apr 15 2025 ../
-rw------- 1 root root 1114 Apr 10 2025 kubelet-client-2025-04-10-14-50-45.pem
-rw------- 1 root root 1114 Apr 9 15:30 kubelet-client-2026-04-09-15-30-29.pem
lrwxrwxrwx 1 root root 59 Apr 9 15:30 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2026-04-09-15-30-29.pem
-rw-r--r-- 1 root root 2258 Apr 10 2025 kubelet.crt
-rw------- 1 root root 1675 Apr 10 2025 kubelet.key
[root@worker233 ~]#
[root@worker233 ~]# date
Thu Apr 9 03:30:58 PM CST 2026
[root@worker233 ~]#
7. Check the client certificate validity
[root@worker233 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
1e:cc:67:3b:33:d7:61:c8:ec:57:c6:f2:8d:71:7a:03
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Apr 15 02:48:47 2025 GMT
Not After : Apr 5 03:00:04 2035 GMT # The certificate has been renewed for roughly 10 years!
Subject: O = system:nodes, CN = system:node:worker233
...
8. Verify the cluster still works (if Pods cannot be created, delete the Pods in the Calico namespaces)
[root@master231 ~]# cat > test-cni.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: xixi
spec:
nodeName: worker232
containers:
- image: harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v1
name: c1
---
apiVersion: v1
kind: Pod
metadata:
name: haha
spec:
nodeName: worker233
containers:
- image: harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v1
name: c1
EOF
[root@master231 ~]#
[root@master231 ~]# kubectl apply -f test-cni.yaml
pod/xixi created
pod/haha created
[root@master231 ~]#
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
haha 1/1 Running 0 58s 10.100.140.98 worker233 <none> <none>
xixi 1/1 Running 0 58s 10.100.203.160 worker232 <none> <none>
[root@master231 ~]#
[root@master231 ~]# curl 10.100.140.98
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>yinzhengjie apps v1</title>
<style>
div img {
width: 900px;
height: 600px;
margin: 0;
}
</style>
</head>
<body>
<h1 style="color: green">凡人修仙传 v1 </h1>
<div>
<img src="1.jpg">
<div>
</body>
</html>
[root@master231 ~]#
[root@master231 ~]# curl 10.100.203.160
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>yinzhengjie apps v1</title>
<style>
div img {
width: 900px;
height: 600px;
margin: 0;
}
</style>
</head>
<body>
<h1 style="color: green">凡人修仙传 v1 </h1>
<div>
<img src="1.jpg">
<div>
</body>
</html>
[root@master231 ~]#
Note:
If anything misbehaves, try deleting the corresponding Calico Pods:
[root@master231 ~]# kubectl get pods -o wide -n calico-system
[root@master231 ~]# kubectl get pods -o wide -n calico-apiserver
[root@master231 ~]# kubectl -n calico-apiserver delete pods --all
[root@master231 ~]# kubectl get pods -o wide -n calico-apiserver
Things to keep in mind when renewing worker certificates in production:
- Treat production with respect; never operate casually.
- Monitor certificate expiry; many open-source tools support this, such as Zabbix and Prometheus.
- When restarting kubelet, roll through the nodes one at a time rather than restarting in bulk, to avoid large-scale Pod unavailability, business loss, or even a production incident.
- Do the upgrade during a business trough to minimize impact.
- Before touching production, rehearse the procedure at least three times in a staging replica of the environment.
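A minimal expiry check of this kind can be built on `openssl x509 -checkend` (which tests whether a certificate expires within a given number of seconds). A sketch; the helper name `cert_expires_within` is made up:

```shell
# Returns 0 (triggering the alert) when the certificate expires
# within <days> days; returns 1 when it is still safely valid.
cert_expires_within() {
  crt="$1"; days="$2"
  if openssl x509 -noout -checkend "$((days * 86400))" -in "$crt" >/dev/null; then
    return 1   # still valid beyond the window
  else
    return 0   # expires within the window
  fi
}

# Example scan over kubeadm's PKI directory (run on a master node):
#   for c in /etc/kubernetes/pki/*.crt; do
#     cert_expires_within "$c" 30 && echo "WARNING: $c expires within 30 days"
#   done
```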
This article is from cnblogs, by 丁志岩. Please credit the original link when republishing: https://www.cnblogs.com/dezyan/p/18887714
