Java-Related Technical Notes

Find log files larger than a given size

find / -type f -size +500M -print0 | xargs -0 du -h

Then run echo "" > xxx.log to overwrite the file with an empty string and reclaim the space.
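Note that echo "" actually leaves a one-byte newline behind; a true zero-byte truncation can be sketched like this (the file path is illustrative):

```shell
# Make a throwaway "log" file to demonstrate on (illustrative path)
logfile=$(mktemp)
echo "old log contents" > "$logfile"

# ': >' truncates to zero bytes while keeping the same inode, so a process
# that still holds the file open keeps logging into the now-empty file
: > "$logfile"

wc -c < "$logfile"   # prints 0
```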

Arthas (Docker version)

docker exec -it  ${containerId} /bin/bash -c "wget https://arthas.aliyun.com/arthas-boot.jar && java -jar arthas-boot.jar"

Packaging a single module of a Maven multi-module project

To build only one module of a multi-module project, run:

mvn clean package -pl <module-name> -am

Option reference:

-pl, --projects               build the specified modules, comma-separated
-am, --also-make              also build the modules the listed modules depend on
-amd, --also-make-dependents  also build the modules that depend on the listed modules
-rf, --resume-from            resume the reactor from the specified module

git stash usage

// Stash local changes, including untracked files (-u); note that
// "git stash save" is deprecated in newer Git in favor of: git stash push -u -m "message"
git stash save -u "message"
// List all stashes
git stash list
// Re-apply a stash
git stash apply stash@{id}
// Delete a stash
git stash drop stash@{id}
// Show what a stash contains
git stash show 
git stash show stash@{id}
git stash show -p
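The workflow above can be tried end to end in a throwaway repository; every path, name, and message below is illustrative, and git stash push is used as the non-deprecated spelling of stash save:

```shell
# Set up a throwaway repo to demonstrate on (everything here is illustrative)
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > tracked.txt && git add tracked.txt && git commit -q -m "first"

# Dirty the tree: modify a tracked file and create an untracked one
echo v2 > tracked.txt
echo new > untracked.txt

# Stash both; -u includes untracked files
git stash push -u -m "work in progress"

git stash list                 # stash@{0}: On ...: work in progress
git stash show -p 'stash@{0}'  # inspect the stashed patch

# Bring everything back
git stash apply 'stash@{0}'
cat tracked.txt                # v2 again
```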

Reading the body from a request

import java.nio.charset.StandardCharsets;

import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.http.server.reactive.ServerHttpRequest;

import reactor.core.publisher.Flux;

/**
 * Reads the data out of the request body. The body is consumed in the
 * process, so a new request has to be constructed for downstream use.
 *
 * Caveat: subscribe() is asynchronous, so this pattern is only reliable
 * once the body has already been buffered; otherwise the method can
 * return before any buffers arrive.
 *
 * @param serverHttpRequest the incoming request
 * @return the body as a UTF-8 string
 */
private String resolveBodyFromRequest(ServerHttpRequest serverHttpRequest) {
    // Get the request body
    Flux<DataBuffer> body = serverHttpRequest.getBody();
    StringBuilder sb = new StringBuilder();
    body.subscribe(buffer -> {
        byte[] bytes = new byte[buffer.readableByteCount()];
        buffer.read(bytes);
        DataBufferUtils.release(buffer);
        sb.append(new String(bytes, StandardCharsets.UTF_8));
    });
    return sb.toString();
}

Set a Docker container to restart automatically

docker container update --restart=always <container-name>

Running Redis in Docker

Redis configuration change (important)

In redis.conf, make sure daemonize no is set (foreground mode). If it is yes, Redis forks into the background, the container's main process has nothing left to do, and Docker stops the container, so Redis appears unable to start.
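A quick pre-flight check of that setting might look like this (the conf path is illustrative):

```shell
# Fail fast if redis.conf would daemonize Redis inside the container
conf=/data/redis/redis.conf   # illustrative path
if grep -Eq '^[[:space:]]*daemonize[[:space:]]+yes' "$conf"; then
  echo "daemonize must be 'no' when running under Docker" >&2
  exit 1
fi
```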

Start command

docker run --net=host --name redis --cpus=0.5 -m 256m -v /data/redis/redis.conf:/etc/redis.conf -v /data/redis/data:/data -d redis:alpine redis-server /etc/redis.conf --appendonly yes

Running Nginx in Docker

cd /data/www/nginx/
// Limit the container to 0.5 CPU and at most 128m of memory
docker run -itd --name nginx --cpus=0.5 -m 128m --memory-reservation=64m  -v /data/crt:/data/crt -v $PWD/html:/etc/nginx/html -v $PWD/conf.d:/etc/nginx/conf.d -v $PWD/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/etc/nginx/logs --net=host nginx:alpine

One way to deploy a Spring Boot jar with Docker

The run command:

docker run -d --name <container-name> --log-opt max-size=200m -v /data/springboot/logs:/logs -v /data/springboot:/app -p 9095:9095 java:8 java -Xms2g -Xmx4g -jar -Duser.timezone=GMT+08 -Dspring.profiles.active=test /app/springboot.jar

Notes:

  • -d : run detached in the background
  • --log-opt max-size=200m : cap the container log at 200m
  • -v /data/springboot/logs:/logs : mount the log directory
  • -v /data/springboot/:/app/ : mount the directory containing the jar into the container
  • -p 9095:9095 : port mapping
  • java ... : the usual java -jar start command

Installing a cracked Photoshop on Mac

Installer download link:

https://cloud.189.cn/web/share?code=Ff2aEvVfiqua (access code: 4b9b)

Installing Docker

Offline installation

Offline package download link:

https://cloud.189.cn/web/share?code=B77zuqQrInu2 (access code: ct27)

yum can download packages without installing them:

yum install --downloadonly --downloaddir=/home/docker/dockerRpm docker

rpm -ivh XXX.rpm --nodeps --force

rpm -ivh container-selinux-2.21-2.gitba103ac.el7.noarch.rpm docker-ce-17.06.0.ce-1.el7.centos.x86_64.rpm --force --nodeps

//--nodeps  skip dependency checks during installation
//--force   force the installation
// Start the service

systemctl start docker

//Start docker on boot (alternatively: chkconfig docker on)

systemctl enable docker

Online installation

Edit the DNS configuration:

vi /etc/resolv.conf

Add:

nameserver 8.8.8.8

If Docker was installed before, remove it first:

sudo yum remove docker docker-common docker-selinux docker-engine

Install the prerequisites:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Download the repo file for your distribution:

wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo

Point the repository at the TUNA mirror:

sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
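The substitution can be verified against a scratch copy of the repo file first (the file contents below are illustrative):

```shell
# Work on a scratch copy instead of /etc/yum.repos.d (illustrative)
repofile=$(mktemp)
cat > "$repofile" <<'EOF'
[docker-ce-stable]
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
EOF

# Same substitution as above; '+' as the sed delimiter avoids escaping slashes
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' "$repofile"
grep baseurl "$repofile"
# baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/stable
```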

Finally, install:

sudo yum makecache fast
sudo yum install docker-ce

To install a pinned version:

yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.06.2.ce-1.el7.centos.x86_64.rpm

Nginx reverse proxy drops headers

Lift the restriction at the source: by default, Nginx silently drops any request header whose name contains '_'. Add underscores_in_headers on; to the http block (the default is underscores_in_headers off).
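In context, the directive goes in the http block of nginx.conf; a minimal sketch (the upstream address is illustrative):

```nginx
http {
    # Keep request headers whose names contain underscores, e.g. X_Custom_Id
    underscores_in_headers on;

    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;  # illustrative upstream
        }
    }
}
```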

Installing Kibana in Docker

Version used: 7.8.0

1. Pull the image

docker pull kibana:7.8.0

2. Start the container

  • Prepare the configuration file

    mkdir -p /opt/elk7/kibana/config

    vim /opt/elk7/kibana/config/kibana.yml

    server.port: 5601
    server.host: "0"
    elasticsearch.hosts: ["http://192.168.1.120:9200"]
    elasticsearch.username: "kibana_system"
    elasticsearch.password: "password"
    #i18n.locale: "en"
    i18n.locale: "zh-CN"
    
  • Start

    docker run -itd --name=kib01 --restart=always -p 5601:5601 -v /opt/elk7/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.8.0
    

Installing Elasticsearch in Docker

Version used: 7.8.0

1. Pull the image

docker pull elasticsearch:7.8.0

2. Start the container

  • Prepare the configuration file

    mkdir -p /home/elk7/es/config

    vim /home/elk7/es/config/elasticsearch.yml

    cluster.name: "docker-cluster"
    node.name: node-1
    network.host: 0.0.0.0
    http.port: 9200
    
    cluster.initial_master_nodes: ["node-1"]
    
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    http.cors.allow-headers: Authorization
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    

    mkdir -p /home/elk7/es/data

    chmod -R 777 /home/elk7/es/data

  • Start

    docker run --name=es01 -itd -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -v /home/elk7/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/elk7/es/data/:/usr/share/elasticsearch/data/ -v /home/elk7/es/plugins:/usr/share/elasticsearch/plugins  -p 9200:9200 -p 9300:9300 elasticsearch:7.8.0
    

Possible errors

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Fix 1:

Run:

sysctl -w vm.max_map_count=262144

Verify:

sysctl -a|grep vm.max_map_count

Output:

vm.max_map_count = 262144

The change above is lost when the machine reboots, so:

Permanent fix:

Append this line to the end of /etc/sysctl.conf:

vm.max_map_count=262144

which makes the change permanent.
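To keep repeated runs from appending duplicate lines, the edit can be made idempotent; a sketch against a scratch file (the real target would be /etc/sysctl.conf):

```shell
# Append the setting only if it is not already present
conf=$(mktemp)   # scratch stand-in for /etc/sysctl.conf
grep -q '^vm.max_map_count=' "$conf" || echo 'vm.max_map_count=262144' >> "$conf"
# Running it again is a no-op
grep -q '^vm.max_map_count=' "$conf" || echo 'vm.max_map_count=262144' >> "$conf"
grep -c '^vm.max_map_count=' "$conf"   # prints 1
```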

Fix 2:

Reference: Docker 部署 elasticsearch (ES开启了密码认证) - evescn - 博客园 (cnblogs.com)

Initializing the built-in users' passwords

# docker exec -it es01 bash
# elasticsearch-setup-passwords interactive

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana_system]: 
Reenter password for [kibana_system]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

Installing Logstash in Docker

Version used: 7.8.0

1. Pull the image

docker pull logstash:7.8.0

2. Start the container

  1. Configuration files

    mkdir -p /opt/docker/logstash
    chmod 777 -R /opt/docker/logstash
    docker run -d --name logstash logstash:7.8.0
    docker cp logstash:/usr/share/logstash/config ./config/
    cd config/
    

    Edit logstash.yml (vim logstash.yml):

    pipeline.id: logstash-0
    path.config: /usr/share/logstash/config/conf.d/*.conf
    path.logs: /usr/share/logstash/logs
    

    Create a conf.d folder under /opt/docker/logstash/config:

    mkdir conf.d && cd conf.d
    

    Create or edit logstash.conf (vim logstash.conf):

    input {
        # Read log entries from these files and ship them to the outputs below
        file {
            path => "/logs/system/info/*.log"
            type => "system-info"
            start_position => "beginning"
        }
        file {
            path => "/logs/system/error/*.log"
            type => "system-error"
            start_position => "beginning"
        }
    
    }
    
    filter {
            json {
                source => "message"
            }
     }
    
    output {
        # Standard output
        # stdout {}
        # Pretty-print events with the Ruby rubydebug codec
    if [type] == "system-info" {
                     stdout { codec => rubydebug }
                     elasticsearch {
                            hosts => ["127.0.0.1:9200"]
                            index => "system-info-%{+YYYY.MM.dd}"
                            user => "logstash"
                            password => "password"
    
                    }
        }
    if [type] == "system-error" {
                stdout { codec => rubydebug }
                elasticsearch {
                            hosts => ["127.0.0.1:9200"]
                            index => "system-error-%{+YYYY.MM.dd}"
                            user => "logstash"
                            password => "password"
                 }
        }
    
    }
    

    Grant access to the log output directory (directories need the execute bit to be traversable, hence 755 rather than 644):

    chmod -R 755 /logs/system/
    
  2. Start

    docker run -d \
    --name=logstash \
    --restart=always \
    -p 5043:5044 \
    -v /opt/docker/logstash/config:/usr/share/logstash/config \
    -v /opt/docker/logstash/logs:/usr/share/logstash/logs \
    -v /logs/system:/logs/system \
    logstash:7.8.0
    

Possible errors

Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"http://127.0.0.1:9200/_bulk"}

This can be resolved by creating a new logstash user in Kibana.

CI script to deploy and run a Spring Boot jar in Docker

#!/bin/bash
# Docker service name
ServiceName=$1
# Jar file name
JarName=$2
# Environment: one of test, yun, prod
Env=$3
# Working directory; defaults to /opt/ when not passed
Dir=$4
# Log directory; defaults to /var/logs
LogDir=$5
# $6 is the path of the freshly uploaded jar
SourceJar="$6"
# Does a file exist?
fileExists() {
  if [ ! -f "$1" ]; then
    echo "file $1 does not exist"
    exit
  fi
}

# Does a directory exist?
dirExists() {
  if [ ! -d "$1" ]; then
    echo "directory $1 does not exist"
    exit
  fi
}

# The service name is required
if [ -z "$ServiceName" ]; then
  echo "service name must not be empty: $0 [ServiceName] [JarName] [Env] [Dir]"
  exit
fi

# The jar file name is required
if [ -z "$JarName" ]; then
  echo "jar file name must not be empty: $0 [ServiceName] [JarName] [Env] [Dir]"
  exit
fi

# The environment is required
if [ -z "$Env" ]; then
  echo "environment must not be empty: $0 [ServiceName] [JarName] [Env] [Dir]"
  exit
fi

echo "[ServiceName:$ServiceName] [JarName:$JarName] [Env:$Env] [Dir:$Dir] [LogDir:$LogDir]"

#if [ "$Env" != "test" ] && [ "$Env" != "yun" ] && [ "$Env" != "prod" ]; then
#  echo "invalid environment: $0; must be one of test, yun, prod"
#  exit
#fi
# -z: Dir was not passed, so default to Dir=/opt/[ServiceName]
if [ -z "$Dir" ]; then
  Dir="/opt/$ServiceName"
  # Jar file staged for release
  SourceJar="$Dir/jar/$JarName"
  dirExists "$Dir"
  fileExists "$SourceJar"
else
  dirExists "$Dir"
  if [ -z "$SourceJar" ]; then
    SourceJar="$Dir/jar/$JarName"
    fileExists "$SourceJar"
  else
    echo "using provided jar $SourceJar"
    fileExists "$SourceJar"
  fi
fi
# Previous jar
OldJar="$Dir/$JarName"

echo "new jar path: $SourceJar"
echo "old jar path: $OldJar"
# Change into the project working directory
cd "$Dir" || exit
[ -f "$OldJar" ] && mv "$OldJar" "$OldJar.$(date +%Y%m%d%H%M%S)"
echo "====jar backup done===="
mv "$SourceJar" "$Dir/"
echo "===jar swap done==="

#Contains=$(docker ps -a | grep "$ServiceName" | awk '{print $12}')
Contains=$(docker ps --format "{{.Names}}" | grep -w "$ServiceName")
#LogOpt=" --log-opt mode=non-blocking --log-opt max-buffer-size=4m "
#LogOpt=" "
#Mounts=" -v /var/logs/$ServiceName/:/var/logs/$ServiceName/ -v /opt/$ServiceName/:/opt/$ServiceName/ "
JAVA_OPTS="-Xms1024m -Xmx1024m"
FINAL_ENV="$Env"

if [ -z "$Env" ]; then
  FINAL_ENV="test"
fi
# Resolve the log directory: default /var/logs, overridden only when LogDir was passed
LogDirFinal="/var/logs"
if [ -n "$LogDir" ]; then
    LogDirFinal="$LogDir"
fi

LogDirFinal="$LogDirFinal/$ServiceName"
echo "matched container: $Contains, final environment: $FINAL_ENV"
ENV="-Duser.timezone=GMT+08 -Dspring.profiles.active=$FINAL_ENV"
# shellcheck disable=SC2086
if [ -z "$Contains" ] || [ "$Contains" != "$ServiceName" ]; then
  docker run -d --name ${ServiceName} --net=host \
    -v ${LogDirFinal}:${LogDirFinal} -v ${Dir}:${Dir} \
    d2 java ${JAVA_OPTS} -jar ${ENV} ${OldJar}
else
  docker restart "$ServiceName"
  echo "===service ${ServiceName} restarted==="
fi
echo "service ${ServiceName} deployed successfully"
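The log-directory defaulting near the end of the script can be exercised in isolation; a sketch of the same pattern as a function (names are illustrative):

```shell
# Resolve the final log dir: use $2 if given, otherwise a default,
# then append the service name -- the same pattern the script uses
resolve_log_dir() {
  local service="$1" log_dir="$2" final="/var/logs"
  [ -n "$log_dir" ] && final="$log_dir"
  echo "$final/$service"
}

resolve_log_dir demo-svc             # /var/logs/demo-svc
resolve_log_dir demo-svc /data/logs  # /data/logs/demo-svc
```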

Logback console output pattern

    <!-- 控制台输出 -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <!-- Pattern reference:
                    %d: date
                    %magenta: magenta
                    %thread: thread name
                    %highlight: color by level (info blue, warn light red, error bold red, debug black)
                    %cyan: cyan
                    %-5level: log level, left-aligned in a 5-character field
                    %msg: the log message; %n is a newline
             -->
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %magenta([%-15.15(%thread)]) %highlight(%-5level) %cyan(%-50.50(%logger{50})) - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

Mac

Installing brew

/bin/zsh -c "$(curl -fsSL https://gitee.com/cunkai/HomebrewCN/raw/master/Homebrew.sh)"

Installing Docker with brew

On macOS, Docker can be installed via Homebrew.

Homebrew Cask supports Docker for Mac, so installation is a one-liner:

$ brew install --cask --appdir=/Applications docker

Configuring a registry mirror

Because pulling Docker images from mainland China is very slow, configure a registry mirror. I use NetEase's mirror: http://hub-mirror.c.163.com

In the menu bar, click the Docker for Mac icon -> Preferences... -> Daemon -> Registry mirrors, add the mirror address to the list, then click Apply & Restart; Docker restarts with the new mirror applied.

Afterwards, docker info shows whether the configuration took effect:

$ docker info
...
Registry Mirrors:
 http://hub-mirror.c.163.com
Live Restore Enabled: false

K8S-V1.24

Installing the dashboard

The dashboard.yaml file:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Output of kubectl apply -f dashboard.yaml:

namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
Warning: resource secrets/kubernetes-dashboard-key-holder is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard configured
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged

The dashboard-user.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Output of kubectl apply -f dashboard-user.yaml:

Warning: resource serviceaccounts/admin-user is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/admin-user configured
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Change the service type to NodePort

  kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort
  kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
  
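After the edit, the relevant part of the Service spec should look roughly like this (a sketch; the commented nodePort line is an optional assumption, not part of the original file):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # changed from the default ClusterIP
  ports:
    - port: 443
      targetPort: 8443
      # nodePort: 30443   # optionally pin the node port (30000-32767)
  selector:
    k8s-app: kubernetes-dashboard
```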

Getting a TOKEN

$ kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IllwU0RpN2RmdlJ2eDBYM2g1UUlnQmUyM29UUXQzYXdtemJITkxRN2V1ckkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU0MTM2NzU1LCJpYXQiOjE2NTQxMzMxNTUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMzNkZWMyMWItYWJiYy00MjI0LWI1ZDEtMDA3YTE2ZmIxNzJkIn19LCJuYmYiOjE2NTQxMzMxNTUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.f86rV_WEl7GvYsTTUGaG3PTXKZTQjxToPKZEKMW53Q_rXRPZfNt0rgiIxXzDNHSNwPNIc54VpK92PXfFQUj_-ACCU_T3JycqB4mPsshcsPbrapiHLBt7vWt-_NmDx2HM30pl1WA5U1FVBexh8Se5XQ0-nqQ2B8vkOi_OQ2nD99FicwL7iAdCJehSns3GMGDTdOoEfwfBocrCzUj-xZvlYsJ9RQehi87xHzj8xcuZjk3OlMG9Cvbkvd2QWmKcyWblv-xtc1v6FXTuI5CKU3H9_jN0C6yrtQMx1PPxZwVClKhHrYQHufZpHP9mMWx0gOXQPcGO2m_quaHtGTTj6rEKjQ

Continuously updated

TODO

posted @ 2022-02-18 00:11 ChoxSu