1. Building container images for multiple CPU instruction sets (x86_64, ARM, etc.) in a single build
CPU instruction sets and their typical use cases:
The two dominant CPU design philosophies today are CISC (Complex Instruction Set Computing) and RISC (Reduced Instruction Set Computing). x86 is the flagship CISC architecture and holds more than 95% of the desktop and server market.
ARM and MIPS are both RISC (reduced instruction set) architectures; ARM in particular dominates mobile processors, powering smartphones and wearables (smartwatches, fitness bands, smart glasses).
The RISC-V and MIPS reduced-instruction-set architectures are also expected to see increasingly wide adoption.
x86 architecture:
Intel was founded in 1968 and introduced the x86 architecture in 1978, later licensing it to AMD.
AMD was founded in 1969; in 2003 it released the first 64-bit x86 processor and licensed the 64-bit extensions back to Intel.
x86 is mainly used in enterprise servers and personal/office PCs.
ARM architecture:
Founded in 1990; more than 95% of the world's smartphones and tablets use ARM designs.
Phones: Huawei, Xiaomi, Samsung, Apple
Tablets: Huawei, Xiaomi, Samsung, Apple
Set-top boxes: most TV set-top boxes
Servers: Huawei TaiShan servers with Kunpeng-series ARM CPUs, Alibaba Yitian 710 (arm64 v9)
RISC-V architecture:
UC Berkeley began its RISC research project in 1980; RISC-V was developed in 2010 and released as an open standard.
MIPS architecture:
MIPS is a RISC processor family first developed in the early 1980s by a Stanford University research group led by Professor John L. Hennessy, making it one of the earliest commercial RISC architectures. MIPS Computer Systems was founded in 1984 to commercialize the Stanford MIPS CPU group's research. The commercial MIPS CPU added memory-management hardware and shipped in late 1985 as the R2000, followed by the R3000, R4000, R10000 and other processors.
POWER architecture:
Designed by IBM; POWER-series microprocessors are used in many IBM servers, supercomputers, minicomputers and workstations.
ppc64 # platform identifier for IBM PowerPC and Power Architecture processor applications
ppc64le: # little-endian variant promoted by the OpenPOWER Foundation, which eases porting x86 Linux software to the ppc64 platform
s390x architecture:
s390x # IBM System z mainframes, used by banks, large enterprises and research institutions

Install the Docker environment:
root@ubuntu:~# sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
root@ubuntu:~# sudo mkdir -m 0755 -p /etc/apt/keyrings
root@ubuntu:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
root@ubuntu:~# echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg]
https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
root@ubuntu:~# sudo apt-get update
root@ubuntu:~# apt-cache madison docker-ce docker-ce-cli
root@ubuntu:~# apt install docker-ce=5:20.10.23~3-0~ubuntu-focal docker-ce-cli=5:20.10.23~3-0~ubuntu-focal
root@ubuntu:~# docker version
Client: Docker Engine - Community
Version: 20.10.23
qemu-user-static: a user-mode emulator that lets the current OS run binaries built for other architectures; with it, an x86 machine can build Docker images for other architectures.
binfmt-support: registers interpreters for foreign binary formats with the kernel, so such binaries are executed transparently and the results are returned to the calling user-space process.

~# apt install -y qemu-user-static binfmt-support
Configure the multi-platform CPU instruction-set emulators:
~# docker run --rm --privileged multiarch/qemu-user-static:register
~# ls /usr/bin/qemu-aarch64-static
/usr/bin/qemu-aarch64-static
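As an optional sanity check, the emulators registered with the kernel can be listed through the standard binfmt_misc interface (the qemu-aarch64 entry name below assumes the registration names used by the multiarch image):
~# ls /proc/sys/fs/binfmt_misc/
~# cat /proc/sys/fs/binfmt_misc/qemu-aarch64   # should report "enabled" and the interpreter /usr/bin/qemu-aarch64-static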
~# docker run --rm -it arm64v8/ubuntu:20.04 uname -a
Linux 896cafbef5a7 5.4.0-92-generic #103-Ubuntu SMP Fri May 26 16:13:00 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
~# docker run --rm -it riscv64/ubuntu:22.04 uname -a
Linux d119768a6f9c 5.4.0-92-generic #103-Ubuntu SMP Fri May 26 16:13:00 UTC 2023
Initialize buildx to use BuildKit features (BuildKit is the next-generation docker build backend developed by Docker Inc.; it has shipped with Docker CE since 18.06.0, released in July 2018):
~# docker buildx create --name mybuilder
~# docker buildx use mybuilder
~# docker buildx inspect --bootstrap

Verify the build environment:
~# docker buildx inspect

Build overview:
# Build the image and push it to the registry; x86_64, arm and other variants can be built at the same time
# nerdctl build --platform=arm64 -t registry.cn-qingdao.aliyuncs.com/zhangshijie/nginx:v1 . # build an arm64-only image
# docker buildx build -t ${TAG} --platform linux/amd64,linux/arm64 . --push
# Build an arm64 image and keep it locally; --load can only import a single-platform image, e.g. arm64 alone or x86_64 alone, never both at once
# docker buildx build -t ${TAG} --platform linux/arm64 . --load
Run the build:
root@ubuntu:~/ubuntu-dockerfile-case# ls
Dockerfile build-command.sh frontend.tar.gz html nginx-1.22.0.tar.gz nginx.conf sources.list
root@ubuntu:~/ubuntu-dockerfile-case# bash build-command.sh
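The contents of build-command.sh are not shown here; a minimal sketch, assuming it simply wraps the buildx command introduced above and that TAG matches the image verified later in this section:
#!/bin/bash
# hypothetical sketch of build-command.sh (the real script is not shown in the text)
TAG=registry.cn-hangzhou.aliyuncs.com/base/nginx:v1.22.0_multi-platform
# build amd64 and arm64 variants in one pass and push the manifest list to the registry
docker buildx build -t ${TAG} --platform linux/amd64,linux/arm64 . --push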
Build process:

Verify the image in the registry:

Prepare an ARM host:
[root@20230215-instance ~]# uname -a
Linux 20230215-instance 5.10.0-5.10.0.24.oe1.aarch64 #1 SMP Wed May 29 20:01:37 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
Install docker:
https://mirrors.tuna.tsinghua.edu.cn/help/docker-ce/
Verify:
# docker pull registry.cn-hangzhou.aliyuncs.com/base/nginx:v1.22.0_multi-platform
# docker images
# docker inspect registry.cn-hangzhou.aliyuncs.com/base/nginx:v1.22.0_multi-platform
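The manifest list can also be inspected straight from the registry to confirm that both architectures are present (docker manifest is available in recent Docker CLI releases; each entry in manifests[] carries a platform.architecture field such as amd64 or arm64):
# docker manifest inspect registry.cn-hangzhou.aliyuncs.com/base/nginx:v1.22.0_multi-platform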
If the CPU architecture does not match:
# docker run -it registry.cn-hangzhou.aliyuncs.com/zhangshijie/dubbo-provider:2022-12-22_07-38-47
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
exec /apps/dubbo/provider/bin/run_java.sh: exec format error
2. Summary of log collection solutions for Kubernetes environments
2.1 Introduction to log collection
Goals of log collection:
Collect logs from distributed systems in one place for centralized querying and management
Troubleshooting
Security information and event management
Reporting, statistics and dashboards
Value of log collection:
Log search, problem diagnosis, fault recovery, self-healing
Application log analysis and error alerting
Performance analysis and user-behavior analysis
Log collection pipeline:

2.2 Log collection approaches:
https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
1. Node-level collection: deploy a log-collection agent as a DaemonSet to collect json-file logs, i.e. whatever the application writes to standard output (/dev/stdout) and standard error (/dev/stderr). Containers write their logs to stdout/stderr, and the runtime's logging driver must be set to the json-file type in advance so these streams land as files on the host. The host's container log directory is then mounted into logstash, which filters and forwards the logs.
Pros: completely non-intrusive for applications and Pods; only one agent needs to be deployed per node
Cons: requires applications to write their logs to the container's stdout and stderr
2. Sidecar collection: run a sidecar container (multiple containers in one Pod) that collects the logs of one or more business containers in the same Pod, usually sharing the log directory via an emptyDir volume. Container filesystems are isolated from each other, so the business container mounts its log path on the emptyDir and the sidecar collects logs from that same emptyDir path.
Pros: simple to deploy, friendly to the host
Cons: the sidecar may consume significant resources and can even drag down the application container; logs cannot be viewed with kubectl logs
3. Built-in collection: run a log-collection agent inside the application container itself.
Pros: existing log-collection tooling can continue to be used unchanged
Cons: doubles disk usage, which is wasteful (the application and the log agent write two copies of the same log files)

From the comparison above:
1. The native approach is relatively weak and is generally not recommended for production systems; otherwise troubleshooting and statistics work becomes very difficult.
2. The DaemonSet approach runs only one log agent per node, so its resource footprint is much smaller, but extensibility and tenant isolation are limited; it suits clusters with a single purpose or few workloads.
3. The Sidecar approach deploys a dedicated log agent per Pod, which costs more resources but offers better flexibility and multi-tenant isolation; it is recommended for large Kubernetes clusters, or clusters that serve multiple teams as a PaaS platform.
3. Deploy an ES cluster and a Kafka environment for log collection
elasticsearch download: https://www.elastic.co/cn/downloads/elasticsearch
kafka download: https://shackles.cn/Software/kafka_2.13-2.6.2.tgz
zookeeper download: https://shackles.cn/Software/apache-zookeeper-3.7.1-bin.tar.gz
3.1 Deploy the ES cluster
root@elasticsearch:~# dpkg -i elasticsearch-7.12.1-amd64.deb
#192.168.121.123
root@elasticsearch:~# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: my-es
node.name: node1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.121.123
discovery.seed_hosts: ["192.168.121.123", "192.168.121.124","192.168.121.125"] # host discovery; nodes announce themselves to each other
cluster.initial_master_nodes: ["192.168.121.123", "192.168.121.124","192.168.121.125"] # nodes eligible for the initial master election
action.destructive_requires_name: true # disallow wildcard matches when deleting indices
#192.168.121.124
root@elasticsearch:~# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: my-es
node.name: node2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.121.124
http.port: 9200
discovery.seed_hosts: ["192.168.121.123", "192.168.121.124","192.168.121.125"] # host discovery; nodes announce themselves to each other
cluster.initial_master_nodes: ["192.168.121.123", "192.168.121.124","192.168.121.125"] # nodes eligible for the initial master election
action.destructive_requires_name: true # disallow wildcard matches when deleting indices
#192.168.121.125
root@elasticsearch:~# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: my-es
node.name: node3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.121.125
discovery.seed_hosts: ["192.168.121.123", "192.168.121.124","192.168.121.125"] # host discovery; nodes announce themselves to each other
cluster.initial_master_nodes: ["192.168.121.123", "192.168.121.124","192.168.121.125"] # nodes eligible for the initial master election
action.destructive_requires_name: true # disallow wildcard matches when deleting indices
Note: /var/log/elasticsearch and /var/lib/elasticsearch must be owned by user and group elasticsearch.
root@elasticsearch:~# systemctl restart elasticsearch.service
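A quick way to confirm that the three nodes have formed a cluster is the _cat/nodes and _cluster/health APIs, run against any node:
root@elasticsearch:~# curl http://192.168.121.123:9200/_cat/nodes?v
root@elasticsearch:~# curl http://192.168.121.123:9200/_cluster/health?pretty   # status should be green once all three nodes have joined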
3.2 Deploy the zookeeper cluster
On all three nodes:
root@kafka:~# apt -y install openjdk-8-jdk
root@kafka:~# mkdir /apps
root@kafka:~# mv /usr/local/src/apache-zookeeper-3.7.1-bin.tar.gz /apps
root@kafka:~# mv /usr/local/src/kafka_2.13-2.6.2.tgz /apps
root@kafka:/apps# tar -xvf apache-zookeeper-3.7.1-bin.tar.gz
root@kafka:/apps# ln -sv /apps/apache-zookeeper-3.7.1-bin /apps/zookeeper
root@kafka:/apps# cd /apps/zookeeper/conf
root@kafka:/apps/zookeeper/conf# cp zoo_sample.cfg zoo.cfg
root@kafka:/apps/zookeeper/conf# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.121.126:2888:3888
server.2=192.168.121.127:2888:3888
server.3=192.168.121.128:2888:3888
tickTime: the basic time unit used by Zookeeper, in milliseconds; set to 2000 ms here.
initLimit: the maximum time, in ticks, that followers are allowed to take when connecting and syncing to the leader at startup; set to 10 here.
syncLimit: the maximum time, in ticks, that a follower may fall out of sync with the leader before being dropped; set to 5 here.
dataDir: the directory where Zookeeper stores its data; set to /data/zookeeper here.
clientPort: the port Zookeeper clients use to connect to the server; set to 2181 here.
root@kafka:/apps/zookeeper/conf# mkdir -p /data/zookeeper
#192.168.121.126
root@kafka:/apps/zookeeper/conf# echo 1 > /data/zookeeper/myid
#192.168.121.127
root@kafka:/apps/zookeeper/conf# echo 2 > /data/zookeeper/myid
#192.168.121.128
root@kafka:/apps/zookeeper/conf# echo 3 > /data/zookeeper/myid
root@kafka:/apps# /apps/zookeeper/bin/zkServer.sh start
root@kafka:/apps# /apps/zookeeper/bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.Client SSL: false.
Mode: follower
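Optionally, client-port connectivity can be verified with the bundled zkCli.sh; listing the root path should return at least the built-in /zookeeper node:
root@kafka:/apps# /apps/zookeeper/bin/zkCli.sh -server 192.168.121.126:2181 ls /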
3.3 Deploy the kafka cluster
root@kafka:/apps# tar -xvf kafka_2.13-2.6.2.tgz
root@kafka:/apps# ln -sv /apps/kafka_2.13-2.6.2 /apps/kafka
#192.168.121.126
root@kafka:/apps# cat /apps/kafka/config/server.properties
broker.id=126
listeners=PLAINTEXT://192.168.121.126:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.121.126:2181,192.168.121.127:2181,192.168.121.128:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
#192.168.121.127
root@kafka:/apps# cat /apps/kafka/config/server.properties
broker.id=127
listeners=PLAINTEXT://192.168.121.127:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.121.126:2181,192.168.121.127:2181,192.168.121.128:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
#192.168.121.128
root@kafka:/apps# cat /apps/kafka/config/server.properties
broker.id=128
listeners=PLAINTEXT://192.168.121.128:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.121.126:2181,192.168.121.127:2181,192.168.121.128:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
broker.id: the unique identifier of this Kafka broker.
listeners: the address and port this Kafka broker listens on.
num.network.threads: the number of threads the broker uses for network I/O; set to 3 here.
num.io.threads: the number of threads the broker uses for disk I/O; set to 8 here.
socket.send.buffer.bytes: the socket send buffer size used for network transfers; set to 102400 here.
socket.receive.buffer.bytes: the socket receive buffer size used for network transfers; set to 102400 here.
socket.request.max.bytes: the maximum size of any single request the broker will send or accept; set to 104857600 bytes here.
log.dirs: the directory where the broker stores its log files; set to /data/kafka-logs here.
num.partitions: the default number of partitions for newly created topics; set to 1 here.
num.recovery.threads.per.data.dir: the number of log-recovery threads per data directory; set to 1 here.
offsets.topic.replication.factor: the replication factor of the internal topic that stores consumer-group and offset information; set to 1 here.
transaction.state.log.replication.factor: the replication factor of the internal transaction-state topic; set to 1 here.
transaction.state.log.min.isr: the minimum number of in-sync replicas that must acknowledge a transaction-state write; set to 1 here.
log.retention.hours: how long log files are kept before automatic deletion; set to 168 hours (7 days) here.
log.segment.bytes: the maximum size of a log segment file; when a file reaches this size a new segment is created; set to 1073741824 bytes (1 GB) here.
log.retention.check.interval.ms: the interval between runs of the old-log deletion task; set to 300000 ms (5 minutes) here.
zookeeper.connect: the Zookeeper connection string (all brokers register there); set here to the comma-separated addresses of the Zookeeper ensemble.
zookeeper.connection.timeout.ms: the maximum time allowed for connecting to the Zookeeper servers; set to 18000 ms (18 s) here.
group.initial.rebalance.delay.ms: the delay before rebalancing when new consumers join a consumer group; 0 means rebalance immediately.
root@kafka:/apps# /apps/kafka/bin/kafka-server-start.sh -daemon /apps/kafka/config/server.properties
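To confirm the brokers have formed a cluster, a throwaway topic can be created and described; test-topic below is an arbitrary name. With replication factor 3, the replica list in the output should span all three broker ids (126, 127, 128):
root@kafka:/apps# /apps/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.121.126:9092 --create --topic test-topic --partitions 3 --replication-factor 3
root@kafka:/apps# /apps/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.121.126:9092 --describe --topic test-topic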
4. Implement DaemonSet-based log collection
Running the log collection service as a DaemonSet mainly collects the following log types:
1. Node-level json-file logs (standard output /dev/stdout and standard error /dev/stderr), i.e. whatever the application writes to stdout and stderr.
2. Host system logs and other logs stored as files on the node.

The producer logstash collects logs and pushes them to the Kafka cluster.
# Build the logstash image
root@k8s-master1:~# cat Dockerfile
FROM logstash:7.12.1
USER root
WORKDIR /usr/share/logstash
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
root@k8s-master1:~# cat logstash.yml
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
# Log collection pipeline
root@k8s-master1:~# cat logstash.conf
input {
  file {
    #path => "/var/lib/docker/containers/*/*-json.log" # docker
    path => "/var/log/pods/*/*/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-applog"
  }

  file {
    path => "/var/log/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-syslog"
  }
}

output {
  if [type] == "jsonfile-daemonset-applog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384 # amount of data logstash sends per batch, in bytes
      codec => "${CODEC}"
    }
  }

  if [type] == "jsonfile-daemonset-syslog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}" # note: system logs are not JSON-formatted
    }
  }
}
root@k8s-master1:~# cat build-commond.sh
#!/bin/bash
nerdctl build -t harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-json-file-log-v1 .
nerdctl push harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-json-file-log-v1
Deploy the producer logstash on Kubernetes
root@k8s-master1:~# cat DaemonSet-logstash.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-json-file-log-v1
        env:
        - name: "KAFKA_SERVER"
          value: "192.168.121.126:9092,192.168.121.127:9092,192.168.121.128:9092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
        #resources:
        #  limits:
        #    cpu: 1000m
        #    memory: 1024Mi
        #  requests:
        #    cpu: 500m
        #    memory: 1024Mi
        volumeMounts:
        - name: varlog # host system log volume
          mountPath: /var/log # host system log mount point
        - name: varlibdockercontainers # container log volume; must match the collection path in the logstash config
          #mountPath: /var/lib/docker/containers # docker mount path
          mountPath: /var/log/pods # containerd mount path; must match logstash's collection path
          readOnly: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log # host system logs
      - name: varlibdockercontainers
        hostPath:
          #path: /var/lib/docker/containers # docker host log path
          path: /var/log/pods # containerd host log path
root@k8s-master1:~# kubectl apply -f DaemonSet-logstash.yaml
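Once the DaemonSet is applied, one logstash Pod should be running per node, and messages should start arriving on the Kafka topic; both can be checked as below (the console consumer is run on any Kafka node):
root@k8s-master1:~# kubectl -n kube-system get pods -l name=logstash-elasticsearch -o wide
root@kafka:/apps# /apps/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.121.126:9092 --topic jsonfile-log-topic --from-beginning --max-messages 5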
Deploy the consumer logstash on a traditional (non-Kubernetes) host
root@logstash:~# dpkg -i logstash-7.12.1-amd64.deb
root@logstash:~# cat log-to-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.121.126:9092,192.168.121.127:9092,192.168.121.128:9092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}

output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["192.168.121.123:9200","192.168.121.124:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["192.168.121.123:9200","192.168.121.124:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
    }
  }
}
root@logstash:~# systemctl restart logstash
root@logstash:~# systemctl status logstash
root@logstash:~# tail -f /var/log/logstash/logstash-plain.log
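If the consumer is working, daily indices should start appearing in Elasticsearch; a quick check via the _cat API:
root@logstash:~# curl 'http://192.168.121.123:9200/_cat/indices?v' | grep jsonfile-daemonset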
Deploy Kibana
root@kibana:~# dpkg -i kibana-7.12.1-amd64.deb
root@kibana:~# cat /etc/kibana/kibana.yml
server.host: "192.168.121.129"
elasticsearch.hosts: ["http://192.168.121.123:9200"]
i18n.locale: "zh-CN"
root@kibana:~# systemctl restart kibana
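Kibana listens on port 5601 by default (server.port is not overridden above); a quick reachability check:
root@kibana:~# curl -I http://192.168.121.129:5601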
Verification:

5. Implement sidecar-based log collection

Build the sidecar image
# Build the logstash image
root@k8s-master1:~# cat Dockerfile
FROM logstash:7.12.1
USER root
WORKDIR /usr/share/logstash
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
root@k8s-master1:~# cat logstash.conf
input {
  file {
    path => "/var/log/applog/catalina.out"
    start_position => "beginning"
    type => "app1-sidecar-catalina-log"
  }

  file {
    path => "/var/log/applog/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "app1-sidecar-access-log"
  }
}

output {
  if [type] == "app1-sidecar-catalina-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384 # amount of data logstash sends per batch, in bytes
      codec => "${CODEC}"
    }
  }

  if [type] == "app1-sidecar-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
    }
  }
}
root@k8s-master1:~# cat logstash.yml
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
# Build and push the image
root@k8s-master1:~# cat build-commond.sh
#!/bin/bash
nerdctl build -t harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar .
nerdctl push harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar
Deploy the web service
root@k8s-master1:~# cat tomcat-app1.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment # name of this deployment
  namespace: magedu
spec:
  replicas: 2
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-container
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: applogs
          mountPath: /apps/tomcat/logs
        startupProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5 # delay before the first probe: 5s
          failureThreshold: 3 # consecutive failures before the probe is marked failed
          periodSeconds: 3 # probe interval
        readinessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      - name: sidecar-container
        image: harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        env:
        - name: "KAFKA_SERVER"
          value: "192.168.121.126:9092,192.168.121.127:9092,192.168.121.128:9092"
        - name: "TOPIC_ID"
          value: "tomcat-app1-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: applogs
          mountPath: /var/log/applog
      volumes:
      - name: applogs # emptyDir shared between the business container and the sidecar, so the sidecar can collect the business container's logs
        emptyDir: {}
root@k8s-master1:~# kubectl apply -f tomcat-app1.yaml
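After the Deployment is applied, the sidecar's activity can be checked with kubectl (kubectl logs works here because the sidecar's own output still goes to stdout; the collected business logs themselves bypass it):
root@k8s-master1:~# kubectl -n magedu get pods
root@k8s-master1:~# kubectl -n magedu logs deployment/magedu-tomcat-app1-deployment -c sidecar-container --tail=20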
root@k8s-master1:~# cat tomcat-service.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-service-label
  name: magedu-tomcat-app1-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40080
  selector:
    app: magedu-tomcat-app1-selector
root@k8s-master1:~# kubectl apply -f tomcat-service.yaml
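The service can then be reached on any node's NodePort; <node-ip> below is a placeholder for one of the cluster nodes:
# curl http://<node-ip>:40080/myapp/index.html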
Deploy the consumer logstash on a traditional host
root@logstash:~# dpkg -i logstash-7.12.1-amd64.deb
root@logstash:~# cat log-to-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.121.126:9092,192.168.121.127:9092,192.168.121.128:9092"
    topics => ["tomcat-app1-topic"]
    codec => "json"
  }
}

output {
  if [type] == "app1-sidecar-access-log" {
    elasticsearch {
      hosts => ["192.168.121.123:9200","192.168.121.124:9200"]
      index => "sidecar-app1-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "app1-sidecar-catalina-log" {
    elasticsearch {
      hosts => ["192.168.121.123:9200","192.168.121.124:9200"]
      index => "sidecar-app1-catalinalog-%{+YYYY.MM.dd}"
    }
  }
}
root@logstash:~# systemctl restart logstash
root@logstash:~# systemctl status logstash
root@logstash:~# tail -f /var/log/logstash/logstash-plain.log
Verification:

6. Implement log collection via an in-container log agent

Build the business image
root@k8s-master1:~# cat Dockerfile
FROM harbor.linuxarchitect.io/pub-images/tomcat-base:v8.5.43
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml
ADD myapp.tar.gz /data/tomcat/webapps/myapp/
RUN chown -R tomcat.tomcat /data/ /apps/
EXPOSE 8080 8443
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
root@k8s-master1:~# chmod +x ./*
root@k8s-master1:~# ll
total 24
drwxr-xr-x 3 root root 210 Aug 16 08:24 ./
drwxr-xr-x 5 root root 113 Aug 15 08:43 ../
drwxr-xr-x 3 root root 211 Aug 16 08:24 Dockerfile
-rwxr-xr-x 1 root root 633 May 24 01:20 build-command.sh
-rwxr-xr-x 1 root root 967 Aug 16 08:24 catalina.sh
-rwxr-xr-x 1 root root 345 May 24 01:04 server.xml
-rwxr-xr-x 1 root root 569 May 24 02:02 filebeat.yml
-rwxr-xr-x 1 root root 6148 May 29 14:53 myapp.tar.gz
-rwxr-xr-x 1 root root 3128 May 29 14:53 run_tomcat.sh
root@k8s-master1:~# cat run_tomcat.sh
#!/bin/bash
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
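# tail keeps PID 1 in the foreground so the container does not exit after Tomcat daemonizes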
tail -f /etc/hosts
root@k8s-master1:~# cat filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt
  fields:
    type: filebeat-tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.kafka:
  hosts: ["192.168.121.126:9092","192.168.121.127:9092","192.168.121.128:9092"]
  required_acks: 1
  topic: "filebeat-magedu-app1"
  compression: gzip
  max_message_bytes: 1000000
#output.redis:
#  hosts: ["192.168.121.129:6379"]
#  key: "k8s-magedu-app1"
#  db: 1
#  timeout: 5
#  password: "123456"
root@k8s-master1:~# cat build-command.sh
#!/bin/bash
TAG=$1
nerdctl build -t harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG} .
nerdctl push harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG}
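Usage: the script takes the image tag as its first argument; v2 matches the image referenced by the Deployment below:
root@k8s-master1:~# bash build-command.sh v2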
Deploy the web service
root@k8s-master1:~# cat tomcat-app1.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-deployment-label
  name: magedu-tomcat-app1-filebeat-deployment
  namespace: magedu
spec:
  replicas: 3
  selector:
    matchLabels:
      app: magedu-tomcat-app1-filebeat-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-filebeat-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-filebeat-container
        image: harbor.linuxarchitect.io/magedu/tomcat-app1:v2
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
root@k8s-master1:~# kubectl apply -f tomcat-app1.yaml
root@k8s-master1:~# cat tomcat-service.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-service-label
  name: magedu-tomcat-app1-filebeat-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30092
  selector:
    app: magedu-tomcat-app1-filebeat-selector
root@k8s-master1:~# kubectl apply -f tomcat-service.yaml
Deploy the consumer logstash on a traditional host
root@k8s-master1:~# cat logstash-filebeat-process-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.121.126:9092,192.168.121.127:9092,192.168.121.128:9092"
    topics => ["filebeat-magedu-app1"]
    codec => "json"
  }
}

output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["192.168.121.123:9200","192.168.121.124:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
    }
  }

  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["192.168.121.123:9200","192.168.121.124:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
    }
  }
}
root@logstash:~# systemctl restart logstash
root@logstash:~# systemctl status logstash
root@logstash:~# tail -f /var/log/logstash/logstash-plain.log
Verification:
