1. A brief overview of how flannel works and how it is implemented
2. Deploy Nginx + Tomcat + NFS on k8s for a web site with dynamic and static content separated

#===============================================================================

1. A brief overview of how flannel works and how it is implemented

1.1 Principle

#Each node is allocated its own subnet out of the pod network, with a /24 mask
# cat /run/flannel/subnet.env
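subnet.env records the node's allocation as plain KEY=VALUE pairs. A sketch with hypothetical values (the file below is a throwaway copy under /tmp; the real values are written by flanneld on each node):

```shell
# Hypothetical subnet.env for illustration; real values come from flanneld.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=172.31.0.0/16
FLANNEL_SUBNET=172.31.2.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
# The file is a sourceable shell fragment:
. /tmp/subnet.env
echo "pod network: $FLANNEL_NETWORK, this node's subnet: $FLANNEL_SUBNET"
```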

#flannel network models (backends):
 UDP        #encapsulates packets in UDP for cross-host delivery; not used in production
 VXLAN      #essentially a tunnel protocol: implements a virtual layer-2 network on top of a layer-3 network
              VXLAN_Directrouting: enables direct routing between nodes on the same layer-2 network, similar to host-gw mode
 Host-gw    #requires all nodes to be on the same LAN; forwards packets via routes created on each node toward the target node

#Components:
 cni0         #bridge device; attaches to each pod's eth0 and takes the first usable address of the node's pod subnet
 flannel.1    #overlay device for VXLAN encapsulation/decapsulation; pods on different nodes communicate through flannel.1 tunnels

1.2 How each network model is implemented

1)VXLAN
#Edit the config file
# grep ^[^#] /etc/ansible/roles/flannel/defaults/main.yml
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
flanneld_image: "easzlab/flannel:v0.11.0-amd64"
flannel_offline: "flannel_v0.11.0-amd64.tar"
#Deploy
# ansible-playbook /etc/ansible/06.network.yml
#Reboot the node(s)
# reboot

#Create pods for a network test (legacy syntax: --replicas was later removed from kubectl run)
# kubectl run net-test1 --image=alpine --replicas 2 sleep 360000

#Node routing table
# ip route
......    
172.31.1.0/24 via 172.31.1.0 dev flannel.1 onlink     #to another node's pod subnet: next hop is the remote flannel.1 IP, egress is the local flannel.1
172.31.2.0/24 via 172.31.2.0 dev flannel.1 onlink
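Each /24 entry steers a whole remote pod subnet into the tunnel. A toy lookup over a snapshot of the table above shows how a destination pod IP selects its route (pure shell, illustrative only; /tmp path is mine):

```shell
# Snapshot of the two flannel.1 routes above (illustrative data only).
routes='172.31.1.0/24 via 172.31.1.0 dev flannel.1 onlink
172.31.2.0/24 via 172.31.2.0 dev flannel.1 onlink'
dst=172.31.2.3
# For /24 routes, matching reduces to comparing the first three octets.
prefix=$(echo "$dst" | cut -d. -f1-3)
echo "$routes" | grep "^${prefix}\.0/24" > /tmp/route_match.txt
cat /tmp/route_match.txt
```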

#Enter a container
# kubectl exec -it net-test1-5fcc69db59-9hcnr sh
#Trace the path to a pod on another node
/ # traceroute 172.31.2.3
traceroute to 172.31.2.3 (172.31.2.3), 30 hops max, 46 byte packets
 1  172.31.6.1 (172.31.6.1)  0.012 ms  0.010 ms  0.011 ms        #hop 1: the local cni0
 2  172.31.2.0 (172.31.2.0)  2.222 ms  50.109 ms  31.306 ms      #hop 2: the remote flannel.1 IP
 3  172.31.2.3 (172.31.2.3)  8.207 ms  1.898 ms  8.372 ms        #hop 3: the remote pod
 
#===============================================================
2)VXLAN_Directrouting
#Edit the config file
# grep ^[^#] /etc/ansible/roles/flannel/defaults/main.yml
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: true
flanneld_image: "easzlab/flannel:v0.11.0-amd64"
flannel_offline: "flannel_v0.11.0-amd64.tar"
#Deploy
# ansible-playbook /etc/ansible/06.network.yml
#Reboot the node(s)
# reboot

#Create pods for a network test
# kubectl run net-test1 --image=alpine --replicas 2 sleep 360000

#Node routing table
# ip route
......
172.31.0.0/24 via 10.0.0.7 dev eth0         #to another node's pod subnet: next hop is the remote host IP, egress is the local eth0
172.31.1.0/24 via 10.0.0.27 dev eth0

#Enter a container
# kubectl exec -it net-test1-5fcc69db59-frkh7 sh
#Trace the path to a pod on another node
/ # traceroute 172.31.6.3
traceroute to 172.31.6.3 (172.31.6.3), 30 hops max, 46 byte packets
 1  172.31.2.1 (172.31.2.1)  0.011 ms  0.009 ms  0.005 ms        #hop 1: the local cni0
 2  10.0.0.67 (10.0.0.67)  0.358 ms  0.427 ms  0.557 ms          #hop 2: the remote host IP
 3  172.31.6.3 (172.31.6.3)  0.796 ms  0.872 ms  0.517 ms        #hop 3: the remote pod

#===============================================================
3)Host-gw
#Edit the config file
# grep ^[^#] /etc/ansible/roles/flannel/defaults/main.yml
FLANNEL_BACKEND: "host-gw"
DIRECT_ROUTING: true        #left over from the vxlan setup; only meaningful for the vxlan backend
flanneld_image: "easzlab/flannel:v0.11.0-amd64"
flannel_offline: "flannel_v0.11.0-amd64.tar"
#Deploy
# ansible-playbook /etc/ansible/06.network.yml
#Reboot the node(s)
# reboot

#Create pods for a network test
# kubectl run net-test1 --image=alpine --replicas 2 sleep 360000

#Node routing table
# ip route
......
172.31.2.0/24 via 10.0.0.47 dev eth0         #to another node's pod subnet: next hop is the remote host IP, egress is the local eth0
172.31.3.0/24 via 10.0.0.57 dev eth0

#Enter a container
# kubectl exec -it net-test1-5fcc69db59-9hcnr sh
#Trace the path to a pod on another node
/ # traceroute 172.31.2.2
traceroute to 172.31.2.2 (172.31.2.2), 30 hops max, 46 byte packets
 1  172.31.6.1 (172.31.6.1)  0.013 ms  0.007 ms  0.008 ms        #hop 1: the local cni0
 2  10.0.0.47 (10.0.0.47)  2.037 ms  25.848 ms  24.287 ms        #hop 2: the remote host IP
 3  172.31.2.2 (172.31.2.2)  3.641 ms  2.514 ms  7.655 ms        #hop 3: the remote pod

2. Deploy Nginx + Tomcat + NFS on k8s for a web site with dynamic and static content separated

2.1 Directory layout

# mkdir -p /opt/k8s-data/{dockerfile,yaml}                        #image build dir, k8s object resource dir
# mkdir -p /opt/k8s-data/dockerfile/{system,web}                 #system image dir, application image dir
# mkdir -p /opt/k8s-data/dockerfile/web/{linux39,pub-images}     #application business image dir, application base image dir
# mkdir -p /opt/k8s-data/yaml/{linux39,namespaces}                #k8s pod resource dir, k8s namespace resource dir
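The same layout can be rehearsed under a scratch root before touching /opt (the /tmp path below is a throwaway copy of the tree, not the real one):

```shell
# Rehearse the layout under /tmp to sanity-check the paths.
root=/tmp/k8s-data-demo
mkdir -p "$root/dockerfile/system" "$root/dockerfile/web/linux39" \
         "$root/dockerfile/web/pub-images" "$root/yaml/linux39" \
         "$root/yaml/namespaces"
find "$root" -type d | sort
```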

#Create three projects on the harbor host to hold the different image classes
 os_base_images           #system base images
 web_base_images          #application base images
 app_images               #application business images

2.2 Prepare the images

#System base image
# docker pull centos:7.9.2009
# docker tag centos:7.9.2009 10.0.0.77/os_base_images/centos:7.9.2009
# docker push 10.0.0.77/os_base_images/centos:7.9.2009

# ls /opt/k8s-data/dockerfile/system/
 archive.tar.gz  build-command.sh  Dockerfile
# tar tf archive.tar.gz
./
./filebeat-7.6.1-x86_64.rpm
./limits.conf
./sysctl.conf
./CentOS7.repo

# cat /opt/k8s-data/dockerfile/system/Dockerfile
#Custom CentOS base image

FROM 10.0.0.77/os_base_images/centos:7.9.2009

LABEL maintainer "Marko.Ou <oxz@qq.com>"

ADD archive.tar.gz /usr/local/src

RUN cd /etc/yum.repos.d; \
    mkdir repodir; \
    mv *.repo repodir; \
    mv /usr/local/src/CentOS7.repo /etc/yum.repos.d/; \
    mv /usr/local/src/sysctl.conf /etc/sysctl.d/; \
    \mv /usr/local/src/limits.conf /etc/security/; \
    yum clean all; \
    yum makecache; \
    yum -y install bash-completion vim autofs lvm2 traceroute mtr mailx postfix bc lrzsz nmap \
        psmisc tree wget expect sysstat pcp-system-tools iotop iftop nload glances lsof \
        screen tmux at fuse-sshfs sshpass pssh aide chrony genisoimage rsync tcpdump strace \
        ltrace zip unzip socat gcc gcc-c++ glibc glibc-devel pcre pcre-devel openssl \
        openssl-devel zlib-devel ntpdate telnet libevent libevent-devel iproute make \
        /usr/local/src/filebeat-7.6.1-x86_64.rpm; \
    rm -f  /etc/localtime /usr/local/src/filebeat-7.6.1-x86_64.rpm; \
    ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime; \
    useradd -u 2019 nginx && useradd -u 2020 www; \
    sed -ri '/session.*system-auth/s/(.*)/#\1/' /etc/pam.d/su

# cat /opt/k8s-data/dockerfile/system/build-command.sh
#!/bin/bash
docker build -t 10.0.0.77/os_base_images/centos-base:7.9.2009 .
sleep 1
docker login -u admin -p harbor 10.0.0.77
docker push 10.0.0.77/os_base_images/centos-base:7.9.2009

#Build the system base image
# bash /opt/k8s-data/dockerfile/system/build-command.sh

#=====================================================================
#nginx application base image
# mkdir /opt/k8s-data/dockerfile/web/pub-images/nginx-base
# ls /opt/k8s-data/dockerfile/web/pub-images/nginx-base
 build-command.sh  Dockerfile  nginx-1.22.1.tar.gz
# cat /opt/k8s-data/dockerfile/web/pub-images/nginx-base/Dockerfile
#nginx application base image
FROM 10.0.0.77/os_base_images/centos-base:7.9.2009

LABEL maintainer "Marko.Ou <oxz@qq.com>"

ADD nginx-1.22.1.tar.gz /usr/local

RUN yum -y install gcc pcre-devel openssl-devel unzip make; \
    sed -ri '/^static.*ngx_http_server_string/s#nginx"#KNM/1.1"#' /usr/local/nginx-1.22.1/src/http/ngx_http_header_filter_module.c; \
    cd /usr/local/nginx-1.22.1; \
    ./configure \
    --prefix=/usr/local/nginx_1.22.1 \
    --user=nginx \
    --group=nginx \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-http_realip_module \
    --with-http_stub_status_module \
    --with-http_gzip_static_module \
    --with-pcre \
    --with-stream \
    --with-stream_ssl_module \
    --with-stream_realip_module && make -j 4 && make install; \
    ln -s /usr/local/nginx_1.22.1/ /usr/local/nginx; \
    mkdir /usr/local/nginx/conf.d; \
    chown -R nginx.nginx /usr/local/nginx/; \
    rm -rf /usr/local/nginx-1.22.1        #clean up the extracted source tree (ADD already unpacked the tarball, so no .tar.gz exists in the image)

# cat /opt/k8s-data/dockerfile/web/pub-images/nginx-base/build-command.sh
#!/bin/bash
docker build -t 10.0.0.77/web_base_images/nginx-base:1.22.1 .
sleep 1
docker login -u admin -p harbor 10.0.0.77
docker push 10.0.0.77/web_base_images/nginx-base:1.22.1

#Build the nginx application base image
# bash /opt/k8s-data/dockerfile/web/pub-images/nginx-base/build-command.sh

#=====================================================================
#nginx application business image
# mkdir /opt/k8s-data/dockerfile/web/linux39/nginx
# ls /opt/k8s-data/dockerfile/web/linux39/nginx
 build-command.sh  Dockerfile  index.html  linux39.conf  nginx.conf  nginx-webapp.tar.gz

# cat /opt/k8s-data/dockerfile/web/linux39/nginx/Dockerfile
#nginx application business image
FROM 10.0.0.77/web_base_images/nginx-base:1.22.1

LABEL maintainer "Marko.Ou <oxz@qq.com>"

ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD linux39.conf /usr/local/nginx/conf.d/linux39.conf
ADD nginx-webapp.tar.gz /usr/local/nginx/html
ADD index.html  /usr/local/nginx/html/index.html

#Mount points for static assets
RUN mkdir -p /usr/local/nginx/html/webapp/{static,images}

ENV PATH $PATH:/usr/local/nginx/sbin
CMD ["nginx", "-g", "daemon off;"]

# cat /opt/k8s-data/dockerfile/web/linux39/nginx/build-command.sh
#!/bin/bash
TAG=$1
docker build -t 10.0.0.77/app_images/nginx-app:${TAG} .
echo "image build finished, pushing to harbor"
sleep 1
docker login -u admin -p harbor 10.0.0.77
docker push 10.0.0.77/app_images/nginx-app:${TAG}
echo "image push to harbor finished"
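build-command.sh trusts $1 blindly: an empty TAG would build and push `nginx-app:` with no version. A small guard sketch (the function name and /tmp log path are mine, not from the original script):

```shell
# Reject an empty or malformed image tag before ever calling docker build.
tag_ok() {
  case "$1" in
    ''|*[!A-Za-z0-9._-]*) return 1 ;;   # empty, or contains a char outside [A-Za-z0-9._-]
    *) return 0 ;;
  esac
}
tag_ok v1        && echo "v1 accepted"        >  /tmp/tag_check.log
tag_ok ""        || echo "empty tag rejected" >> /tmp/tag_check.log
tag_ok "bad tag" || echo "'bad tag' rejected" >> /tmp/tag_check.log
cat /tmp/tag_check.log
```

Dropping `tag_ok "$TAG" || exit 1` at the top of the script would make a bad invocation fail fast instead of pushing a broken tag.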

# cat /opt/k8s-data/dockerfile/web/linux39/nginx/linux39.conf
server {
  location / {
    root html;
    index index.html;
  }
  location /webapp {
    root html;
    index index.html;
  }
}

#Build the nginx application business image
# bash /opt/k8s-data/dockerfile/web/linux39/nginx/build-command.sh v1

#Test the nginx application business image
# docker run --rm -it -p 80:80 10.0.0.77/app_images/nginx-app:v1
# curl 10.0.0.47
 nginx-root-page
# curl 10.0.0.47/webapp/index.html
 nginx-webapp-page

#=====================================================================
#JDK base image
# mkdir -p /opt/k8s-data/dockerfile/web/pub-images/jdk-base
# ls /opt/k8s-data/dockerfile/web/pub-images/jdk-base
 build-command.sh  Dockerfile  jdk-8u241-linux-x64.tar.gz  profile

# tail -n3 /opt/k8s-data/dockerfile/web/pub-images/jdk-base/profile
 export JAVA_HOME=/usr/local/jdk
 export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 export PATH=$PATH:$JAVA_HOME/bin

# cat /opt/k8s-data/dockerfile/web/pub-images/jdk-base/Dockerfile 
#JDK base image
FROM 10.0.0.77/os_base_images/centos-base:7.9.2009

LABEL maintainer "Marko.Ou <oxz@qq.com>"

ADD jdk-8u241-linux-x64.tar.gz /usr/local
ADD profile /etc/profile

RUN ln -s /usr/local/jdk1.8.0_241 /usr/local/jdk; \
    chown -R root.root /usr/local/jdk/

ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib:$JRE_HOME/lib
ENV PATH $PATH:$JAVA_HOME/bin 

# cat /opt/k8s-data/dockerfile/web/pub-images/jdk-base/build-command.sh
#!/bin/bash
HARBOR_IP="10.0.0.77"
HARBOR_USER="admin"
HARBOR_PASSWD="harbor"

docker build -t ${HARBOR_IP}/web_base_images/jdk-base:1.8.241  .
echo "image build finished, pushing to harbor"
sleep 1
docker login -u ${HARBOR_USER} -p ${HARBOR_PASSWD} ${HARBOR_IP}
docker push ${HARBOR_IP}/web_base_images/jdk-base:1.8.241
echo "image push to harbor finished"

#Build the JDK base image
# bash /opt/k8s-data/dockerfile/web/pub-images/jdk-base/build-command.sh

#=====================================================================
#tomcat application base image
# mkdir /opt/k8s-data/dockerfile/web/pub-images/tomcat-base
# ls /opt/k8s-data/dockerfile/web/pub-images/tomcat-base
 apache-tomcat-8.5.85.tar.gz  build-command.sh  Dockerfile

# cat /opt/k8s-data/dockerfile/web/pub-images/tomcat-base/Dockerfile
#tomcat application base image
FROM 10.0.0.77/web_base_images/jdk-base:1.8.241

LABEL maintainer "Marko.Ou <oxz@qq.com>"

ADD apache-tomcat-8.5.85.tar.gz /usr/local

RUN ln -s /usr/local/apache-tomcat-8.5.85/ /usr/local/tomcat; \
    mkdir -p /data/tomcat/webapps /data/tomcat/logs; \
    useradd -u 2021 tomcat; \
    chown -R tomcat.tomcat /data

# cat /opt/k8s-data/dockerfile/web/pub-images/tomcat-base/build-command.sh
#!/bin/bash
HARBOR_IP="10.0.0.77"
HARBOR_USER="admin"
HARBOR_PASSWD="harbor"

docker build -t ${HARBOR_IP}/web_base_images/tomcat-base:8.5.85 .
echo "image build finished, pushing to harbor"
sleep 1
docker login -u ${HARBOR_USER} -p ${HARBOR_PASSWD} ${HARBOR_IP}
docker push ${HARBOR_IP}/web_base_images/tomcat-base:8.5.85
echo "image push to harbor finished"

#Build the tomcat application base image
# bash /opt/k8s-data/dockerfile/web/pub-images/tomcat-base/build-command.sh

#=====================================================================
#tomcat application business image
# mkdir /opt/k8s-data/dockerfile/web/linux39/tomcat
# ls /opt/k8s-data/dockerfile/web/linux39/tomcat
 app1.tar.gz  build-command.sh  catalina.sh  Dockerfile  filebeat.yml  myapp  run_tomcat.sh  server.xml

# cat /opt/k8s-data/dockerfile/web/linux39/tomcat/Dockerfile
#tomcat application business image
FROM 10.0.0.77/web_base_images/tomcat-base:8.5.85

LABEL maintainer "Marko.Ou <oxz@qq.com>"

ADD catalina.sh /usr/local/tomcat/bin/catalina.sh
ADD server.xml /usr/local/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /usr/local/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown -R tomcat.tomcat /data/ /usr/local/tomcat/; \
    chmod +x /usr/local/tomcat/bin/*.sh

EXPOSE 8080 8443
CMD ["/usr/local/tomcat/bin/run_tomcat.sh"]

# cat /opt/k8s-data/dockerfile/web/linux39/tomcat/catalina.sh
......
JAVA_OPTS="-server -Xms1g -Xmx1g -Xss512k -Xmn1g -XX:CMSInitiatingOccupancyFraction=65 -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=10 -XX:NewSize=2048M -XX:MaxNewSize=2048M -XX:NewRatio=2 -XX:PermSize=128M -XX:MaxPermSize=512m -XX:CMSFullGCsBeforeCompaction=5 -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods"

# cat /opt/k8s-data/dockerfile/web/linux39/tomcat/run_tomcat.sh
#!/bin/bash
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - tomcat -c "/usr/local/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
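In run_tomcat.sh, `catalina.sh start` forks tomcat into the background and returns immediately, so without a foreground blocker PID 1 would exit and the container would stop; `tail -f /etc/hosts` is that blocker. A minimal demo of the pattern (the `sleep` commands and /tmp path are stand-ins, not the real services):

```shell
# Stand-ins: catalina.sh start forks into the background much like these.
sleep 1 &    # stand-in for filebeat
sleep 1 &    # stand-in for tomcat
wait         # a foreground blocker; run_tomcat.sh uses tail -f /etc/hosts, which never exits
echo "children reaped" > /tmp/pid1_demo.txt
cat /tmp/pid1_demo.txt
```

Note the difference: `wait` returns once the children exit, while `tail -f` keeps the container running even after tomcat dies, which is why the script uses it.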

# cat /opt/k8s-data/dockerfile/web/linux39/tomcat/build-command.sh
#!/bin/bash
HARBOR_IP="10.0.0.77"
HARBOR_USER="admin"
HARBOR_PASSWD="harbor"
TAG=$1

docker build -t ${HARBOR_IP}/app_images/tomcat-app:${TAG} .
echo "image build finished, pushing to harbor"
sleep 1
docker login -u ${HARBOR_USER} -p ${HARBOR_PASSWD} ${HARBOR_IP}
docker push ${HARBOR_IP}/app_images/tomcat-app:${TAG}

#Build the tomcat application business image
# bash /opt/k8s-data/dockerfile/web/linux39/tomcat/build-command.sh v1

#============================================================================
#Test the tomcat application business image
# docker run --rm -it -p 8080:8080 10.0.0.77/app_images/tomcat-app:v1
# curl 10.0.0.47:8080/myapp/index.html
 tomcat-myapp-v1

2.3 Prepare the k8s object resource yaml files

#=====================================================================
#NFS configuration
# mkdir -p /data/linux39/{images,static}
# cat /etc/exports
 /data/linux39/static *(rw,no_root_squash)
 /data/linux39/images *(rw,no_root_squash)
# exportfs -r
# showmount -e 10.0.0.17
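`exportfs -r` is the authoritative validator for /etc/exports; still, a quick field-level parse shows the shape of each entry (the /tmp file below is a sample copy of the two entries above, not the live config):

```shell
# Sample copy of the two export entries; field 1 is the exported path,
# field 2 the client(options) spec.
cat > /tmp/exports.sample <<'EOF'
/data/linux39/static *(rw,no_root_squash)
/data/linux39/images *(rw,no_root_squash)
EOF
awk '{print $1}' /tmp/exports.sample
```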

#=====================================================================
#namespace resource
# cat /opt/k8s-data/yaml/namespaces/linux39-ns.yaml
apiVersion: v1
kind: Namespace
metadata: 
  name: linux39

#=====================================================================
#nginx pod resources
# mkdir -p /opt/k8s-data/yaml/linux39/nginx
# cat /opt/k8s-data/yaml/linux39/nginx/nginx.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: linux39-nginx-deployment
  labels:
    app: linux39-nginx-deployment-label
  namespace: linux39
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linux39-nginx-selector
  template:
    metadata:
      labels:
        app: linux39-nginx-selector
    spec:
      containers:
      - name: linux39-nginx-container
        image: 10.0.0.77/app_images/nginx-app:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent                        #pull policy: IfNotPresent skips pulling a tag already present locally; use Always when a tag may be rebuilt
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 1
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
        volumeMounts:
        - name: linux39-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: linux39-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: linux39-images
        nfs:
          server: 10.0.0.17
          path: /data/linux39/images 
      - name: linux39-static
        nfs:
          server: 10.0.0.17
          path: /data/linux39/static
      #nodeSelector:
      #  group: linux39

---
kind: Service
apiVersion: v1
metadata:
  name: linux39-nginx-service
  labels:
    app: linux39-nginx-service-label
  namespace: linux39
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 40002
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 40443
  selector:
    app: linux39-nginx-selector

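The Service only gets endpoints when `spec.selector` matches the pod template labels character for character; one stray character leaves it with an empty endpoint list and connection failures. A trivial pre-apply check of that invariant (variable names and /tmp path are mine, values hand-copied from the manifest):

```shell
# Compare the label the Deployment stamps on pods with the Service selector.
pod_label="linux39-nginx-selector"     # Deployment: spec.template.metadata.labels.app
svc_selector="linux39-nginx-selector"  # Service: spec.selector.app
if [ "$pod_label" = "$svc_selector" ]; then
  echo "selector matches pod label" > /tmp/selector_check.txt
else
  echo "MISMATCH: the Service will have no endpoints" > /tmp/selector_check.txt
fi
cat /tmp/selector_check.txt
```

In a live cluster, `kubectl get endpoints -n linux39` is the direct way to confirm the match.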
#=====================================================================
#Test the nginx pod resources
#Create the linux39 namespace
# kubectl apply -f /opt/k8s-data/yaml/namespaces/linux39-ns.yaml
#On the master node, widen the NodePort range (a kubeadm-installed cluster defaults to 30000-32767)
# vi /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-cluster-ip-range=192.168.0.0/20        #add the following line right below this one
    - --service-node-port-range=30000-60000
# reboot
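A Service requesting a nodePort outside --service-node-port-range is rejected by the apiserver at apply time, which is why the range must be widened before `nodePort: 40002` can work. A shell sketch of that bounds check (values copied from the flag above; /tmp path is mine):

```shell
# Check a requested nodePort against the configured range.
range="30000-60000"   # value of --service-node-port-range above
port=40002            # nodePort requested by the nginx Service
lo=${range%-*}
hi=${range#*-}
if [ "$port" -ge "$lo" ] && [ "$port" -le "$hi" ]; then
  echo "nodePort $port is inside $range" > /tmp/nodeport_check.txt
else
  echo "nodePort $port is outside $range" > /tmp/nodeport_check.txt
fi
cat /tmp/nodeport_check.txt
```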
#Create the nginx pod resources
# kubectl apply -f /opt/k8s-data/yaml/linux39/nginx/nginx.yaml
#Test access
# curl 10.0.0.7:40002
 nginx-root-page
# curl 10.0.0.7:40002/webapp/index.html
 nginx-webapp-page

#=====================================================================
#tomcat pod resources
# mkdir -p /opt/k8s-data/yaml/linux39/tomcat-app1
# cat /opt/k8s-data/yaml/linux39/tomcat-app1/tomcat-app1.yaml
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: linux39-tomcat-app1-deployment
  labels:
    app: linux39-tomcat-app1-deployment-label
  namespace: linux39
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linux39-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: linux39-tomcat-app1-selector
    spec:
      containers:
      - name: linux39-tomcat-app1-container
        image: 10.0.0.77/app_images/tomcat-app:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  name: linux39-tomcat-app1-service
  labels:
    app: linux39-tomcat-app1-service-label
  namespace: linux39
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: linux39-tomcat-app1-selector

#=====================================================================
#Test the tomcat pod resources
#Create the tomcat pod resources
# kubectl apply -f /opt/k8s-data/yaml/linux39/tomcat-app1/tomcat-app1.yaml
#Test access
# curl 10.0.0.7:40003/myapp/index.html
 tomcat-myapp-v1

#=====================================================================
#Version upgrade: have the nginx pod proxy dynamic requests to the backend tomcat pod
#Delete the nginx pod resources
# kubectl delete -f /opt/k8s-data/yaml/linux39/nginx/nginx.yaml

#Edit the nginx sub-config to add a backend tomcat upstream group
# cat /opt/k8s-data/dockerfile/web/linux39/nginx/linux39.conf 
upstream tomcat-webserver {                                            #the upstream group name must not contain "_" (an underscore)
  server linux39-tomcat-app1-service.linux39.svc.testou.com:80;
}

server {
  location / {
    root html;
    index index.html;
  }
  location /webapp {
    root html;
    index index.html;
  }
  location /myapp {
    proxy_pass http://tomcat-webserver;
  }
}
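The upstream host follows the in-cluster DNS convention `<service>.<namespace>.svc.<cluster-domain>`; this cluster's DNS domain was set to testou.com (the kubernetes default would be cluster.local). Assembling the name piecewise (the /tmp path is mine):

```shell
# Build the Service FQDN used in the upstream block above.
svc=linux39-tomcat-app1-service
ns=linux39
domain=testou.com     # this cluster's DNS domain; k8s defaults to cluster.local
fqdn="${svc}.${ns}.svc.${domain}"
echo "$fqdn" > /tmp/svc_fqdn.txt
cat /tmp/svc_fqdn.txt
```

Because nginx resolves this name through the cluster's DNS, the upstream keeps working across tomcat pod restarts, unlike a hard-coded pod IP.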

#Rebuild the nginx application business image
# bash /opt/k8s-data/dockerfile/web/linux39/nginx/build-command.sh v2

#Update the nginx pod resource config
# vi /opt/k8s-data/yaml/linux39/nginx/nginx.yaml
...
        image: 10.0.0.77/app_images/nginx-app:v2            #bump the image tag
...

#Re-create the nginx pod resources
# kubectl apply -f /opt/k8s-data/yaml/linux39/nginx/nginx.yaml

#Test access
# curl 10.0.0.7:40002
 nginx-root-page
# curl 10.0.0.7:40002/webapp/index.html
 nginx-webapp-page
# curl 10.0.0.7:40002/myapp/index.html
 tomcat-myapp-v1
posted on 2023-07-19 14:20 by 不期而至