Knative - Advanced Tekton Pipeline Usage (Part 17)

Workspace

#What is a Workspace?
    ◼ A Workspace provides a working directory for the Steps of a Task; the TaskRun that runs the Task must supply that directory at runtime
    ◼ A TaskRun actually runs as a Pod, so a Workspace maps to a Volume on that Pod
        ◆ConfigMap and Secret: read-only Workspaces
        ◆PersistentVolumeClaim: a Workspace that supports sharing data across Tasks
            ⚫ Static provisioning
            ⚫ Dynamic provisioning: created on demand from a volumeClaimTemplate
        ◆emptyDir: a temporary working directory, discarded after use
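As a sketch, a TaskRun can bind each of these backing types in its `workspaces` section; all resource and Workspace names below are illustrative, not from the lab:

```yaml
# Hypothetical TaskRun fragment: one entry per backing type
workspaces:
  - name: config          # read-only configuration data
    configMap:
      name: app-config
  - name: credentials     # read-only credentials
    secret:
      secretName: git-credentials
  - name: shared          # statically provisioned PVC, shareable across Tasks
    persistentVolumeClaim:
      claimName: shared-pvc
  - name: scratch         # throwaway working directory
    emptyDir: {}
```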

#What Workspaces are for
    ◼ Sharing data across Tasks
        ◆Workspaces defined on a Pipeline
    ◼ Loading credentials from Secrets
    ◼ Loading configuration data from ConfigMaps
    ◼ Persisting data
    ◼ Providing a Task with a cache to speed up builds
        ◆Workspaces defined on a Task
        ◆Can also be used to share data with Sidecars
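For instance, loading credentials through a Secret-backed Workspace might look like this sketch; the Task name, Workspace name, and Secret name are assumptions for illustration:

```yaml
# Hypothetical: bind a Secret as a read-only Workspace on a TaskRun
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: deploy-run-
spec:
  taskRef:
    name: deploy             # assumed Task that declares an "ssh-creds" Workspace
  workspaces:
    - name: ssh-creds
      secret:
        secretName: ssh-key  # the Secret's keys appear as files in the Workspace
```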
        
#Note: a Task can also declare storage volumes directly via its volumes field, but those are managed and used differently from Workspaces
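By contrast with a Workspace, a plain volume must be declared and then mounted by each Step explicitly; a minimal sketch (names are illustrative):

```yaml
# Hypothetical Task fragment using spec.volumes directly instead of a Workspace
spec:
  steps:
    - name: build
      image: alpine:3.16
      volumeMounts:          # each Step mounts the volume by hand
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}
```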

Using Workspaces in a Task

#Configuring Workspaces on a Task
    ◼ Declared under the spec.workspaces field
    ◼ Supported nested fields:
        ◆name: required; the unique identifier of the Workspace
        ◆description: informational text, usually stating the Workspace's purpose
        ◆readOnly: whether the Workspace is read-only; defaults to false
        ◆optional: whether the Workspace is optional; defaults to false
        ◆mountPath: the mount path inside each Step; defaults to "/workspace/<name>", where <name> is the Workspace's name
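Put together, a Task declaration exercising every field might look like this sketch (the Workspace name and paths are assumptions):

```yaml
# Hypothetical Task fragment using all workspaces fields
workspaces:
  - name: cache
    description: Local cache of build dependencies
    readOnly: false
    optional: true
    mountPath: /cache        # overrides the default /workspace/cache
```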

#Workspace variables available in a Task
    ◼ $(workspaces.<name>.path): the mount path of the Workspace named <name>; empty when the Workspace is optional and the TaskRun does not declare it
    ◼ $(workspaces.<name>.bound): true or false, indicating whether the Workspace is bound
        ◆For a Workspace whose optional is false, this variable is always true
    ◼ $(workspaces.<name>.claim): the name of the PVC used by the Workspace named <name>
        ◆Empty for non-PVC volume types
    ◼ $(workspaces.<name>.volume): the name of the volume used by the Workspace named <name>
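A minimal Step that prints all four variables could be sketched as follows; the Workspace name "data" is an assumption:

```yaml
# Hypothetical Step printing every workspace variable for a Workspace named "data"
steps:
  - name: show-workspace-vars
    image: alpine:3.16
    script: |
      #!/bin/sh
      echo "bound:  $(workspaces.data.bound)"
      echo "path:   $(workspaces.data.path)"    # empty if optional and unbound
      echo "claim:  $(workspaces.data.claim)"   # empty unless PVC-backed
      echo "volume: $(workspaces.data.volume)"
```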

Data Persistence and Sharing

1. A Workspace backed by a volumeClaimTemplate can share data among the multiple TaskRuns of a single PipelineRun.
2. Data stored on a PVC bound directly to a TaskRun can be shared across multiple PipelineRuns.
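The second point can be sketched as a TaskRun binding a pre-provisioned PVC, so successive runs see the same data; the Task and PVC names are assumptions:

```yaml
# Hypothetical TaskRun reusing an existing PVC across runs
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: build-run-
spec:
  taskRef:
    name: build                       # assumed Task declaring a "cache" Workspace
  workspaces:
    - name: cache
      persistentVolumeClaim:
        claimName: build-cache-pvc    # the PVC outlives individual runs
```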

Workspace Labs

【01】- Basic Workspace usage: sharing data between two Steps through a Workspace

[root@xksmaster1 03-tekton-advanced]# cat 01-task-workspace-demo.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: workspace-demo
spec:
  params:
  - name: target
    type: string
    default: MageEdu
  steps:
    - name: write-message
      image: alpine:3.16
      script: |
        #!/bin/sh
        set -xe
        if [ "$(workspaces.messages.bound)" == "true" ] ; then
          echo "Hello $(params.target)" > $(workspaces.messages.path)/message
          cat $(workspaces.messages.path)/message
        fi
        echo "Mount Path: $(workspaces.messages.path)"
        echo "Volume Name: $(workspaces.messages.volume)"
  workspaces:
    - name: messages
      description: |
        The folder where we write the message to. If no workspace
        is provided then the message will not be written.
      optional: true
      mountPath: /data
      
[root@xksmaster1 03-tekton-advanced]# kubectl apply -f 01-task-workspace-demo.yaml
task.tekton.dev/workspace-demo created

[root@xianchaomaster1 03-tekton-advanced]# tkn task list
NAME             DESCRIPTION   AGE
workspace-demo                 6 seconds ago

[root@xianchaomaster1 03-tekton-advanced]# tkn task start workspace-demo --showlog -p target="MageEdu Cloud Native Course" -w name=messages,emptyDir=""
TaskRun started: workspace-demo-run-b249n
Waiting for logs to be available...
[write-message] + '[' true '==' true ]
[write-message] + echo 'Hello MageEdu Cloud Native Course'
[write-message] + cat /data/message
[write-message] Hello MageEdu Cloud Native Course
[write-message] + echo 'Mount Path: /data'
[write-message] + echo 'Volume Name: ws-cwgwz'
[write-message] Mount Path: /data
[write-message] Volume Name: ws-cwgwz


# The run confirms that two Steps can share data through a Workspace:
# the first Step writes the data, the second Step prints it
[root@xksmaster1 03-tekton-advanced]# cat 01-task-workspace-demo.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: workspace-demo
spec:
  params:
  - name: target
    type: string
    default: MageEdu
  steps:
    - name: write-message
      image: alpine:3.16
      script: |
        #!/bin/sh
        set -xe
        if [ "$(workspaces.messages.bound)" == "true" ] ; then
          echo "Hello $(params.target)" > $(workspaces.messages.path)/message
          cat $(workspaces.messages.path)/message
        fi
        echo "Mount Path: $(workspaces.messages.path)"
        echo "Volume Name: $(workspaces.messages.volume)"
    - name: print-message
      image: alpine:3.16
      script: |
        #!/bin/sh
        set -xe
        if [ "$(workspaces.messages.bound)" == "true" ] ; then
          cat $(workspaces.messages.path)/message
        fi
  workspaces:
    - name: messages
      description: |
        The folder where we write the message to. If no workspace
        is provided then the message will not be written.
      optional: true
      mountPath: /data

[root@xksmaster1 03-tekton-advanced]# kubectl apply -f 01-task-workspace-demo.yaml
task.tekton.dev/workspace-demo configured

[root@xianchaomaster1 03-tekton-advanced]# tkn task start workspace-demo --showlog -p target="MageEdu Cloud Native Course" -w name=messages,emptyDir=""
TaskRun started: workspace-demo-run-xlkpx
Waiting for logs to be available...
[write-message] + '[' true '==' true ]
[write-message] + echo 'Hello MageEdu Cloud Native Course'
[write-message] + cat /data/message
[write-message] Hello MageEdu Cloud Native Course
[write-message] + echo 'Mount Path: /data'
[write-message] + echo 'Volume Name: ws-6zsn7'
[write-message] Mount Path: /data
[write-message] Volume Name: ws-6zsn7

[print-message] + '[' true '==' true ]
[print-message] + cat /data/message
[print-message] Hello MageEdu Cloud Native Course

【02】- Workspace in a Task: clone code from a private GitLab and list it

# Sharing data among multiple Steps inside one Task via a Workspace
# All Steps of a Task run in the same Pod, so they can share the Task's Workspace
# Also, the emptyDir volume backing this Workspace shares the Pod's lifecycle: once the TaskRun finishes, the volume and its data are deleted
[root@xksmaster1 03-tekton-advanced]# cat 02-task-with-workspace.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: source-lister
spec:
  params:
  - name: git-repo
    type: string
    description: Git repository to be cloned
  workspaces:
  - name: source
  steps:
  - name: git-clone
    image: alpine/git:v2.36.1
    script: git clone -v $(params.git-repo) $(workspaces.source.path)/source
  - name: list-files
    image: alpine:3.16
    command:
    - /bin/sh
    args:
    - '-c'
    - 'ls $(workspaces.source.path)/source'

[root@xksnode1 ~]# crictl pull alpine/git:v2.36.1
[root@xksnode2 ~]# crictl pull alpine/git:v2.36.1

[root@xianchaomaster1 03-tekton-advanced]# kubectl apply -f 02-task-with-workspace.yaml
task.tekton.dev/source-lister created
[root@xianchaomaster1 03-tekton-advanced]# tkn task list
NAME             DESCRIPTION   AGE
source-lister                  12 seconds ago

# Run the Task to clone the project
# git-repo points at the GitLab instance set up in an earlier lab
# Repository URL: http://gitlab.gitlab.svc.cluster.local/root/spring-boot-helloWorld.git
[root@xianchaomaster1 03-tekton-advanced]# tkn task start source-lister --showlog -p git-repo=http://gitlab.gitlab.svc.cluster.local/root/spring-boot-helloWorld.git -w name=source,emptyDir=""
TaskRun started: source-lister-run-d8lmw
Waiting for logs to be available...
[git-clone] Cloning into '/workspace/source/source'...
[git-clone] POST git-upload-pack (175 bytes)
[git-clone] POST git-upload-pack (517 bytes)

[list-files] Dockerfile
[list-files] Jenkinsfile
[list-files] LICENSE
[list-files] README.md
[list-files] deploy
[list-files] pom.xml
[list-files] rollouts
[list-files] src

【03】Workspaces and Parameters on a Pipeline

#When running a Pipeline and its Tasks as "PipelineRun → Pipeline → (TaskRun)Task", on the Pipeline resource:
    ◼ Define parameters under spec.params, then assign them by reference on the referenced or inline Tasks
    ◼ Define Workspaces under spec.workspaces, then bind them by reference on the referenced or inline Tasks
#When instantiated as a PipelineRun, every Parameter and Workspace defined on the Pipeline must be supplied explicitly, unless a default value is set
#The Pipeline below defines the Workspace "codebase"
#and binds "codebase" to the "source" Workspace of the source-lister Task
[root@xksmaster1 03-tekton-advanced]# cat 03-pipeline-workspace.yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-source-lister
spec:
  workspaces:
  - name: codebase
  params:
  - name: git-url
    type: string
    description: Git repository url to be cloned
  tasks:
  - name: git-clone
    taskRef:
      name: source-lister
    workspaces:
    - name: source
      workspace: codebase
    params:
    - name: git-repo
      value: $(params.git-url)

[root@xksmaster1 03-tekton-advanced]# kubectl apply -f 03-pipeline-workspace.yaml
pipeline.tekton.dev/pipeline-source-lister created

[root@xianchaomaster1 03-tekton-advanced]# tkn pipeline list
NAME                     AGE              LAST RUN                         STARTED       DURATION   STATUS
pipeline-source-lister   10 seconds ago   ---                              ---           ---        ---

# The Pipeline must be instantiated as a PipelineRun to execute
[root@xianchaomaster1 03-tekton-advanced]# tkn pipeline start pipeline-source-lister --showlog -p git-url=http://gitlab.gitlab.svc.cluster.local/root/spring-boot-helloWorld.git -w name=codebase,emptyDir=""
PipelineRun started: pipeline-source-lister-run-hjl6t
Waiting for logs to be available...
[git-clone : git-clone] Cloning into '/workspace/source/source'...
[git-clone : git-clone] POST git-upload-pack (175 bytes)
[git-clone : git-clone] POST git-upload-pack (517 bytes)

[git-clone : list-files] Dockerfile
[git-clone : list-files] Jenkinsfile
[git-clone : list-files] LICENSE
[git-clone : list-files] README.md
[git-clone : list-files] deploy
[git-clone : list-files] pom.xml
[git-clone : list-files] rollouts
[git-clone : list-files] src

【04】Mounting a Workspace on a persistent volume

04.1 Set up an NFS server

# MageEdu GitHub repo
https://github.com/iKubernetes/tekton-and-argocd-in-practise/tree/main/nfs-csi-driver
# Reference steps
https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/README.md
https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/install-csi-driver-v3.1.0.md

# 1. Download the manifest (use the local lab copy if GitHub is unreachable without a VPN)
https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/nfs-provisioner/nfs-server.yaml

# 2. Pull the required image on every node
[root@xksnode1 ~]# crictl pull itsthenetwork/nfs-server-alpine:latest
[root@xksnode2 ~]# crictl pull itsthenetwork/nfs-server-alpine:latest

# 3. Create the namespace "nfs"
[root@xksmaster1 Knative]# kubectl create ns nfs
namespace/nfs created

[root@xksmaster1 Knative]# cat nfs-server.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  type: ClusterIP  # use "LoadBalancer" to get a public ip
  selector:
    app: nfs-server
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      name: nfs-server
      labels:
        app: nfs-server
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: nfs-server
          image: itsthenetwork/nfs-server-alpine:latest
          env:
            - name: SHARED_DIRECTORY
              value: "/exports"
          volumeMounts:
            - mountPath: /exports
              name: nfs-vol
          securityContext:
            privileged: true
          ports:
            - name: tcp-2049
              containerPort: 2049
              protocol: TCP
            - name: udp-111
              containerPort: 111
              protocol: UDP
      volumes:
        - name: nfs-vol
          hostPath:
            path: /nfs-vol  # modify this to specify another path to store nfs share data
            type: DirectoryOrCreate

# 4. Apply nfs-server.yaml
[root@xksmaster1 Knative]# kubectl create -f nfs-server.yaml -n nfs
service/nfs-server created
deployment.apps/nfs-server created

[root@xksmaster1 Knative]# kubectl get pods -n nfs
NAME                          READY   STATUS    RESTARTS   AGE
nfs-server-857f859f57-jzcpf   1/1     Running   0          54s

[root@xksmaster1 Knative]# kubectl get svc -n nfs
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE
nfs-server   ClusterIP   10.98.51.209   <none>        2049/TCP,111/UDP   30s

# 5. Install the NFS CSI Driver (this lab uses v3.1.0)
# Install NFS CSI driver v3.1.0 on the Kubernetes cluster
# Reference: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/install-csi-driver-v3.1.0.md

# 5.1
# [remote install] - not used here; the download fails without a VPN
curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v3.1.0/deploy/install-driver.sh | bash -s v3.1.0 --
# [local install] - clone the repo first and install locally (this is what the lab uses)
git clone https://github.com/kubernetes-csi/csi-driver-nfs.git
cd csi-driver-nfs

# 5.2
# Edit install-driver.sh
# Path: <extracted dir>/csi-driver-nfs-master/deploy/install-driver.sh
# The filenames referenced by install-driver.sh do not match the files in the v3.1.0 folder, so adjust it:
# comment out line 37: kubectl apply -f $repo/rbac-csi-nfs.yaml
# add this line instead: kubectl apply -f $repo/rbac-csi-nfs-controller.yaml
[root@xianchaomaster1 deploy]# vim install-driver.sh
#!/bin/bash

# Copyright 2020 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -euo pipefail

ver="master"
if [[ "$#" -gt 0 ]]; then
  ver="$1"
fi

repo="https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/$ver/deploy"
if [[ "$#" -gt 1 ]]; then
  if [[ "$2" == *"local"* ]]; then
    echo "use local deploy"
    repo="./deploy"
  fi
fi

if [ $ver != "master" ]; then
  repo="$repo/$ver"
fi

echo "Installing NFS CSI driver, version: $ver ..."
#kubectl apply -f $repo/rbac-csi-nfs.yaml
kubectl apply -f $repo/rbac-csi-nfs-controller.yaml
kubectl apply -f $repo/csi-nfs-driverinfo.yaml
kubectl apply -f $repo/csi-nfs-controller.yaml
kubectl apply -f $repo/csi-nfs-node.yaml

if [[ "$#" -gt 1 ]]; then
  if [[ "$2" == *"snapshot"* ]]; then
    echo "install snapshot driver ..."
    kubectl apply -f $repo/crd-csi-snapshot.yaml
    kubectl apply -f $repo/rbac-snapshot-controller.yaml
    kubectl apply -f $repo/csi-snapshot-controller.yaml
  fi
fi

echo 'NFS CSI driver installed successfully.'

# 5.3 [Image pull] The official images cannot be downloaded directly; the mirror hunt is omitted here - push them into your own registry
# Only the public Aliyun mirrors are pullable
# Pull the images on each node with crictl (containerd)
# csi-node-driver-registrar:v2.4.0 is broken; do not use it
#crictl pull registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-node-driver-registrar:v2.4.0
crictl pull registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-provisioner:v2.2.2
crictl pull registry.cn-hangzhou.aliyuncs.com/birkhoff/livenessprobe:v2.5.0
crictl pull registry.cn-hangzhou.aliyuncs.com/birkhoff/nfsplugin:v3.1.0
# csi-node-driver-registrar:v2.5.0 - the following two images are identical
crictl pull registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-node-driver-registrar:v2.5.0   # or: crictl pull objectscale/csi-node-driver-registrar:v2.5.0

[root@xianchaonode1 ~]# crictl images list | grep birkhoff
W0711 11:32:40.248267   78385 util_unix.go:103] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock".
registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-node-driver-registrar                                  v2.5.0                cb03930a2bd42       9.13MB
registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-node-driver-registrar                                  v2.4.0                260ff90a19e40       46.9MB
registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-provisioner                                            v2.2.2                e18077242e6d7       22.6MB
registry.cn-hangzhou.aliyuncs.com/birkhoff/livenessprobe                                              v2.5.0                25ee177dd4596       8.69MB
registry.cn-hangzhou.aliyuncs.com/birkhoff/nfsplugin                                                  v3.1.0                9973920bc6790       60.9MB

# 5.4 Point the image references in the manifests at the Aliyun registry
sed -i 's#registry.k8s.io/sig-storage#registry.cn-hangzhou.aliyuncs.com/birkhoff#g' *.yaml

# In csi-nfs-node.yaml, also fix the registrar image
# csi-node-driver-registrar:v2.4.0 is broken; use v2.5.0 instead
          #image: registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-node-driver-registrar:v2.4.0
          image: registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-node-driver-registrar:v2.5.0
          #image: docker.io/objectscale/csi-node-driver-registrar:v2.5.0

# Check the result inside the v3.1.0 folder
# Note: docker.io/objectscale/csi-node-driver-registrar:v2.5.0 is interchangeable with the Aliyun mirror
[root@xianchaomaster1 v3.1.0]# grep image: *.yaml
csi-nfs-controller.yaml:          image: registry.cn-hangzhou.aliyuncs.com/birkhoff/csi-provisioner:v2.2.2
csi-nfs-controller.yaml:          image: registry.cn-hangzhou.aliyuncs.com/birkhoff/livenessprobe:v2.5.0
csi-nfs-controller.yaml:          image: registry.cn-hangzhou.aliyuncs.com/birkhoff/nfsplugin:v3.1.0
csi-nfs-node.yaml:          image: registry.cn-hangzhou.aliyuncs.com/birkhoff/livenessprobe:v2.5.0
csi-nfs-node.yaml:          image: docker.io/objectscale/csi-node-driver-registrar:v2.5.0
csi-nfs-node.yaml:          image: registry.cn-hangzhou.aliyuncs.com/birkhoff/nfsplugin:v3.1.0


# 6. Run the install script
[root@xksmaster1 csi-driver-nfs-master]# cd /root/Knative/csi-driver-nfs-master
/root/Knative/csi-driver-nfs-master
# Run the script (the official method)
[root@ca-k8s-master01 csi-driver-nfs-master]# ./deploy/install-driver.sh v3.1.0 local
use local deploy
Installing NFS CSI driver, version: v3.1.0 ...
serviceaccount/csi-nfs-controller-sa created
clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding created
csidriver.storage.k8s.io/nfs.csi.k8s.io created
deployment.apps/csi-nfs-controller created
daemonset.apps/csi-nfs-node created
NFS CSI driver installed successfully.
# Or cd into the versioned deploy directory and apply the manifests directly (used in this lab)
[root@xianchaomaster1 v3.1.0]# cd /root/KnativeSrc/csi-driver-nfs-master/deploy/v3.1.0
[root@xianchaomaster1 v3.1.0]# kubectl apply -f ./

# 7. Check pod status
[root@xianchaomaster1 v3.1.0]# kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
NAME                                  READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
csi-nfs-controller-665589d6cf-lbjw2   3/3     Running   0          4m18s   192.168.40.181   xianchaonode1   <none>           <none>

[root@xianchaomaster1 v3.1.0]# kubectl -n kube-system get pod -o wide -l app=csi-nfs-node
NAME                 READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
csi-nfs-node-6wkjb   3/3     Running   0          4m30s   192.168.40.181   xianchaonode1     <none>           <none>
csi-nfs-node-pzlr8   3/3     Running   0          4m30s   192.168.40.180   xianchaomaster1   <none>           <none>

# 8. Create the StorageClass
#https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/storageclass-nfs.yaml
[root@xksmaster1 Knative]# cat storageclass-nfs.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.nfs.svc.cluster.local
  share: /
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
[root@xksmaster1 Knative]# kubectl apply -f storageclass-nfs.yaml
storageclass.storage.k8s.io/nfs-csi created
[root@xksmaster1 Knative]# kubectl get sc
NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi   nfs.csi.k8s.io   Delete          Immediate           false                  12s

# 9. Create a dynamically provisioned PVC to verify the driver
https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/pvc-nfs-csi-dynamic.yaml
[root@xksmaster1 Knative]# cat pvc-nfs-csi-dynamic.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-csi
[root@xksmaster1 Knative]# kubectl apply -f pvc-nfs-csi-dynamic.yaml
persistentvolumeclaim/pvc-nfs-dynamic created
[root@xksmaster1 Knative]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-dynamic   Bound    pvc-0e8c332d-d77e-439f-8d06-6d4f7869e60d   10Gi       RWX            nfs-csi        8s

04.2 Lab 1: manifest (the Workspace is backed by a PVC created from a volumeClaimTemplate, so the second Task can read what the first Task cloned; without it, the data would be inaccessible)

[root@xianchaomaster1 03-tekton-advanced]# cat 04-pipeline-worlspace-02.yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: volume-share
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: codebase
  tasks:
    - name: fetch-from-source
      params:
        - name: url
          value: $(params.git-url)
      taskSpec:
        workspaces:
          - name: source
        params:
          - name: url
        steps:
          - name: git-clone
            image: alpine/git:v2.36.1
            script: git clone -v $(params.url) $(workspaces.source.path)/source
      workspaces:
        - name: source
          workspace: codebase
    - name: source-lister
      runAfter:
        - fetch-from-source
      taskSpec:
        steps:
          - name: list-files
            image: alpine:3.15
            script: ls $(workspaces.source.path)/source
        workspaces:
          - name: source
      workspaces:
        - name: source
          workspace: codebase
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: volume-share-run-xxxx
spec:
  pipelineRef:
    name: volume-share
  params:
    - name: git-url
      value: http://code.gitlab.svc/root/spring-boot-helloWorld.git
      # the actual in-cluster repository address also works:
      #value: http://gitlab.gitlab.svc.cluster.local/root/spring-boot-helloWorld.git
  workspaces:
    - name: codebase
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
          storageClassName: nfs-csi

[root@xksmaster1 03-tekton-advanced]# kubectl apply -f 04-pipeline-worlspace-02.yaml
pipeline.tekton.dev/volume-share created
pipelinerun.tekton.dev/volume-share-run-xxxx created

# The volumeClaimTemplate automatically creates the PVC pvc-59d10d8ee6-affinity-assistant-30083ce8b3-0

[root@xksmaster1 03-tekton-advanced]# kubectl get pvc
NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-59d10d8ee6-affinity-assistant-30083ce8b3-0   Bound    pvc-489586e0-5b8c-47f6-a300-93c3ed5b922c   1Gi        RWO            nfs-csi        4m37s
pvc-nfs-dynamic                                  Bound    pvc-0e8c332d-d77e-439f-8d06-6d4f7869e60d   10Gi       RWX            nfs-csi        6m52s

04.3 Lab 2: use a volume-backed Workspace to clone the source and package it with Maven

[root@xianchaonode1 ~]# crictl pull maven:3.8-openjdk-11-slim

[root@xksmaster1 03-tekton-advanced]# cat 05-pipeline-source-to-package.yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: source-2-package
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: codebase
  tasks:
    - name: fetch-from-source
      params:
        - name: url
          value: $(params.git-url)
      taskSpec:
        workspaces:
          - name: source
        params:
          - name: url
        steps:
          - name: git-clone
            image: alpine/git:v2.36.1
            script: git clone -v $(params.url) $(workspaces.source.path)/source
      workspaces:
        - name: source
          workspace: codebase
    - name: build-package
      runAfter:
        - fetch-from-source
      taskSpec:
        steps:
          - name: build
            image: maven:3.8-openjdk-11-slim
            workingDir: $(workspaces.source.path)/source
            script: |
              mvn clean install
        workspaces:
          - name: source
      workspaces:
        - name: source
          workspace: codebase
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: source-2-package-run-001
spec:
  pipelineRef:
    name: source-2-package
  params:
    - name: git-url
      #value: https://gitee.com/mageedu/spring-boot-helloWorld.git
      value: http://code.gitlab.svc.cluster.local/root/spring-boot-helloWorld.git
  workspaces:
    - name: codebase
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
          storageClassName: nfs-csi

[root@xksmaster1 03-tekton-advanced]# kubectl apply -f 05-pipeline-source-to-package.yaml
pipeline.tekton.dev/source-2-package created
pipelinerun.tekton.dev/source-2-package-run-001 created

[root@xianchaomaster1 03-tekton-advanced]# tkn pipelinerun list
NAME                               STARTED          DURATION   STATUS
source-2-package-run-001           2 minutes ago    ---        Running

# Maven downloads all of its dependencies on this run, which takes a while
[root@xianchaomaster1 03-tekton-advanced]# tkn pipelinerun logs source-2-package-run-001 -f
[build-package : build] [INFO] Installing /workspace/source/source/target/spring-boot-helloworld-0.9.6-SNAPSHOT.jar to /root/.m2/repository/com/neo/spring-boot-helloworld/0.9.6-SNAPSHOT/spring-boot-helloworld-0.9.6-SNAPSHOT.jar
[build-package : build] [INFO] Installing /workspace/source/source/pom.xml to /root/.m2/repository/com/neo/spring-boot-helloworld/0.9.6-SNAPSHOT/spring-boot-helloworld-0.9.6-SNAPSHOT.pom
[build-package : build] [INFO] ------------------------------------------------------------------------
[build-package : build] [INFO] BUILD SUCCESS
[build-package : build] [INFO] ------------------------------------------------------------------------
[build-package : build] [INFO] Total time:  06:23 min
[build-package : build] [INFO] Finished at: 2023-07-11T03:11:56Z
[build-package : build] [INFO] ------------------------------------------------------------------------

posted @ 2023-06-26 17:22  しみずよしだ