Offline Deployment of a Hyperledger Fabric Blockchain
Preface: A recent project required putting business data on a blockchain for tamper-evident storage and traceability, so I spent some time working out how to deploy Hyperledger Fabric on a Kylin v10 server.
I ran into plenty of problems along the way and burned a lot of time troubleshooting them. This post is meant to save future deployers (myself included) some of those detours.
Preparation
1. Install Docker
See my earlier post: https://www.cnblogs.com/magepi/p/19637502
2. Configure Docker registry mirrors
First create the Docker config directory with mkdir -p /etc/docker, then edit the config file with vim /etc/docker/daemon.json
daemon.json (the first entry is my own Aliyun mirror accelerator address; substitute your own as needed):
{
  "registry-mirrors": [
    "https://test.mirror.aliyuncs.com",
    "https://docker.m.daocloud.io",
    "https://docker.xuanyuan.me",
    "https://docker.1ms.run",
    "https://dockerproxy.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.nju.edu.cn",
    "https://mirror.baidubce.com"
  ]
}
After editing, remember to restart Docker: systemctl restart docker
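A malformed daemon.json will keep the Docker daemon from starting at all, so it is worth validating the syntax before restarting. A minimal sketch, writing a sample config to /tmp purely for illustration (`python3 -m json.tool` is just one convenient validator; any JSON checker will do):

```shell
# Write a sample mirror config (a stand-in for /etc/docker/daemon.json)
# and check that it parses as valid JSON before putting it in place.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://docker.1ms.run"
  ]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```

On the real server, point the validator at /etc/docker/daemon.json itself; a parse error there means `systemctl restart docker` would fail.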
3. Pull the required images
For an offline deployment, the usual approach is to pull the images on a machine with network access, then transfer them to the offline server and load them there. On a slow connection, pulling these images can take a while.
docker pull hyperledger/fabric-tools:2.4.9
docker pull hyperledger/fabric-peer:2.4.9
docker pull hyperledger/fabric-orderer:2.4.9
docker pull hyperledger/fabric-ca:1.5.7
docker pull hyperledger/fabric-nodeenv:2.4
docker pull hyperledger/fabric-ccenv:2.4
docker pull hyperledger/fabric-baseos:2.4
Since I had pulled them before, I just loaded the saved images directly.

4. Download the official Fabric samples
You need fabric-samples-2.4.9.tar, the official Fabric sample package, which is used during deployment. I downloaded 2.4.9; if your core images are also 2.4.9, it is best to use the same version.
Download: wget https://github.com/hyperledger/fabric-samples/archive/refs/tags/v2.4.9.tar.gz
Deploying the Fabric Blockchain
1. Directories and configuration files
/vdc is my mounted data disk; pick whatever directory suits your server.
# Create the core directory plus subdirectories (certs/blocks/config all live here)
mkdir -p /vdc/docker/fabric-offline/config/{crypto-config,channel-artifacts}
# Enter the core directory (later steps assume this working directory, to avoid path confusion)
cd /vdc/docker/fabric-offline/config
You can do this step up front for either Go or Node.js by copying the corresponding chaincode directory over. Here I copy in the Node.js source needed when creating the chaincode (fabric-samples-2.4.9 is the package downloaded earlier):
mkdir -p /vdc/docker/fabric-offline/config/chaincode/fabcar
# On the host: copy the fabcar chaincode into the mounted config/chaincode directory; it syncs into the container automatically
cp -r /opt/docker/fabric-samples-2.4.9/chaincode/fabcar/javascript /vdc/docker/fabric-offline/config/chaincode/fabcar/
# On the host: check the copied chaincode files
ls /vdc/docker/fabric-offline/config/chaincode/fabcar/javascript
Note: use the fabcar chaincode path as it actually exists on your own server.

If index.js and the other entries show up, the copy succeeded.
Now place my three prepared configuration files into /vdc/docker/fabric-offline/config: crypto-config.yaml, configtx.yaml, and docker-compose.yaml.
crypto-config.yaml
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer

PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 1
    Users:
      Count: 1
configtx.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
Organizations:
  - &OrdererOrg
    Name: OrdererMSP
    ID: OrdererMSP
    MSPDir: /opt/gopath/src/github.com/hyperledger/fabric/crypto-config/ordererOrganizations/example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('OrdererMSP.admin')"
      Endorsement:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
    OrdererEndpoints:
      - orderer.example.com:7050

  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: /opt/gopath/src/github.com/hyperledger/fabric/crypto-config/peerOrganizations/org1.example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org1MSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('Org1MSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('Org1MSP.admin')"
      Endorsement:
        Type: Signature
        Rule: "OR('Org1MSP.member')"
    OrdererEndpoints:
      - orderer.example.com:7050
    AnchorPeers:
      - Host: peer0.org1.example.com
        Port: 7051

Capabilities:
  Global: &ChannelCapabilities
    V2_0: true
  Orderer: &OrdererCapabilities
    V2_0: true
  Application: &ApplicationCapabilities
    V2_0: true

# Key addition: channel-level default policies (required by Fabric 2.4.9)
Channel: &ChannelDefaults
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: Signature
      Rule: "OR('Org1MSP.member')"
  Capabilities:
    <<: *ChannelCapabilities

Application: &ApplicationDefaults
  Organizations:
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: Signature
      Rule: "OR('Org1MSP.admin')"
    Endorsement:
      Type: Signature
      Rule: "OutOf(1, 'Org1MSP.member')"
  Capabilities:
    <<: *ApplicationCapabilities

Orderer: &OrdererDefaults
  OrdererType: solo
  Addresses:
    - orderer.example.com:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Organizations:
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: Signature
      Rule: "OR('OrdererMSP.admin', 'Org1MSP.admin')"
    BlockValidation:
      Type: ImplicitMeta
      Rule: "ANY Writers"

Profiles:
  OneOrgOrdererGenesis:
    <<: *ChannelDefaults    # pull in the channel-level policies
    Capabilities:
      <<: *ChannelCapabilities
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
      Capabilities:
        <<: *OrdererCapabilities
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
  OneOrgChannel:
    <<: *ChannelDefaults    # pull in the channel-level policies (required to generate the channel tx)
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
      Capabilities:
        <<: *ApplicationCapabilities
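configtx.yaml leans heavily on YAML anchors (&name), references (*name), and merge keys (<<:). If those are unfamiliar, this generic fragment (not part of the Fabric config) shows all they do:

```yaml
defaults: &defaults        # "&defaults" names this mapping as an anchor
  timeout: 2s
  retries: 3

service-a:
  <<: *defaults            # merge key: copy every key from the anchored mapping
  retries: 5               # locally defined keys override merged ones

service-b: *defaults       # plain reference: reuse the anchored mapping as-is
```

This is why OneOrgChannel can say `<<: *ChannelDefaults` and inherit the channel-level policies without repeating them.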
docker-compose.yaml
version: '2.4'

# Define the network: avoids IP/network conflicts
networks:
  config_fabric-network:
    name: config_fabric-network
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16   # match a free subnet on your server

# Shared TLS settings: referenced by Orderer/Peer to avoid repetition
x-tls-common: &tls-common
  CORE_PEER_TLS_ENABLED: "true"
  CORE_PEER_TLS_CERT_FILE: /etc/hyperledger/fabric/tls/server.crt
  CORE_PEER_TLS_KEY_FILE: /etc/hyperledger/fabric/tls/server.key
  CORE_PEER_TLS_ROOTCERT_FILE: /etc/hyperledger/fabric/tls/ca.crt

services:
  # ========== Orderer node (TLS enabled, fixed IP 172.19.0.3) ==========
  orderer.example.com:
    image: hyperledger/fabric-orderer:2.4.9   # offline image, no change needed
    container_name: orderer.example.com
    restart: always   # auto-restart on unexpected exit, for stability
    environment:
      # Basic logging/listen settings
      - ORDERER_GENERAL_LOGLEVEL=info
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      # Core TLS settings (must exactly match the generated TLS certs)
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # Genesis block / MSP settings (match the generated files)
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
    # Mounts: match the local directory layout exactly, no changes needed
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050   # external port, same as before
    networks:
      config_fabric-network:
        ipv4_address: 172.19.0.3   # fixed IP, so client code needs no changes

  # ========== Peer0 node (TLS enabled, fixed IP 172.19.0.4) ==========
  peer0.org1.example.com:
    image: hyperledger/fabric-peer:2.4.9   # offline image, no change needed
    container_name: peer0.org1.example.com
    restart: always   # auto-restart on unexpected exit
    depends_on:
      - orderer.example.com   # start the Orderer first, then the Peer
    environment:
      # Basic peer settings (mapping format, not "- KEY=VALUE" list format,
      # so the YAML anchor merge below works)
      CORE_VM_ENDPOINT: unix:///host/var/run/docker.sock
      CORE_PEER_ID: peer0.org1.example.com
      CORE_PEER_ADDRESS: peer0.org1.example.com:7051
      CORE_PEER_LOCALMSPID: Org1MSP
      CORE_PEER_MSPCONFIGPATH: /etc/hyperledger/fabric/msp
      CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: config_fabric-network
      CORE_PEER_CHAINCODELISTENADDRESS: 0.0.0.0:7052
      # Pull in the shared TLS settings
      <<: *tls-common
    # Mounts: match the local directory layout, including the chaincode directory
    volumes:
      - /var/run/:/host/var/run/   # host Docker socket, used to launch chaincode containers
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
      - ./channel-artifacts:/etc/hyperledger/fabric/channel-artifacts
      - ./chaincode:/etc/hyperledger/fabric/chaincode   # chaincode directory, mounted up front
    ports:
      - 7051:7051   # main peer port
      - 7052:7052   # chaincode communication port, do not omit
    networks:
      config_fabric-network:
        ipv4_address: 172.19.0.4   # fixed IP, so client code needs no changes

  # ========== Tools container (for chaincode/channel operations; no manual setup needed) ==========
  fabric-tools:
    image: hyperledger/fabric-tools:2.4.9   # offline image, no change needed
    container_name: fabric-tools
    restart: always
    tty: true
    stdin_open: true
    environment:
      # Pre-defined variables so later commands don't need long paths typed by hand
      - CHANNEL_NAME=mychannel
      - ORDERER_ADDRESS=orderer.example.com:7050
      - PEER0_ORG1_ADDRESS=peer0.org1.example.com:7051
      # TLS root cert paths (match the mounted certs)
      - TLS_ROOTCERT_ORDERER=/etc/hyperledger/fabric/orderer-tls/ca.crt
      - TLS_ROOTCERT_PEER0=/etc/hyperledger/fabric/peer0-tls/ca.crt
    # Mount TLS certs / channel files for convenient later operations
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls:/etc/hyperledger/fabric/orderer-tls
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/peer0-tls
      - ./channel-artifacts:/etc/hyperledger/fabric/channel-artifacts
      - ./chaincode:/etc/hyperledger/fabric/chaincode   # chaincode directory
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer   # default working directory
    networks:
      - config_fabric-network
These three files must go in this config directory, as siblings of the other three directories, because the compose file above mounts everything by relative path.

2. Generate the certificates, genesis block, and channel transaction
# One shot: generate everything (certs + genesis block + channel tx)
docker run -it --rm \
  -v $(pwd)/crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/crypto-config \
  -v $(pwd)/channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/channel-artifacts \
  -v $(pwd)/crypto-config.yaml:/opt/gopath/src/github.com/hyperledger/fabric/crypto-config.yaml \
  -v $(pwd)/configtx.yaml:/opt/gopath/src/github.com/hyperledger/fabric/configtx.yaml \
  -w /opt/gopath/src/github.com/hyperledger/fabric \
  hyperledger/fabric-tools:2.4.9 \
  bash -c "
    export FABRIC_CFG_PATH=/opt/gopath/src/github.com/hyperledger/fabric;
    # Generate the certificates (with TLS material)
    cryptogen generate --config=/opt/gopath/src/github.com/hyperledger/fabric/crypto-config.yaml;
    # Generate the genesis block
    configtxgen -profile OneOrgOrdererGenesis -outputBlock /opt/gopath/src/github.com/hyperledger/fabric/channel-artifacts/genesis.block -channelID system-channel;
    # Generate the channel transaction file
    configtxgen -profile OneOrgChannel -outputCreateChannelTx /opt/gopath/src/github.com/hyperledger/fabric/channel-artifacts/mychannel.tx -channelID mychannel;
    echo '✅ All TLS-enabled certificates/channel files generated!'
  "
Check that the channel files were generated:
ls channel-artifacts/
Created successfully.

3. Start the service nodes
Note: run this in /vdc/docker/fabric-offline/config as well.
docker-compose up -d

All services are up and running.
4. Create the channel
Run these in /vdc/docker/fabric-offline/config:

# 1. Copy the full crypto-config directory into the fabric-tools container
docker cp ./crypto-config fabric-tools:/etc/hyperledger/fabric/
# Enter the tools container
docker exec -it fabric-tools bash

# 2. Inside fabric-tools, check that the Admin MSP path exists
find /etc/hyperledger/fabric -name "Admin@org1.example.com"

# 3. Verify that the $ORDERER_CA variable is set correctly (cert path)
echo $ORDERER_CA

# 4. If it prints /etc/hyperledger/fabric/orderer-tls/ca.crt, the variable is correct; if empty, set it again
export ORDERER_CA=/etc/hyperledger/fabric/orderer-tls/ca.crt

# 5. Set the environment variables (correct Admin MSP path)
export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_LOCALMSPID="Org1MSP"

# 6. Create the channel with the correct parameters
peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f /etc/hyperledger/fabric/channel-artifacts/mychannel.tx --tls true --cafile "${ORDERER_CA}"

5. Join the peer to the channel
# 1. Base channel and Orderer settings (without these variables, joining peer0 to the channel fails)
export CHANNEL_NAME=mychannel
export ORDERER_CA=/etc/hyperledger/fabric/orderer-tls/ca.crt

# 2. Peer0 identity and network settings
export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer0-tls/ca.crt

# Join peer0 to the channel
peer channel join -b mychannel.block --tls true --cafile $ORDERER_CA

# List the channels peer0 has joined
peer channel list

Peer0 has successfully joined the channel mychannel.
6. Installing the Node.js chaincode
Chaincode comes in Node.js and Go flavors, and the deployment flow is much the same for both. Note, though, that an offline environment should prefer the Go chaincode (with network access either works), so this step is something of a fork in the road: read to the end before deciding which to install.
# 1. Inside the tools container: check the chaincode files were copied in
ls /etc/hyperledger/fabric/chaincode/fabcar/javascript

# 2. Package the chaincode
peer lifecycle chaincode package fabcar.tar.gz \
  --path /etc/hyperledger/fabric/chaincode/fabcar/javascript \
  --lang node \
  --label fabcar_1.0

# 3. Install the chaincode
peer lifecycle chaincode install fabcar.tar.gz

# 4. Approve the chaincode (replace the package ID with your own)
peer lifecycle chaincode approveformyorg \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n fabcar \
  --version 1.0 \
  --package-id fabcar_1.0:a656dc59c2488cdf7f14e68ee60c79ef0c0bc5f500789a33b33025ca65988a75 \
  --sequence 1 \
  --tls true \
  --cafile $ORDERER_CA

# Check approval status (if the Org1MSP approvals column shows true, the approval succeeded)
peer lifecycle chaincode checkcommitreadiness \
  -C mychannel \
  -n fabcar \
  --version 1.0 \
  --sequence 1 \
  --tls true \
  --cafile $ORDERER_CA

# 5. Commit the chaincode
peer lifecycle chaincode commit \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n fabcar \
  --version 1.0 \
  --sequence 1 \
  --tls true \
  --cafile $ORDERER_CA \
  --peerAddresses peer0.org1.example.com:7051 \
  --tlsRootCertFiles $CORE_PEER_TLS_ROOTCERT_FILE
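The long package ID in the approve step is not arbitrary: in Fabric 2.x it is the chaincode label, a colon, and the SHA-256 of the package file, and the install step (or `peer lifecycle chaincode queryinstalled`) prints it for you. A sketch of the derivation, using a stand-in file in /tmp rather than the real fabcar.tar.gz:

```shell
# Package ID format: <label>:<sha256 of the package file>.
# A stand-in file is used here; on the real server, hash fabcar.tar.gz itself,
# or simply read the ID from "peer lifecycle chaincode queryinstalled".
echo "stand-in package bytes" > /tmp/fabcar.tar.gz
LABEL=fabcar_1.0
HASH=$(sha256sum /tmp/fabcar.tar.gz | awk '{print $1}')
echo "package id: ${LABEL}:${HASH}"
```

If the approve step complains about an unknown package ID, comparing the hash of your local tar.gz against queryinstalled's output is a quick way to spot a stale or re-packaged file.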

Chaincode package-install-approve-commit succeeded.
7. Testing writes and queries
The car write/query test below works out of the box because the official fabric-samples-2.4.9.tar package we used earlier already ships complete Node.js and Go chaincode with built-in methods for writing and querying car records; we can test with those directly.
# 1. Initialize the ledger (write the test car data)
peer chaincode invoke \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n fabcar \
  -c '{"Args":["initLedger"]}' \
  --tls true \
  --cafile $ORDERER_CA \
  --peerAddresses peer0.org1.example.com:7051 \
  --tlsRootCertFiles $CORE_PEER_TLS_ROOTCERT_FILE

# 2. Verify the chaincode query works
peer chaincode query -C mychannel -n fabcar -c '{"Args":["queryAllCars"]}'

Writing data to the blockchain and querying it back both succeeded.
Going further with chaincode methods
From a pure deployment standpoint, Fabric is now up, and data can be written and queried from inside the container. But the sample car methods obviously will not cover a real business workload. Business data normally goes on chain as key-value pairs: the key is usually a business primary key, and the value is a JSON string carrying the business data. To support that, we need to append key-value write and query methods to the Node.js chaincode.
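Before touching the chaincode, it helps to see what such a key-value invocation looks like from the CLI. The fiddly part is that the business JSON must be embedded as a single escaped string argument inside the `-c` JSON. A sketch with a hypothetical order record (key and fields are made up for illustration; `createKVData` is the key-value write method this section adds):

```shell
# Hypothetical business record: key = business primary key, value = JSON string.
KEY="ORDER-20240101-0001"
VALUE='{"orderNo":"ORDER-20240101-0001","amount":199.00,"status":"PAID"}'
# json.dumps re-encodes VALUE as one escaped JSON string literal, so the
# inner quotes survive inside the outer Args array.
ESCAPED=$(python3 -c 'import json,sys; print(json.dumps(sys.argv[1]))' "$VALUE")
ARGS=$(printf '{"Args":["createKVData","%s",%s]}' "$KEY" "$ESCAPED")
echo "$ARGS"
# This string is what eventually gets passed to: peer chaincode invoke ... -c "$ARGS"
```

Hand-escaping the inner quotes works too, but letting a JSON encoder do it avoids the classic "invalid character" errors from the peer CLI.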
1. Go into the chaincode directory to append the key-value methods to fabcar.js (these directories were copied in earlier; double-check your own paths):
cd /vdc/docker/fabric-offline/config/chaincode/fabcar/javascript/lib
2. Append the methods to fabcar.js (editing on the host is enough: the directory was mounted earlier, so changes sync into the container automatically, and the later chaincode install commands inside the container are unaffected):
/*
 * Copyright IBM Corp. All Rights Reserved.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

'use strict';

const { Contract } = require('fabric-contract-api');

class FabCar extends Contract {

    async initLedger(ctx) {
        console.info('============= START : Initialize Ledger ===========');
        const cars = [
            { color: 'blue', make: 'Toyota', model: 'Prius', owner: 'Tomoko' },
            { color: 'red', make: 'Ford', model: 'Mustang', owner: 'Brad' },
            { color: 'green', make: 'Hyundai', model: 'Tucson', owner: 'Jin Soo' },
            { color: 'yellow', make: 'Volkswagen', model: 'Passat', owner: 'Max' },
            { color: 'black', make: 'Tesla', model: 'S', owner: 'Adriana' },
            { color: 'purple', make: 'Peugeot', model: '205', owner: 'Michel' },
            { color: 'white', make: 'Chery', model: 'S22L', owner: 'Aarav' },
            { color: 'violet', make: 'Fiat', model: 'Punto', owner: 'Pari' },
            { color: 'indigo', make: 'Tata', model: 'Nano', owner: 'Valeria' },
            { color: 'brown', make: 'Holden', model: 'Barina', owner: 'Shotaro' },
        ];

        for (let i = 0; i < cars.length; i++) {
            cars[i].docType = 'car';
            await ctx.stub.putState('CAR' + i, Buffer.from(JSON.stringify(cars[i])));
            console.info('Added <--> ', cars[i]);
        }
        console.info('============= END : Initialize Ledger ===========');
    }

    async queryCar(ctx, carNumber) {
        const carAsBytes = await ctx.stub.getState(carNumber); // get the car from chaincode state
        if (!carAsBytes || carAsBytes.length === 0) {
            throw new Error(`${carNumber} does not exist`);
        }
        console.log(carAsBytes.toString());
        return carAsBytes.toString();
    }

    async createCar(ctx, carNumber, make, model, color, owner) {
        console.info('============= START : Create Car ===========');
        const car = {
            color,
            docType: 'car',
            make,
            model,
            owner,
        };
        await ctx.stub.putState(carNumber, Buffer.from(JSON.stringify(car)));
        console.info('============= END : Create Car ===========');
    }

    async queryAllCars(ctx) {
        const startKey = '';
        const endKey = '';
        const allResults = [];
        for await (const {key, value} of ctx.stub.getStateByRange(startKey, endKey)) {
            const strValue = Buffer.from(value).toString('utf8');
            let record;
            try {
                record = JSON.parse(strValue);
            } catch (err) {
                console.log(err);
                record = strValue;
            }
            allResults.push({ Key: key, Record: record });
        }
        console.info(allResults);
        return JSON.stringify(allResults);
    }

    async changeCarOwner(ctx, carNumber, newOwner) {
        console.info('============= START : changeCarOwner ===========');
        const carAsBytes = await ctx.stub.getState(carNumber); // get the car from chaincode state
        if (!carAsBytes || carAsBytes.length === 0) {
            throw new Error(`${carNumber} does not exist`);
        }
        const car = JSON.parse(carAsBytes.toString());
        car.owner = newOwner;
        await ctx.stub.putState(carNumber, Buffer.from(JSON.stringify(car)));
        console.info('============= END : changeCarOwner ===========');
    }

    // ======================================
    // New: KV-format write method (same style as the official code)
    // Params: ctx (required) + key (unique ID) + jsonValue (business JSON string)
    // ======================================
    async createKVData(ctx, key, jsonValue) {
        console.info('============= START : Create KV Data ===========');
        // Non-empty check, thrown in the same format as the existing queryCar errors
        if (!key || !jsonValue) {
            throw new Error(`key and jsonValue can not be empty`);
        }
        // Store the JSON string directly in the Fabric KV ledger; same putState usage as createCar
        await ctx.stub.putState(key, Buffer.from(jsonValue));
        console.info('============= END : Create KV Data ===========');
        // Return a success message for the client to receive
        return `KV data create success, key: ${key}`;
    }

    // ======================================
    // New: KV-format query method (same style as the official code)
    // Params: ctx (required) + key (the unique ID used when writing)
    // ======================================
    async queryKVData(ctx, key) {
        // Read the data; same getState usage as queryCar
        const jsonAsBytes = await ctx.stub.getState(key);
        // Empty check, reusing the official queryCar logic
        if (!jsonAsBytes || jsonAsBytes.length === 0) {
            throw new Error(`${key} does not exist`);
        }
        console.log(jsonAsBytes.toString());
        // Return the raw JSON string and let the client parse it
        return jsonAsBytes.toString();
    }
}

module.exports = FabCar;
Only createKVData and queryKVData were added; everything else stays as it was.
3. Package-install-approve-commit again
Note that the chaincode has already been through this flow once; after changing the chaincode files, you must run it again with the version bumped to 2 (and the sequence incremented accordingly).
First enter the tools container and verify the channel is still joined on the peer:
# Enter the tools container
docker exec -it fabric-tools bash
# List the channels peer0 has joined
peer channel list

As you can see, peer channel list errors out. That is because the environment variables set earlier were lost when we re-entered the container; they just need to be set again.
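The variables to restore are exactly the ones from step 5 of the channel sections earlier, collected here so they can be pasted in one go after any fresh `docker exec` (paths match this deployment's mounts):

```shell
# Re-set the working environment inside a fresh fabric-tools shell;
# every "docker exec" starts a clean shell without the earlier exports.
export CHANNEL_NAME=mychannel
export ORDERER_CA=/etc/hyperledger/fabric/orderer-tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer0-tls/ca.crt
```

After these, peer channel list should work again.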

With the environment fixed, run the chaincode 2.0 flow:
# 1. Package chaincode 2.0
peer lifecycle chaincode package fabcar-kv.tar.gz \
  --path /etc/hyperledger/fabric/chaincode/fabcar/javascript \
  --lang node \
  --label fabcar_2.0

# 2. Install chaincode 2.0
peer lifecycle chaincode install fabcar-kv.tar.gz

# 3. Approve chaincode 2.0 (replace the package ID with your own)
peer lifecycle chaincode approveformyorg \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n fabcar \
  --version 2.0 \
  --package-id fabcar_2.0:b15317a776fcf0bdfa0e31e03b582c43f2e972d6ffa3f13f3be49ec653dfb727 \
  --sequence 2 \
  --tls true \
  --cafile $ORDERER_CA

# Check approval status for 2.0
peer lifecycle chaincode checkcommitreadiness \
  -C mychannel \
  -n fabcar \
  --version 2.0 \
  --sequence 2 \
  --tls true \
  --cafile $ORDERER_CA

# 4. Commit chaincode 2.0
peer lifecycle chaincode commit \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n fabcar \
  --version 2.0 \
  --sequence 2 \
  --tls true \
  --cafile $ORDERER_CA \
  --peerAddresses peer0.org1.example.com:7051 \
  --tlsRootCertFiles $CORE_PEER_TLS_ROOTCERT_FILE

Mind the 2.0 version numbers and sequence; the chaincode name stays fabcar, and everything else mirrors the 1.0 flow. (The write method is exercised later from Java code.)
Installing the Go chaincode
Q: Why cover Go chaincode here, when the Node.js chaincode already works?
A: I had pulled the images and installed the Node.js chaincode in a test environment with network access. But in the real, fully offline environment, the step-by-step install stalled at chaincode approval: internally the Node.js build still runs an npm install to fetch dependencies, so it is not a fully offline operation. I tried several times to pre-download the dependencies and bypass that step, without success, so I fell back to the Go chaincode instead. Go chaincode also needs some dependencies downloaded in advance, but that approach does work offline.
1. Prerequisites
Your server needs a Go toolchain; if it does not have one, set that up first.
# Mine is version 1.19.1
[root@localhost docker]# go version
go version go1.19.1 linux/amd64
[root@localhost docker]#
Environment and files
From the fabric-samples-2.4.9 package (the official base package containing both the Node.js and Go chaincode files), copy the entire go directory into config/chaincode/fabcar.
No need to worry about whether the go directory will exist inside the container: the whole chaincode directory was already mounted earlier.
Enter fabric-samples-2.4.9/chaincode/fabcar and copy the go directory across:
cp -r go /vdc/docker/fabric-offline/config/chaincode/fabcar/go

Mind where your own directories and fabric-samples-2.4.9 live; don't mix them up.
Pre-download the Go chaincode's offline dependencies (the vendor directory)
# Run on a server WITH network access, inside /vdc/docker/fabric-offline/config/chaincode/fabcar/go.
# The GOPROXY/GOSUMDB settings are needed because a plain "go mod download -x" could not
# reach the default proxy, so the Qiniu mirror is used per-command
# (you can also set it in the global environment instead).

# Download the module's dependencies into the local cache
GOPROXY=https://goproxy.cn,direct GOSUMDB=off go mod download -x

# Copy the dependencies into the project's vendor directory
GOPROXY=https://goproxy.cn,direct GOSUMDB=off go mod vendor -v

After this, the go directory contains a vendor folder. Package the whole go directory, upload it to config/chaincode/fabcar on the offline server, and continue with the package-install-approve-commit commands below.
Also add the key-value methods to fabcar.go up front, or you will end up running the chaincode install flow twice.
fabcar.go
/*
 * SPDX-License-Identifier: Apache-2.0
 */

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// SmartContract provides functions for managing a car
type SmartContract struct {
	contractapi.Contract
}

// Car describes basic details of what makes up a car
type Car struct {
	Make   string `json:"make"`
	Model  string `json:"model"`
	Colour string `json:"colour"`
	Owner  string `json:"owner"`
}

// QueryResult structure used for handling result of query
type QueryResult struct {
	Key    string `json:"Key"`
	Record *Car
}

// InitLedger adds a base set of cars to the ledger
func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
	cars := []Car{
		Car{Make: "Toyota", Model: "Prius", Colour: "blue", Owner: "Tomoko"},
		Car{Make: "Ford", Model: "Mustang", Colour: "red", Owner: "Brad"},
		Car{Make: "Hyundai", Model: "Tucson", Colour: "green", Owner: "Jin Soo"},
		Car{Make: "Volkswagen", Model: "Passat", Colour: "yellow", Owner: "Max"},
		Car{Make: "Tesla", Model: "S", Colour: "black", Owner: "Adriana"},
		Car{Make: "Peugeot", Model: "205", Colour: "purple", Owner: "Michel"},
		Car{Make: "Chery", Model: "S22L", Colour: "white", Owner: "Aarav"},
		Car{Make: "Fiat", Model: "Punto", Colour: "violet", Owner: "Pari"},
		Car{Make: "Tata", Model: "Nano", Colour: "indigo", Owner: "Valeria"},
		Car{Make: "Holden", Model: "Barina", Colour: "brown", Owner: "Shotaro"},
	}

	for i, car := range cars {
		carAsBytes, _ := json.Marshal(car)
		err := ctx.GetStub().PutState("CAR"+strconv.Itoa(i), carAsBytes)

		if err != nil {
			return fmt.Errorf("Failed to put to world state. %s", err.Error())
		}
	}

	return nil
}

// CreateCar adds a new car to the world state with given details
func (s *SmartContract) CreateCar(ctx contractapi.TransactionContextInterface, carNumber string, make string, model string, colour string, owner string) error {
	car := Car{
		Make:   make,
		Model:  model,
		Colour: colour,
		Owner:  owner,
	}

	carAsBytes, _ := json.Marshal(car)

	return ctx.GetStub().PutState(carNumber, carAsBytes)
}

// QueryCar returns the car stored in the world state with given id
func (s *SmartContract) QueryCar(ctx contractapi.TransactionContextInterface, carNumber string) (*Car, error) {
	carAsBytes, err := ctx.GetStub().GetState(carNumber)

	if err != nil {
		return nil, fmt.Errorf("Failed to read from world state. %s", err.Error())
	}

	if carAsBytes == nil {
		return nil, fmt.Errorf("%s does not exist", carNumber)
	}

	car := new(Car)
	_ = json.Unmarshal(carAsBytes, car)

	return car, nil
}

// QueryAllCars returns all cars found in world state
func (s *SmartContract) QueryAllCars(ctx contractapi.TransactionContextInterface) ([]QueryResult, error) {
	startKey := ""
	endKey := ""

	resultsIterator, err := ctx.GetStub().GetStateByRange(startKey, endKey)

	if err != nil {
		return nil, err
	}
	defer resultsIterator.Close()

	results := []QueryResult{}

	for resultsIterator.HasNext() {
		queryResponse, err := resultsIterator.Next()

		if err != nil {
			return nil, err
		}

		car := new(Car)
		_ = json.Unmarshal(queryResponse.Value, car)

		queryResult := QueryResult{Key: queryResponse.Key, Record: car}
		results = append(results, queryResult)
	}

	return results, nil
}

// ChangeCarOwner updates the owner field of car with given id in world state
func (s *SmartContract) ChangeCarOwner(ctx contractapi.TransactionContextInterface, carNumber string, newOwner string) error {
	car, err := s.QueryCar(ctx, carNumber)

	if err != nil {
		return err
	}

	car.Owner = newOwner

	carAsBytes, _ := json.Marshal(car)

	return ctx.GetStub().PutState(carNumber, carAsBytes)
}

// ======================================
// New: CreateKVData, mirrors the Node.js version exactly
// Params: key (unique ID) + jsonValue (business JSON string)
// ======================================
func (s *SmartContract) CreateKVData(ctx contractapi.TransactionContextInterface, key string, jsonValue string) error {
	// Non-empty check (trimmed, as in the Node.js version)
	key = strings.TrimSpace(key)
	jsonValue = strings.TrimSpace(jsonValue)
	if key == "" || jsonValue == "" {
		return fmt.Errorf("key and jsonValue can not be empty")
	}
	// Store the JSON string directly; same PutState usage as the Node.js version
	err := ctx.GetStub().PutState(key, []byte(jsonValue))
	if err != nil {
		return fmt.Errorf("Failed to create KV data: %s", err.Error())
	}
	// Return nil on success; fabric-contract-api produces the success response,
	// matching the Node.js version's success message
	return nil
}

// ======================================
// New: QueryKVData, mirrors the Node.js version exactly
// Params: key (the unique ID used when writing)
// ======================================
func (s *SmartContract) QueryKVData(ctx contractapi.TransactionContextInterface, key string) (string, error) {
	// Non-empty key check (trimmed)
	key = strings.TrimSpace(key)
	if key == "" {
		return "", fmt.Errorf("key and jsonValue can not be empty")
	}
	// Read the data; same GetState usage as the Node.js version
	jsonAsBytes, err := ctx.GetStub().GetState(key)
	if err != nil {
		return "", fmt.Errorf("Failed to query KV data: %s", err.Error())
	}
	// Empty check; error message matches the Node.js version exactly
	if jsonAsBytes == nil || len(jsonAsBytes) == 0 {
		return "", fmt.Errorf("%s does not exist", key)
	}
	// Return the raw JSON string and let the client parse it
	return string(jsonAsBytes), nil
}

func main() {
	chaincode, err := contractapi.NewChaincode(new(SmartContract))

	if err != nil {
		fmt.Printf("Error create fabcar chaincode: %s", err.Error())
		return
	}

	if err := chaincode.Start(); err != nil {
		fmt.Printf("Error starting fabcar chaincode: %s", err.Error())
	}
}
Both fabcar.go and the earlier fabcar.js are the official 2.4.9 base files with CreateKVData and QueryKVData added in the same way; parameters and return values match, except that Go method names must start with an uppercase letter, and the strings import was also appended.
Go chaincode package-install-approve-commit
# 1. Package the chaincode
peer lifecycle chaincode package fabcar-go.tar.gz \
  --path /etc/hyperledger/fabric/chaincode/fabcar/go \
  --lang golang \
  --label fabcar-go_1.0

# 2. Install the chaincode
peer lifecycle chaincode install fabcar-go.tar.gz

# 3. Verify the installation
peer lifecycle chaincode queryinstalled

# 4. Approve the chaincode (replace the package ID with your own)
peer lifecycle chaincode approveformyorg \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n fabcar-go \
  --version 1.0 \
  --package-id fabcar-go_1.0:8028b2a89970df0e444107dcd758b627d138d62ae7df6b01dfe9aaaa03aee87f \
  --sequence 1 \
  --tls true \
  --cafile $ORDERER_CA

peer lifecycle chaincode checkcommitreadiness \
  -C mychannel \
  -n fabcar-go \
  --version 1.0 \
  --sequence 1 \
  --tls true \
  --cafile $ORDERER_CA

# 5. Commit the chaincode
peer lifecycle chaincode commit \
  -o orderer.example.com:7050 \
  -C mychannel \
  -n fabcar-go \
  --version 1.0 \
  --sequence 1 \
  --tls true \
  --cafile $ORDERER_CA \
  --peerAddresses peer0.org1.example.com:7051 \
  --tlsRootCertFiles $CORE_PEER_TLS_ROOTCERT_FILE

Fallback strategy
If the installation keeps failing or the environment gets polluted, it can be simpler to start over from scratch.
Delete the config directory and stop/remove all Fabric containers; the images are fully preserved, and afterwards the environment is back to a clean state.
# 1. Delete the /vdc/docker/fabric-offline/config directory
rm -rf /vdc/docker/fabric-offline/config
# 2. Stop all running Fabric containers (orderer/peer/tools)
docker stop $(docker ps -a | grep -E "orderer|peer|fabric-tools" | awk '{print $1}')
# 3. Remove all stopped Fabric containers
docker rm $(docker ps -a | grep -E "orderer|peer|fabric-tools" | awk '{print $1}')
# 4. Clean up old chaincode containers/images (avoid leftovers)
docker rm -f $(docker ps -a | grep "dev-" | awk '{print $1}')
docker rmi -f $(docker images | grep "dev-" | awk '{print $3}')
# 5. Clean up Fabric-related leftover data
rm -rf /vdc/docker/fabric-offline/*.block /vdc/docker/fabric-offline/*.tx /vdc/docker/fabric-offline/crypto-config
Remove the Fabric network (if yours is named this):
docker network rm config_fabric-network
# List all Docker networks
docker network ls
# Inspect each network's subnet to find any that overlap 172.19.0.0/16 (the subnet configured in the compose file above)
for net in $(docker network ls -q); do
echo "=== Network ID: $net ===";
docker network inspect $net | grep -A 3 "Subnet";
done
Saving the downloaded images into an offline bundle
docker save -o offline-images.tar \
hyperledger/fabric-ca:1.5.7 \
hyperledger/fabric-orderer:2.4.9 \
hyperledger/fabric-peer:2.4.9 \
hyperledger/fabric-tools:2.4.9 \
hyperledger/fabric-nodeenv:2.4 \
hyperledger/fabric-ccenv:2.4 \
hyperledger/fabric-baseos:2.4
# Load the images into Docker on the offline server
docker load -i offline-images.tar
