Flink Series -- Learning and Experimenting with the Flink State Processor API
1. Background on Flink State
1.1 Flink
Official definition: Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams.
1.2 What is state?
Official definition: While many operations in a dataflow simply look at one individual event at a time (for example an event parser), some operations remember information across multiple events (for example window operators). These operations are called stateful.
State stores the intermediate results or metadata that operators produce during computation, and it underpins Flink's exactly-once semantics.
For example: when executing an aggregation, the intermediate aggregate is kept in state.
For example: when ingesting records from Kafka, the offset of each partition must be tracked. Because state is persisted during computation, it effectively becomes a point-in-time snapshot of the job's internal data.
In short, state is the running state / historical values maintained and stored by a particular operator.
1.3 State types
There are two main categories of state (see the sketch below):
- Keyed state: scoped to a key, which is typically defined by a groupBy/keyBy (partitioning); each key has its own state, and state belonging to one key is not visible to another.
- Operator state: typically found in source and sink connectors, e.g. the Kafka offsets tracked by a source.
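To make the distinction concrete, here is a minimal keyed-state sketch; the input stream variable words and the state name count are hypothetical. Each key (each distinct word) gets its own counter, and one key's counter is invisible to the others.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;

// assumes an existing DataStream<String> called words
DataStream<Long> counts = words
        .keyBy(word -> word)
        .map(new RichMapFunction<String, Long>() {
            private transient ValueState<Long> count;

            @Override
            public void open(Configuration parameters) {
                // one ValueState instance is maintained per key
                count = getRuntimeContext().getState(
                        new ValueStateDescriptor<>("count", Long.class));
            }

            @Override
            public Long map(String word) throws Exception {
                Long current = count.value(); // value for the current key only
                long next = (current == null ? 0L : current) + 1;
                count.update(next);
                return next;
            }
        });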
1.4 Checkpoints and savepoints
Relationship between state and checkpoints: a checkpoint takes a snapshot of the application's state at a given moment and saves it.
Purpose: persisting state, recovering jobs, and guaranteeing exactly-once processing.
Differences between the two (see the retention sketch below):
- Checkpoint: triggered periodically by the application and used to save state; it can expire and is used internally when a failed job restarts, but on a manual cancel the previous checkpoints are deleted by default.
- Savepoint: triggered manually by the user; effectively a backup of the state. It can be taken while running bin/flink cancel xx, and is typically used when changing parallelism, upgrading a program, and so on.
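The "deleted on manual cancel" behavior can be changed with externalized checkpoints. A minimal sketch against the Flink 1.15 API (enableExternalizedCheckpoints is deprecated in favor of newer configuration but still present):

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000); // trigger a checkpoint every 60 seconds
// keep the latest checkpoint on disk even when the job is cancelled manually
env.getCheckpointConfig().enableExternalizedCheckpoints(
        CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);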
Commonly used state backends are the filesystem backend and RocksDB:
- HeapStateBackend (in-memory): holds small amounts of state; meant for development and testing, not recommended for production.
- FsStateBackend (HDFS): persists to a distributed filesystem, so every checkpoint incurs network I/O; suitable for large state but does not support incremental checkpoints; usable in production.
- RocksDBStateBackend: local files plus asynchronous persistence to HDFS; also suitable for large state, and the only backend that supports incremental checkpoints; usable in production (see the configuration sketch below).
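For reference, a minimal sketch of enabling RocksDB with incremental checkpoints; the HDFS path is hypothetical:

import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStateBackend(new EmbeddedRocksDBStateBackend(true)); // true = incremental checkpoints
env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints"); // hypothetical path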
2. The Flink State Processor API
2.1 Introduction
Official definition: Apache Flink's State Processor API provides powerful functionality to read, write, and modify savepoints and checkpoints using Flink's DataStream API under BATCH execution.
In other words: under batch execution, it is an API for reading, writing, and modifying the state stored in savepoints and checkpoints via the DataStream API.
Official documentation: https://nightlies.apache.org/flink/flink-docs-master/docs/libs/state_processor_api/
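Besides keyed state (demonstrated in 2.4 below), the API can also read operator state. As a quick preview (the dependency is shown in 2.2), here is a minimal sketch, assuming a job that stored Long offsets as list state under the hypothetical uid kafkaSourceUid and state name offsets:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.state.api.SavepointReader;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
SavepointReader reader = SavepointReader.read(
        env, "file:///path/to/savepoint", new HashMapStateBackend()); // hypothetical path
DataStream<Long> offsets = reader.readListState(
        "kafkaSourceUid", "offsets", TypeInformation.of(Long.class));
offsets.print();
env.execute();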
2.2 Dependency
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-state-processor-api</artifactId>
<version>1.15.1</version>
</dependency>
2.3 Keyed state example
Maven dependencies:
<properties>
<maven.compiler.source>8</maven.compiler.source>
<maven.compiler.target>8</maven.compiler.target>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<scala.binary.version>2.12</scala.binary.version>
<flink.version>1.15.1</flink.version>
<slf4j.version>1.7.25</slf4j.version>
<junit.jupiter.version>5.8.2</junit.jupiter.version>
<junit.version>4.13.2</junit.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>${junit.jupiter.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-runtime</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-runtime-web</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-api-java</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-base</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-planner_2.12</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-state-processor-api</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>${flink.version}</version>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-runtime</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-runtime-web</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-api-java</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-base</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table-planner_2.12</artifactId>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-state-processor-api</artifactId>
</dependency>
</dependencies>
The test job generates words from a source, maps each word into a Tuple2, keys the stream by the word, maintains a keyed MapState inside a RichMapFunction, and prints the running counts to the console:
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.example.flink.demo.datastream.keystate.DataGenWordsSourceFunction;

public class DataStreamKeyedStateDemo {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.setInteger("rest.port", 8081);
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(configuration);
        env.enableCheckpointing(5 * 1000);
        env.setStateBackend(new HashMapStateBackend());
        env.getCheckpointConfig()
                .setCheckpointStorage("file:///D:/IDE/FlinkTableApiTest/demo/checkpoint");
        // env.setStateBackend(new EmbeddedRocksDBStateBackend());
        // env.getCheckpointConfig().setCheckpointStorage("file:///D:/IDE/FlinkTableApiTest");
        env.setParallelism(1);
        env.setMaxParallelism(1);
        env.setRestartStrategy(new RestartStrategies.NoRestartStrategyConfiguration());

        env.addSource(new DataGenWordsSourceFunction())
                .map(new MapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public Tuple2<String, Integer> map(String value) throws Exception {
                        return new Tuple2<>(value, 1);
                    }
                })
                .keyBy(0)
                .map(new RichMapFunction<Tuple2<String, Integer>, Tuple2<String, Long>>() {
                    private transient MapState<String, Long> mapState;

                    @Override
                    public Tuple2<String, Long> map(Tuple2<String, Integer> value) throws Exception {
                        String word = value.f0;
                        Long count = mapState.get(word);
                        if (null == count) {
                            count = 0L;
                        }
                        mapState.put(word, count + 1);
                        return new Tuple2<>(word, mapState.get(word));
                    }

                    @Override
                    public void open(Configuration parameters) throws Exception {
                        MapStateDescriptor<String, Long> mapStateDescriptor = new MapStateDescriptor<>(
                                "wordCount", // the state name
                                TypeInformation.of(String.class),
                                TypeInformation.of(Long.class));
                        mapState = getRuntimeContext().getMapState(mapStateDescriptor);
                    }
                })
                .uid("wordCountUid")
                .print();
        // streamOperator.addSink(new PrintSinkFunction<>());

        env.execute("test wordCount");
    }
}
The DataGenWordsSourceFunction class is defined as follows:
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;

import java.util.Random;

public class DataGenWordsSourceFunction extends RichSourceFunction<String> {

    private final String[] words = new String[]{"Welcome", "to", "the", "world", "of", "flink"};
    // flag so that cancel() actually stops the emit loop
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        Random random = new Random();
        while (running) {
            int index = random.nextInt(words.length);
            ctx.collect(words[index]);
            Thread.sleep(100);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
    }
}
2.4 Reading the state back
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple1;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.state.api.SavepointReader;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

import java.util.Map;

public class DataStreamKeyedStateDemoTest {
    public static void main(String[] args) throws Exception {
        String metadataPath =
                "file:///D:/IDE/FlinkTableApiTest/demo/checkpoint/2ffb4aaf956c8274d27cea90ee4106ac/chk-10";
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setMaxParallelism(1);
        env.setParallelism(1);

        SavepointReader savepointReader = SavepointReader.read(env, metadataPath, null);
        savepointReader.readKeyedState("wordCountUid",
                new KeyedStateReaderFunction<Tuple1<String>, Tuple2<String, Long>>() {
                    private transient MapState<String, Long> mapState;

                    @Override
                    public void open(Configuration parameters) throws Exception {
                        MapStateDescriptor<String, Long> mapStateDescriptor = new MapStateDescriptor<>(
                                "wordCount", // must match the state name used in the job
                                TypeInformation.of(String.class),
                                TypeInformation.of(Long.class));
                        mapState = getRuntimeContext().getMapState(mapStateDescriptor);
                    }

                    @Override
                    public void readKey(Tuple1<String> key, Context ctx,
                                        Collector<Tuple2<String, Long>> out) throws Exception {
                        for (Map.Entry<String, Long> entry : mapState.entries()) {
                            out.collect(new Tuple2<>(entry.getKey(), entry.getValue()));
                        }
                    }
                }).print();
        env.execute();
    }
}
2.5 Limitations of the State Processor API
- Development cost: extra code has to be written to read the state.
- Barrier to entry: you must know the details of how the state was defined, which is hard to do for SQL jobs.
- Query limitation: only one operator can be queried at a time.
- Missing information: state metadata cannot be queried.
3. Approaches to reading state
Approach 1: develop against the provided API
For DataStream programs, set a uid on each stateful operator and write a state-query program (useful when the business logic is fixed).
Pros: simple to develop, and it evolves along with the open-source project.
Cons: for changing business logic and for SQL jobs it is hard to maintain, labor-intensive, and sometimes infeasible.
Approach 2: parse the checkpoint/savepoint and read the state directly
Read the source code to understand the storage layout; a checkpoint/savepoint holds enough accessible metadata to be parsed (see the sketch after this block).
Pros: relatively little work; within a certain range of versions, state can be queried quite conveniently.
Cons: may still require small modifications to Flink source code, and incompatibilities can appear when the Flink version changes.
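A rough sketch of approach 2, using Flink's internal Checkpoints utility to load the _metadata file and list the operator states it contains. These classes are internal, so this is version-sensitive; the checkpoint path is hypothetical:

import java.io.DataInputStream;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.checkpoint.Checkpoints;
import org.apache.flink.runtime.checkpoint.OperatorState;
import org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata;

String metadataPath = "file:///path/to/checkpoint/chk-10"; // hypothetical
Path metadataFile = new Path(metadataPath, "_metadata");
try (DataInputStream in = new DataInputStream(
        metadataFile.getFileSystem().open(metadataFile))) {
    CheckpointMetadata metadata = Checkpoints.loadCheckpointMetadata(
            in, Thread.currentThread().getContextClassLoader(), metadataPath);
    for (OperatorState operatorState : metadata.getOperatorStates()) {
        // prints each operator id with its per-subtask state handles
        System.out.println(operatorState.getOperatorID() + " -> " + operatorState.getStates());
    }
}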
Approach 3: modify the source code to store the metadata needed for reading state inside the checkpoint/savepoint itself
Pros: state becomes transparent and trivially queryable; useful for state inspection, job adjustment, job recovery, and so on.
Cons: requires a deep understanding of Flink's checkpoint mechanism, JobManager/TaskManager communication, and checkpoint metadata;
a long development cycle, many source changes, and high implementation difficulty.
3.1 Preliminary conclusion
Following the restore logic in the HeapRestoreOperation class, the keyed state in a checkpoint/savepoint can be read directly; only the metadataPath needs to be passed in.
In addition:
1. Some logic that is unnecessary for reading needs to be removed.
2. A few classes are not accessible and their visibility must be changed to public in the source.
3. Some types of state values print as a class name; to display them as strings, the value can be extracted and converted as follows:
public static StringData objectToString(TypeSerializer<?> serializer, Object value) {
    StringData result = StringData.fromString(value.toString());
    if (serializer instanceof RowDataSerializer && value instanceof BinaryRowData) {
        RowDataSerializer rowDataSerializer = (RowDataSerializer) serializer;
        try {
            Field typesField = rowDataSerializer.getClass().getDeclaredField("types");
            // must be made accessible, otherwise the reflective read below fails
            typesField.setAccessible(true);
            LogicalType logicalType = ((LogicalType[]) typesField.get(rowDataSerializer))[0];
            LogicalTypeRoot typeRoot = logicalType.getTypeRoot();
            switch (typeRoot) {
                case BIGINT:
                    result = StringData.fromString(String.valueOf(((BinaryRowData) value).getLong(0)));
                    break;
                case VARCHAR:
                    result = ((BinaryRowData) value).getString(0);
                    break;
                // TODO: add conversions for other types
                default:
            }
        } catch (Exception e) {
            return result;
        }
    }
    return result;
}
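A hypothetical call site, where valueSerializer and stateValue are assumed to have been obtained while iterating the restored keyed state:

StringData readable = objectToString(valueSerializer, stateValue);
System.out.println(readable); // prints the human-readable value instead of a class name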
A rough sketch of the envisioned product (mock-up figure omitted).
This post is from cnblogs, author: life_start. When reposting, please cite the original link: https://www.cnblogs.com/yangh2016/articles/16639439.html
