Let's start with the code:
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.apache.flink.util.Collector;

import java.util.Arrays;

public class WordCountKeyedState {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Build a test source that emits one line of words every second
        DataStreamSource<String> lineDS = env.addSource(new RichSourceFunction<String>() {
            private volatile boolean isCanceled = false;

            @Override
            public void run(SourceContext<String> ctx) throws Exception {
                while (!isCanceled) {
                    ctx.collect("hadoop flink spark");
                    Thread.sleep(1000);
                }
            }

            @Override
            public void cancel() {
                isCanceled = true;
            }
        });

        // Split each line into words and convert every word to a (word, 1) tuple
        SingleOutputStreamOperator<Tuple2<String, Integer>> wordTupleDS = lineDS
            .flatMap((String line, Collector<Tuple2<String, Integer>> ctx) -> {
                Arrays.stream(line.split(" ")).forEach(word -> ctx.collect(Tuple2.of(word, 1)));
            }).returns(Types.TUPLE(Types.STRING, Types.INT));

        // Group by word (f0, the String field of the tuple)
        KeyedStream<Tuple2<String, Integer>, String> keyedWordTupleDS = wordTupleDS.keyBy(t -> t.f0);

        // Count the words using keyed state
        keyedWordTupleDS.flatMap(new RichFlatMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>() {
            private transient ValueState<Tuple2<Integer, Integer>> countSumValueState;

            @Override
            public void open(Configuration parameters) throws Exception {
                // Initialize the ValueState
                ValueStateDescriptor<Tuple2<Integer, Integer>> countSumValueStateDesc =
                    new ValueStateDescriptor<>("countSumValueState",
                        TypeInformation.of(new TypeHint<Tuple2<Integer, Integer>>() {})
                    );
                countSumValueState = getRuntimeContext().getState(countSumValueStateDesc);
            }

            @Override
            public void flatMap(Tuple2<String, Integer> value, Collector<Tuple2<String, Integer>> out) throws Exception {
                if (countSumValueState.value() == null) {
                    countSumValueState.update(Tuple2.of(0, 0));
                }
                Integer count = countSumValueState.value().f0;
                count++;
                Integer valueSum = countSumValueState.value().f1;
                valueSum += value.f1;
                countSumValueState.update(Tuple2.of(count, valueSum));

                // Every 3 records, send the running sum downstream
                if (count >= 3) {
                    out.collect(Tuple2.of(value.f0, valueSum));
                    // Clear the count, keep the sum
                    countSumValueState.update(Tuple2.of(0, valueSum));
                }
            }
        }).print();

        env.execute("KeyedState State");
    }
}
Code walkthrough:
Build the test source: it emits one line of text per second; for easy testing, the line contains just three words.
Split each sentence on spaces and convert every word into a tuple, with an initial count of 1.
Group the tuples by word.
Implement a custom FlatMap function.
Initialize the ValueState. Note: ValueState can only be used on a KeyedStream, and each ValueState is bound to exactly one key. Whenever a parallel instance accesses the ValueState, it reads the current key from the runtime context, so the state a piece of processing logic sees is always the one belonging to that key. This routing is handled automatically by Flink.
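The per-key scoping can be illustrated without a Flink cluster. Below is a minimal plain-Java sketch with a hypothetical `ToyKeyedValueState` class (not part of Flink) that mimics how keyed state reads and writes are routed through the current key; Flink's state backends do something conceptually similar, though the real implementation differs:

```java
import java.util.HashMap;
import java.util.Map;

// A toy stand-in for Flink's keyed ValueState: one logical ValueState,
// but reads and writes are scoped to whichever key is "current".
// (Hypothetical class for illustration; not Flink's actual implementation.)
class ToyKeyedValueState<K, V> {
    private final Map<K, V> perKey = new HashMap<>();
    private K currentKey;

    // Flink sets the current key from the incoming record before calling user code.
    void setCurrentKey(K key) { this.currentKey = key; }

    V value() { return perKey.get(currentKey); }  // null until the first update for this key
    void update(V v) { perKey.put(currentKey, v); }
}

public class KeyedStateDemo {
    public static void main(String[] args) {
        ToyKeyedValueState<String, Integer> count = new ToyKeyedValueState<>();

        // Records with different keys each see their own state slot.
        count.setCurrentKey("hadoop");
        count.update(count.value() == null ? 1 : count.value() + 1);

        count.setCurrentKey("flink");
        count.update(count.value() == null ? 1 : count.value() + 1);

        count.setCurrentKey("hadoop");
        count.update(count.value() == null ? 1 : count.value() + 1);

        count.setCurrentKey("hadoop");
        System.out.println(count.value()); // hadoop was updated twice
        count.setCurrentKey("flink");
        System.out.println(count.value()); // flink was updated once
    }
}
```

This is why the `flatMap` in the example can use a single `countSumValueState` field and still keep an independent count per word.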
Note:
The ValueStateDescriptor constructor that takes a default value is deprecated. The official recommendation is to use the constructor without a default value and manage the default manually in the processing code:
"Use the ValueStateDescriptor(String, TypeSerializer) constructor instead and manually manage the default value by checking whether the contents of the state is null."
@Deprecated
public ValueStateDescriptor(String name, TypeSerializer<T> typeSerializer, T defaultValue) {
    super(name, typeSerializer, defaultValue);
}
Logic implementation
In the flatMap logic, check whether the ValueState has been initialized; if not, manually give it an initial value. Then accumulate the count and sum and update the state. Whenever the count reaches 3, send the computed result downstream and clear the count.
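The count/sum bookkeeping above can be traced without running Flink. Below is a minimal plain-Java replay of the same logic for a single key, with a `HashMap` standing in for `ValueState<Tuple2<Integer, Integer>>` (hypothetical `replay` helper, for illustration only):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CountSumTrace {
    // Replays the flatMap bookkeeping for a stream of (word, 1) records
    // and returns what would be emitted downstream.
    static List<String> replay(String word, int records) {
        Map<String, int[]> state = new HashMap<>(); // stands in for keyed ValueState
        List<String> emitted = new ArrayList<>();
        for (int i = 0; i < records; i++) {
            // analogue of the null check that initializes the state to (0, 0)
            int[] cs = state.computeIfAbsent(word, k -> new int[]{0, 0});
            cs[0] += 1;       // count since the last emission
            cs[1] += 1;       // running sum of the value field (always 1 here)
            if (cs[0] >= 3) { // every 3rd record: emit the running sum, clear the count
                emitted.add("(" + word + "," + cs[1] + ")");
                cs[0] = 0;
            }
        }
        return emitted;
    }

    public static void main(String[] args) {
        // The test source sends the same line once per second,
        // so 7 lines means 7 records for each word.
        System.out.println(replay("flink", 7)); // emits after the 3rd and 6th record
    }
}
```

Tracing 7 records for one word: the sum is emitted after the 3rd record (sum 3) and again after the 6th (sum 6), while the 7th record only updates the state, matching what the Flink job prints every three seconds per word.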