一、Bounded Stream
1、Code
package wc;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class BoundedStreamWordCount {
    public static void main(String[] args) throws Exception {
        // TODO 1. Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // TODO 2. Read the file
        DataStreamSource<String> lineDS = env.readTextFile("input/words.txt");

        // TODO 3. Process the data: split, convert, group, sum
        // flatMap takes an interface as its parameter; that interface's flatMap method must be overridden.
        // An anonymous implementation class is used here.
        // value: each line of input data; out: the collector used to send data downstream.
        SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndOne = lineDS.flatMap(
                new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
                        String[] words = value.split(" ");
                        for (String word : words) {
                            Tuple2<String, Integer> wordAndOne = Tuple2.of(word, 1); // convert each word into a 2-tuple
                            out.collect(wordAndOne); // emit downstream via the Collector
                        }
                    }
                });

        // TODO 4. Group by word
        // In KeySelector<Tuple2<String, Integer>, String>, the first type is the input type, the second is the key type.
        KeyedStream<Tuple2<String, Integer>, String> wordAndOneKS = wordAndOne.keyBy(
                new KeySelector<Tuple2<String, Integer>, String>() {
                    @Override
                    public String getKey(Tuple2<String, Integer> value) throws Exception {
                        return value.f0;
                    }
                });

        // TODO 5. Aggregate
        SingleOutputStreamOperator<Tuple2<String, Integer>> sumDS = wordAndOneKS.sum(1);

        // TODO 6. Print
        sumDS.print();

        // TODO 7. Execute
        env.execute(); // default parallelism is the machine's number of CPU cores
    }
}
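To see what the pipeline above computes, here is a plain-Java sketch of the same split/group/sum logic using a HashMap. The contents of input/words.txt are not shown in the notes, so the sample lines below are assumed:

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    public static void main(String[] args) {
        // Hypothetical file content; stands in for input/words.txt.
        String[] lines = {"hello flink", "hello world"};
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {    // same split as the flatMap above
                counts.merge(word, 1, Integer::sum); // same per-key aggregation as sum(1)
            }
        }
        System.out.println(counts);
    }
}
```

Each word contributes a (word, 1) pair, and the per-key sum yields hello=2, flink=1, world=1.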
2、Notes
Suppose there is an interface A with a single method a().
1) Regular approach: define a class B that implements interface A and overrides its method a(), then instantiate it:
B b = new B();
2) Anonymous implementation class approach:
new A() {
    // implement a() here
}
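The two approaches can be compared in a compilable sketch. A and B are the hypothetical names from the notes above; the method body and class name AnonymousDemo are mine:

```java
// Hypothetical single-method interface A, as in the notes.
interface A {
    String a();
}

// 1) Regular approach: a named class B implements A and overrides a().
class B implements A {
    @Override
    public String a() { return "from B"; }
}

public class AnonymousDemo {
    public static void main(String[] args) {
        // Instantiate the named implementation.
        A b = new B();

        // 2) Anonymous implementation class: implement A in place,
        // without declaring a named class. This is the pattern used
        // for FlatMapFunction and KeySelector in the code above.
        A anon = new A() {
            @Override
            public String a() { return "from anonymous class"; }
        };

        System.out.println(b.a());    // prints "from B"
        System.out.println(anon.a()); // prints "from anonymous class"
    }
}
```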
二、Unbounded Stream
1、Code
package wc;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class StreamWordCount {
    public static void main(String[] args) throws Exception {
        // TODO 1. Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // TODO 2. Read data from a socket
        DataStreamSource<String> lineDataStream = env.socketTextStream("hadoop1", 7777);

        // TODO 3. Process the data
        SingleOutputStreamOperator<Tuple2<String, Integer>> sum = lineDataStream
                .flatMap((String value, Collector<Tuple2<String, Integer>> out) -> {
                    String[] words = value.split("\\s+");
                    for (String word : words) {
                        out.collect(Tuple2.of(word, 1));
                    }
                })
                .returns(Types.TUPLE(Types.STRING, Types.INT)) // generics are erased, so flatMap's output type must be declared
                .keyBy(value -> value.f0) // with a single parameter, the lambda's parameter type can be omitted
                .sum(1);

        // TODO 4. Print
        sum.print();

        // TODO 5. Start execution
        env.execute(); // default parallelism is the machine's number of CPU cores
    }
}
2、Start netcat on the Hadoop host
nc -lk 7777
3、Error
1) Cause: generic type erasure
When flatMap is written as a lambda, the type arguments of the Collector are erased, so Flink cannot infer the output type.
2) Fix: add a returns(...) call after flatMap to declare the output type explicitly, e.g. returns(Types.TUPLE(Types.STRING, Types.INT)).
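The erasure difference between an anonymous class and a lambda can be reproduced with plain JDK reflection, without Flink. This sketch (class name ErasureDemo is mine) shows why the lambda version of flatMap needs .returns(...) while the anonymous-class version in BoundedStreamWordCount does not:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.Function;

public class ErasureDemo {
    public static void main(String[] args) {
        // Anonymous class: the compiler records Function<String, Integer> as the
        // generic superinterface of the generated class, so the type arguments
        // remain recoverable by reflection (this is how Flink infers output types).
        Function<String, Integer> anon = new Function<String, Integer>() {
            @Override
            public Integer apply(String s) { return s.length(); }
        };
        Type anonIface = anon.getClass().getGenericInterfaces()[0];
        System.out.println(anonIface instanceof ParameterizedType); // true: String/Integer preserved

        // Lambda: the synthetic class implements only the raw Function interface;
        // the String/Integer arguments are erased, so a framework cannot recover
        // them and needs an explicit hint such as .returns(...).
        Function<String, Integer> lambda = String::length;
        Type lambdaIface = lambda.getClass().getGenericInterfaces()[0];
        System.out.println(lambdaIface instanceof ParameterizedType); // false: type arguments erased
    }
}
```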