Post Category - Flink
Abstract: 1. Code: import java.sql.{Connection, DriverManager, PreparedStatement} import org.apache.flink.configuration.Configuration import org.apache.flink.streamin
Read more
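Judging from the imports in this summary, the post builds a JDBC sink as a RichSinkFunction that opens a connection in open() and reuses a PreparedStatement per record. A minimal sketch under that assumption; the MySQL URL, credentials, and the sensor_temp(id, temp) table are placeholders, not taken from the post:

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

// Sketch of a JDBC sink: connection and statement are created once per
// parallel instance in open() and closed in close().
class MyJdbcSink extends RichSinkFunction[(String, Double)] {
  var conn: Connection = _
  var insertStmt: PreparedStatement = _

  override def open(parameters: Configuration): Unit = {
    // placeholder connection details
    conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "user", "password")
    insertStmt = conn.prepareStatement("INSERT INTO sensor_temp (id, temp) VALUES (?, ?)")
  }

  override def invoke(value: (String, Double)): Unit = {
    insertStmt.setString(1, value._1)
    insertStmt.setDouble(2, value._2)
    insertStmt.execute()
  }

  override def close(): Unit = {
    insertStmt.close()
    conn.close()
  }
}
```

A stream of (id, temperature) tuples would then be written with stream.addSink(new MyJdbcSink).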
Abstract: 1. Code: import java.util import org.apache.flink.api.common.functions.RuntimeContext import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
Read more
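The java.util and RuntimeContext imports suggest this post wires up an Elasticsearch sink (those two types appear in the ElasticsearchSinkFunction signature). A sketch assuming the flink-connector-elasticsearch6 connector, a local node on port 9200, and a hypothetical "sensor" index:

```scala
import java.util
import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.elasticsearch.{ElasticsearchSinkFunction, RequestIndexer}
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
import org.apache.http.HttpHost
import org.elasticsearch.client.Requests

object EsSinkTest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream = env.fromElements("sensor_1,35.8", "sensor_2,15.4") // placeholder data

    val httpHosts = new util.ArrayList[HttpHost]()
    httpHosts.add(new HttpHost("localhost", 9200)) // placeholder ES host

    // for each record, build an index request and hand it to the indexer
    val esSinkFunc = new ElasticsearchSinkFunction[String] {
      override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer): Unit = {
        val json = new util.HashMap[String, String]()
        json.put("data", element)
        indexer.add(Requests.indexRequest().index("sensor").`type`("_doc").source(json))
      }
    }

    stream.addSink(new ElasticsearchSink.Builder[String](httpHosts, esSinkFunc).build())
    env.execute("es sink test")
  }
}
```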
Abstract: 1. Code: import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment import org.apache.flink.streaming.connectors.redis.RedisSink import org.apac
Read more
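The RedisSink import points at the Bahir Redis connector, which needs a Jedis pool config plus a RedisMapper describing the command to issue. A sketch under that assumption; the host, the HSET command, and the "sensor_temp" hash key are illustrative placeholders:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.redis.RedisSink
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig
import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}

// maps each (id, temperature) tuple to an HSET on the "sensor_temp" hash
class MyRedisMapper extends RedisMapper[(String, String)] {
  override def getCommandDescription: RedisCommandDescription =
    new RedisCommandDescription(RedisCommand.HSET, "sensor_temp")
  override def getKeyFromData(data: (String, String)): String = data._1
  override def getValueFromData(data: (String, String)): String = data._2
}

object RedisSinkTest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream = env.fromElements(("sensor_1", "35.8"), ("sensor_2", "15.4")) // placeholder data
    val conf = new FlinkJedisPoolConfig.Builder().setHost("localhost").setPort(6379).build()
    stream.addSink(new RedisSink[(String, String)](conf, new MyRedisMapper))
    env.execute("redis sink test")
  }
}
```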
Abstract: Approach 1: read a file and write it to Kafka. 1. Code: import org.apache.flink.api.common.serialization.SimpleStringSchema import org.apache.flink.streaming.api.scala.StreamExecutionEn
Read more
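For "read a file and write it to Kafka", the Flink 1.7-era producer is FlinkKafkaProducer011 with a SimpleStringSchema. A sketch of that approach; the input path, broker address, and topic name are assumptions:

```scala
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011

object KafkaSinkTest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // read lines from a text file (path is a placeholder)
    val stream = env.readTextFile("sensor.txt")
    // write each line as-is to the "sinktest" topic on a local broker
    stream.addSink(
      new FlinkKafkaProducer011[String]("localhost:9092", "sinktest", new SimpleStringSchema()))
    env.execute("kafka sink test")
  }
}
```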
Abstract: import org.apache.flink.api.common.functions.FilterFunction import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment object TransformTest
Read more
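The FilterFunction import suggests the post demonstrates transform operators with a filter written as a named class rather than a lambda. A sketch along those lines; the "sensor_1" predicate and input file are hypothetical:

```scala
import org.apache.flink.api.common.functions.FilterFunction
import org.apache.flink.streaming.api.scala._

// a FilterFunction as a standalone class, matching the summary's imports
class MyFilter extends FilterFunction[String] {
  override def filter(value: String): Boolean = value.startsWith("sensor_1")
}

object TransformTest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream = env.readTextFile("sensor.txt") // placeholder input file
    // keep only the lines the filter accepts, then print them
    stream.filter(new MyFilter).print()
    env.execute("transform test")
  }
}
```

The same filter could be written inline as stream.filter(_.startsWith("sensor_1")); the class form is useful when the predicate carries configuration.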
Abstract: Code: import java.util.Properties import org.apache.flink.api.common.serialization.SimpleStringSchema import org.apache.flink.streaming.api.scala.StreamExe
Read more
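The Properties plus SimpleStringSchema imports are the usual setup for a Kafka source in Flink 1.7. A sketch assuming FlinkKafkaConsumer011, a local broker, and a hypothetical "sensor" topic:

```scala
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

object KafkaSourceTest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // placeholder broker
    props.setProperty("group.id", "consumer-group")          // placeholder group id

    // consume the "sensor" topic (topic name is an assumption) as plain strings
    val stream = env.addSource(
      new FlinkKafkaConsumer011[String]("sensor", new SimpleStringSchema(), props))

    stream.print()
    env.execute("kafka source test")
  }
}
```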
Abstract: ① Download flink-1.7.2-bin-hadoop27-scala_2.11.tgz, extract it, go into bin, and double-click start-cluster.bat (Windows environment). Then open port 7777. ② Visit http://localhost:8081; the web UI appears (screenshot omitted). ③ ④ ⑤ Click Submit (screenshots omitted). ⑥ The result screen is as follows
Read more
Abstract: Step 1: prepare netcat (can be skipped on Linux). Since the code results here are verified on Windows, netcat must be installed to provide the nc command; an installation guide is at https://blog.csdn.net/BoomLee/article/details/102563472 Step 2:
Read more
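With nc listening, the post presumably runs a streaming word count against a socket source. A sketch assuming the host localhost and port 7777 mentioned elsewhere in this category:

```scala
import org.apache.flink.streaming.api.scala._

object StreamWordCount {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // read lines from the netcat listener prepared in step 1
    val text = env.socketTextStream("localhost", 7777)
    val counts = text
      .flatMap(_.split("\\s+"))   // split each line into words
      .filter(_.nonEmpty)
      .map((_, 1))                // pair each word with a count of 1
      .keyBy(0)                   // key by the word
      .sum(1)                     // running sum of the counts
    counts.print()
    env.execute("stream word count")
  }
}
```

Typing words into the nc session then produces incrementally updated (word, count) pairs in the job's output.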
Abstract: Code: import org.apache.flink.api.scala._ object WordCount1 { def main(args: Array[String]): Unit = { // create the execution environment val env = ExecutionEnvironment.getExecutionE
Read more
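The summary cuts off right after creating the batch execution environment. A completed sketch of a batch WordCount in that style; inline elements stand in for whatever input the original post reads:

```scala
import org.apache.flink.api.scala._

object WordCount1 {
  def main(args: Array[String]): Unit = {
    // create the execution environment (where the summary is truncated)
    val env = ExecutionEnvironment.getExecutionEnvironment
    // inline input instead of a file, so the sketch is self-contained
    val text = env.fromElements("hello flink", "hello world")
    val counts = text
      .flatMap(_.toLowerCase.split("\\W+")) // tokenize into lowercase words
      .filter(_.nonEmpty)
      .map((_, 1))
      .groupBy(0)   // group by the word
      .sum(1)       // total count per word
    counts.print()
  }
}
```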
摘要:<dependencies> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-scala_2.11</artifactId> <version>1.7.2</version> </dependency> <depe
Read more
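The truncated <dependencies> block shows the flink-scala artifact for 1.7.2; a likely completion pairs it with the streaming artifact that the later posts in this category rely on. The second dependency is an assumption, not recovered from the truncated text:

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-scala_2.11</artifactId>
    <version>1.7.2</version>
  </dependency>
  <!-- assumed companion dependency for the streaming examples -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.11</artifactId>
    <version>1.7.2</version>
  </dependency>
</dependencies>
```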