
Flink 1.7.2 Installation Guide
Published: 2021-05-09 09:32:47
Category: Blog
##Flink 1.7.2 Installation
A Java environment is required. Download Flink from https://flink.apache.org/downloads.html
#1、Standalone (single machine)
#Create the flink user
useradd flink -d /home/flink
echo "flink123" | passwd flink --stdin
#Extract the archive
tar -zxvf flink-1.7.2-bin-hadoop26-scala_2.11.tgz
#Start
cd flink-1.7.2/bin/ && ./start-cluster.sh
#Test
1、Check the web UI at ip:8081, e.g. http://192.168.88.132:8081
2、Run the WordCount example: cd flink-1.7.2/bin/ && ./flink run ../examples/batch/WordCount.jar
3、Check the processes with jps
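Besides the web UI, the same port also serves Flink's monitoring REST API, which is handy for a scripted check (a minimal sketch, reusing the single-machine IP and port from the example above; endpoint names are taken from the Flink monitoring REST API):
curl http://192.168.88.132:8081/overview        #cluster summary: TaskManagers, slots, job counts
curl http://192.168.88.132:8081/jobs/overview   #lists jobs, e.g. the finished WordCount run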
#2、Standalone cluster
#Prepare the machines in /etc/hosts
192.168.88.130 lgh
192.168.88.131 lgh1
192.168.88.132 lgh2
#Create the flink user (on all machines)
useradd flink -d /home/flink
echo "flink123" | passwd flink --stdin
#Passwordless SSH login (run on one machine, here 192.168.88.130)
su - flink
ssh-keygen -t rsa
ssh-copy-id 192.168.88.131
ssh-copy-id 192.168.88.132
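Before distributing Flink it is worth confirming that the passwordless login works (a quick sketch, run as the flink user on 192.168.88.130):
ssh 192.168.88.131 hostname    #should print lgh1 without asking for a password
ssh 192.168.88.132 hostname    #should print lgh2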
#Extract the archive
tar -zxvf flink-1.7.2-bin-hadoop26-scala_2.11.tgz
cd flink-1.7.2/conf
#Edit the configuration files
#1、masters
192.168.88.130:8081
#2、slaves
192.168.88.131
192.168.88.132
#3、flink-conf.yaml
cat flink-conf.yaml | grep -v ^# | grep -v "^$"
jobmanager.rpc.address: 192.168.88.130
jobmanager.rpc.port: 6123
env.java.home: /usr/java/default
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
rest.port: 8081
#Distribute flink to the other nodes
scp -r flink-1.7.2 flink@192.168.88.131:/home/flink
scp -r flink-1.7.2 flink@192.168.88.132:/home/flink
#Start
On the master node: cd flink-1.7.2/bin/ && ./start-cluster.sh
#Test
1、Check the web UI at ip:8081 on the master, e.g. http://192.168.88.130:8081
2、Run the WordCount example: cd flink-1.7.2/bin/ && ./flink run ../examples/batch/WordCount.jar
3、Check the processes with jps
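For reference, with Flink 1.7 the standalone processes usually show up under the following names in jps (an assumption based on this Flink version; the exact class names may differ in other releases):
jps | grep -E 'StandaloneSessionClusterEntrypoint|TaskManagerRunner'
#StandaloneSessionClusterEntrypoint = the JobManager on 192.168.88.130
#TaskManagerRunner = a TaskManager on each of 192.168.88.131 and 192.168.88.132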
#3、YARN-based cluster (requires Hadoop to be installed first)
Hadoop installation: see https://www.cnblogs.com/zsql/p/10736420.html
On top of the standalone cluster, add the following:
#Configure the Hadoop environment variables in /etc/profile or ~/.bashrc
export HADOOP_HOME=/apps/opt/cloudera/parcels/CDH/lib/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
Then apply the file with the source command.
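A quick sanity check after sourcing the file (a sketch; it only confirms the variables are set and that HADOOP_HOME points at a working installation):
source /etc/profile
echo $HADOOP_CONF_DIR              #should print /etc/hadoop/conf
$HADOOP_HOME/bin/hadoop version    #should print the Hadoop version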
#Distribute flink to the other nodes
scp -r flink-1.7.2 flink@192.168.88.131:/home/flink
scp -r flink-1.7.2 flink@192.168.88.132:/home/flink
#Start the cluster and a yarn-session
cd flink-1.7.2 && ./bin/start-cluster.sh
cd flink-1.7.2 && nohup ./bin/yarn-session.sh &
#Test
1、Check the processes with jps
2、Run a job: ./bin/flink run -m yarn-cluster -yn 2 ./examples/batch/WordCount.jar
Then check the job on the Hadoop YARN web UI at ip:8088
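Since a yarn-session was started above, jobs can also be submitted to that long-running session instead of spinning up a per-job cluster with -m yarn-cluster (a sketch; the application id below is a placeholder that has to be looked up first, and the -yid option is the flink CLI flag for attaching to a running YARN session):
yarn application -list                                                               #find the Flink session's application id
./bin/flink run -yid application_XXXXXXXXXXXXX_XXXX ./examples/batch/WordCount.jar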
#4、High availability (install ZooKeeper, or edit conf/zoo.cfg; installing ZooKeeper is recommended)
ZooKeeper installation: see https://www.cnblogs.com/zsql/p/10736420.html
On top of the YARN-based cluster, change the configuration as follows:
#1、masters
192.168.88.130:8081
192.168.88.131:8082
#2、flink-conf.yaml (pay close attention to the spaces after the colons; this has tripped me up before)
jobmanager.rpc.address: 192.168.88.130
jobmanager.rpc.port: 6123
env.java.home: /usr/java/default
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
high-availability: zookeeper
high-availability.zookeeper.path.root: /user/flink/root
high-availability.storageDir: hdfs:///user/flink/ha/   #the flink user must have permission on this directory
high-availability.zookeeper.quorum: 192.168.88.130:2181,192.168.88.131:2181,192.168.88.132:2181
rest.port: 8081
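Since high-availability.storageDir points at HDFS and the flink user must be able to write there, the directory can be prepared like this (a sketch; run it as a user with HDFS superuser rights, e.g. the hdfs user on CDH):
hdfs dfs -mkdir -p /user/flink/ha
hdfs dfs -chown -R flink /user/flink/ha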
#Distribute flink to the other nodes
scp -r flink-1.7.2 flink@192.168.88.131:/home/flink
scp -r flink-1.7.2 flink@192.168.88.132:/home/flink
#Test as with the YARN cluster above
Additional test: kill one of the master (JobManager) processes and check whether jobs can still run; a quick sketch follows.
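A minimal way to run that failover check (a sketch; it assumes the Flink 1.7 JobManager process name from the note above, and <jobmanager_pid> is a placeholder to be read from the jps output):
jps                                              #note the PID of the leading JobManager (StandaloneSessionClusterEntrypoint)
kill <jobmanager_pid>                            #kill the leader
./bin/flink run ./examples/batch/WordCount.jar   #should still succeed once the standby JobManager takes over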