
ambari-flink-service's Introduction

An Ambari Service for Flink

Ambari service for easily installing and managing Flink on HDP clusters. Apache Flink is an open source platform for distributed stream and batch data processing. More details on Flink and how it is being used in industry today are available here: http://flink-forward.org/?post_type=session

The Ambari service lets you easily install/compile Flink on HDP 2.6.5.

  • Features:
    • By default, downloads a prebuilt package of Flink 1.8.1, but also offers the option to build the latest Flink from source instead
    • Exposes flink-conf.yaml in Ambari UI

Limitations:

  • This is not an officially supported service and is not meant to be deployed in production systems. It is only meant for testing/demo purposes
  • It does not support Ambari/HDP upgrade process and will cause upgrade problems if not removed prior to upgrade

Author: Ali Bajwa

  • Thanks to Davide Vergari for enhancing it to run in clustered environments
  • Thanks to Ben Harris for updating libraries to work with HDP 2.5.3
  • Thanks to Anand Subramanian for updating libraries to work with HDP 2.6.5 and flink version 1.8.1
  • Thanks to jzyhappy for updating libraries to work with HDP 2.6.5 and flink version 1.9.1
Setup

  • Download HDP 2.6 sandbox VM image (HDP_2.6.5_virtualbox_180626.ova) from the Cloudera website
  • Import HDP_2.6.5_virtualbox_180626.ova into VMWare and set the VM memory size to 8GB
  • Now start the VM
  • After it boots up, find the IP address of the VM and add an entry to your machine's hosts file. For example:
192.168.191.241 sandbox.hortonworks.com sandbox    
  • Note that you will need to replace the above with the IP for your own VM

  • Connect to the VM via SSH (password hadoop)

  • To download the Flink service folder, run the below:
VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
sudo git clone https://github.com/abajwa-hw/ambari-flink-service.git   /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK   
  • Restart Ambari
#sandbox
service ambari restart

#non sandbox
sudo service ambari-server restart
  • Then you can click on 'Add Service' from the 'Actions' dropdown menu in the bottom left of the Ambari dashboard:

On bottom left -> Actions -> Add service -> check Flink server -> Next -> Next -> Change any config you like (e.g. install dir, memory sizes, num containers or values in flink-conf.yaml) -> Next -> Deploy

  • By default:

    • Container memory is 1024 MB
    • Job manager memory is 768 MB
    • Number of YARN containers is 1

    These defaults map to the -tm, -jm, and -n flags that the service passes to yarn-session.sh at startup.
  • On successful deployment you will see the Flink service as part of the Ambari stack and will be able to start/stop it from here: Image

  • You can see the parameters you configured under 'Configs' tab Image

  • One benefit of wrapping the component in an Ambari service is that you can now monitor/manage the service remotely via the REST API:

export SERVICE=FLINK
export PASSWORD=admin
export AMBARI_HOST=localhost

#detect name of cluster
output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari'  http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`


#get service status
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X GET http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

#start service
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

#stop service
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
  • ...and you can also install the service via an Ambari Blueprint; a sketch of the registration call follows. See the Ambari documentation on how to deploy custom services via Blueprints.
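A minimal sketch of the Blueprint registration call, reusing the REST variables above. The blueprint and host-group names and the FLINK_MASTER component name are assumptions here (check the service's metainfo.xml), and a real blueprint would list the rest of your stack's components alongside it:

#register a blueprint that includes Flink (sketch only)
curl -u admin:$PASSWORD -H 'X-Requested-By: ambari' -X POST -d '{
  "Blueprints": {"blueprint_name": "flink-blueprint", "stack_name": "HDP", "stack_version": "2.6"},
  "host_groups": [{
    "name": "host_group_1",
    "cardinality": "1",
    "components": [{"name": "FLINK_MASTER"}]
  }]
}' http://$AMBARI_HOST:8080/api/v1/blueprints/flink-blueprint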

Set Flink version

  • configuration/flink-ambari-config.xml
<property>
    <name>flink_download_url</name>
    <value>http://X.X.151.15/Package/flink-1.9.0-bin-scala_2.11.tgz</value>
    <description>Snapshot download location. Downloaded when setup_prebuilt is true</description>
</property>

Set the value to a release from http://apachemirror.wuchna.com/flink/, http://www.us.apache.org/dist/flink/, or https://archive.apache.org/dist/flink/, or point it at a custom repo.
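Whichever mirror you choose, it is worth a quick reachability check before saving the property; a small sketch (the 1.9.0 path is just an example):

#confirm the tarball URL responds before pointing flink_download_url at it
URL=https://archive.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.11.tgz
curl -sIL "$URL" | head -n 1    #expect an HTTP 200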

  • metainfo.xml
<name>FLINK</name>
<displayName>Flink</displayName>
<comment>Apache Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams.</comment>
<version>1.9.0</version>

Set <version> to your Flink version.

Flink on YARN

  • yarn-site.xml (set via the YARN service configs in Ambari)
<property>
	<name>yarn.client.failover-proxy-provider</name>
	<value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>

Restart YARN for the change to take effect.
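If you prefer the REST route over the UI, YARN can be bounced with the same curl pattern used for the Flink service above (reuses $PASSWORD, $AMBARI_HOST, and $CLUSTER from the earlier snippet):

#stop YARN
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop YARN via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/YARN

#start YARN
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start YARN via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/YARN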

Flink Configuration

Image

  • Set java_home consistent with /etc/profile. You can check the installed HDP client version with:
hdp-select status hadoop-client
hadoop-client - <version>
  • Set hadoop_conf_dir = /etc/hadoop/<version>/0, using the version reported above
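To derive that directory on your own node, a small sketch (assumes the hdp-select output format shown above, with the version in the third field):

#compute hadoop_conf_dir from the installed HDP version
VERSION=$(hdp-select status hadoop-client | awk '{print $3}')
ls -d /etc/hadoop/$VERSION/0    #use this path as hadoop_conf_dir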

Use Flink

  • Run word count job
su flink
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH=`hadoop classpath`
cd /opt/flink
./bin/flink run --jobmanager yarn-cluster -yn 1 -ytm 768 -yjm 768 ./examples/batch/WordCount.jar
  • This should generate a series of word counts Image

  • Open the YARN ResourceManager UI. Notice Flink is running on YARN (a CLI check is sketched after this list) Image

  • Click the ApplicationMaster link to access the Flink web UI Image

  • Use the History tab to review details of the job that ran: Image

  • View metrics in the Task Manager tab: Image
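As a CLI alternative to the ResourceManager UI, you can confirm the session from a shell (yarn application -list is a standard YARN command; flinkapp-from-ambari is the app name the service uses by default):

#list running YARN applications and look for the Flink session
yarn application -list | grep -i flink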


Remove service

  • To remove the Flink service:
    • Stop the service via Ambari
    • Unregister the service
export SERVICE=FLINK
export PASSWORD=admin
export AMBARI_HOST=localhost

# detect name of cluster
output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari'  http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`

curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

If the above errors out, run the below first to fully stop the service:

curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
  • Remove artifacts
rm -rf /opt/flink*
rm /tmp/flink.tgz
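To make Ambari forget the service definition entirely, also remove the service folder cloned during setup and restart ambari-server; a sketch mirroring the install steps above:

#remove the service definition and restart Ambari (non-sandbox form)
VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
sudo rm -rf /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
sudo service ambari-server restart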

ambari-flink-service's People

Contributors

abajwa-hw, anandsubbu, arnd, jamesbenharris, jzyhappy, nielszeilemaker, zhangyyun


ambari-flink-service's Issues

wget fails due to proxy

The "install service" can fail due at flink.py Line 45 due to the proxy variables not being set

This happens despite adding proxy variables to both AMBARI_JVM_ARGS and http_proxy in /var/lib/ambari-server/ambari-env.sh.

A workaround is to run this command before installing the service:

wget http://www.us.apache.org/dist/flink/flink-1.1.2/flink-1.1.2-bin-hadoop27-scala_2.11.tgz -O /tmp/flink.tgz
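If you want the pre-download itself to go through your proxy, export it into the shell first; the proxy host and port below are placeholders:

#hypothetical proxy settings -- substitute your own proxy host/port
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128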

How would you submit job to Flink Application running on Yarn (flinkapp-from-ambari)

Executing your test command

./bin/flink run --jobmanager yarn-cluster -yn 1 -ytm 768 -yjm 768 ./examples/batch/WordCount.jar

will create a new application called "Flink Application: org.apache.flink.examples.java.wordcount.WordCount" (according to the YARN ResourceManager UI) instead of submitting the job to the "flinkapp-from-ambari" application.

So, how would you submit a job to "flinkapp-from-ambari" that's started by Ambari Service?
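One approach that should work (a sketch, not something from this repo): attach flink run to the existing session's YARN application ID with the -yid flag, so the job goes to flinkapp-from-ambari instead of a fresh cluster:

#find the application id of the running session, then submit the job to it
APPID=$(yarn application -list | awk '/flinkapp-from-ambari/ {print $1}')
./bin/flink run -yid "$APPID" ./examples/batch/WordCount.jar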

Cannot import name format_hdp_stack_version

Hello! I have a problem when installing Flink on Hortonworks. Please help me solve it.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/FLINK/package/scripts/flink.py", line 164, in <module>
    Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/FLINK/package/scripts/flink.py", line 12, in install
    import params
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/FLINK/package/scripts/params.py", line 5, in <module>
    from resource_management.libraries.functions.version import format_hdp_stack_version
ImportError: cannot import name format_hdp_stack_version
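In newer Ambari releases this helper was renamed to format_stack_version, so one hedged workaround is to patch the service's params.py (verify the function name in your resource_management library first):

#rename the import in the cached service script (path taken from the traceback)
sudo sed -i 's/format_hdp_stack_version/format_stack_version/g' /var/lib/ambari-agent/cache/stacks/HDP/2.5/services/FLINK/package/scripts/params.py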

Ambari Flink Service HDP3

I forked your service and brought it into HDP3 via a Management Pack: dfhz_hdp_mpack. I was successful with just a few adjustments to the start function. For some reason the -d (detached) and -nm (app name) flags at the back of the cmd were not being honored. My solution was to move them to the front.

cmd = format("export HADOOP_CONF_DIR={hadoop_conf_dir}; export HADOOP_CLASSPATH={hadoop_classpath}; {bin_dir}/yarn-session.sh -d -nm {flink_appname} -n {flink_numcontainers} -s {flink_numberoftaskslots} -jm {flink_jobmanager_memory} -tm {flink_container_memory} -qu {flink_queue}")

taskmanager.numberOfTaskSlots doesn't work in YARN session

When Flink is started as an Ambari service (YARN session), the command is launched without the -s parameter, which controls taskmanager.numberOfTaskSlots:

Execute['export HADOOP_CONF_DIR=/etc/hadoop/conf; /opt/flink/bin/yarn-session.sh -n 10 -jm 1024 -tm 2048 -qu default -nm flinkapp-from-ambari -d -st >> /var/log/flink/flink-setup.log'] {'user': 'flink'}

This is a problem because, with this configuration, one container is needed for each parallel task and the number of slots cannot be changed.
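A hedged fix is to launch the session with an explicit -s value, mirroring the logged command (the slot count of 2 is only an example):

#same launch command with slots-per-TaskManager set explicitly via -s
export HADOOP_CONF_DIR=/etc/hadoop/conf
/opt/flink/bin/yarn-session.sh -n 10 -s 2 -jm 1024 -tm 2048 -qu default -nm flinkapp-from-ambari -d >> /var/log/flink/flink-setup.log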

flink.tgz cannot be extracted

flink.tgz cannot be extracted, so the configured Flink install directory ends up empty and the subsequent steps fail.
Looking at the source, in params.py the temp file is hard-coded as tem_file='/tmp/flink.gz'.
