Hi Gang,

I've been trying to get some Flink code running in Amazon Web Services' Elastic MapReduce, but so far the only success I've had required me to log into the master node, download my jar from S3 to it, and then run it on the master node from the command line using something like the following:

    % bin/flink run -m yarn-cluster -yn 2 -p 4 <my jar name> <my main program arguments>

The two other approaches I've tried (based on the AWS EMR Flink documentation) that didn't work were:

1) Add an EMR Step to launch my program as part of a Flink session. I couldn't figure out how to get my job jar deployed as part of the step, and I couldn't successfully configure a Bootstrap Action to deploy it before running that step.

2) Start a long-running Flink session via an EMR Step (which worked) and then use the Flink Web UI to upload my job jar from my workstation. This killed the ApplicationMaster that was running the Flink Web UI without providing much interesting logging. I've appended both the container log output and the jobmanager.log contents to the end of this email.

In addition, it would be nice to gain access to S3 resources using credentials. I've tried using an AmazonS3ClientBuilder and passing an EnvironmentVariableCredentialsProvider to its setCredentials method. I'd hoped that this might pick up the credentials I set up on my master node in the $

Here's a list of interesting version numbers:

    flink-java-1.2.0.jar
    flink-core-1.2.0.jar
    flink-annotations-1.2.0.jar
    emr-5.4.0 with Flink 1.2.0 installed

Any help would be greatly appreciated. I'd love an example showing how to deploy a simple Flink jar from S3 to a running EMR cluster and then get Flink to launch it with an arbitrary set of Flink and user arguments. Bonus points for setting up an AmazonS3 Java client object without including those credentials within my Java source code.

Best Regards,

- Chris

Here's the container logging from my attempt to submit my job via the Flink web UI:

    Application application_1496707031947_0002 failed 1 times due to AM Container for appattempt_1496707031947_0002_000001 exited with exitCode: 255

There's a bunch of startup messages in the jobmanager.log, but only the following output was generated by my attempt to submit my Flink job:

    2017-06-06 00:41:55,332 INFO  org.apache.flink.runtime.blob.BlobServer - Stopped BLOB server at 0.0.0.0:44948

-----------------------------------------
Chris Schneider
http://www.scaleunlimited.com
custom big data solutions
-----------------------------------------
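For reference, the environment-variable attempt described in the message above would look roughly like the following (a minimal sketch, assuming the AWS SDK for Java 1.x; withCredentials is the fluent equivalent of the setCredentials call mentioned):

    import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    // EnvironmentVariableCredentialsProvider reads AWS_ACCESS_KEY_ID and
    // AWS_SECRET_ACCESS_KEY from the environment of the JVM process. Note
    // that a Flink task running in a YARN container will not inherit
    // variables exported only in the master node's login shell. On EC2 the
    // region can usually be resolved automatically; otherwise add
    // .withRegion(...) before build().
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new EnvironmentVariableCredentialsProvider())
        .build();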
1) Since the jar is only required on the master node, you should be able to just run a step with a very simple script like 'bash -c "aws s3 cp s3://mybucket/myjar.jar ."'. You could do that using a step similar to the one outlined in the EMR documentation, but replacing the withArgs arguments with the command above (I think there's an example of this on the same EMR docs page you refer to). Then add another step after that which actually runs the Flink job; the jar will be located in /home/hadoop. (A sketch of that second step follows below.) In the future, I'm hoping this can be simplified to flink run -yn 2 -p 4 s3://mybucket/myjar.jar ..., but that doesn't seem to be possible right now.
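The second step might look something like this (a sketch, assuming the previous step copied the jar to /home/hadoop; myjar.jar and the flink flags are placeholders to adapt):

    // Hypothetical second EMR step: run the Flink job that the previous
    // step copied to /home/hadoop on the master node. Assumes the flink
    // launcher is on the PATH there; otherwise use the full path to the
    // Flink installation's bin/flink.
    HadoopJarStepConfig runFlinkJob = new HadoopJarStepConfig()
        .withJar("command-runner.jar")
        .withArgs("bash", "-c",
            "flink run -m yarn-cluster -yn 2 -p 4 /home/hadoop/myjar.jar");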
2) If you ran this as a step, you should be able to see the error the Flink driver gives in the step's logs.
3) Provided your S3 bucket and your EMR cluster's EC2 IAM role ("instance profile") belong to the same account (or at least the permissions are set up such that you can download a file from S3 to your EC2 instances), you should be able to use the DefaultAWSCredentialsProviderChain, which won't require you to enter any credentials, as it falls through to the EC2 instance profile credentials provider. (See the sketch below.)
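In code, that might look something like this (a minimal sketch, assuming the AWS SDK for Java 1.x; the region is a placeholder):

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    // The default chain tries environment variables, Java system properties,
    // the shared credentials file, and finally the EC2 instance profile,
    // so no keys need to appear in source code.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .withRegion("us-east-1")  // placeholder region
        .build();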
Hope that helps.
Thanks,
Craig
Ah, maybe (1) wasn't entirely clear, so here's the copy/pasted example with what I suggested:

    HadoopJarStepConfig copyJar = new HadoopJarStepConfig()
        .withJar("command-runner.jar")
        .withArgs("bash", "-c", "aws s3 cp s3://mybucket/myjar.jar /home/hadoop");
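To submit that step to a running cluster, something like the following should work (a sketch; the step name and the cluster ID are hypothetical placeholders):

    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
    import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;
    import com.amazonaws.services.elasticmapreduce.model.StepConfig;

    // Wrap the HadoopJarStepConfig above in a named step and submit it to
    // an existing cluster ("j-XXXXXXXXXXXXX" is a placeholder cluster ID).
    AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();
    StepConfig copyStep = new StepConfig()
        .withName("Copy Flink job jar from S3")
        .withActionOnFailure("CONTINUE")
        .withHadoopJarStep(copyJar);
    emr.addJobFlowSteps(new AddJobFlowStepsRequest()
        .withJobFlowId("j-XXXXXXXXXXXXX")
        .withSteps(copyStep));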
"Foster, Craig" <[hidden email]> 1)
Since the jar is only required on the master node you should be able to just run a step with a very simple script like ‘bash –c “aws s3 cp s3://mybucket/myjar.jar .”’ So if you were to do that using the step similar to outlined in the EMR documentation, but replacing withArgs with the above command as args (I think there’s an example of this on that
same EMR docs page you refer to). Then add another step after that which actually runs the flink job. The jar will be located in /home/hadoop. In the future, I’m hoping this can just be simplified to flink run -yn 2 -p
4 s3://mybucket/myjar.jar … but it doesn’t seem to be the case right now. 2)
If you ran this as a step, you should be able to see the error the Flink driver gives in the step’s logs.
3)
Provided your S3 bucket and EMR cluster EC2 IAM role/”instance profile” belong to the same account (or at least the permissions are setup such that you can download a file from S3 to
your EC2 instances), you should be able to use the
DefaultAWSCredentialsProviderChain, which won’t require you enter any credentials as it uses the EC2 instance profile credentials provider.
Hope that helps.
Thanks, Craig From:
Chris Schneider <[hidden email]> Hi Gang, I’ve been trying to get some Flink code running in Amazon Web Services’s Elastic MapReduce, but so far the only success I’ve had required me to log into the master node, download my jar from S3 to there, and then run it on the master node
from the command line using something like the following: % bin/flink run -m yarn-cluster -yn 2 -p 4 <my jar name> <my main program arguments> The two other approaches I’ve tried (based on the AWS EMR Flink documentation) that didn’t work were: 1) Add an EMR Step to launch my program as part of a Flink session - I couldn’t figure out how to get my job jar deployed as part of the step, and I couldn’t successfully configure a Bootstrap
Action to deploy it before running that step. 2) Start a Long-Running Flink Session via an EMR Step (which worked) and then use the Flink Web UI to upload my job jar from my workstation - It killed the ApplicationMaster that was running the Flink Web UI without providing much interesting
logging. I’ve appended both the container log output and the jobmanager.log contents to the end of this email. In addition, it would be nice to gain access to S3 resources using credentials. I’ve tried using an AmazonS3ClientBuilder,
and passing an EnvironmentVariableCredentialsProvider to its setCredentials method.
I’d hoped that this might pick up the credentials I set up on my master node in the $ Here’s a list of interesting version numbers: flink-java-1.2.0.jar flink-core-1.2.0.jar flink-annotations-1.2.0.jar emr-5.4.0 with Flink 1.2.0 installed Any help would be greatly appreciated. I’m lusting after an example showing how to deploy a simple Flink jar from S3 to a running EMR cluster and then get Flink to launch it with an arbitrary set of Flink and user arguments. Bonus points
for setting up an AmazonS3 Java client object without including those credentials within my Java source code. Best Regards, - Chris Here’s the container logging from my attempt to submit my job via the Flink web UI: Application application_1496707031947_0002 failed 1 times due to AM Container for appattempt_1496707031947_0002_000001 exited with exitCode: 255 There's a bunch of startup messages in the jobmanager.log, but only the following output was generated by my attempt to submit my Flink job: 2017-06-06 00:41:55,332 INFO org.apache.flink.runtime.blob.BlobServer - Stopped BLOB server at 0.0.0.0:44948 ----------------------------------------- Chris Schneider http://www.scaleunlimited.com ----------------------------------------- |