Hi all,
I’m running this command on the master in an EMR cluster:

./bin/flink run -m yarn-cluster -yn 25 -yjm 1024 -ytm 4096 -c <class name> <path to job jar> -planner flink -inputdir xxx

Everything seems to be starting up fine, up to:

All TaskManagers are connected
Using the parallelism provided by the remote cluster (25). To use another parallelism, set it at the ./bin/flink client.

My main class then tries to parse the passed arguments, and fails. The arguments being passed to my main() method aren’t what I expect. I get:

-p lanner flink inputdir s3n://su-wikidump/wikidump-20151112/data/

It looks like Flink is trying to process all of my arguments, so -planner looks like “-p”, and -inputdir gets the ‘-’ consumed before it realizes that there is no ‘-i<anything>’ parameter that it knows about. I was assuming that everything after the jar parameter would be passed through as is.

Any ideas?

Thanks,

— Ken

PS - this is with Flink 1.0.2

--------------------------
Ken Krugler
+1 530-210-6378
custom big data solutions & training
Hadoop, Cascading, Cassandra & Solr
Hi Ken,

I built the parameter parser in my jar to work with '--' instead of '-' and it works fine (on 1.0.0 and on the current master). After a cursory look at the parameter parser Flink uses (http://commons.apache.org/proper/commons-cli/), it seems that double vs. single dash could make a difference, so you could try it.
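For illustration, the invocation from the original post with double dashes on the program arguments (class name and jar path elided as before) would be:

./bin/flink run -m yarn-cluster -yn 25 -yjm 1024 -ytm 4096 -c <class name> <path to job jar> --planner flink --inputdir xxx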
Hi Timur,
Thanks, using ‘--<parameter name>’ seems to work. I’ve filed https://issues.apache.org/jira/browse/FLINK-3838 to allow ‘-<parameter name>’ as well.

— Ken
--------------------------
Ken Krugler
+1 530-210-6378
custom big data solutions & training
Hadoop, Cascading, Cassandra & Solr
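For readers hitting the same issue: once the arguments make it past the CLI intact, a minimal sketch of reading them in the job's main() could look like the following, assuming Flink's ParameterTool is used. The class name WikiDumpJob and the default value are placeholders for illustration, not the actual job code from this thread.

import org.apache.flink.api.java.utils.ParameterTool;

public class WikiDumpJob {

    public static void main(String[] args) throws Exception {
        // ParameterTool splits "--key value" pairs out of the program arguments
        ParameterTool params = ParameterTool.fromArgs(args);

        // "planner" is optional here and falls back to a default; "inputdir" is required
        String planner = params.get("planner", "flink");
        String inputDir = params.getRequired("inputdir");

        // ... build and execute the job using planner and inputDir ...
    }
}

Note that the trouble described in this thread happens before main() runs: the flink CLI consumes the single-dash arguments, so whichever parser runs inside the jar only sees what survives that step.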