Job manager URI rpc address:port


Som Lima
Hi,

After running

$ ./bin/start-cluster.sh

the following line of code defaults the JobManager to localhost:6123:

final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

which is the same as in Spark:

val spark = SparkSession.builder.master("local[*]").appName("anapp").getOrCreate()

However, if I wish to run the servers on a different physical computer, then in Spark I can do it from my IDE using the Spark URI:

val conf = new SparkConf().setMaster("spark://<hostip>:<port>").setAppName("anapp")

Can you please tell me the equivalent change to make so I can run my servers and my IDE on different physical computers?













Re: Job manager URI rpc address:port

tison
You can change the flink-conf.yaml "jobmanager.rpc.address" and "jobmanager.rpc.port" options before running the program, or take a look at RemoteStreamEnvironment, which lets you configure the host and port.
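For reference, the corresponding entries in conf/flink-conf.yaml look roughly like this (the address below is a placeholder for your JobManager machine; 6123 is Flink's default RPC port):

```yaml
# conf/flink-conf.yaml on the JobManager machine
# Bind the JobManager to an address reachable from other machines
# (placeholder IP; use your JobManager host's address).
jobmanager.rpc.address: 192.168.1.10
jobmanager.rpc.port: 6123
```

After editing, restart the cluster (./bin/stop-cluster.sh then ./bin/start-cluster.sh) so the new values take effect.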

Best,
tison.















Re: Job manager URI rpc address:port

Som Lima
Thanks.
flink-conf.yaml does allow me to do what I need without making any changes to the client source code.

But the RemoteStreamEnvironment constructor also expects jar files as its third parameter:

RemoteStreamEnvironment(String host, int port, String... jarFiles)
Creates a new RemoteStreamEnvironment that points to the master (JobManager) described by the given host name and port.
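For what it's worth, a minimal sketch of that route using the batch API's equivalent, ExecutionEnvironment.createRemoteEnvironment (the host, port, and jar path below are placeholders; note that newer Flink clients submit over the REST port, typically 8081, rather than the 6123 RPC port):

```java
import org.apache.flink.api.java.ExecutionEnvironment;

public class RemoteJobSketch {
    public static void main(String[] args) throws Exception {
        // Point the client at a JobManager running on another machine.
        // The jar must contain this program's classes so the client
        // can ship them to the cluster -- hence the varargs parameter.
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "192.168.1.10",           // JobManager host (placeholder)
                8081,                     // submission port (REST on recent versions)
                "/path/to/your-job.jar"); // jar(s) containing the user code

        env.fromElements(1, 2, 3).print();
    }
}
```

The jar parameter exists because, unlike local mode, the remote client cannot assume the cluster already has your classes on its classpath.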














Re: Job manager URI rpc address:port

Zahid Rahman
Hi Tison,

I think I may have found what I want in example 22.

I need to create a Configuration object first, as shown.

Also, I think the flink-conf.yaml file may contain configuration for the client rather than the server, so editing it before starting may be irrelevant.
I am going to play around and see, but if the Configuration class lets me set the configuration programmatically and override the yaml file, that would be great.
















Re: Job manager URI rpc address:port

Jeff Zhang
Hi Som,

You can take a look at Flink on Zeppelin. In Zeppelin you can connect to a remote Flink cluster with a few configuration settings, and you don't need to worry about the jars; the Flink interpreter will ship the necessary jars for you. Here's a list of tutorials.
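As I recall from the Zeppelin Flink interpreter docs (property names here are my recollection; please verify against your Zeppelin version), the remote-cluster settings are plain interpreter properties, along these lines:

```
flink.execution.mode         remote
flink.execution.remote.host  192.168.1.10   # JobManager host (placeholder)
flink.execution.remote.port  8081           # REST port
```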





--
Best Regards

Jeff Zhang

Re: Job manager URI rpc address:port

Som Lima
Thanks for the info and links.

I had a lot of problems; I am not sure what I was doing wrong. Maybe there were conflicts with the setup from Apache Spark. I think I may need to set up separate users for each development environment.

Anyway, I kept doing fresh installs, about four altogether I think.

Everything works fine now, including remote access to Zeppelin from machines across the local area network.

Next step: set up remote clusters.
Wish me luck!








Re: Job manager URI rpc address:port

Jeff Zhang
Som, let us know if you have any problems.


Re: Job manager URI rpc address:port

Som Lima
I will, thanks. Once I had it set up and working, I switched my computers around from client to server and server to client. With your excellent instructions I was able to do it in 5 minutes.


Re: Job manager URI rpc address:port

Jeff Zhang
Glad to hear that. 


Re: Job manager URI rpc address:port

Som Lima
In reply to this post by tison
This is the code I was looking for, which will allow me to connect programmatically to a remote JobManager, the same as the Spark remote master. The Spark master shares the compute load with the slaves; in the case of Flink, the JobManager does so with the TaskManagers.

Configuration conf = new Configuration();
conf.setString("mykey", "myvalue");
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setGlobalJobParameters(conf);

I found it at the bottom of this page.




