Hi,
After running the following line of code, the JobManager defaults to localhost:6123:

    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

which is the same as this in Spark:

    val spark = SparkSession.builder.master("local[*]").appName("anApp").getOrCreate()

However, if I wish to run the servers on a different physical computer, then in Spark I can do it this way in my IDE, using the Spark URI:

    val conf = new SparkConf().setMaster("spark://<hostip>:<port>").setAppName("anApp")

Can you please tell me the equivalent change to make so I can run my servers and my IDE from different physical computers.
You can change the flink-conf.yaml "jobmanager.rpc.address" and "jobmanager.rpc.port" options before running the program, or take a look at RemoteStreamEnvironment, which lets you configure the host and port. Best, tison. Som Lima <[hidden email]> wrote on Sun, 19 Apr 2020 at 17:58:
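For reference, a minimal flink-conf.yaml sketch of tison's first suggestion; <hostip> is a placeholder for the machine actually running the JobManager, and 6123 is Flink's default RPC port:

    # flink-conf.yaml on the client: point at a remote JobManager
    jobmanager.rpc.address: <hostip>
    jobmanager.rpc.port: 6123

As the next reply notes, this route needs no changes to the client source code.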
Thanks. flink-conf.yaml does allow me to do what I need without making any changes to the client source code. But the RemoteStreamEnvironment constructor also expects jar files as the third parameter:

    RemoteStreamEnvironment(String host, int port, String... jarFiles)

"Creates a new RemoteStreamEnvironment that points to the master (JobManager) described by the given host name and port." On Sun, 19 Apr 2020, 11:02 tison, <[hidden email]> wrote:
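For what it's worth, a minimal sketch of the jar-based route using the equivalent factory method; the host, port, and jar path are placeholders, and the jar must contain the job's classes so the client can ship them to the cluster:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    // Submit to the JobManager at <hostip>:6123, shipping the given jar.
    StreamExecutionEnvironment env = StreamExecutionEnvironment
            .createRemoteEnvironment("<hostip>", 6123, "path/to/your-job.jar");

    env.fromElements(1, 2, 3)
       .map(x -> x * 2)
       .print();

    env.execute("anApp");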
Hi Tison, I think I may have found what I want in example 22. I need to create a Configuration object first, as shown. Also, I think the flink-conf.yaml file may contain configuration for the client rather than the server, so setting it before starting the server is irrelevant. I am going to play around and see, but if the Configuration class allows me to set configuration programmatically and overrides the yaml file, that would be great. On Sun, 19 Apr 2020, 11:35 Som Lima, <[hidden email]> wrote:
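A minimal sketch of that programmatic route, assuming the RemoteStreamEnvironment constructor variant that also takes a client-side Configuration; the host, port, and jar path are placeholders:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.RemoteStreamEnvironment;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    // Options set here are client-side only: they take effect for this
    // program without editing any flink-conf.yaml.
    Configuration clientConfig = new Configuration();

    StreamExecutionEnvironment env = new RemoteStreamEnvironment(
            "<hostip>", 6123, clientConfig, "path/to/your-job.jar");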
Hi Som, You can take a look at Flink on Zeppelin. In Zeppelin you can connect to a remote Flink cluster via a few configuration settings, and you don't need to worry about the jars; the Flink interpreter will ship the necessary jars for you. Here's a list of tutorials (a sketch of the interpreter settings follows below the list):
1) Get started https://link.medium.com/oppqD6dIg5
2) Batch https://link.medium.com/3qumbwRIg5
3) Streaming https://link.medium.com/RBHa2lTIg5
4) Advanced usage https://link.medium.com/CAekyoXIg5
Zahid Rahman <[hidden email]> wrote on Sun, 19 Apr 2020 at 19:27:
Best Regards
Jeff Zhang
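A sketch of the kind of interpreter settings those tutorials walk through; the property names below are an assumption about Zeppelin's Flink interpreter, so treat the tutorials above as authoritative:

    # Zeppelin: Interpreter menu -> flink (assumed property names)
    flink.execution.mode         remote
    flink.execution.remote.host  <hostip>   # host of the running Flink cluster
    flink.execution.remote.port  8081       # REST port of the remote cluster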
Thanks for the info and links. I had a lot of problems, and I am not sure what I was doing wrong; maybe conflicts with my Apache Spark setup. I think I may need to set up separate users for each development environment. Anyway, I kept doing fresh installs, about four altogether I think. Everything works fine now, including remote access of Zeppelin on machines across the local area network. Next step: set up remote clusters. Wish me luck! On Sun, 19 Apr 2020, 14:58 Jeff Zhang, <[hidden email]> wrote:
Som, let us know when you have any problems. Som Lima <[hidden email]> wrote on Mon, 20 Apr 2020 at 02:31:
Best Regards
Jeff Zhang
I will, thanks. Once I had it set up and working, I switched my computers around from client to server and server to client. With your excellent instructions I was able to do it in 5 minutes. On Mon, 20 Apr 2020, 00:05 Jeff Zhang, <[hidden email]> wrote:
Glad to hear that. Som Lima <[hidden email]> wrote on Mon, 20 Apr 2020 at 08:08:
Best Regards
Jeff Zhang
This is the code I was looking for, which allows me to connect programmatically to a remote JobManager, just like a remote Spark master. The Spark master shares the compute load with its slaves; in Flink's case, the JobManager shares it with the TaskManagers. I found it at the bottom of this page. On Sun, 19 Apr 2020, 11:02 tison, <[hidden email]> wrote: