Flink - start-cluster.sh


Flink - start-cluster.sh

Punit Naik
Hi

I did all the settings required for the cluster setup, but when I ran the start-cluster.sh script, it only started a JobManager on the master node. Logs are written only on the master node; the slaves don't have any logs. And when I ran a program, it said:

Resources available to scheduler: Number of instances=0, total number of slots=0, available slots=0

Can anyone help please?
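
For reference, "Number of instances=0" means that no TaskManager has registered with the JobManager. A minimal standalone configuration along the lines of the setup guide would look roughly like this (host names and the slot count below are placeholders, not the actual values from this cluster):

    # conf/flink-conf.yaml (same on every node)
    jobmanager.rpc.address: master-host
    taskmanager.numberOfTaskSlots: 2

    # conf/slaves on the master: one worker host name per line
    worker-host-1
    worker-host-2

With this in place, start-cluster.sh starts the JobManager locally and then logs in via SSH to each host listed in conf/slaves to start a TaskManager there.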

--
Thank You

Regards

Punit Naik

Re: Flink - start-cluster.sh

Balaji Rajagopalan
Which Flink documentation were you following to set up your cluster? Can you point to it?



Re: Flink - start-cluster.sh

Flavio Pompermaier
I think your slaves didn't come up... Have you configured passwordless SSH login between the master node (the one running start-cluster.sh) and the task managers (the hosts listed in the conf/slaves file)?
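
For example, roughly like this, run as the user that will launch start-cluster.sh (host names are placeholders):

    # on the master: generate a key pair and copy it to every worker
    ssh-keygen -t rsa                # empty passphrase
    ssh-copy-id worker-host-1        # repeat for each host in conf/slaves
    ssh-copy-id worker-host-2
    # verify: this must not ask for a password
    ssh worker-host-1 'hostname'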

Best,
Flavio



Re: Flink - start-cluster.sh

Punit Naik
Passwordless SSH has been set up across all the machines. And when I execute the start-cluster.sh script, I can see the master logging into the slaves, but it does not start anything; it just logs in and logs out.

I followed the documentation on the official site:

https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html
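
One way to narrow this down would be to trace the SSH commands the script issues and to look at a worker directly, for example (the Flink path and host name below are assumptions):

    # print the commands start-cluster.sh runs, including the ssh calls
    bash -x bin/start-cluster.sh 2>&1 | grep ssh
    # on a worker: check for a TaskManager JVM and for log files
    ssh worker-host-1 'jps; ls flink-1.0.0/log'

If a TaskManager starts and dies immediately, its .log or .out file in the worker's log directory usually says why.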


Re: Flink - start-cluster.sh

Punit Naik
Okay, so it was a configuration mistake on my part. But the start-cluster.sh command still doesn't work for me; it only starts the JobManager on the master node. I therefore had to start a TaskManager manually on every node, and it worked fine. Is anyone familiar with this issue?
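
For reference, the manual fallback described above amounts to running the per-daemon script on each worker yourself:

    # on every worker node
    bin/taskmanager.sh start
    # the JobManager web UI (default port 8081 on the master) should then
    # list the registered TaskManagers and their slots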


Re: Flink - start-cluster.sh

Flavio Pompermaier
Do you run the start-cluster.sh script as the same user that has the passwordless SSH login?
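
A quick check, assuming a worker host taken from conf/slaves: run a non-interactive SSH as the exact account that launches the script; if it prompts or fails, start-cluster.sh cannot start the remote TaskManagers either.

    # must print "ok" with no password prompt
    ssh -o BatchMode=yes worker-host-1 'echo ok'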



Re: Flink - start-cluster.sh

Punit Naik
Yes.
