Scala Shell gives error "rest.address must be set"

Scala Shell gives error "rest.address must be set"

Craig Foster
Hi:
After upgrading from Flink 1.9.1 to Flink 1.10.0, I can't execute
programs in the Scala shell.

It gives me an error that the REST address must be set. This appears
to come from the HA code path, but I don't have HA configured for
Flink, and I couldn't find this documented anywhere other than in the
PR/JIRA history, so I don't have much context. Can someone point me to
how to configure this properly? For reference, I've put the example
stack trace below.

scala> val text = benv.fromElements("To be, or not to be,--that is the question:--");
text: org.apache.flink.api.scala.DataSet[String] = org.apache.flink.api.scala.DataSet@2396408a

scala> val counts = text.flatMap { _.toLowerCase.split("\\W+") }.map { (_, 1) }.groupBy(0).sum(1);
counts: org.apache.flink.api.scala.AggregateDataSet[(String, Int)] = org.apache.flink.api.scala.AggregateDataSet@38bce2ed

scala> counts.print()
20/03/17 21:15:34 INFO java.ExecutionEnvironment: The job has 0 registered types and 0 default Kryo serializers
20/03/17 21:15:34 INFO configuration.GlobalConfiguration: Loading configuration property: env.yarn.conf.dir, /etc/hadoop/conf
20/03/17 21:15:34 INFO configuration.GlobalConfiguration: Loading configuration property: env.hadoop.conf.dir, /etc/hadoop/conf
java.lang.RuntimeException: Couldn't retrieve standalone cluster
  at org.apache.flink.client.deployment.StandaloneClusterDescriptor.lambda$retrieve$0(StandaloneClusterDescriptor.java:53)
  at org.apache.flink.client.deployment.executors.AbstractSessionClusterExecutor.execute(AbstractSessionClusterExecutor.java:64)
  at org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:944)
  at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:860)
  at org.apache.flink.api.java.ScalaShellEnvironment.execute(ScalaShellEnvironment.java:81)
  at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:844)
  at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
  at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
  at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1864)
  ... 30 elided
Caused by: java.lang.NullPointerException: rest.address must be set
  at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:104)
  at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.getWebMonitorAddress(HighAvailabilityServicesUtils.java:196)
  at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createClientHAService(HighAvailabilityServicesUtils.java:146)
  at org.apache.flink.client.program.rest.RestClusterClient.<init>(RestClusterClient.java:161)
  at org.apache.flink.client.deployment.StandaloneClusterDescriptor.lambda$retrieve$0(StandaloneClusterDescriptor.java:51)
  ... 38 more
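Just to rule out the program itself: here's the same word count written against plain Scala collections, with no Flink involved, which produces the expected counts. So the failure is in retrieving the cluster, not in the job.

```scala
// Plain-Scala version of the same word count, independent of any
// Flink cluster, to confirm the logic of the shell example.
object WordCountCheck {
  // Tokenize on non-word characters, lowercase, and count occurrences.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.toLowerCase.split("\\W+"))
      .filter(_.nonEmpty)
      .groupBy(identity)
      .map { case (w, ws) => (w, ws.size) }

  def main(args: Array[String]): Unit =
    wordCount(Seq("To be, or not to be,--that is the question:--"))
      .toSeq.sortBy(_._1).foreach(println)
}
```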

Re: Scala Shell gives error "rest.address must be set"

Jeff Zhang
It looks like you are running in standalone mode. What command are you using to start the Scala shell?
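By the way, if you did mean to attach to an existing standalone session cluster, the client has to be able to resolve the REST endpoint from the configuration. As a sketch (the host and port below are placeholders for your actual JobManager), the relevant entries in `conf/flink-conf.yaml` would look like:

```yaml
# Placeholders: point these at the JobManager of your running cluster.
rest.address: localhost
rest.port: 8081
```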


--
Best Regards

Jeff Zhang

Re: Scala Shell gives error "rest.address must be set"

Craig Foster
Yeah, I was wondering about that. I'm using
`/usr/lib/flink/bin/start-scala-shell.sh yarn`; previously I'd use
`/usr/lib/flink/bin/start-scala-shell.sh yarn -n ${NUM}`,
but that deprecated option was removed.



Re: Scala Shell gives error "rest.address must be set"

Craig Foster
If I specify these options, it seems to work, but I thought the
resources would be determined dynamically when submitting jobs with
just the `yarn` option:

/usr/lib/flink/bin/start-scala-shell.sh yarn -s 4 -jm 1024m -tm 4096m

I guess what wasn't clear to me here is that if you use `yarn` alone,
there needs to be an existing YARN cluster already started.
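So, to spell out the two working setups as I understand them (the session variant is an untested sketch on my end; paths assume a standard install under `/usr/lib/flink`):

```shell
# Variant 1 (what worked for me): have start-scala-shell.sh allocate
# a new YARN cluster by passing the resource options explicitly.
/usr/lib/flink/bin/start-scala-shell.sh yarn -s 4 -jm 1024m -tm 4096m

# Variant 2 (sketch, untested): start a detached YARN session first,
# then attach the shell with plain `yarn`, which expects an existing
# session to already be running.
/usr/lib/flink/bin/yarn-session.sh -d -s 4 -jm 1024m -tm 4096m
/usr/lib/flink/bin/start-scala-shell.sh yarn
```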



Re: Scala Shell gives error "rest.address must be set"

Jeff Zhang
I agree, this is really confusing for users. Would you mind creating a ticket for that?


Re: Scala Shell gives error "rest.address must be set"

Craig Foster
Sure, will do. Thanks!
