Hi,
I'm having a weird issue with JobManager (JM) recovery. I'm using HDFS and ZooKeeper for an HA standalone cluster. I stopped the cluster and changed some memory parameters in the Flink configuration, but now when I start the cluster again I get an error that prevents the JM from starting: somehow the checkpoint file no longer exists in Hadoop, so the JM won't start. Full JM log file:

2018-05-31 11:57:05,568 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error occurred in the cluster entrypoint.
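For reference, this is roughly what the HA section of our flink-conf.yaml looks like (the quorum hosts and storage path below are placeholders, not the real values):

```
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/ha/
```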
|
Hi Miki, could you check whether the files are really no longer stored on HDFS? How did you terminate the cluster? Simply by calling `bin/stop-cluster.sh`? I just tried it locally, and the job could be recovered after calling `bin/start-cluster.sh` again. What would be helpful are the logs from the initial run of the job, so if you can reproduce the problem, that log would be very helpful. Cheers, Till On Thu, May 31, 2018 at 6:14 PM, miki haiat <[hidden email]> wrote:
|
Hi Till,
I wondered if this code could be causing the issue, namely the way I am setting the checkpoint state backend:

`StateBackend sb = new FsStateBackend("hdfs://***/flink/my_city/checkpoints");`

On Fri, Jun 1, 2018 at 6:19 PM Till Rohrmann <[hidden email]> wrote:
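For completeness, here is a minimal, self-contained sketch of how that backend is wired into a job (the namenode address, checkpoint interval, and the trivial pipeline are placeholders for illustration, not the real job):

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 10 seconds (interval chosen for illustration).
        env.enableCheckpointing(10_000);

        // Store checkpoint data on HDFS; the namenode address is a placeholder.
        env.setStateBackend(new FsStateBackend("hdfs://namenode:8020/flink/my_city/checkpoints"));

        // Trivial stand-in pipeline so the example runs end to end.
        env.fromElements(1, 2, 3).print();

        env.execute("checkpoint-setup-example");
    }
}
```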
|
Hmmm, Flink should not delete the stored blobs from the HA storage. Could you try to reproduce the problem and then send us the logs on DEBUG level? Please also check, before shutting the cluster down, that the files are there. Cheers, Till On Sun, Jun 3, 2018 at 1:10 PM miki haiat <[hidden email]> wrote:
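To get DEBUG-level logs, the root logger level in Flink's conf/log4j.properties can be raised on each machine before restarting the cluster (a minimal sketch; the `file` appender name matches Flink's default configuration of that era, but check your own file):

```
# conf/log4j.properties: raise the root level from INFO to DEBUG
log4j.rootLogger=DEBUG, file
```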
|
Hi Till, I've managed to reproduce it. On Mon, Jun 4, 2018 at 10:33 AM Till Rohrmann <[hidden email]> wrote:
|
Hi Miki, it looks as if you did not submit a job to the cluster whose logs you shared; at least I could not see a job-submission call. Cheers, Till On Mon, Jun 4, 2018 at 12:31 PM miki haiat <[hidden email]> wrote:
|
I had some ZooKeeper errors that crashed the cluster: ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed. What happens to Flink checkpoints and state if the ZooKeeper cluster crashes? Is it possible that the checkpoint/state was written to ZooKeeper but not to Hadoop, and that is why I get the file-not-found error when I try to restart the Flink cluster? On Mon, Jun 4, 2018 at 4:27 PM Till Rohrmann <[hidden email]> wrote:
|
Hi Miki, Flink first stores the checkpoint data in Hadoop before writing the handle to the metadata in ZooKeeper. Thus, if the handle is in ZooKeeper, then the data should also have been written to HDFS (see the sketch after this message for the intended ordering). Maybe you could check the HDFS logs for anything suspicious. If ZooKeeper fails while writing the metadata state handle, then the checkpoint should be automatically discarded. But you might want to investigate why the ZooKeeper authentication failed; Flink needs a working ZooKeeper quorum to run in HA mode. Maybe you could try to reproduce a failing run and share the log files with us. They might be helpful for investigating the problem further. Cheers, Till On Wed, Jun 6, 2018 at 1:06 PM miki haiat <[hidden email]> wrote:
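The ordering Till describes above can be sketched as follows; this is illustrative code only, not Flink's actual internals, and every name in it is a hypothetical stub:

```java
import java.util.UUID;

/**
 * Illustrative sketch of the "store first, register second" ordering described
 * above; not Flink's real internals. All names here are hypothetical stubs.
 */
public class CheckpointOrderingSketch {

    static String writeToHdfs(byte[] data) {
        // Stub: pretend the checkpoint data was durably written to HDFS
        // and we got back a pointer (handle) to it.
        return "hdfs:///flink/ha/checkpoint-" + UUID.randomUUID();
    }

    static void registerHandleInZooKeeper(String handle) {
        // Stub: pretend the small handle was stored in ZooKeeper metadata.
        System.out.println("registered in ZK: " + handle);
    }

    static void discard(String handle) {
        // Stub: delete the orphaned checkpoint data again.
        System.out.println("discarded: " + handle);
    }

    /**
     * Data goes to HDFS first; only afterwards is the handle published to
     * ZooKeeper. If the ZooKeeper write fails, the data is discarded, so
     * ZooKeeper should never point at data that was not fully written.
     */
    static String completeCheckpoint(byte[] data) {
        String handle = writeToHdfs(data);
        try {
            registerHandleInZooKeeper(handle);
        } catch (RuntimeException e) {
            discard(handle);
            throw e;
        }
        return handle;
    }

    public static void main(String[] args) {
        completeCheckpoint(new byte[] {1, 2, 3});
    }
}
```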