Hi all,

We are having an issue where the Flink Application Master is unable to automatically restart the Flink job after its delegation token has expired.

We are using Flink 1.11 with YARN 3.1.1 in single-job (per-job yarn-cluster) mode. We have also added a valid keytab configuration, and the task managers are able to log in with the keytab correctly. However, it seems the YARN Application Master still uses delegation tokens instead of the keytab.

Any idea how to resolve this would be much appreciated.

Thanks,
Kien
Hi, Kien,
Did you configure "security.kerberos.login.principal" and "security.kerberos.login.keytab" together? If you only set the keytab, it will not take effect.

Best,
Yangze Guo
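For reference, a minimal flink-conf.yaml sketch with both options set together; the principal and keytab path below are placeholder examples, not values from this thread:

    # flink-conf.yaml -- both options are needed for keytab-based login;
    # the principal and path below are illustrative placeholders
    security.kerberos.login.keytab: /path/to/flink.keytab
    security.kerberos.login.principal: flink-user@EXAMPLE.COM
    # optionally disable the ticket cache so the keytab is always used
    security.kerberos.login.use-ticket-cache: false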
Hi,

Yes, I did. There are also logs showing a successful keytab login in both the Job Manager and the Task Manager.

I found some YARN docs about token renewal on AM restart:

> Therefore, to survive AM restart after token expiry, your AM has to get the NMs to localize the keytab or make no HDFS accesses until (somehow) a new token has been passed to them from a client.

Maybe Flink did access HDFS with an expired token before switching to the localized keytab?

Regards,
Kien
Hi,
AFAIK, Flink does exclude the HDFS_DELEGATION_TOKEN in the HadoopModule when the user provides the keytab and principal. I'll try to do a deeper investigation to figure out whether there is any HDFS access before the HadoopModule is installed.

Best,
Yangze Guo
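As background, a keytab-based Hadoop login roughly looks like the sketch below, using Hadoop's UserGroupInformation API. This is not Flink's actual HadoopModule code, and the principal and keytab path are placeholders; it only illustrates why any HDFS access made before such a login would still rely on whatever delegation tokens are present:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Enable Kerberos authentication for Hadoop clients.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);

            // Log in from the localized keytab instead of relying on a
            // delegation token; principal and path are placeholders.
            UserGroupInformation.loginUserFromKeytab(
                    "flink-user@EXAMPLE.COM", "/path/to/flink.keytab");

            // Any HDFS access performed before this point would still use
            // the credentials (e.g. delegation tokens) already in place.
            System.out.println("Logged in as: "
                    + UserGroupInformation.getCurrentUser());
        }
    }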
Hi,
There is a login operation in YarnEntrypointUtils.logYarnEnvironmentInformation without the keytab. Another suspicion is that Flink may access HDFS when it tries to build the PackagedProgram. Does this issue only happen in application mode? If so, I would cc @kkloudas.

Best,
Yangze Guo
Hi Yangze,

Thanks for checking. I'm not using the new application mode, but the old single-job yarn-cluster mode. I'll try to get some more logs tomorrow.

Regards,
Kien
Hi all,

So I've checked the logs, and it seems that the expired delegation token error was triggered during resource localization. Maybe there's something wrong with my Hadoop setup; the NMs are supposed to get a valid token from the RM in order to localize resources automatically.

Regards,
Kiên
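One way that may help correlate the failure with token lifetimes is to read the HDFS delegation token settings from the cluster configuration, for example:

    # Check the HDFS delegation token lifetimes on the cluster.
    # Defaults are typically 24 hours (renew interval) and 7 days (max lifetime).
    hdfs getconf -confKey dfs.namenode.delegation.token.renew-interval
    hdfs getconf -confKey dfs.namenode.delegation.token.max-lifetime

If restarts begin to fail roughly one max lifetime after job submission, that would be consistent with the localizer still relying on the original delegation token rather than the keytab.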