Hi contributors! I’m trying to set up Flink v1.12.2 in Kubernetes Session Mode, but I found that I cannot mount log4j.properties from a ConfigMap into the jobmanager container. Is this expected behavior? Could you share some ways to mount log4j.properties into my container?

My yaml:

    apiVersion: v1
    data:
      flink-conf.yaml: |-
        taskmanager.numberOfTaskSlots: 1
        blob.server.port: 6124
        kubernetes.rest-service.exposed.type: ClusterIP
        kubernetes.jobmanager.cpu: 1.00
        high-availability.storageDir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/ha-backup/
        queryable-state.proxy.ports: 6125
        kubernetes.service-account: stream-app
        high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
        jobmanager.memory.process.size: 1024m
        taskmanager.memory.process.size: 1024m
        kubernetes.taskmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
        kubernetes.namespace: test123
        restart-strategy: fixed-delay
        restart-strategy.fixed-delay.attempts: 5
        kubernetes.taskmanager.cpu: 1.00
        state.backend: filesystem
        parallelism.default: 4
        kubernetes.container.image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
        kubernetes.taskmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
        state.checkpoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/checkpoints/
        kubernetes.cluster-id: session-cluster-test
        kubernetes.jobmanager.annotations: cluster-autoscaler.kubernetes.io/safe-to-evict:false
        state.savepoints.dir: s3p://hulu-caposv2-flink-s3-bucket/session-cluster-test/savepoints/
        restart-strategy.fixed-delay.delay: 15s
        taskmanager.rpc.port: 6122
        jobmanager.rpc.address: session-cluster-test-flink-jobmanager
        kubernetes.jobmanager.labels: capos_id:session-cluster-test,stream-component:jobmanager
        jobmanager.rpc.port: 6123
      log4j.properties: |-
        logger.kafka.name = org.apache.kafka
        logger.hadoop.level = INFO
        appender.rolling.type = RollingFile
        appender.rolling.filePattern = ${sys:log.file}.%i
        appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
        logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
        rootLogger = INFO, rolling
        logger.akka.name = akka
        appender.rolling.strategy.type = DefaultRolloverStrategy
        logger.akka.level = INFO
        appender.rolling.append = false
        logger.hadoop.name = org.apache.hadoop
        appender.rolling.fileName = ${sys:log.file}
        appender.rolling.policies.type = Policies
        rootLogger.appenderRef.rolling.ref = RollingFileAppender
        logger.kafka.level = INFO
        appender.rolling.name = RollingFileAppender
        appender.rolling.layout.type = PatternLayout
        appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
        appender.rolling.policies.size.size = 100MB
        appender.rolling.strategy.max = 10
        logger.netty.level = OFF
        logger.zookeeper.name = org.apache.zookeeper
        logger.zookeeper.level = INFO
    kind: ConfigMap
    metadata:
      labels:
        app: session-cluster-test
        capos_id: session-cluster-test
      name: session-cluster-test-flink-config
      namespace: test123
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels:
        capos_id: session-cluster-test
      name: session-cluster-test-flink-startup
      namespace: test123
    spec:
      backoffLimit: 6
      completions: 1
      parallelism: 1
      template:
        metadata:
          annotations:
            caposv2.prod.hulu.com/streamAppSavepointId: "0"
            cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
          creationTimestamp: null
          labels:
            capos_id: session-cluster-test
            stream-component: start-up
        spec:
          containers:
          - command:
            - ./bin/kubernetes-session.sh
            - -Dkubernetes.cluster-id=session-cluster-test
            image: cubox.prod.hulu.com/proxy/flink:1.12.2-scala_2.11-java8-stdout7
            imagePullPolicy: IfNotPresent
            name: flink-startup
            resources: {}
            securityContext:
              runAsUser: 9999
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /opt/flink/conf
              name: flink-config-volume
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          serviceAccount: stream-app
          serviceAccountName: stream-app
          terminationGracePeriodSeconds: 30
          volumes:
          - configMap:
              defaultMode: 420
              items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j.properties
                path: log4j.properties
              name: session-cluster-test-flink-config
            name: flink-config-volume
      ttlSecondsAfterFinished: 86400

I cannot see
log4j.properties in the jobmanager container’s volume mount:

    volumes:
    - configMap:
        defaultMode: 420
        items:
        - key: flink-conf.yaml
          path: flink-conf.yaml
        name: flink-config-session-cluster-test
      name: flink-config-volume

And there is no log config file in the jobmanager container:

    root@session-cluster-test-689b595f8f-dg4h6:/opt/flink# ls -l $FLINK_HOME/conf/
    total 0
    lrwxrwxrwx 1 root root 22 Jun 19 09:23 flink-conf.yaml -> ..data/flink-conf.yaml

After diving into the Flink source code, I found the root cause could be here: it only adds
flink-conf.yaml
to the container’s volume mount. Could you please give me some guidance or support? Thanks so much!

BRs,
Chenyu Zheng
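P.S. In case it helps with reproducing: the ConfigMap that the native Kubernetes integration generates for the jobmanager can be inspected directly. This is just a sketch against my cluster; the namespace and ConfigMap name are copied from the jobmanager volume spec above, so adjust them for your setup:

```shell
# List only the data keys of the ConfigMap that Flink generated for the
# jobmanager (flink-config-session-cluster-test, from the volume spec above).
kubectl -n test123 get configmap flink-config-session-cluster-test \
  -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'
```

On my cluster this lists flink-conf.yaml but no log4j.properties, which matches what I see mounted in the container.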