Hi,

the exception says "org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to CREATE_FILE /y=2017/m=01/d=10/H=18/M=12/_part-0-0.in-progress for DFSClient_NONMAPREDUCE_1062142735_3".

Maybe you need to check the code that generates the filenames. I would assume that your output format tries to create a file that already exists.

Best,
Fabian

2017-01-11 3:13 GMT+01:00 Biswajit Das <[hidden email]>:

Hello,
I have created a custom Parquet writer with a rolling sink and I'm seeing the error below. I'm expecting every partition to write to a new file. Any tips?
---------------
18:12:12.551 [flink-akka.actor.default-dispatcher-5] DEBUG akka.event.EventStream - shutting down: StandardOutLogger started
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:376)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:358)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:399)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:381)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:346)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:329)
at org.apache.flink.streaming.api.operators.StreamSource$NonTimestampContext.collect(StreamSource.java:161)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecord(AbstractFetcher.java:225)
at org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread.run(SimpleConsumerThread.java:379)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to CREATE_FILE /y=2017/m=01/d=10/H=18/M=12/_part-0-0.in-progress for DFSClient_NONMAPREDUCE_1062142735_3
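
To illustrate Fabian's suggestion, here is a minimal sketch of wiring up a rolling sink so that bucket directories and part-file names stay distinct. It assumes the Flink 1.1-era org.apache.flink.streaming.connectors.fs.RollingSink API; the base path, the bucket date format, and the use of StringWriter in place of the poster's custom Parquet writer are illustrative placeholders, not the code from the original question.
---------------
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
import org.apache.flink.streaming.connectors.fs.RollingSink;
import org.apache.flink.streaming.connectors.fs.StringWriter;

public class RollingSinkSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source; in the original question the stream comes from Kafka.
        DataStream<String> input = env.fromElements("a", "b", "c");

        // Illustrative base path, not the poster's actual path.
        RollingSink<String> sink = new RollingSink<String>("hdfs:///data/events");

        // One bucket directory per minute, mirroring the y=/m=/d=/H=/M= layout
        // visible in the exception message.
        sink.setBucketer(new DateTimeBucketer("'y='yyyy'/m='MM'/d='dd'/H='HH'/M='mm"));

        // StringWriter stands in for the custom Parquet Writer<T>; a custom writer
        // must implement duplicate() correctly, otherwise parallel subtasks can end
        // up opening the same file and hit AlreadyBeingCreatedException.
        sink.setWriter(new StringWriter<String>());

        // Part files are named <partPrefix>-<subtaskIndex>-<bucketCounter>; a prefix
        // that is unique per run avoids colliding with .in-progress files left over
        // from an earlier attempt that may still hold an HDFS lease.
        sink.setPartPrefix("part-" + System.currentTimeMillis());

        input.addSink(sink);
        env.execute("rolling sink sketch");
    }
}
---------------
The key point is the last comment: the sink builds each file name from the part prefix, the subtask index, and a per-bucket counter, so a collision such as the one in the stack trace usually means the same path is being generated twice, either by the custom writer/bucketer or by a restart that finds a stale .in-progress file already present on HDFS.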