Hi, I am reading a file from HDFS, but I get an error when running the job on a YARN cluster:
-----------------------------------------------
// read the tab-separated file, skipping comment lines that start with "#"
val dataSeg = env.readTextFile("hdfs:///user/hadoop/text")
  .filter(!_.startsWith("#"))
  .map { x =>
    val values = x.split("\t")
    (values(0), values(1).split(" "))
  }
logger.info("****dataSeg****="+dataSeg.count())
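For completeness, a minimal self-contained version of the job might look like the sketch below (the ReadHdfsJob object, the main method, and the println are just a wrapper I added for illustration; the read/filter/map logic is the same as above):
-----------------------------------------------
import org.apache.flink.api.scala._

object ReadHdfsJob {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // read the tab-separated file and skip comment lines starting with "#"
    val dataSeg = env.readTextFile("hdfs:///user/hadoop/text")
      .filter(!_.startsWith("#"))
      .map { x =>
        val values = x.split("\t")
        (values(0), values(1).split(" "))
      }

    // count() triggers execution of the job on the cluster
    println("****dataSeg**** = " + dataSeg.count())
  }
}
-----------------------------------------------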
The error is the following:
--------------------------------------------------------------
2017-03-24 11:32:15,012 INFO org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl - Interrupted while waiting for queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:274)
-----------------------------
Hadoop version: 2.6
Flink version: 1.1.0-hadoop2.6-scala-2.11
(the org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl class comes from flink-shaded-hadoop2-1.1.0)