HBaseTableSource for SQL query errors

HBaseTableSource for SQL query errors

圣眼之翼
I am getting errors when using the HBaseTableSource class for SQL queries. The same HBase access works without errors in a demo outside Flink.
My Flink version is 1.8.1, using the Flink Table & SQL API.
 
The Flink code is shown below:

import org.apache.flink.addons.hbase.HBaseTableSource;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

        // environment configuration
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);
        String currentTableName = "table";
        // Obtain the configuration object for the HBase connection
        Configuration conf = HBaseConfiguration.create();
 ...
        conf.set("hbase.zookeeper.quorum", quorum);
        HBaseTableSource hSrc = new HBaseTableSource(conf, "table");
        hSrc.addColumn("base", "rowkey", String.class);
        tEnv.registerTableSource(currentTableName, hSrc);

        // "table" is a reserved word in Flink SQL, so quote it with backticks
        Table res = tEnv.sqlQuery("select * from `" + currentTableName + "`");
        DataSet<Row> result = tEnv.toDataSet(res, Row.class);
        // print() already triggers execution in the DataSet API,
        // so a separate env.execute() afterwards is not needed.
        result.print();
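
For comparison, the standalone check that works outside Flink looks roughly like this. A minimal sketch, assuming the standard HBase 1.x client API; the quorum host and table name are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder quorum settings; replace with the real ZooKeeper hosts.
        conf.set("hbase.zookeeper.quorum", "zk-host");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Succeeds only if ZooKeeper and the HBase master are reachable.
            System.out.println("table exists: "
                    + admin.tableExists(TableName.valueOf("table")));
        }
    }
}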
 
The error is as follows:
<2019-09-24 15:40:36,436>[ WARN] Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect - org.apache.zookeeper.ClientCnxn
java.net.ConnectException: Connection refused: no further information
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
ERROR RecoverableZooKeeper ZooKeeper getData failed after 4 attempts
ERROR ZooKeeperWatcher hconnection-0x39ee0d3e0x0, quorum=localhost:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
 at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:354)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:625)
 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:486)
 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:167)
 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:606)
 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:587)
 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:560)
 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
 at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1227)
 at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1194)
 at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
 at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
 at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
 at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
 at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
 at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
 at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
 at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
 at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
 at org.apache.hadoop.hbase.client.MetaScanner.listTableRegionLocations(MetaScanner.java:343)
 at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:141)
 at org.apache.hadoop.hbase.client.HRegionLocator.getStartEndKeys(HRegionLocator.java:117)
 at org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:205)
 at org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:44)
 at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:253)
 at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:853)
 at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:232)
 at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:100)
 at org.apache.flink.runtime.jobmaster.JobMaster.createExecutionGraph(JobMaster.java:1198)
 at org.apache.flink.runtime.jobmaster.JobMaster.createAndRestoreExecutionGraph(JobMaster.java:1178)
 at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:287)
 at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:83)
 at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:37)
 at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
 at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:76)
 at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:351)
 at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
 at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
 at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
 at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
 at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
 at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
 at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
 at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
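
Note that the log above reports quorum=localhost:2181, the HBase client default, even though the code sets hbase.zookeeper.quorum explicitly. The trace also shows the input splits being created on the JobManager side (AbstractTableInputFormat.createInputSplits), so the quorum setting has to be visible there as well. One possible workaround, sketched here under that assumption, is to load the client-side hbase-site.xml into the Configuration explicitly and verify the effective value before building the job (the file path is a placeholder):

        // Hypothetical path; point this at the real client-side hbase-site.xml.
        conf.addResource(new org.apache.hadoop.fs.Path("/etc/hbase/conf/hbase-site.xml"));
        // Sanity-check that the quorum actually took effect.
        System.out.println("effective quorum: " + conf.get("hbase.zookeeper.quorum"));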
 
Thanks!
 

Re: HBaseTableSource for SQL query errors

圣眼之翼
By stepping through the source code with a debugger, I found that the following exception is thrown in the MiniCluster.executeJobBlocking method.
 
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#997865675]] after [10000 ms]. Sender[null] sent message of type "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
 
Could this be an existing bug? How can we solve it?
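
The AskTimeoutException looks like a symptom rather than the root cause: the stack trace above shows AbstractTableInputFormat.createInputSplits running while the JobMaster is being created, so while the HBase client retries its ZooKeeper connection, the dispatcher cannot answer within the default 10 s ask timeout. If split creation legitimately needs more time, the timeout can be raised; a minimal sketch for a local run, assuming the standard akka.ask.timeout option:

        // Only applies when the job runs in a local environment (e.g. IDE / MiniCluster).
        org.apache.flink.configuration.Configuration flinkConf =
                new org.apache.flink.configuration.Configuration();
        // Allow the dispatcher more time to answer while input splits are created.
        flinkConf.setString("akka.ask.timeout", "60 s");
        ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(flinkConf);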

------------------ Original Message ------------------
From: "圣眼之翼" <[hidden email]>;
Sent: Tuesday, September 24, 2019, 4:04 PM
To: "user" <[hidden email]>;
Subject: HBaseTableSource for SQL query errors
