I am using Hive 3.1.1. The table has many fields; each field corresponds to a field in the RobotUploadData0101 class.

CREATE TABLE `robotparquet`(
  `robotid` int, `framecount` int, `robottime` bigint, `robotpathmode` int,
  `movingmode` int, `submovingmode` int, `xlocation` int, `ylocation` int,
  `robotradangle` int, `velocity` int, `acceleration` int, `angularvelocity` int,
  `angularacceleration` int, `literangle` int, `shelfangle` int, `onloadshelfid` int,
  `rcvdinstr` int, `sensordist` int, `pathstate` int, `powerpresent` int,
  `neednewpath` int, `pathelenum` int, `taskstate` int, `receivedtaskid` int,
  `receivedcommcount` int, `receiveddispatchinstr` int, `receiveddispatchcount` int,
  `subtaskmode` int, `versiontype` int, `version` int, `liftheight` int,
  `codecheckstatus` int, `cameraworkmode` int, `backrimstate` int, `frontrimstate` int,
  `pathselectstate` int, `codemisscount` int, `groundcameraresult` int,
  `shelfcameraresult` int, `softwarerespondframe` int, `paramstate` int,
  `pilotlamp` int, `codecount` int, `dist2waitpoint` int, `targetdistance` int,
  `obstaclecount` int, `obstacleframe` int, `cellcodex` int, `cellcodey` int,
  `cellangle` int, `shelfqrcode` int, `shelfqrangle` int, `shelfqrx` int,
  `shelfqry` int, `trackthetaerror` int, `tracksideerror` int, `trackfuseerror` int,
  `lifterangleerror` int, `lifterheighterror` int, `linearcmdspeed` int,
  `angluarcmdspeed` int, `liftercmdspeed` int, `rotatorcmdspeed` int)
PARTITIONED BY (`hour` string)
STORED AS parquet;

Thanks,
Lei
From: [hidden email]
Date: 2020-04-09 21:45
To: [hidden email]
Subject: Re: Re: fink sql client not able to read parquet format table

Hi Lei,

Which Hive version did you use? Can you share the complete Hive DDL?

Best,
Jingsong Lee

----------------------------------------------------------------

I am using the newest 1.10 blink planner. Perhaps it is because of the way I wrote the parquet files: receive a Kafka message, transform each message into a Java class object, write the object to HDFS using StreamingFileSink, and then add the HDFS path as a partition of the Hive table.

No matter what order the fields are declared in the Hive DDL statement, the Hive client works, as long as each field name matches a field name of the Java object. But the Flink SQL client does not work.

DataStream<RobotUploadData0101> sourceRobot = source.map(x -> transform(x));

final StreamingFileSink<RobotUploadData0101> sink = StreamingFileSink
    .forBulkFormat(new Path("hdfs://172.19.78.38:8020/user/root/wanglei/robotdata/parquet"),
        ParquetAvroWriters.forReflectRecord(RobotUploadData0101.class))
    .build();

For example, suppose RobotUploadData0101 has two fields: robotId int and robotTime long. Then

CREATE TABLE `robotparquet`(`robotid` int, `robottime` bigint)

and

CREATE TABLE `robotparquet`(`robottime` bigint, `robotid` int)

behave the same for the Hive client, but differently for the Flink SQL client. Is this the expected behavior?

Thanks,
Lei
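The name-vs-position distinction described above can be sketched in plain Java. This is an illustrative model only, not actual Hive or Flink internals: the two-field POJO mirrors the example in the mail, and the `resolveByName` helper is a hypothetical stand-in for a reader that matches DDL columns to file columns by name, under the assumption that a reflection-based Parquet writer records columns under the Java field names.

```java
import java.lang.reflect.Field;
import java.util.Arrays;

public class FieldOrderDemo {

    // Hypothetical two-field POJO mirroring the example above.
    static class RobotUploadData0101 {
        int robotId;
        long robotTime;
    }

    // A name-based reader matches each DDL column to a file column by
    // (case-insensitive) name, so DDL declaration order is irrelevant.
    static boolean resolveByName(String[] fileColumns, String[] ddlColumns) {
        return Arrays.stream(ddlColumns).allMatch(
            ddl -> Arrays.stream(fileColumns).anyMatch(ddl::equalsIgnoreCase));
    }

    public static void main(String[] args) {
        // Columns as a reflection-based writer would record them:
        // the Java field names of RobotUploadData0101.
        String[] fileColumns = Arrays.stream(
                RobotUploadData0101.class.getDeclaredFields())
            .map(Field::getName)
            .toArray(String[]::new);

        // Both DDL orderings resolve when matching by name ...
        System.out.println(resolveByName(fileColumns,
            new String[] {"robotid", "robottime"}));  // true
        System.out.println(resolveByName(fileColumns,
            new String[] {"robottime", "robotid"}));  // true

        // ... whereas a purely position-based reader would pair the DDL
        // column `robottime` with the file column robotId in the second
        // ordering and read the wrong data.
    }
}
```

If the Flink Parquet reader resolves columns by position while the Hive client resolves them by name, that would account for the behavior seen here.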
From: [hidden email]
Date: 2020-04-09 14:48
CC: [hidden email]
Subject: Re: fink sql client not able to read parquet format table

Hi Lei,

Are you using the newest 1.10 blink planner? I'm not familiar with Hive and parquet, but I know [hidden email] and [hidden email] are experts on this. Maybe they can help with this question.

Best,
Jark

----------------------------------------------------------------

Hive table stored as parquet. Under the hive client:

hive> select robotid from robotparquet limit 2;
OK
1291097
1291044

But under the flink sql-client the result is 0:

Flink SQL> select robotid from robotparquet limit 2;
robotid
0
0

Any insight on this?

Thanks,
Lei
--
Best, Jingsong Lee