Hi Alexis,
First of all, I think you can leverage the partitioning and sorting properties of the data returned by the database using SplitDataProperties.
However, please be aware that SplitDataProperties is a rather experimental feature.
If used without query parameters, the JDBCInputFormat generates a single split and queries the database just once. If you want to leverage parallelism, you have to specify a query with parameters in the WHERE clause to read different parts of the table.
Note that, depending on the configuration of the database, multiple queries may result in multiple full scans. Hence, it might make sense to have an index on the partitioning columns.
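
To make this a bit more concrete, here is a rough and untested sketch of such a parameterized setup (the table, columns, and connection settings are made up, and the classes come from the flink-jdbc module, so please adapt it to your schema and Flink version):

import java.io.Serializable;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.io.jdbc.split.GenericParameterValuesProvider;
import org.apache.flink.api.java.operators.DataSource;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.types.Row;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// One parameter row per split; each split issues its own query against the table.
JDBCInputFormat jdbcIF = JDBCInputFormat.buildJDBCInputFormat()
    .setDrivername("org.postgresql.Driver")
    .setDBUrl("jdbc:postgresql://localhost/mydb")
    .setQuery("SELECT host_id, ts, msg FROM logs WHERE host_id = ? ORDER BY host_id, ts")
    .setRowTypeInfo(new RowTypeInfo(
        BasicTypeInfo.INT_TYPE_INFO,
        BasicTypeInfo.LONG_TYPE_INFO,
        BasicTypeInfo.STRING_TYPE_INFO))
    .setParametersProvider(new GenericParameterValuesProvider(
        new Serializable[][] { {0}, {1}, {2}, {3} }))  // 4 splits, one per host_id value
    .finish();

DataSource<Row> source = env.createInput(jdbcIF);
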
If properly configured, the JDBCInputFormat generates multiple splits which are partitioned. Since the partitioning is encoded in the query, it is opaque to Flink and must be explicitly declared.
This can be done with SDPs. The SDP.splitsPartitionedBy() method tells Flink that all records with the same value in the partitioning field are read from the same split, i.e., the full data is partitioned on that attribute across splits.
The same can be done for ordering if the query of the JDBCInputFormat is specified with an ORDER BY clause.
Partitioning and grouping are two different things. For example, you can define a query that partitions on hostname and orders by hostname and timestamp, and declare both properties in the SDP.
You can get an SDP object by calling DataSource.getSplitDataProperties(). In your example, this would be source.getSplitDataProperties().
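
Continuing the sketch from above (again untested; the field indexes refer to the positions in the Row type, and the exact method signatures may differ slightly between Flink versions):

import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.io.SplitDataProperties;

// Declare what the query above already guarantees: the data is partitioned across
// splits by host_id (field 0) and each split is ordered by host_id, ts (fields 0, 1).
SplitDataProperties<Row> sdp = source.getSplitDataProperties();
sdp.splitsPartitionedBy(0);
sdp.splitsOrderedBy(new int[] {0, 1}, new Order[] {Order.ASCENDING, Order.ASCENDING});

Keep in mind that Flink cannot verify these declarations. If they do not match what the query actually returns, you can end up with incorrect results.
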
Whatever you do, you should carefully check the execution plan (ExecutionEnvironment.getExecutionPlan()) using the plan visualizer [1] and validate that the results are identical whether you use SDPs or not.
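
For example, something along these lines (assuming env is your ExecutionEnvironment and the sinks are already defined):

// Prints the plan as JSON, which can be pasted into the plan visualizer.
System.out.println(env.getExecutionPlan());
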
Best, Fabian