Hi Benjamin,
Flink usually reads data in parallel. This is done by splitting the input (e.g., a file) into several input splits. Each input split is processed independently. Since splits are usually processed concurrently by more than one task, Flink does not preserve the input order by default.
You can implement a special InputFormat that uses a custom InputSplitAssigner to ensure that splits are handed out in order.
This would require a bit of coding though.
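To give you an idea, here is a rough sketch of such an assigner (the class name is made up, and method signatures should be checked against your Flink version; newer versions may require implementing additional methods on the InputSplitAssigner interface). You would return an instance of it from your InputFormat's getInputSplitAssigner() method:

```java
import java.util.Arrays;
import java.util.Comparator;
import org.apache.flink.core.io.InputSplit;
import org.apache.flink.core.io.InputSplitAssigner;

// Sketch: an assigner that hands out splits strictly in split-number order.
public class OrderedSplitAssigner implements InputSplitAssigner {

    private final InputSplit[] ordered;
    private int next = 0;

    public OrderedSplitAssigner(InputSplit[] splits) {
        // Sort a copy of the splits by split number (split 0 first, then 1, ...).
        this.ordered = splits.clone();
        Arrays.sort(this.ordered, Comparator.comparingInt(InputSplit::getSplitNumber));
    }

    @Override
    public synchronized InputSplit getNextInputSplit(String host, int taskId) {
        // Hand out the next split in order; null signals that all splits are assigned.
        return next < ordered.length ? ordered[next++] : null;
    }
}
```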
A DataSet is usually distributed across multiple partitions/tasks and therefore does not have a notion of a (complete) order either. You can sort the data of a DataSet within each individual partition by calling DataSet.sortPartition(key, order). If you do that with a parallelism of one (DataSet.sortPartition(...).setParallelism(1)), you get a fully ordered data set, however only on one machine.
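A minimal sketch of the single-parallelism variant (the example data is of course made up):

```java
import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<Tuple2<Integer, String>> data = env.fromElements(
    Tuple2.of(3, "c"), Tuple2.of(1, "a"), Tuple2.of(2, "b"));

// With parallelism 1 there is only a single partition, so sorting that
// partition yields a totally ordered result (on one machine).
DataSet<Tuple2<Integer, String>> sorted = data
    .sortPartition(0, Order.ASCENDING)
    .setParallelism(1);

sorted.print();
```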
Flink also supports range partitioning (DataSet.partitionByRange()) in case you want to sort the data in parallel.
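Continuing the sketch above, that would look roughly like this: each parallel partition is sorted and the partitions cover disjoint key ranges, so together they form a globally ordered result.

```java
// Range-partition by the key field, then sort each partition locally.
DataSet<Tuple2<Integer, String>> rangeSorted = data
    .partitionByRange(0)
    .sortPartition(0, Order.ASCENDING);
```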
Best, Fabian