How can I use Dataset to shuffle an entire large dataset?

The Dataset.shuffle() implementation is designed for data that can be shuffled in memory; we're considering adding support for external-memory shuffles, but that work is in the early stages. In the meantime, here is the usual approach we use when the data are too large to fit in memory:
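To see why shuffle() is memory-bound, it helps to look at the strategy it uses: it keeps a buffer of buffer_size elements and emits a uniformly chosen element from that buffer as each new record streams in, so the buffer must fit in memory and only records within buffer_size of each other get mixed. Here is a minimal pure-Python sketch of that windowed-shuffle idea (an illustration of the algorithm, not TensorFlow's actual implementation):

```python
import random

def buffered_shuffle(records, buffer_size, rng=None):
    """Streaming shuffle: hold at most `buffer_size` + 1 items in memory.

    Mirrors the windowed strategy behind Dataset.shuffle(buffer_size):
    each incoming record is added to the buffer, then a uniformly random
    buffered record is emitted. Records farther apart than the buffer
    size are never swapped, which is why the buffer must be large for a
    thorough shuffle.
    """
    rng = rng or random.Random()
    buf = []
    for item in records:
        buf.append(item)
        if len(buf) > buffer_size:
            # Emit a random element from the buffer.
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    # Input exhausted: flush the remaining buffer in random order.
    rng.shuffle(buf)
    yield from buf
```

For example, `list(buffered_shuffle(range(1000), 100))` yields all 1000 values exactly once, in a randomized order, while holding only about 100 of them in memory at a time.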

Randomly shuffle the entire dataset once, using a MapReduce/Spark/Beam/etc. job, to create a set of roughly equal-sized files ("shards").
In each epoch:

  1. Randomly shuffle the list of shard filenames, using Dataset.list_files(...).shuffle(num_shards).
  2. Use dataset.interleave(lambda filename: tf.data.TextLineDataset(filename), cycle_length=N) to mix together records from N different shards.
  3. Use dataset.shuffle(B) to shuffle the resulting dataset. Setting B might require some experimentation, but you will probably want to set it to some value larger than the number of records in a single shard.
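The three per-epoch steps above can be simulated end to end without TensorFlow. The sketch below is a pure-Python illustration of the same logic, with shuffling the shard order standing in for Dataset.list_files(...).shuffle(num_shards), a round-robin reader standing in for interleave(..., cycle_length=N), and a streaming buffer shuffle standing in for shuffle(B); the shard names and read_shard callback are hypothetical stand-ins for reading real files:

```python
import random

def sharded_shuffle(shard_names, read_shard, cycle_length, buffer_size, rng):
    """Yield records from all shards in a randomized order.

    `read_shard(name)` should return an iterator of that shard's records
    (a stand-in for tf.data.TextLineDataset(filename)).
    """
    # Step 1: shuffle the list of shard names.
    names = list(shard_names)
    rng.shuffle(names)

    # Step 2: interleave records round-robin from `cycle_length` open shards,
    # opening the next shard whenever one is exhausted.
    def interleaved():
        pending = iter(names)
        open_iters = [read_shard(n) for n in
                      (x for _, x in zip(range(cycle_length), pending))]
        while open_iters:
            for it in list(open_iters):
                rec = next(it, None)
                if rec is None:
                    open_iters.remove(it)
                    nxt = next(pending, None)
                    if nxt is not None:
                        open_iters.append(read_shard(nxt))
                else:
                    yield rec

    # Step 3: streaming buffer shuffle of the interleaved stream,
    # holding at most `buffer_size` + 1 records in memory.
    buf = []
    for rec in interleaved():
        buf.append(rec)
        if len(buf) > buffer_size:
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    rng.shuffle(buf)
    yield from buf
```

Because the interleave already mixes records from N shards, a buffer somewhat larger than one shard's record count (as suggested for B above) is enough to break up any remaining per-shard ordering.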
Posted @ 2018-06-26 09:32 by 狂徒归来