Here's a little summary of the different parts for those curious: 1/5
The Dataset is what you pass to the DataLoader. It's where your inputs and labels are stored, and where any transforms are applied.
It's basically one big list of (input, label) tuples. So when you index into it like dataset[i], it returns (input[i], label[i]).
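A quick sketch of that idea. Any object with `__len__` and `__getitem__` works as a map-style dataset (in practice you'd subclass `torch.utils.data.Dataset`); the class name and data here are just made up for illustration:

```python
class SquaresDataset:
    """Toy map-style dataset: inputs and labels stored side by side."""

    def __init__(self):
        self.inputs = [0, 1, 2, 3, 4]
        self.labels = [0, 1, 4, 9, 16]

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, i):
        # Indexing returns an (input, label) tuple
        return self.inputs[i], self.labels[i]

ds = SquaresDataset()
print(ds[2])  # (2, 4)
```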
2/5
The sampler and batch sampler choose which inputs & labels go into the next batch.
Artistic license warning 👨🎨⚠️: They don't actually grab the inputs and labels, and the dataset doesn't actually deplete. They tell the DataLoader which indices to grab.
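Here's a plain-Python sketch of that index-yielding behaviour, mimicking what `torch.utils.data.SequentialSampler` and `BatchSampler` do (these classes are hand-rolled stand-ins, not the real PyTorch ones):

```python
class SequentialSampler:
    """Yields indices 0..n-1 in order — no data is touched."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return iter(range(self.n))

def batch_indices(sampler, batch_size, drop_last=False):
    """Groups a sampler's indices into batches, like BatchSampler does."""
    batch = []
    for idx in sampler:
        batch.append(idx)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last:
        yield batch  # final short batch unless drop_last=True

print(list(batch_indices(SequentialSampler(5), 2)))
# [[0, 1], [2, 3], [4]]
```

The DataLoader then uses each list of indices to look up dataset[i] — the dataset itself never changes.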
3/5
The collate function is what collates/combines the batch of examples into single x_batch, y_batch tensors.
@PyTorch handles a bunch of different types and shapes of data that could need to be collated.
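A minimal hand-rolled version of the idea, assuming each example is a plain (input, label) tuple of numbers (the real `default_collate` also handles tensors, NumPy arrays, dicts, and more, and stacks into tensors rather than lists):

```python
def simple_collate(batch):
    """Turns [(x0, y0), (x1, y1), ...] into (all_xs, all_ys)."""
    xs, ys = zip(*batch)       # transpose the list of tuples
    return list(xs), list(ys)  # torch's version would stack into tensors

print(simple_collate([(1, 10), (2, 20), (3, 30)]))
# ([1, 2, 3], [10, 20, 30])
```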
4/5
If you need custom behaviour, you can pass in a sampler / batch_sampler and/or a collate function to your DataLoader.
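Putting it together — a sketch of wiring custom pieces into a DataLoader (the toy dataset and collate function here are illustrative, not from the thread):

```python
from torch.utils.data import BatchSampler, DataLoader, RandomSampler

# Toy dataset: a list of (input, label) tuples works as a map-style dataset
data = [(i, i * i) for i in range(6)]

def collate(batch):
    # batch is a list of (input, label) tuples picked by the batch sampler
    xs, ys = zip(*batch)
    return list(xs), list(ys)  # could instead torch.stack into tensors

loader = DataLoader(
    data,
    # batch_sampler replaces batch_size/shuffle/sampler/drop_last
    batch_sampler=BatchSampler(RandomSampler(data), batch_size=2, drop_last=False),
    collate_fn=collate,
)

for xb, yb in loader:
    print(xb, yb)  # e.g. [3, 0] [9, 0] — order is random
```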
Here's a blog post I wrote that goes into more detail about each part and shows how to customise them. 5/5 scottcondron.com/jupyter/visual…