

DataLoader for question answering.


dataset A tf.data.Dataset object that contains a potentially large set of elements, where each element is a pair of (input_data, target). input_data is the raw input, such as an image or a piece of text, while target is the ground truth for that input, such as the image's classification label.
size The size of the dataset. tf.data.Dataset doesn't provide a way to get the length directly, since it is lazily loaded and may be infinite.
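Because tf.data.Dataset is lazily evaluated, the loader must carry the element count alongside the dataset. A minimal sketch of this pairing, using a hypothetical SimpleLoader class rather than the real implementation:

```python
# Illustrative sketch only: a loader that pairs a lazily evaluated
# dataset with an explicitly supplied size, mirroring the
# (dataset, size) arguments described above. Names are hypothetical.

class SimpleLoader:
    def __init__(self, dataset, size):
        # `dataset` yields (input_data, target) pairs; because it may be
        # lazily generated (or infinite), its length is given explicitly.
        self._dataset = dataset
        self._size = size

    def __len__(self):
        return self._size


def qa_pairs():
    # A lazy generator of (input_data, target) pairs: raw question text
    # paired with its ground-truth answer.
    yield ("Who wrote Hamlet?", "Shakespeare")
    yield ("Capital of France?", "Paris")


loader = SimpleLoader(qa_pairs(), size=2)
print(len(loader))  # 2
```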



Loads data in SQuAD format and preprocesses the text according to model_spec.

filename Name of the file.
model_spec Specification for the model.
is_training Whether the loaded data is for training or not.
version_2_with_negative Whether the data is in SQuAD 2.0 format, which allows unanswerable questions.
cache_dir The cache directory for saving preprocessed data. If None, a temporary directory is generated to cache the preprocessed data.

A QuestionAnswerDataLoader object.
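The SQuAD file that from_squad consumes is JSON with nested articles, paragraphs, and question/answer entries. The following sketch (not the library's implementation) shows how (context, question, answer) triples can be pulled out of that structure, including the SQuAD 2.0 unanswerable case that version_2_with_negative enables; the function name and single-answer simplification are assumptions:

```python
# Illustrative sketch only: extract (context, question, answer) triples
# from a SQuAD-format dict. This mirrors the parsing step, not the
# model-specific preprocessing applied by model_spec.

def iter_squad_examples(squad, version_2_with_negative=False):
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                if version_2_with_negative and qa.get("is_impossible", False):
                    yield (context, qa["question"], None)  # unanswerable
                else:
                    # Take the first annotated answer for simplicity.
                    yield (context, qa["question"], qa["answers"][0]["text"])


squad = {
    "data": [{
        "paragraphs": [{
            "context": "TensorFlow was released in 2015.",
            "qas": [{
                "id": "q1",
                "question": "When was TensorFlow released?",
                "answers": [{"text": "2015", "answer_start": 27}],
            }],
        }],
    }],
}

examples = list(iter_squad_examples(squad))
print(examples[0][2])  # 2015
```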


Generates a sharded and batched tf.data.Dataset for training/evaluation.

batch_size An integer, the returned dataset will be batched by this size.
is_training A boolean, when True, the returned dataset will be optionally shuffled and repeated as an endless dataset.
shuffle A boolean, when True, the returned dataset will be shuffled to create randomness during model training.
input_pipeline_context An InputContext instance, used to shard the dataset among multiple workers when a distribution strategy is used.
preprocess A function taking three positional arguments: feature, label, and a boolean is_training.
drop_remainder A boolean, whether to drop the final batch when it has fewer than batch_size elements.

A TF dataset ready to be consumed by a Keras model.
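The shuffle/batch/drop_remainder interplay described above can be sketched on a plain Python list (the real method builds a tf.data pipeline; the function name and single-epoch simplification here are assumptions):

```python
import random

# Illustrative sketch only: the shuffle / batch / drop_remainder
# behaviour that gen_dataset configures via tf.data, shown on a list.

def gen_batches(examples, batch_size, is_training=False, shuffle=False,
                drop_remainder=False):
    examples = list(examples)
    if is_training and shuffle:
        random.shuffle(examples)  # randomness during model training
    # (With is_training=True the real pipeline also repeats the dataset
    # endlessly; omitted here so the sketch terminates.)
    for start in range(0, len(examples), batch_size):
        batch = examples[start:start + batch_size]
        if drop_remainder and len(batch) < batch_size:
            break  # drop the final, smaller batch
        yield batch


batches = list(gen_batches(range(10), batch_size=4, drop_remainder=True))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Note that with drop_remainder=False the trailing batch [8, 9] would be kept.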


Splits the dataset into two sub-datasets with the given fraction.

Primarily used for splitting the dataset into training and testing sets.

fraction A float, the fraction of the original data that goes into the first returned sub-dataset.

The two split sub-datasets.
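Conceptually, the fraction fixes the cut point between the two sub-datasets. A sketch under the assumption of a dataset with a known size (the helper name is hypothetical; the real method operates on a tf.data.Dataset):

```python
# Illustrative sketch only: split a sized dataset into two sub-datasets
# by a fraction, as when carving out training and testing sets.

def split_examples(examples, fraction):
    examples = list(examples)
    cut = int(len(examples) * fraction)  # size of the first sub-dataset
    return examples[:cut], examples[cut:]


train, test = split_examples(range(10), fraction=0.8)
print(len(train), len(test))  # 8 2
```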