
Load pandas dataframes using tf.data


This tutorial provides an example of how to load pandas dataframes into a tf.data.Dataset.

This tutorial uses a small dataset provided by the Cleveland Clinic Foundation for Heart Disease. The CSV contains a few hundred rows. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which is a binary classification task.

Read data using pandas

!pip install tensorflow-gpu==2.0.0-rc1
import pandas as pd
import tensorflow as tf

Download the CSV file containing the heart dataset.

csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/applied-dl/heart.csv')

Read the CSV file using pandas.

df = pd.read_csv(csv_file)
df.head()
df.dtypes
age           int64
sex           int64
cp            int64
trestbps      int64
chol          int64
fbs           int64
restecg       int64
thalach       int64
exang         int64
oldpeak     float64
slope         int64
ca            int64
thal         object
target        int64
dtype: object

Convert the thal column, which is an object in the dataframe, to a discrete numerical value.

df['thal'] = pd.Categorical(df['thal'])
df['thal'] = df.thal.cat.codes
df.head()

Load data using tf.data.Dataset

Use tf.data.Dataset.from_tensor_slices to read the values from the pandas dataframe.

One of the advantages of using tf.data.Dataset is that it lets you write simple and highly efficient data pipelines. See the loading data guide to learn more.

target = df.pop('target')
dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))
for feat, targ in dataset.take(5):
  print ('Features: {}, Target: {}'.format(feat, targ))
Features: [ 63.    1.    1.  145.  233.    1.    2.  150.    0.    2.3   3.    0.
   2. ], Target: 0
Features: [ 67.    1.    4.  160.  286.    0.    2.  108.    1.    1.5   2.    3.
   3. ], Target: 1
Features: [ 67.    1.    4.  120.  229.    0.    2.  129.    1.    2.6   2.    2.
   4. ], Target: 0
Features: [ 37.    1.    3.  130.  250.    0.    0.  187.    0.    3.5   3.    0.
   3. ], Target: 0
Features: [ 41.    0.    2.  130.  204.    0.    2.  172.    0.    1.4   1.    0.
   3. ], Target: 0

Since pd.Series implements the __array__ protocol, it can be used transparently nearly anywhere you would use np.array or tf.Tensor.

tf.constant(df['thal'])
<tf.Tensor: id=21, shape=(303,), dtype=int32, numpy=
array([2, 3, 4, 3, 3, 3, 3, 3, 4, 4, 2, 3, 2, 4, 4, 3, 4, 3, 3, 3, 3, 3,
       3, 4, 4, 3, 3, 3, 3, 4, 3, 4, 3, 4, 3, 3, 4, 2, 4, 3, 4, 3, 4, 4,
       2, 3, 3, 4, 3, 3, 4, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 4, 4, 3, 3, 4,
       4, 2, 3, 3, 4, 3, 4, 3, 3, 4, 4, 3, 3, 4, 4, 3, 3, 3, 3, 4, 4, 4,
       3, 3, 4, 3, 4, 4, 3, 4, 3, 3, 3, 4, 3, 4, 4, 3, 3, 4, 4, 4, 4, 4,
       3, 3, 3, 3, 4, 3, 4, 3, 4, 4, 3, 3, 2, 4, 4, 2, 3, 3, 4, 4, 3, 4,
       3, 3, 4, 2, 4, 4, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4,
       4, 3, 3, 3, 4, 3, 4, 3, 4, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3,
       3, 3, 3, 3, 3, 3, 3, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 4, 3, 2,
       4, 4, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 2, 2, 4, 3, 4, 2, 4, 3,
       3, 4, 3, 3, 3, 3, 4, 3, 4, 3, 4, 2, 2, 4, 3, 4, 3, 2, 4, 3, 3, 2,
       4, 4, 4, 4, 3, 0, 3, 3, 3, 3, 1, 4, 3, 3, 3, 4, 3, 4, 3, 3, 3, 4,
       3, 3, 4, 4, 4, 4, 3, 3, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 3,
       3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 3, 2, 4, 4, 4, 4], dtype=int32)>
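
Because of the same protocol, NumPy functions accept the column directly as well. A minimal check, not part of the original notebook:

import numpy as np

# pd.Series is consumed by NumPy functions thanks to the __array__ protocol.
print(np.asarray(df['thal'])[:5])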

Shuffle and batch the dataset.

train_dataset = dataset.shuffle(len(df)).batch(1)
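
A batch size of 1 keeps the example easy to follow. In practice you would usually batch more rows at once and let tf.data prefetch the next batch while the current one trains. A minimal sketch, assuming a batch size of 32 chosen only for illustration (tuned_dataset is not used in the rest of this tutorial):

# A hypothetical pipeline with a larger batch size and prefetching.
tuned_dataset = (dataset
                 .shuffle(len(df))
                 .batch(32)
                 .prefetch(tf.data.experimental.AUTOTUNE))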

Create and train a model

def get_compiled_model():
  model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
  ])

  model.compile(optimizer='adam',
                loss='binary_crossentropy',
                metrics=['accuracy'])
  return model
model = get_compiled_model()
model.fit(train_dataset, epochs=15)
Epoch 1/15
303/303 [==============================] - 1s 3ms/step - loss: 4.1756 - accuracy: 0.5281
Epoch 2/15
303/303 [==============================] - 0s 1ms/step - loss: 1.1895 - accuracy: 0.6139
Epoch 3/15
303/303 [==============================] - 0s 1ms/step - loss: 0.9821 - accuracy: 0.6931
Epoch 4/15
303/303 [==============================] - 0s 1ms/step - loss: 0.8581 - accuracy: 0.6931
Epoch 5/15
303/303 [==============================] - 0s 1ms/step - loss: 0.7814 - accuracy: 0.7096
Epoch 6/15
303/303 [==============================] - 0s 1ms/step - loss: 0.7260 - accuracy: 0.6865
Epoch 7/15
303/303 [==============================] - 0s 1ms/step - loss: 0.6435 - accuracy: 0.7459
Epoch 8/15
303/303 [==============================] - 0s 1ms/step - loss: 0.6331 - accuracy: 0.7162
Epoch 9/15
303/303 [==============================] - 0s 1ms/step - loss: 0.5740 - accuracy: 0.7426
Epoch 10/15
303/303 [==============================] - 0s 1ms/step - loss: 0.5287 - accuracy: 0.7492
Epoch 11/15
303/303 [==============================] - 0s 1ms/step - loss: 0.5053 - accuracy: 0.7591
Epoch 12/15
303/303 [==============================] - 0s 1ms/step - loss: 0.4727 - accuracy: 0.7888
Epoch 13/15
303/303 [==============================] - 0s 1ms/step - loss: 0.5036 - accuracy: 0.7756
Epoch 14/15
303/303 [==============================] - 0s 1ms/step - loss: 0.4790 - accuracy: 0.7690
Epoch 15/15
303/303 [==============================] - 0s 1ms/step - loss: 0.4409 - accuracy: 0.7756
<tensorflow.python.keras.callbacks.History at 0x7f9e20194550>

Alternative to feature columns

Passing a dictionary as an input to a model is as easy as creating a matching dictionary of tf.keras.layers.Input layers, applying any preprocessing, and stacking them up using the functional API. You can use this as an alternative to feature columns.

inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}
x = tf.stack(list(inputs.values()), axis=-1)

x = tf.keras.layers.Dense(10, activation='relu')(x)
output = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model_func = tf.keras.Model(inputs=inputs, outputs=output)

model_func.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy'])
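
The "applying any preprocessing" step above can be as simple as standardizing the stacked features inside the graph. The sketch below is only an illustration and is not part of the original tutorial; means and stds are hypothetical helper constants introduced here to show where preprocessing would fit.

# A minimal sketch: standardize the stacked features with statistics taken from the DataFrame.
# `means` and `stds` are hypothetical helpers, not defined elsewhere in this tutorial.
means = tf.constant(df.mean().values, dtype=tf.float32)
stds = tf.constant(df.std().values, dtype=tf.float32)

norm_inputs = {key: tf.keras.layers.Input(shape=(), name=key) for key in df.keys()}
x_norm = tf.stack(list(norm_inputs.values()), axis=-1)
x_norm = (x_norm - means) / stds   # preprocessing applied before the Dense layers
x_norm = tf.keras.layers.Dense(10, activation='relu')(x_norm)
norm_output = tf.keras.layers.Dense(1, activation='sigmoid')(x_norm)

model_norm = tf.keras.Model(inputs=norm_inputs, outputs=norm_output)
model_norm.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy'])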

tf.data 一起使用时,保存 pd.DataFrame 列结构的最简单方法是将 pd.DataFrame 转换为 dict ,并对该字典进行切片。

dict_slices = tf.data.Dataset.from_tensor_slices((df.to_dict('list'), target.values)).batch(16)
for dict_slice in dict_slices.take(1):
  print (dict_slice)
({'age': <tf.Tensor: id=14781, shape=(16,), dtype=int32, numpy=
array([63, 67, 67, 37, 41, 56, 62, 57, 63, 53, 57, 56, 56, 44, 52, 57],
      dtype=int32)>, 'sex': <tf.Tensor: id=14789, shape=(16,), dtype=int32, numpy=array([1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1], dtype=int32)>, 'cp': <tf.Tensor: id=14784, shape=(16,), dtype=int32, numpy=array([1, 4, 4, 3, 2, 2, 4, 4, 4, 4, 4, 2, 3, 2, 3, 3], dtype=int32)>, 'trestbps': <tf.Tensor: id=14793, shape=(16,), dtype=int32, numpy=
array([145, 160, 120, 130, 130, 120, 140, 120, 130, 140, 140, 140, 130,
       120, 172, 150], dtype=int32)>, 'chol': <tf.Tensor: id=14783, shape=(16,), dtype=int32, numpy=
array([233, 286, 229, 250, 204, 236, 268, 354, 254, 203, 192, 294, 256,
       263, 199, 168], dtype=int32)>, 'fbs': <tf.Tensor: id=14786, shape=(16,), dtype=int32, numpy=array([1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0], dtype=int32)>, 'restecg': <tf.Tensor: id=14788, shape=(16,), dtype=int32, numpy=array([2, 2, 2, 0, 2, 0, 2, 0, 2, 2, 0, 2, 2, 0, 0, 0], dtype=int32)>, 'thalach': <tf.Tensor: id=14792, shape=(16,), dtype=int32, numpy=
array([150, 108, 129, 187, 172, 178, 160, 163, 147, 155, 148, 153, 142,
       173, 162, 174], dtype=int32)>, 'exang': <tf.Tensor: id=14785, shape=(16,), dtype=int32, numpy=array([0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0], dtype=int32)>, 'oldpeak': <tf.Tensor: id=14787, shape=(16,), dtype=float32, numpy=
array([2.3, 1.5, 2.6, 3.5, 1.4, 0.8, 3.6, 0.6, 1.4, 3.1, 0.4, 1.3, 0.6,
       0. , 0.5, 1.6], dtype=float32)>, 'slope': <tf.Tensor: id=14790, shape=(16,), dtype=int32, numpy=array([3, 2, 2, 3, 1, 1, 3, 1, 2, 3, 2, 2, 2, 1, 1, 1], dtype=int32)>, 'ca': <tf.Tensor: id=14782, shape=(16,), dtype=int32, numpy=array([0, 3, 2, 0, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0, 0, 0], dtype=int32)>, 'thal': <tf.Tensor: id=14791, shape=(16,), dtype=int32, numpy=array([2, 3, 4, 3, 3, 3, 3, 3, 4, 4, 2, 3, 2, 4, 4, 3], dtype=int32)>}, <tf.Tensor: id=14794, shape=(16,), dtype=int64, numpy=array([0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0])>)
model_func.fit(dict_slices, epochs=15)
Epoch 1/15
19/19 [==============================] - 0s 26ms/step - loss: 3.4552 - accuracy: 0.3960
Epoch 2/15
19/19 [==============================] - 0s 2ms/step - loss: 2.0691 - accuracy: 0.5875
Epoch 3/15
19/19 [==============================] - 0s 2ms/step - loss: 1.9875 - accuracy: 0.5941
Epoch 4/15
19/19 [==============================] - 0s 2ms/step - loss: 1.9019 - accuracy: 0.5710
Epoch 5/15
19/19 [==============================] - 0s 2ms/step - loss: 1.8269 - accuracy: 0.5743
Epoch 6/15
19/19 [==============================] - 0s 2ms/step - loss: 1.7581 - accuracy: 0.5809
Epoch 7/15
19/19 [==============================] - 0s 2ms/step - loss: 1.6915 - accuracy: 0.5743
Epoch 8/15
19/19 [==============================] - 0s 2ms/step - loss: 1.6286 - accuracy: 0.5776
Epoch 9/15
19/19 [==============================] - 0s 2ms/step - loss: 1.5698 - accuracy: 0.5809
Epoch 10/15
19/19 [==============================] - 0s 2ms/step - loss: 1.5131 - accuracy: 0.5809
Epoch 11/15
19/19 [==============================] - 0s 2ms/step - loss: 1.4584 - accuracy: 0.5842
Epoch 12/15
19/19 [==============================] - 0s 2ms/step - loss: 1.4067 - accuracy: 0.5875
Epoch 13/15
19/19 [==============================] - 0s 2ms/step - loss: 1.3575 - accuracy: 0.5875
Epoch 14/15
19/19 [==============================] - 0s 2ms/step - loss: 1.3104 - accuracy: 0.6040
Epoch 15/15
19/19 [==============================] - 0s 2ms/step - loss: 1.2653 - accuracy: 0.6073
<tensorflow.python.keras.callbacks.History at 0x7f9dd07bb310>
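
To see how the dict-input model is consumed after training, you can run it on a single batch of feature dictionaries. A minimal sketch, not part of the original notebook:

# Run the trained functional model on one batch of feature dicts from dict_slices.
for features_dict, labels in dict_slices.take(1):
  predictions = model_func.predict(features_dict)
  print(predictions[:5])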