
Use XLA with tf.function


This tutorial trains a TensorFlow model to classify the MNIST dataset, where the training function is compiled using XLA.

First, load TensorFlow and enable eager execution.

# Note: in TF 2.4, `jit_compile` was called `experimental_compile`.
pip install -q tf-nightly

import tensorflow as tf
tf.compat.v1.enable_eager_execution()

Then define some necessary constants and prepare the MNIST dataset.

# Size of each input image, 28 x 28 pixels
IMAGE_SIZE = 28 * 28
# Number of distinct number labels, [0..9]
NUM_CLASSES = 10
# Number of examples in each training batch (step)
TRAIN_BATCH_SIZE = 100
# Number of training steps to run
TRAIN_STEPS = 1000
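As a quick sanity check on these constants (assuming the standard 60,000-example MNIST training split, which the constants themselves do not guarantee), the training loop below makes a bit more than one and a half passes over the data:

```python
# Assumed size of the MNIST training split; the tutorial's constants
# are reproduced here so the arithmetic is self-contained.
TRAIN_EXAMPLES = 60_000
TRAIN_BATCH_SIZE = 100
TRAIN_STEPS = 1000

steps_per_epoch = TRAIN_EXAMPLES // TRAIN_BATCH_SIZE  # 600 steps per full pass
epochs = TRAIN_STEPS / steps_per_epoch                # ~1.67 passes over the data

print(steps_per_epoch, round(epochs, 2))  # 600 1.67
```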

# Loads MNIST dataset.
train, test = tf.keras.datasets.mnist.load_data()
train_ds = tf.data.Dataset.from_tensor_slices(train).batch(TRAIN_BATCH_SIZE).repeat()

# Casting from raw data to the required datatypes.
def cast(images, labels):
  images = tf.cast(
      tf.reshape(images, [-1, IMAGE_SIZE]), tf.float32)
  labels = tf.cast(labels, tf.int64)
  return (images, labels)
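The reshape inside `cast` simply flattens each 28×28 image into a 784-element float vector. A minimal pure-Python sketch of the same flattening, using a nested list in place of a tensor (the `flatten_image` helper is illustrative, not part of the tutorial's code):

```python
IMAGE_SIZE = 28 * 28

def flatten_image(image):
    """Flatten a 28x28 nested list into one row of 784 floats, mirroring
    tf.reshape(images, [-1, IMAGE_SIZE]) followed by tf.cast(..., tf.float32)."""
    return [float(px) for row in image for px in row]

# A dummy all-zero "image" stands in for real MNIST pixel data.
image = [[0] * 28 for _ in range(28)]
flat = flatten_image(image)
print(len(flat))  # 784
```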
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

Finally, define the model and the optimizer. The model uses a single dense layer.

layer = tf.keras.layers.Dense(NUM_CLASSES)
optimizer = tf.keras.optimizers.Adam()

Define the training function

In the training function, you get the predicted labels using the layer defined above, then minimize the loss by applying its gradients with the optimizer. To compile the computation using XLA, place it inside a tf.function with jit_compile=True.

@tf.function(jit_compile=True)
def train_mnist(images, labels):
    images, labels = cast(images, labels)

    with tf.GradientTape() as tape:
      predicted_labels = layer(images)
      loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
          logits=predicted_labels, labels=labels
      ))
    layer_variables = layer.trainable_variables
    grads = tape.gradient(loss, layer_variables)
    optimizer.apply_gradients(zip(grads, layer_variables))
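To make the loss concrete: `sparse_softmax_cross_entropy_with_logits` computes, per example, the negative log of the softmax probability assigned to the true class. A minimal pure-Python sketch for a single example (numerically stabilized by subtracting the max logit; the helper name is illustrative, not TensorFlow API):

```python
import math

def sparse_softmax_xent(logits, label):
    """Negative log softmax probability of the true class for one example."""
    m = max(logits)                          # subtract the max logit for stability
    exps = [math.exp(z - m) for z in logits]
    log_prob = (logits[label] - m) - math.log(sum(exps))
    return -log_prob

# Uniform logits over 10 classes: the loss is log(10) regardless of the label.
loss = sparse_softmax_xent([0.0] * 10, label=3)
print(round(loss, 4))  # 2.3026
```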

Train and test the model

Once you have defined the training function, train the model:

for images, labels in train_ds:
  if optimizer.iterations > TRAIN_STEPS:
    break
  train_mnist(images, labels)

And, finally, check the accuracy:

images, labels = cast(test[0], test[1])
predicted_labels = layer(images)
correct_prediction = tf.equal(tf.argmax(predicted_labels, 1), labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Prediction accuracy after training: %s" % accuracy)
Prediction accuracy after training: tf.Tensor(0.8794, shape=(), dtype=float32)
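The accuracy computation above reduces to: take the argmax of each row of logits, compare it with the label, and average the matches. A pure-Python sketch on toy data (the exact 0.8794 value above will vary from run to run):

```python
def accuracy(logit_rows, labels):
    """Fraction of rows whose argmax matches the label, mirroring
    tf.equal(tf.argmax(predicted_labels, 1), labels) + tf.reduce_mean."""
    correct = sum(
        1 for logits, label in zip(logit_rows, labels)
        if max(range(len(logits)), key=logits.__getitem__) == label
    )
    return correct / len(labels)

# Toy logits: rows 0 and 2 predict the true class, row 1 does not.
logit_rows = [[0.1, 2.0, 0.3], [5.0, 0.0, 0.0], [0.0, 0.0, 9.0]]
labels = [1, 2, 2]
print(accuracy(logit_rows, labels))  # 2 of 3 correct -> 0.666...
```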