TensorFlow also includes the tf.keras API, a high-level neural network API that provides useful abstractions to reduce boilerplate. However, in this guide, you will use basic classes.
import tensorflow as tf
Solving machine learning problems
Solving a machine learning problem usually consists of the following steps:
- Obtain training data.
- Define the model.
- Define a loss function.
- Run through the training data, calculating the loss between the model's output and the ideal (target) value.
- Calculate gradients for that loss and use an optimizer to adjust the variables to fit the data.
- Evaluate your results.
For illustration purposes, in this guide you'll develop a simple linear model, $f(x) = x * W + b$, which has two variables: $W$ (weights) and $b$ (bias).
This is the most basic of machine learning problems: Given $x$ and $y$, try to find the slope and offset of a line via simple linear regression.
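For reference, this tiny problem also has a closed-form least-squares solution; this guide instead finds $W$ and $b$ iteratively with gradient descent, since that approach generalizes to models with no closed form. The standard result, with $\bar{x}$ and $\bar{y}$ the means of the inputs and outputs, is:

$$W = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad b = \bar{y} - W\,\bar{x}$$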
Supervised learning uses inputs (usually denoted as x) and outputs (denoted y, often called labels). The goal is to learn from paired inputs and outputs so that you can predict the value of an output from an input.
In TensorFlow, each input of your data is almost always represented by a tensor, and is often a vector. In supervised training, the output (or value you'd like to predict) is also a tensor.
Here is some data synthesized by adding Gaussian (Normal) noise to points along a line.
# The actual line
TRUE_W = 3.0
TRUE_B = 2.0

NUM_EXAMPLES = 1000

# A vector of random x values
x = tf.random.normal(shape=[NUM_EXAMPLES])

# Generate some noise
noise = tf.random.normal(shape=[NUM_EXAMPLES])

# Calculate y
y = x * TRUE_W + TRUE_B + noise
# Plot all the data
import matplotlib.pyplot as plt

plt.scatter(x, y, c="b")
plt.show()
Tensors are usually gathered together in batches, or groups of inputs and outputs stacked together. Batching can confer some training benefits and works well with accelerators and vectorized computation. Given how small this dataset is, you can treat the entire dataset as a single batch.
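If the dataset were larger, you could instead split it into smaller batches, for example with tf.data. The snippet below is a minimal sketch of that approach and is not used in the rest of this guide:

# A minimal sketch of batching with tf.data (not used below)
# `x` and `y` are the tensors generated above
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=NUM_EXAMPLES).batch(32)

# Each element is now a (batch_of_x, batch_of_y) pair
for batch_x, batch_y in dataset.take(1):
  print(batch_x.shape, batch_y.shape)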
Define the model
Use tf.Module to encapsulate the variables and the computation. You could use any Python object, but this way it can be easily saved.
Here, you define both w and b as variables.
class MyModel(tf.Module):
  def __init__(self, **kwargs):
    super().__init__(**kwargs)
    # Initialize the weights to `5.0` and the bias to `0.0`
    # In practice, these should be randomly initialized
    self.w = tf.Variable(5.0)
    self.b = tf.Variable(0.0)

  def __call__(self, x):
    return self.w * x + self.b

model = MyModel()

# List the variables using tf.Module's built-in variable aggregation
print("Variables:", model.variables)

# Verify the model works
assert model(3.0).numpy() == 15.0
Variables: (<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>, <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=5.0>)
The initial variables are set here in a fixed way, but Keras comes with a number of initializers you could use, with or without the rest of Keras.
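For example, you could draw the initial weight from a distribution instead. The sketch below uses a Keras initializer; the variable names are illustrative only, and the rest of this guide keeps the fixed 5.0 / 0.0 values above.

# A sketch of random initialization using a Keras initializer
initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.1)
w_init = tf.Variable(initializer(shape=[]))
b_init = tf.Variable(tf.zeros(shape=[]))
print(w_init.numpy(), b_init.numpy())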
Define a loss function
A loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Define the standard L2 loss, also known as "mean squared error":
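In equation form, for $N$ examples this loss is the mean of the squared differences between the targets $y_i$ and the predictions $\hat{y}_i = x_i W + b$:

$$L(W, b) = \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - (x_i W + b)\bigr)^2$$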
# This computes a single loss value for an entire batch
def loss(target_y, predicted_y):
  return tf.reduce_mean(tf.square(target_y - predicted_y))
Before training the model, you can visualize the loss value by plotting the model's predictions in red and the training data in blue:
plt.scatter(x, y, c="b")
plt.scatter(x, model(x), c="r")
plt.show()

print("Current loss: %1.6f" % loss(y, model(x)).numpy())
Current loss: 9.331731
Define a training loop
The training loop consists of repeatedly doing the following tasks in order:
- Sending a batch of inputs through the model to generate outputs
- Calculating the loss by comparing the model's outputs to the target outputs (or labels)
- Using gradient tape to find the gradients
- Optimizing the variables with those gradients
For this example, you can train the model using gradient descent.
There are many variants of the gradient descent scheme that are captured in tf.keras.optimizers. But in the spirit of building from first principles, here you will implement the basic math yourself with the help of tf.GradientTape for automatic differentiation and tf.Variable.assign_sub for decrementing a value (which combines assignment and subtraction in a single operation).
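As a quick, standalone illustration of assign_sub (the values here are arbitrary and not part of the model):

# assign_sub updates a variable in place: v <- v - delta
v = tf.Variable(10.0)
v.assign_sub(3.0)   # equivalent to v.assign(v - 3.0)
print(v.numpy())    # 7.0

With those pieces in place, here is the training step itself: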
# Given a callable model, inputs, outputs, and a learning rate...
def train(model, x, y, learning_rate):

  with tf.GradientTape() as t:
    # Trainable variables are automatically tracked by GradientTape
    current_loss = loss(y, model(x))

  # Use GradientTape to calculate the gradients with respect to W and b
  dw, db = t.gradient(current_loss, [model.w, model.b])

  # Subtract the gradient scaled by the learning rate
  model.w.assign_sub(learning_rate * dw)
  model.b.assign_sub(learning_rate * db)
For a look at training, you can send the same batch of x and y through the training loop, and see how W and b evolve.
model = MyModel()

# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)

# Define a training loop
def training_loop(model, x, y):

  for epoch in epochs:
    # Update the model with the single giant batch
    train(model, x, y, learning_rate=0.1)

    # Track the values of W and b after this update
    Ws.append(model.w.numpy())
    bs.append(model.b.numpy())
    current_loss = loss(y, model(x))

    print("Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f" %
          (epoch, Ws[-1], bs[-1], current_loss))
print("Starting: W=%1.2f b=%1.2f, loss=%2.5f" % (model.w, model.b, loss(y, model(x)))) # Do the training training_loop(model, x, y) # Plot it plt.plot(epochs, Ws, "r", epochs, bs, "b") plt.plot([TRUE_W] * len(epochs), "r--", [TRUE_B] * len(epochs), "b--") plt.legend(["W", "b", "True W", "True b"]) plt.show()
Starting: W=5.00 b=0.00, loss=9.33173
Epoch 0: W=4.58 b=0.41, loss=6.23475
Epoch 1: W=4.25 b=0.74, loss=4.29156
Epoch 2: W=3.99 b=1.00, loss=3.07231
Epoch 3: W=3.78 b=1.21, loss=2.30730
Epoch 4: W=3.62 b=1.37, loss=1.82729
Epoch 5: W=3.49 b=1.50, loss=1.52611
Epoch 6: W=3.39 b=1.60, loss=1.33714
Epoch 7: W=3.31 b=1.69, loss=1.21857
Epoch 8: W=3.24 b=1.75, loss=1.14417
Epoch 9: W=3.19 b=1.80, loss=1.09749
# Visualize how the trained model performs
plt.scatter(x, y, c="b")
plt.scatter(x, model(x), c="r")
plt.show()

print("Current loss: %1.6f" % loss(y, model(x)).numpy())
Current loss: 1.097489
The same solution, but with Keras
It's useful to contrast the code above with the equivalent in Keras.
Defining the model looks exactly the same if you subclass tf.keras.Model. Remember that Keras models ultimately inherit from tf.Module.
class MyModelKeras(tf.keras.Model):
  def __init__(self, **kwargs):
    super().__init__(**kwargs)
    # Initialize the weights to `5.0` and the bias to `0.0`
    # In practice, these should be randomly initialized
    self.w = tf.Variable(5.0)
    self.b = tf.Variable(0.0)

  def __call__(self, x, **kwargs):
    return self.w * x + self.b

keras_model = MyModelKeras()

# Reuse the training loop with a Keras model
training_loop(keras_model, x, y)

# You can also save a checkpoint using Keras's built-in support
keras_model.save_weights("my_checkpoint")
Epoch 0: W=4.58 b=0.41, loss=6.23475
Epoch 1: W=4.25 b=0.74, loss=4.29156
Epoch 2: W=3.99 b=1.00, loss=3.07231
Epoch 3: W=3.78 b=1.21, loss=2.30730
Epoch 4: W=3.62 b=1.37, loss=1.82729
Epoch 5: W=3.49 b=1.50, loss=1.52611
Epoch 6: W=3.39 b=1.60, loss=1.33714
Epoch 7: W=3.31 b=1.69, loss=1.21857
Epoch 8: W=3.24 b=1.75, loss=1.14417
Epoch 9: W=3.19 b=1.80, loss=1.09749
Rather than write new training loops each time you create a model, you can use the built-in features of Keras as a shortcut. This can be useful when you do not want to write or debug Python training loops.
If you do, you will need to use model.compile() to set the parameters, and model.fit() to train. It can be less code to use Keras implementations of L2 loss and gradient descent, again as a shortcut. Keras losses and optimizers can be used outside of these convenience functions, too, and the previous example could have used them.
keras_model = MyModelKeras()

# compile sets the training parameters
keras_model.compile(
    # By default, fit() uses tf.function(). You can
    # turn that off for debugging, but it is on now.
    run_eagerly=False,

    # Using a built-in optimizer, configuring as an object
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),

    # Keras comes with built-in MSE error
    # However, you could use the loss function
    # defined above
    loss=tf.keras.losses.mean_squared_error,
)
fit expects batched data or a complete dataset as a NumPy array. NumPy arrays are chopped into batches and default to a batch size of 32.
In this case, to match the behavior of the hand-written loop, you should pass x in as a single batch of size 1000.
print(x.shape[0])
keras_model.fit(x, y, epochs=10, batch_size=1000)
1000
Epoch 1/10
1/1 [==============================] - 0s 1ms/step - loss: 9.3317
Epoch 2/10
1/1 [==============================] - 0s 895us/step - loss: 6.2348
Epoch 3/10
1/1 [==============================] - 0s 871us/step - loss: 4.2916
Epoch 4/10
1/1 [==============================] - 0s 952us/step - loss: 3.0723
Epoch 5/10
1/1 [==============================] - 0s 951us/step - loss: 2.3073
Epoch 6/10
1/1 [==============================] - 0s 940us/step - loss: 1.8273
Epoch 7/10
1/1 [==============================] - 0s 1ms/step - loss: 1.5261
Epoch 8/10
1/1 [==============================] - 0s 813us/step - loss: 1.3371
Epoch 9/10
1/1 [==============================] - 0s 926us/step - loss: 1.2186
Epoch 10/10
1/1 [==============================] - 0s 933us/step - loss: 1.1442
<tensorflow.python.keras.callbacks.History at 0x7f93781a16a0>
Note that Keras prints out the loss after training, not before, so the first loss appears lower, but otherwise this shows essentially the same training performance.
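If you want the loss of the final, trained Keras model on its own, you can compute it directly with the loss function defined earlier. It should be slightly below the last value reported by fit, since fit reports the loss before each update:

# Check the final loss of the trained Keras model
print("Final loss: %1.6f" % loss(y, keras_model(x)).numpy())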
In this guide, you have seen how to use the core classes of tensors, variables, modules, and gradient tape to build and train a model, and further how those ideas map to Keras.
This is, however, an extremely simple problem. For a more practical introduction, see this tutorial that uses real text.