The documents in this section dive into the details of how TensorFlow works. The topics are as follows:
High Level APIs
- Keras, TensorFlow's high-level API for building and training deep learning models.
- Eager Execution, an API for writing TensorFlow code imperatively, as you would use NumPy.
- Importing Data, easy input pipelines to bring your data into your TensorFlow program.
- Estimators, a high-level API that provides fully-packaged models ready for large-scale training and production.
- Premade Estimators, the basics of premade Estimators.
- Checkpoints, save training progress and resume where you left off.
- Feature Columns, handle a variety of input data types without changes to the model.
- Datasets for Estimators, use `tf.data` to input data.
- Creating Custom Estimators, write your own Estimator.
- Using GPUs explains how TensorFlow assigns operations to devices and how you can change the arrangement manually.
- Using TPUs explains how to modify `Estimator` programs to run on a TPU.
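To give a feel for the high-level APIs listed above, here is a minimal Keras sketch: build a small classifier, train it on synthetic data, and get predictions. The layer sizes, data shapes, and optimizer here are arbitrary choices for illustration, not recommendations.

```python
import numpy as np
import tensorflow as tf

# Build a tiny model with the Keras Sequential API.
# The layer widths and the 4-feature / 3-class setup are arbitrary.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Synthetic data: 32 examples with 4 features each, 3 classes.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))

model.fit(x, y, epochs=1, verbose=0)     # one pass over the data
probs = model.predict(x, verbose=0)      # per-class probabilities, shape (32, 3)
```

Keras handles the graph construction, training loop, and device placement for you; the lower-level mechanics are covered in the next section.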
Low Level APIs
- Introduction, which introduces the basics of how you can use TensorFlow outside of the high-level APIs.
- Tensors, which explains how to create, manipulate, and access Tensors, the fundamental object in TensorFlow.
- Variables, which details how to represent shared, persistent state in your program.
- Graphs and Sessions, which explains:
  - dataflow graphs, which are TensorFlow's representation of computations as dependencies between operations.
  - sessions, which are TensorFlow's mechanism for running dataflow graphs across one or more local or remote devices.
  If you are programming with the low-level TensorFlow API, this unit is essential. If you are programming with a high-level TensorFlow API such as Estimators or Keras, the high-level API creates and manages graphs and sessions for you, but understanding graphs and sessions can still be helpful.
- Save and Restore, which explains how to save and restore variables and models.
- Ragged Tensors, which explains how to use Ragged Tensors to encode nested variable-length lists.
- Embeddings, which introduces the concept of embeddings, provides a simple example of training an embedding in TensorFlow, and explains how to view embeddings with the TensorBoard Embedding Projector.
- TensorFlow Debugger, which explains how to use the TensorFlow debugger (tfdbg).
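The graph-and-session workflow described above can be sketched in a few lines: build a dataflow graph (which computes nothing by itself), then run it in a session. This example uses the `tf.compat.v1` namespace so it also loads under TensorFlow 2.x; under 1.x the same calls exist directly on `tf`.

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # use graph mode, as in TF 1.x

# Phase 1: build a dataflow graph. No computation happens here.
g = tf1.Graph()
with g.as_default():
    a = tf1.constant(3.0)
    b = tf1.placeholder(tf.float32)  # a value fed in at run time
    c = a * b                        # an op node, not a result

# Phase 2: run the graph in a session, feeding the placeholder.
with tf1.Session(graph=g) as sess:
    result = sess.run(c, feed_dict={b: 2.0})  # evaluates c = 3.0 * 2.0
```

The session owns the runtime resources and can place the graph's operations on local or remote devices; the high-level APIs do both phases for you behind the scenes.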
Performance
Performance is an important consideration when training machine learning models. Performance speeds up and scales research while also providing end users with near-instant predictions.
- Performance overview contains a collection of best practices for optimizing your TensorFlow code.
- Data input pipeline describes the `tf.data` API for building efficient data input pipelines for TensorFlow.
- Benchmarks contains a collection of benchmark results for a variety of hardware configurations.
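As a small illustration of the input-pipeline best practices referenced above, the sketch below chains `map`, `batch`, and `prefetch` so that preprocessing can overlap with training. It iterates over the dataset eagerly, so it assumes TensorFlow 2.x (or 1.x with eager execution enabled); the `AUTOTUNE` settings let the runtime pick parallelism levels.

```python
import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE

# A toy pipeline: parallel preprocessing, batching, and prefetching.
# In a real pipeline the map function would decode and augment records.
dataset = (
    tf.data.Dataset.range(8)                          # elements 0..7
    .map(lambda x: x * 2, num_parallel_calls=AUTOTUNE)  # "preprocess" in parallel
    .batch(4)                                         # group into batches of 4
    .prefetch(AUTOTUNE)                               # overlap with consumption
)

batches = [batch.numpy().tolist() for batch in dataset]
# batches == [[0, 2, 4, 6], [8, 10, 12, 14]]
```

The `prefetch` at the end is the key step for performance: it keeps the next batch ready while the current one is being consumed by the model.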
Extend
This section explains how developers can extend TensorFlow's capabilities.
- TensorFlow architecture presents an architectural overview.
- Create an op, which explains how to create your own operations.
- Custom filesystem plugin, which explains how to add support for your own shared or distributed filesystem.
- Custom file and record formats, which details how to add support for your own file and record formats.
- Language bindings: Python is currently the only language covered by TensorFlow's API stability promises, but TensorFlow also provides the functionality needed to build bindings and develop features in other languages.
- Model files, for creating tools compatible with TensorFlow's model format.
XLA (Accelerated Linear Algebra) is an experimental compiler for linear algebra that optimizes TensorFlow computations.
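One way to try XLA is to enable its just-in-time compilation for an entire session via the `global_jit_level` option. The sketch below uses the TF 1.x-style session configuration (through `tf.compat.v1` so it also loads under 2.x); whether a given op is actually compiled depends on the TensorFlow build and the available devices, but the program runs either way.

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Ask XLA to JIT-compile eligible parts of the graph for this session.
config = tf1.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf1.OptimizerOptions.ON_1
)

with tf1.Session(config=config) as sess:
    x = tf1.constant([1.0, 2.0])
    y = sess.run(x * x)  # computed results are unchanged by XLA
```

XLA is an optimization of how computations are executed, not what they compute, so enabling it should not change numerical results beyond normal floating-point variation.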