Installing TensorFlow on Ubuntu

This guide explains how to install TensorFlow on Ubuntu Linux. While these instructions may work on other Linux variants, they are tested and supported with the following system requirements:

  • 64-bit desktops or laptops
  • Ubuntu 16.04 or higher

Choose which TensorFlow to install

The following TensorFlow variants are available for installation:

  • TensorFlow with CPU support only. If your system does not have an NVIDIA® GPU, you must install this version. This version of TensorFlow is usually easier to install, so even if you have an NVIDIA GPU, we recommend installing this version first.
  • TensorFlow with GPU support. TensorFlow programs usually run much faster on a GPU than on a CPU. If you run performance-critical applications and your system has an NVIDIA® GPU that meets the prerequisites, you should install this version. See TensorFlow GPU support for details.

How to install TensorFlow

There are a few options to install TensorFlow on your machine:

Use pip in a virtual environment

The Virtualenv tool creates virtual Python environments that are isolated from other Python development on the same machine. In this scenario, you install TensorFlow and its dependencies within a virtual environment that is available when activated. Virtualenv provides a reliable way to install and run TensorFlow while avoiding conflicts with the rest of the system.

1. Install Python, pip, and virtualenv.

On Ubuntu, Python is automatically installed and pip is usually installed. Confirm the python and pip versions:

  python -V  # or: python3 -V
  pip -V     # or: pip3 -V

To install these packages on Ubuntu:

  sudo apt-get install python-pip python-dev python-virtualenv   # for Python 2.7
  sudo apt-get install python3-pip python3-dev python-virtualenv # for Python 3.n

We recommend using pip version 8.1 or higher. If using a release before version 8.1, upgrade pip:

  sudo pip install -U pip

If not using Ubuntu and setuptools is installed, use easy_install to install pip:

  easy_install -U pip

2. Create a directory for the virtual environment and choose a Python interpreter.

  mkdir ~/tensorflow  # somewhere to work out of
  cd ~/tensorflow
  # Choose one of the following Python environments for the ./venv directory:
  virtualenv --system-site-packages venv            # Use the default Python (Python 2.7)
  virtualenv --system-site-packages -p python3 venv # Use Python 3.n

3. Activate the Virtualenv environment.

Use one of these shell-specific commands to activate the virtual environment:

  source ~/tensorflow/venv/bin/activate      # bash, sh, ksh, or zsh
  source ~/tensorflow/venv/bin/activate.csh  # csh or tcsh
  . ~/tensorflow/venv/bin/activate.fish      # fish

When the Virtualenv is activated, the shell prompt displays as (venv)$.

4. Upgrade pip in the virtual environment.

Within the active virtual environment, upgrade pip:

(venv)$ pip install -U pip

You can install other Python packages within the virtual environment without affecting packages outside the virtualenv.
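
For example, you could add another package to the environment without touching the system Python; matplotlib is used here purely as an illustration:

(venv)$ pip install -U matplotlib   # installed only under ~/tensorflow/venv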

5. Install TensorFlow in the virtual environment.

Choose one of the available TensorFlow packages for installation:

  • tensorflow — current release for CPU only
  • tensorflow-gpu — current release with GPU support
  • tf-nightly — nightly build for CPU only
  • tf-nightly-gpu — nightly build with GPU support

Within an active Virtualenv environment, use pip to install the package:

  pip install -U tensorflow

Use pip list to show the packages installed in the virtual environment. Validate the install and test the version:

(venv)$ python -c "import tensorflow as tf; print(tf.__version__)"
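
For example, to confirm the package appears in the environment's package list (the grep filter is only a convenience and assumes a Unix shell):

(venv)$ pip list | grep tensorflow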

Use the deactivate command to stop the Python virtual environment.
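
For example (the prompts shown are illustrative):

(venv)$ deactivate
$   # the (venv) prefix disappears from the prompt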

Problems

If the above steps failed, try installing the TensorFlow binary using the remote URL of the pip package:

(venv)$ pip install --upgrade remote-pkg-URL   # Python 2.7
(venv)$ pip3 install --upgrade remote-pkg-URL  # Python 3.n

The remote-pkg-URL depends on the operating system, Python version, and GPU support. See here for the URL naming scheme and location.
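
For example, using the Python 3.5 CPU-only URL listed in "The URL of the TensorFlow Python package" below (this assumes the virtual environment uses Python 3.5):

(venv)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp35-cp35m-linux_x86_64.whl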

See Common Installation Problems if you encounter problems.

Uninstall TensorFlow

To uninstall TensorFlow, remove the Virtualenv directory you created in step 2:

  deactivate  # stop the virtualenv
  rm -r ~/tensorflow/venv

Use pip in your system environment

Use pip to install the TensorFlow package directly on your system without using a container or virtual environment for isolation. This method is recommended for system administrators who want a TensorFlow installation that is available to everyone on a multi-user system.

Since a system install is not isolated, it could interfere with other Python-based installations. But if you understand pip and your Python environment, a system pip install is straightforward.

See the REQUIRED_PACKAGES section of setup.py for a list of packages that TensorFlow installs.

1. Install Python, pip, and virtualenv.

On Ubuntu, Python is automatically installed and pip is usually installed. Confirm the python and pip versions:

  python -V  # or: python3 -V
  pip -V     # or: pip3 -V

To install these packages on Ubuntu:

  sudo apt-get install python-pip python-dev   # for Python 2.7
  sudo apt-get install python3-pip python3-dev # for Python 3.n

We recommend using pip version 8.1 or higher. If using a release before version 8.1, upgrade pip:

  sudo pip install -U pip

If not using Ubuntu and setuptools is installed, use easy_install to install pip:

  easy_install -U pip

2. Install TensorFlow on the system.

Choose one of the available TensorFlow packages for installation:

  • tensorflow — current release for CPU only
  • tensorflow-gpu — current release with GPU support
  • tf-nightly — nightly build for CPU only
  • tf-nightly-gpu — nightly build with GPU support

Then use pip to install the package for Python 2 or Python 3:

  sudo pip install -U tensorflow   # Python 2.7
  sudo pip3 install -U tensorflow  # Python 3.n

Use pip list to show the packages installed on the system. Validate the install and test the version:

  python -c "import tensorflow as tf; print(tf.__version__)"

Problems

If the above steps failed, try installing the TensorFlow binary using the remote URL of the pip package:

  sudo pip install --upgrade remote-pkg-URL   # Python 2.7
  sudo pip3 install --upgrade remote-pkg-URL  # Python 3.n

The remote-pkg-URL depends on the operating system, Python version, and GPU support. See here for the URL naming scheme and location.

See Common Installation Problems if you encounter problems.

Uninstall TensorFlow

To uninstall TensorFlow on your system, use one of the following commands:

  sudo pip uninstall tensorflow   # for Python 2.7
  sudo pip3 uninstall tensorflow  # for Python 3.n

Configure a Docker container

Docker completely isolates the TensorFlow installation from pre-existing packages on your machine. The Docker container contains TensorFlow and all its dependencies. Note that the Docker image can be quite large (hundreds of MBs). You might choose the Docker installation if you are incorporating TensorFlow into a larger application architecture that already uses Docker.

Take the following steps to install TensorFlow through Docker:

  1. Install Docker on your machine as described in the Docker documentation.
  2. Optionally, create a Linux group called docker to allow launching containers without sudo, as described in the Docker documentation. (If you skip this step, you must use sudo each time you invoke Docker.) A sketch of these commands follows this list.
  3. To install a version of TensorFlow that supports GPUs, you must first install nvidia-docker, which is available on GitHub.
  4. Launch a Docker container that contains one of the TensorFlow binary images.
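
As a sketch of the optional docker group setup in step 2 (the authoritative commands are in the Docker documentation; these are typical defaults):

  sudo groupadd docker             # create the docker group if it does not already exist
  sudo usermod -aG docker $USER    # add the current user to the docker group
  # Log out and back in for the new group membership to take effect.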

The remainder of this section explains how to launch a Docker container.

CPU-only

To launch a Docker container with CPU-only support (that is, without GPU support), enter a command of the following format:

$ docker run -it -p hostPort:containerPort TensorFlowCPUImage

where:

  • -p hostPort:containerPort is optional. If you plan to run TensorFlow programs from the shell, omit this option. If you plan to run TensorFlow programs as Jupyter notebooks, set both hostPort and containerPort to 8888. If you'd like to run TensorBoard inside the container, add a second -p flag, setting both hostPort and containerPort to 6006. (An example that maps both ports appears later in this section.)
  • TensorFlowCPUImage is required. It identifies the Docker container. Specify one of the following values:

    • tensorflow/tensorflow, which is the TensorFlow CPU binary image.
    • tensorflow/tensorflow:latest-devel, which is the latest TensorFlow CPU binary image plus source code.
    • tensorflow/tensorflow:version, which is the specified version (for example, 1.1.0rc1) of the TensorFlow CPU binary image.
    • tensorflow/tensorflow:version-devel, which is the specified version (for example, 1.1.0rc1) of the TensorFlow CPU binary image plus source code.

    TensorFlow images are available at dockerhub.

For example, the following command launches the latest TensorFlow CPU binary image in a Docker container from which you can run TensorFlow programs in a shell:

$ docker run -it tensorflow/tensorflow bash

The following command also launches the latest TensorFlow CPU binary image in a Docker container. However, in this Docker container, you can run TensorFlow programs in a Jupyter notebook:

$ docker run -it -p 8888:8888 tensorflow/tensorflow
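
To also run TensorBoard from inside the same container, map both ports mentioned in the option list above (8888 for Jupyter, 6006 for TensorBoard):

$ docker run -it -p 8888:8888 -p 6006:6006 tensorflow/tensorflow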

Docker will download the TensorFlow binary image the first time you launch it.
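
If you prefer to download an image before launching it, you can pull it explicitly; this step is optional:

$ docker pull tensorflow/tensorflow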

GPU support

To launch a Docker container with NVIDIA GPU support, enter a command of the following format (a local CUDA Toolkit installation is not required, but the NVIDIA GPU driver must be installed on the host):

$ nvidia-docker run -it -p hostPort:containerPort TensorFlowGPUImage

where:

  • -p hostPort:containerPort is optional. If you plan to run TensorFlow programs from the shell, omit this option. If you plan to run TensorFlow programs as Jupyter notebooks, set both hostPort and containerPort to 8888.
  • TensorFlowGPUImage specifies the Docker container. You must specify one of the following values:
    • tensorflow/tensorflow:latest-gpu, which is the latest TensorFlow GPU binary image.
    • tensorflow/tensorflow:latest-devel-gpu, which is the latest TensorFlow GPU binary image plus source code.
    • tensorflow/tensorflow:version-gpu, which is the specified version (for example, 0.12.1) of the TensorFlow GPU binary image.
    • tensorflow/tensorflow:version-devel-gpu, which is the specified version (for example, 0.12.1) of the TensorFlow GPU binary image plus source code.

We recommend installing one of the latest versions. For example, the following command launches the latest TensorFlow GPU binary image in a Docker container from which you can run TensorFlow programs in a shell:

$ nvidia-docker run -it tensorflow/tensorflow:latest-gpu bash

The following command also launches the latest TensorFlow GPU binary image in a Docker container. In this Docker container, you can run TensorFlow programs in a Jupyter notebook:

$ nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu

The following command installs an older TensorFlow version (0.12.1):

$ nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:0.12.1-gpu

Docker will download the TensorFlow binary image the first time you launch it. For more details see the TensorFlow docker readme.

Next Steps

You should now validate your installation.

Use pip in Anaconda

Anaconda provides the conda utility to create a virtual environment. However, within Anaconda, we recommend installing TensorFlow using the pip install command and not with the conda install command.

Take the following steps to install TensorFlow in an Anaconda environment:

  1. Follow the instructions on the Anaconda download site to download and install Anaconda.

  2. Create a conda environment named tensorflow to run a version of Python by invoking the following command:

    $ conda create -n tensorflow pip python=2.7 # or python=3.3, etc.

  3. Activate the conda environment by issuing the following command:

    $ source activate tensorflow
    (tensorflow)$  # Your prompt should change

  4. Issue a command of the following format to install TensorFlow inside your conda environment:

    (tensorflow)$ pip install --ignore-installed --upgrade tfBinaryURL

    where tfBinaryURL is the URL of the TensorFlow Python package. For example, the following command installs the CPU-only version of TensorFlow for Python 3.4:

     (tensorflow)$ pip install --ignore-installed --upgrade \
       https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp34-cp34m-linux_x86_64.whl
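
    You can then verify the package from inside the active conda environment, using the same validation command shown elsewhere in this guide:

     (tensorflow)$ python -c "import tensorflow as tf; print(tf.__version__)"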

Validate your installation

To validate your TensorFlow installation, do the following:

  1. Ensure that your environment is prepared to run TensorFlow programs.
  2. Run a short TensorFlow program.

Prepare your environment

If you installed with native pip, Virtualenv, or Anaconda, then do the following:

  1. Start a terminal.
  2. If you installed with Virtualenv or Anaconda, activate your environment.
  3. If you installed from TensorFlow source code, navigate to any directory except one containing TensorFlow source code.

If you installed through Docker, start a Docker container from which you can run bash. For example:

$ docker run -it tensorflow/tensorflow bash

Run a short TensorFlow program

Invoke python from your shell as follows:

$ python

Enter the following short program inside the python interactive shell:

# Python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

If the system outputs the following, then you are ready to begin writing TensorFlow programs:

Hello, TensorFlow!

If the system outputs an error message instead of a greeting, see Common installation problems.

To learn more, see the TensorFlow tutorials.

TensorFlow GPU support

To install TensorFlow with GPU support, configure the following NVIDIA® software on your system:

  • CUDA Toolkit 9.0. For details, see NVIDIA's documentation. Append the relevant CUDA pathnames to the LD_LIBRARY_PATH environment variable as described in the NVIDIA documentation.
  • cuDNN SDK v7. For details, see NVIDIA's documentation. Create the CUDA_HOME environment variable as described in the NVIDIA documentation.
  • A GPU card with CUDA Compute Capability 3.0 or higher for building TensorFlow from source. To use the TensorFlow binaries, version 3.5 or higher is required. See the NVIDIA documentation for a list of supported GPU cards.
  • GPU drivers that support your version of the CUDA Toolkit.
  • The libcupti-dev library is the NVIDIA CUDA Profile Tools Interface. This library provides advanced profiling support. To install this library, use the following command for CUDA Toolkit >= 8.0:
  sudo apt-get install cuda-command-line-tools

Add this path to the LD_LIBRARY_PATH environment variable:

  export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}/usr/local/cuda/extras/CUPTI/lib64
  • OPTIONAL: For optimized performance during inference, install NVIDIA TensorRT 3.0. To install the minimal amount of TensorRT runtime components required to use with the pre-built tensorflow-gpu package:
  wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb
  sudo dpkg -i nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb
  sudo apt-get update
  sudo apt-get install -y --allow-downgrades libnvinfer-dev libcudnn7-dev=7.0.5.15-1+cuda9.0 libcudnn7=7.0.5.15-1+cuda9.0

To build the TensorFlow-TensorRT integration module from source instead of using the pre-built binaries, see the module documentation. For detailed TensorRT installation instructions, see NVIDIA's TensorRT documentation.

To avoid cuDNN version conflicts during later system upgrades, hold the cuDNN version at 7.0.5:

  sudo apt-mark hold libcudnn7 libcudnn7-dev

To allow upgrades, remove the hold:

  sudo apt-mark unhold libcudnn7 libcudnn7-dev

If you have an earlier version of the preceding packages, upgrade to the specified versions. If upgrading is not possible, you can still run TensorFlow with GPU support by Installing TensorFlow from Sources.
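
As a concrete example, the CUDA-related environment variables mentioned earlier in this section are often set in ~/.bashrc. The paths below assume the common /usr/local/cuda install location and may differ on your system:

  # Assumed default install location; adjust to match your CUDA Toolkit setup.
  export CUDA_HOME=/usr/local/cuda
  export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}${CUDA_HOME}/lib64
  export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${CUDA_HOME}/extras/CUPTI/lib64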

Common installation problems

We are relying on Stack Overflow to document TensorFlow installation problems and their remedies. The following table contains links to Stack Overflow answers for some common installation problems. If you encounter an error message or other installation problem not listed in the following table, search for it on Stack Overflow. If Stack Overflow doesn't show the error message, ask a new question about it on Stack Overflow and specify the tensorflow tag.

Each entry below gives the GitHub or Stack Overflow link, followed by the error message it addresses:

  • 36159194
      ImportError: libcudart.so.Version: cannot open shared object file:
      No such file or directory

  • 41991101
      ImportError: libcudnn.Version: cannot open shared object file:
      No such file or directory

  • 36371137 and here
      libprotobuf ERROR google/protobuf/src/google/protobuf/io/coded_stream.cc:207] A
      protocol message was rejected because it was too big (more than 67108864 bytes).
      To increase the limit (or to disable these warnings), see
      CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.

  • 35252888
      Error importing tensorflow. Unless you are using bazel, you should
      not try to import tensorflow from its source directory; please exit the
      tensorflow source tree, and relaunch your python interpreter from
      there.

  • 33623453
      IOError: [Errno 2] No such file or directory:
      '/tmp/pip-o6Tpui-build/setup.py'

  • 42006320
      ImportError: Traceback (most recent call last):
      File ".../tensorflow/core/framework/graph_pb2.py", line 6, in
      from google.protobuf import descriptor as _descriptor
      ImportError: cannot import name 'descriptor'

  • 35190574
      SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify
      failed

  • 42009190
      Installing collected packages: setuptools, protobuf, wheel, numpy, tensorflow
      Found existing installation: setuptools 1.1.6
      Uninstalling setuptools-1.1.6:
      Exception:
      ...
      [Errno 1] Operation not permitted:
      '/tmp/pip-a1DXRT-uninstall/.../lib/python/_markerlib'

  • 36933958
      ...
      Installing collected packages: setuptools, protobuf, wheel, numpy, tensorflow
      Found existing installation: setuptools 1.1.6
      Uninstalling setuptools-1.1.6:
      Exception:
      ...
      [Errno 1] Operation not permitted:
      '/tmp/pip-a1DXRT-uninstall/System/Library/Frameworks/Python.framework/
       Versions/2.7/Extras/lib/python/_markerlib'

The URL of the TensorFlow Python package

A few installation mechanisms require the URL of the TensorFlow Python package. The value you specify depends on three factors:

  • operating system
  • Python version
  • CPU only vs. GPU support

This section documents the relevant values for Linux installations.
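
For example, once you have identified the appropriate URL from the lists below, pass it directly to pip; the Python 3.6 CPU-only URL is used here only as an illustration:

  pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp36-cp36m-linux_x86_64.whl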

Python 2.7

CPU only:

https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp27-none-linux_x86_64.whl

GPU support:

https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp27-none-linux_x86_64.whl

Note that GPU support requires the NVIDIA hardware and software described in NVIDIA requirements to run TensorFlow with GPU support.

Python 3.4

CPU only:

https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp34-cp34m-linux_x86_64.whl

GPU support:

https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp34-cp34m-linux_x86_64.whl

Note that GPU support requires the NVIDIA hardware and software described in NVIDIA requirements to run TensorFlow with GPU support.

Python 3.5

CPU only:

https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp35-cp35m-linux_x86_64.whl

GPU support:

https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp35-cp35m-linux_x86_64.whl

Note that GPU support requires the NVIDIA hardware and software described in NVIDIA requirements to run TensorFlow with GPU support.

Python 3.6

CPU only:

https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0-cp36-cp36m-linux_x86_64.whl

GPU support:

https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0-cp36-cp36m-linux_x86_64.whl

Note that GPU support requires the NVIDIA hardware and software described in NVIDIA requirements to run TensorFlow with GPU support.