Differentiable layers for graphics.

The snippet below downloads a mesh, rotates its vertices with the differentiable axis-angle rotation op, and visualizes the mesh before and after the rotation.

import numpy as np
import tensorflow as tf
import trimesh

import tensorflow_graphics.geometry.transformation as tfg_transformation
from tensorflow_graphics.notebooks import threejs_visualization

# Download the mesh.
!wget https://storage.googleapis.com/tensorflow-graphics/notebooks/index/cow.obj
# Load the mesh.
mesh = trimesh.load("cow.obj")
mesh = {"vertices": mesh.vertices, "faces": mesh.faces}
# Visualize the original mesh.
threejs_visualization.triangular_mesh_renderer(mesh, width=400, height=400)
# Set the axis and angle parameters.
axis = np.array((0., 1., 0.))  # y axis.
angle = np.array((np.pi / 4.,))  # 45 degree angle.
# Rotate the mesh.
mesh["vertices"] = tfg_transformation.axis_angle.rotate(mesh["vertices"], axis,
                                                        angle).numpy()
# Visualize the rotated mesh.
threejs_visualization.triangular_mesh_renderer(mesh, width=400, height=400)
This example can also be run end to end in a notebook.
TensorFlow Graphics aims at making useful graphics functions widely accessible to the community by providing a set of differentiable graphics layers (e.g. cameras, reflectance models, mesh convolutions) and 3D viewer functionalities (e.g. 3D TensorBoard) that can be used in your machine learning models of choice.
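
Because these layers are differentiable, gradients flow through them like any other TensorFlow op. The short sketch below (point, target, and angle values are arbitrary, chosen only for illustration) differentiates a simple distance loss with respect to the rotation angle, using the same axis_angle.rotate op as above:

import numpy as np
import tensorflow as tf
import tensorflow_graphics.geometry.transformation as tfg_transformation

# A point on the x axis, rotated about the y axis by a trainable angle.
point = tf.constant([1.0, 0.0, 0.0])
axis = tf.constant([0.0, 1.0, 0.0])
angle = tf.Variable([np.pi / 4.0])
target = tf.constant([0.0, 0.0, -1.0])  # Reached when the angle is pi / 2.

# The rotation op is differentiable, so a loss defined on the rotated point
# yields a gradient with respect to the angle.
with tf.GradientTape() as tape:
  rotated = tfg_transformation.axis_angle.rotate(point, axis, angle)
  loss = tf.reduce_sum(tf.square(rotated - target))
print(tape.gradient(loss, angle))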

The last few years have seen a rise in novel differentiable graphics layers that can be inserted into neural network architectures. From spatial transformers to differentiable graphics renderers, these new layers leverage the knowledge acquired over years of computer vision and graphics research to build novel and more efficient network architectures. Explicitly modeling geometric priors and constraints in machine learning models opens the door to architectures that can be trained robustly, efficiently, and, more importantly, in a self-supervised fashion, as the toy sketch below illustrates.
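As a hand-rolled illustration of that idea (not an official example; the point set, axis, and training settings are made up), the loop below recovers an unknown rotation angle purely from the geometric consistency between rotated and observed points, with no labels involved, by descending through the differentiable rotation layer:

import numpy as np
import tensorflow as tf
import tensorflow_graphics.geometry.transformation as tfg_transformation

# Toy "observations": random vertices transformed by an unknown rotation.
vertices = tf.constant(np.random.rand(100, 3), dtype=tf.float32)
axis = tf.constant([0.0, 1.0, 0.0])
observed = tfg_transformation.axis_angle.rotate(
    vertices, axis, tf.constant([np.pi / 3.0]))

# Recover the angle by gradient descent, supervised only by the discrepancy
# between the predicted and observed point sets.
angle = tf.Variable([0.0])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.5)
for _ in range(200):
  with tf.GradientTape() as tape:
    predicted = tfg_transformation.axis_angle.rotate(vertices, axis, angle)
    loss = tf.reduce_mean(tf.square(predicted - observed))
  optimizer.apply_gradients([(tape.gradient(loss, angle), angle)])
print(angle.numpy())  # Expected to be close to pi / 3.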

To get started, see a more detailed overview, the installation guide, and the API.
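
For reference, the stable release is published on PyPI as tensorflow-graphics; assuming a standard CPU-only setup, installation typically amounts to the single line below (the installation guide covers GPU and nightly variants).

# Install the stable release from PyPI.
!pip install --upgrade tensorflow-graphics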