The sparse update ops modify a subset of the entries in a dense Variable, either overwriting the entries or adding / subtracting a delta. These are useful for training embedding models and similar lookup-based networks, since only a small subset of embedding vectors change in any given step.

Since a sparse update of a large tensor may be generated automatically during gradient computation (as in the gradient of tf.gather), an IndexedSlices class is provided that encapsulates a set of sparse indices and values. IndexedSlices objects are detected and handled automatically by the optimizers in most cases.

### tf.scatter_update(ref, indices, updates, use_locking=None, name=None)

Applies sparse updates to a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] = updates[...]

# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
```


This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

If values in ref are to be updated more than once, because there are duplicate entries in indices, the order in which the updates happen for each value is undefined.

Requires updates.shape = indices.shape + ref.shape[1:].
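The overwrite semantics and the shape requirement above can be sketched in plain NumPy (a minimal illustration of the behavior, not the TensorFlow API itself; the names `ref`, `indices`, and `updates` mirror the argument names):

```python
import numpy as np

# A stand-in for the variable: 8 rows of width 2.
ref = np.zeros((8, 2))

# Rows 4, 3, 1, and 7 are overwritten. Note the shape requirement:
# updates.shape == indices.shape + ref.shape[1:] == (4,) + (2,) == (4, 2).
indices = np.array([4, 3, 1, 7])
updates = np.arange(8.0).reshape(4, 2)

# NumPy's fancy-index assignment mirrors the scatter_update overwrite.
ref[indices] = updates
```

After the assignment, row 4 holds `[0., 1.]`, row 7 holds `[6., 7.]`, and untouched rows remain zero.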

##### Args:
• ref: A mutable Tensor. Should be from a Variable node.
• indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
• updates: A Tensor. Must have the same type as ref. A tensor of updated values to store in ref.
• use_locking: An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
• name: A name for the operation (optional).
##### Returns:

Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

### tf.scatter_add(ref, indices, updates, use_locking=None, name=None)

Adds sparse updates to a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] += updates[...]

# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
```


This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.

Requires updates.shape = indices.shape + ref.shape[1:].
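The duplicate-handling guarantee can be illustrated with NumPy's unbuffered `np.add.at` (a sketch of the semantics only, not the TensorFlow implementation):

```python
import numpy as np

ref = np.ones(6)
indices = np.array([0, 2, 2, 5])          # index 2 appears twice
updates = np.array([10., 20., 30., 40.])

# np.add.at is an unbuffered scatter-add: contributions at duplicate
# indices accumulate, matching the tf.scatter_add guarantee. A plain
# `ref[indices] += updates` would NOT accumulate duplicates.
np.add.at(ref, indices, updates)
# ref is now [11., 1., 51., 1., 1., 41.]
```

Here `ref[2]` receives both the 20 and the 30, ending at 51.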

##### Args:
• ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
• indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
• updates: A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
• use_locking: An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
• name: A name for the operation (optional).
##### Returns:

Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

### tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)

Subtracts sparse updates from a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] -= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
```


This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their (negated) contributions add.

Requires updates.shape = indices.shape + ref.shape[1:].

##### Args:
• ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
• indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
• updates: A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref.
• use_locking: An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
• name: A name for the operation (optional).
##### Returns:

Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

### tf.scatter_mul(ref, indices, updates, use_locking=None, name=None)

Multiplies sparse updates into a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] *= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
```


This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.

Requires updates.shape = indices.shape + ref.shape[1:].
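As with scatter_add, the multiplicative duplicate handling can be sketched with NumPy's unbuffered `np.multiply.at` (an illustration of the semantics, not the TensorFlow implementation):

```python
import numpy as np

ref = np.array([2., 2., 2., 2.])
indices = np.array([1, 3, 3])      # index 3 appears twice
updates = np.array([10., 3., 4.])

# Unbuffered scatter-multiply: contributions at duplicate indices
# multiply together, so ref[3] becomes 2 * 3 * 4 = 24.
np.multiply.at(ref, indices, updates)
# ref is now [2., 20., 2., 24.]
```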

##### Args:
• ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
• indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
• updates: A Tensor. Must have the same type as ref. A tensor of updated values to multiply into ref.
• use_locking: An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
• name: A name for the operation (optional).
##### Returns:

Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

### tf.scatter_div(ref, indices, updates, use_locking=None, name=None)

Divides a variable reference by sparse updates.

This operation computes

```
# Scalar indices
ref[indices, ...] /= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
```


This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions divide.

Requires updates.shape = indices.shape + ref.shape[1:].

##### Args:
• ref: A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
• indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
• updates: A Tensor. Must have the same type as ref. A tensor of values that ref is divided by.
• use_locking: An optional bool. Defaults to False. If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
• name: A name for the operation (optional).
##### Returns:

Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.

### tf.sparse_mask(a, mask_indices, name=None)

Masks elements of IndexedSlices.

Given an IndexedSlices instance a, returns another IndexedSlices that contains a subset of the slices of a. Only the slices at indices not specified in mask_indices are returned.

This is useful when you need to extract a subset of slices in an IndexedSlices object.

For example:

```
# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
# with shape [1000, 10]
a.indices => [12, 26, 37, 45]
tf.shape(a.values) => [4, 10]

# `b` will be the subset of `a` slices at its second and third indices, so
# we want to mask its first and last indices (which are at absolute
# indices 12, 45)
b = tf.sparse_mask(a, [12, 45])
b.indices => [26, 37]
tf.shape(b.values) => [2, 10]
```
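The masking itself amounts to filtering the index/value pair together, which can be sketched in NumPy (the `a_indices`/`a_values` names are stand-ins for the `a.indices`/`a.values` fields above, not TF API):

```python
import numpy as np

# Stand-ins for a.indices and a.values from the example above.
a_indices = np.array([12, 26, 37, 45])
a_values = np.arange(40.0).reshape(4, 10)
mask_indices = [12, 45]

# Keep only the slices whose absolute index is NOT being masked,
# applying the same boolean filter to indices and values in lockstep.
keep = ~np.isin(a_indices, mask_indices)
b_indices = a_indices[keep]
b_values = a_values[keep]
# b_indices is [26, 37]; b_values has shape (2, 10)
```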


##### Args:
• a: An IndexedSlices instance.
• mask_indices: Indices of elements to mask.
• name: A name for the operation (optional).
##### Returns:

The masked IndexedSlices instance.

### class tf.IndexedSlices

A sparse representation of a set of tensor slices at given indices.

This class is a simple wrapper for a pair of Tensor objects:

• values: A Tensor of any dtype with shape [D0, D1, ..., Dn].
• indices: A 1-D integer Tensor with shape [D0].

An IndexedSlices is typically used to represent a subset of a larger tensor dense of shape [LARGE0, D1, ..., DN], where LARGE0 >> D0. The values in indices are the positions in the first dimension of the larger tensor from which the slices have been extracted.

The dense tensor dense represented by an IndexedSlices slices has

```
dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]
```


The IndexedSlices class is used principally in the definition of gradients for operations that have sparse gradients (e.g. tf.gather).

Contrast this representation with SparseTensor, which uses multi-dimensional indices and scalar values.
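Materializing the dense tensor from the relation above is a single scatter, sketched here in NumPy (illustrative names; `dense_shape` plays the role of the optional dense_shape field):

```python
import numpy as np

dense_shape = (6, 3)                 # the large tensor: LARGE0 = 6
indices = np.array([1, 4])           # D0 = 2 slices were extracted
values = np.array([[1., 1., 1.],
                   [2., 2., 2.]])

# dense[indices[i], :] = values[i, :], all other rows stay zero.
dense = np.zeros(dense_shape)
dense[indices] = values
```

Rows 1 and 4 carry the slice values; every other row of `dense` is zero, which is exactly why this representation pays off when D0 is much smaller than LARGE0.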

#### tf.IndexedSlices.__init__(values, indices, dense_shape=None)

Creates an IndexedSlices.

#### tf.IndexedSlices.values

A Tensor containing the values of the slices.

#### tf.IndexedSlices.indices

A 1-D Tensor containing the indices of the slices.

#### tf.IndexedSlices.dense_shape

A 1-D Tensor containing the shape of the corresponding dense tensor.

#### tf.IndexedSlices.name

The name of this IndexedSlices.

#### tf.IndexedSlices.dtype

The DType of elements in this tensor.

#### tf.IndexedSlices.device

The name of the device on which values will be produced, or None.

#### tf.IndexedSlices.op

The Operation that produces values as an output.

#### tf.IndexedSlices.graph

The Graph that contains the values, indices, and shape tensors.