The sparse update ops modify a subset of the entries in a dense `Variable`,
either overwriting the entries or adding / subtracting a delta. These are
useful for training embedding models and similar lookup-based networks, since
only a small subset of embedding vectors change in any given step.

Since a sparse update of a large tensor may be generated automatically during
gradient computation (as in the gradient of `tf.gather`), an `IndexedSlices`
class is provided that encapsulates a set of sparse indices and values.
`IndexedSlices` objects are detected and handled automatically by the
optimizers in most cases.

`tf.scatter_update(ref, indices, updates, use_locking=None, name=None)`

Applies sparse updates to a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] = updates[...]
# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
```

This operation outputs `ref` after the update is done.
This makes it easier to chain operations that need to use the reset value.

If values in `ref` are to be updated more than once, because there are
duplicate entries in `indices`, the order in which the updates happen
for each value is undefined.

Requires `updates.shape = indices.shape + ref.shape[1:]`.
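
For intuition, the vector-index form above can be mimicked with plain NumPy assignment. This is an illustration of the semantics only, not the TensorFlow kernel:

```python
import numpy as np

# Mimic tf.scatter_update with vector indices:
#   ref[indices[i], ...] = updates[i, ...]
ref = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([4, 3, 1, 7])
updates = np.array([9, 10, 11, 12])

# NumPy fancy-index assignment; with duplicate indices, which write
# wins is unspecified -- matching the caveat above.
ref[indices] = updates
print(ref.tolist())  # [1, 11, 3, 10, 9, 6, 7, 12]
```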

##### Args:

*  `ref`: A mutable `Tensor`. Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into the first dimension of `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   updated values to store in `ref`.
*  `use_locking`: An optional `bool`. Defaults to `True`. If True, the
   assignment will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.

`tf.scatter_add(ref, indices, updates, use_locking=None, name=None)`

Adds sparse updates to a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] += updates[...]
# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
```

This operation outputs `ref` after the update is done.
This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple `indices` reference
the same location, their contributions add.

Requires `updates.shape = indices.shape + ref.shape[1:]`.
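
The duplicate-accumulation behavior can be reproduced with NumPy's unbuffered `np.add.at` (an illustration only; note that plain `ref[indices] += updates` would *not* accumulate duplicates):

```python
import numpy as np

ref = np.array([1.0, 2.0, 3.0])
# Index 0 appears twice: both contributions are added, as described above.
np.add.at(ref, [0, 0, 2], [10.0, 10.0, 5.0])
print(ref.tolist())  # [21.0, 2.0, 8.0]
```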

##### Args:

*  `ref`: A mutable `Tensor`. Must be one of the following types: `float32`,
   `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,
   `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
   Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into the first dimension of `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   updated values to add to `ref`.
*  `use_locking`: An optional `bool`. Defaults to `False`. If True, the
   addition will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.

`tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)`

Subtracts sparse updates from a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] -= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
```

This operation outputs `ref` after the update is done.
This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple `indices` reference
the same location, their (negated) contributions add.

Requires `updates.shape = indices.shape + ref.shape[1:]`.
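
As with `tf.scatter_add`, the behavior on duplicate indices can be illustrated with NumPy's `np.subtract.at`:

```python
import numpy as np

ref = np.array([10.0, 20.0, 30.0])
# Index 1 appears twice: both deltas are subtracted.
np.subtract.at(ref, [1, 1, 0], [2.0, 3.0, 4.0])
print(ref.tolist())  # [6.0, 15.0, 30.0]
```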

##### Args:

*  `ref`: A mutable `Tensor`. Must be one of the following types: `float32`,
   `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,
   `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
   Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into the first dimension of `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   updated values to subtract from `ref`.
*  `use_locking`: An optional `bool`. Defaults to `False`. If True, the
   subtraction will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.

`tf.scatter_mul(ref, indices, updates, use_locking=None, name=None)`

Multiplies sparse updates into a variable reference.

This operation computes

```
# Scalar indices
ref[indices, ...] *= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
```

This operation outputs `ref` after the update is done.
This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple `indices` reference
the same location, their contributions multiply.

Requires `updates.shape = indices.shape + ref.shape[1:]`.
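
The compounding of duplicate contributions can likewise be sketched with NumPy's `np.multiply.at`:

```python
import numpy as np

ref = np.array([2.0, 3.0, 4.0])
# Index 0 appears twice: both factors are applied.
np.multiply.at(ref, [0, 0, 2], [2.0, 5.0, 10.0])
print(ref.tolist())  # [20.0, 3.0, 40.0]
```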

##### Args:

*  `ref`: A mutable `Tensor`. Must be one of the following types: `float32`,
   `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,
   `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
   Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into the first dimension of `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   updated values to multiply into `ref`.
*  `use_locking`: An optional `bool`. Defaults to `False`. If True, the
   operation will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.

`tf.scatter_div(ref, indices, updates, use_locking=None, name=None)`

Divides a variable reference by sparse updates.

This operation computes

```
# Scalar indices
ref[indices, ...] /= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
```

This operation outputs `ref` after the update is done.
This makes it easier to chain operations that need to use the updated value.

Duplicate entries are handled correctly: if multiple `indices` reference
the same location, their contributions divide.

Requires `updates.shape = indices.shape + ref.shape[1:]`.

##### Args:

*  `ref`: A mutable `Tensor`. Must be one of the following types: `float32`,
   `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,
   `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
   Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into the first dimension of `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   values that `ref` is divided by.
*  `use_locking`: An optional `bool`. Defaults to `False`. If True, the
   operation will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

Same as `ref`. Returned as a convenience for operations that want
to use the updated values after the update is done.

`tf.scatter_nd_update(ref, indices, updates, use_locking=None, name=None)`

Applies sparse `updates` to individual values or slices within a given
variable according to `indices`.

`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`.
It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
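
The shape rule can be checked mechanically. The helper below is a hypothetical illustration of the constraint (`expected_updates_shape` is not a TensorFlow function):

```python
def expected_updates_shape(ref_shape, indices_shape):
    # K is the innermost dimension of indices; P is the rank of ref.
    K = indices_shape[-1]
    P = len(ref_shape)
    assert 0 < K <= P, "require 0 < K <= P"
    # Rank of updates is (Q - 1) + (P - K): the outer dimensions of
    # indices, followed by the trailing dimensions of ref beyond the
    # K indexed dimensions.
    return list(indices_shape[:-1]) + list(ref_shape[K:])

# ref has rank P = 3; indices has rank Q = 2 with K = 2, so updates
# addresses slices along the last dimension of ref.
print(expected_updates_shape([5, 6, 7], [4, 2]))  # [4, 7]
```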

For example, say we want to update 4 scattered elements of a rank-1 tensor with 8 elements. In Python, that update would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
    sess.run(ref.initializer)  # the variable must be initialized first
    print(sess.run(update))
```

The resulting update to `ref` would look like this:

```
[1, 11, 3, 10, 9, 6, 7, 12]
```
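
The stated result can be checked with an equivalent NumPy computation (illustration only):

```python
import numpy as np

ref = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

# K = 1 here, so each row of indices addresses a single element.
ref[indices[:, 0]] = updates
print(ref.tolist())  # [1, 11, 3, 10, 9, 6, 7, 12]
```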

See `tf.scatter_nd` for more details about how to make updates to slices.

##### Args:

*  `ref`: A mutable `Tensor`. Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   updated values to store in `ref`.
*  `use_locking`: An optional `bool`. Defaults to `True`. If True, the
   assignment will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

A mutable `Tensor`. Has the same type as `ref`. Returned as a convenience
for operations that want to use the updated values after the update is done.

`tf.scatter_nd_add(ref, indices, updates, use_locking=None, name=None)`

Applies sparse addition between `updates` and individual values or slices
within a given variable according to `indices`.

`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`.
It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```

For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
    sess.run(ref.initializer)  # the variable must be initialized first
    print(sess.run(add))
```

The resulting update to `ref` would look like this:

```
[1, 13, 3, 14, 14, 6, 7, 20]
```
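
The result can be verified with NumPy's unbuffered `np.add.at`, which accumulates correctly even when indices repeat (illustration only):

```python
import numpy as np

ref = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

np.add.at(ref, indices[:, 0], updates)
print(ref.tolist())  # [1, 13, 3, 14, 14, 6, 7, 20]
```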

See `tf.scatter_nd` for more details about how to make updates to slices.

##### Args:

*  `ref`: A mutable `Tensor`. Must be one of the following types: `float32`,
   `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,
   `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
   Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   updated values to add to `ref`.
*  `use_locking`: An optional `bool`. Defaults to `False`. If True, the
   addition will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

A mutable `Tensor`. Has the same type as `ref`. Returned as a convenience
for operations that want to use the updated values after the update is done.

`tf.scatter_nd_sub(ref, indices, updates, use_locking=None, name=None)`

Applies sparse subtraction between `updates` and individual values or slices
within a given variable according to `indices`.

`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`.
It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```

For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:

```
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
    sess.run(ref.initializer)  # the variable must be initialized first
    print(sess.run(sub))
```

The resulting update to `ref` would look like this:

```
[1, -9, 3, -6, -4, 6, 7, -4]
```
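
Again, the result can be verified with a NumPy equivalent, here `np.subtract.at` (illustration only):

```python
import numpy as np

ref = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

np.subtract.at(ref, indices[:, 0], updates)
print(ref.tolist())  # [1, -9, 3, -6, -4, 6, 7, -4]
```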

See `tf.scatter_nd` for more details about how to make updates to slices.

##### Args:

*  `ref`: A mutable `Tensor`. Must be one of the following types: `float32`,
   `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`,
   `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
   Should be from a `Variable` node.
*  `indices`: A `Tensor`. Must be one of the following types: `int32`,
   `int64`. A tensor of indices into `ref`.
*  `updates`: A `Tensor`. Must have the same type as `ref`. A tensor of
   updated values to subtract from `ref`.
*  `use_locking`: An optional `bool`. Defaults to `False`. If True, the
   subtraction will be protected by a lock; otherwise the behavior is
   undefined, but may exhibit less contention.
*  `name`: A name for the operation (optional).

##### Returns:

A mutable `Tensor`. Has the same type as `ref`. Returned as a convenience
for operations that want to use the updated values after the update is done.

`tf.sparse_mask(a, mask_indices, name=None)`

Masks elements of `IndexedSlices`.

Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that
contains a subset of the slices of `a`. Only the slices at indices not
specified in `mask_indices` are returned.

This is useful when you need to extract a subset of slices in an
`IndexedSlices` object.

For example:

```
# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
# with shape [1000, 10]
a.indices => [12, 26, 37, 45]
tf.shape(a.values) => [4, 10]
# `b` will be the subset of `a` slices at its second and third indices, so
# we want to mask its first and last indices (which are at absolute
# indices 12, 45)
b = tf.sparse_mask(a, [12, 45])
b.indices => [26, 37]
tf.shape(b.values) => [2, 10]
```
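
The masking logic can be sketched in NumPy; `sparse_mask_sketch` below is a hypothetical helper, not the TensorFlow implementation:

```python
import numpy as np

def sparse_mask_sketch(values, indices, mask_indices):
    # Keep only the slices whose index is NOT listed in mask_indices.
    keep = ~np.isin(indices, mask_indices)
    return values[keep], indices[keep]

values = np.arange(40).reshape(4, 10)   # stands in for a.values
indices = np.array([12, 26, 37, 45])    # stands in for a.indices
masked_values, masked_indices = sparse_mask_sketch(values, indices, [12, 45])
print(masked_indices.tolist())  # [26, 37]
print(masked_values.shape)      # (2, 10)
```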

##### Args:

*  `a`: An `IndexedSlices` instance.
*  `mask_indices`: Indices of elements to mask.
*  `name`: A name for the operation (optional).

##### Returns:

The masked `IndexedSlices` instance.

`class tf.IndexedSlices`

A sparse representation of a set of tensor slices at given indices.

This class is a simple wrapper for a pair of `Tensor` objects:

*  `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`.
*  `indices`: A 1-D integer `Tensor` with shape `[D0]`.

An `IndexedSlices` is typically used to represent a subset of a larger
tensor `dense` of shape `[LARGE0, D1, ..., DN]` where `LARGE0 >> D0`.
The values in `indices` are the indices in the first dimension of
the slices that have been extracted from the larger tensor.

The dense tensor `dense` represented by an `IndexedSlices` `slices` has

```
dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]
```
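
Under that definition, the dense tensor can be reconstructed from the pair with a scatter into zeros. A NumPy sketch (the zero-fill for unlisted rows and the `densify` helper are assumptions of this illustration; `dense_shape` is taken as known):

```python
import numpy as np

def densify(indices, values, dense_shape):
    # dense[indices[i], ...] = values[i, ...]; unlisted rows stay zero.
    dense = np.zeros(dense_shape, dtype=values.dtype)
    dense[indices] = values
    return dense

indices = np.array([1, 3])
values = np.array([[10.0, 20.0], [30.0, 40.0]])
dense = densify(indices, values, (4, 2))
print(dense.tolist())  # [[0.0, 0.0], [10.0, 20.0], [0.0, 0.0], [30.0, 40.0]]
```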

The `IndexedSlices` class is used principally in the definition of
gradients for operations that have sparse gradients
(e.g. `tf.gather`).

Contrast this representation with `SparseTensor`,
which uses multi-dimensional indices and scalar values.

`tf.IndexedSlices.__init__(values, indices, dense_shape=None)`


Creates an `IndexedSlices`.

`tf.IndexedSlices.values`

A `Tensor` containing the values of the slices.

`tf.IndexedSlices.indices`

A 1-D `Tensor` containing the indices of the slices.

`tf.IndexedSlices.dense_shape`

A 1-D `Tensor` containing the shape of the corresponding dense tensor.

`tf.IndexedSlices.name`

The name of this `IndexedSlices`.

`tf.IndexedSlices.dtype`

The `DType` of elements in this tensor.

`tf.IndexedSlices.device`

The name of the device on which `values` will be produced, or `None`.

`tf.IndexedSlices.op`

The `Operation` that produces `values` as an output.

#### Other Methods

`tf.IndexedSlices.__neg__()`


`tf.IndexedSlices.__str__()`


`tf.IndexedSlices.graph`

The `Graph`

that contains the values, indices, and shape tensors.

### Read-only Lookup Tables

`tf.initialize_all_tables(name='init_all_tables')`

Returns an Op that initializes all tables of the default graph.

##### Args:

*  `name`: Optional name for the initialization op.

##### Returns:

An Op that initializes all tables. Note that if there are no tables the
returned Op is a NoOp.