Other Functions and Classes

tf.contrib.losses.absolute_difference(predictions, targets, weight=1.0, scope=None)

Adds an Absolute Difference loss to the training procedure.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.

Args:
  • predictions: The predicted outputs.
  • targets: The ground truth output tensor, same dimensions as 'predictions'.
  • weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of predictions doesn't match that of targets or if the shape of weight is invalid.
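
Example (a minimal graph-mode sketch; it assumes a TensorFlow release in which tf.contrib.losses is available, i.e. the 0.x/1.x contrib API):

    import tensorflow as tf

    predictions = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    targets = tf.constant([[1.5, 2.0], [2.0, 5.0]])

    # Scalar weight: the whole loss is scaled by 2.0.
    loss_scaled = tf.contrib.losses.absolute_difference(
        predictions, targets, weight=2.0)

    # [batch_size] weight: per-sample rescaling; the second sample is dropped.
    loss_per_sample = tf.contrib.losses.absolute_difference(
        predictions, targets, weight=tf.constant([1.0, 0.0]))

    with tf.Session() as sess:
        print(sess.run([loss_scaled, loss_per_sample]))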

tf.contrib.losses.add_loss(*args, **kwargs)

Adds an externally defined loss to the collection of losses.

Args:
  • loss: A loss Tensor.
  • loss_collection: Optional collection to add the loss to.
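
For example, a custom scalar loss can be registered so that it is later picked up by get_losses and get_total_loss; a hedged sketch under the same TensorFlow assumptions as above:

    import tensorflow as tf

    # Any scalar loss Tensor can be registered in the default losses collection.
    my_loss = tf.reduce_mean(tf.abs(tf.constant([0.3, -0.7])))
    tf.contrib.losses.add_loss(my_loss)

    # The registered loss now appears alongside the built-in ones.
    print(tf.contrib.losses.get_losses())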

tf.contrib.losses.compute_weighted_loss(losses, weight=1.0)

Computes the weighted loss.

Args:
  • losses: A tensor of size [batch_size, d1, ... dN].
  • weight: A tensor of size [1] or [batch_size, d1, ... dK] where K < N.
Returns:

A scalar Tensor that returns the weighted loss.

Raises:
  • ValueError: If weight is None, if the shape of weight is not compatible with the shape of losses, or if the number of dimensions (rank) of either losses or weight is unknown.
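
A small sketch of the broadcasting rule (same assumptions as the earlier example):

    import tensorflow as tf

    # Per-element losses of shape [batch_size=2, d1=3].
    losses = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

    # A [batch_size] weight rescales each sample; the second sample is dropped.
    weighted = tf.contrib.losses.compute_weighted_loss(
        losses, weight=tf.constant([1.0, 0.0]))

    with tf.Session() as sess:
        print(sess.run(weighted))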

tf.contrib.losses.cosine_distance(predictions, targets, dim, weight=1.0, scope=None)

Adds a cosine-distance loss to the training procedure.

Note that the function assumes that the predictions and targets are already unit-normalized.

Args:
  • predictions: An arbitrary matrix.
  • targets: A Tensor whose shape matches 'predictions'
  • dim: The dimension along which the cosine distance is computed.
  • weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If predictions.shape doesn't match targets.shape or if the shape of weight is invalid.
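
Because the inputs must already be unit-normalized, the usual pattern is to run them through tf.nn.l2_normalize first; a minimal sketch:

    import tensorflow as tf

    raw_predictions = tf.constant([[3.0, 4.0], [1.0, 0.0]])
    raw_targets = tf.constant([[0.0, 1.0], [1.0, 0.0]])

    # Unit-normalize along the feature dimension before computing the loss.
    predictions = tf.nn.l2_normalize(raw_predictions, dim=1)
    targets = tf.nn.l2_normalize(raw_targets, dim=1)

    loss = tf.contrib.losses.cosine_distance(predictions, targets, dim=1)

    with tf.Session() as sess:
        print(sess.run(loss))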

tf.contrib.losses.get_losses(scope=None, loss_collection='losses')

Gets the list of losses from the loss_collection.

Args:
  • scope: an optional scope for filtering the losses to return.
  • loss_collection: Optional losses collection.
Returns:

A list of loss Tensors.


tf.contrib.losses.get_regularization_losses(scope=None)

Gets the regularization losses.

Args:
  • scope: an optional scope for filtering the losses to return.
Returns:

A list of regularization loss Tensors.


tf.contrib.losses.get_total_loss(add_regularization_losses=True, name='total_loss')

Returns a tensor whose value represents the total loss.

Notice that the function adds the given losses to the regularization losses.

Args:
  • add_regularization_losses: A boolean indicating whether or not to use the regularization losses in the sum.
  • name: The name of the returned tensor.
Returns:

A Tensor whose value represents the total loss.

Raises:
  • ValueError: if losses is not iterable.
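
A typical end-of-graph usage; this sketch assumes the contrib loss functions register themselves in the default losses collection, and registers one loss first so the collection is non-empty:

    import tensorflow as tf

    # Register a loss; contrib loss functions add themselves to the collection.
    predictions = tf.constant([[0.2, 0.8]])
    targets = tf.constant([[0.0, 1.0]])
    tf.contrib.losses.log_loss(predictions, targets)

    # Sum of all registered losses plus any regularization losses.
    total_loss = tf.contrib.losses.get_total_loss(add_regularization_losses=True)

    with tf.Session() as sess:
        print(sess.run(total_loss))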

tf.contrib.losses.hinge_loss(logits, target, scope=None)

Method that returns the loss tensor for hinge loss.

Args:
  • logits: The logits, a float tensor.
  • target: The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A Tensor of the same shape as logits and target representing the loss values across the batch.

Raises:
  • ValueError: If the shapes of logits and target don't match.
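
Note that, unlike most functions in this module, hinge_loss returns per-element values rather than a scalar, so callers typically reduce it themselves; a sketch (same assumptions as above):

    import tensorflow as tf

    logits = tf.constant([2.0, -0.5, 0.3])
    # Ground truth values are expected to be 0.0 or 1.0.
    target = tf.constant([1.0, 0.0, 1.0])

    per_example = tf.contrib.losses.hinge_loss(logits, target)
    # Reduce explicitly if a scalar loss is needed.
    scalar_loss = tf.reduce_mean(per_example)

    with tf.Session() as sess:
        print(sess.run([per_example, scalar_loss]))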

tf.contrib.losses.log_loss(predictions, targets, weight=1.0, epsilon=1e-07, scope=None)

Adds a Log Loss term to the training procedure.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.

Args:
  • predictions: The predicted outputs.
  • targets: The ground truth output tensor, same dimensions as 'predictions'.
  • weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  • epsilon: A small increment to add to avoid taking a log of zero.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of predictions doesn't match that of targets or if the shape of weight is invalid.
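
A minimal sketch with predicted probabilities (same assumptions as the earlier examples):

    import tensorflow as tf

    # Predicted probabilities and binary targets of matching shape.
    predictions = tf.constant([[0.9, 0.1], [0.4, 0.6]])
    targets = tf.constant([[1.0, 0.0], [0.0, 1.0]])

    # epsilon keeps the log away from zero; the default is usually fine.
    loss = tf.contrib.losses.log_loss(predictions, targets, epsilon=1e-7)

    with tf.Session() as sess:
        print(sess.run(loss))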

tf.contrib.losses.mean_pairwise_squared_error(predictions, targets, weight=1.0, scope=None)

Adds a pairwise-errors-squared loss to the training procedure.

This function replaces the deprecated sum_of_pairwise_squares (see below), which will be removed after 2016-10-01.

Unlike the sum_of_squares loss, which is a measure of the differences between corresponding elements of predictions and targets, sum_of_pairwise_squares is a measure of the differences between pairs of corresponding elements of predictions and targets.

For example, if targets=[a, b, c] and predictions=[x, y, z], three pairs of differences are summed to compute the loss: loss = [((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2] / 3

Note that since the inputs are of size [batch_size, d0, ... dN], the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if predictions represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector.

Args:
  • predictions: The predicted outputs, a tensor of size [batch_size, d0, ... dN] where N+1 is the total number of dimensions in predictions.
  • targets: The ground truth output tensor, whose shape must match the shape of the predictions tensor.
  • weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of predictions doesn't match that of targets or if the shape of weight is invalid.
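
To make the pairwise formula concrete, here is a one-sample sketch matching the [a, b, c] / [x, y, z] example above (same TensorFlow assumptions as the earlier sketches):

    import tensorflow as tf

    targets = tf.constant([[1.0, 4.0, 6.0]])      # a, b, c
    predictions = tf.constant([[2.0, 3.0, 8.0]])  # x, y, z

    # Pairwise terms from the formula above:
    #   ((a-b) - (x-y))^2 = (-3 - (-1))^2 = 4
    #   ((a-c) - (x-z))^2 = (-5 - (-6))^2 = 1
    #   ((b-c) - (y-z))^2 = (-2 - (-5))^2 = 9
    loss = tf.contrib.losses.mean_pairwise_squared_error(predictions, targets)

    with tf.Session() as sess:
        print(sess.run(loss))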


tf.contrib.losses.mean_squared_error(predictions, targets, weight=1.0, scope=None)

Adds a Sum-of-Squares loss to the training procedure.

This function replaces the deprecated sum_of_squares (see below), which will be removed after 2016-10-01.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.

Args:
  • predictions: The predicted outputs.
  • targets: The ground truth output tensor, same dimensions as 'predictions'.
  • weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of predictions doesn't match that of targets or if the shape of weight is invalid.
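
A short sketch (same assumptions as the earlier examples):

    import tensorflow as tf

    predictions = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    targets = tf.constant([[1.0, 1.0], [2.0, 5.0]])

    # Element-wise squared errors are 0, 1, 1, 1; with the default scalar
    # weight the result is their weighted mean.
    loss = tf.contrib.losses.mean_squared_error(predictions, targets)

    with tf.Session() as sess:
        print(sess.run(loss))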


tf.contrib.losses.sigmoid_cross_entropy(logits, multi_class_labels, weight=1.0, label_smoothing=0, scope=None)

Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

If label_smoothing is nonzero, smooth the labels towards 1/2: new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing

Args:
  • logits: [batch_size, num_classes] logits outputs of the network.
  • multi_class_labels: [batch_size, num_classes] target labels in {0, 1}.
  • weight: Coefficients for the loss. The tensor must be a scalar, a tensor of shape [batch_size] or shape [batch_size, num_classes].
  • label_smoothing: If greater than 0 then smooth the labels.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of logits doesn't match that of multi_class_labels, if the shape of weight is invalid, or if weight is None.
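
A sketch with independent binary labels per class and label smoothing (same assumptions as above):

    import tensorflow as tf

    logits = tf.constant([[1.2, -0.8], [-0.4, 2.0]])
    # Each class is an independent binary decision.
    multi_class_labels = tf.constant([[1.0, 0.0], [0.0, 1.0]])

    # label_smoothing=0.1 pulls labels towards 0.5: 1.0 -> 0.95, 0.0 -> 0.05.
    loss = tf.contrib.losses.sigmoid_cross_entropy(
        logits, multi_class_labels, label_smoothing=0.1)

    with tf.Session() as sess:
        print(sess.run(loss))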

tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=1.0, label_smoothing=0, scope=None)

Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

If label_smoothing is nonzero, smooth the labels towards 1/num_classes: new_onehot_labels = onehot_labels * (1 - label_smoothing) + label_smoothing / num_classes

Args:
  • logits: [batch_size, num_classes] logits outputs of the network.
  • onehot_labels: [batch_size, num_classes] one-hot-encoded target labels.
  • weight: Coefficients for the loss. The tensor must be a scalar or a tensor of shape [batch_size].
  • label_smoothing: If greater than 0 then smooth the labels.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of logits doesn't match that of onehot_labels or if the shape of weight is invalid or if weight is None.
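
A sketch with one-hot labels and smoothing towards 1/num_classes (same assumptions as above):

    import tensorflow as tf

    logits = tf.constant([[2.0, 0.5, -1.0], [0.1, 1.5, 0.2]])
    onehot_labels = tf.constant([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

    # With num_classes=3 and label_smoothing=0.1:
    #   1.0 -> 1.0 * 0.9 + 0.1/3 ≈ 0.933, and 0.0 -> 0.1/3 ≈ 0.033.
    loss = tf.contrib.losses.softmax_cross_entropy(
        logits, onehot_labels, label_smoothing=0.1)

    with tf.Session() as sess:
        print(sess.run(loss))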

tf.contrib.losses.sparse_softmax_cross_entropy(logits, labels, weight=1.0, scope=None)

Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample.

Args:
  • logits: [batch_size, num_classes] logits outputs of the network.
  • labels: [batch_size, 1] or [batch_size] target labels of dtype int32 or int64 in the range [0, num_classes).
  • weight: Coefficients for the loss. The tensor must be a scalar or a tensor of shape [batch_size] or [batch_size, 1].
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shapes of logits, labels, and weight are incompatible, or if weight is None.
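
The sparse variant takes integer class indices instead of one-hot vectors; a minimal sketch (same assumptions as above):

    import tensorflow as tf

    logits = tf.constant([[2.0, 0.5, -1.0], [0.1, 1.5, 0.2]])
    # Integer class indices in [0, num_classes).
    labels = tf.constant([0, 1], dtype=tf.int64)

    loss = tf.contrib.losses.sparse_softmax_cross_entropy(logits, labels)

    with tf.Session() as sess:
        print(sess.run(loss))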

tf.contrib.losses.sum_of_pairwise_squares(predictions, targets, weight=1.0, scope=None)

Adds a pairwise-errors-squared loss to the training procedure. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-01. Instructions for updating: Use mean_pairwise_squared_error.

Unlike the sum_of_squares loss, which is a measure of the differences between corresponding elements of predictions and targets, sum_of_pairwise_squares is a measure of the differences between pairs of corresponding elements of predictions and targets.

For example, if targets=[a, b, c] and predictions=[x, y, z], three pairs of differences are summed to compute the loss: loss = [((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2] / 3

Note that since the inputs are of size [batch_size, d0, ... dN], the corresponding pairs are computed within each batch sample but not across samples within a batch. For example, if predictions represents a batch of 16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs is drawn from each image, but not across images.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector.

Args:
  • predictions: The predicted outputs, a tensor of size [batch_size, d0, ... dN] where N+1 is the total number of dimensions in predictions.
  • targets: The ground truth output tensor, whose shape must match the shape of the predictions tensor.
  • weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of predictions doesn't match that of targets or if the shape of weight is invalid.


tf.contrib.losses.sum_of_squares(predictions, targets, weight=1.0, scope=None)

Adds a Sum-of-Squares loss to the training procedure. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-01. Instructions for updating: Use mean_squared_error.

weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.

Args:
  • predictions: The predicted outputs.
  • targets: The ground truth output tensor, same dimensions as 'predictions'.
  • weight: Coefficients for the loss: a scalar, a tensor of shape [batch_size], or a tensor whose shape matches predictions.
  • scope: The scope for the operations performed in computing the loss.
Returns:

A scalar Tensor representing the loss value.

Raises:
  • ValueError: If the shape of predictions doesn't match that of targets or if the shape of weight is invalid.