tf.raw_ops.FractionalAvgPoolGrad

Computes gradient of the FractionalAvgPool function.

tf.raw_ops.FractionalAvgPoolGrad(
    orig_input_tensor_shape, out_backprop, row_pooling_sequence,
    col_pooling_sequence, overlapping=False, name=None
)

Unlike FractionalMaxPoolGrad, FractionalAvgPoolGrad does not need to find an arg_max; it only needs to back-propagate each element of out_backprop evenly to the indices that form the corresponding pooling cell. Therefore, it only needs the shape of the original input tensor, not the whole tensor.
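
As an illustration of this even back-propagation, here is a minimal 1-D sketch in plain Python (not the actual kernel; the function name and the sequence layout are assumptions for exposition). Each gradient element is divided by the size of its pooling cell and accumulated into every input index of that cell:

def fractional_avg_pool_grad_1d(input_len, out_backprop, pooling_sequence,
                                overlapping=False):
    # Toy sketch: consecutive entries of pooling_sequence delimit cells;
    # with overlapping=True the right boundary index is shared by both cells.
    grad = [0.0] * input_len
    for i, g in enumerate(out_backprop):
        start = pooling_sequence[i]
        end = pooling_sequence[i + 1] + (1 if overlapping else 0)
        end = min(end, input_len)
        cell_size = end - start
        for j in range(start, end):
            grad[j] += g / cell_size
    return grad

# Two cells, {0, 1} and {2, 3, 4}; each upstream gradient of 1.0 is
# spread evenly over its cell.
print(fractional_avg_pool_grad_1d(5, [1.0, 1.0], [0, 2, 5]))
# [0.5, 0.5, 0.333..., 0.333..., 0.333...]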

Args:

  • orig_input_tensor_shape: A Tensor of type int64. The shape of the original input tensor to fractional_avg_pool.
  • out_backprop: A Tensor. Must be one of the following types: float32, float64, int32, int64. 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the output of fractional_avg_pool.
  • row_pooling_sequence: A Tensor of type int64. Row pooling sequence; together with col_pooling_sequence it defines the pooling regions.
  • col_pooling_sequence: A Tensor of type int64. Column pooling sequence; together with row_pooling_sequence it defines the pooling regions.
  • overlapping: An optional bool. Defaults to False. When set to True, the values at the boundary of adjacent pooling cells are used by both cells. For example:

    index  0   1   2   3   4
    value  20  5   16  3   7

    If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result for fractional average pooling would be [41/3, 26/3] (see the sketch after this list).

  • name: A name for the operation (optional).
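
To check the overlapping example above numerically, here is a small sketch (the variable names are illustrative, not part of the API):

values = [20, 5, 16, 3, 7]
pooling_sequence = [0, 2, 4]

# With overlapping=True, cell i covers indices seq[i]..seq[i+1] inclusive,
# so index 2 (value 16) belongs to both cells.
cells = [values[pooling_sequence[i]:pooling_sequence[i + 1] + 1]
         for i in range(len(pooling_sequence) - 1)]
print([sum(c) / len(c) for c in cells])  # [13.666..., 8.666...], i.e. [41/3, 26/3]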

Returns:

A Tensor. Has the same type as out_backprop.
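
A minimal end-to-end sketch of driving this op directly (assumes TensorFlow 2.x eager execution; the pooling ratio, seed, and all-ones upstream gradient are arbitrary choices for illustration):

import tensorflow as tf

# A single 4x4 one-channel input.
x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

# Forward pass: fractional average pooling. deterministic=True with a fixed
# seed makes the generated pooling sequences reproducible.
pooled = tf.raw_ops.FractionalAvgPool(
    value=x,
    pooling_ratio=[1.0, 1.5, 1.5, 1.0],
    deterministic=True,
    seed=7)

# Backward pass: distribute an all-ones upstream gradient evenly over the
# pooling cells. Only the shape of the original input is required.
grad = tf.raw_ops.FractionalAvgPoolGrad(
    orig_input_tensor_shape=tf.shape(x, out_type=tf.int64),
    out_backprop=tf.ones_like(pooled.output),
    row_pooling_sequence=pooled.row_pooling_sequence,
    col_pooling_sequence=pooled.col_pooling_sequence,
    overlapping=False)

print(grad.shape)  # (1, 4, 4, 1) -- same shape as the original input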