tft.var

Computes the variance of the values of a Tensor over the whole dataset.

Uses the biased variance (0 delta degrees of freedom), as given by sum((x - mean(x))**2) / length(x).
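
For concreteness, the formula can be checked numerically. This is a sketch using NumPy rather than the analyzer itself, purely to illustrate the 0-ddof definition:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Biased variance: sum of squared deviations divided by the count (ddof=0).
biased = np.sum((x - np.mean(x)) ** 2) / len(x)

assert np.isclose(biased, np.var(x, ddof=0))  # both give 1.25
```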

Args:
  x: A Tensor, SparseTensor, or RaggedTensor. Its type must be floating point (float{16|32|64}) or integral ([u]int{8|16|32|64}).
  reduce_instance_dims: By default, collapses the batch and instance dimensions to arrive at a single scalar output. If False, collapses only the batch dimension and outputs a vector of the same shape as the input (see the sketch after this list).
  name: (Optional) A name for this operation.
  output_dtype: (Optional) If not None, casts the output tensor to this type.
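
A minimal usage sketch inside a preprocessing_fn, assuming a hypothetical dense float feature 'x' of shape [batch, 3] (the feature names and shape are assumptions for illustration, not part of the API):

```python
import tensorflow as tf
import tensorflow_transform as tft

def preprocessing_fn(inputs):
  # 'x' is a hypothetical [batch, 3] float feature (an assumption).
  x = inputs['x']

  # Default behavior: a single scalar variance over every value in the dataset.
  x_standardized = (x - tft.mean(x)) / tf.sqrt(tft.var(x))

  # reduce_instance_dims=False: keeps the instance dimension, yielding a
  # length-3 vector with one variance per column of x.
  col_var = tft.var(x, reduce_instance_dims=False)
  col_mean = tft.mean(x, reduce_instance_dims=False)
  x_col_scaled = (x - col_mean) / tf.sqrt(col_var)

  return {'x_standardized': x_standardized, 'x_col_scaled': x_col_scaled}
```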

Returns:
  A Tensor containing the variance. If x is floating point, the variance will have the same type as x. If x is integral, the output is cast to float32. NaNs and infinite input values are ignored.
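
A short sketch of the dtype behavior just described, again with an assumed feature name ('counts', int64):

```python
import tensorflow as tf
import tensorflow_transform as tft

def preprocessing_fn(inputs):
  # 'counts' is a hypothetical int64 feature (an assumption).
  counts = inputs['counts']

  v = tft.var(counts)                             # integral input -> tf.float32 output
  v64 = tft.var(counts, output_dtype=tf.float64)  # explicitly cast the output

  scaled = tf.cast(counts, tf.float32) / tf.sqrt(v)
  scaled64 = tf.cast(counts, tf.float64) / tf.sqrt(v64)
  return {'counts_scaled': scaled, 'counts_scaled64': scaled64}
```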

Raises:
  TypeError: If the type of x is not supported.