tft.scale_to_z_score_per_key

tft.scale_to_z_score_per_key(
    x,
    key=None,
    elementwise=False,
    name=None,
    output_dtype=None
)

Returns a standardized column with mean 0 and variance 1, grouped per key.

Scaling to z-score subtracts the mean and divides by the standard deviation. Note that the standard deviation computed here is based on the biased variance (0 delta degrees of freedom), as computed by analyzers.var.
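The per-key math can be illustrated with a small NumPy sketch (NumPy stands in for the TensorFlow analyzers here; the array values and key labels are invented for illustration):

```python
import numpy as np

# For each key, subtract the mean and divide by the biased (ddof=0)
# standard deviation, mirroring the variance convention of analyzers.var.
x = np.array([1.0, 2.0, 3.0, 10.0, 20.0, 30.0])
key = np.array(["a", "a", "a", "b", "b", "b"])

scaled = np.empty_like(x)
for k in np.unique(key):
    mask = key == k
    mean = x[mask].mean()
    std = x[mask].std(ddof=0)  # biased variance: divide by n, not n - 1
    scaled[mask] = (x[mask] - mean) / std
```

After this loop, the values under each key have mean 0 and standard deviation 1, computed independently per key.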

Args:

  • x: A numeric Tensor or SparseTensor.
  • key: A Tensor or SparseTensor of dtype tf.string. Must meet one of the following conditions:
    1. key is None;
    2. both x and key are dense;
    3. both x and key are sparse, and key exactly matches x in everything except values;
    4. the axis=1 index of each x matches its index of the dense key.
  • elementwise: If True, scales each element of the tensor independently; otherwise, uses the mean and variance of the whole tensor. Currently not supported for per-key operations.
  • name: (Optional) A name for this operation.
  • output_dtype: (Optional) If not None, casts the output tensor to this type.

Returns:

A Tensor or SparseTensor containing the input column scaled to mean 0 and variance 1 (standard deviation 1), grouped per key if a key is provided.

That is, for all keys k: (x - mean(x)) / std_dev(x) for all x with key k. If x is floating point, the mean will have the same type as x. If x is integral, the output is cast to tf.float32.

Note that TFLearn generally permits only tf.int64 and tf.float32, so casting this scaler's output may be necessary.
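As a usage sketch, the function is typically called inside a tensorflow_transform preprocessing_fn. The feature names 'price' and 'store_id' below are hypothetical, and the tf.cast illustrates the integral-input behavior described above:

```python
import tensorflow as tf
import tensorflow_transform as tft

def preprocessing_fn(inputs):
    """Scales 'price' to a z-score computed separately per 'store_id'."""
    # 'price' and 'store_id' are hypothetical feature names; each row's
    # 'price' is standardized against the statistics of its 'store_id' group.
    return {
        'price_scaled': tft.scale_to_z_score_per_key(
            tf.cast(inputs['price'], tf.float32),
            key=inputs['store_id'],
        ),
        'store_id': inputs['store_id'],
    }
```

Running this preprocessing_fn through a tf.Transform pipeline (e.g. with Apache Beam) computes the per-key means and variances over the full analysis dataset, then applies the scaling row by row.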