
tf.raw_ops.QuantizedMatMulWithBiasAndRequantize

Performs a quantized matrix multiplication of a by the matrix b, adds bias, and requantizes the qint32 accumulator into the frozen output range [min_freezed_output, max_freezed_output].

Args
a A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
b A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
bias A Tensor. Must be one of the following types: float32, qint32. A 1-D bias tensor whose size matches the inner dimension of b.
min_a A Tensor of type float32. The float value that the lowest quantized value of a represents.
max_a A Tensor of type float32. The float value that the highest quantized value of a represents.
min_b A Tensor of type float32. The float value that the lowest quantized value of b represents.
max_b A Tensor of type float32. The float value that the highest quantized value of b represents.
min_freezed_output A Tensor of type float32. The float value that the lowest quantized output value represents; the output range is fixed ("frozen") at graph-construction time.
max_freezed_output A Tensor of type float32. The float value that the highest quantized output value represents.
Toutput An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.quint8.
transpose_a An optional bool. Defaults to False. If True, a is transposed before multiplication.
transpose_b An optional bool. Defaults to False. If True, b is transposed before multiplication.
input_quant_mode An optional string from: "MIN_FIRST", "SCALED". Defaults to "MIN_FIRST". The quantization mode used for the a input.
name A name for the operation (optional).

Returns
A tuple of Tensor objects (out, min_out, max_out).
out A Tensor of type Toutput.
min_out A Tensor of type float32.
max_out A Tensor of type float32.
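The op itself is typically only registered in oneDNN/MKL-enabled TensorFlow builds, so as a rough illustration of its semantics, here is a NumPy sketch of quantize, matmul-with-bias, and requantize against a frozen output range. All shapes, ranges, and the SCALED-style quantization choices below are illustrative assumptions, not the kernel's exact rounding behavior.

```python
import numpy as np

# Float inputs and an assumed frozen output range (min/max_freezed_output).
a_f = np.array([[1.0, 2.0], [3.0, 4.0]], np.float32)
b_f = np.array([[0.5, -1.0], [1.5, 2.0]], np.float32)
bias = np.array([0.1, -0.2], np.float32)
min_frozen, max_frozen = 0.0, 16.0  # assumed, chosen to cover the result

# SCALED-style quantization of b to a signed 8-bit range (qint8).
s_b = np.max(np.abs(b_f)) / 127.0
b_q = np.round(b_f / s_b).astype(np.int32)

# SCALED-style quantization of a to an unsigned 8-bit range (quint8);
# a is assumed non-negative here for simplicity.
s_a = np.max(np.abs(a_f)) / 255.0
a_q = np.round(a_f / s_a).astype(np.int32)

# 32-bit accumulation: the accumulator scale is s_a * s_b,
# so the float bias is folded in at that scale.
acc = a_q @ b_q + np.round(bias / (s_a * s_b)).astype(np.int32)

# Requantize the int32 accumulator to uint8 against the frozen range.
s_out = (max_frozen - min_frozen) / 255.0
out_q = np.clip(np.round(acc * (s_a * s_b) / s_out), 0, 255).astype(np.uint8)

# Dequantizing out_q approximates the float matmul + bias.
approx = out_q * s_out + min_frozen
```

Because the output range is frozen ahead of time, the requantize step is a fixed rescaling rather than a data-dependent one, which is what lets the fused op emit a quint8 result directly instead of a qint32 tensor plus a separate Requantize node.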