tf.raw_ops.QuantizedMatMulWithBiasAndReluAndRequantize

Performs a quantized matrix multiplication of a by the matrix b, fused with bias add, relu, and requantize.

The inputs must be two-dimensional matrices and a 1D bias vector. The inner dimension of a (after being transposed if transpose_a is non-zero) must match the outer dimension of b (after being transposed if transpose_b is non-zero). The bias values are then added to the matrix multiplication result with broadcasting; the bias size must match the inner dimension of b. A relu activation is applied to produce a non-negative result, and a final requantize operation yields the uint8 output.
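To make the fused semantics concrete, below is a rough NumPy sketch of the equivalent unfused computation: dequantize the inputs, matmul, broadcast bias add, relu, then requantize into the frozen output range. The function name and the exact dequantize/requantize formulas are illustrative assumptions (the real mapping depends on the op's quantization mode and TensorFlow's quantization conventions); this is not the kernel implementation.

```python
import numpy as np

def fused_matmul_bias_relu_requantize_reference(
    a_q, b_q, bias, min_a, max_a, min_b, max_b, min_out, max_out):
    """Conceptual float reference for the fused op (hypothetical helper).

    a_q: uint8 matrix, b_q: int8 matrix, bias: float32 1-D vector.
    The min_*/max_* floats give the quantization ranges of the inputs and
    the frozen output. Returns a uint8 matrix quantized into
    [min_out, max_out].
    """
    # Dequantize the inputs to float (assumed affine/symmetric mappings).
    a_f = min_a + (max_a - min_a) * a_q.astype(np.float32) / 255.0
    b_f = b_q.astype(np.float32) * max(abs(min_b), abs(max_b)) / 127.0

    # Fused computation: matmul + broadcast bias add + relu.
    acc = a_f @ b_f + bias
    acc = np.maximum(acc, 0.0)

    # Requantize the non-negative result into uint8 using the frozen range.
    scale = 255.0 / (max_out - min_out)
    return np.clip(np.round((acc - min_out) * scale), 0, 255).astype(np.uint8)
```

In the actual op, the output range is fixed ahead of time (min_freezed_output / max_freezed_output in the op's inputs), which is why the requantize step can be fused without recomputing the range from the result.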

Args:

a: A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. A matrix to be multiplied. Must be a two-dimensional tensor of type quint8.

b: A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. A matrix to be multiplied. Must be a two-dimensional tensor of type qint8.

bias: A Tensor. Must be one of the following types: float32, qint32. A 1D bias tensor whose size matches the inner dimension of b (after being transposed if transpose_b is non-zero).

min_a: A Tensor of type float32. The float value that