
tfp.experimental.auto_batching.instructions.FunctionCallOp

Class FunctionCallOp

Call a Function.

This is a higher-level instruction, what in LLVM jargon is called an "intrinsic". An upstream compiler may construct such instructions; there is a pass that lowers these to sequences of instructions the downstream VM can stage directly.

This differs from PrimOp in that the function being called is itself implemented in this instruction language, and is subject to auto-batching by the downstream VM.

A FunctionCallOp is required to statically know the identity of the Function being called. This is because we want to copy the return values to their destinations at the caller side of the return sequence. Why do we want that? Because threads may diverge at function returns, thus needing to write the returned values to different caller variables. Doing that on the callee side would require per-thread information about where to write the variables, which, in this design, is encoded in the program counter stack. Why, in turn, may threads diverge at function returns? Because part of the point is to allow them to converge when calling the same function, even if from different points.

Args:

  • function: A Function object describing the function to call. This requires all call targets to be known statically.
  • vars_in: list of strings. The names of the VM variables whose current values to pass to the function.
  • vars_out: pattern of strings. The names of the VM variables where to save the results returned from function.
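As a hedged sketch of how these three fields fit together, the following stand-in mirrors the documented record structure (the real class lives in `tfp.experimental.auto_batching.instructions`; the `Function` placeholder and the variable names `x`, `y`, `z` here are illustrative assumptions, not the library's API):

```python
import collections

# Minimal stand-in mirroring the documented fields of FunctionCallOp.
# Not the real TFP class -- just the same named-tuple shape.
FunctionCallOp = collections.namedtuple(
    "FunctionCallOp", ["function", "vars_in", "vars_out"])

# Hypothetical placeholder for the instruction language's Function object.
Function = collections.namedtuple("Function", ["name"])

# A call that reads the VM variables `x` and `y` and writes the
# function's return value into the caller variable `z`.
call = FunctionCallOp(
    function=Function(name="add"),   # call target known statically
    vars_in=["x", "y"],              # list of input variable names
    vars_out=("z",))                 # pattern of output variable names
```

Note that `vars_out` is a *pattern* rather than a flat list, so a callee returning a structured result can be unpacked into several caller variables.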

__new__

__new__(
    _cls,
    function,
    vars_in,
    vars_out
)

Create a new instance of FunctionCallOp(function, vars_in, vars_out).

Properties

function

vars_in

vars_out

Methods

replace


replace(vars_out=None)

Return a copy of self with vars_out replaced.
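Since FunctionCallOp is a named-tuple-style record, `replace` behaves like a shallow copy with the output pattern swapped. A hedged sketch of that semantics, using a stand-in tuple rather than the real TFP class (field values here are illustrative assumptions):

```python
import collections

# Stand-in with the same shape as FunctionCallOp; `_replace` on a
# namedtuple models the documented copy-with-vars_out-swapped behavior.
FunctionCallOp = collections.namedtuple(
    "FunctionCallOp", ["function", "vars_in", "vars_out"])

call = FunctionCallOp(function=None, vars_in=["x"], vars_out=("y",))

# Retarget the call's results to a different caller variable.
retargeted = call._replace(vars_out=("z",))
```

The original instruction is left untouched; only the copy carries the new output pattern.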