CallableOptionsOrBuilder

public interface CallableOptionsOrBuilder
Known Indirect Subclasses

Public Methods

abstract boolean
containsFeedDevices (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract boolean
containsFetchDevices (String key)
map<string, string> fetch_devices = 7;
abstract String
getFeed (int index)
 Tensors to be fed in the callable.
abstract com.google.protobuf.ByteString
getFeedBytes (int index)
 Tensors to be fed in the callable.
abstract int
getFeedCount ()
 Tensors to be fed in the callable.
abstract Map<String, String>
getFeedDevices ()
Use getFeedDevicesMap() instead.
abstract int
getFeedDevicesCount ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract Map<String, String>
getFeedDevicesMap ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrDefault (String key, String defaultValue)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrThrow (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract List<String>
getFeedList ()
 Tensors to be fed in the callable.
abstract String
getFetch (int index)
 Fetches.
abstract com.google.protobuf.ByteString
getFetchBytes (int index)
 Fetches.
abstract int
getFetchCount ()
 Fetches.
abstract Map<String, String>
getFetchDevices ()
Use getFetchDevicesMap() instead.
abstract int
getFetchDevicesCount ()
map<string, string> fetch_devices = 7;
abstract Map<String, String>
getFetchDevicesMap ()
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrDefault (String key, String defaultValue)
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrThrow (String key)
map<string, string> fetch_devices = 7;
abstract List<String>
getFetchList ()
 Fetches.
abstract boolean
getFetchSkipSync ()
 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced.
abstract RunOptions
getRunOptions ()
 Options that will be applied to each run.
abstract RunOptionsOrBuilder
getRunOptionsOrBuilder ()
 Options that will be applied to each run.
abstract String
getTarget (int index)
 Target Nodes.
abstract com.google.protobuf.ByteString
getTargetBytes (int index)
 Target Nodes.
abstract int
getTargetCount ()
 Target Nodes.
abstract List<String>
getTargetList ()
 Target Nodes.
abstract TensorConnection
getTensorConnection (int index)
 Tensors to be connected in the callable.
abstract int
getTensorConnectionCount ()
 Tensors to be connected in the callable.
abstract List<TensorConnection>
getTensorConnectionList ()
 Tensors to be connected in the callable.
abstract TensorConnectionOrBuilder
getTensorConnectionOrBuilder (int index)
 Tensors to be connected in the callable.
abstract List<? extends TensorConnectionOrBuilder>
getTensorConnectionOrBuilderList ()
 Tensors to be connected in the callable.
abstract boolean
hasRunOptions ()
 Options that will be applied to each run.

Public Methods

public abstract boolean containsFeedDevices (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
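
The fragment below is a minimal sketch of the options described above, written against the standard protoc-generated builder (CallableOptions.Builder implements CallableOptionsOrBuilder). The import path org.tensorflow.framework and the wrapper class name are assumptions and may differ between TensorFlow Java releases.

  import org.tensorflow.framework.CallableOptions;

  public class FeedDeviceExample {
    public static void main(String[] args) {
      // Build the CallableOptions shown in the proto-text example above:
      // "a:0" is fed from GPU memory, "y:0" is fetched into GPU memory.
      CallableOptions options = CallableOptions.newBuilder()
          .addFeed("a:0")
          .addFeed("b:0")
          .addFetch("x:0")
          .addFetch("y:0")
          .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .build();

      // Query the map through the CallableOptionsOrBuilder view.
      System.out.println(options.containsFeedDevices("a:0"));                   // true
      System.out.println(options.getFeedDevicesOrDefault("b:0", "host (CPU)")); // "host (CPU)": no entry for b:0
    }
  }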

public abstract boolean containsFetchDevices (String key)

map<string, string> fetch_devices = 7;

public abstract String getFeed (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract com.google.protobuf.ByteString getFeedBytes (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract int getFeedCount ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract Map<String, String> getFeedDevices ()

Use getFeedDevicesMap() instead.

public abstract int getFeedDevicesCount ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract Map<String, String> getFeedDevicesMap ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrDefault (String key, String defaultValue)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrThrow (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract List<String> getFeedList ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;
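
As an illustration of the indexed and list accessors above, the fragment below (reusing the options value from the earlier sketch) walks the repeated feed field in two equivalent ways.

  // Indexed access ...
  for (int i = 0; i < options.getFeedCount(); i++) {
    System.out.println(options.getFeed(i));      // prints "a:0", then "b:0"
  }
  // ... or the equivalent list view.
  for (String feedName : options.getFeedList()) {
    System.out.println(feedName);
  }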

public abstract String getFetch (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract com.google.protobuf.ByteString getFetchBytes (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract int getFetchCount ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract Map<String, String> getFetchDevices ()

Use getFetchDevicesMap() instead.

public abstract int getFetchDevicesCount ()

map<string, string> fetch_devices = 7;

public abstract Map<String, String> getFetchDevicesMap ()

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrDefault (String key, String defaultValue)

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrThrow (String key)

map<string, string> fetch_devices = 7;

public abstract List<String> getFetchList ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract boolean getFetchSkipSync ()

 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced. This simplifies interacting with the tensors, but
 potentially incurs a performance hit.
 If this option is set to true, the caller is responsible for ensuring
 that the values in the fetched tensors have been produced before they are
 used. The caller can do this by invoking `Device::Sync()` on the underlying
 device(s), or by feeding the tensors back to the same Session using
 `feed_devices` with the same corresponding device name.
 
bool fetch_skip_sync = 8;
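
A hedged sketch of opting out of the post-run synchronization; setFetchSkipSync is the standard protoc-generated setter assumed for this field, and the device string is only illustrative.

  // Skip the GPU stream sync after RunCallable(). The caller then becomes
  // responsible for synchronizing the device (e.g. Device::Sync() in the C++
  // runtime) before reading the GPU-backed fetched tensor.
  CallableOptions skipSync = CallableOptions.newBuilder()
      .addFetch("y:0")
      .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
      .setFetchSkipSync(true)
      .build();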

public abstract RunOptions getRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;

public abstract RunOptionsOrBuilder getRunOptionsOrBuilder ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;
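
Since run_options is a singular message field, it is worth checking presence before reading it; per standard protobuf semantics, getRunOptions() returns the default RunOptions instance when the field is unset. A short fragment, continuing from the earlier options value:

  if (options.hasRunOptions()) {
    RunOptions perRun = options.getRunOptions();
    // ... inspect perRun; getRunOptionsOrBuilder() gives a read-only view,
    // which avoids materializing a message when this interface is backed by a Builder.
  }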

public abstract String getTarget (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract com.google.protobuf.ByteString getTargetBytes (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract int getTargetCount ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract List<String> getTargetList ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;
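
For example, a node can be run purely for its side effects by listing it as a target rather than a fetch; the sketch below uses a hypothetical node name, and addTarget is the standard protoc-generated adder assumed for this field.

  // "init_all_vars_op" runs inside the callable, but no tensor is returned for it.
  CallableOptions withTarget = CallableOptions.newBuilder()
      .addFetch("y:0")
      .addTarget("init_all_vars_op")
      .build();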

public abstract TensorConnection getTensorConnection (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract int getTensorConnectionCount ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<TensorConnection> getTensorConnectionList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;
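
A sketch of adding and reading back a connection; setFromTensor/setToTensor and getFromTensor/getToTensor are the accessors assumed to be generated for TensorConnection's from_tensor/to_tensor fields, and the tensor names are illustrative.

  // Create an edge between "a:0" and "b:0" inside the callable.
  CallableOptions connected = CallableOptions.newBuilder()
      .addTensorConnection(TensorConnection.newBuilder()
          .setFromTensor("a:0")
          .setToTensor("b:0"))
      .build();

  for (TensorConnectionOrBuilder tc : connected.getTensorConnectionOrBuilderList()) {
    System.out.println(tc.getFromTensor() + " -> " + tc.getToTensor());
  }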

public abstract boolean hasRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;