CallableOptionsOrBuilder

public interface CallableOptionsOrBuilder
Known Indirect Subclasses

Public Methods

abstract boolean
containsFeedDevices (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract boolean
containsFetchDevices (String key)
map<string, string> fetch_devices = 7;
abstract String
getFeed (int index)
 Tensors to be fed in the callable.
abstract com.google.protobuf.ByteString
getFeedBytes (int index)
 Tensors to be fed in the callable.
abstract int
getFeedCount ()
 Tensors to be fed in the callable.
abstract Map<String, String>
getFeedDevices ()
Use getFeedDevicesMap() instead.
abstract int
getFeedDevicesCount ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract Map<String, String>
getFeedDevicesMap ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrDefault (String key, String defaultValue)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrThrow (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract List<String>
getFeedList ()
 Tensors to be fed in the callable.
abstract String
getFetch (int index)
 Fetches.
abstract com.google.protobuf.ByteString
getFetchBytes (int index)
 Fetches.
abstract int
getFetchCount ()
 Fetches.
abstract Map<String, String>
getFetchDevices ()
Use getFetchDevicesMap() instead.
abstract int
getFetchDevicesCount ()
map<string, string> fetch_devices = 7;
abstract Map<String, String>
getFetchDevicesMap ()
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrDefault (String key, String defaultValue)
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrThrow (String key)
map<string, string> fetch_devices = 7;
abstract List<String>
getFetchList ()
 Fetches.
abstract boolean
getFetchSkipSync ()
 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced.
abstract RunOptions
getRunOptions ()
 Options that will be applied to each run.
abstract RunOptionsOrBuilder
getRunOptionsOrBuilder ()
 Options that will be applied to each run.
abstract String
getTarget (int index)
 Target Nodes.
abstract com.google.protobuf.ByteString
getTargetBytes (int index)
 Target Nodes.
abstract int
getTargetCount ()
 Target Nodes.
abstract List<String>
getTargetList ()
 Target Nodes.
abstract TensorConnection
getTensorConnection (int index)
 Tensors to be connected in the callable.
abstract int
getTensorConnectionCount ()
 Tensors to be connected in the callable.
abstract List<TensorConnection>
getTensorConnectionList ()
 Tensors to be connected in the callable.
abstract TensorConnectionOrBuilder
getTensorConnectionOrBuilder (int index)
 Tensors to be connected in the callable.
abstract List<? extends TensorConnectionOrBuilder>
getTensorConnectionOrBuilderList ()
 Tensors to be connected in the callable.
abstract boolean
hasRunOptions ()
 Options that will be applied to each run.

Public Methods

public abstract boolean containsFeedDevices (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
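
The proto-text example above can also be built programmatically. A minimal sketch, assuming the org.tensorflow.framework bindings and the standard protobuf-generated builder methods (addFeed, addFetch, putFeedDevices, putFetchDevices); the queries at the end use the accessors documented on this page:

import java.util.Map;
import org.tensorflow.framework.CallableOptions;

public class DeviceBackedCallableSketch {
  public static void main(String[] args) {
    // Feed "a:0" from GPU memory and return fetch "y:0" in GPU memory;
    // "b:0" and "x:0" stay on the host, as in the CallableOptions example above.
    CallableOptions options =
        CallableOptions.newBuilder()
            .addFeed("a:0")
            .addFeed("b:0")
            .addFetch("x:0")
            .addFetch("y:0")
            .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
            .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
            .build();

    // Accessors from CallableOptionsOrBuilder read the map fields back.
    boolean aOnDevice = options.containsFeedDevices("a:0");           // true
    String aDevice = options.getFeedDevicesOrThrow("a:0");            // GPU device name
    String bDevice = options.getFeedDevicesOrDefault("b:0", "host");  // "host": no entry for "b:0"
    Map<String, String> fetchDevices = options.getFetchDevicesMap();  // {"y:0" -> GPU device name}

    System.out.println(aOnDevice + " " + aDevice + " " + bDevice + " " + fetchDevices);
  }
}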

public abstract boolean containsFetchDevices (String key)

map<string, string> fetch_devices = 7;

public abstract String getFeed (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract com.google.protobuf.ByteString getFeedBytes (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract int getFeedCount ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;
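
A short sketch of reading the repeated feed field through the indexed accessors declared here (getFeedCount, getFeed) and the bulk accessor getFeedList; the org.tensorflow.framework package name is an assumption:

// `options` can be a CallableOptions or a CallableOptions.Builder,
// both of which implement CallableOptionsOrBuilder.
static void printFeeds(org.tensorflow.framework.CallableOptionsOrBuilder options) {
  for (int i = 0; i < options.getFeedCount(); i++) {
    System.out.println("feed[" + i + "] = " + options.getFeed(i));
  }
  java.util.List<String> feeds = options.getFeedList();  // equivalent bulk view
  System.out.println("all feeds: " + feeds);
}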

public abstract Map<String, String> getFeedDevices ()

Use getFeedDevicesMap() instead.

public abstract int getFeedDevicesCount ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract Map<String, String> getFeedDevicesMap ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrDefault (String key, String defaultValue)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrThrow (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract List<String> getFeedList ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract String getFetch (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract com.google.protobuf.ByteString getFetchBytes (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract int getFetchCount ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract Map<String, String> getFetchDevices ()

Use getFetchDevicesMap() instead.

public abstract int getFetchDevicesCount ()

map<string, string> fetch_devices = 7;

public abstract Map<String, String> getFetchDevicesMap ()

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrDefault (String key, String defaultValue)

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrThrow (String key)

map<string, string> fetch_devices = 7;

public abstract List<String> getFetchList ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract boolean getFetchSkipSync ()

 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced. This simplifies interacting with the tensors, but
 potentially incurs a performance hit.
 If this option is set to true, the caller is responsible for ensuring
 that the values in the fetched tensors have been produced before they are
 used. The caller can do this by invoking `Device::Sync()` on the underlying
 device(s), or by feeding the tensors back to the same Session using
 `feed_devices` with the same corresponding device name.
 
bool fetch_skip_sync = 8;
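
A minimal sketch of opting out of the implicit synchronization, assuming the protobuf-generated setter setFetchSkipSync on CallableOptions.Builder; with this flag set, the caller owns synchronization of GPU-backed fetches before reading them:

CallableOptions fastFetch =
    CallableOptions.newBuilder()
        .addFetch("y:0")
        .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        .setFetchSkipSync(true)  // skip the GPU stream sync in RunCallable()
        .build();

// The accessor documented above reads the flag back.
boolean skipSync = fastFetch.getFetchSkipSync();  // true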

public abstract RunOptions getRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;

public abstract RunOptionsOrBuilder getRunOptionsOrBuilder ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;
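
A sketch of attaching per-run options, assuming the generated RunOptions builder (trace_level, timeout_in_ms fields) and the setRunOptions setter on CallableOptions.Builder:

RunOptions runOptions =
    RunOptions.newBuilder()
        .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)  // collect a full trace on each run
        .setTimeoutInMs(30_000)                           // 30 s per-run timeout
        .build();

CallableOptions withRunOptions =
    CallableOptions.newBuilder()
        .addFetch("y:0")
        .setRunOptions(runOptions)
        .build();

if (withRunOptions.hasRunOptions()) {
  System.out.println(withRunOptions.getRunOptions().getTraceLevel());
}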

public abstract String getTarget (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract com.google.protobuf.ByteString getTargetBytes (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract int getTargetCount ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract List<String> getTargetList ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;
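
A sketch of running a node purely for its side effects, assuming the generated addTarget builder method; the node name is hypothetical:

CallableOptions initOnly =
    CallableOptions.newBuilder()
        .addTarget("init_all_vars_op")  // hypothetical node; executed, output not returned
        .build();

for (int i = 0; i < initOnly.getTargetCount(); i++) {
  System.out.println("target[" + i + "] = " + initOnly.getTarget(i));
}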

public abstract TensorConnection getTensorConnection (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract int getTensorConnectionCount ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<TensorConnection> getTensorConnectionList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;
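
A sketch of connecting two tensors inside the callable, assuming the generated TensorConnection message (from_tensor, to_tensor) and addTensorConnection on CallableOptions.Builder; both tensor names are hypothetical:

// The value of "keep_prob_const:0" is substituted wherever
// "keep_prob_placeholder:0" would be consumed in the callable.
TensorConnection connection =
    TensorConnection.newBuilder()
        .setFromTensor("keep_prob_const:0")
        .setToTensor("keep_prob_placeholder:0")
        .build();

CallableOptions connected =
    CallableOptions.newBuilder()
        .addTensorConnection(connection)
        .build();

System.out.println(connected.getTensorConnection(0).getFromTensor());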

public abstract boolean hasRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;