CallableOptionsOrBuilder

public interface CallableOptionsOrBuilder
Known Indirect Subclasses

Public Methods

abstract boolean
containsFeedDevices(String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract boolean
containsFetchDevices(String key)
map<string, string> fetch_devices = 7;
abstract String
getFeed(int index)
 Tensors to be fed in the callable.
abstract com.google.protobuf.ByteString
getFeedBytes(int index)
 Tensors to be fed in the callable.
abstract int
getFeedCount()
 Tensors to be fed in the callable.
abstract Map<String, String>
getFeedDevices()
Use getFeedDevicesMap() instead.
abstract int
getFeedDevicesCount()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract Map<String, String>
getFeedDevicesMap()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrDefault(String key, String defaultValue)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrThrow(String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract List<String>
getFeedList()
 Tensors to be fed in the callable.
abstract String
getFetch(int index)
 Fetches.
abstract com.google.protobuf.ByteString
getFetchBytes(int index)
 Fetches.
abstract int
getFetchCount()
 Fetches.
abstract Map<String, String>
getFetchDevices()
Use getFetchDevicesMap() instead.
abstract int
getFetchDevicesCount()
map<string, string> fetch_devices = 7;
abstract Map<String, String>
getFetchDevicesMap()
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrDefault(String key, String defaultValue)
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrThrow(String key)
map<string, string> fetch_devices = 7;
abstract List<String>
getFetchList()
 Fetches.
abstract boolean
getFetchSkipSync()
 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced.
abstract RunOptions
getRunOptions()
 Options that will be applied to each run.
abstract RunOptionsOrBuilder
getRunOptionsOrBuilder()
 Options that will be applied to each run.
abstract String
getTarget(int index)
 Target Nodes.
abstract com.google.protobuf.ByteString
getTargetBytes(int index)
 Target Nodes.
abstract int
getTargetCount()
 Target Nodes.
abstract List<String>
getTargetList()
 Target Nodes.
abstract TensorConnection
getTensorConnection(int index)
 Tensors to be connected in the callable.
abstract int
getTensorConnectionCount()
 Tensors to be connected in the callable.
abstract List<TensorConnection>
getTensorConnectionList()
 Tensors to be connected in the callable.
abstract TensorConnectionOrBuilder
getTensorConnectionOrBuilder(int index)
 Tensors to be connected in the callable.
abstract List<? extends TensorConnectionOrBuilder>
getTensorConnectionOrBuilderList()
 Tensors to be connected in the callable.
abstract boolean
hasRunOptions()
 Options that will be applied to each run.

Public Methods

public abstract boolean containsFeedDevices(String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
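
For illustration, a minimal sketch of building the options from the example above with the generated protobuf builder. The package of the generated classes is an assumption (org.tensorflow.proto.framework here; it differs between TensorFlow Java releases); the maps written with putFeedDevices/putFetchDevices are the ones queried by containsFeedDevices and the getFeedDevices* accessors.

  // Classes assumed imported from org.tensorflow.proto.framework (package varies by release).
  // Build the CallableOptions shown in the proto example above.
  CallableOptions options =
      CallableOptions.newBuilder()
          .addFeed("a:0")
          .addFeed("b:0")
          .addFetch("x:0")
          .addFetch("y:0")
          .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .build();

  // The built message implements CallableOptionsOrBuilder, so these accessors work on it.
  boolean aOnGpu = options.containsFeedDevices("a:0");  // true
  boolean bOnGpu = options.containsFeedDevices("b:0");  // false: "b:0" defaults to host memory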

public abstract boolean containsFetchDevices(String key)

map<string, string> fetch_devices = 7;

public abstract String getFeed(int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract com.google.protobuf.ByteString getFeedBytes(int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract int getFeedCount()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract Map<String, String> getFeedDevices()

Use getFeedDevicesMap() instead.

public abstract int getFeedDevicesCount()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract Map<String, String> getFeedDevicesMap()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrDefault(String key, String defaultValue)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrThrow(String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract List<String> getFeedList()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;
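
A short sketch of reading the repeated feed field, assuming the options message built in the sketch further above; getFeedList() returns the same names as an immutable List<String>.

  // Element-wise access mirrors getFeedCount()/getFeed(int index).
  // "options" is the CallableOptions message from the earlier sketch (assumed).
  for (int i = 0; i < options.getFeedCount(); i++) {
    System.out.println("feed[" + i + "] = " + options.getFeed(i));  // "a:0", then "b:0"
  }
  java.util.List<String> feeds = options.getFeedList();  // ["a:0", "b:0"]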

public abstract String getFetch(int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract com.google.protobuf.ByteString getFetchBytes(int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract int getFetchCount()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract Map<String, String> getFetchDevices()

Use getFetchDevicesMap() instead.

public abstract int getFetchDevicesCount()

map<string, string> fetch_devices = 7;

public abstract Map<String, String> getFetchDevicesMap()

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrDefault(String key, String defaultValue)

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrThrow(String key)

map<string, string> fetch_devices = 7;

public abstract List<String> getFetchList()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;
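
Similarly, a sketch of the fetch accessors (same assumed options message): position i in the fetch list is the position of the corresponding tensor in the RunCallable() result.

  // fetch[i] names the tensor returned in slot i of the RunCallable() output.
  // "options" is the CallableOptions message from the earlier sketch (assumed).
  for (int i = 0; i < options.getFetchCount(); i++) {
    String fetchName = options.getFetch(i);  // "x:0", then "y:0"
    System.out.println("result slot " + i + " holds " + fetchName);
  }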

public abstract boolean getFetchSkipSync()

 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced. This simplifies interacting with the tensors, but
 potentially incurs a performance hit.
 If this option is set to true, the caller is responsible for ensuring
 that the values in the fetched tensors have been produced before they are
 used. The caller can do this by invoking `Device::Sync()` on the underlying
 device(s), or by feeding the tensors back to the same Session using
 `feed_devices` with the same corresponding device name.
 
bool fetch_skip_sync = 8;
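
A hedged sketch of opting out of the implicit post-run synchronization; once this flag is set, the caller must guarantee by other means (for example, feeding the tensor back with a matching feed_devices entry, as described above) that the fetched values have actually been produced before reading them.

  // Skip the GPU-stream sync that RunCallable() would otherwise perform.
  // Classes assumed imported from the generated proto package, as in the earlier sketch.
  CallableOptions noSync =
      CallableOptions.newBuilder()
          .addFetch("y:0")
          .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .setFetchSkipSync(true)
          .build();

  boolean skipsSync = noSync.getFetchSkipSync();  // true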

public abstract RunOptions getRunOptions()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;

public abstract RunOptionsOrBuilder getRunOptionsOrBuilder()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;
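
As a sketch, per-run options are attached through the nested RunOptions message (same assumed generated-proto package); FULL_TRACE is one of its standard TraceLevel constants.

  // Classes assumed imported from the generated proto package, as in the earlier sketch.
  CallableOptions traced =
      CallableOptions.newBuilder()
          .setRunOptions(RunOptions.newBuilder()
              .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)
              .build())
          .build();

  // hasRunOptions() distinguishes an explicitly set message from the default
  // instance that getRunOptions() returns when the field is unset.
  if (traced.hasRunOptions()) {
    RunOptions perRun = traced.getRunOptions();
  }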

public abstract String getTarget(int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract com.google.protobuf.ByteString getTargetBytes(int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract int getTargetCount()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract List<String> getTargetList()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;
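
A brief sketch contrasting targets with fetches, using a hypothetical node name: a target node is executed for its side effects only, and no tensor for it appears in the RunCallable() result.

  // "update_counter" is a hypothetical node name: it is run but returns nothing.
  // Classes assumed imported from the generated proto package, as in the earlier sketch.
  CallableOptions withTarget =
      CallableOptions.newBuilder()
          .addFetch("y:0")              // value is returned
          .addTarget("update_counter")  // node is executed only
          .build();

  int targetCount = withTarget.getTargetCount();                // 1
  java.util.List<String> targets = withTarget.getTargetList();  // ["update_counter"]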

public abstract TensorConnection getTensorConnection(int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract int getTensorConnectionCount()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<TensorConnection> getTensorConnectionList()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder(int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;
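
A sketch of declaring a tensor connection (TensorConnection, with its from_tensor and to_tensor string fields, is generated from the same proto file; the tensor names here are hypothetical): inside the callable, consumers of the to_tensor endpoint read from the from_tensor endpoint instead.

  // Classes assumed imported from the generated proto package, as in the earlier sketch.
  // Redirect consumers of "placeholder:0" to read the value of "produced:0".
  CallableOptions connected =
      CallableOptions.newBuilder()
          .addTensorConnection(TensorConnection.newBuilder()
              .setFromTensor("produced:0")
              .setToTensor("placeholder:0")
              .build())
          .build();

  for (TensorConnectionOrBuilder tc : connected.getTensorConnectionOrBuilderList()) {
    System.out.println(tc.getFromTensor() + " -> " + tc.getToTensor());
  }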

public abstract boolean hasRunOptions()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;