# tf.contrib.tpu.RunConfig

## Class RunConfig

Inherits From: RunConfig

RunConfig with TPU support.

## Properties

### global_id_in_cluster

The global id in the training cluster.

All global ids in the training cluster are assigned from an increasing sequence of consecutive integers. The first id is 0.

    cluster = {'chief': ['host0:2222'],
               'ps': ['host1:2222', 'host2:2222'],
               'worker': ['host3:2222', 'host4:2222', 'host5:2222']}


Nodes with task type worker can have task_id 0, 1, or 2. Nodes with task type ps can have task_id 0 or 1. So task_id alone is not unique, but the pair (task_type, task_id) uniquely determines a node in the cluster.

The global id, i.e., this field, tracks the index of the node among ALL nodes in the cluster, so it is uniquely assigned. For example, for the cluster spec given above, the global ids are assigned as:

    task_type  | task_id  |  global_id
    ----------------------------------
    chief      | 0        |  0
    worker     | 0        |  1
    worker     | 1        |  2
    worker     | 2        |  3
    ps         | 0        |  4
    ps         | 1        |  5

Returns an integer id.
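The assignment rule above can be sketched in plain Python. This is an illustrative helper, not part of the tf.contrib API: the task-type ordering (chief, then worker, then ps) is assumed from the example table, and `global_ids` is a hypothetical name.

```python
# Sketch: compute each node's global id from a cluster spec, matching
# the assignment table above. Ids increase consecutively from 0,
# walking task types in a fixed order (assumed: chief, worker, ps).
cluster = {'chief': ['host0:2222'],
           'ps': ['host1:2222', 'host2:2222'],
           'worker': ['host3:2222', 'host4:2222', 'host5:2222']}

def global_ids(cluster, type_order=('chief', 'worker', 'ps')):
    ids = {}
    next_id = 0
    for task_type in type_order:
        for task_id, _ in enumerate(cluster.get(task_type, [])):
            # (task_type, task_id) uniquely identifies a node.
            ids[(task_type, task_id)] = next_id
            next_id += 1
    return ids

ids = global_ids(cluster)
# ids[('chief', 0)] == 0, ids[('worker', 2)] == 3, ids[('ps', 1)] == 5
```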

### service

Returns the service dict defined by the platform (in TF_CONFIG).

### train_distribute

Returns the optional tf.contrib.distribute.DistributionStrategy object.

## Methods

### __init__

    __init__(
        tpu_config=None,
        evaluation_master=None,
        master=None,
        cluster=None,
        **kwargs
    )


Constructs a RunConfig.

#### Args:

• tpu_config: the TPUConfig that specifies TPU-specific configuration.
• evaluation_master: a string. The address of the master to use for eval. Defaults to master if not set.
• master: a string. The address of the master to use for training.
• cluster: a ClusterResolver.
• **kwargs: keyword config parameters passed through to the base RunConfig.

#### Raises:

• ValueError: if cluster is not None and the provided session_config has a cluster_def already.
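A typical construction might look like the following. This is a hedged sketch against the TF 1.x contrib API; the master address, model_dir path, and the TPUConfig argument values (iterations_per_loop, num_shards) are placeholder choices, not values taken from this page.

```python
import tensorflow as tf

# Sketch only (TF 1.x contrib API): build a TPU-aware RunConfig.
# 'grpc://10.0.0.1:8470' stands in for a real TPU master address.
run_config = tf.contrib.tpu.RunConfig(
    master='grpc://10.0.0.1:8470',
    evaluation_master='grpc://10.0.0.1:8470',  # defaults to master if unset
    tpu_config=tf.contrib.tpu.TPUConfig(
        iterations_per_loop=100,  # steps run per TPU training loop
        num_shards=8,             # number of TPU cores to shard over
    ),
    model_dir='/tmp/tpu_model',   # **kwargs forwarded to the base RunConfig
)
```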

### replace

    replace(**kwargs)
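Assuming replace follows the usual RunConfig semantics of returning a modified copy rather than mutating in place, usage might look like this sketch (the TPUConfig values are placeholders):

```python
import tensorflow as tf

# Sketch only (TF 1.x contrib API): start from an existing config...
run_config = tf.contrib.tpu.RunConfig(
    tpu_config=tf.contrib.tpu.TPUConfig(num_shards=8))

# ...and derive a new config with a different TPU sharding, leaving
# the original run_config unchanged.
new_config = run_config.replace(
    tpu_config=tf.contrib.tpu.TPUConfig(num_shards=32))
```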