Feature columns

Feature columns provide a mechanism to map data to a model.

tf.contrib.layers.bucketized_column(source_column, boundaries)

Creates a _BucketizedColumn for discretizing dense input.

Args:
  • source_column: A _RealValuedColumn defining the dense column.
  • boundaries: A list of floats specifying the boundaries. It has to be sorted.
Returns:

A _BucketizedColumn.

Raises:
  • ValueError: if 'boundaries' is empty or not sorted.
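
A minimal usage sketch (the column name "age" and the boundary values are illustrative):

  age = tf.contrib.layers.real_valued_column("age")
  # Maps age into 5 buckets:
  # (-inf, 18), [18, 25), [25, 50), [50, 65), [65, +inf).
  age_buckets = tf.contrib.layers.bucketized_column(
      source_column=age, boundaries=[18.0, 25.0, 50.0, 65.0])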

tf.contrib.layers.check_feature_columns(feature_columns)

Checks the validity of the set of FeatureColumns.

Args:
  • feature_columns: A set of instances or subclasses of FeatureColumn.
Raises:
  • ValueError: If there are duplicate feature column keys.

tf.contrib.layers.create_feature_spec_for_parsing(feature_columns)

Helper that prepares a feature config from the input feature_columns.

The returned feature config can be used as the 'features' argument to tf.parse_example.

Typical usage example:

# Define features and transformations
feature_a = sparse_column_with_vocabulary_file(...)
feature_b = real_valued_column(...)
feature_c_bucketized = bucketized_column(real_valued_column("feature_c"), ...)
feature_a_x_feature_c = crossed_column(
  columns=[feature_a, feature_c_bucketized], ...)

feature_columns = set(
  [feature_b, feature_c_bucketized, feature_a_x_feature_c])
batch_examples = tf.parse_example(
    serialized=serialized_examples,
    features=create_feature_spec_for_parsing(feature_columns))

For the above example, create_feature_spec_for_parsing would return the dict:

  {
    "feature_a": parsing_ops.VarLenFeature(tf.string),
    "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
    "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
  }

Args:
  • feature_columns: An iterable containing all the feature columns. All items should be instances of classes derived from _FeatureColumn, unless feature_columns is a dict -- in which case, this should be true of all values in the dict.
Returns:

A dict mapping feature keys to FixedLenFeature or VarLenFeature values.


tf.contrib.layers.crossed_column(columns, hash_bucket_size, combiner='sum', ckpt_to_load_from=None, tensor_name_in_ckpt=None, hash_key=None)

Creates a _CrossedColumn for performing feature crosses.

Args:
  • columns: An iterable of _FeatureColumn. Items can be an instance of _SparseColumn, _CrossedColumn, or _BucketizedColumn.
  • hash_bucket_size: An int that is > 1. The number of buckets.
  • combiner: A string specifying how to reduce if there are multiple entries in a single row. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column:
    • "sum": do not normalize
    • "mean": do l1 normalization
    • "sqrtn": do l2 normalization
    For more information, see tf.embedding_lookup_sparse.
  • ckpt_to_load_from: (Optional). String representing checkpoint name/pattern to restore the column weights. Required if tensor_name_in_ckpt is not None.
  • tensor_name_in_ckpt: (Optional). Name of the Tensor in the provided checkpoint from which to restore the column weights. Required if ckpt_to_load_from is not None.
  • hash_key: Specify the hash_key that will be used by the FingerprintCat64 function to combine the fingerprints of the cross in SparseFeatureCrossOp (optional).
Returns:

A _CrossedColumn.

Raises:
  • TypeError: if any item in columns is not an instance of _SparseColumn, _CrossedColumn, or _BucketizedColumn, or hash_bucket_size is not an int.
  • ValueError: if hash_bucket_size is not > 1 or len(columns) is not > 1.
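
A minimal usage sketch (column names and bucket sizes are illustrative):

  country = tf.contrib.layers.sparse_column_with_hash_bucket(
      column_name="country", hash_bucket_size=100)
  age_buckets = tf.contrib.layers.bucketized_column(
      source_column=tf.contrib.layers.real_valued_column("age"),
      boundaries=[18.0, 25.0, 50.0, 65.0])
  # Each (country, age_bucket) pair is hashed into one of 10000 buckets.
  country_x_age = tf.contrib.layers.crossed_column(
      columns=[country, age_buckets], hash_bucket_size=10000)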

tf.contrib.layers.embedding_column(sparse_id_column, dimension, combiner='mean', initializer=None, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None)

Creates an _EmbeddingColumn for feeding sparse data into a DNN.

Args:
  • sparse_id_column: A _SparseColumn which is created by, for example, the sparse_column_with_* or crossed_column functions. Note that combiner defined in sparse_id_column is ignored.
  • dimension: An integer specifying dimension of the embedding.
  • combiner: A string specifying how to reduce if there are multiple entries in a single row. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the default. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column:
    • "sum": do not normalize
    • "mean": do l1 normalization
    • "sqrtn": do l2 normalization
    For more information, see tf.embedding_lookup_sparse.
  • initializer: A variable initializer function to be used in embedding variable initialization. If not specified, defaults to tf.truncated_normal_initializer with mean 0.0 and standard deviation 1/sqrt(sparse_id_column.length).
  • ckpt_to_load_from: (Optional). String representing checkpoint name/pattern to restore the column weights. Required if tensor_name_in_ckpt is not None.
  • tensor_name_in_ckpt: (Optional). Name of the Tensor in the provided checkpoint from which to restore the column weights. Required if ckpt_to_load_from is not None.
  • max_norm: (Optional). If not None, embedding values are l2-normalized to the value of max_norm.
Returns:

An _EmbeddingColumn.
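
A minimal usage sketch, assuming a hashed sparse column named "query" (the name and sizes are illustrative):

  query = tf.contrib.layers.sparse_column_with_hash_bucket(
      column_name="query", hash_bucket_size=10000)
  # Each query id is looked up in a trainable 10000 x 16 embedding matrix;
  # multiple ids in one example are averaged (combiner="mean").
  query_emb = tf.contrib.layers.embedding_column(
      sparse_id_column=query, dimension=16, combiner="mean")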


tf.contrib.layers.scattered_embedding_column(column_name, size, dimension, hash_key, combiner='mean', initializer=None)

Creates an embedding column of a sparse feature using parameter hashing.

The i-th embedding component of a value v is found by retrieving an embedding weight whose index is a fingerprint of the pair (v,i).

An embedding column with sparse_column_with_hash_bucket such as

  embedding_column(
      sparse_column_with_hash_bucket(column_name, bucket_size),
      dimension)

could be replaced by

  scattered_embedding_column(
      column_name,
      size=bucket_size * dimension,
      dimension=dimension,
      hash_key=tf.contrib.layers.SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY)

for the same number of embedding parameters, with a hopefully reduced impact of collisions at the cost of slower training.

Args:
  • column_name: A string defining sparse column name.
  • size: An integer specifying the number of parameters in the embedding layer.
  • dimension: An integer specifying dimension of the embedding.
  • hash_key: Specify the hash_key that will be used by the FingerprintCat64 function to combine the fingerprints of the cross in SparseFeatureCrossOp.
  • combiner: A string specifying how to reduce if there are multiple entries in a single row. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the default. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column:
    • "sum": do not normalize features in the column
    • "mean": do l1 normalization on features in the column
    • "sqrtn": do l2 normalization on features in the column
    For more information, see tf.embedding_lookup_sparse.
  • initializer: A variable initializer function to be used in embedding variable initialization. If not specified, defaults to tf.truncated_normal_initializer with mean 0 and standard deviation 0.1.
Returns:

A _ScatteredEmbeddingColumn.

Raises:
  • ValueError: if dimension or size is not a positive integer; or if combiner is not supported.

tf.contrib.layers.input_from_feature_columns(columns_to_tensors, feature_columns, weight_collections=None, trainable=True, scope=None)

A tf.contrib.layers-style input layer builder based on FeatureColumns.

Generally a single example in training data is described with feature columns. At the first layer of the model, this column-oriented data should be converted to a single tensor. Each feature column needs a different kind of operation during this conversion. For example, sparse features need totally different handling than continuous features.

Example:

  # Building model for training
  columns_to_tensor = tf.parse_example(...)
  first_layer = input_from_feature_columns(
      columns_to_tensors=columns_to_tensor,
      feature_columns=feature_columns)
  second_layer = fully_connected(inputs=first_layer, ...)
  ...

where feature_columns can be defined as follows:

  sparse_feature = sparse_column_with_hash_bucket(
      column_name="sparse_col", ...)
  sparse_feature_emb = embedding_column(sparse_id_column=sparse_feature, ...)
  real_valued_feature = real_valued_column(...)
  real_valued_buckets = bucketized_column(
      source_column=real_valued_feature, ...)

  feature_columns=[sparse_feature_emb, real_valued_buckets]
Args:
  • columns_to_tensors: A mapping from feature column to tensors. A string key means a base (untransformed) feature. The mapping can also have a FeatureColumn as a key, which means the FeatureColumn was already transformed by the input pipeline (for example, inflow may have handled transformations).
  • feature_columns: A set containing all the feature columns. All items in the set should be instances of classes derived from FeatureColumn.
  • weight_collections: List of graph collections to which weights are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • scope: Optional scope for variable_scope.
Returns:

A Tensor which can be consumed by hidden layers in the neural network.

Raises:
  • ValueError: if FeatureColumn cannot be consumed by a neural network.

tf.contrib.layers.joint_weighted_sum_from_feature_columns(columns_to_tensors, feature_columns, num_outputs, weight_collections=None, trainable=True, scope=None)

A restricted linear prediction builder based on FeatureColumns.

As long as all feature columns are unweighted sparse columns, this computes the prediction of a linear model that stores all weights in a single variable.

Args:
  • columns_to_tensors: A mapping from feature column to tensors. A string key means a base (untransformed) feature. The mapping can also have a FeatureColumn as a key, which means the FeatureColumn was already transformed by the input pipeline (for example, inflow may have handled transformations).
  • feature_columns: A set containing all the feature columns. All items in the set should be instances of classes derived from FeatureColumn.
  • num_outputs: An integer specifying number of outputs. Default value is 1.
  • weight_collections: List of graph collections to which weights are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • scope: Optional scope for variable_scope.
Returns:

A tuple containing:

  • A Tensor which represents predictions of a linear model.
  • A list of Variables storing the weights.
  • A Variable which is used for bias.
Raises:
  • ValueError: if FeatureColumn cannot be used for linear predictions.
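
A usage sketch, assuming columns_to_tensor comes from tf.parse_example and sparse_col_a, sparse_col_b are unweighted sparse columns:

  logits, weight_vars, bias = (
      tf.contrib.layers.joint_weighted_sum_from_feature_columns(
          columns_to_tensors=columns_to_tensor,
          feature_columns=[sparse_col_a, sparse_col_b],
          num_outputs=1))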

tf.contrib.layers.make_place_holder_tensors_for_base_features(feature_columns)

Returns placeholder tensors for inference.

Args:
  • feature_columns: An iterable containing all the feature columns. All items should be instances of classes derived from _FeatureColumn.
Returns:

A dict mapping feature keys to SparseTensors (sparse columns) or placeholder Tensors (dense columns).
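
A usage sketch for building serving-time inputs (the columns are illustrative):

  feature_columns = [
      tf.contrib.layers.real_valued_column("age"),
      tf.contrib.layers.sparse_column_with_hash_bucket("query", 1000),
  ]
  # "age" maps to a dense placeholder Tensor; "query" maps to a
  # SparseTensor placeholder.
  placeholders = (
      tf.contrib.layers.make_place_holder_tensors_for_base_features(
          feature_columns))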


tf.contrib.layers.multi_class_target(*args, **kwargs)

Creates a _TargetColumn for multi-class, single-label classification. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-12. Instructions for updating: This file will be removed after the deprecation date. Please switch to third_party/tensorflow/contrib/learn/python/learn/estimators/head.py.

The target column uses softmax cross entropy loss.

Args:
  • n_classes: Integer, number of classes, must be >= 2
  • label_name: String, name of the key in the label dict. Can be None if the label is a tensor (single-headed models).
  • weight_column_name: A string defining feature column name representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example.
Returns:

An instance of _MultiClassTargetColumn.

Raises:
  • ValueError: if n_classes is < 2

tf.contrib.layers.one_hot_column(sparse_id_column)

Creates an _OneHotColumn for a one-hot or multi-hot representation in a DNN.

Args:
  • sparse_id_column: A _SparseColumn which is created by sparse_column_with_* or crossed_column functions. Note that combiner defined in sparse_id_column is ignored.
Returns:

An _OneHotColumn.
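
A minimal usage sketch, using a keys-based sparse column (the vocabulary is illustrative):

  country = tf.contrib.layers.sparse_column_with_keys(
      column_name="country", keys=["US", "CA", "MX"])
  # Yields a dense multi-hot tensor of shape [batch_size, 3].
  country_one_hot = tf.contrib.layers.one_hot_column(country)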


tf.contrib.layers.parse_feature_columns_from_examples(serialized, feature_columns, name=None, example_names=None)

Parses tf.Examples to extract tensors for given feature_columns.

This is a wrapper around tf.parse_example.

Example:

columns_to_tensor = parse_feature_columns_from_examples(
    serialized=my_data,
    feature_columns=my_features)

# Where my_features are:
# Define features and transformations
sparse_feature_a = sparse_column_with_keys(
    column_name="sparse_feature_a", keys=["AB", "CD", ...])

embedding_feature_a = embedding_column(
    sparse_id_column=sparse_feature_a, dimension=3, combiner="sum")

sparse_feature_b = sparse_column_with_hash_bucket(
    column_name="sparse_feature_b", hash_bucket_size=1000)

embedding_feature_b = embedding_column(
    sparse_id_column=sparse_feature_b, dimension=16, combiner="sum")

crossed_feature_a_x_b = crossed_column(
    columns=[sparse_feature_a, sparse_feature_b], hash_bucket_size=10000)

real_feature = real_valued_column("real_feature")
real_feature_buckets = bucketized_column(
    source_column=real_feature, boundaries=[...])

my_features = [embedding_feature_b, real_feature_buckets, embedding_feature_a]
Args:
  • serialized: A vector (1-D Tensor) of strings, a batch of binary serialized Example protos.
  • feature_columns: An iterable containing all the feature columns. All items should be instances of classes derived from _FeatureColumn.
  • name: A name for this operation (optional).
  • example_names: A vector (1-D Tensor) of strings (optional), the names of the serialized protos in the batch.
Returns:

A dict mapping FeatureColumn to Tensor and SparseTensor values.


tf.contrib.layers.parse_feature_columns_from_sequence_examples(serialized, context_feature_columns, sequence_feature_columns, name=None, example_name=None)

Parses tf.SequenceExamples to extract tensors for given FeatureColumns.

Args:
  • serialized: A scalar (0-D Tensor) of type string, a single serialized SequenceExample proto.
  • context_feature_columns: An iterable containing the feature columns for context features. All items should be instances of classes derived from _FeatureColumn. Can be None.
  • sequence_feature_columns: An iterable containing the feature columns for sequence features. All items should be instances of classes derived from _FeatureColumn. Can be None.
  • name: A name for this operation (optional).
  • example_name: A scalar (0-D Tensor) of type string (optional), the name of the serialized proto.
Returns:

A tuple consisting of:

  • context_features: a dict mapping FeatureColumns from context_feature_columns to their parsed Tensors/SparseTensors.
  • sequence_features: a dict mapping FeatureColumns from sequence_feature_columns to their parsed Tensors/SparseTensors.
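
A usage sketch (the columns are illustrative; note that serialized holds a single proto, not a batch):

  context_features, sequence_features = (
      tf.contrib.layers.parse_feature_columns_from_sequence_examples(
          serialized=serialized_example,
          context_feature_columns=[
              tf.contrib.layers.real_valued_column("length")],
          sequence_feature_columns=[
              tf.contrib.layers.real_valued_column("token_score")]))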

tf.contrib.layers.real_valued_column(column_name, dimension=1, default_value=None, dtype=tf.float32, normalizer=None)

Creates a _RealValuedColumn for dense numeric data.

Args:
  • column_name: A string defining real valued column name.
  • dimension: An integer specifying the dimension of the real valued column. The default is 1. When dimension is not None, the Tensor representing the _RealValuedColumn will have shape [batch_size, dimension]. A None dimension means the feature column should be treated as variable-length and will be parsed as a SparseTensor.
  • default_value: A single value compatible with dtype, or a list of such values, which the column takes on during tf.Example parsing if data is missing. When dimension is not None, a default value of None will cause tf.parse_example to fail if an example does not contain this column. If a single value is provided, the same value is applied as the default for every dimension. If a list of values is provided, its length must equal dimension. Only a scalar default value is supported when dimension is None.
  • dtype: defines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type.
  • normalizer: If not None, a function that can be used to normalize the value of the real valued column after default_value is applied for parsing. Normalizer function takes the input tensor as its argument, and returns the output tensor. (e.g. lambda x: (x - 3.0) / 4.2). Note that for variable length columns, the normalizer should expect an input_tensor of type SparseTensor.
Returns:

A _RealValuedColumn.

Raises:
  • TypeError: if dimension is not an int
  • ValueError: if dimension is not a positive integer
  • TypeError: if default_value is a list but its length is not equal to the value of dimension.
  • TypeError: if default_value is not compatible with dtype.
  • ValueError: if dtype is not convertible to tf.float32.
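
For example, a scalar column with a default for missing values and the normalizer mentioned above (the name and constants are illustrative):

  price = tf.contrib.layers.real_valued_column(
      "price", dimension=1, default_value=0.0,
      normalizer=lambda x: (x - 3.0) / 4.2)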

tf.contrib.layers.shared_embedding_columns(sparse_id_columns, dimension, combiner='mean', shared_embedding_name=None, initializer=None, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None)

Creates a list of _EmbeddingColumn sharing the same embedding.

Args:
  • sparse_id_columns: An iterable of _SparseColumn, such as those created by sparse_column_with_* or crossed_column functions. Note that combiner defined in each sparse_id_column is ignored.
  • dimension: An integer specifying dimension of the embedding.
  • combiner: A string specifying how to reduce if there are multiple entries in a single row. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the default. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column:
    • "sum": do not normalize
    • "mean": do l1 normalization
    • "sqrtn": do l2 normalization
    For more information, see tf.embedding_lookup_sparse.
  • shared_embedding_name: (Optional). A string specifying the name of shared embedding weights. This will be needed if you want to reference the shared embedding separately from the generated _EmbeddingColumn.
  • initializer: A variable initializer function to be used in embedding variable initialization. If not specified, defaults to tf.truncated_normal_initializer with mean 0.0 and standard deviation 1/sqrt(sparse_id_columns[0].length).
  • ckpt_to_load_from: (Optional). String representing checkpoint name/pattern to restore the column weights. Required if tensor_name_in_ckpt is not None.
  • tensor_name_in_ckpt: (Optional). Name of the Tensor in the provided checkpoint from which to restore the column weights. Required if ckpt_to_load_from is not None.
  • max_norm: (Optional). If not None, embedding values are l2-normalized to the value of max_norm.
Returns:

A tuple of _EmbeddingColumn with shared embedding space.

Raises:
  • ValueError: if sparse_id_columns is empty, or its elements are not compatible with each other.
  • TypeError: if sparse_id_columns is not a sequence or is a string, or if at least one element of sparse_id_columns is not a _SparseColumn.
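
A minimal usage sketch, assuming two hashed sparse columns with the same bucket size (names and sizes are illustrative):

  query = tf.contrib.layers.sparse_column_with_hash_bucket(
      "query_tokens", hash_bucket_size=10000)
  title = tf.contrib.layers.sparse_column_with_hash_bucket(
      "title_tokens", hash_bucket_size=10000)
  # Both columns share one 10000 x 16 embedding matrix.
  query_emb, title_emb = tf.contrib.layers.shared_embedding_columns(
      sparse_id_columns=[query, title], dimension=16, combiner="mean")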

tf.contrib.layers.sparse_column_with_hash_bucket(column_name, hash_bucket_size, combiner='sum', dtype=tf.string)

Creates a _SparseColumn with hashed bucket configuration.

Use this when your sparse features are in string or integer format, but you don't have a vocab file that maps each value to an integer ID. The output id is computed as:

  output_id = Hash(input_feature_string) % bucket_size

Args:
  • column_name: A string defining sparse column name.
  • hash_bucket_size: An int that is > 1. The number of buckets.
  • combiner: A string specifying how to reduce if the sparse column is multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns.
    • "sum": do not normalize features in the column
    • "mean": do l1 normalization on features in the column
    • "sqrtn": do l2 normalization on features in the column For more information: tf.embedding_lookup_sparse.
  • dtype: The type of features. Only string and integer types are supported.
Returns:

A _SparseColumn with hashed bucket configuration.

Raises:
  • ValueError: if hash_bucket_size is not greater than 1.
  • ValueError: if dtype is neither string nor integer.

tf.contrib.layers.sparse_column_with_integerized_feature(column_name, bucket_size, combiner='sum', dtype=tf.int64)

Creates an integerized _SparseColumn.

Use this when your features are already pre-integerized into int64 IDs, that is, when the feature values themselves are the desired output ids. Integerized means we can use the feature value itself as the id.

Typically this is used for reading contiguous ranges of integer indexes, but it doesn't have to be. The output value is simply copied from the input feature, whatever it is. Just be aware, however, that large gaps of unused integers can be wasteful (for instance, if you build a one-hot tensor from these ids, the unused integers will appear as entries in the tensor that are always zero).

Args:
  • column_name: A string defining sparse column name.
  • bucket_size: An int that is > 1. The number of buckets. It should be bigger than the maximum feature value. In other words, features in this column should be int64 values in the range [0, bucket_size).
  • combiner: A string specifying how to reduce if the sparse column is multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns.
    • "sum": do not normalize features in the column
    • "mean": do l1 normalization on features in the column
    • "sqrtn": do l2 normalization on features in the column For more information: tf.embedding_lookup_sparse.
  • dtype: Type of features. It should be an integer type. Default value is dtypes.int64.
Returns:

An integerized _SparseColumn definition.

Raises:
  • ValueError: bucket_size is not greater than 1.
  • ValueError: dtype is not integer.
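
For example, assuming a feature that already arrives as an int64 id in [0, 7):

  # Day-of-week ids 0..6 are used directly as the column's output ids.
  weekday = tf.contrib.layers.sparse_column_with_integerized_feature(
      column_name="weekday", bucket_size=7)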

tf.contrib.layers.sparse_column_with_keys(column_name, keys, default_value=-1, combiner='sum')

Creates a _SparseColumn with keys.

Lookup logic is as follows:

  lookup_id = index_of_feature_in_keys if feature in keys else default_value

Args:
  • column_name: A string defining sparse column name.
  • keys: A string list defining the vocabulary.
  • default_value: The value to use for out-of-vocabulary feature values. Default is -1.
  • combiner: A string specifying how to reduce if the sparse column is multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns.
    • "sum": do not normalize features in the column
    • "mean": do l1 normalization on features in the column
    • "sqrtn": do l2 normalization on features in the column For more information: tf.embedding_lookup_sparse.
Returns:

A _SparseColumnKeys with keys configuration.
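
For example (the vocabulary is illustrative):

  # "red" -> 0, "green" -> 1, "blue" -> 2; anything else -> -1.
  color = tf.contrib.layers.sparse_column_with_keys(
      column_name="color", keys=["red", "green", "blue"])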


tf.contrib.layers.weighted_sparse_column(sparse_id_column, weight_column_name, dtype=tf.float32)

Creates a _SparseColumn by combining sparse_id_column with a weight column.

Example:

  sparse_feature = sparse_column_with_hash_bucket(
      column_name="sparse_col", hash_bucket_size=1000)
  weighted_feature = weighted_sparse_column(
      sparse_id_column=sparse_feature,
      weight_column_name="weights_col")

This configuration assumes that the input dictionary of the model contains the following two items:
  • (key="sparse_col", value=sparse_tensor), where sparse_tensor is a SparseTensor.
  • (key="weights_col", value=weights_tensor), where weights_tensor is a SparseTensor.

The following are assumed to be true:
  • sparse_tensor.indices = weights_tensor.indices
  • sparse_tensor.dense_shape = weights_tensor.dense_shape

Args:
  • sparse_id_column: A _SparseColumn which is created by sparse_column_with_* functions.
  • weight_column_name: A string defining a sparse column name which represents weight or value of the corresponding sparse id feature.
  • dtype: Type of weights, such as tf.float32
Returns:

A _WeightedSparseColumn composed of two sparse features: one represents the id, the other represents the weight (value) of the id feature in that example.

Raises:
  • ValueError: if dtype is not convertible to float.

tf.contrib.layers.weighted_sum_from_feature_columns(columns_to_tensors, feature_columns, num_outputs, weight_collections=None, trainable=True, scope=None)

A tf.contrib.layers-style linear prediction builder based on FeatureColumns.

Generally a single example in training data is described with feature columns. This function generates a weighted sum for each of the num_outputs. The weighted sum refers to logits in classification problems; it refers to the prediction itself in linear regression problems.

Example:

  # Building model for training
  feature_columns = (
      real_valued_column("my_feature1"),
      ...
  )
  columns_to_tensor = tf.parse_example(...)
  logits = weighted_sum_from_feature_columns(
      columns_to_tensors=columns_to_tensor,
      feature_columns=feature_columns,
      num_outputs=1)
  loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)

Args:
  • columns_to_tensors: A mapping from feature column to tensors. A string key means a base (untransformed) feature. The mapping can also have a FeatureColumn as a key, which means the FeatureColumn was already transformed by the input pipeline (for example, inflow may have handled transformations).
  • feature_columns: A set containing all the feature columns. All items in the set should be instances of classes derived from FeatureColumn.
  • num_outputs: An integer specifying number of outputs. Default value is 1.
  • weight_collections: List of graph collections to which weights are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • scope: Optional scope for variable_scope.
Returns:

A tuple containing:

  • A Tensor which represents predictions of a linear model.
  • A dictionary which maps feature_column to corresponding Variable.
  • A Variable which is used for bias.
Raises:
  • ValueError: if FeatureColumn cannot be used for linear predictions.

tf.contrib.layers.infer_real_valued_columns(features)


tf.contrib.layers.sequence_input_from_feature_columns(*args, **kwargs)

Builds inputs for sequence models from FeatureColumns. (experimental)

THIS FUNCTION IS EXPERIMENTAL. It may change or be removed at any time, and without warning.

See documentation for input_from_feature_columns. The following types of FeatureColumn are permitted in feature_columns: _OneHotColumn, _EmbeddingColumn, _ScatteredEmbeddingColumn, _RealValuedColumn, _DataFrameColumn. In addition, columns in feature_columns may not be constructed using any of the following: ScatteredEmbeddingColumn, BucketizedColumn, CrossedColumn.

Args:
  • columns_to_tensors: A mapping from feature column to tensors. A string key means a base (untransformed) feature. The mapping can also have a FeatureColumn as a key, which means the FeatureColumn was already transformed by the input pipeline (for example, inflow may have handled transformations).
  • feature_columns: A set containing all the feature columns. All items in the set should be instances of classes derived from FeatureColumn.
  • weight_collections: List of graph collections to which weights are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • scope: Optional scope for variable_scope.
Returns:

A Tensor which can be consumed by hidden layers in the neural network.

Raises:
  • ValueError: if FeatureColumn cannot be consumed by a neural network.