tf.estimator.DNNLinearCombinedEstimator

An estimator for TensorFlow Linear and DNN joined models with custom head.

Inherits From: Estimator

Example:

numeric_feature = numeric_column(...)
categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)

categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_feature_b, ...)

estimator = tf.estimator.DNNLinearCombinedEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    # wide settings
    linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
    linear_optimizer=tf.keras.optimizers.Ftrl(...),
    # deep settings
    dnn_feature_columns=[
        categorical_feature_a_emb, categorical_feature_b_emb,
        numeric_feature],
    dnn_hidden_units=[1000, 500, 100],
    dnn_optimizer=tf.keras.optimizers.Adagrad(...))

# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
    learning_rate=tf.compat.v1.train.exponential_decay(
        learning_rate=0.1,
        global_step=tf.compat.v1.train.get_global_step(),
        decay_steps=10000,
        decay_rate=0.96))
# It is the same for linear_optimizer.
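# Note: tf.keras.optimizers.Ftrl itself accepts L1/L2 regularization
# strengths, so the linear part can be regularized without reaching for
# the compat.v1 optimizer, e.g. by setting linear_optimizer to:
tf.keras.optimizers.Ftrl(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)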

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
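# predict() returns a generator yielding one dict per example; the
# available keys depend on the head (for MultiLabelHead this includes
# 'probabilities').
for pred_dict in predictions:
  probabilities = pred_dict['probabilities']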

Input of train and evaluate should have the following features, otherwise there will be a KeyError (a sketch of a conforming input_fn follows the list):

  • for each column in dnn_feature_columns + linear_feature_columns:
    • if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
    • if column is a WeightedCategoricalColumn, two features: the first with key the id column name and the second with key the weight column name. Both features' values must be SparseTensors.
    • if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
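
As a minimal sketch, an input_fn satisfying these requirements for a hypothetical numeric column named 'price' and a hypothetical categorical column named 'color' could look like the following (shapes and values are illustrative only):

import tensorflow as tf

def input_fn_train():
  features = {
      # DenseColumn feature ('price'): a dense Tensor keyed by column name.
      'price': tf.constant([[1.0], [2.0]]),
      # CategoricalColumn feature ('color'): a SparseTensor keyed by
      # column name.
      'color': tf.sparse.SparseTensor(
          indices=[[0, 0], [1, 0]],
          values=['red', 'blue'],
          dense_shape=[2, 1]),
  }
  # Multi-hot labels matching the MultiLabelHead(n_classes=3) example above.
  labels = tf.constant([[1, 0, 0], [0, 1, 1]])
  return tf.data.Dataset.from_tensors((features, labels))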

Loss and predicted output are determined by the specified head.

Args
head A Head instance constructed with a method such as tf.estimator.MultiLabelHead.
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
linear_feature_columns An iterable containing all the feature columns used by the linear part of the model. All items in the set must be instances of classes derived from FeatureColumn.
linear_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the linear part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD') or a callable. Defaults to the FTRL optimizer.
dnn_feature_columns An iterable containing all the feature columns used by the deep part of the model. All items in the set must be instances of classes derived from FeatureColumn.
dnn_optimizer An instance of tf.keras.optimizers.* used to apply gradients to the deep part of the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD') or a callable. Defaults to the Adagrad optimizer.
dnn_hidden_units List of hidden units per layer. All layers are fully connected.
dnn_activation_fn Activation function applied to each layer. If None, will use tf.nn.relu.
dnn_dropout When not None, the probability we will drop out a given coordinate.
config RunConfig object to configure the runtime settings.
batch_norm Whether to use batch normalization after each hidden layer.
linear_sparse_combiner A string specifying how to reduce the linear model if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model.
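
A minimal sketch combining several of the arguments above, reusing the hypothetical columns from the example at the top of this page:

estimator = tf.estimator.DNNLinearCombinedEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
    linear_sparse_combiner='sqrtn',
    dnn_feature_columns=[categorical_feature_a_emb, numeric_feature],
    dnn_hidden_units=[64, 32],
    dnn_dropout=0.5,
    batch_norm=True,
    model_dir='/tmp/model_dir')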

Raises
ValueError If both linear_feature_columns and dnn_feature_columns are empty at the same time.

Attributes

config

export_savedmodel

model_dir

model_fn Returns the model_fn which is bound to self.params.
params

Methods

eval_dir

Shows the directory name where evaluation metrics are dumped.

Args
name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.

Returns
A string which is the path of the directory containing the evaluation metrics.
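
For example, metrics for a named evaluation run are written to a dedicated subdirectory of model_dir (the exact layout is an implementation detail):

eval_path = estimator.eval_dir(name='validation')
# e.g. '<model_dir>/eval_validation'; with the default name=None the
# directory is '<model_dir>/eval'.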

evaluate
