

An estimator that can establish a simple baseline.

Inherits From: Estimator

The estimator uses a user-specified head.

This estimator ignores feature values and will learn to predict the average value of each label. E.g. for single-label classification problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label classification problems, it will predict the ratio of examples that contain each class.


# Build baseline multi-label classifier.
estimator = tf.estimator.BaselineEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3))

# Input builders
def input_fn_train():
  # Returns a (x, y) tuple where y represents label's class
  # index.
  pass

def input_fn_eval():
  # Returns a (x, y) tuple where y represents label's class
  # index.
  pass

# Fit model.
estimator.train(input_fn=input_fn_train)

# Evaluates cross entropy between the test and train labels.
loss = estimator.evaluate(input_fn=input_fn_eval)["loss"]

# For each class, predicts the ratio of training examples that contain the
# class.
predictions = estimator.predict(new_samples)
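The ratio prediction described above can be illustrated in plain Python. This is a conceptual sketch of what the baseline learns for multi-label problems, not the TensorFlow implementation; the function name is hypothetical.

```python
def baseline_class_ratios(label_vectors):
    # Each label vector is multi-hot, e.g. [1, 0, 1] means the example
    # contains classes 0 and 2. The baseline's prediction for each class is
    # the ratio of training examples that contain that class, regardless of
    # the feature values.
    n = len(label_vectors)
    n_classes = len(label_vectors[0])
    return [sum(v[c] for v in label_vectors) / n for c in range(n_classes)]

# 3 examples: class 0 appears in 2 of 3, class 1 in 2 of 3, class 2 in 1 of 3.
ratios = baseline_class_ratios([[1, 0, 1], [1, 1, 0], [0, 1, 0]])
# ratios == [2/3, 2/3, 1/3]
```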

Input of train and evaluate should have the following features, otherwise there will be a KeyError:

  • if weight_column is specified in the head constructor (and not None) for the head passed to BaselineEstimator's constructor, a feature with key=weight_column whose value is a Tensor.
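When a weight_column is in play, each example's label contributes to the learned average in proportion to its weight. A pure-Python sketch of that weighted mean for a single class, illustrative only (the function name is an assumption, not part of the Estimator API):

```python
def weighted_baseline(labels, weights):
    # labels: per-example indicators (0.0 or 1.0) for one class.
    # weights: per-example values read from the weight_column feature.
    # Minimizing the weighted loss drives the baseline toward the
    # weighted mean of the labels.
    total = sum(weights)
    return sum(l * w for l, w in zip(labels, weights)) / total

# Two positives with weight 1 each, one negative with weight 2:
# weighted mean = (1 + 1 + 0) / (1 + 1 + 2) = 0.5
p = weighted_baseline([1.0, 1.0, 0.0], [1.0, 1.0, 2.0])
```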

head A Head instance constructed with a method such as tf.estimator.MultiLabelHead.
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
optimizer String, tf.keras.optimizers.* object, or callable that creates the optimizer to use for training. If not specified, will use Ftrl as the default optimizer.
config RunConfig object to configure the runtime settings.




model_fn Returns the model_fn which is bound to self.params.




Shows the directory name where evaluation metrics are dumped.

name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.

A string which is the path of the directory containing evaluation metrics.
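The folder layout this describes appears to be "eval" under model_dir for the default evaluation and "eval_<name>" for a named one. A hypothetical helper mirroring that convention; the helper itself is not part of the Estimator API.

```python
import os

def eval_metrics_dir(model_dir, name=None):
    # Assumed layout: metrics for the default evaluation land in
    # "<model_dir>/eval"; a named evaluation gets its own "eval_<name>"
    # folder so the runs appear separately in TensorBoard.
    suffix = "eval" if not name else "eval_" + name
    return os.path.join(model_dir, suffix)
```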



Evaluates the model given evaluation data input_fn.

For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following:

  • A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below.
  • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
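The tuple form can be sketched in plain Python. Real code would return tf.Tensor values (or a dict of them); here plain lists stand in purely to show the shape the model_fn expects, and the feature name "x" is an arbitrary example.

```python
def input_fn_eval():
    # features: dict of string feature name -> batch of values
    # (in real code, each value would be a tf.Tensor).
    features = {"x": [[1.0], [2.0], [3.0]]}
    # labels: batch of class indices, one per example.
    labels = [0, 1, 1]
    return features, labels
```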