Automated hyper-parameter tuning


Welcome to the Automated hyper-parameter tuning tutorial. In this colab, you will learn how to improve your models using automated hyper-parameter tuning with TensorFlow Decision Forests.

More precisely, we will:

  1. Train a model without hyper-parameter tuning. This model will be used to measure the quality improvement of hyper-parameter tuning.
  2. Train a model with hyper-parameter tuning using TF-DF's tuner. The hyper-parameters to optimize will be defined manually.
  3. Train another model with hyper-parameter tuning using TF-DF's tuner. But this time, the hyper-parameters to optimize will be set automatically. This is the recommended first approach to try when using hyper-parameter tuning.
  4. Finally, we will train a model with hyper-parameter tuning using Keras's tuner.

Introduction

A learning algorithm trains a machine learning model on a training dataset. The parameters of a learning algorithm, called "hyper-parameters", control how the model is trained and impact its quality. Therefore, finding the best hyper-parameters is an important stage of modeling.

Some hyper-parameters are simple to configure. For example, increasing the number of trees (num_trees) in a random forest increases the quality of the model until a plateau. Therefore, setting the largest value compatible with the serving constraints (more trees means a larger model) is a valid rule of thumb. However, other hyper-parameters have a more complex interaction with the model and cannot be chosen with such a simple rule. For example, increasing the maximum tree depth (max_depth) of a gradient boosted tree model can either increase or decrease the quality of the model. Furthermore, hyper-parameters can interact with each other, so the optimal value of a hyper-parameter cannot be found in isolation.
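In TF-DF, hyper-parameters are passed as constructor arguments of the model. As a minimal sketch, the two hyper-parameters discussed above can be set manually (the values here are illustrative, not recommendations):

import tensorflow_decision_forests as tfdf

# Set two hyper-parameters manually; all other hyper-parameters keep
# their default values.
model = tfdf.keras.GradientBoostedTreesModel(num_trees=500, max_depth=4)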

There are four main approaches to select the hyper-parameter values:

  1. The default approach: Learning algorithms come with default values. While not ideal in all cases, those values produce reasonable results in most situations. This approach is recommended as the first approach to use in any modeling. This page lists the default values of TF Decision Forests.

  2. The template hyper-parameter approach: In addition to the default values, TF Decision Forests also exposes hyper-parameter templates. Those are benchmark-tuned hyper-parameter values with excellent performance but a higher training cost (e.g. hyperparameter_template="benchmark_rank1").

  3. The manual tuning approach: You can manually test different hyper-parameter values and select the one that performs best. This guide gives some advice.

  4. The automated tuning approach: A tuning algorithm can be used to automatically find the best hyper-parameter values. This approach often gives the best results and does not require expertise. Its main downside is the time it takes for large datasets. A minimal sketch of approaches 2 and 4 follows this list.
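A minimal sketch of approaches 2 and 4 (the number of trials is illustrative):

# Approach 2: train with a benchmark-tuned hyper-parameter template.
template_model = tfdf.keras.GradientBoostedTreesModel(
    hyperparameter_template="benchmark_rank1")

# Approach 4: automated tuning; use_predefined_hps=True lets TF-DF define
# the search space automatically (detailed later in this colab).
tuner = tfdf.tuner.RandomSearch(num_trials=20, use_predefined_hps=True)
tuned_model = tfdf.keras.GradientBoostedTreesModel(tuner=tuner)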

In this colab, we show the default and automated tuning approaches with the TensorFlow Decision Forests library.

Hyper-parameter tuning algorithms

Automated tuning algorithms work by generating and evaluating a large number of hyper-parameter values. Each of those iterations is called a "trial". The evaluation of a trial is expensive as it requires training a new model each time. At the end of the tuning, the hyper-parameter values with the best evaluation are used.
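A minimal sketch of this trial loop, assuming a hypothetical train_and_evaluate(hparams) callback that trains a model and returns a score to maximize:

import random

def random_search(search_space, num_trials, train_and_evaluate):
    """Returns the hyper-parameter values with the best (highest) score."""
    best_score, best_hparams = float("-inf"), None
    for _ in range(num_trials):
        # One "trial": sample candidate values and train/evaluate a new model.
        hparams = {name: random.choice(values)
                   for name, values in search_space.items()}
        score = train_and_evaluate(hparams)
        if score > best_score:
            best_score, best_hparams = score, hparams
    return best_hparams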

Tuning algorithms are configured as follows:

The search space

The search space is the list of hyper-parameters to optimize and the values they can take. For example, the maximum depth of a tree could be optimized for values between 1 and 32. Exploring more hyper-parameters and more possible values often leads to better models but also takes more time. The available hyper-parameters are listed in the documentation.

When the possible value of one hyper-parameter depends on the value of another hyper-parameter, the search space is said to be conditional.
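A minimal sketch of a conditional search space with the TF-DF tuner, mirroring the full example later in this colab (the candidate values are illustrative):

tuner = tfdf.tuner.RandomSearch(num_trials=20)
local_space = tuner.choice("growing_strategy", ["LOCAL"])
# "max_depth" is conditional: it is only sampled in trials where
# "growing_strategy" is "LOCAL".
local_space.choice("max_depth", [4, 6, 8])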

The number of trials

The number of trials defines how many models will be trained and evaluated. A larger number of trials generally leads to better models but takes more time.

The optimizer

The optimizer selects the next hyper-parameter values to evaluate based on the past trial evaluations. The simplest, and often reasonable, optimizer is the one that selects the hyper-parameter values at random.

The objective / trial score

The objective is the metric optimized by the tuner. Often, this metric is a measure of quality (e.g. accuracy, log loss) of the model evaluated on a validation dataset.

Train-valid-test

The validation dataset should be different from the training dataset: If the training and validation datasets are the same, the selected hyper-parameters will be irrelevant. The validation dataset should also be different from the testing dataset (also called the holdout dataset): Because hyper-parameter tuning is a form of training, if the testing and validation datasets are the same, you are effectively training on the test dataset. In this case, you might overfit on your test dataset without a way to measure it.
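A minimal sketch of such a three-way split for a pandas DataFrame (the function name and ratios are illustrative):

import numpy as np
import pandas as pd

def split_dataset(df: pd.DataFrame, valid_ratio=0.1, test_ratio=0.1, seed=1):
    """Randomly partitions a DataFrame into train, validation, and test sets."""
    shuffled = df.iloc[np.random.RandomState(seed).permutation(len(df))]
    num_valid = int(len(df) * valid_ratio)
    num_test = int(len(df) * test_ratio)
    valid_df = shuffled.iloc[:num_valid]
    test_df = shuffled.iloc[num_valid:num_valid + num_test]
    train_df = shuffled.iloc[num_valid + num_test:]
    return train_df, valid_df, test_df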

Cross-validation

In the case of a small dataset, for example a dataset with fewer than 100k examples, hyper-parameter tuning can be coupled with cross-validation: Instead of being evaluated from a single training-test round, the objective/trial score is evaluated as the average of the metric over multiple cross-validation rounds.

As with the train-valid-and-test datasets, the cross-validation used to evaluate the objective/score during hyper-parameter tuning should be different from the cross-validation used to evaluate the quality of the model.
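A minimal sketch of a cross-validated trial score, assuming scikit-learn is available and a hypothetical train_and_evaluate(train_df, valid_df) callback that trains a model and returns its metric on the validation fold:

from sklearn.model_selection import KFold

def cross_validated_score(df, train_and_evaluate, num_folds=5):
    """Averages the metric over the cross-validation folds."""
    kfold = KFold(n_splits=num_folds, shuffle=True, random_state=1)
    scores = []
    for train_idx, valid_idx in kfold.split(df):
        scores.append(train_and_evaluate(df.iloc[train_idx], df.iloc[valid_idx]))
    return sum(scores) / len(scores)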

Out-of-bag evaluation

Some models, like Random Forests, can be evaluated on the training dataset using the "out-of-bag evaluation" method. While not as accurate as cross-validation, the "out-of-bag evaluation" is much faster than cross-validation and does not require a separate validation dataset.

In TensorFlow Decision Forests

In TF-DF, the model "self" evaluation is always a fair way to evaluate a model. For example, an out-of-bag evaluation is used for Random Forest models while a validation dataset is used for Gradient Boosted Trees models.
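A minimal sketch of accessing this self evaluation, assuming a TensorFlow dataset train_ds like the one created later in this colab:

# For a Random Forest, the self evaluation is the out-of-bag evaluation.
rf_model = tfdf.keras.RandomForestModel()
rf_model.fit(train_ds)
self_evaluation = rf_model.make_inspector().evaluation()
print("Out-of-bag accuracy:", self_evaluation.accuracy)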

Hyper-parameter tuning with TF Decision Forests

TF-DF supports automatic hyper-parameter tuning with minimal configuration. In the next example, we will train and compare two models: One trained with default hyper-parameters, and one trained with hyper-parameter tuning.

Setup

# Install TensorFlow Decision Forests
pip install tensorflow_decision_forests -U -qq

Install Wurlitzer. Wurlitzer is required to show the detailed training logs in colabs (with verbose=2).

pip install wurlitzer -U -qq

Import the necessary libraries.

import tensorflow_decision_forests as tfdf
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
import numpy as np

The hidden code cell limits the output height in colab.

Define "set_cell_height".

Training a model without automated hyper-parameter tuning

We will train a model on the Adult dataset available on the UCI Machine Learning Repository. Let's download the dataset.

# Download a copy of the adult dataset.
wget -q https://raw.githubusercontent.com/google/yggdrasil-decision-forests/main/yggdrasil_decision_forests/test_data/dataset/adult_train.csv -O /tmp/adult_train.csv
wget -q https://raw.githubusercontent.com/google/yggdrasil-decision-forests/main/yggdrasil_decision_forests/test_data/dataset/adult_test.csv -O /tmp/adult_test.csv

Load the training and testing datasets, and convert them into TensorFlow datasets.

# Load the dataset in memory
train_df = pd.read_csv("/tmp/adult_train.csv")
test_df = pd.read_csv("/tmp/adult_test.csv")

# Convert the datasets into TensorFlow datasets.
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="income")
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_df, label="income")

First, we train and evaluate the quality of a Gradient Boosted Trees model trained with the default hyper-parameters.

%%time
# Train a model with default hyper-parameters
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(train_ds)
Warning: The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
WARNING:absl:The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
Use /tmpfs/tmp/tmplfxr97hp as temporary training directory
Reading training dataset...
[WARNING 24-04-20 11:39:03.3452 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:39:03.3452 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:39:03.3452 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
Training dataset read in 0:00:03.930624. Found 22792 examples.
Training model...
Model trained in 0:00:03.045496
Compiling model...
[INFO 24-04-20 11:39:10.3189 UTC kernel.cc:1233] Loading model from path /tmpfs/tmp/tmplfxr97hp/model/ with prefix e44c5f7e5cae4178
[INFO 24-04-20 11:39:10.3400 UTC quick_scorer_extended.cc:911] The binary was compiled without AVX2 support, but your CPU supports it. Enable it for faster model inference.
[INFO 24-04-20 11:39:10.3411 UTC abstract_model.cc:1344] Engine "GradientBoostedTreesQuickScorerExtended" built
[INFO 24-04-20 11:39:10.3411 UTC kernel.cc:1061] Use fast generic engine
Model compiled.
CPU times: user 15 s, sys: 1.41 s, total: 16.4 s
Wall time: 11 s
<tf_keras.src.callbacks.History at 0x7f39bc6c6250>
# Evaluate the model
model.compile(["accuracy"])
test_accuracy = model.evaluate(test_ds, return_dict=True, verbose=0)["accuracy"]
print(f"Test accuracy without hyper-parameter tuning: {test_accuracy:.4f}")
Test accuracy without hyper-parameter tuning: 0.8744

The default hyper-parameters of the model are available in the learner_params attribute. The definition of those parameters is available in the documentation.

print("Default hyper-parameters of the model:\n", model.learner_params)
Default hyper-parameters of the model:
 {'adapt_subsample_for_maximum_training_duration': False, 'allow_na_conditions': False, 'apply_link_function': True, 'categorical_algorithm': 'CART', 'categorical_set_split_greedy_sampling': 0.1, 'categorical_set_split_max_num_items': -1, 'categorical_set_split_min_item_frequency': 1, 'compute_permutation_variable_importance': False, 'dart_dropout': 0.01, 'early_stopping': 'LOSS_INCREASE', 'early_stopping_initial_iteration': 10, 'early_stopping_num_trees_look_ahead': 30, 'focal_loss_alpha': 0.5, 'focal_loss_gamma': 2.0, 'forest_extraction': 'MART', 'goss_alpha': 0.2, 'goss_beta': 0.1, 'growing_strategy': 'LOCAL', 'honest': False, 'honest_fixed_separation': False, 'honest_ratio_leaf_examples': 0.5, 'in_split_min_examples_check': True, 'keep_non_leaf_label_distribution': True, 'l1_regularization': 0.0, 'l2_categorical_regularization': 1.0, 'l2_regularization': 0.0, 'lambda_loss': 1.0, 'loss': 'DEFAULT', 'max_depth': 6, 'max_num_nodes': None, 'maximum_model_size_in_memory_in_bytes': -1.0, 'maximum_training_duration_seconds': -1.0, 'min_examples': 5, 'missing_value_policy': 'GLOBAL_IMPUTATION', 'num_candidate_attributes': -1, 'num_candidate_attributes_ratio': -1.0, 'num_trees': 300, 'pure_serving_model': False, 'random_seed': 123456, 'sampling_method': 'RANDOM', 'selective_gradient_boosting_ratio': 0.01, 'shrinkage': 0.1, 'sorting_strategy': 'PRESORT', 'sparse_oblique_max_num_projections': None, 'sparse_oblique_normalization': None, 'sparse_oblique_num_projections_exponent': None, 'sparse_oblique_projection_density_factor': None, 'sparse_oblique_weights': None, 'split_axis': 'AXIS_ALIGNED', 'subsample': 1.0, 'uplift_min_examples_in_treatment': 5, 'uplift_split_score': 'KULLBACK_LEIBLER', 'use_hessian_gain': False, 'validation_interval_in_trees': 1, 'validation_ratio': 0.1}

Training a model with automated hyper-parameter tuning and manual definition of the hyper-parameters

Hyper-parameter tuning is enabled by specifying the tuner constructor argument of the model. The tuner object contains all the configuration of the tuner (search space, optimizer, number of trials, and objective).

# Configure the tuner.

# Create a Random Search tuner with 50 trials.
tuner = tfdf.tuner.RandomSearch(num_trials=50)

# Define the search space.
#
# Adding more parameters generally improves the quality of the model, but
# makes the tuning last longer.

tuner.choice("min_examples", [2, 5, 7, 10])
tuner.choice("categorical_algorithm", ["CART", "RANDOM"])

# Some hyper-parameters are only valid for specific values of other
# hyper-parameters. For example, the "max_depth" parameter is mostly useful when
# "growing_strategy=LOCAL" while "max_num_nodes" is better suited when
# "growing_strategy=BEST_FIRST_GLOBAL".

local_search_space = tuner.choice("growing_strategy", ["LOCAL"])
local_search_space.choice("max_depth", [3, 4, 5, 6, 8])

# merge=True indicates that the parameter (here "growing_strategy") is already
# defined, and that new values are added to it.
global_search_space = tuner.choice("growing_strategy", ["BEST_FIRST_GLOBAL"], merge=True)
global_search_space.choice("max_num_nodes", [16, 32, 64, 128, 256])

tuner.choice("use_hessian_gain", [True, False])
tuner.choice("shrinkage", [0.02, 0.05, 0.10, 0.15])
tuner.choice("num_candidate_attributes_ratio", [0.2, 0.5, 0.9, 1.0])

# Uncomment some (or all) of the following hyper-parameters to increase the
# quality of the search. The number of trials should be increased accordingly.

# tuner.choice("split_axis", ["AXIS_ALIGNED"])
# oblique_space = tuner.choice("split_axis", ["SPARSE_OBLIQUE"], merge=True)
# oblique_space.choice("sparse_oblique_normalization",
#                      ["NONE", "STANDARD_DEVIATION", "MIN_MAX"])
# oblique_space.choice("sparse_oblique_weights", ["BINARY", "CONTINUOUS"])
# oblique_space.choice("sparse_oblique_num_projections_exponent", [1.0, 1.5])
<tensorflow_decision_forests.component.tuner.tuner.SearchSpace at 0x7f399ddc9d60>
%%time
%set_cell_height 300

# Tune the model. Notice the `tuner=tuner`.
tuned_model = tfdf.keras.GradientBoostedTreesModel(tuner=tuner)
tuned_model.fit(train_ds, verbose=2)

# The `num_threads` model constructor argument (not specified in the example
# above) controls how many trials are run in parallel (one per thread). If
# `num_threads` is not specified (like in the example above), one thread is
# allocated for each available CPU core.
#
# If the training is interrupted (for example, by pressing the "stop" button
# on the top-left of the colab cell), the best model so far will be returned.

# In the training logs, you can see lines such as `[10/50] Score: -0.45 / -0.40
# HParams: ...`. This indicates that 10 of the 50 trials have been completed,
# that the last trial returned a score of "-0.45", and that the best trial so
# far has a score of "-0.40". In this example, the model is optimized by
# logloss. Since scores are maximized and log loss should be minimized, the
# score is effectively minus the log loss.
<IPython.core.display.Javascript object>
Warning: The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
WARNING:absl:The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
Use /tmpfs/tmp/tmpi7_rh8z3 as temporary training directory
Reading training dataset...
Training tensor examples:
Features: {'age': <tf.Tensor 'data:0' shape=(None,) dtype=int64>, 'workclass': <tf.Tensor 'data_1:0' shape=(None,) dtype=string>, 'fnlwgt': <tf.Tensor 'data_2:0' shape=(None,) dtype=int64>, 'education': <tf.Tensor 'data_3:0' shape=(None,) dtype=string>, 'education_num': <tf.Tensor 'data_4:0' shape=(None,) dtype=int64>, 'marital_status': <tf.Tensor 'data_5:0' shape=(None,) dtype=string>, 'occupation': <tf.Tensor 'data_6:0' shape=(None,) dtype=string>, 'relationship': <tf.Tensor 'data_7:0' shape=(None,) dtype=string>, 'race': <tf.Tensor 'data_8:0' shape=(None,) dtype=string>, 'sex': <tf.Tensor 'data_9:0' shape=(None,) dtype=string>, 'capital_gain': <tf.Tensor 'data_10:0' shape=(None,) dtype=int64>, 'capital_loss': <tf.Tensor 'data_11:0' shape=(None,) dtype=int64>, 'hours_per_week': <tf.Tensor 'data_12:0' shape=(None,) dtype=int64>, 'native_country': <tf.Tensor 'data_13:0' shape=(None,) dtype=string>}
Label: Tensor("data_14:0", shape=(None,), dtype=int64)
Weights: None
Normalized tensor features:
 {'age': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast:0' shape=(None,) dtype=float32>), 'workclass': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_1:0' shape=(None,) dtype=string>), 'fnlwgt': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_1:0' shape=(None,) dtype=float32>), 'education': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_3:0' shape=(None,) dtype=string>), 'education_num': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_2:0' shape=(None,) dtype=float32>), 'marital_status': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_5:0' shape=(None,) dtype=string>), 'occupation': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_6:0' shape=(None,) dtype=string>), 'relationship': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_7:0' shape=(None,) dtype=string>), 'race': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_8:0' shape=(None,) dtype=string>), 'sex': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_9:0' shape=(None,) dtype=string>), 'capital_gain': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_3:0' shape=(None,) dtype=float32>), 'capital_loss': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_4:0' shape=(None,) dtype=float32>), 'hours_per_week': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_5:0' shape=(None,) dtype=float32>), 'native_country': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_13:0' shape=(None,) dtype=string>)}
[WARNING 24-04-20 11:39:18.1748 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:39:18.1748 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:39:18.1748 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
Training dataset read in 0:00:00.403593. Found 22792 examples.
Training model...
Standard output detected as not visible to the user e.g. running in a notebook. Creating a training log redirection. If training gets stuck, try calling tfdf.keras.set_training_logs_redirection(False).
[INFO 24-04-20 11:39:18.5916 UTC kernel.cc:771] Start Yggdrasil model training
[INFO 24-04-20 11:39:18.5917 UTC kernel.cc:772] Collect training examples
[INFO 24-04-20 11:39:18.5917 UTC kernel.cc:785] Dataspec guide:
column_guides {
  column_name_pattern: "^__LABEL$"
  type: CATEGORICAL
  categorial {
    min_vocab_frequency: 0
    max_vocab_count: -1
  }
}
default_column_guide {
  categorial {
    max_vocab_count: 2000
  }
  discretized_numerical {
    maximum_num_bins: 255
  }
}
ignore_columns_without_guides: false
detect_numerical_as_discretized_numerical: false

[INFO 24-04-20 11:39:18.5918 UTC kernel.cc:391] Number of batches: 23
[INFO 24-04-20 11:39:18.5918 UTC kernel.cc:392] Number of examples: 22792
[INFO 24-04-20 11:39:18.5996 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column native_country (40 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 11:39:18.5996 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column occupation (13 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 11:39:18.5996 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column workclass (7 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 11:39:18.6063 UTC kernel.cc:792] Training dataset:
Number of records: 22792
Number of columns: 15

Number of columns by type:
    CATEGORICAL: 9 (60%)
    NUMERICAL: 6 (40%)

Columns:

CATEGORICAL: 9 (60%)
    0: "__LABEL" CATEGORICAL integerized vocab-size:3 no-ood-item
    4: "education" CATEGORICAL has-dict vocab-size:17 zero-ood-items most-frequent:"HS-grad" 7340 (32.2043%)
    8: "marital_status" CATEGORICAL has-dict vocab-size:8 zero-ood-items most-frequent:"Married-civ-spouse" 10431 (45.7661%)
    9: "native_country" CATEGORICAL num-nas:407 (1.78571%) has-dict vocab-size:41 num-oods:1 (0.00446728%) most-frequent:"United-States" 20436 (91.2933%)
    10: "occupation" CATEGORICAL num-nas:1260 (5.52826%) has-dict vocab-size:14 num-oods:4 (0.018577%) most-frequent:"Prof-specialty" 2870 (13.329%)
    11: "race" CATEGORICAL has-dict vocab-size:6 zero-ood-items most-frequent:"White" 19467 (85.4115%)
    12: "relationship" CATEGORICAL has-dict vocab-size:7 zero-ood-items most-frequent:"Husband" 9191 (40.3256%)
    13: "sex" CATEGORICAL has-dict vocab-size:3 zero-ood-items most-frequent:"Male" 15165 (66.5365%)
    14: "workclass" CATEGORICAL num-nas:1257 (5.51509%) has-dict vocab-size:8 num-oods:3 (0.0139308%) most-frequent:"Private" 15879 (73.7358%)

NUMERICAL: 6 (40%)
    1: "age" NUMERICAL mean:38.6153 min:17 max:90 sd:13.661
    2: "capital_gain" NUMERICAL mean:1081.9 min:0 max:99999 sd:7509.48
    3: "capital_loss" NUMERICAL mean:87.2806 min:0 max:4356 sd:403.01
    5: "education_num" NUMERICAL mean:10.0927 min:1 max:16 sd:2.56427
    6: "fnlwgt" NUMERICAL mean:189879 min:12285 max:1.4847e+06 sd:106423
    7: "hours_per_week" NUMERICAL mean:40.3955 min:1 max:99 sd:12.249

Terminology:
    nas: Number of non-available (i.e. missing) values.
    ood: Out of dictionary.
    manually-defined: Attribute whose type is manually defined by the user, i.e., the type was not automatically inferred.
    tokenized: The attribute value is obtained through tokenization.
    has-dict: The attribute is attached to a string dictionary e.g. a categorical attribute stored as a string.
    vocab-size: Number of unique values.

[INFO 24-04-20 11:39:18.6063 UTC kernel.cc:808] Configure learner
[WARNING 24-04-20 11:39:18.6066 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:39:18.6066 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:39:18.6066 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
[INFO 24-04-20 11:39:18.6067 UTC kernel.cc:822] Training config:
learner: "HYPERPARAMETER_OPTIMIZER"
features: "^age$"
features: "^capital_gain$"
features: "^capital_loss$"
features: "^education$"
features: "^education_num$"
features: "^fnlwgt$"
features: "^hours_per_week$"
features: "^marital_status$"
features: "^native_country$"
features: "^occupation$"
features: "^race$"
features: "^relationship$"
features: "^sex$"
features: "^workclass$"
label: "^__LABEL$"
task: CLASSIFICATION
metadata {
  framework: "TF Keras"
}
[yggdrasil_decision_forests.model.hyperparameters_optimizer_v2.proto.hyperparameters_optimizer_config] {
  base_learner {
    learner: "GRADIENT_BOOSTED_TREES"
    features: "^age$"
    features: "^capital_gain$"
    features: "^capital_loss$"
    features: "^education$"
    features: "^education_num$"
    features: "^fnlwgt$"
    features: "^hours_per_week$"
    features: "^marital_status$"
    features: "^native_country$"
    features: "^occupation$"
    features: "^race$"
    features: "^relationship$"
    features: "^sex$"
    features: "^workclass$"
    label: "^__LABEL$"
    task: CLASSIFICATION
    random_seed: 123456
    pure_serving_model: false
    [yggdrasil_decision_forests.model.gradient_boosted_trees.proto.gradient_boosted_trees_config] {
      num_trees: 300
      decision_tree {
        max_depth: 6
        min_examples: 5
        in_split_min_examples_check: true
        keep_non_leaf_label_distribution: true
        num_candidate_attributes: -1
        missing_value_policy: GLOBAL_IMPUTATION
        allow_na_conditions: false
        categorical_set_greedy_forward {
          sampling: 0.1
          max_num_items: -1
          min_item_frequency: 1
        }
        growing_strategy_local {
        }
        categorical {
          cart {
          }
        }
        axis_aligned_split {
        }
        internal {
          sorting_strategy: PRESORTED
        }
        uplift {
          min_examples_in_treatment: 5
          split_score: KULLBACK_LEIBLER
        }
      }
      shrinkage: 0.1
      loss: DEFAULT
      validation_set_ratio: 0.1
      validation_interval_in_trees: 1
      early_stopping: VALIDATION_LOSS_INCREASE
      early_stopping_num_trees_look_ahead: 30
      l2_regularization: 0
      lambda_loss: 1
      mart {
      }
      adapt_subsample_for_maximum_training_duration: false
      l1_regularization: 0
      use_hessian_gain: false
      l2_regularization_categorical: 1
      stochastic_gradient_boosting {
        ratio: 1
      }
      apply_link_function: true
      compute_permutation_variable_importance: false
      binary_focal_loss_options {
        misprediction_exponent: 2
        positive_sample_coefficient: 0.5
      }
      early_stopping_initial_iteration: 10
    }
  }
  optimizer {
    optimizer_key: "RANDOM"
    [yggdrasil_decision_forests.model.hyperparameters_optimizer_v2.proto.random] {
      num_trials: 50
    }
  }
  search_space {
    fields {
      name: "min_examples"
      discrete_candidates {
        possible_values {
          integer: 2
        }
        possible_values {
          integer: 5
        }
        possible_values {
          integer: 7
        }
        possible_values {
          integer: 10
        }
      }
    }
    fields {
      name: "categorical_algorithm"
      discrete_candidates {
        possible_values {
          categorical: "CART"
        }
        possible_values {
          categorical: "RANDOM"
        }
      }
    }
    fields {
      name: "growing_strategy"
      discrete_candidates {
        possible_values {
          categorical: "LOCAL"
        }
        possible_values {
          categorical: "BEST_FIRST_GLOBAL"
        }
      }
      children {
        name: "max_depth"
        discrete_candidates {
          possible_values {
            integer: 3
          }
          possible_values {
            integer: 4
          }
          possible_values {
            integer: 5
          }
          possible_values {
            integer: 6
          }
          possible_values {
            integer: 8
          }
        }
        parent_discrete_values {
          possible_values {
            categorical: "LOCAL"
          }
        }
      }
      children {
        name: "max_num_nodes"
        discrete_candidates {
          possible_values {
            integer: 16
          }
          possible_values {
            integer: 32
          }
          possible_values {
            integer: 64
          }
          possible_values {
            integer: 128
          }
          possible_values {
            integer: 256
          }
        }
        parent_discrete_values {
          possible_values {
            categorical: "BEST_FIRST_GLOBAL"
          }
        }
      }
    }
    fields {
      name: "use_hessian_gain"
      discrete_candidates {
        possible_values {
          categorical: "true"
        }
        possible_values {
          categorical: "false"
        }
      }
    }
    fields {
      name: "shrinkage"
      discrete_candidates {
        possible_values {
          real: 0.02
        }
        possible_values {
          real: 0.05
        }
        possible_values {
          real: 0.1
        }
        possible_values {
          real: 0.15
        }
      }
    }
    fields {
      name: "num_candidate_attributes_ratio"
      discrete_candidates {
        possible_values {
          real: 0.2
        }
        possible_values {
          real: 0.5
        }
        possible_values {
          real: 0.9
        }
        possible_values {
          real: 1
        }
      }
    }
  }
  base_learner_deployment {
    num_threads: 1
  }
}

[INFO 24-04-20 11:39:18.6071 UTC kernel.cc:825] Deployment config:
cache_path: "/tmpfs/tmp/tmpi7_rh8z3/working_cache"
num_threads: 32
try_resume_training: true

[INFO 24-04-20 11:39:18.6073 UTC kernel.cc:887] Train model
[INFO 24-04-20 11:39:18.6075 UTC hyperparameters_optimizer.cc:214] Hyperparameter search space:
fields {
  name: "min_examples"
  discrete_candidates {
    possible_values {
      integer: 2
    }
    possible_values {
      integer: 5
    }
    possible_values {
      integer: 7
    }
    possible_values {
      integer: 10
    }
  }
}
fields {
  name: "categorical_algorithm"
  discrete_candidates {
    possible_values {
      categorical: "CART"
    }
    possible_values {
      categorical: "RANDOM"
    }
  }
}
fields {
  name: "growing_strategy"
  discrete_candidates {
    possible_values {
      categorical: "LOCAL"
    }
    possible_values {
      categorical: "BEST_FIRST_GLOBAL"
    }
  }
  children {
    name: "max_depth"
    discrete_candidates {
      possible_values {
        integer: 3
      }
      possible_values {
        integer: 4
      }
      possible_values {
        integer: 5
      }
      possible_values {
        integer: 6
      }
      possible_values {
        integer: 8
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "LOCAL"
      }
    }
  }
  children {
    name: "max_num_nodes"
    discrete_candidates {
      possible_values {
        integer: 16
      }
      possible_values {
        integer: 32
      }
      possible_values {
        integer: 64
      }
      possible_values {
        integer: 128
      }
      possible_values {
        integer: 256
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "BEST_FIRST_GLOBAL"
      }
    }
  }
}
fields {
  name: "use_hessian_gain"
  discrete_candidates {
    possible_values {
      categorical: "true"
    }
    possible_values {
      categorical: "false"
    }
  }
}
fields {
  name: "shrinkage"
  discrete_candidates {
    possible_values {
      real: 0.02
    }
    possible_values {
      real: 0.05
    }
    possible_values {
      real: 0.1
    }
    possible_values {
      real: 0.15
    }
  }
}
fields {
  name: "num_candidate_attributes_ratio"
  discrete_candidates {
    possible_values {
      real: 0.2
    }
    possible_values {
      real: 0.5
    }
    possible_values {
      real: 0.9
    }
    possible_values {
      real: 1
    }
  }
}

[INFO 24-04-20 11:39:18.6076 UTC hyperparameters_optimizer.cc:509] Start local tuner with 1 parallel trial(s), each with 32 thread(s)
[INFO 24-04-20 11:39:18.6081 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:39:18.6081 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:39:18.6145 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:39:18.6426 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.022126 train-accuracy:0.761895 valid-loss:1.077863 valid-accuracy:0.736609
[INFO 24-04-20 11:39:20.6354 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.581401
[INFO 24-04-20 11:39:20.6355 UTC gradient_boosted_trees.cc:270] Truncates the model to 145 tree(s) i.e. 145  iteration(s).
[INFO 24-04-20 11:39:20.6356 UTC gradient_boosted_trees.cc:333] Final model num-trees:145 valid-loss:0.581401 valid-accuracy:0.872510
[INFO 24-04-20 11:39:20.6376 UTC hyperparameters_optimizer.cc:593] [1/50] Score: -0.581401 / -0.581401 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:39:20.6377 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:39:20.6377 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:39:20.6426 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:39:20.7044 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080203 train-accuracy:0.761895 valid-loss:1.138223 valid-accuracy:0.736609
[INFO 24-04-20 11:39:35.9300 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.503793 train-accuracy:0.889933 valid-loss:0.581187 valid-accuracy:0.870297
[INFO 24-04-20 11:39:35.9301 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:39:35.9301 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.581187 valid-accuracy:0.870297
[INFO 24-04-20 11:39:35.9358 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:39:35.9358 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:39:35.9373 UTC hyperparameters_optimizer.cc:593] [2/50] Score: -0.581187 / -0.581187 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 128 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:39:35.9418 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:39:35.9803 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.015975 train-accuracy:0.761895 valid-loss:1.071430 valid-accuracy:0.736609
[INFO 24-04-20 11:39:37.3791 UTC gradient_boosted_trees.cc:1592]  num-trees:48 train-loss:0.545129 train-accuracy:0.880534 valid-loss:0.600357 valid-accuracy:0.864985
[INFO 24-04-20 11:39:42.4773 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.578782
[INFO 24-04-20 11:39:42.4773 UTC gradient_boosted_trees.cc:270] Truncates the model to 186 tree(s) i.e. 186  iteration(s).
[INFO 24-04-20 11:39:42.4776 UTC gradient_boosted_trees.cc:333] Final model num-trees:186 valid-loss:0.578782 valid-accuracy:0.873395
[INFO 24-04-20 11:39:42.4809 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:39:42.4810 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:39:42.4865 UTC hyperparameters_optimizer.cc:593] [3/50] Score: -0.578782 / -0.578782 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:39:42.4894 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:39:42.5198 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.054434 train-accuracy:0.761895 valid-loss:1.110703 valid-accuracy:0.736609
[INFO 24-04-20 11:39:48.0352 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.578653
[INFO 24-04-20 11:39:48.0353 UTC gradient_boosted_trees.cc:270] Truncates the model to 228 tree(s) i.e. 228  iteration(s).
[INFO 24-04-20 11:39:48.0356 UTC gradient_boosted_trees.cc:333] Final model num-trees:228 valid-loss:0.578653 valid-accuracy:0.870739
[INFO 24-04-20 11:39:48.0393 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:39:48.0394 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:39:48.0416 UTC hyperparameters_optimizer.cc:593] [4/50] Score: -0.578653 / -0.578653 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:39:48.0456 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:39:48.0890 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080017 train-accuracy:0.761895 valid-loss:1.137988 valid-accuracy:0.736609
[INFO 24-04-20 11:39:58.1911 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.510029 train-accuracy:0.890566 valid-loss:0.588613 valid-accuracy:0.866755
[INFO 24-04-20 11:39:58.1911 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 11:39:58.1912 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.588549 valid-accuracy:0.865870
[INFO 24-04-20 11:39:58.1954 UTC hyperparameters_optimizer.cc:593] [5/50] Score: -0.588549 / -0.578653 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:39:58.1955 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:39:58.1955 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:39:58.2033 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:39:58.2233 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080310 train-accuracy:0.761895 valid-loss:1.138544 valid-accuracy:0.736609
[INFO 24-04-20 11:40:02.2850 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.515617 train-accuracy:0.886914 valid-loss:0.591852 valid-accuracy:0.868083
[INFO 24-04-20 11:40:02.2850 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:40:02.2851 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.591852 valid-accuracy:0.868083
[INFO 24-04-20 11:40:02.2910 UTC hyperparameters_optimizer.cc:593] [6/50] Score: -0.591852 / -0.578653 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:40:02.2911 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:02.2911 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:02.3004 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:02.3214 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.985785 train-accuracy:0.761895 valid-loss:1.041083 valid-accuracy:0.736609
[INFO 24-04-20 11:40:04.2097 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.569126
[INFO 24-04-20 11:40:04.2097 UTC gradient_boosted_trees.cc:270] Truncates the model to 161 tree(s) i.e. 161  iteration(s).
[INFO 24-04-20 11:40:04.2099 UTC gradient_boosted_trees.cc:333] Final model num-trees:161 valid-loss:0.569126 valid-accuracy:0.873838
[INFO 24-04-20 11:40:04.2114 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:04.2114 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:04.2146 UTC hyperparameters_optimizer.cc:593] [7/50] Score: -0.569126 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:40:04.2187 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:04.2713 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.055966 train-accuracy:0.761895 valid-loss:1.113004 valid-accuracy:0.736609
[INFO 24-04-20 11:40:07.4215 UTC gradient_boosted_trees.cc:1592]  num-trees:76 train-loss:0.569166 train-accuracy:0.874690 valid-loss:0.608466 valid-accuracy:0.866755
[INFO 24-04-20 11:40:16.0366 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.469406 train-accuracy:0.896946 valid-loss:0.571488 valid-accuracy:0.872953
[INFO 24-04-20 11:40:16.0366 UTC gradient_boosted_trees.cc:270] Truncates the model to 283 tree(s) i.e. 283  iteration(s).
[INFO 24-04-20 11:40:16.0368 UTC gradient_boosted_trees.cc:333] Final model num-trees:283 valid-loss:0.571175 valid-accuracy:0.873838
[INFO 24-04-20 11:40:16.0399 UTC hyperparameters_optimizer.cc:593] [8/50] Score: -0.571175 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:40:16.0400 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:16.0400 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:16.0471 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:16.0993 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.978408 train-accuracy:0.761895 valid-loss:1.031947 valid-accuracy:0.736609
[INFO 24-04-20 11:40:20.2443 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.577748
[INFO 24-04-20 11:40:20.2444 UTC gradient_boosted_trees.cc:270] Truncates the model to 89 tree(s) i.e. 89  iteration(s).
[INFO 24-04-20 11:40:20.2446 UTC gradient_boosted_trees.cc:333] Final model num-trees:89 valid-loss:0.577748 valid-accuracy:0.871625
[INFO 24-04-20 11:40:20.2456 UTC hyperparameters_optimizer.cc:593] [9/50] Score: -0.577748 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:40:20.2459 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:20.2459 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:20.2511 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:20.2919 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080606 train-accuracy:0.761895 valid-loss:1.138615 valid-accuracy:0.736609
[INFO 24-04-20 11:40:29.4104 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.542830 train-accuracy:0.881654 valid-loss:0.593285 valid-accuracy:0.867198
[INFO 24-04-20 11:40:29.4104 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:40:29.4104 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.593285 valid-accuracy:0.867198
[INFO 24-04-20 11:40:29.4127 UTC hyperparameters_optimizer.cc:593] [10/50] Score: -0.593285 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:40:29.4129 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:29.4129 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:29.4195 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:29.4346 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.007318 train-accuracy:0.761895 valid-loss:1.063819 valid-accuracy:0.736609
[INFO 24-04-20 11:40:30.8109 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.540563 train-accuracy:0.877271 valid-loss:0.581734 valid-accuracy:0.869854
[INFO 24-04-20 11:40:30.8109 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:40:30.8110 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.581734 valid-accuracy:0.869854
[INFO 24-04-20 11:40:30.8116 UTC hyperparameters_optimizer.cc:593] [11/50] Score: -0.581734 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:40:30.8117 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:30.8117 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:30.8169 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:30.8680 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.015861 train-accuracy:0.761895 valid-loss:1.071101 valid-accuracy:0.736609
[INFO 24-04-20 11:40:36.7109 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.577719
[INFO 24-04-20 11:40:36.7109 UTC gradient_boosted_trees.cc:270] Truncates the model to 133 tree(s) i.e. 133  iteration(s).
[INFO 24-04-20 11:40:36.7111 UTC gradient_boosted_trees.cc:333] Final model num-trees:133 valid-loss:0.577719 valid-accuracy:0.872510
[INFO 24-04-20 11:40:36.7123 UTC hyperparameters_optimizer.cc:593] [12/50] Score: -0.577719 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:40:36.7127 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:36.7127 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:36.7184 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:36.7434 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.021242 train-accuracy:0.761895 valid-loss:1.076859 valid-accuracy:0.736609
[INFO 24-04-20 11:40:37.4287 UTC gradient_boosted_trees.cc:1592]  num-trees:55 train-loss:0.569971 train-accuracy:0.870209 valid-loss:0.607976 valid-accuracy:0.863656
[INFO 24-04-20 11:40:39.2662 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.573576
[INFO 24-04-20 11:40:39.2662 UTC gradient_boosted_trees.cc:270] Truncates the model to 210 tree(s) i.e. 210  iteration(s).
[INFO 24-04-20 11:40:39.2663 UTC gradient_boosted_trees.cc:333] Final model num-trees:210 valid-loss:0.573576 valid-accuracy:0.872953
[INFO 24-04-20 11:40:39.2677 UTC hyperparameters_optimizer.cc:593] [13/50] Score: -0.573576 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:40:39.2679 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:39.2679 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:39.2732 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:39.3143 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.052474 train-accuracy:0.761895 valid-loss:1.109417 valid-accuracy:0.736609
[INFO 24-04-20 11:40:44.6208 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.574613
[INFO 24-04-20 11:40:44.6208 UTC gradient_boosted_trees.cc:270] Truncates the model to 178 tree(s) i.e. 178  iteration(s).
[INFO 24-04-20 11:40:44.6212 UTC gradient_boosted_trees.cc:333] Final model num-trees:178 valid-loss:0.574613 valid-accuracy:0.872953
[INFO 24-04-20 11:40:44.6276 UTC hyperparameters_optimizer.cc:593] [14/50] Score: -0.574613 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:40:44.6277 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:44.6277 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:44.6359 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:44.6565 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.013950 train-accuracy:0.761895 valid-loss:1.069965 valid-accuracy:0.736609
[INFO 24-04-20 11:40:46.8622 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.588103
[INFO 24-04-20 11:40:46.8622 UTC gradient_boosted_trees.cc:270] Truncates the model to 136 tree(s) i.e. 136  iteration(s).
[INFO 24-04-20 11:40:46.8626 UTC gradient_boosted_trees.cc:333] Final model num-trees:136 valid-loss:0.588103 valid-accuracy:0.869854
[INFO 24-04-20 11:40:46.8655 UTC hyperparameters_optimizer.cc:593] [15/50] Score: -0.588103 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:40:46.8657 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:46.8657 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:46.8723 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:46.9163 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.012352 train-accuracy:0.761895 valid-loss:1.067086 valid-accuracy:0.736609
[INFO 24-04-20 11:40:52.6237 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.579442
[INFO 24-04-20 11:40:52.6238 UTC gradient_boosted_trees.cc:270] Truncates the model to 129 tree(s) i.e. 129  iteration(s).
[INFO 24-04-20 11:40:52.6242 UTC gradient_boosted_trees.cc:333] Final model num-trees:129 valid-loss:0.579442 valid-accuracy:0.870297
[INFO 24-04-20 11:40:52.6273 UTC hyperparameters_optimizer.cc:593] [16/50] Score: -0.579442 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 128 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:40:52.6277 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:40:52.6277 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:40:52.6347 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:40:52.6989 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.054509 train-accuracy:0.761895 valid-loss:1.111318 valid-accuracy:0.736609
[INFO 24-04-20 11:41:03.2795 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.56991
[INFO 24-04-20 11:41:03.2796 UTC gradient_boosted_trees.cc:270] Truncates the model to 186 tree(s) i.e. 186  iteration(s).
[INFO 24-04-20 11:41:03.2800 UTC gradient_boosted_trees.cc:333] Final model num-trees:186 valid-loss:0.569910 valid-accuracy:0.873838
[INFO 24-04-20 11:41:03.2843 UTC hyperparameters_optimizer.cc:593] [17/50] Score: -0.56991 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:41:03.2846 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:03.2846 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:03.2923 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:03.3323 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.055526 train-accuracy:0.761895 valid-loss:1.112339 valid-accuracy:0.736609
[INFO 24-04-20 11:41:07.4528 UTC gradient_boosted_trees.cc:1592]  num-trees:135 train-loss:0.534142 train-accuracy:0.883456 valid-loss:0.588371 valid-accuracy:0.870297
[INFO 24-04-20 11:41:11.0326 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.581089
[INFO 24-04-20 11:41:11.0327 UTC gradient_boosted_trees.cc:270] Truncates the model to 242 tree(s) i.e. 242  iteration(s).
[INFO 24-04-20 11:41:11.0328 UTC gradient_boosted_trees.cc:333] Final model num-trees:242 valid-loss:0.581089 valid-accuracy:0.867198
[INFO 24-04-20 11:41:11.0348 UTC hyperparameters_optimizer.cc:593] [18/50] Score: -0.581089 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:41:11.0350 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:11.0350 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:11.0412 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:11.0924 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080851 train-accuracy:0.761895 valid-loss:1.138916 valid-accuracy:0.736609
[INFO 24-04-20 11:41:23.4015 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.531686 train-accuracy:0.883261 valid-loss:0.586173 valid-accuracy:0.869854
[INFO 24-04-20 11:41:23.4016 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:41:23.4016 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.586173 valid-accuracy:0.869854
[INFO 24-04-20 11:41:23.4054 UTC hyperparameters_optimizer.cc:593] [19/50] Score: -0.586173 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:41:23.4055 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:23.4056 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:23.4133 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:23.4519 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080606 train-accuracy:0.761895 valid-loss:1.138615 valid-accuracy:0.736609
[INFO 24-04-20 11:41:32.4926 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.542858 train-accuracy:0.881800 valid-loss:0.595354 valid-accuracy:0.866755
[INFO 24-04-20 11:41:32.4926 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:41:32.4926 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.595354 valid-accuracy:0.866755
[INFO 24-04-20 11:41:32.4949 UTC hyperparameters_optimizer.cc:593] [20/50] Score: -0.595354 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:41:32.4950 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:32.4950 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:32.5014 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:32.5134 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.033944 train-accuracy:0.761895 valid-loss:1.087890 valid-accuracy:0.736609
[INFO 24-04-20 11:41:33.3103 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.553261 train-accuracy:0.875566 valid-loss:0.590388 valid-accuracy:0.865870
[INFO 24-04-20 11:41:33.3103 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 11:41:33.3103 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.590370 valid-accuracy:0.866313
[INFO 24-04-20 11:41:33.3109 UTC hyperparameters_optimizer.cc:593] [21/50] Score: -0.59037 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:41:33.3116 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:33.3116 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:33.3161 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:33.3420 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.056437 train-accuracy:0.761895 valid-loss:1.113420 valid-accuracy:0.736609
[INFO 24-04-20 11:41:37.4636 UTC gradient_boosted_trees.cc:1592]  num-trees:230 train-loss:0.463528 train-accuracy:0.899966 valid-loss:0.581779 valid-accuracy:0.873838
[INFO 24-04-20 11:41:37.8668 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.581186
[INFO 24-04-20 11:41:37.8668 UTC gradient_boosted_trees.cc:270] Truncates the model to 223 tree(s) i.e. 223  iteration(s).
[INFO 24-04-20 11:41:37.8672 UTC gradient_boosted_trees.cc:333] Final model num-trees:223 valid-loss:0.581186 valid-accuracy:0.874281
[INFO 24-04-20 11:41:37.8726 UTC hyperparameters_optimizer.cc:593] [22/50] Score: -0.581186 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:41:37.8730 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:37.8730 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:37.8811 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:37.9288 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080559 train-accuracy:0.761895 valid-loss:1.138519 valid-accuracy:0.736609
[INFO 24-04-20 11:41:48.4246 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.524125 train-accuracy:0.882287 valid-loss:0.586707 valid-accuracy:0.868969
[INFO 24-04-20 11:41:48.4246 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:41:48.4247 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.586707 valid-accuracy:0.868969
[INFO 24-04-20 11:41:48.4290 UTC hyperparameters_optimizer.cc:593] [23/50] Score: -0.586707 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:41:48.4294 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:48.4294 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:48.4370 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:48.4561 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.992466 train-accuracy:0.761895 valid-loss:1.048658 valid-accuracy:0.736609
[INFO 24-04-20 11:41:50.3696 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.574698
[INFO 24-04-20 11:41:50.3696 UTC gradient_boosted_trees.cc:270] Truncates the model to 242 tree(s) i.e. 242  iteration(s).
[INFO 24-04-20 11:41:50.3697 UTC gradient_boosted_trees.cc:333] Final model num-trees:242 valid-loss:0.574698 valid-accuracy:0.871625
[INFO 24-04-20 11:41:50.3705 UTC hyperparameters_optimizer.cc:593] [24/50] Score: -0.574698 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 4 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:41:50.3707 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:50.3707 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:50.3759 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:50.4045 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.056455 train-accuracy:0.761895 valid-loss:1.113410 valid-accuracy:0.736609
[INFO 24-04-20 11:41:55.0341 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.511416 train-accuracy:0.884381 valid-loss:0.572223 valid-accuracy:0.874723
[INFO 24-04-20 11:41:55.0342 UTC gradient_boosted_trees.cc:270] Truncates the model to 291 tree(s) i.e. 291  iteration(s).
[INFO 24-04-20 11:41:55.0342 UTC gradient_boosted_trees.cc:333] Final model num-trees:291 valid-loss:0.572029 valid-accuracy:0.874723
[INFO 24-04-20 11:41:55.0362 UTC hyperparameters_optimizer.cc:593] [25/50] Score: -0.572029 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:41:55.0364 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:55.0364 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:55.0420 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:55.0594 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.024983 train-accuracy:0.761895 valid-loss:1.080660 valid-accuracy:0.736609
[INFO 24-04-20 11:41:57.0973 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.488794 train-accuracy:0.890031 valid-loss:0.571949 valid-accuracy:0.873395
[INFO 24-04-20 11:41:57.0973 UTC gradient_boosted_trees.cc:270] Truncates the model to 284 tree(s) i.e. 284  iteration(s).
[INFO 24-04-20 11:41:57.0974 UTC gradient_boosted_trees.cc:333] Final model num-trees:284 valid-loss:0.571257 valid-accuracy:0.872953
[INFO 24-04-20 11:41:57.0990 UTC hyperparameters_optimizer.cc:593] [26/50] Score: -0.571257 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:41:57.0992 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:57.0992 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:57.1050 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:57.1279 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.992049 train-accuracy:0.761895 valid-loss:1.047210 valid-accuracy:0.736609
[INFO 24-04-20 11:41:59.3586 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.576255
[INFO 24-04-20 11:41:59.3587 UTC gradient_boosted_trees.cc:270] Truncates the model to 174 tree(s) i.e. 174  iteration(s).
[INFO 24-04-20 11:41:59.3587 UTC gradient_boosted_trees.cc:333] Final model num-trees:174 valid-loss:0.576255 valid-accuracy:0.868526
[INFO 24-04-20 11:41:59.3595 UTC hyperparameters_optimizer.cc:593] [27/50] Score: -0.576255 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 4 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:41:59.3596 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:41:59.3596 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:41:59.3650 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:41:59.3863 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.974501 train-accuracy:0.761895 valid-loss:1.024211 valid-accuracy:0.736609
[INFO 24-04-20 11:42:00.2862 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.583674
[INFO 24-04-20 11:42:00.2863 UTC gradient_boosted_trees.cc:270] Truncates the model to 61 tree(s) i.e. 61  iteration(s).
[INFO 24-04-20 11:42:00.2867 UTC gradient_boosted_trees.cc:333] Final model num-trees:61 valid-loss:0.583674 valid-accuracy:0.866755
[INFO 24-04-20 11:42:00.2890 UTC hyperparameters_optimizer.cc:593] [28/50] Score: -0.583674 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:00.2902 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:00.2902 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:00.2952 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:00.3287 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080079 train-accuracy:0.761895 valid-loss:1.138475 valid-accuracy:0.736609
[INFO 24-04-20 11:42:06.9053 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.482299 train-accuracy:0.895096 valid-loss:0.586102 valid-accuracy:0.871182
[INFO 24-04-20 11:42:06.9053 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:42:06.9053 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.586102 valid-accuracy:0.871182
[INFO 24-04-20 11:42:06.9169 UTC hyperparameters_optimizer.cc:593] [29/50] Score: -0.586102 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:06.9170 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:06.9170 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:06.9302 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:06.9450 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.015585 train-accuracy:0.761895 valid-loss:1.068358 valid-accuracy:0.736609
[INFO 24-04-20 11:42:07.4662 UTC gradient_boosted_trees.cc:1592]  num-trees:109 train-loss:0.577998 train-accuracy:0.867969 valid-loss:0.607920 valid-accuracy:0.864099
[INFO 24-04-20 11:42:08.3443 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.537262 train-accuracy:0.878780 valid-loss:0.585214 valid-accuracy:0.869854
[INFO 24-04-20 11:42:08.3444 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:42:08.3444 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.585214 valid-accuracy:0.869854
[INFO 24-04-20 11:42:08.3450 UTC hyperparameters_optimizer.cc:593] [30/50] Score: -0.585214 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:42:08.3454 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:08.3454 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:08.3505 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:08.3688 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.057450 train-accuracy:0.761895 valid-loss:1.114456 valid-accuracy:0.736609
[INFO 24-04-20 11:42:11.4761 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.499704 train-accuracy:0.890712 valid-loss:0.584889 valid-accuracy:0.869854
[INFO 24-04-20 11:42:11.4762 UTC gradient_boosted_trees.cc:270] Truncates the model to 298 tree(s) i.e. 298  iteration(s).
[INFO 24-04-20 11:42:11.4762 UTC gradient_boosted_trees.cc:333] Final model num-trees:298 valid-loss:0.584790 valid-accuracy:0.869411
[INFO 24-04-20 11:42:11.4784 UTC hyperparameters_optimizer.cc:593] [31/50] Score: -0.58479 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:11.4785 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:11.4786 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:11.4854 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:11.5101 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.035081 train-accuracy:0.761895 valid-loss:1.091865 valid-accuracy:0.736609
[INFO 24-04-20 11:42:16.0150 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.552464 train-accuracy:0.876443 valid-loss:0.595066 valid-accuracy:0.869854
[INFO 24-04-20 11:42:16.0150 UTC gradient_boosted_trees.cc:270] Truncates the model to 298 tree(s) i.e. 298  iteration(s).
[INFO 24-04-20 11:42:16.0150 UTC gradient_boosted_trees.cc:333] Final model num-trees:298 valid-loss:0.595026 valid-accuracy:0.869854
[INFO 24-04-20 11:42:16.0157 UTC hyperparameters_optimizer.cc:593] [32/50] Score: -0.595026 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:42:16.0158 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:16.0159 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:16.0215 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:16.0441 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080487 train-accuracy:0.761895 valid-loss:1.138629 valid-accuracy:0.736609
[INFO 24-04-20 11:42:20.0808 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.523350 train-accuracy:0.883992 valid-loss:0.583351 valid-accuracy:0.868526
[INFO 24-04-20 11:42:20.0808 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:42:20.0808 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.583351 valid-accuracy:0.868526
[INFO 24-04-20 11:42:20.0866 UTC hyperparameters_optimizer.cc:593] [33/50] Score: -0.583351 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 128 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:42:20.0869 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:20.0870 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:20.0952 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:20.1345 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.010715 train-accuracy:0.761895 valid-loss:1.065719 valid-accuracy:0.736609
[INFO 24-04-20 11:42:22.9603 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.574468
[INFO 24-04-20 11:42:22.9603 UTC gradient_boosted_trees.cc:270] Truncates the model to 83 tree(s) i.e. 83  iteration(s).
[INFO 24-04-20 11:42:22.9609 UTC gradient_boosted_trees.cc:333] Final model num-trees:83 valid-loss:0.574468 valid-accuracy:0.872510
[INFO 24-04-20 11:42:22.9648 UTC hyperparameters_optimizer.cc:593] [34/50] Score: -0.574468 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:42:22.9649 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:22.9649 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:22.9714 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:22.9886 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.057450 train-accuracy:0.761895 valid-loss:1.114456 valid-accuracy:0.736609
[INFO 24-04-20 11:42:26.1424 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.498302 train-accuracy:0.892222 valid-loss:0.585352 valid-accuracy:0.870297
[INFO 24-04-20 11:42:26.1424 UTC gradient_boosted_trees.cc:270] Truncates the model to 296 tree(s) i.e. 296  iteration(s).
[INFO 24-04-20 11:42:26.1425 UTC gradient_boosted_trees.cc:333] Final model num-trees:296 valid-loss:0.585279 valid-accuracy:0.870297
[INFO 24-04-20 11:42:26.1446 UTC hyperparameters_optimizer.cc:593] [35/50] Score: -0.585279 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:26.1447 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:26.1448 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:26.1510 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:26.1722 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.016525 train-accuracy:0.761895 valid-loss:1.069784 valid-accuracy:0.736609
[INFO 24-04-20 11:42:27.6359 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.57933
[INFO 24-04-20 11:42:27.6359 UTC gradient_boosted_trees.cc:270] Truncates the model to 123 tree(s) i.e. 123  iteration(s).
[INFO 24-04-20 11:42:27.6364 UTC gradient_boosted_trees.cc:333] Final model num-trees:123 valid-loss:0.579330 valid-accuracy:0.867641
[INFO 24-04-20 11:42:27.6411 UTC hyperparameters_optimizer.cc:593] [36/50] Score: -0.57933 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:27.6412 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:27.6412 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:27.6484 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:27.6608 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.053989 train-accuracy:0.761895 valid-loss:1.109535 valid-accuracy:0.736609
[INFO 24-04-20 11:42:28.4511 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.567582 train-accuracy:0.871475 valid-loss:0.596684 valid-accuracy:0.865870
[INFO 24-04-20 11:42:28.4512 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:42:28.4512 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.596684 valid-accuracy:0.865870
[INFO 24-04-20 11:42:28.4518 UTC hyperparameters_optimizer.cc:593] [37/50] Score: -0.596684 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:28.4519 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:28.4519 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:28.4572 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:28.4917 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.981052 train-accuracy:0.761895 valid-loss:1.035441 valid-accuracy:0.736609
[INFO 24-04-20 11:42:30.5459 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.586668
[INFO 24-04-20 11:42:30.5460 UTC gradient_boosted_trees.cc:270] Truncates the model to 61 tree(s) i.e. 61  iteration(s).
[INFO 24-04-20 11:42:30.5462 UTC gradient_boosted_trees.cc:333] Final model num-trees:61 valid-loss:0.586668 valid-accuracy:0.868969
[INFO 24-04-20 11:42:30.5471 UTC hyperparameters_optimizer.cc:593] [38/50] Score: -0.586668 / -0.569126 HParams: fields { name: "min_examples" value { integer: 10 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:42:30.5473 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:30.5474 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:30.5527 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:30.5748 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080688 train-accuracy:0.761895 valid-loss:1.138783 valid-accuracy:0.736609
[INFO 24-04-20 11:42:34.3731 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.523219 train-accuracy:0.886135 valid-loss:0.593385 valid-accuracy:0.868526
[INFO 24-04-20 11:42:34.3731 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:42:34.3731 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.593385 valid-accuracy:0.868526
[INFO 24-04-20 11:42:34.3776 UTC hyperparameters_optimizer.cc:593] [39/50] Score: -0.593385 / -0.569126 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:34.3778 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:34.3778 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:34.3860 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:34.4134 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.082622 train-accuracy:0.761895 valid-loss:1.140940 valid-accuracy:0.736609
[INFO 24-04-20 11:42:37.4800 UTC gradient_boosted_trees.cc:1592]  num-trees:165 train-loss:0.632445 train-accuracy:0.861881 valid-loss:0.662238 valid-accuracy:0.850819
[INFO 24-04-20 11:42:39.9779 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.588445 train-accuracy:0.867433 valid-loss:0.620650 valid-accuracy:0.862328
[INFO 24-04-20 11:42:39.9780 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:42:39.9780 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.620650 valid-accuracy:0.862328
[INFO 24-04-20 11:42:39.9789 UTC hyperparameters_optimizer.cc:593] [40/50] Score: -0.62065 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 4 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:42:39.9793 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:39.9794 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:39.9851 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:40.0101 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.991671 train-accuracy:0.761895 valid-loss:1.045193 valid-accuracy:0.736609
[INFO 24-04-20 11:42:41.6192 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.573957
[INFO 24-04-20 11:42:41.6193 UTC gradient_boosted_trees.cc:270] Truncates the model to 126 tree(s) i.e. 126  iteration(s).
[INFO 24-04-20 11:42:41.6194 UTC gradient_boosted_trees.cc:333] Final model num-trees:126 valid-loss:0.573957 valid-accuracy:0.870297
[INFO 24-04-20 11:42:41.6202 UTC hyperparameters_optimizer.cc:593] [41/50] Score: -0.573957 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.15 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:42:41.6204 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:41.6204 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:41.6253 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:41.6554 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.015325 train-accuracy:0.761895 valid-loss:1.070753 valid-accuracy:0.736609
[INFO 24-04-20 11:42:44.7167 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.571936
[INFO 24-04-20 11:42:44.7168 UTC gradient_boosted_trees.cc:270] Truncates the model to 159 tree(s) i.e. 159  iteration(s).
[INFO 24-04-20 11:42:44.7170 UTC gradient_boosted_trees.cc:333] Final model num-trees:159 valid-loss:0.571936 valid-accuracy:0.872067
[INFO 24-04-20 11:42:44.7193 UTC hyperparameters_optimizer.cc:593] [42/50] Score: -0.571936 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:42:44.7194 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:44.7194 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:44.7253 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:44.7628 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.056130 train-accuracy:0.761895 valid-loss:1.113107 valid-accuracy:0.736609
[INFO 24-04-20 11:42:50.5114 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.573479
[INFO 24-04-20 11:42:50.5114 UTC gradient_boosted_trees.cc:270] Truncates the model to 215 tree(s) i.e. 215  iteration(s).
[INFO 24-04-20 11:42:50.5116 UTC gradient_boosted_trees.cc:333] Final model num-trees:215 valid-loss:0.573479 valid-accuracy:0.872953
[INFO 24-04-20 11:42:50.5143 UTC hyperparameters_optimizer.cc:593] [43/50] Score: -0.573479 / -0.569126 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:42:50.5147 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:50.5148 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:50.5211 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:50.5507 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.081456 train-accuracy:0.761895 valid-loss:1.139474 valid-accuracy:0.736609
[INFO 24-04-20 11:42:54.6583 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.522091 train-accuracy:0.882823 valid-loss:0.589971 valid-accuracy:0.868083
[INFO 24-04-20 11:42:54.6583 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:42:54.6583 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.589971 valid-accuracy:0.868083
[INFO 24-04-20 11:42:54.6645 UTC hyperparameters_optimizer.cc:593] [44/50] Score: -0.589971 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 128 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:42:54.6646 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:54.6646 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:54.6729 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:54.6997 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.010652 train-accuracy:0.761895 valid-loss:1.064824 valid-accuracy:0.736609
[INFO 24-04-20 11:42:56.8398 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.571687
[INFO 24-04-20 11:42:56.8398 UTC gradient_boosted_trees.cc:270] Truncates the model to 119 tree(s) i.e. 119  iteration(s).
[INFO 24-04-20 11:42:56.8402 UTC gradient_boosted_trees.cc:333] Final model num-trees:119 valid-loss:0.571687 valid-accuracy:0.868526
[INFO 24-04-20 11:42:56.8441 UTC hyperparameters_optimizer.cc:593] [45/50] Score: -0.571687 / -0.569126 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:42:56.8442 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:56.8442 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:56.8509 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:56.8811 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.013729 train-accuracy:0.761895 valid-loss:1.069266 valid-accuracy:0.736609
[INFO 24-04-20 11:42:59.4262 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.568272
[INFO 24-04-20 11:42:59.4262 UTC gradient_boosted_trees.cc:270] Truncates the model to 117 tree(s) i.e. 117  iteration(s).
[INFO 24-04-20 11:42:59.4265 UTC gradient_boosted_trees.cc:333] Final model num-trees:117 valid-loss:0.568272 valid-accuracy:0.873395
[INFO 24-04-20 11:42:59.4297 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:42:59.4298 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:42:59.4303 UTC hyperparameters_optimizer.cc:593] [46/50] Score: -0.568272 / -0.568272 HParams: fields { name: "min_examples" value { integer: 5 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:42:59.4350 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:42:59.4604 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.009908 train-accuracy:0.761895 valid-loss:1.065147 valid-accuracy:0.736609
[INFO 24-04-20 11:43:01.0106 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.588365
[INFO 24-04-20 11:43:01.0106 UTC gradient_boosted_trees.cc:270] Truncates the model to 72 tree(s) i.e. 72  iteration(s).
[INFO 24-04-20 11:43:01.0114 UTC gradient_boosted_trees.cc:333] Final model num-trees:72 valid-loss:0.588365 valid-accuracy:0.867641
[INFO 24-04-20 11:43:01.0151 UTC hyperparameters_optimizer.cc:593] [47/50] Score: -0.588365 / -0.568272 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:43:01.0152 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:43:01.0153 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:43:01.0223 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:43:01.0647 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.079317 train-accuracy:0.761895 valid-loss:1.137114 valid-accuracy:0.736609
[INFO 24-04-20 11:43:07.4826 UTC gradient_boosted_trees.cc:1592]  num-trees:212 train-loss:0.511944 train-accuracy:0.884040 valid-loss:0.592119 valid-accuracy:0.866755
[INFO 24-04-20 11:43:09.7157 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.478934 train-accuracy:0.892709 valid-loss:0.580173 valid-accuracy:0.871625
[INFO 24-04-20 11:43:09.7157 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:43:09.7158 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.580173 valid-accuracy:0.871625
[INFO 24-04-20 11:43:09.7261 UTC hyperparameters_optimizer.cc:593] [48/50] Score: -0.580173 / -0.568272 HParams: fields { name: "min_examples" value { integer: 7 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:43:09.7265 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:43:09.7265 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:43:09.7367 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:43:09.7890 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.055888 train-accuracy:0.761895 valid-loss:1.113012 valid-accuracy:0.736609
[INFO 24-04-20 11:43:21.3043 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.572366
[INFO 24-04-20 11:43:21.3044 UTC gradient_boosted_trees.cc:270] Truncates the model to 264 tree(s) i.e. 264  iteration(s).
[INFO 24-04-20 11:43:21.3047 UTC gradient_boosted_trees.cc:333] Final model num-trees:264 valid-loss:0.572366 valid-accuracy:0.873838
[INFO 24-04-20 11:43:21.3083 UTC hyperparameters_optimizer.cc:593] [49/50] Score: -0.572366 / -0.568272 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:43:21.3084 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:43:21.3084 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:43:21.3158 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:43:21.3638 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.012875 train-accuracy:0.761895 valid-loss:1.067941 valid-accuracy:0.736609
[INFO 24-04-20 11:43:25.5823 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.568887
[INFO 24-04-20 11:43:25.5823 UTC gradient_boosted_trees.cc:270] Truncates the model to 105 tree(s) i.e. 105  iteration(s).
[INFO 24-04-20 11:43:25.5827 UTC gradient_boosted_trees.cc:333] Final model num-trees:105 valid-loss:0.568887 valid-accuracy:0.875166
[INFO 24-04-20 11:43:25.5852 UTC hyperparameters_optimizer.cc:593] [50/50] Score: -0.568887 / -0.568272 HParams: fields { name: "min_examples" value { integer: 2 } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:43:25.5885 UTC hyperparameters_optimizer.cc:224] Best hyperparameters:
fields {
  name: "min_examples"
  value {
    integer: 5
  }
}
fields {
  name: "categorical_algorithm"
  value {
    categorical: "CART"
  }
}
fields {
  name: "growing_strategy"
  value {
    categorical: "BEST_FIRST_GLOBAL"
  }
}
fields {
  name: "max_num_nodes"
  value {
    integer: 256
  }
}
fields {
  name: "use_hessian_gain"
  value {
    categorical: "true"
  }
}
fields {
  name: "shrinkage"
  value {
    real: 0.1
  }
}
fields {
  name: "num_candidate_attributes_ratio"
  value {
    real: 0.9
  }
}

[INFO 24-04-20 11:43:25.5888 UTC kernel.cc:919] Export model in log directory: /tmpfs/tmp/tmpi7_rh8z3 with prefix 46d5c4b975a742e3
[INFO 24-04-20 11:43:25.5957 UTC kernel.cc:937] Save model in resources
[INFO 24-04-20 11:43:25.5985 UTC abstract_model.cc:881] Model self evaluation:
Task: CLASSIFICATION
Label: __LABEL
Loss (BINOMIAL_LOG_LIKELIHOOD): 0.568272

Accuracy: 0.873395  CI95[W][0 1]
ErrorRate: : 0.126605


Confusion Table:
truth\prediction
      1    2
1  1570   94
2   192  403
Total: 2259


[INFO 24-04-20 11:43:25.6157 UTC kernel.cc:1233] Loading model from path /tmpfs/tmp/tmpi7_rh8z3/model/ with prefix 46d5c4b975a742e3
[INFO 24-04-20 11:43:25.6456 UTC quick_scorer_extended.cc:911] The binary was compiled without AVX2 support, but your CPU supports it. Enable it for faster model inference.
[INFO 24-04-20 11:43:25.6471 UTC abstract_model.cc:1344] Engine "GradientBoostedTreesQuickScorerExtended" built
[INFO 24-04-20 11:43:25.6471 UTC kernel.cc:1061] Use fast generic engine
Model trained in 0:04:07.062880
Compiling model...
Model compiled.
CPU times: user 4min 9s, sys: 493 ms, total: 4min 9s
Wall time: 4min 7s
<tf_keras.src.callbacks.History at 0x7f39af26a040>
# Evaluate the model
tuned_model.compile(["accuracy"])
tuned_test_accuracy = tuned_model.evaluate(test_ds, return_dict=True, verbose=0)["accuracy"]
print(f"Test accuracy with the TF-DF hyper-parameter tuner: {tuned_test_accuracy:.4f}")
Test accuracy with the TF-DF hyper-parameter tuner: 0.8722
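
For reference, you can compare this result against the model trained without tuning at the start of this tutorial. Below is a minimal sketch, assuming the baseline accuracy was stored in a variable named test_accuracy in the earlier evaluation cell (adjust the name to match your own notebook):

# Compare the tuned model against the untuned baseline.
# `test_accuracy` is assumed to hold the accuracy of the model trained
# earlier without hyper-parameter tuning.
print(f"Baseline accuracy: {test_accuracy:.4f}")
print(f"Tuned accuracy:    {tuned_test_accuracy:.4f}")
print(f"Improvement:       {tuned_test_accuracy - test_accuracy:+.4f}")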

The hyper-parameters and objective scores of all the trials are available in the model inspector. The tuner always maximizes the score. In this example, the score is the negative of the log loss on the validation dataset (selected automatically).

# Display the tuning logs.
tuning_logs = tuned_model.make_inspector().tuning_logs()
tuning_logs.head()
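
Since tuning_logs is a regular pandas DataFrame, the trials can also be ranked directly. For example, the following (plain pandas, not a TF-DF-specific API) lists the five best trials:

# Show the five trials with the highest (i.e. best) score.
tuning_logs.sort_values("score", ascending=False).head(5)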

The single row with best=True corresponds to the hyper-parameters used in the final model.

# Best hyper-parameters.
tuning_logs[tuning_logs.best].iloc[0]
score                                     -0.568272
evaluation_time                          220.821355
best                                           True
min_examples                                      5
categorical_algorithm                          CART
growing_strategy                  BEST_FIRST_GLOBAL
max_num_nodes                                 256.0
use_hessian_gain                               true
shrinkage                                       0.1
num_candidate_attributes_ratio                  0.9
max_depth                                       NaN
Name: 45, dtype: object
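
If you want to re-use these hyper-parameters without re-running the tuner, they can be passed directly to the model constructor. A minimal sketch using the values found above (all of them are regular gradient boosted trees hyper-parameters of TF-DF):

# Re-train a model with the best hyper-parameters found by the tuner.
manual_model = tfdf.keras.GradientBoostedTreesModel(
    min_examples=5,
    categorical_algorithm="CART",
    growing_strategy="BEST_FIRST_GLOBAL",
    max_num_nodes=256,
    use_hessian_gain=True,
    shrinkage=0.1,
    num_candidate_attributes_ratio=0.9,
)
manual_model.fit(train_ds, verbose=0)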

Next, we plot the evolution of the best score during the tuning.

plt.figure(figsize=(10, 5))
plt.plot(tuning_logs["score"], label="current trial")
plt.plot(tuning_logs["score"].cummax(), label="best trial")
plt.xlabel("Tuning step")
plt.ylabel("Tuning score")
plt.legend()
plt.show()

[Plot: score of each trial and the running best score as a function of the tuning step.]

As before, hyper-parameter tuning is enabled by specifying the tuner constructor argument of the model. Set use_predefined_hps=True to automatically configure the search space for the hyper-parameters.

%%time
%set_cell_height 300

# Create a Random Search tuner with 50 trials and automatic hp configuration.
tuner = tfdf.tuner.RandomSearch(num_trials=50, use_predefined_hps=True)

# Define and train the model.
tuned_model = tfdf.keras.GradientBoostedTreesModel(tuner=tuner)
tuned_model.fit(train_ds, verbose=2)
Warning: The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
WARNING:absl:The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
Use /tmpfs/tmp/tmpl269t4qx as temporary training directory
Reading training dataset...
Training tensor examples:
Features: {'age': <tf.Tensor 'data:0' shape=(None,) dtype=int64>, 'workclass': <tf.Tensor 'data_1:0' shape=(None,) dtype=string>, 'fnlwgt': <tf.Tensor 'data_2:0' shape=(None,) dtype=int64>, 'education': <tf.Tensor 'data_3:0' shape=(None,) dtype=string>, 'education_num': <tf.Tensor 'data_4:0' shape=(None,) dtype=int64>, 'marital_status': <tf.Tensor 'data_5:0' shape=(None,) dtype=string>, 'occupation': <tf.Tensor 'data_6:0' shape=(None,) dtype=string>, 'relationship': <tf.Tensor 'data_7:0' shape=(None,) dtype=string>, 'race': <tf.Tensor 'data_8:0' shape=(None,) dtype=string>, 'sex': <tf.Tensor 'data_9:0' shape=(None,) dtype=string>, 'capital_gain': <tf.Tensor 'data_10:0' shape=(None,) dtype=int64>, 'capital_loss': <tf.Tensor 'data_11:0' shape=(None,) dtype=int64>, 'hours_per_week': <tf.Tensor 'data_12:0' shape=(None,) dtype=int64>, 'native_country': <tf.Tensor 'data_13:0' shape=(None,) dtype=string>}
Label: Tensor("data_14:0", shape=(None,), dtype=int64)
Weights: None
Normalized tensor features:
 {'age': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast:0' shape=(None,) dtype=float32>), 'workclass': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_1:0' shape=(None,) dtype=string>), 'fnlwgt': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_1:0' shape=(None,) dtype=float32>), 'education': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_3:0' shape=(None,) dtype=string>), 'education_num': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_2:0' shape=(None,) dtype=float32>), 'marital_status': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_5:0' shape=(None,) dtype=string>), 'occupation': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_6:0' shape=(None,) dtype=string>), 'relationship': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_7:0' shape=(None,) dtype=string>), 'race': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_8:0' shape=(None,) dtype=string>), 'sex': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_9:0' shape=(None,) dtype=string>), 'capital_gain': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_3:0' shape=(None,) dtype=float32>), 'capital_loss': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_4:0' shape=(None,) dtype=float32>), 'hours_per_week': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_5:0' shape=(None,) dtype=float32>), 'native_country': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_13:0' shape=(None,) dtype=string>)}
[WARNING 24-04-20 11:43:26.4721 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:43:26.4721 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:43:26.4721 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
Training dataset read in 0:00:00.379650. Found 22792 examples.
Training model...
[INFO 24-04-20 11:43:26.8649 UTC kernel.cc:771] Start Yggdrasil model training
[INFO 24-04-20 11:43:26.8649 UTC kernel.cc:772] Collect training examples
[INFO 24-04-20 11:43:26.8649 UTC kernel.cc:785] Dataspec guide:
column_guides {
  column_name_pattern: "^__LABEL$"
  type: CATEGORICAL
  categorial {
    min_vocab_frequency: 0
    max_vocab_count: -1
  }
}
default_column_guide {
  categorial {
    max_vocab_count: 2000
  }
  discretized_numerical {
    maximum_num_bins: 255
  }
}
ignore_columns_without_guides: false
detect_numerical_as_discretized_numerical: false

[INFO 24-04-20 11:43:26.8650 UTC kernel.cc:391] Number of batches: 23
[INFO 24-04-20 11:43:26.8650 UTC kernel.cc:392] Number of examples: 22792
[INFO 24-04-20 11:43:26.8734 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column native_country (40 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 11:43:26.8735 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column occupation (13 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 11:43:26.8735 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column workclass (7 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 11:43:26.8801 UTC kernel.cc:792] Training dataset:
Number of records: 22792
Number of columns: 15

Number of columns by type:
    CATEGORICAL: 9 (60%)
    NUMERICAL: 6 (40%)

Columns:

CATEGORICAL: 9 (60%)
    0: "__LABEL" CATEGORICAL integerized vocab-size:3 no-ood-item
    4: "education" CATEGORICAL has-dict vocab-size:17 zero-ood-items most-frequent:"HS-grad" 7340 (32.2043%)
    8: "marital_status" CATEGORICAL has-dict vocab-size:8 zero-ood-items most-frequent:"Married-civ-spouse" 10431 (45.7661%)
    9: "native_country" CATEGORICAL num-nas:407 (1.78571%) has-dict vocab-size:41 num-oods:1 (0.00446728%) most-frequent:"United-States" 20436 (91.2933%)
    10: "occupation" CATEGORICAL num-nas:1260 (5.52826%) has-dict vocab-size:14 num-oods:4 (0.018577%) most-frequent:"Prof-specialty" 2870 (13.329%)
    11: "race" CATEGORICAL has-dict vocab-size:6 zero-ood-items most-frequent:"White" 19467 (85.4115%)
    12: "relationship" CATEGORICAL has-dict vocab-size:7 zero-ood-items most-frequent:"Husband" 9191 (40.3256%)
    13: "sex" CATEGORICAL has-dict vocab-size:3 zero-ood-items most-frequent:"Male" 15165 (66.5365%)
    14: "workclass" CATEGORICAL num-nas:1257 (5.51509%) has-dict vocab-size:8 num-oods:3 (0.0139308%) most-frequent:"Private" 15879 (73.7358%)

NUMERICAL: 6 (40%)
    1: "age" NUMERICAL mean:38.6153 min:17 max:90 sd:13.661
    2: "capital_gain" NUMERICAL mean:1081.9 min:0 max:99999 sd:7509.48
    3: "capital_loss" NUMERICAL mean:87.2806 min:0 max:4356 sd:403.01
    5: "education_num" NUMERICAL mean:10.0927 min:1 max:16 sd:2.56427
    6: "fnlwgt" NUMERICAL mean:189879 min:12285 max:1.4847e+06 sd:106423
    7: "hours_per_week" NUMERICAL mean:40.3955 min:1 max:99 sd:12.249

Terminology:
    nas: Number of non-available (i.e. missing) values.
    ood: Out of dictionary.
    manually-defined: Attribute whose type is manually defined by the user, i.e., the type was not automatically inferred.
    tokenized: The attribute value is obtained through tokenization.
    has-dict: The attribute is attached to a string dictionary e.g. a categorical attribute stored as a string.
    vocab-size: Number of unique values.

[INFO 24-04-20 11:43:26.8801 UTC kernel.cc:808] Configure learner
[WARNING 24-04-20 11:43:26.8804 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:43:26.8805 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 11:43:26.8805 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
[INFO 24-04-20 11:43:26.8806 UTC kernel.cc:822] Training config:
learner: "HYPERPARAMETER_OPTIMIZER"
features: "^age$"
features: "^capital_gain$"
features: "^capital_loss$"
features: "^education$"
features: "^education_num$"
features: "^fnlwgt$"
features: "^hours_per_week$"
features: "^marital_status$"
features: "^native_country$"
features: "^occupation$"
features: "^race$"
features: "^relationship$"
features: "^sex$"
features: "^workclass$"
label: "^__LABEL$"
task: CLASSIFICATION
metadata {
  framework: "TF Keras"
}
[yggdrasil_decision_forests.model.hyperparameters_optimizer_v2.proto.hyperparameters_optimizer_config] {
  base_learner {
    learner: "GRADIENT_BOOSTED_TREES"
    features: "^age$"
    features: "^capital_gain$"
    features: "^capital_loss$"
    features: "^education$"
    features: "^education_num$"
    features: "^fnlwgt$"
    features: "^hours_per_week$"
    features: "^marital_status$"
    features: "^native_country$"
    features: "^occupation$"
    features: "^race$"
    features: "^relationship$"
    features: "^sex$"
    features: "^workclass$"
    label: "^__LABEL$"
    task: CLASSIFICATION
    random_seed: 123456
    pure_serving_model: false
    [yggdrasil_decision_forests.model.gradient_boosted_trees.proto.gradient_boosted_trees_config] {
      num_trees: 300
      decision_tree {
        max_depth: 6
        min_examples: 5
        in_split_min_examples_check: true
        keep_non_leaf_label_distribution: true
        num_candidate_attributes: -1
        missing_value_policy: GLOBAL_IMPUTATION
        allow_na_conditions: false
        categorical_set_greedy_forward {
          sampling: 0.1
          max_num_items: -1
          min_item_frequency: 1
        }
        growing_strategy_local {
        }
        categorical {
          cart {
          }
        }
        axis_aligned_split {
        }
        internal {
          sorting_strategy: PRESORTED
        }
        uplift {
          min_examples_in_treatment: 5
          split_score: KULLBACK_LEIBLER
        }
      }
      shrinkage: 0.1
      loss: DEFAULT
      validation_set_ratio: 0.1
      validation_interval_in_trees: 1
      early_stopping: VALIDATION_LOSS_INCREASE
      early_stopping_num_trees_look_ahead: 30
      l2_regularization: 0
      lambda_loss: 1
      mart {
      }
      adapt_subsample_for_maximum_training_duration: false
      l1_regularization: 0
      use_hessian_gain: false
      l2_regularization_categorical: 1
      stochastic_gradient_boosting {
        ratio: 1
      }
      apply_link_function: true
      compute_permutation_variable_importance: false
      binary_focal_loss_options {
        misprediction_exponent: 2
        positive_sample_coefficient: 0.5
      }
      early_stopping_initial_iteration: 10
    }
  }
  optimizer {
    optimizer_key: "RANDOM"
    [yggdrasil_decision_forests.model.hyperparameters_optimizer_v2.proto.random] {
      num_trials: 50
    }
  }
  base_learner_deployment {
    num_threads: 1
  }
  predefined_search_space {
  }
}

[INFO 24-04-20 11:43:26.8808 UTC kernel.cc:825] Deployment config:
cache_path: "/tmpfs/tmp/tmpl269t4qx/working_cache"
num_threads: 32
try_resume_training: true

[INFO 24-04-20 11:43:26.8809 UTC kernel.cc:887] Train model
[INFO 24-04-20 11:43:26.8812 UTC hyperparameters_optimizer.cc:214] Hyperparameter search space:
fields {
  name: "split_axis"
  discrete_candidates {
    possible_values {
      categorical: "AXIS_ALIGNED"
    }
    possible_values {
      categorical: "SPARSE_OBLIQUE"
    }
  }
  children {
    name: "sparse_oblique_projection_density_factor"
    discrete_candidates {
      possible_values {
        real: 1
      }
      possible_values {
        real: 2
      }
      possible_values {
        real: 3
      }
      possible_values {
        real: 4
      }
      possible_values {
        real: 5
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "SPARSE_OBLIQUE"
      }
    }
  }
  children {
    name: "sparse_oblique_normalization"
    discrete_candidates {
      possible_values {
        categorical: "NONE"
      }
      possible_values {
        categorical: "STANDARD_DEVIATION"
      }
      possible_values {
        categorical: "MIN_MAX"
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "SPARSE_OBLIQUE"
      }
    }
  }
  children {
    name: "sparse_oblique_weights"
    discrete_candidates {
      possible_values {
        categorical: "BINARY"
      }
      possible_values {
        categorical: "CONTINUOUS"
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "SPARSE_OBLIQUE"
      }
    }
  }
}
fields {
  name: "categorical_algorithm"
  discrete_candidates {
    possible_values {
      categorical: "CART"
    }
    possible_values {
      categorical: "RANDOM"
    }
  }
}
fields {
  name: "growing_strategy"
  discrete_candidates {
    possible_values {
      categorical: "LOCAL"
    }
    possible_values {
      categorical: "BEST_FIRST_GLOBAL"
    }
  }
  children {
    name: "max_num_nodes"
    discrete_candidates {
      possible_values {
        integer: 16
      }
      possible_values {
        integer: 32
      }
      possible_values {
        integer: 64
      }
      possible_values {
        integer: 128
      }
      possible_values {
        integer: 256
      }
      possible_values {
        integer: 512
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "BEST_FIRST_GLOBAL"
      }
    }
  }
  children {
    name: "max_depth"
    discrete_candidates {
      possible_values {
        integer: 3
      }
      possible_values {
        integer: 4
      }
      possible_values {
        integer: 6
      }
      possible_values {
        integer: 8
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "LOCAL"
      }
    }
  }
}
fields {
  name: "sampling_method"
  discrete_candidates {
    possible_values {
      categorical: "RANDOM"
    }
  }
  children {
    name: "subsample"
    discrete_candidates {
      possible_values {
        real: 0.6
      }
      possible_values {
        real: 0.8
      }
      possible_values {
        real: 0.9
      }
      possible_values {
        real: 1
      }
    }
    parent_discrete_values {
      possible_values {
        categorical: "RANDOM"
      }
    }
  }
}
fields {
  name: "shrinkage"
  discrete_candidates {
    possible_values {
      real: 0.02
    }
    possible_values {
      real: 0.05
    }
    possible_values {
      real: 0.1
    }
  }
}
fields {
  name: "min_examples"
  discrete_candidates {
    possible_values {
      integer: 5
    }
    possible_values {
      integer: 7
    }
    possible_values {
      integer: 10
    }
    possible_values {
      integer: 20
    }
  }
}
fields {
  name: "use_hessian_gain"
  discrete_candidates {
    possible_values {
      categorical: "true"
    }
    possible_values {
      categorical: "false"
    }
  }
}
fields {
  name: "num_candidate_attributes_ratio"
  discrete_candidates {
    possible_values {
      real: 0.2
    }
    possible_values {
      real: 0.5
    }
    possible_values {
      real: 0.9
    }
    possible_values {
      real: 1
    }
  }
}

[INFO 24-04-20 11:43:26.8814 UTC hyperparameters_optimizer.cc:509] Start local tuner with 1 parallel trial(s), each with 32 thread(s)
[INFO 24-04-20 11:43:26.8819 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:43:26.8819 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:43:26.8882 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:43:27.1834 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.017080 train-accuracy:0.761895 valid-loss:1.072045 valid-accuracy:0.736609
[INFO 24-04-20 11:43:37.7866 UTC gradient_boosted_trees.cc:1592]  num-trees:37 train-loss:0.556901 train-accuracy:0.883553 valid-loss:0.652912 valid-accuracy:0.849048
[INFO 24-04-20 11:44:08.1161 UTC gradient_boosted_trees.cc:1592]  num-trees:129 train-loss:0.440547 train-accuracy:0.913164 valid-loss:0.635362 valid-accuracy:0.853032
[INFO 24-04-20 11:44:10.1389 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.631539
[INFO 24-04-20 11:44:10.1390 UTC gradient_boosted_trees.cc:270] Truncates the model to 105 tree(s) i.e. 105  iteration(s).
[INFO 24-04-20 11:44:10.1394 UTC gradient_boosted_trees.cc:333] Final model num-trees:105 valid-loss:0.631539 valid-accuracy:0.854803
[INFO 24-04-20 11:44:10.1413 UTC hyperparameters_optimizer.cc:593] [1/50] Score: -0.631539 / -0.631539 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:44:10.1414 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:44:10.1414 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:44:10.1463 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:44:10.3488 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.079625 train-accuracy:0.761895 valid-loss:1.137371 valid-accuracy:0.736609
[INFO 24-04-20 11:44:38.2527 UTC gradient_boosted_trees.cc:1592]  num-trees:136 train-loss:0.558131 train-accuracy:0.876297 valid-loss:0.607508 valid-accuracy:0.868526
[INFO 24-04-20 11:45:08.4404 UTC gradient_boosted_trees.cc:1592]  num-trees:282 train-loss:0.498537 train-accuracy:0.888813 valid-loss:0.580250 valid-accuracy:0.869411
[INFO 24-04-20 11:45:12.1156 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.493187 train-accuracy:0.890372 valid-loss:0.579161 valid-accuracy:0.868969
[INFO 24-04-20 11:45:12.1157 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 11:45:12.1157 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.579139 valid-accuracy:0.868969
[INFO 24-04-20 11:45:12.1228 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:45:12.1228 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:45:12.1241 UTC hyperparameters_optimizer.cc:593] [2/50] Score: -0.579139 / -0.579139 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 128 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:45:12.1284 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:45:12.4485 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.007886 train-accuracy:0.761895 valid-loss:1.061560 valid-accuracy:0.736609
[INFO 24-04-20 11:45:38.4822 UTC gradient_boosted_trees.cc:1592]  num-trees:80 train-loss:0.422466 train-accuracy:0.912434 valid-loss:0.578831 valid-accuracy:0.865870
[INFO 24-04-20 11:45:44.1534 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.576524
[INFO 24-04-20 11:45:44.1535 UTC gradient_boosted_trees.cc:270] Truncates the model to 67 tree(s) i.e. 67  iteration(s).
[INFO 24-04-20 11:45:44.1545 UTC gradient_boosted_trees.cc:333] Final model num-trees:67 valid-loss:0.576524 valid-accuracy:0.867198
[INFO 24-04-20 11:45:44.1584 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:45:44.1585 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:45:44.1647 UTC hyperparameters_optimizer.cc:593] [3/50] Score: -0.576524 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:45:44.1675 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:45:44.3624 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.016576 train-accuracy:0.761895 valid-loss:1.072904 valid-accuracy:0.736609
[INFO 24-04-20 11:46:08.6550 UTC gradient_boosted_trees.cc:1592]  num-trees:120 train-loss:0.458457 train-accuracy:0.900258 valid-loss:0.580639 valid-accuracy:0.866313
[INFO 24-04-20 11:46:14.2095 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.58024
[INFO 24-04-20 11:46:14.2095 UTC gradient_boosted_trees.cc:270] Truncates the model to 117 tree(s) i.e. 117  iteration(s).
[INFO 24-04-20 11:46:14.2099 UTC gradient_boosted_trees.cc:333] Final model num-trees:117 valid-loss:0.580240 valid-accuracy:0.866755
[INFO 24-04-20 11:46:14.2122 UTC hyperparameters_optimizer.cc:593] [4/50] Score: -0.58024 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 1 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:46:14.2123 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:46:14.2123 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:46:14.2185 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:46:14.5606 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.011469 train-accuracy:0.761895 valid-loss:1.065462 valid-accuracy:0.736609
[INFO 24-04-20 11:46:38.7917 UTC gradient_boosted_trees.cc:1592]  num-trees:75 train-loss:0.473998 train-accuracy:0.897239 valid-loss:0.598055 valid-accuracy:0.861443
[INFO 24-04-20 11:46:46.7510 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.597285
[INFO 24-04-20 11:46:46.7511 UTC gradient_boosted_trees.cc:270] Truncates the model to 69 tree(s) i.e. 69  iteration(s).
[INFO 24-04-20 11:46:46.7515 UTC gradient_boosted_trees.cc:333] Final model num-trees:69 valid-loss:0.597285 valid-accuracy:0.860115
[INFO 24-04-20 11:46:46.7533 UTC hyperparameters_optimizer.cc:593] [5/50] Score: -0.597285 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:46:46.7537 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:46:46.7537 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:46:46.7593 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:46:46.8919 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080233 train-accuracy:0.761895 valid-loss:1.138164 valid-accuracy:0.736609
[INFO 24-04-20 11:47:08.8764 UTC gradient_boosted_trees.cc:1592]  num-trees:167 train-loss:0.555336 train-accuracy:0.881508 valid-loss:0.612655 valid-accuracy:0.863656
[INFO 24-04-20 11:47:26.3719 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.511197 train-accuracy:0.890225 valid-loss:0.592463 valid-accuracy:0.867198
[INFO 24-04-20 11:47:26.3719 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:47:26.3719 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.592463 valid-accuracy:0.867198
[INFO 24-04-20 11:47:26.3765 UTC hyperparameters_optimizer.cc:593] [6/50] Score: -0.592463 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 1 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:47:26.3766 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:47:26.3767 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:47:26.3853 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:47:26.6546 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.052418 train-accuracy:0.761895 valid-loss:1.109157 valid-accuracy:0.736609
[INFO 24-04-20 11:47:39.0564 UTC gradient_boosted_trees.cc:1592]  num-trees:46 train-loss:0.568038 train-accuracy:0.878586 valid-loss:0.618729 valid-accuracy:0.866313
[INFO 24-04-20 11:48:09.3068 UTC gradient_boosted_trees.cc:1592]  num-trees:155 train-loss:0.451158 train-accuracy:0.904885 valid-loss:0.580549 valid-accuracy:0.865870
[INFO 24-04-20 11:48:11.8108 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.579207
[INFO 24-04-20 11:48:11.8108 UTC gradient_boosted_trees.cc:270] Truncates the model to 134 tree(s) i.e. 134  iteration(s).
[INFO 24-04-20 11:48:11.8115 UTC gradient_boosted_trees.cc:333] Final model num-trees:134 valid-loss:0.579207 valid-accuracy:0.864099
[INFO 24-04-20 11:48:11.8152 UTC hyperparameters_optimizer.cc:593] [7/50] Score: -0.579207 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:48:11.8153 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:48:11.8153 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:48:11.8225 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:48:11.9397 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.024398 train-accuracy:0.761895 valid-loss:1.080875 valid-accuracy:0.736609
[INFO 24-04-20 11:48:33.3505 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.607541
[INFO 24-04-20 11:48:33.3506 UTC gradient_boosted_trees.cc:270] Truncates the model to 148 tree(s) i.e. 148  iteration(s).
[INFO 24-04-20 11:48:33.3507 UTC gradient_boosted_trees.cc:333] Final model num-trees:148 valid-loss:0.607541 valid-accuracy:0.863214
[INFO 24-04-20 11:48:33.3515 UTC hyperparameters_optimizer.cc:593] [8/50] Score: -0.607541 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 4 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:48:33.3516 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:48:33.3516 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:48:33.3567 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:48:33.6283 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.052939 train-accuracy:0.761895 valid-loss:1.109668 valid-accuracy:0.736609
[INFO 24-04-20 11:48:39.5145 UTC gradient_boosted_trees.cc:1592]  num-trees:22 train-loss:0.675009 train-accuracy:0.862660 valid-loss:0.725751 valid-accuracy:0.841523
[INFO 24-04-20 11:49:09.5556 UTC gradient_boosted_trees.cc:1592]  num-trees:131 train-loss:0.470891 train-accuracy:0.896070 valid-loss:0.587392 valid-accuracy:0.869411
[INFO 24-04-20 11:49:24.5348 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.585858
[INFO 24-04-20 11:49:24.5349 UTC gradient_boosted_trees.cc:270] Truncates the model to 155 tree(s) i.e. 155  iteration(s).
[INFO 24-04-20 11:49:24.5353 UTC gradient_boosted_trees.cc:333] Final model num-trees:155 valid-loss:0.585858 valid-accuracy:0.868083
[INFO 24-04-20 11:49:24.5390 UTC hyperparameters_optimizer.cc:593] [9/50] Score: -0.585858 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:49:24.5391 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:49:24.5391 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:49:24.5458 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:49:24.9157 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.052843 train-accuracy:0.761895 valid-loss:1.111456 valid-accuracy:0.736609
[INFO 24-04-20 11:49:39.6689 UTC gradient_boosted_trees.cc:1592]  num-trees:41 train-loss:0.568976 train-accuracy:0.880047 valid-loss:0.669706 valid-accuracy:0.849491
[INFO 24-04-20 11:50:09.9498 UTC gradient_boosted_trees.cc:1592]  num-trees:123 train-loss:0.448902 train-accuracy:0.902985 valid-loss:0.628714 valid-accuracy:0.850376
[INFO 24-04-20 11:50:31.4511 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.627163
[INFO 24-04-20 11:50:31.4511 UTC gradient_boosted_trees.cc:270] Truncates the model to 151 tree(s) i.e. 151  iteration(s).
[INFO 24-04-20 11:50:31.4518 UTC gradient_boosted_trees.cc:333] Final model num-trees:151 valid-loss:0.627163 valid-accuracy:0.850819
[INFO 24-04-20 11:50:31.4576 UTC hyperparameters_optimizer.cc:593] [10/50] Score: -0.627163 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:50:31.4577 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:50:31.4577 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:50:31.4663 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:50:31.5937 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.016328 train-accuracy:0.761895 valid-loss:1.070658 valid-accuracy:0.736609
[INFO 24-04-20 11:50:40.0563 UTC gradient_boosted_trees.cc:1592]  num-trees:73 train-loss:0.547573 train-accuracy:0.877660 valid-loss:0.599204 valid-accuracy:0.864099
[INFO 24-04-20 11:50:55.9566 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.584505
[INFO 24-04-20 11:50:55.9566 UTC gradient_boosted_trees.cc:270] Truncates the model to 173 tree(s) i.e. 173  iteration(s).
[INFO 24-04-20 11:50:55.9569 UTC gradient_boosted_trees.cc:333] Final model num-trees:173 valid-loss:0.584505 valid-accuracy:0.865427
[INFO 24-04-20 11:50:55.9584 UTC hyperparameters_optimizer.cc:593] [11/50] Score: -0.584505 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 1 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:50:55.9586 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:50:55.9586 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:50:55.9644 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:50:56.2077 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.079718 train-accuracy:0.761895 valid-loss:1.137516 valid-accuracy:0.736609
[INFO 24-04-20 11:51:10.0877 UTC gradient_boosted_trees.cc:1592]  num-trees:57 train-loss:0.677483 train-accuracy:0.860761 valid-loss:0.727486 valid-accuracy:0.843293
[INFO 24-04-20 11:51:40.3105 UTC gradient_boosted_trees.cc:1592]  num-trees:180 train-loss:0.540261 train-accuracy:0.878488 valid-loss:0.606224 valid-accuracy:0.863656
[INFO 24-04-20 11:52:10.4827 UTC gradient_boosted_trees.cc:1592]  num-trees:299 train-loss:0.500395 train-accuracy:0.889690 valid-loss:0.589758 valid-accuracy:0.864985
[INFO 24-04-20 11:52:10.7256 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.500055 train-accuracy:0.889787 valid-loss:0.589640 valid-accuracy:0.865427
[INFO 24-04-20 11:52:10.7257 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:52:10.7257 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.589640 valid-accuracy:0.865427
[INFO 24-04-20 11:52:10.7324 UTC hyperparameters_optimizer.cc:593] [12/50] Score: -0.58964 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 11:52:10.7328 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:52:10.7328 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:52:10.7422 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:52:10.8188 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.060520 train-accuracy:0.761895 valid-loss:1.117708 valid-accuracy:0.736609
[INFO 24-04-20 11:52:34.5050 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.540605 train-accuracy:0.879462 valid-loss:0.594701 valid-accuracy:0.868526
[INFO 24-04-20 11:52:34.5050 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 11:52:34.5050 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.594616 valid-accuracy:0.869411
[INFO 24-04-20 11:52:34.5062 UTC hyperparameters_optimizer.cc:593] [13/50] Score: -0.594616 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 4 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:52:34.5063 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:52:34.5063 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:52:34.5123 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:52:34.6701 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.053581 train-accuracy:0.761895 valid-loss:1.110675 valid-accuracy:0.736609
[INFO 24-04-20 11:52:40.5919 UTC gradient_boosted_trees.cc:1592]  num-trees:38 train-loss:0.597302 train-accuracy:0.874836 valid-loss:0.646977 valid-accuracy:0.863214
[INFO 24-04-20 11:53:04.4197 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.588014
[INFO 24-04-20 11:53:04.4197 UTC gradient_boosted_trees.cc:270] Truncates the model to 153 tree(s) i.e. 153  iteration(s).
[INFO 24-04-20 11:53:04.4202 UTC gradient_boosted_trees.cc:333] Final model num-trees:153 valid-loss:0.588014 valid-accuracy:0.868969
[INFO 24-04-20 11:53:04.4241 UTC hyperparameters_optimizer.cc:593] [14/50] Score: -0.588014 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:53:04.4242 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:53:04.4242 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:53:04.4312 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:53:04.6968 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080979 train-accuracy:0.761895 valid-loss:1.138389 valid-accuracy:0.736609
[INFO 24-04-20 11:53:10.8203 UTC gradient_boosted_trees.cc:1592]  num-trees:25 train-loss:0.827971 train-accuracy:0.811961 valid-loss:0.869491 valid-accuracy:0.791058
[INFO 24-04-20 11:53:40.9819 UTC gradient_boosted_trees.cc:1592]  num-trees:143 train-loss:0.568514 train-accuracy:0.875177 valid-loss:0.607375 valid-accuracy:0.861886
[INFO 24-04-20 11:54:10.9841 UTC gradient_boosted_trees.cc:1592]  num-trees:259 train-loss:0.522482 train-accuracy:0.884673 valid-loss:0.582253 valid-accuracy:0.865870
[INFO 24-04-20 11:54:21.7716 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.511275 train-accuracy:0.887255 valid-loss:0.579076 valid-accuracy:0.866313
[INFO 24-04-20 11:54:21.7717 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 11:54:21.7717 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.579013 valid-accuracy:0.866313
[INFO 24-04-20 11:54:21.7777 UTC hyperparameters_optimizer.cc:593] [15/50] Score: -0.579013 / -0.576524 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:54:21.7778 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:54:21.7779 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:54:21.7871 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:54:22.0178 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.019006 train-accuracy:0.761895 valid-loss:1.073190 valid-accuracy:0.736609
[INFO 24-04-20 11:54:41.1749 UTC gradient_boosted_trees.cc:1592]  num-trees:79 train-loss:0.529136 train-accuracy:0.881069 valid-loss:0.587939 valid-accuracy:0.868969
[INFO 24-04-20 11:55:11.3356 UTC gradient_boosted_trees.cc:1592]  num-trees:198 train-loss:0.458779 train-accuracy:0.898407 valid-loss:0.577907 valid-accuracy:0.864985
[INFO 24-04-20 11:55:24.3297 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.576464
[INFO 24-04-20 11:55:24.3298 UTC gradient_boosted_trees.cc:270] Truncates the model to 220 tree(s) i.e. 220  iteration(s).
[INFO 24-04-20 11:55:24.3300 UTC gradient_boosted_trees.cc:333] Final model num-trees:220 valid-loss:0.576464 valid-accuracy:0.865427
[INFO 24-04-20 11:55:24.3325 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:55:24.3325 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:55:24.3351 UTC hyperparameters_optimizer.cc:593] [16/50] Score: -0.576464 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:55:24.3388 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:55:24.5448 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.079152 train-accuracy:0.761895 valid-loss:1.137115 valid-accuracy:0.736609
[INFO 24-04-20 11:55:41.4136 UTC gradient_boosted_trees.cc:1592]  num-trees:85 train-loss:0.599402 train-accuracy:0.872303 valid-loss:0.662776 valid-accuracy:0.852590
[INFO 24-04-20 11:56:11.5979 UTC gradient_boosted_trees.cc:1592]  num-trees:234 train-loss:0.487885 train-accuracy:0.893489 valid-loss:0.597058 valid-accuracy:0.867198
[INFO 24-04-20 11:56:25.0461 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.462216 train-accuracy:0.900453 valid-loss:0.593662 valid-accuracy:0.867198
[INFO 24-04-20 11:56:25.0461 UTC gradient_boosted_trees.cc:270] Truncates the model to 294 tree(s) i.e. 294  iteration(s).
[INFO 24-04-20 11:56:25.0462 UTC gradient_boosted_trees.cc:333] Final model num-trees:294 valid-loss:0.593544 valid-accuracy:0.867198
[INFO 24-04-20 11:56:25.0549 UTC hyperparameters_optimizer.cc:593] [17/50] Score: -0.593544 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:56:25.0550 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:56:25.0551 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:56:25.0661 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:56:25.2518 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.053989 train-accuracy:0.761895 valid-loss:1.111597 valid-accuracy:0.736609
[INFO 24-04-20 11:56:41.7763 UTC gradient_boosted_trees.cc:1592]  num-trees:90 train-loss:0.513245 train-accuracy:0.891784 valid-loss:0.612604 valid-accuracy:0.864542
[INFO 24-04-20 11:56:55.5599 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.605471
[INFO 24-04-20 11:56:55.5599 UTC gradient_boosted_trees.cc:270] Truncates the model to 133 tree(s) i.e. 133  iteration(s).
[INFO 24-04-20 11:56:55.5605 UTC gradient_boosted_trees.cc:333] Final model num-trees:133 valid-loss:0.605471 valid-accuracy:0.864985
[INFO 24-04-20 11:56:55.5644 UTC hyperparameters_optimizer.cc:593] [18/50] Score: -0.605471 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:56:55.5645 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:56:55.5645 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:56:55.5717 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:56:55.8583 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.078470 train-accuracy:0.761895 valid-loss:1.135989 valid-accuracy:0.736609
[INFO 24-04-20 11:57:12.0757 UTC gradient_boosted_trees.cc:1592]  num-trees:55 train-loss:0.658252 train-accuracy:0.866361 valid-loss:0.718027 valid-accuracy:0.845950
[INFO 24-04-20 11:57:42.2076 UTC gradient_boosted_trees.cc:1592]  num-trees:155 train-loss:0.511689 train-accuracy:0.887060 valid-loss:0.601473 valid-accuracy:0.866755
[INFO 24-04-20 11:58:12.3039 UTC gradient_boosted_trees.cc:1592]  num-trees:256 train-loss:0.467628 train-accuracy:0.895875 valid-loss:0.583148 valid-accuracy:0.868526
[INFO 24-04-20 11:58:25.4031 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.450289 train-accuracy:0.900307 valid-loss:0.581562 valid-accuracy:0.868969
[INFO 24-04-20 11:58:25.4032 UTC gradient_boosted_trees.cc:270] Truncates the model to 296 tree(s) i.e. 296  iteration(s).
[INFO 24-04-20 11:58:25.4033 UTC gradient_boosted_trees.cc:333] Final model num-trees:296 valid-loss:0.581214 valid-accuracy:0.869411
[INFO 24-04-20 11:58:25.4147 UTC hyperparameters_optimizer.cc:593] [19/50] Score: -0.581214 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 11:58:25.4151 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:58:25.4151 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:58:25.4276 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:58:25.5298 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.082692 train-accuracy:0.761895 valid-loss:1.140741 valid-accuracy:0.736609
[INFO 24-04-20 11:58:42.3949 UTC gradient_boosted_trees.cc:1592]  num-trees:165 train-loss:0.625270 train-accuracy:0.859981 valid-loss:0.651116 valid-accuracy:0.846392
[INFO 24-04-20 11:58:56.4433 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.590130 train-accuracy:0.865290 valid-loss:0.620030 valid-accuracy:0.855688
[INFO 24-04-20 11:58:56.4433 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:58:56.4433 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.620030 valid-accuracy:0.855688
[INFO 24-04-20 11:58:56.4445 UTC hyperparameters_optimizer.cc:593] [20/50] Score: -0.62003 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 4 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 11:58:56.4446 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:58:56.4446 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:58:56.4504 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:58:56.5735 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.085297 train-accuracy:0.761895 valid-loss:1.143266 valid-accuracy:0.736609
[INFO 24-04-20 11:59:12.3995 UTC gradient_boosted_trees.cc:1592]  num-trees:135 train-loss:0.675725 train-accuracy:0.853942 valid-loss:0.706471 valid-accuracy:0.840637
[INFO 24-04-20 11:59:31.6827 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.616435 train-accuracy:0.859543 valid-loss:0.645493 valid-accuracy:0.849491
[INFO 24-04-20 11:59:31.6827 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 11:59:31.6828 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.645493 valid-accuracy:0.849491
[INFO 24-04-20 11:59:31.6835 UTC hyperparameters_optimizer.cc:593] [21/50] Score: -0.645493 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 11:59:31.6836 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 11:59:31.6836 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 11:59:31.6889 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 11:59:31.8587 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080746 train-accuracy:0.761895 valid-loss:1.138830 valid-accuracy:0.736609
[INFO 24-04-20 11:59:42.5366 UTC gradient_boosted_trees.cc:1592]  num-trees:64 train-loss:0.675732 train-accuracy:0.859787 valid-loss:0.713405 valid-accuracy:0.846392
[INFO 24-04-20 12:00:12.5430 UTC gradient_boosted_trees.cc:1592]  num-trees:240 train-loss:0.539488 train-accuracy:0.879560 valid-loss:0.596245 valid-accuracy:0.870739
[INFO 24-04-20 12:00:22.7129 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.522861 train-accuracy:0.883748 valid-loss:0.587343 valid-accuracy:0.872067
[INFO 24-04-20 12:00:22.7129 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:00:22.7130 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.587343 valid-accuracy:0.872067
[INFO 24-04-20 12:00:22.7171 UTC hyperparameters_optimizer.cc:593] [22/50] Score: -0.587343 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 1 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:00:22.7172 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:00:22.7172 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:00:22.7252 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:00:22.9412 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.017781 train-accuracy:0.761895 valid-loss:1.072645 valid-accuracy:0.736609
[INFO 24-04-20 12:00:42.7439 UTC gradient_boosted_trees.cc:1592]  num-trees:93 train-loss:0.486231 train-accuracy:0.897726 valid-loss:0.605622 valid-accuracy:0.861443
[INFO 24-04-20 12:00:49.1122 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.605239
[INFO 24-04-20 12:00:49.1123 UTC gradient_boosted_trees.cc:270] Truncates the model to 92 tree(s) i.e. 92  iteration(s).
[INFO 24-04-20 12:00:49.1127 UTC gradient_boosted_trees.cc:333] Final model num-trees:92 valid-loss:0.605239 valid-accuracy:0.859672
[INFO 24-04-20 12:00:49.1145 UTC hyperparameters_optimizer.cc:593] [23/50] Score: -0.605239 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:00:49.1146 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:00:49.1146 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:00:49.1203 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:00:49.3295 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.014103 train-accuracy:0.761895 valid-loss:1.069569 valid-accuracy:0.736609
[INFO 24-04-20 12:01:09.2048 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.620195
[INFO 24-04-20 12:01:09.2049 UTC gradient_boosted_trees.cc:270] Truncates the model to 65 tree(s) i.e. 65  iteration(s).
[INFO 24-04-20 12:01:09.2054 UTC gradient_boosted_trees.cc:333] Final model num-trees:65 valid-loss:0.620195 valid-accuracy:0.861886
[INFO 24-04-20 12:01:09.2069 UTC hyperparameters_optimizer.cc:593] [24/50] Score: -0.620195 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:01:09.2074 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:01:09.2075 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:01:09.2129 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:01:09.4876 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.054467 train-accuracy:0.761895 valid-loss:1.111421 valid-accuracy:0.736609
[INFO 24-04-20 12:01:12.8650 UTC gradient_boosted_trees.cc:1592]  num-trees:13 train-loss:0.776072 train-accuracy:0.822335 valid-loss:0.829307 valid-accuracy:0.796370
[INFO 24-04-20 12:01:42.9181 UTC gradient_boosted_trees.cc:1592]  num-trees:117 train-loss:0.490031 train-accuracy:0.891248 valid-loss:0.602710 valid-accuracy:0.858787
[INFO 24-04-20 12:02:10.5251 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.597233
[INFO 24-04-20 12:02:10.5252 UTC gradient_boosted_trees.cc:270] Truncates the model to 182 tree(s) i.e. 182  iteration(s).
[INFO 24-04-20 12:02:10.5257 UTC gradient_boosted_trees.cc:333] Final model num-trees:182 valid-loss:0.597233 valid-accuracy:0.861000
[INFO 24-04-20 12:02:10.5303 UTC hyperparameters_optimizer.cc:593] [25/50] Score: -0.597233 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 512 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:02:10.5305 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:02:10.5305 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:02:10.5382 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:02:10.8168 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080744 train-accuracy:0.761895 valid-loss:1.138851 valid-accuracy:0.736609
[INFO 24-04-20 12:02:13.0463 UTC gradient_boosted_trees.cc:1592]  num-trees:9 train-loss:0.969269 train-accuracy:0.761895 valid-loss:1.024583 valid-accuracy:0.736609
[INFO 24-04-20 12:02:43.3098 UTC gradient_boosted_trees.cc:1592]  num-trees:117 train-loss:0.577927 train-accuracy:0.875177 valid-loss:0.659990 valid-accuracy:0.850819
[INFO 24-04-20 12:03:13.4366 UTC gradient_boosted_trees.cc:1592]  num-trees:224 train-loss:0.510443 train-accuracy:0.887401 valid-loss:0.632056 valid-accuracy:0.855688
[INFO 24-04-20 12:03:34.6953 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.481158 train-accuracy:0.894657 valid-loss:0.627246 valid-accuracy:0.853918
[INFO 24-04-20 12:03:34.6953 UTC gradient_boosted_trees.cc:270] Truncates the model to 294 tree(s) i.e. 294  iteration(s).
[INFO 24-04-20 12:03:34.6954 UTC gradient_boosted_trees.cc:333] Final model num-trees:294 valid-loss:0.627238 valid-accuracy:0.854803
[INFO 24-04-20 12:03:34.7026 UTC hyperparameters_optimizer.cc:593] [26/50] Score: -0.627238 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:03:34.7027 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:03:34.7027 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:03:34.7126 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:03:35.0569 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.079504 train-accuracy:0.761895 valid-loss:1.137702 valid-accuracy:0.736609
[INFO 24-04-20 12:03:43.5018 UTC gradient_boosted_trees.cc:1592]  num-trees:26 train-loss:0.801994 train-accuracy:0.810208 valid-loss:0.861103 valid-accuracy:0.786189
[INFO 24-04-20 12:04:13.7312 UTC gradient_boosted_trees.cc:1592]  num-trees:114 train-loss:0.546379 train-accuracy:0.890566 valid-loss:0.643705 valid-accuracy:0.853918
[INFO 24-04-20 12:04:43.7507 UTC gradient_boosted_trees.cc:1592]  num-trees:202 train-loss:0.480620 train-accuracy:0.901670 valid-loss:0.612598 valid-accuracy:0.860115
[INFO 24-04-20 12:05:13.9332 UTC gradient_boosted_trees.cc:1592]  num-trees:289 train-loss:0.441230 train-accuracy:0.912288 valid-loss:0.605467 valid-accuracy:0.859672
[INFO 24-04-20 12:05:17.7591 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.436997 train-accuracy:0.912872 valid-loss:0.604192 valid-accuracy:0.858787
[INFO 24-04-20 12:05:17.7592 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:05:17.7592 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.604192 valid-accuracy:0.858787
[INFO 24-04-20 12:05:17.7704 UTC hyperparameters_optimizer.cc:593] [27/50] Score: -0.604192 / -0.576464 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:05:17.7705 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:05:17.7705 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:05:17.7845 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:05:18.0993 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.079121 train-accuracy:0.761895 valid-loss:1.136935 valid-accuracy:0.736609
[INFO 24-04-20 12:05:44.0798 UTC gradient_boosted_trees.cc:1592]  num-trees:82 train-loss:0.593026 train-accuracy:0.874154 valid-loss:0.652801 valid-accuracy:0.853032
[INFO 24-04-20 12:06:14.0818 UTC gradient_boosted_trees.cc:1592]  num-trees:174 train-loss:0.507478 train-accuracy:0.888618 valid-loss:0.588100 valid-accuracy:0.864099
[INFO 24-04-20 12:06:44.2572 UTC gradient_boosted_trees.cc:1592]  num-trees:266 train-loss:0.470497 train-accuracy:0.896021 valid-loss:0.575175 valid-accuracy:0.867198
[INFO 24-04-20 12:06:55.6128 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.458649 train-accuracy:0.899089 valid-loss:0.573679 valid-accuracy:0.865870
[INFO 24-04-20 12:06:55.6128 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:06:55.6128 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.573679 valid-accuracy:0.865870
[INFO 24-04-20 12:06:55.6249 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:06:55.6250 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:06:55.6269 UTC hyperparameters_optimizer.cc:593] [28/50] Score: -0.573679 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 12:06:55.6312 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:06:55.7339 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.034401 train-accuracy:0.761895 valid-loss:1.090276 valid-accuracy:0.736609
[INFO 24-04-20 12:07:14.2759 UTC gradient_boosted_trees.cc:1592]  num-trees:182 train-loss:0.577740 train-accuracy:0.869478 valid-loss:0.624283 valid-accuracy:0.858787
[INFO 24-04-20 12:07:26.4369 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.559424 train-accuracy:0.873813 valid-loss:0.617982 valid-accuracy:0.862771
[INFO 24-04-20 12:07:26.4369 UTC gradient_boosted_trees.cc:270] Truncates the model to 296 tree(s) i.e. 296  iteration(s).
[INFO 24-04-20 12:07:26.4370 UTC gradient_boosted_trees.cc:333] Final model num-trees:296 valid-loss:0.617692 valid-accuracy:0.862771
[INFO 24-04-20 12:07:26.4377 UTC hyperparameters_optimizer.cc:593] [29/50] Score: -0.617692 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 12:07:26.4378 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:07:26.4378 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:07:26.4430 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:07:26.6506 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.009800 train-accuracy:0.761895 valid-loss:1.063156 valid-accuracy:0.736609
[INFO 24-04-20 12:07:44.3201 UTC gradient_boosted_trees.cc:1592]  num-trees:86 train-loss:0.447820 train-accuracy:0.906784 valid-loss:0.584784 valid-accuracy:0.870739
[INFO 24-04-20 12:08:03.0640 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.582429
[INFO 24-04-20 12:08:03.0641 UTC gradient_boosted_trees.cc:270] Truncates the model to 146 tree(s) i.e. 146  iteration(s).
[INFO 24-04-20 12:08:03.0647 UTC gradient_boosted_trees.cc:333] Final model num-trees:146 valid-loss:0.582429 valid-accuracy:0.868526
[INFO 24-04-20 12:08:03.0688 UTC hyperparameters_optimizer.cc:593] [30/50] Score: -0.582429 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 1 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 512 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:08:03.0689 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:08:03.0690 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:08:03.0763 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:08:03.3195 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.050215 train-accuracy:0.761895 valid-loss:1.106337 valid-accuracy:0.736609
[INFO 24-04-20 12:08:14.4202 UTC gradient_boosted_trees.cc:1592]  num-trees:46 train-loss:0.542935 train-accuracy:0.887693 valid-loss:0.615639 valid-accuracy:0.864985
[INFO 24-04-20 12:08:44.5841 UTC gradient_boosted_trees.cc:1592]  num-trees:168 train-loss:0.400861 train-accuracy:0.919739 valid-loss:0.581155 valid-accuracy:0.868969
[INFO 24-04-20 12:08:46.3216 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.579083
[INFO 24-04-20 12:08:46.3217 UTC gradient_boosted_trees.cc:270] Truncates the model to 145 tree(s) i.e. 145  iteration(s).
[INFO 24-04-20 12:08:46.3226 UTC gradient_boosted_trees.cc:333] Final model num-trees:145 valid-loss:0.579083 valid-accuracy:0.871182
[INFO 24-04-20 12:08:46.3295 UTC hyperparameters_optimizer.cc:593] [31/50] Score: -0.579083 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:08:46.3298 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:08:46.3299 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:08:46.3387 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:08:46.4970 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.060070 train-accuracy:0.761895 valid-loss:1.117365 valid-accuracy:0.736609
[INFO 24-04-20 12:09:14.6947 UTC gradient_boosted_trees.cc:1592]  num-trees:172 train-loss:0.561195 train-accuracy:0.871816 valid-loss:0.603439 valid-accuracy:0.861886
[INFO 24-04-20 12:09:36.0270 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.534396 train-accuracy:0.878732 valid-loss:0.592159 valid-accuracy:0.864985
[INFO 24-04-20 12:09:36.0270 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:09:36.0270 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.592159 valid-accuracy:0.864985
[INFO 24-04-20 12:09:36.0281 UTC hyperparameters_optimizer.cc:593] [32/50] Score: -0.592159 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 4 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:09:36.0282 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:09:36.0283 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:09:36.0339 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:09:36.2970 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.082344 train-accuracy:0.761895 valid-loss:1.140383 valid-accuracy:0.736609
[INFO 24-04-20 12:09:44.7215 UTC gradient_boosted_trees.cc:1592]  num-trees:35 train-loss:0.809653 train-accuracy:0.812935 valid-loss:0.859870 valid-accuracy:0.786189
[INFO 24-04-20 12:10:14.7416 UTC gradient_boosted_trees.cc:1592]  num-trees:153 train-loss:0.611812 train-accuracy:0.862027 valid-loss:0.654732 valid-accuracy:0.841965
[INFO 24-04-20 12:10:44.9008 UTC gradient_boosted_trees.cc:1592]  num-trees:264 train-loss:0.571651 train-accuracy:0.870209 valid-loss:0.622257 valid-accuracy:0.853475
[INFO 24-04-20 12:10:55.1758 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.563066 train-accuracy:0.873228 valid-loss:0.617188 valid-accuracy:0.858344
[INFO 24-04-20 12:10:55.1758 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:10:55.1758 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.617188 valid-accuracy:0.858344
[INFO 24-04-20 12:10:55.1785 UTC hyperparameters_optimizer.cc:593] [33/50] Score: -0.617188 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 12:10:55.1786 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:10:55.1786 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:10:55.1850 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:10:55.5169 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.056010 train-accuracy:0.761895 valid-loss:1.115039 valid-accuracy:0.736609
[INFO 24-04-20 12:11:14.9320 UTC gradient_boosted_trees.cc:1592]  num-trees:62 train-loss:0.560039 train-accuracy:0.882092 valid-loss:0.659251 valid-accuracy:0.845064
[INFO 24-04-20 12:11:45.0392 UTC gradient_boosted_trees.cc:1592]  num-trees:155 train-loss:0.480452 train-accuracy:0.900794 valid-loss:0.635471 valid-accuracy:0.852590
[INFO 24-04-20 12:12:11.1738 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.633523
[INFO 24-04-20 12:12:11.1738 UTC gradient_boosted_trees.cc:270] Truncates the model to 205 tree(s) i.e. 205  iteration(s).
[INFO 24-04-20 12:12:11.1744 UTC gradient_boosted_trees.cc:333] Final model num-trees:205 valid-loss:0.633523 valid-accuracy:0.854803
[INFO 24-04-20 12:12:11.1789 UTC hyperparameters_optimizer.cc:593] [34/50] Score: -0.633523 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:12:11.1790 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:12:11.1791 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:12:11.1871 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:12:11.4212 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080643 train-accuracy:0.761895 valid-loss:1.138458 valid-accuracy:0.736609
[INFO 24-04-20 12:12:15.1625 UTC gradient_boosted_trees.cc:1592]  num-trees:17 train-loss:0.893533 train-accuracy:0.761895 valid-loss:0.941465 valid-accuracy:0.736609
[INFO 24-04-20 12:12:45.2827 UTC gradient_boosted_trees.cc:1592]  num-trees:145 train-loss:0.581445 train-accuracy:0.869478 valid-loss:0.621648 valid-accuracy:0.859230
[INFO 24-04-20 12:13:15.4609 UTC gradient_boosted_trees.cc:1592]  num-trees:271 train-loss:0.532678 train-accuracy:0.879852 valid-loss:0.591891 valid-accuracy:0.866755
[INFO 24-04-20 12:13:22.3835 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.525324 train-accuracy:0.882677 valid-loss:0.588533 valid-accuracy:0.869854
[INFO 24-04-20 12:13:22.3836 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 12:13:22.3836 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.588517 valid-accuracy:0.870297
[INFO 24-04-20 12:13:22.3877 UTC hyperparameters_optimizer.cc:593] [35/50] Score: -0.588517 / -0.573679 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:13:22.3878 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:13:22.3878 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:13:22.3953 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:13:22.6347 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.008621 train-accuracy:0.761895 valid-loss:1.061219 valid-accuracy:0.736609
[INFO 24-04-20 12:13:45.6070 UTC gradient_boosted_trees.cc:1592]  num-trees:100 train-loss:0.441264 train-accuracy:0.907174 valid-loss:0.576598 valid-accuracy:0.868083
[INFO 24-04-20 12:13:48.4084 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.571786
[INFO 24-04-20 12:13:48.4084 UTC gradient_boosted_trees.cc:270] Truncates the model to 82 tree(s) i.e. 82  iteration(s).
[INFO 24-04-20 12:13:48.4090 UTC gradient_boosted_trees.cc:333] Final model num-trees:82 valid-loss:0.571786 valid-accuracy:0.871182
[INFO 24-04-20 12:13:48.4118 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:13:48.4118 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:13:48.4231 UTC hyperparameters_optimizer.cc:593] [36/50] Score: -0.571786 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:13:48.4253 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:13:48.7022 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.022495 train-accuracy:0.761895 valid-loss:1.078056 valid-accuracy:0.736609
[INFO 24-04-20 12:14:15.8238 UTC gradient_boosted_trees.cc:1592]  num-trees:96 train-loss:0.531460 train-accuracy:0.886427 valid-loss:0.631790 valid-accuracy:0.851704
[INFO 24-04-20 12:14:44.2647 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.625075
[INFO 24-04-20 12:14:44.2648 UTC gradient_boosted_trees.cc:270] Truncates the model to 175 tree(s) i.e. 175  iteration(s).
[INFO 24-04-20 12:14:44.2651 UTC gradient_boosted_trees.cc:333] Final model num-trees:175 valid-loss:0.625075 valid-accuracy:0.853918
[INFO 24-04-20 12:14:44.2668 UTC hyperparameters_optimizer.cc:593] [37/50] Score: -0.625075 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:14:44.2671 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:14:44.2671 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:14:44.2730 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:14:44.3195 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080831 train-accuracy:0.761895 valid-loss:1.138862 valid-accuracy:0.736609
[INFO 24-04-20 12:14:45.8472 UTC gradient_boosted_trees.cc:1592]  num-trees:42 train-loss:0.756284 train-accuracy:0.846735 valid-loss:0.797705 valid-accuracy:0.826915
[INFO 24-04-20 12:14:55.2152 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.531490 train-accuracy:0.883651 valid-loss:0.588020 valid-accuracy:0.872067
[INFO 24-04-20 12:14:55.2152 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 12:14:55.2153 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.588008 valid-accuracy:0.872067
[INFO 24-04-20 12:14:55.2192 UTC hyperparameters_optimizer.cc:593] [38/50] Score: -0.588008 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "AXIS_ALIGNED" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 6 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 12:14:55.2196 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:14:55.2196 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:14:55.2271 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:14:55.3151 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.035720 train-accuracy:0.761895 valid-loss:1.091776 valid-accuracy:0.736609
[INFO 24-04-20 12:15:15.9067 UTC gradient_boosted_trees.cc:1592]  num-trees:238 train-loss:0.559601 train-accuracy:0.874300 valid-loss:0.610479 valid-accuracy:0.864542
[INFO 24-04-20 12:15:21.2798 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.549374 train-accuracy:0.876881 valid-loss:0.605893 valid-accuracy:0.866313
[INFO 24-04-20 12:15:21.2798 UTC gradient_boosted_trees.cc:270] Truncates the model to 299 tree(s) i.e. 299  iteration(s).
[INFO 24-04-20 12:15:21.2798 UTC gradient_boosted_trees.cc:333] Final model num-trees:299 valid-loss:0.605842 valid-accuracy:0.866313
[INFO 24-04-20 12:15:21.2805 UTC hyperparameters_optimizer.cc:593] [39/50] Score: -0.605842 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:15:21.2807 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:15:21.2807 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:15:21.2862 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:15:21.5646 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.080070 train-accuracy:0.761895 valid-loss:1.138312 valid-accuracy:0.736609
[INFO 24-04-20 12:15:45.9122 UTC gradient_boosted_trees.cc:1592]  num-trees:88 train-loss:0.606703 train-accuracy:0.871816 valid-loss:0.649034 valid-accuracy:0.857902
[INFO 24-04-20 12:16:16.0088 UTC gradient_boosted_trees.cc:1592]  num-trees:195 train-loss:0.522986 train-accuracy:0.887498 valid-loss:0.588079 valid-accuracy:0.868969
[INFO 24-04-20 12:16:45.8272 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.489488 train-accuracy:0.894755 valid-loss:0.577959 valid-accuracy:0.869854
[INFO 24-04-20 12:16:45.8272 UTC gradient_boosted_trees.cc:270] Truncates the model to 298 tree(s) i.e. 298  iteration(s).
[INFO 24-04-20 12:16:45.8273 UTC gradient_boosted_trees.cc:333] Final model num-trees:298 valid-loss:0.577896 valid-accuracy:0.869411
[INFO 24-04-20 12:16:45.8336 UTC hyperparameters_optimizer.cc:593] [40/50] Score: -0.577896 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 256 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 20 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:16:45.8338 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:16:45.8338 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:16:45.8442 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:16:45.9250 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.033852 train-accuracy:0.761895 valid-loss:1.089140 valid-accuracy:0.736609
[INFO 24-04-20 12:16:46.0902 UTC gradient_boosted_trees.cc:1592]  num-trees:3 train-loss:0.944534 train-accuracy:0.761895 valid-loss:0.994035 valid-accuracy:0.736609
[INFO 24-04-20 12:17:11.1035 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.548920 train-accuracy:0.877125 valid-loss:0.598300 valid-accuracy:0.866755
[INFO 24-04-20 12:17:11.1036 UTC gradient_boosted_trees.cc:270] Truncates the model to 275 tree(s) i.e. 275  iteration(s).
[INFO 24-04-20 12:17:11.1036 UTC gradient_boosted_trees.cc:333] Final model num-trees:275 valid-loss:0.597798 valid-accuracy:0.867198
[INFO 24-04-20 12:17:11.1043 UTC hyperparameters_optimizer.cc:593] [41/50] Score: -0.597798 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:17:11.1045 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:17:11.1045 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:17:11.1095 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:17:11.3169 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.055057 train-accuracy:0.761895 valid-loss:1.112117 valid-accuracy:0.736609
[INFO 24-04-20 12:17:16.1276 UTC gradient_boosted_trees.cc:1592]  num-trees:24 train-loss:0.676387 train-accuracy:0.861783 valid-loss:0.737358 valid-accuracy:0.842408
[INFO 24-04-20 12:17:46.2224 UTC gradient_boosted_trees.cc:1592]  num-trees:163 train-loss:0.464683 train-accuracy:0.906200 valid-loss:0.633622 valid-accuracy:0.853032
[INFO 24-04-20 12:17:53.1728 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.633292
[INFO 24-04-20 12:17:53.1728 UTC gradient_boosted_trees.cc:270] Truncates the model to 165 tree(s) i.e. 165  iteration(s).
[INFO 24-04-20 12:17:53.1734 UTC gradient_boosted_trees.cc:333] Final model num-trees:165 valid-loss:0.633292 valid-accuracy:0.852147
[INFO 24-04-20 12:17:53.1776 UTC hyperparameters_optimizer.cc:593] [42/50] Score: -0.633292 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 512 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 12:17:53.1777 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:17:53.1778 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:17:53.1859 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:17:53.3984 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.011378 train-accuracy:0.761895 valid-loss:1.065565 valid-accuracy:0.736609
[INFO 24-04-20 12:18:16.2661 UTC gradient_boosted_trees.cc:1592]  num-trees:106 train-loss:0.450177 train-accuracy:0.907904 valid-loss:0.587472 valid-accuracy:0.866755
[INFO 24-04-20 12:18:18.1861 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.584868
[INFO 24-04-20 12:18:18.1861 UTC gradient_boosted_trees.cc:270] Truncates the model to 85 tree(s) i.e. 85  iteration(s).
[INFO 24-04-20 12:18:18.1867 UTC gradient_boosted_trees.cc:333] Final model num-trees:85 valid-loss:0.584868 valid-accuracy:0.866313
[INFO 24-04-20 12:18:18.1886 UTC hyperparameters_optimizer.cc:593] [43/50] Score: -0.584868 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 2 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.2 } }
[INFO 24-04-20 12:18:18.1888 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:18:18.1889 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:18:18.1950 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:18:18.4742 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.007702 train-accuracy:0.761895 valid-loss:1.061728 valid-accuracy:0.736609
[INFO 24-04-20 12:18:46.4609 UTC gradient_boosted_trees.cc:1592]  num-trees:102 train-loss:0.434600 train-accuracy:0.908878 valid-loss:0.585372 valid-accuracy:0.868526
[INFO 24-04-20 12:18:49.5825 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.581309
[INFO 24-04-20 12:18:49.5825 UTC gradient_boosted_trees.cc:270] Truncates the model to 83 tree(s) i.e. 83  iteration(s).
[INFO 24-04-20 12:18:49.5832 UTC gradient_boosted_trees.cc:333] Final model num-trees:83 valid-loss:0.581309 valid-accuracy:0.868969
[INFO 24-04-20 12:18:49.5857 UTC hyperparameters_optimizer.cc:593] [44/50] Score: -0.581309 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 64 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.9 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 12:18:49.5860 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:18:49.5861 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:18:49.5926 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:18:49.9228 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.054670 train-accuracy:0.761895 valid-loss:1.111134 valid-accuracy:0.736609
[INFO 24-04-20 12:19:16.7366 UTC gradient_boosted_trees.cc:1592]  num-trees:84 train-loss:0.532153 train-accuracy:0.885501 valid-loss:0.591597 valid-accuracy:0.864542
[INFO 24-04-20 12:19:46.8041 UTC gradient_boosted_trees.cc:1592]  num-trees:175 train-loss:0.477048 train-accuracy:0.896946 valid-loss:0.575477 valid-accuracy:0.866313
[INFO 24-04-20 12:19:55.6225 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.575182
[INFO 24-04-20 12:19:55.6226 UTC gradient_boosted_trees.cc:270] Truncates the model to 172 tree(s) i.e. 172  iteration(s).
[INFO 24-04-20 12:19:55.6230 UTC gradient_boosted_trees.cc:333] Final model num-trees:172 valid-loss:0.575182 valid-accuracy:0.866313
[INFO 24-04-20 12:19:55.6265 UTC hyperparameters_optimizer.cc:593] [45/50] Score: -0.575182 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 5 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "RANDOM" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 32 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 7 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:19:55.6266 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:19:55.6267 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:19:55.6338 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:19:55.7469 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.034381 train-accuracy:0.761895 valid-loss:1.090479 valid-accuracy:0.736609
[INFO 24-04-20 12:20:16.8927 UTC gradient_boosted_trees.cc:1592]  num-trees:190 train-loss:0.571752 train-accuracy:0.870939 valid-loss:0.618687 valid-accuracy:0.860115
[INFO 24-04-20 12:20:29.2130 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.553388 train-accuracy:0.875907 valid-loss:0.612170 valid-accuracy:0.860115
[INFO 24-04-20 12:20:29.2131 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:20:29.2131 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.612170 valid-accuracy:0.860115
[INFO 24-04-20 12:20:29.2138 UTC hyperparameters_optimizer.cc:593] [46/50] Score: -0.61217 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.1 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:20:29.2139 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:20:29.2139 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:20:29.2192 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:20:29.3209 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.084782 train-accuracy:0.761895 valid-loss:1.143113 valid-accuracy:0.736609
[INFO 24-04-20 12:20:46.9769 UTC gradient_boosted_trees.cc:1592]  num-trees:169 train-loss:0.665927 train-accuracy:0.853114 valid-loss:0.700177 valid-accuracy:0.835325
[INFO 24-04-20 12:21:00.6692 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.626406 train-accuracy:0.857059 valid-loss:0.658615 valid-accuracy:0.840637
[INFO 24-04-20 12:21:00.6692 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:21:00.6692 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.658615 valid-accuracy:0.840637
[INFO 24-04-20 12:21:00.6700 UTC hyperparameters_optimizer.cc:593] [47/50] Score: -0.658615 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 4 } } fields { name: "sparse_oblique_normalization" value { categorical: "NONE" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 3 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.5 } }
[INFO 24-04-20 12:21:00.6702 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:21:00.6702 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:21:00.6757 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:21:00.8668 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.078871 train-accuracy:0.761895 valid-loss:1.137038 valid-accuracy:0.736609
[INFO 24-04-20 12:21:17.0580 UTC gradient_boosted_trees.cc:1592]  num-trees:84 train-loss:0.588684 train-accuracy:0.882531 valid-loss:0.664665 valid-accuracy:0.853918
[INFO 24-04-20 12:21:47.2018 UTC gradient_boosted_trees.cc:1592]  num-trees:235 train-loss:0.464404 train-accuracy:0.905323 valid-loss:0.602023 valid-accuracy:0.865427
[INFO 24-04-20 12:22:00.2663 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.435161 train-accuracy:0.912677 valid-loss:0.598654 valid-accuracy:0.865870
[INFO 24-04-20 12:22:00.2663 UTC gradient_boosted_trees.cc:270] Truncates the model to 300 tree(s) i.e. 300  iteration(s).
[INFO 24-04-20 12:22:00.2663 UTC gradient_boosted_trees.cc:333] Final model num-trees:300 valid-loss:0.598654 valid-accuracy:0.865870
[INFO 24-04-20 12:22:00.2780 UTC hyperparameters_optimizer.cc:593] [48/50] Score: -0.598654 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "MIN_MAX" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "LOCAL" } } fields { name: "max_depth" value { integer: 8 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.6 } } fields { name: "shrinkage" value { real: 0.02 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "false" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:22:00.2781 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:22:00.2782 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:22:00.2924 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:22:00.5463 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.059373 train-accuracy:0.761895 valid-loss:1.116517 valid-accuracy:0.736609
[INFO 24-04-20 12:22:17.2567 UTC gradient_boosted_trees.cc:1592]  num-trees:65 train-loss:0.594698 train-accuracy:0.865095 valid-loss:0.634110 valid-accuracy:0.852590
[INFO 24-04-20 12:22:47.4207 UTC gradient_boosted_trees.cc:1592]  num-trees:171 train-loss:0.531316 train-accuracy:0.881751 valid-loss:0.593663 valid-accuracy:0.864099
[INFO 24-04-20 12:23:17.5371 UTC gradient_boosted_trees.cc:1592]  num-trees:275 train-loss:0.495416 train-accuracy:0.891053 valid-loss:0.587540 valid-accuracy:0.861000
[INFO 24-04-20 12:23:24.7253 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.487725 train-accuracy:0.892709 valid-loss:0.586623 valid-accuracy:0.862771
[INFO 24-04-20 12:23:24.7253 UTC gradient_boosted_trees.cc:270] Truncates the model to 298 tree(s) i.e. 298  iteration(s).
[INFO 24-04-20 12:23:24.7253 UTC gradient_boosted_trees.cc:333] Final model num-trees:298 valid-loss:0.586586 valid-accuracy:0.862771
[INFO 24-04-20 12:23:24.7279 UTC hyperparameters_optimizer.cc:593] [49/50] Score: -0.586586 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 3 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "CONTINUOUS" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 1 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 10 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 1 } }
[INFO 24-04-20 12:23:24.7283 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:23:24.7283 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:23:24.7349 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:23:24.9005 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:1.055494 train-accuracy:0.761895 valid-loss:1.112262 valid-accuracy:0.736609
[INFO 24-04-20 12:23:47.5936 UTC gradient_boosted_trees.cc:1592]  num-trees:145 train-loss:0.544730 train-accuracy:0.878342 valid-loss:0.589646 valid-accuracy:0.866755
[INFO 24-04-20 12:24:13.3175 UTC gradient_boosted_trees.cc:1590]  num-trees:300 train-loss:0.494594 train-accuracy:0.889105 valid-loss:0.578382 valid-accuracy:0.870739
[INFO 24-04-20 12:24:13.3175 UTC gradient_boosted_trees.cc:270] Truncates the model to 293 tree(s) i.e. 293  iteration(s).
[INFO 24-04-20 12:24:13.3176 UTC gradient_boosted_trees.cc:333] Final model num-trees:293 valid-loss:0.577675 valid-accuracy:0.871625
[INFO 24-04-20 12:24:13.3200 UTC hyperparameters_optimizer.cc:593] [50/50] Score: -0.577675 / -0.571786 HParams: fields { name: "split_axis" value { categorical: "SPARSE_OBLIQUE" } } fields { name: "sparse_oblique_projection_density_factor" value { real: 1 } } fields { name: "sparse_oblique_normalization" value { categorical: "STANDARD_DEVIATION" } } fields { name: "sparse_oblique_weights" value { categorical: "BINARY" } } fields { name: "categorical_algorithm" value { categorical: "CART" } } fields { name: "growing_strategy" value { categorical: "BEST_FIRST_GLOBAL" } } fields { name: "max_num_nodes" value { integer: 16 } } fields { name: "sampling_method" value { categorical: "RANDOM" } } fields { name: "subsample" value { real: 0.8 } } fields { name: "shrinkage" value { real: 0.05 } } fields { name: "min_examples" value { integer: 5 } } fields { name: "use_hessian_gain" value { categorical: "true" } } fields { name: "num_candidate_attributes_ratio" value { real: 0.9 } }
[INFO 24-04-20 12:24:13.3240 UTC hyperparameters_optimizer.cc:224] Best hyperparameters:
fields {
  name: "split_axis"
  value {
    categorical: "SPARSE_OBLIQUE"
  }
}
fields {
  name: "sparse_oblique_projection_density_factor"
  value {
    real: 2
  }
}
fields {
  name: "sparse_oblique_normalization"
  value {
    categorical: "NONE"
  }
}
fields {
  name: "sparse_oblique_weights"
  value {
    categorical: "BINARY"
  }
}
fields {
  name: "categorical_algorithm"
  value {
    categorical: "RANDOM"
  }
}
fields {
  name: "growing_strategy"
  value {
    categorical: "BEST_FIRST_GLOBAL"
  }
}
fields {
  name: "max_num_nodes"
  value {
    integer: 64
  }
}
fields {
  name: "sampling_method"
  value {
    categorical: "RANDOM"
  }
}
fields {
  name: "subsample"
  value {
    real: 0.9
  }
}
fields {
  name: "shrinkage"
  value {
    real: 0.1
  }
}
fields {
  name: "min_examples"
  value {
    integer: 7
  }
}
fields {
  name: "use_hessian_gain"
  value {
    categorical: "false"
  }
}
fields {
  name: "num_candidate_attributes_ratio"
  value {
    real: 1
  }
}

[INFO 24-04-20 12:24:13.3245 UTC kernel.cc:919] Export model in log directory: /tmpfs/tmp/tmpl269t4qx with prefix 1585dd90b3df4ade
[INFO 24-04-20 12:24:13.3315 UTC kernel.cc:937] Save model in resources
[INFO 24-04-20 12:24:13.3344 UTC abstract_model.cc:881] Model self evaluation:
Task: CLASSIFICATION
Label: __LABEL
Loss (BINOMIAL_LOG_LIKELIHOOD): 0.571786

Accuracy: 0.871182  CI95[W][0 1]
ErrorRate: : 0.128818


Confusion Table:
truth\prediction
      1    2
1  1569   95
2   196  399
Total: 2259


[INFO 24-04-20 12:24:13.3517 UTC kernel.cc:1233] Loading model from path /tmpfs/tmp/tmpl269t4qx/model/ with prefix 1585dd90b3df4ade
[INFO 24-04-20 12:24:13.3755 UTC decision_forest.cc:734] Model loaded with 82 root(s), 7626 node(s), and 14 input feature(s).
[INFO 24-04-20 12:24:13.3755 UTC abstract_model.cc:1344] Engine "GradientBoostedTreesGeneric" built
[INFO 24-04-20 12:24:13.3756 UTC kernel.cc:1061] Use fast generic engine
Model trained in 0:40:46.518024
Compiling model...
Model compiled.
CPU times: user 40min 56s, sys: 3.43 s, total: 41min
Wall time: 40min 47s
<tf_keras.src.callbacks.History at 0x7f39bc270e20>
# Evaluate the model
tuned_model.compile(["accuracy"])
tuned_test_accuracy = tuned_model.evaluate(test_ds, return_dict=True, verbose=0)["accuracy"]
print(f"Test accuracy with the TF-DF hyper-parameter tuner: {tuned_test_accuracy:.4f}")
Test accuracy with the TF-DF hyper-parameter tuner: 0.8741

Same as before, display the tuning logs.

# Display the tuning logs.
tuning_logs = tuned_model.make_inspector().tuning_logs()
tuning_logs.head()
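
If you want to inspect the strongest trials first, here is a minimal sketch using the tuning_logs dataframe from above (the score is the negative validation loss, so higher is better):

# Sort the trials by tuning score (higher is better) and show the top five.
tuning_logs.sort_values("score", ascending=False).head()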

Same as before, show the best hyper-parameters.

# Best hyper-parameters.
tuning_logs[tuning_logs.best].iloc[0]
score                                               -0.571786
evaluation_time                                   1821.530222
best                                                     True
split_axis                                     SPARSE_OBLIQUE
sparse_oblique_projection_density_factor                  2.0
sparse_oblique_normalization                             NONE
sparse_oblique_weights                                 BINARY
categorical_algorithm                                  RANDOM
growing_strategy                            BEST_FIRST_GLOBAL
max_num_nodes                                            64.0
sampling_method                                        RANDOM
subsample                                                 0.9
shrinkage                                                 0.1
min_examples                                                7
use_hessian_gain                                        false
num_candidate_attributes_ratio                            1.0
max_depth                                                 NaN
Name: 35, dtype: object
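
If you need these values as a plain Python dictionary (for example, to re-train a model outside of the tuner), a minimal sketch, again using the tuning_logs dataframe from above:

# Keep only the hyper-parameters: drop the tuner metadata entries and the
# parameters not used by the best trial (NaN).
best_trial = tuning_logs[tuning_logs.best].iloc[0]
best_params = best_trial.drop(["score", "evaluation_time", "best"]).dropna().to_dict()
print(best_params)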

Finally, plot the evolution of the model's quality during tuning:

plt.figure(figsize=(10, 5))
plt.plot(tuning_logs["score"], label="current trial")
plt.plot(tuning_logs["score"].cummax(), label="best trial")
plt.xlabel("Tuning step")
plt.ylabel("Tuning score")
plt.legend()
plt.show()

(Plot: tuning score vs. tuning step, showing the current trial and the best trial so far.)

Training a model with Keras Tuner (Alternative approach)

TensorFlow Decision Forests is based on the Keras framework, and it is compatible with the Keras tuner.

Currently, the TF-DF Tuner and the Keras Tuner are complementary: each supports features the other does not.

TF-DF Tuner

  • Automatic configuration of the objective.
  • Automatic extraction of validation dataset (if needed).
  • Supports model self-evaluation (e.g. out-of-bag evaluation).
  • Distributed hyper-parameter tuning.
  • Shared dataset access between trials: the TensorFlow dataset is read only once, speeding up tuning significantly on small datasets.

Keras Tuner

  • Supports tuning of pre-processing parameters.
  • Supports the Hyperband optimizer.
  • Supports custom objectives.

Let's tune a TF-DF model using the Keras tuner.

# Install the Keras tuner
!pip install keras-tuner -U -qq
import keras_tuner as kt
%%time

def build_model(hp):
  """Creates a model."""

  model = tfdf.keras.GradientBoostedTreesModel(
      min_examples=hp.Choice("min_examples", [2, 5, 7, 10]),
      categorical_algorithm=hp.Choice("categorical_algorithm", ["CART", "RANDOM"]),
      max_depth=hp.Choice("max_depth", [4, 5, 6, 7]),
      # The Keras tuner automatically converts boolean parameters to integers.
      use_hessian_gain=bool(hp.Choice("use_hessian_gain", [True, False])),
      shrinkage=hp.Choice("shrinkage", [0.02, 0.05, 0.10, 0.15]),
      num_candidate_attributes_ratio=hp.Choice("num_candidate_attributes_ratio", [0.2, 0.5, 0.9, 1.0]),
  )

  # Optimize the model accuracy as computed on the validation dataset.
  model.compile(metrics=["accuracy"])
  return model

keras_tuner = kt.RandomSearch(
    build_model,
    objective="val_accuracy",
    max_trials=50,
    overwrite=True,
    directory="/tmp/keras_tuning")

# Important: The tuning should not be done on the test dataset.

# Extract a validation dataset from the training dataset. The new training
# dataset is called the "sub-training-dataset".

def split_dataset(dataset, test_ratio=0.30):
  """Splits a panda dataframe in two."""
  test_indices = np.random.rand(len(dataset)) < test_ratio
  return dataset[~test_indices], dataset[test_indices]

sub_train_df, sub_valid_df = split_dataset(train_df)
sub_train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(sub_train_df, label="income")
sub_valid_ds = tfdf.keras.pd_dataframe_to_tf_dataset(sub_valid_df, label="income")

# Tune the model
keras_tuner.search(sub_train_ds, validation_data=sub_valid_ds)
Warning: The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
WARNING:absl:The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
Use /tmpfs/tmp/tmpln_gjjxu as temporary training directory

Search: Running Trial #1

Value             |Best Value So Far |Hyperparameter
10                |10                |min_examples
CART              |CART              |categorical_algorithm
7                 |7                 |max_depth
1                 |1                 |use_hessian_gain
0.15              |0.15              |shrinkage
0.5               |0.5               |num_candidate_attributes_ratio
[WARNING 24-04-20 12:24:17.3831 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:17.3831 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:17.3831 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
Warning: The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
WARNING:absl:The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
Use /tmpfs/tmp/tmpz2tcb0j7 as temporary training directory
[WARNING 24-04-20 12:24:17.8409 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:17.8410 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:17.8410 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".

---------------------------------------------------------------------------

FatalTypeError                            Traceback (most recent call last)

File <timed exec>:40


File /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras_tuner/src/engine/base_tuner.py:234, in BaseTuner.search(self, *fit_args, **fit_kwargs)
    231         continue
    233     self.on_trial_begin(trial)
--> 234     self._try_run_and_update_trial(trial, *fit_args, **fit_kwargs)
    235     self.on_trial_end(trial)
    236 self.on_search_end()


File /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras_tuner/src/engine/base_tuner.py:279, in BaseTuner._try_run_and_update_trial(self, trial, *fit_args, **fit_kwargs)
    277 except Exception as e:
    278     if isinstance(e, errors.FatalError):
--> 279         raise e
    280     if config_module.DEBUG:
    281         # Printing the stacktrace and the error.
    282         traceback.print_exc()


File /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras_tuner/src/engine/base_tuner.py:274, in BaseTuner._try_run_and_update_trial(self, trial, *fit_args, **fit_kwargs)
    272 def _try_run_and_update_trial(self, trial, *fit_args, **fit_kwargs):
    273     try:
--> 274         self._run_and_update_trial(trial, *fit_args, **fit_kwargs)
    275         trial.status = trial_module.TrialStatus.COMPLETED
    276         return


File /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras_tuner/src/engine/base_tuner.py:239, in BaseTuner._run_and_update_trial(self, trial, *fit_args, **fit_kwargs)
    238 def _run_and_update_trial(self, trial, *fit_args, **fit_kwargs):
--> 239     results = self.run_trial(trial, *fit_args, **fit_kwargs)
    240     if self.oracle.get_trial(trial.trial_id).metrics.exists(
    241         self.oracle.objective.name
    242     ):
    243         # The oracle is updated by calling `self.oracle.update_trial()` in
    244         # `Tuner.run_trial()`. For backward compatibility, we support this
    245         # use case. No further action needed in this case.
    246         warnings.warn(
    247             "The use case of calling "
    248             "`self.oracle.update_trial(trial_id, metrics)` "
   (...)
    254             stacklevel=2,
    255         )


File /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras_tuner/src/engine/tuner.py:314, in Tuner.run_trial(self, trial, *args, **kwargs)
    312     callbacks.append(model_checkpoint)
    313     copied_kwargs["callbacks"] = callbacks
--> 314     obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
    316     histories.append(obj_value)
    317 return histories


File /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras_tuner/src/engine/tuner.py:232, in Tuner._build_and_fit_model(self, trial, *args, **kwargs)
    214 """For AutoKeras to override.
    215 
    216 DO NOT REMOVE this function. AutoKeras overrides the function to tune
   (...)
    229     The fit history.
    230 """
    231 hp = trial.hyperparameters
--> 232 model = self._try_build(hp)
    233 results = self.hypermodel.fit(hp, model, *args, **kwargs)
    235 # Save the build config for model loading later.


File /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras_tuner/src/engine/tuner.py:167, in Tuner._try_build(self, hp)
    165 # Stop if `build()` does not return a valid model.
    166 if not isinstance(model, keras.models.Model):
--> 167     raise errors.FatalTypeError(
    168         "Expected the model-building function, or HyperModel.build() "
    169         "to return a valid Keras Model instance. "
    170         f"Received: {model} of type {type(model)}."
    171     )
    172 # Check model size.
    173 size = maybe_compute_model_size(model)


FatalTypeError: Expected the model-building function, or HyperModel.build() to return a valid Keras Model instance. Received: <tensorflow_decision_forests.keras.GradientBoostedTreesModel object at 0x7f39d84d5490> of type <class 'tensorflow_decision_forests.keras.GradientBoostedTreesModel'>.

The best hyper-parameters are available with get_best_hyperparameters:

# Get the best hyper-parameters.
best_hyper_parameters = keras_tuner.get_best_hyperparameters()[0].values
print("Best hyper-parameters:", best_hyper_parameters)
Best hyper-parameters: {'min_examples': 10, 'categorical_algorithm': 'CART', 'max_depth': 7, 'use_hessian_gain': 1, 'shrinkage': 0.15, 'num_candidate_attributes_ratio': 0.5}
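
Keras Tuner can also print a summary of the search. For example, a minimal sketch using the tuner's built-in reporting:

# Show the best trials recorded by the tuner's oracle.
keras_tuner.results_summary(num_trials=3)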

The model should be re-trained with the best hyper-parameters:

%set_cell_height 300
# Train the model
# The Keras tuner automatically converts boolean parameters to integers.
best_hyper_parameters["use_hessian_gain"] = bool(best_hyper_parameters["use_hessian_gain"])
best_model = tfdf.keras.GradientBoostedTreesModel(**best_hyper_parameters)
best_model.fit(train_ds, verbose=2)
<IPython.core.display.Javascript object>
Warning: The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
WARNING:absl:The `num_threads` constructor argument is not set and the number of CPU is os.cpu_count()=32 > 32. Setting num_threads to 32. Set num_threads manually to use more than 32 cpus.
Use /tmpfs/tmp/tmp_uzoe05p as temporary training directory
Reading training dataset...
Training tensor examples:
Features: {'age': <tf.Tensor 'data:0' shape=(None,) dtype=int64>, 'workclass': <tf.Tensor 'data_1:0' shape=(None,) dtype=string>, 'fnlwgt': <tf.Tensor 'data_2:0' shape=(None,) dtype=int64>, 'education': <tf.Tensor 'data_3:0' shape=(None,) dtype=string>, 'education_num': <tf.Tensor 'data_4:0' shape=(None,) dtype=int64>, 'marital_status': <tf.Tensor 'data_5:0' shape=(None,) dtype=string>, 'occupation': <tf.Tensor 'data_6:0' shape=(None,) dtype=string>, 'relationship': <tf.Tensor 'data_7:0' shape=(None,) dtype=string>, 'race': <tf.Tensor 'data_8:0' shape=(None,) dtype=string>, 'sex': <tf.Tensor 'data_9:0' shape=(None,) dtype=string>, 'capital_gain': <tf.Tensor 'data_10:0' shape=(None,) dtype=int64>, 'capital_loss': <tf.Tensor 'data_11:0' shape=(None,) dtype=int64>, 'hours_per_week': <tf.Tensor 'data_12:0' shape=(None,) dtype=int64>, 'native_country': <tf.Tensor 'data_13:0' shape=(None,) dtype=string>}
Label: Tensor("data_14:0", shape=(None,), dtype=int64)
Weights: None
Normalized tensor features:
 {'age': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast:0' shape=(None,) dtype=float32>), 'workclass': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_1:0' shape=(None,) dtype=string>), 'fnlwgt': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_1:0' shape=(None,) dtype=float32>), 'education': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_3:0' shape=(None,) dtype=string>), 'education_num': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_2:0' shape=(None,) dtype=float32>), 'marital_status': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_5:0' shape=(None,) dtype=string>), 'occupation': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_6:0' shape=(None,) dtype=string>), 'relationship': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_7:0' shape=(None,) dtype=string>), 'race': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_8:0' shape=(None,) dtype=string>), 'sex': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_9:0' shape=(None,) dtype=string>), 'capital_gain': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_3:0' shape=(None,) dtype=float32>), 'capital_loss': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_4:0' shape=(None,) dtype=float32>), 'hours_per_week': SemanticTensor(semantic=<Semantic.NUMERICAL: 1>, tensor=<tf.Tensor 'Cast_5:0' shape=(None,) dtype=float32>), 'native_country': SemanticTensor(semantic=<Semantic.CATEGORICAL: 2>, tensor=<tf.Tensor 'data_13:0' shape=(None,) dtype=string>)}
[WARNING 24-04-20 12:24:18.3496 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:18.3496 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:18.3497 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
Training dataset read in 0:00:00.394819. Found 22792 examples.
Training model...
[INFO 24-04-20 12:24:18.7612 UTC kernel.cc:771] Start Yggdrasil model training
[INFO 24-04-20 12:24:18.7612 UTC kernel.cc:772] Collect training examples
[INFO 24-04-20 12:24:18.7612 UTC kernel.cc:785] Dataspec guide:
column_guides {
  column_name_pattern: "^__LABEL$"
  type: CATEGORICAL
  categorial {
    min_vocab_frequency: 0
    max_vocab_count: -1
  }
}
default_column_guide {
  categorial {
    max_vocab_count: 2000
  }
  discretized_numerical {
    maximum_num_bins: 255
  }
}
ignore_columns_without_guides: false
detect_numerical_as_discretized_numerical: false

[INFO 24-04-20 12:24:18.7613 UTC kernel.cc:391] Number of batches: 23
[INFO 24-04-20 12:24:18.7613 UTC kernel.cc:392] Number of examples: 22792
[INFO 24-04-20 12:24:18.7693 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column native_country (40 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 12:24:18.7694 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column occupation (13 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 12:24:18.7694 UTC data_spec_inference.cc:305] 1 item(s) have been pruned (i.e. they are considered out of dictionary) for the column workclass (7 item(s) left) because min_value_count=5 and max_number_of_unique_values=2000
[INFO 24-04-20 12:24:18.7756 UTC kernel.cc:792] Training dataset:
Number of records: 22792
Number of columns: 15

Number of columns by type:
    CATEGORICAL: 9 (60%)
    NUMERICAL: 6 (40%)

Columns:

CATEGORICAL: 9 (60%)
    0: "__LABEL" CATEGORICAL integerized vocab-size:3 no-ood-item
    4: "education" CATEGORICAL has-dict vocab-size:17 zero-ood-items most-frequent:"HS-grad" 7340 (32.2043%)
    8: "marital_status" CATEGORICAL has-dict vocab-size:8 zero-ood-items most-frequent:"Married-civ-spouse" 10431 (45.7661%)
    9: "native_country" CATEGORICAL num-nas:407 (1.78571%) has-dict vocab-size:41 num-oods:1 (0.00446728%) most-frequent:"United-States" 20436 (91.2933%)
    10: "occupation" CATEGORICAL num-nas:1260 (5.52826%) has-dict vocab-size:14 num-oods:4 (0.018577%) most-frequent:"Prof-specialty" 2870 (13.329%)
    11: "race" CATEGORICAL has-dict vocab-size:6 zero-ood-items most-frequent:"White" 19467 (85.4115%)
    12: "relationship" CATEGORICAL has-dict vocab-size:7 zero-ood-items most-frequent:"Husband" 9191 (40.3256%)
    13: "sex" CATEGORICAL has-dict vocab-size:3 zero-ood-items most-frequent:"Male" 15165 (66.5365%)
    14: "workclass" CATEGORICAL num-nas:1257 (5.51509%) has-dict vocab-size:8 num-oods:3 (0.0139308%) most-frequent:"Private" 15879 (73.7358%)

NUMERICAL: 6 (40%)
    1: "age" NUMERICAL mean:38.6153 min:17 max:90 sd:13.661
    2: "capital_gain" NUMERICAL mean:1081.9 min:0 max:99999 sd:7509.48
    3: "capital_loss" NUMERICAL mean:87.2806 min:0 max:4356 sd:403.01
    5: "education_num" NUMERICAL mean:10.0927 min:1 max:16 sd:2.56427
    6: "fnlwgt" NUMERICAL mean:189879 min:12285 max:1.4847e+06 sd:106423
    7: "hours_per_week" NUMERICAL mean:40.3955 min:1 max:99 sd:12.249

Terminology:
    nas: Number of non-available (i.e. missing) values.
    ood: Out of dictionary.
    manually-defined: Attribute whose type is manually defined by the user, i.e., the type was not automatically inferred.
    tokenized: The attribute value is obtained through tokenization.
    has-dict: The attribute is attached to a string dictionary e.g. a categorical attribute stored as a string.
    vocab-size: Number of unique values.

[INFO 24-04-20 12:24:18.7756 UTC kernel.cc:808] Configure learner
[WARNING 24-04-20 12:24:18.7758 UTC gradient_boosted_trees.cc:1840] "goss_alpha" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:18.7758 UTC gradient_boosted_trees.cc:1851] "goss_beta" set but "sampling_method" not equal to "GOSS".
[WARNING 24-04-20 12:24:18.7759 UTC gradient_boosted_trees.cc:1865] "selective_gradient_boosting_ratio" set but "sampling_method" not equal to "SELGB".
[INFO 24-04-20 12:24:18.7759 UTC kernel.cc:822] Training config:
learner: "GRADIENT_BOOSTED_TREES"
features: "^age$"
features: "^capital_gain$"
features: "^capital_loss$"
features: "^education$"
features: "^education_num$"
features: "^fnlwgt$"
features: "^hours_per_week$"
features: "^marital_status$"
features: "^native_country$"
features: "^occupation$"
features: "^race$"
features: "^relationship$"
features: "^sex$"
features: "^workclass$"
label: "^__LABEL$"
task: CLASSIFICATION
random_seed: 123456
metadata {
  framework: "TF Keras"
}
pure_serving_model: false
[yggdrasil_decision_forests.model.gradient_boosted_trees.proto.gradient_boosted_trees_config] {
  num_trees: 300
  decision_tree {
    max_depth: 7
    min_examples: 10
    in_split_min_examples_check: true
    keep_non_leaf_label_distribution: true
    missing_value_policy: GLOBAL_IMPUTATION
    allow_na_conditions: false
    categorical_set_greedy_forward {
      sampling: 0.1
      max_num_items: -1
      min_item_frequency: 1
    }
    growing_strategy_local {
    }
    categorical {
      cart {
      }
    }
    num_candidate_attributes_ratio: 0.5
    axis_aligned_split {
    }
    internal {
      sorting_strategy: PRESORTED
    }
    uplift {
      min_examples_in_treatment: 5
      split_score: KULLBACK_LEIBLER
    }
  }
  shrinkage: 0.15
  loss: DEFAULT
  validation_set_ratio: 0.1
  validation_interval_in_trees: 1
  early_stopping: VALIDATION_LOSS_INCREASE
  early_stopping_num_trees_look_ahead: 30
  l2_regularization: 0
  lambda_loss: 1
  mart {
  }
  adapt_subsample_for_maximum_training_duration: false
  l1_regularization: 0
  use_hessian_gain: true
  l2_regularization_categorical: 1
  stochastic_gradient_boosting {
    ratio: 1
  }
  apply_link_function: true
  compute_permutation_variable_importance: false
  binary_focal_loss_options {
    misprediction_exponent: 2
    positive_sample_coefficient: 0.5
  }
  early_stopping_initial_iteration: 10
}

[INFO 24-04-20 12:24:18.7760 UTC kernel.cc:825] Deployment config:
cache_path: "/tmpfs/tmp/tmp_uzoe05p/working_cache"
num_threads: 32
try_resume_training: true

[INFO 24-04-20 12:24:18.7762 UTC kernel.cc:887] Train model
[INFO 24-04-20 12:24:18.7763 UTC gradient_boosted_trees.cc:544] Default loss set to BINOMIAL_LOG_LIKELIHOOD
[INFO 24-04-20 12:24:18.7763 UTC gradient_boosted_trees.cc:1171] Training gradient boosted tree on 22792 example(s) and 14 feature(s).
[INFO 24-04-20 12:24:18.7824 UTC gradient_boosted_trees.cc:1214] 20533 examples used for training and 2259 examples used for validation
[INFO 24-04-20 12:24:18.8028 UTC gradient_boosted_trees.cc:1590]  num-trees:1 train-loss:0.975468 train-accuracy:0.761895 valid-loss:1.026095 valid-accuracy:0.736609
[INFO 24-04-20 12:24:18.8196 UTC gradient_boosted_trees.cc:1592]  num-trees:2 train-loss:0.897070 train-accuracy:0.761895 valid-loss:0.945305 valid-accuracy:0.736609
[INFO 24-04-20 12:24:20.2681 UTC early_stopping.cc:53] Early stop of the training because the validation loss does not decrease anymore. Best valid-loss: 0.57263
[INFO 24-04-20 12:24:20.2681 UTC gradient_boosted_trees.cc:1629] Create final snapshot of the model at iteration 104
[INFO 24-04-20 12:24:20.2741 UTC gradient_boosted_trees.cc:270] Truncates the model to 75 tree(s) i.e. 75  iteration(s).
[INFO 24-04-20 12:24:20.2744 UTC gradient_boosted_trees.cc:333] Final model num-trees:75 valid-loss:0.572630 valid-accuracy:0.869411
[INFO 24-04-20 12:24:20.2760 UTC kernel.cc:919] Export model in log directory: /tmpfs/tmp/tmp_uzoe05p with prefix b04d9dc7d6754618
[INFO 24-04-20 12:24:20.2801 UTC kernel.cc:937] Save model in resources
[INFO 24-04-20 12:24:20.2839 UTC abstract_model.cc:881] Model self evaluation:
Task: CLASSIFICATION
Label: __LABEL
Loss (BINOMIAL_LOG_LIKELIHOOD): 0.57263

Accuracy: 0.869411  CI95[W][0 1]
ErrorRate: : 0.130589


Confusion Table:
truth\prediction
      1    2
1  1577   87
2   208  387
Total: 2259


[INFO 24-04-20 12:24:20.3022 UTC kernel.cc:1233] Loading model from path /tmpfs/tmp/tmp_uzoe05p/model/ with prefix b04d9dc7d6754618
[INFO 24-04-20 12:24:20.3206 UTC quick_scorer_extended.cc:911] The binary was compiled without AVX2 support, but your CPU supports it. Enable it for faster model inference.
[INFO 24-04-20 12:24:20.3214 UTC kernel.cc:1061] Use fast generic engine
Model trained in 0:00:01.567091
Compiling model...
Model compiled.
<tf_keras.src.callbacks.History at 0x7f37a8625370>

We can then evaluate the tuned model:

# Evaluate the model
best_model.compile(["accuracy"])
tuned_test_accuracy = best_model.evaluate(test_ds, return_dict=True, verbose=0)["accuracy"]
print(f"Test accuracy with the Keras Tuner: {tuned_test_accuracy:.4f}")
Test accuracy with the Keras Tuner: 0.8715
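
The tuned model can then be used like any other TF-DF/Keras model. As a final minimal sketch, run inference on the test dataset; for this binary classification task, the predictions are the probabilities of the positive class:

# Predict on the test dataset and look at the first few probabilities.
predictions = best_model.predict(test_ds, verbose=0)
print(predictions[:5])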