# Classification on imbalanced data

This tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the Credit Card Fraud Detection dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use Keras to define the model, and class weights to help the model learn from the imbalanced data.

This tutorial contains complete code to:

• Load a CSV file using Pandas.
• Create train, validation, and test sets.
• Define and train a model using Keras (including setting class weights).
• Evaluate the model using various metrics (including precision and recall).
• Try common techniques for dealing with imbalanced data like:
  • Class weighting
  • Oversampling

## Setup

import tensorflow as tf
from tensorflow import keras

import os
import tempfile

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']


## Data processing and exploration

raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()

raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()


### Examine the class label imbalance

Let's look at the dataset imbalance:

neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n    Total: {}\n    Positive: {} ({:.2f}% of total)\n'.format(
    total, pos, 100 * pos / total))

Examples:
    Total: 284807
    Positive: 492 (0.17% of total)


This shows the small fraction of positive samples.
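
To see why plain accuracy is a poor yardstick here, note that a degenerate classifier that labels every transaction as legitimate is already right about 99.8% of the time. A minimal sketch, reusing the counts computed above:

# Accuracy of always predicting the majority (negative) class.
print('All-negative accuracy: {:.4f}'.format(neg / total))
# ~0.9983, despite catching zero fraudulent transactions.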

### Clean, split and normalize the data

The raw data has a few issues. First, the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.

cleaned_df = raw_df.copy()

# You don't want the Time column.
cleaned_df.pop('Time')

# The Amount column covers a huge range. Convert to log-space.
eps = 0.001  # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount') + eps)


Split the dataset into train, validation, and test sets. The validation set is used during model fitting to evaluate the loss and any metrics; however, the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets, where overfitting is a significant concern due to the lack of positive training examples.

# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)

# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))

train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)


Normalize the input features using the sklearn StandardScaler, which sets the mean to 0 and the standard deviation to 1. Note that the scaler is fit only on the training features, so the model never peeks at the validation or test sets.

scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)

val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)

train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)

print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)

print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)

Training labels shape: (182276,)
Validation labels shape: (45569,)
Test labels shape: (56962,)
Training features shape: (182276, 29)
Validation features shape: (45569, 29)
Test features shape: (56962, 29)


### Look at the data distribution

Next, compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:

• Do these distributions make sense?
  • Yes. You've normalized the input, and these are mostly concentrated in the +/- 2 range.
• Can you see the difference between the distributions?
  • Yes, the positive examples contain a much higher rate of extreme values.

pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)

sns.jointplot(x=pos_df['V5'], y=pos_df['V6'],
              kind='hex', xlim=(-5, 5), ylim=(-5, 5))
plt.suptitle("Positive distribution")

sns.jointplot(x=neg_df['V5'], y=neg_df['V6'],
              kind='hex', xlim=(-5, 5), ylim=(-5, 5))
_ = plt.suptitle("Negative distribution")



## Define the model and metrics

Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:

METRICS = [
    keras.metrics.TruePositives(name='tp'),
    keras.metrics.FalsePositives(name='fp'),
    keras.metrics.TrueNegatives(name='tn'),
    keras.metrics.FalseNegatives(name='fn'),
    keras.metrics.BinaryAccuracy(name='accuracy'),
    keras.metrics.Precision(name='precision'),
    keras.metrics.Recall(name='recall'),
    keras.metrics.AUC(name='auc'),
    keras.metrics.AUC(name='prc', curve='PR'),  # precision-recall curve
]

def make_model(metrics=METRICS, output_bias=None):
    if output_bias is not None:
        output_bias = tf.keras.initializers.Constant(output_bias)
    model = keras.Sequential([
        keras.layers.Dense(
            16, activation='relu',
            input_shape=(train_features.shape[-1],)),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(1, activation='sigmoid',
                           bias_initializer=output_bias),
    ])

    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-3),
        loss=keras.losses.BinaryCrossentropy(),
        metrics=metrics)

    return model


### Understanding useful metrics

Notice that a few of the metrics defined above can be computed by the model and will be helpful when evaluating performance. A short worked example follows this list.

• False negatives and false positives are samples that were incorrectly classified
• True negatives and true positives are samples that were correctly classified
• Accuracy is the percentage of examples correctly classified > $\frac{\text{true samples} }{\text{total samples} }$
• Precision is the percentage of predicted positives that were correctly classified > $\frac{\text{true positives} }{\text{true positives + false positives} }$
• Recall is the percentage of actual positives that were correctly classified > $\frac{\text{true positives} }{\text{true positives + false negatives} }$
• AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
• AUPRC refers to Area Under the Curve of the Precision-Recall Curve. This metric computes precision-recall pairs for different probability thresholds.
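
As a concrete check of these definitions, here is a minimal sketch that computes accuracy, precision, and recall directly from confusion-matrix counts (these particular counts are the ones the baseline model produces on the test set later in this tutorial):

# Confusion-matrix counts: true/false positives and negatives.
tp, fp, tn, fn = 86, 16, 56832, 28

print('accuracy: {:.4f}'.format((tp + tn) / (tp + tn + fp + fn)))
print('precision: {:.4f}'.format(tp / (tp + fp)))
print('recall: {:.4f}'.format(tp / (tp + fn)))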

## Baseline model

### Build the model

Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, many batches would likely have no fraudulent transactions to learn from.
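
With roughly 0.17% positive examples, a quick back-of-the-envelope check (a sketch reusing the pos and total counts from earlier) shows why such a large batch helps:

# Expected number of fraudulent examples in a single batch of 2048.
print('Expected positives per batch: {:.1f}'.format(2048 * pos / total))
# ~3.5, so most batches contain at least a few positive examples.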

EPOCHS = 100
BATCH_SIZE = 2048

early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_prc',
    verbose=1,
    patience=10,
    mode='max',
    restore_best_weights=True)

model = make_model()
model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 16)                480
_________________________________________________________________
dropout (Dropout)            (None, 16)                0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17
=================================================================
Total params: 497
Trainable params: 497
Non-trainable params: 0
_________________________________________________________________


Test run the model:

model.predict(train_features[:10])

array([[0.16105835],
[0.33830184],
[0.30945653],
[0.34930834],
[0.3607132 ],
[0.31011787],
[0.20363246],
[0.2789183 ],
[0.25884733],
[0.33510146]], dtype=float32)


### Optional: Set the correct initial bias.

These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: A Recipe for Training Neural Networks: "init well"). This can help with initial convergence.

With the default bias initialization the loss should be about math.log(2) = 0.69314
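
That value falls out of the binary cross-entropy: with a zero bias the untrained model predicts a probability near 0.5 for every example, and the loss of that prediction is -log(0.5) = log(2) regardless of the label. A quick sketch:

# Binary cross-entropy when the model predicts 0.5 for every example.
print(-np.log(0.5))  # 0.6931...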

results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))

Loss: 0.3234


The correct bias to set can be derived from:

$$p_0 = \frac{pos}{pos + neg} = \frac{1}{1+e^{-b_0}}$$
$$b_0 = -\log_e(1/p_0 - 1)$$
$$b_0 = \log_e(pos/neg)$$

initial_bias = np.log([pos/neg])
initial_bias

array([-6.35935934])
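
As a sanity check, passing this bias back through the sigmoid should recover the positive rate p_0 (a one-line sketch):

# sigmoid(b_0) should equal pos / (pos + neg), about 0.0017.
print(1 / (1 + np.exp(-initial_bias)))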


Set that as the initial bias, and the model will give much more reasonable initial guesses.

It should be near: pos/total = 0.0018

model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])

array([[0.00448684],
[0.00133903],
[0.00302145],
[0.00546093],
[0.00166563],
[0.00135329],
[0.00521673],
[0.00368532],
[0.01018428],
[0.00223487]], dtype=float32)


With this initialization the initial loss should be approximately:

$$-p_0 \log(p_0) - (1-p_0) \log(1-p_0) = 0.01317$$

results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))

Loss: 0.0132


This initial loss is about 50 times less than it would have been with naive initialization.

This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.

### Checkpoint the initial weights

To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.

initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)


### Confirm that the bias fix helps

Before moving on, quickly confirm that the careful bias initialization actually helped.

Train the model for 20 epochs, with and without this careful initialization, and compare the losses:

model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=20,
    validation_data=(val_features, val_labels),
    verbose=0)

model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=20,
    validation_data=(val_features, val_labels),
    verbose=0)

def plot_loss(history, label, n):
    # Use a log scale on y-axis to show the wide range of values.
    plt.semilogy(history.epoch, history.history['loss'],
                 color=colors[n], label='Train ' + label)
    plt.semilogy(history.epoch, history.history['val_loss'],
                 color=colors[n], label='Val ' + label,
                 linestyle="--")
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()

plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)


The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.

### Train the model

model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    callbacks=[early_stopping],
    validation_data=(val_features, val_labels))

Epoch 1/100
90/90 [==============================] - 3s 19ms/step - loss: 0.0128 - tp: 85.8242 - fp: 34.5824 - tn: 139445.7363 - fn: 143.4286 - accuracy: 0.9988 - precision: 0.7223 - recall: 0.4169 - auc: 0.8116 - prc: 0.4300 - val_loss: 0.0056 - val_tp: 29.0000 - val_fp: 6.0000 - val_tn: 45495.0000 - val_fn: 39.0000 - val_accuracy: 0.9990 - val_precision: 0.8286 - val_recall: 0.4265 - val_auc: 0.8869 - val_prc: 0.5852
Epoch 2/100
90/90 [==============================] - 1s 8ms/step - loss: 0.0078 - tp: 57.2198 - fp: 13.6044 - tn: 93968.6374 - fn: 101.1099 - accuracy: 0.9988 - precision: 0.7962 - recall: 0.3536 - auc: 0.8804 - prc: 0.4441 - val_loss: 0.0046 - val_tp: 37.0000 - val_fp: 7.0000 - val_tn: 45494.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8409 - val_recall: 0.5441 - val_auc: 0.8965 - val_prc: 0.6553
...
Epoch 85/100
90/90 [==============================] - 1s 8ms/step - loss: 0.0035 - tp: 107.5714 - fp: 14.6044 - tn: 93961.9670 - fn: 56.4286 - accuracy: 0.9992 - precision: 0.8835 - recall: 0.6642 - auc: 0.9320 - prc: 0.7804 - val_loss: 0.0025 - val_tp: 54.0000 - val_fp: 5.0000 - val_tn: 45496.0000 - val_fn: 14.0000 - val_accuracy: 0.9996 - val_precision: 0.9153 - val_recall: 0.7941 - val_auc: 0.9261 - val_prc: 0.8199
Restoring model weights from the end of the best epoch.
Epoch 00085: early stopping


### Check training history

In this section, you will produce plots of your model's accuracy and loss on the training and validation sets. These are useful to check for overfitting, which you can learn more about in the Overfit and underfit tutorial.

Additionally, you can produce these plots for any of the metrics you created above; this example plots the loss, AUPRC, precision, and recall.

def plot_metrics(history):
    metrics = ['loss', 'prc', 'precision', 'recall']
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(2, 2, n+1)
        plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
        plt.plot(history.epoch, history.history['val_'+metric],
                 color=colors[0], linestyle="--", label='Val')
        plt.xlabel('Epoch')
        plt.ylabel(name)
        if metric == 'loss':
            plt.ylim([0, plt.ylim()[1]])
        elif metric == 'auc':
            plt.ylim([0.8, 1])
        else:
            plt.ylim([0, 1])

        plt.legend()

plot_metrics(baseline_history)


### Evaluate metrics

You can use a confusion matrix to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.

train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)

def plot_cm(labels, predictions, p=0.5):
    cm = confusion_matrix(labels, predictions > p)
    plt.figure(figsize=(5, 5))
    sns.heatmap(cm, annot=True, fmt="d")
    plt.title('Confusion matrix @{:.2f}'.format(p))
    plt.ylabel('Actual label')
    plt.xlabel('Predicted label')

    print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
    print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
    print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
    print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
    print('Total Fraudulent Transactions: ', np.sum(cm[1]))


Evaluate your model on the test dataset and display the results for the metrics you created above.

baseline_results = model.evaluate(test_features, test_labels,
                                  batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
    print(name, ': ', value)
print()

plot_cm(test_labels, test_predictions_baseline)

loss :  0.0036890378687530756
tp :  86.0
fp :  16.0
tn :  56832.0
fn :  28.0
accuracy :  0.9992275834083557
precision :  0.843137264251709
recall :  0.7543859481811523
auc :  0.9251542687416077
prc :  0.80972820520401

Legitimate Transactions Detected (True Negatives):  56832
Legitimate Transactions Incorrectly Detected (False Positives):  16
Fraudulent Transactions Missed (False Negatives):  28
Fraudulent Transactions Detected (True Positives):  86
Total Fraudulent Transactions:  114


If the model had predicted everything perfectly, this would be a diagonal matrix, where values off the main diagonal (indicating incorrect predictions) would be zero. In this case, the matrix shows that you have relatively few false positives, meaning that relatively few legitimate transactions were incorrectly flagged. However, you would likely want even fewer false negatives, despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may merely cause an email to be sent to a customer asking them to verify their card activity.
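
One way to explore this trade-off is to re-plot the confusion matrix at a lower decision threshold, which catches more fraud at the cost of more false alarms. For example (a sketch reusing the plot_cm helper defined above; 0.1 is an arbitrary illustrative threshold):

# Lowering the threshold trades false negatives for false positives.
plot_cm(test_labels, test_predictions_baseline, p=0.1)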

### Plot the ROC

Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.

def plot_roc(name, labels, predictions, **kwargs):
    fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)

    plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
    plt.xlabel('False positives [%]')
    plt.ylabel('True positives [%]')
    plt.xlim([-0.5, 20])
    plt.ylim([80, 100.5])
    plt.grid(True)
    ax = plt.gca()
    ax.set_aspect('equal')

plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')



### Plot the AUPRC

Now plot the AUPRC: the area under the interpolated precision-recall curve, obtained by plotting (recall, precision) points for different values of the classification threshold. Depending on how it's calculated, PR AUC may be equivalent to the average precision of the model.

def plot_prc(name, labels, predictions, **kwargs):
    precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)

    plt.plot(recall, precision, label=name, linewidth=2, **kwargs)
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    plt.grid(True)
    ax = plt.gca()
    ax.set_aspect('equal')

plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')

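As a cross-check on the 'prc' metric, you can also compute the average precision directly with scikit-learn (a sketch; average precision summarizes the precision-recall curve as a threshold-weighted mean of precisions):

# Average precision of the baseline model on the test set.
print(sklearn.metrics.average_precision_score(
    test_labels, test_predictions_baseline))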


It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.

## Class weights

### Calculate class weights

The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want the classifier to heavily weight the few examples that are available. You can do this by passing Keras weights for each class through the class_weight parameter of Model.fit. These cause the model to "pay more attention" to examples from an under-represented class.

# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0

class_weight = {0: weight_for_0, 1: weight_for_1}

print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))

Weight for class 0: 0.50
Weight for class 1: 289.44
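
A quick check (a one-line sketch) confirms the comment above: the weighted example count still sums to the dataset size, since neg * weight_for_0 and pos * weight_for_1 each equal total / 2:

# The total weight across all examples is unchanged.
print(neg * weight_for_0 + pos * weight_for_1)  # ≈ 284807.0, the total number of examples.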


### Train a model with class weights

Now try re-training and evaluating the model with class weights to see how that affects the predictions.

weighted_model = make_model()
weighted_model.load_weights(initial_weights)

weighted_history = weighted_model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    callbacks=[early_stopping],
    validation_data=(val_features, val_labels),
    # The class weights go here.
    class_weight=class_weight)

Epoch 1/100
90/90 [==============================] - 3s 17ms/step - loss: 2.1890 - tp: 122.9560 - fp: 136.3516 - tn: 150694.1978 - fn: 149.0659 - accuracy: 0.9982 - precision: 0.5290 - recall: 0.4737 - auc: 0.8244 - prc: 0.4563 - val_loss: 0.0090 - val_tp: 47.0000 - val_fp: 12.0000 - val_tn: 45489.0000 - val_fn: 21.0000 - val_accuracy: 0.9993 - val_precision: 0.7966 - val_recall: 0.6912 - val_auc: 0.9233 - val_prc: 0.6339
Epoch 2/100
90/90 [==============================] - 1s 8ms/step - loss: 0.9281 - tp: 87.6923 - fp: 335.0330 - tn: 93653.0659 - fn: 64.7802 - accuracy: 0.9959 - precision: 0.2146 - recall: 0.5577 - auc: 0.8684 - prc: 0.4249 - val_loss: 0.0124 - val_tp: 53.0000 - val_fp: 16.0000 - val_tn: 45485.0000 - val_fn: 15.0000 - val_accuracy: 0.9993 - val_precision: 0.7681 - val_recall: 0.7794 - val_auc: 0.9468 - val_prc: 0.6940
...
Epoch 12/100
90/90 [==============================] - 1s 8ms/step - loss: 0.2796 - tp: 140.4066 - fp: 3411.0989 - tn: 90568.3187 - fn: 20.7473 - accuracy: 0.9631 - precision: 0.0404 - recall: 0.8712 - auc: 0.9575 - prc: 0.2504 - val_loss: 0.0669 - val_tp: 58.0000 - val_fp: 669.0000 - val_tn: 44832.0000 - val_fn: 10.0000 - val_accuracy: 0.9851 - val_precision: 0.0798 - val_recall: 0.8529 - val_auc: 0.9794 - val_prc: 0.6173
Epoch 13/100
90/90 [==============================] - 1s 8ms/step - loss: 0.2629 - tp: 136.8791 - fp: 3478.8022 - tn: 90503.5604 - fn: 21.3297 - accuracy: 0.9627 - precision: 0.0383 - recall: 0.8730 - auc: 0.9574 - prc: 0.2429 - val_loss: 0.0692 - val_tp: 58.0000 - val_fp: 701.0000 - val_tn: 44800.0000 - val_fn: 10.0000 - val_accuracy: 0.9844 - val_precision: 0.0764 - val_recall: 0.8529 - val_auc: 0.9796 - val_prc: 0.6173
Epoch 14/100
90/90 [==============================] - 1s 8ms/step - loss: 0.2351 - tp: 137.8462 - fp: 3475.8681 - tn: 90505.7363 - fn: 21.1209 - accuracy: 0.9632 - precision: 0.0392 - recall: 0.8787 - auc: 0.9636 - prc: 0.2659 - val_loss: 0.0715 - val_tp: 58.0000 - val_fp: 726.0000 - val_tn: 44775.0000 - val_fn: 10.0000 - val_accuracy: 0.9838 - val_precision: 0.0740 - val_recall: 0.8529 - val_auc: 0.9812 - val_prc: 0.6174
Restoring model weights from the end of the best epoch.
Epoch 00014: early stopping


### Check training history

plot_metrics(weighted_history)


### Evaluate metrics

train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)

weighted_results = weighted_model.evaluate(test_features, test_labels,
                                           batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
  print(name, ': ', value)
print()

plot_cm(test_labels, test_predictions_weighted)

loss :  0.024028101935982704
tp :  93.0
fp :  99.0
tn :  56749.0
fn :  21.0
accuracy :  0.9978933334350586
precision :  0.484375
recall :  0.8157894611358643
auc :  0.9706312417984009
prc :  0.6766911745071411

Legitimate Transactions Detected (True Negatives):  56749
Legitimate Transactions Incorrectly Detected (False Positives):  99
Fraudulent Transactions Missed (False Negatives):  21
Fraudulent Transactions Detected (True Positives):  93
Total Fraudulent Transactions:  114


Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite its lower accuracy, this model catches more fraudulent transactions. Of course, both types of error carry a cost (you wouldn't want to annoy users by flagging too many legitimate transactions as fraudulent, either). Carefully weigh these trade-offs for your application.
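
If one error type matters more than the other for your application, you also don't have to keep the default 0.5 decision threshold. As a rough sketch (this sweep and its 90% recall target are illustrative, not part of the original tutorial), you could pick a threshold on the validation set that guarantees a minimum recall:

from sklearn.metrics import precision_recall_curve

# Sweep candidate thresholds on the validation set.
val_predictions_weighted = weighted_model.predict(val_features, batch_size=BATCH_SIZE)
precisions, recalls, thresholds = precision_recall_curve(val_labels, val_predictions_weighted)

# Example policy: among thresholds reaching at least 90% recall,
# take the one with the best precision.
ok = recalls[:-1] >= 0.90  # `thresholds` has one fewer entry than the other arrays
best = np.argmax(np.where(ok, precisions[:-1], 0.0))
print('threshold={:.4f} precision={:.3f} recall={:.3f}'.format(
    thresholds[best], precisions[best], recalls[best]))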

### Plot the ROC

plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')

plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')

plt.legend(loc='lower right')

<matplotlib.legend.Legend at 0x7fe50017f198>


### Plot the AUPRC

plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')

plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')

plt.legend(loc='lower right')

<matplotlib.legend.Legend at 0x7fe47c2a8240>


## Oversampling

### Oversample the minority class

A related approach would be to resample the dataset by oversampling the minority class.

pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]

pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]


#### Using NumPy

You can balance the dataset manually by choosing the right number of random indices from the positive examples:

ids = np.arange(len(pos_features))
# np.random.choice samples with replacement by default, which is what you
# want here: the positives are drawn repeatedly until they match the
# number of negative examples.
choices = np.random.choice(ids, len(neg_features))

res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]

res_pos_features.shape

(181966, 29)

resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)

order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]

resampled_features.shape

(363932, 29)
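
As a quick check (one extra line, not in the original), the resampled labels should now be split 50/50:

# Positives were resampled up to len(neg_features), so this prints 0.5.
print('Positive fraction: {:.3f}'.format(resampled_labels.mean()))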


#### Using tf.data

If you're using tf.data, the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples.

BUFFER_SIZE = 100000

def make_ds(features, labels):
  ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
  ds = ds.shuffle(BUFFER_SIZE).repeat()
  return ds

pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)


Each dataset provides (feature, label) pairs:

for features, label in pos_ds.take(1):
  print("Features:\n", features.numpy())
  print()
  print("Label: ", label.numpy())

Features:
[ 0.66303937  0.25245179  0.1263118   0.63886621  0.09535758 -0.27856354
0.12512054 -0.17039113 -0.07404569 -0.0401322  -0.85920451  0.69018467
1.58227421 -0.00707812  1.03705783  0.42933153 -1.02645341 -0.0811121
-0.07953324 -0.0389733  -0.15173702 -0.30237571 -0.19449175 -0.73007126
1.29012445 -0.85703142  0.08285249  0.06733918 -1.36825945]

Label:  1


Merge the two together using experimental.sample_from_datasets:

resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)

for features, label in resampled_ds.take(1):
  print(label.numpy().mean())

0.4970703125


To use this dataset, you'll need the number of steps per epoch.

The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:

resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch

278.0
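
For reference, neg here is the negative count from the full dataset (284,315), so with the BATCH_SIZE of 2048 set earlier this works out to ceil(2 × 284315 / 2048) = 278, matching the output above.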


### Train on the oversampled data

Now try training the model with the resampled data set instead of using class weights to see how these methods compare.

resampled_model = make_model()

# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])

val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)

resampled_history = resampled_model.fit(
    resampled_ds,
    epochs=EPOCHS,
    steps_per_epoch=resampled_steps_per_epoch,
    callbacks=[early_stopping],
    validation_data=val_ds)

Epoch 1/100
278/278 [==============================] - 8s 23ms/step - loss: 0.6393 - tp: 120141.5197 - fp: 56062.2867 - tn: 143903.9319 - fn: 23566.9211 - accuracy: 0.7684 - precision: 0.6461 - recall: 0.7999 - auc: 0.8806 - prc: 0.8579 - val_loss: 0.2863 - val_tp: 59.0000 - val_fp: 1232.0000 - val_tn: 44269.0000 - val_fn: 9.0000 - val_accuracy: 0.9728 - val_precision: 0.0457 - val_recall: 0.8676 - val_auc: 0.9627 - val_prc: 0.7346
Epoch 2/100
278/278 [==============================] - 6s 21ms/step - loss: 0.2501 - tp: 128917.9140 - fp: 9931.4695 - tn: 133598.7133 - fn: 14264.5627 - accuracy: 0.9117 - precision: 0.9222 - recall: 0.8991 - auc: 0.9575 - prc: 0.9699 - val_loss: 0.1571 - val_tp: 60.0000 - val_fp: 859.0000 - val_tn: 44642.0000 - val_fn: 8.0000 - val_accuracy: 0.9810 - val_precision: 0.0653 - val_recall: 0.8824 - val_auc: 0.9751 - val_prc: 0.7387
Epoch 3/100
278/278 [==============================] - 6s 21ms/step - loss: 0.1866 - tp: 130614.9857 - fp: 5368.2545 - tn: 138176.4086 - fn: 12553.0108 - accuracy: 0.9370 - precision: 0.9598 - recall: 0.9121 - auc: 0.9753 - prc: 0.9813 - val_loss: 0.1155 - val_tp: 60.0000 - val_fp: 872.0000 - val_tn: 44629.0000 - val_fn: 8.0000 - val_accuracy: 0.9807 - val_precision: 0.0644 - val_recall: 0.8824 - val_auc: 0.9790 - val_prc: 0.7293
Epoch 4/100
278/278 [==============================] - 6s 21ms/step - loss: 0.1601 - tp: 131462.4731 - fp: 4495.2079 - tn: 138924.8029 - fn: 11830.1756 - accuracy: 0.9426 - precision: 0.9663 - recall: 0.9170 - auc: 0.9819 - prc: 0.9856 - val_loss: 0.0995 - val_tp: 60.0000 - val_fp: 889.0000 - val_tn: 44612.0000 - val_fn: 8.0000 - val_accuracy: 0.9803 - val_precision: 0.0632 - val_recall: 0.8824 - val_auc: 0.9785 - val_prc: 0.7294
Epoch 5/100
278/278 [==============================] - 6s 21ms/step - loss: 0.1440 - tp: 132381.8208 - fp: 4061.8566 - tn: 139115.8280 - fn: 11153.1541 - accuracy: 0.9467 - precision: 0.9701 - recall: 0.9221 - auc: 0.9861 - prc: 0.9885 - val_loss: 0.0894 - val_tp: 60.0000 - val_fp: 844.0000 - val_tn: 44657.0000 - val_fn: 8.0000 - val_accuracy: 0.9813 - val_precision: 0.0664 - val_recall: 0.8824 - val_auc: 0.9777 - val_prc: 0.7181
Epoch 6/100
278/278 [==============================] - 6s 21ms/step - loss: 0.1320 - tp: 132768.2401 - fp: 3772.4731 - tn: 139683.4946 - fn: 10488.4516 - accuracy: 0.9500 - precision: 0.9720 - recall: 0.9266 - auc: 0.9890 - prc: 0.9905 - val_loss: 0.0801 - val_tp: 58.0000 - val_fp: 779.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9827 - val_precision: 0.0693 - val_recall: 0.8529 - val_auc: 0.9766 - val_prc: 0.7091
Epoch 7/100
278/278 [==============================] - 6s 21ms/step - loss: 0.1217 - tp: 133198.0430 - fp: 3569.1398 - tn: 140039.7025 - fn: 9905.7742 - accuracy: 0.9529 - precision: 0.9739 - recall: 0.9307 - auc: 0.9912 - prc: 0.9920 - val_loss: 0.0729 - val_tp: 58.0000 - val_fp: 742.0000 - val_tn: 44759.0000 - val_fn: 10.0000 - val_accuracy: 0.9835 - val_precision: 0.0725 - val_recall: 0.8529 - val_auc: 0.9750 - val_prc: 0.7088
Epoch 8/100
278/278 [==============================] - 6s 21ms/step - loss: 0.1128 - tp: 134406.9140 - fp: 3395.7312 - tn: 139529.8674 - fn: 9380.1470 - accuracy: 0.9552 - precision: 0.9752 - recall: 0.9343 - auc: 0.9927 - prc: 0.9933 - val_loss: 0.0668 - val_tp: 59.0000 - val_fp: 713.0000 - val_tn: 44788.0000 - val_fn: 9.0000 - val_accuracy: 0.9842 - val_precision: 0.0764 - val_recall: 0.8676 - val_auc: 0.9690 - val_prc: 0.7089
Epoch 9/100
278/278 [==============================] - 6s 21ms/step - loss: 0.1072 - tp: 133953.2688 - fp: 3312.8315 - tn: 140438.0179 - fn: 9008.5412 - accuracy: 0.9567 - precision: 0.9755 - recall: 0.9366 - auc: 0.9936 - prc: 0.9938 - val_loss: 0.0610 - val_tp: 59.0000 - val_fp: 681.0000 - val_tn: 44820.0000 - val_fn: 9.0000 - val_accuracy: 0.9849 - val_precision: 0.0797 - val_recall: 0.8676 - val_auc: 0.9639 - val_prc: 0.6890
Epoch 10/100
278/278 [==============================] - 6s 21ms/step - loss: 0.0998 - tp: 135133.2258 - fp: 3109.3297 - tn: 140181.0538 - fn: 8289.0502 - accuracy: 0.9600 - precision: 0.9774 - recall: 0.9418 - auc: 0.9946 - prc: 0.9947 - val_loss: 0.0557 - val_tp: 59.0000 - val_fp: 631.0000 - val_tn: 44870.0000 - val_fn: 9.0000 - val_accuracy: 0.9860 - val_precision: 0.0855 - val_recall: 0.8676 - val_auc: 0.9646 - val_prc: 0.6894
Epoch 11/100
278/278 [==============================] - 6s 21ms/step - loss: 0.0944 - tp: 135644.8315 - fp: 3082.4982 - tn: 140128.5341 - fn: 7856.7957 - accuracy: 0.9618 - precision: 0.9778 - recall: 0.9451 - auc: 0.9953 - prc: 0.9952 - val_loss: 0.0517 - val_tp: 59.0000 - val_fp: 601.0000 - val_tn: 44900.0000 - val_fn: 9.0000 - val_accuracy: 0.9866 - val_precision: 0.0894 - val_recall: 0.8676 - val_auc: 0.9608 - val_prc: 0.6803
Epoch 12/100
278/278 [==============================] - 6s 21ms/step - loss: 0.0917 - tp: 135465.7993 - fp: 3084.7634 - tn: 140597.5878 - fn: 7564.5090 - accuracy: 0.9628 - precision: 0.9776 - recall: 0.9472 - auc: 0.9956 - prc: 0.9954 - val_loss: 0.0474 - val_tp: 59.0000 - val_fp: 547.0000 - val_tn: 44954.0000 - val_fn: 9.0000 - val_accuracy: 0.9878 - val_precision: 0.0974 - val_recall: 0.8676 - val_auc: 0.9626 - val_prc: 0.6808
Restoring model weights from the end of the best epoch.
Epoch 00012: early stopping


If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.

But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: instead of each positive example being shown in one batch with a large weight, positive examples appear in many different batches, each time with a small weight.

This smoother gradient signal makes it easier to train the model.
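
A quick back-of-the-envelope check (reusing the counts and weights computed earlier) shows why the two approaches match in expectation: with 50/50 sampling, each positive example is drawn neg/pos times as often as each negative one, which is exactly the ratio of the class weights.

print('Resampling emphasis (neg/pos): {:.1f}'.format(neg / pos))
print('Class weight ratio (w1/w0):    {:.1f}'.format(weight_for_1 / weight_for_0))
# Both are ~577.9, so each positive example receives the same expected
# emphasis per epoch under either scheme.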

### Check training history

Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.

plot_metrics(resampled_history)


### Re-train

Because training is easier on the balanced data, the above training procedure may overfit quickly.

So break up the epochs to give the callbacks.EarlyStopping finer control over when to stop training.

resampled_model = make_model()

# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])

resampled_history = resampled_model.fit(
    resampled_ds,
    # These are not real epochs.
    steps_per_epoch=20,
    epochs=10*EPOCHS,
    callbacks=[early_stopping],
    validation_data=(val_ds))

Epoch 1/1000
20/20 [==============================] - 3s 66ms/step - loss: 1.0402 - tp: 7020.5714 - fp: 6901.5238 - tn: 49838.6667 - fn: 4238.7143 - accuracy: 0.8462 - precision: 0.4849 - recall: 0.6088 - auc: 0.8819 - prc: 0.6535 - val_loss: 0.8771 - val_tp: 59.0000 - val_fp: 26353.0000 - val_tn: 19148.0000 - val_fn: 9.0000 - val_accuracy: 0.4215 - val_precision: 0.0022 - val_recall: 0.8676 - val_auc: 0.8402 - val_prc: 0.3888
Epoch 2/1000
20/20 [==============================] - 1s 38ms/step - loss: 0.7266 - tp: 8698.4762 - fp: 6112.2857 - tn: 5196.3333 - fn: 2423.3810 - accuracy: 0.6143 - precision: 0.5840 - recall: 0.7744 - auc: 0.7501 - prc: 0.8227 - val_loss: 0.8015 - val_tp: 61.0000 - val_fp: 24135.0000 - val_tn: 21366.0000 - val_fn: 7.0000 - val_accuracy: 0.4702 - val_precision: 0.0025 - val_recall: 0.8971 - val_auc: 0.8835 - val_prc: 0.5738
Epoch 3/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.6080 - tp: 9502.2381 - fp: 5591.0952 - tn: 5637.0952 - fn: 1700.0476 - accuracy: 0.6712 - precision: 0.6269 - recall: 0.8440 - auc: 0.8308 - prc: 0.8828 - val_loss: 0.7140 - val_tp: 61.0000 - val_fp: 20539.0000 - val_tn: 24962.0000 - val_fn: 7.0000 - val_accuracy: 0.5491 - val_precision: 0.0030 - val_recall: 0.8971 - val_auc: 0.9053 - val_prc: 0.6240
Epoch 4/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.5383 - tp: 9802.5238 - fp: 5037.4286 - tn: 6099.2857 - fn: 1491.2381 - accuracy: 0.7060 - precision: 0.6576 - recall: 0.8667 - auc: 0.8629 - prc: 0.9076 - val_loss: 0.6353 - val_tp: 63.0000 - val_fp: 16267.0000 - val_tn: 29234.0000 - val_fn: 5.0000 - val_accuracy: 0.6429 - val_precision: 0.0039 - val_recall: 0.9265 - val_auc: 0.9249 - val_prc: 0.6533
Epoch 5/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.4828 - tp: 9863.6667 - fp: 4417.0000 - tn: 6790.9048 - fn: 1358.9048 - accuracy: 0.7410 - precision: 0.6897 - recall: 0.8784 - auc: 0.8868 - prc: 0.9243 - val_loss: 0.5696 - val_tp: 63.0000 - val_fp: 12252.0000 - val_tn: 33249.0000 - val_fn: 5.0000 - val_accuracy: 0.7310 - val_precision: 0.0051 - val_recall: 0.9265 - val_auc: 0.9352 - val_prc: 0.6709
Epoch 6/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.4447 - tp: 9811.3333 - fp: 3886.4762 - tn: 7419.8095 - fn: 1312.8571 - accuracy: 0.7655 - precision: 0.7129 - recall: 0.8824 - auc: 0.9000 - prc: 0.9322 - val_loss: 0.5133 - val_tp: 62.0000 - val_fp: 8676.0000 - val_tn: 36825.0000 - val_fn: 6.0000 - val_accuracy: 0.8095 - val_precision: 0.0071 - val_recall: 0.9118 - val_auc: 0.9379 - val_prc: 0.6861
Epoch 7/1000
20/20 [==============================] - 1s 27ms/step - loss: 0.4112 - tp: 9938.3333 - fp: 3295.3333 - tn: 7908.6667 - fn: 1288.1429 - accuracy: 0.7936 - precision: 0.7494 - recall: 0.8837 - auc: 0.9086 - prc: 0.9394 - val_loss: 0.4664 - val_tp: 61.0000 - val_fp: 5933.0000 - val_tn: 39568.0000 - val_fn: 7.0000 - val_accuracy: 0.8696 - val_precision: 0.0102 - val_recall: 0.8971 - val_auc: 0.9388 - val_prc: 0.6988
Epoch 8/1000
20/20 [==============================] - 1s 29ms/step - loss: 0.3792 - tp: 9959.7143 - fp: 2782.7619 - tn: 8464.3333 - fn: 1223.6667 - accuracy: 0.8203 - precision: 0.7795 - recall: 0.8913 - auc: 0.9198 - prc: 0.9466 - val_loss: 0.4276 - val_tp: 61.0000 - val_fp: 4043.0000 - val_tn: 41458.0000 - val_fn: 7.0000 - val_accuracy: 0.9111 - val_precision: 0.0149 - val_recall: 0.8971 - val_auc: 0.9401 - val_prc: 0.7077
Epoch 9/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.3588 - tp: 9959.0000 - fp: 2463.6190 - tn: 8797.1429 - fn: 1210.7143 - accuracy: 0.8349 - precision: 0.7999 - recall: 0.8910 - auc: 0.9254 - prc: 0.9503 - val_loss: 0.3944 - val_tp: 60.0000 - val_fp: 2932.0000 - val_tn: 42569.0000 - val_fn: 8.0000 - val_accuracy: 0.9355 - val_precision: 0.0201 - val_recall: 0.8824 - val_auc: 0.9427 - val_prc: 0.7187
Epoch 10/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.3368 - tp: 10007.8571 - fp: 2133.0476 - tn: 9099.9524 - fn: 1189.6190 - accuracy: 0.8494 - precision: 0.8213 - recall: 0.8930 - auc: 0.9327 - prc: 0.9548 - val_loss: 0.3658 - val_tp: 60.0000 - val_fp: 2290.0000 - val_tn: 43211.0000 - val_fn: 8.0000 - val_accuracy: 0.9496 - val_precision: 0.0255 - val_recall: 0.8824 - val_auc: 0.9464 - val_prc: 0.7227
Epoch 11/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.3203 - tp: 10013.4762 - fp: 1845.1905 - tn: 9404.2381 - fn: 1167.5714 - accuracy: 0.8654 - precision: 0.8433 - recall: 0.8962 - auc: 0.9380 - prc: 0.9581 - val_loss: 0.3397 - val_tp: 60.0000 - val_fp: 1848.0000 - val_tn: 43653.0000 - val_fn: 8.0000 - val_accuracy: 0.9593 - val_precision: 0.0314 - val_recall: 0.8824 - val_auc: 0.9509 - val_prc: 0.7279
Epoch 12/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.3045 - tp: 10044.2381 - fp: 1546.7143 - tn: 9660.4286 - fn: 1179.0952 - accuracy: 0.8787 - precision: 0.8672 - recall: 0.8958 - auc: 0.9409 - prc: 0.9605 - val_loss: 0.3167 - val_tp: 60.0000 - val_fp: 1533.0000 - val_tn: 43968.0000 - val_fn: 8.0000 - val_accuracy: 0.9662 - val_precision: 0.0377 - val_recall: 0.8824 - val_auc: 0.9557 - val_prc: 0.7327
Epoch 13/1000
20/20 [==============================] - 1s 27ms/step - loss: 0.2887 - tp: 10159.0952 - fp: 1375.8095 - tn: 9727.7143 - fn: 1167.8571 - accuracy: 0.8865 - precision: 0.8808 - recall: 0.8973 - auc: 0.9464 - prc: 0.9639 - val_loss: 0.2971 - val_tp: 60.0000 - val_fp: 1288.0000 - val_tn: 44213.0000 - val_fn: 8.0000 - val_accuracy: 0.9716 - val_precision: 0.0445 - val_recall: 0.8824 - val_auc: 0.9592 - val_prc: 0.7340
Epoch 14/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2867 - tp: 9990.0952 - fp: 1266.4762 - tn: 9978.4286 - fn: 1195.4762 - accuracy: 0.8883 - precision: 0.8848 - recall: 0.8925 - auc: 0.9460 - prc: 0.9629 - val_loss: 0.2798 - val_tp: 59.0000 - val_fp: 1140.0000 - val_tn: 44361.0000 - val_fn: 9.0000 - val_accuracy: 0.9748 - val_precision: 0.0492 - val_recall: 0.8676 - val_auc: 0.9632 - val_prc: 0.7367
Epoch 15/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2740 - tp: 10086.0476 - fp: 1087.5714 - tn: 10093.1905 - fn: 1163.6667 - accuracy: 0.8997 - precision: 0.9019 - recall: 0.8975 - auc: 0.9499 - prc: 0.9655 - val_loss: 0.2645 - val_tp: 59.0000 - val_fp: 1065.0000 - val_tn: 44436.0000 - val_fn: 9.0000 - val_accuracy: 0.9764 - val_precision: 0.0525 - val_recall: 0.8676 - val_auc: 0.9656 - val_prc: 0.7374
Epoch 16/1000
20/20 [==============================] - 1s 29ms/step - loss: 0.2618 - tp: 10030.3810 - fp: 961.4762 - tn: 10303.2857 - fn: 1135.3333 - accuracy: 0.9065 - precision: 0.9114 - recall: 0.8989 - auc: 0.9543 - prc: 0.9679 - val_loss: 0.2503 - val_tp: 59.0000 - val_fp: 1000.0000 - val_tn: 44501.0000 - val_fn: 9.0000 - val_accuracy: 0.9779 - val_precision: 0.0557 - val_recall: 0.8676 - val_auc: 0.9674 - val_prc: 0.7414
Epoch 17/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2599 - tp: 10083.9524 - fp: 909.8095 - tn: 10279.7619 - fn: 1156.9524 - accuracy: 0.9074 - precision: 0.9175 - recall: 0.8961 - auc: 0.9535 - prc: 0.9678 - val_loss: 0.2389 - val_tp: 60.0000 - val_fp: 987.0000 - val_tn: 44514.0000 - val_fn: 8.0000 - val_accuracy: 0.9782 - val_precision: 0.0573 - val_recall: 0.8824 - val_auc: 0.9688 - val_prc: 0.7439
Epoch 18/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2498 - tp: 10070.7619 - fp: 851.9524 - tn: 10387.0000 - fn: 1120.7619 - accuracy: 0.9124 - precision: 0.9219 - recall: 0.9011 - auc: 0.9582 - prc: 0.9699 - val_loss: 0.2282 - val_tp: 60.0000 - val_fp: 964.0000 - val_tn: 44537.0000 - val_fn: 8.0000 - val_accuracy: 0.9787 - val_precision: 0.0586 - val_recall: 0.8824 - val_auc: 0.9701 - val_prc: 0.7446
Epoch 19/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2428 - tp: 10210.7143 - fp: 739.7143 - tn: 10357.0952 - fn: 1122.9524 - accuracy: 0.9163 - precision: 0.9314 - recall: 0.9011 - auc: 0.9592 - prc: 0.9716 - val_loss: 0.2190 - val_tp: 60.0000 - val_fp: 966.0000 - val_tn: 44535.0000 - val_fn: 8.0000 - val_accuracy: 0.9786 - val_precision: 0.0585 - val_recall: 0.8824 - val_auc: 0.9705 - val_prc: 0.7449
Epoch 20/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2375 - tp: 10226.6190 - fp: 724.9048 - tn: 10386.2381 - fn: 1092.7143 - accuracy: 0.9190 - precision: 0.9345 - recall: 0.9032 - auc: 0.9611 - prc: 0.9727 - val_loss: 0.2099 - val_tp: 60.0000 - val_fp: 956.0000 - val_tn: 44545.0000 - val_fn: 8.0000 - val_accuracy: 0.9788 - val_precision: 0.0591 - val_recall: 0.8824 - val_auc: 0.9715 - val_prc: 0.7359
Epoch 21/1000
20/20 [==============================] - 1s 29ms/step - loss: 0.2282 - tp: 10165.0000 - fp: 661.7619 - tn: 10537.2381 - fn: 1066.4762 - accuracy: 0.9226 - precision: 0.9387 - recall: 0.9053 - auc: 0.9643 - prc: 0.9745 - val_loss: 0.2010 - val_tp: 60.0000 - val_fp: 933.0000 - val_tn: 44568.0000 - val_fn: 8.0000 - val_accuracy: 0.9793 - val_precision: 0.0604 - val_recall: 0.8824 - val_auc: 0.9720 - val_prc: 0.7365
Epoch 22/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2227 - tp: 10246.2381 - fp: 605.0000 - tn: 10517.3333 - fn: 1061.9048 - accuracy: 0.9252 - precision: 0.9437 - recall: 0.9061 - auc: 0.9657 - prc: 0.9757 - val_loss: 0.1930 - val_tp: 60.0000 - val_fp: 930.0000 - val_tn: 44571.0000 - val_fn: 8.0000 - val_accuracy: 0.9794 - val_precision: 0.0606 - val_recall: 0.8824 - val_auc: 0.9724 - val_prc: 0.7370
Epoch 23/1000
20/20 [==============================] - 1s 29ms/step - loss: 0.2182 - tp: 10200.3810 - fp: 612.1429 - tn: 10569.2381 - fn: 1048.7143 - accuracy: 0.9266 - precision: 0.9445 - recall: 0.9070 - auc: 0.9673 - prc: 0.9762 - val_loss: 0.1852 - val_tp: 60.0000 - val_fp: 923.0000 - val_tn: 44578.0000 - val_fn: 8.0000 - val_accuracy: 0.9796 - val_precision: 0.0610 - val_recall: 0.8824 - val_auc: 0.9727 - val_prc: 0.7375
Epoch 24/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2127 - tp: 10135.4286 - fp: 524.9048 - tn: 10749.3810 - fn: 1020.7619 - accuracy: 0.9311 - precision: 0.9508 - recall: 0.9084 - auc: 0.9693 - prc: 0.9773 - val_loss: 0.1776 - val_tp: 60.0000 - val_fp: 900.0000 - val_tn: 44601.0000 - val_fn: 8.0000 - val_accuracy: 0.9801 - val_precision: 0.0625 - val_recall: 0.8824 - val_auc: 0.9729 - val_prc: 0.7371
Epoch 25/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2064 - tp: 10212.3810 - fp: 509.5714 - tn: 10681.6667 - fn: 1026.8571 - accuracy: 0.9319 - precision: 0.9537 - recall: 0.9089 - auc: 0.9700 - prc: 0.9784 - val_loss: 0.1712 - val_tp: 60.0000 - val_fp: 890.0000 - val_tn: 44611.0000 - val_fn: 8.0000 - val_accuracy: 0.9803 - val_precision: 0.0632 - val_recall: 0.8824 - val_auc: 0.9733 - val_prc: 0.7388
Epoch 26/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2052 - tp: 10093.9524 - fp: 492.6190 - tn: 10820.2857 - fn: 1023.6190 - accuracy: 0.9328 - precision: 0.9529 - recall: 0.9091 - auc: 0.9707 - prc: 0.9779 - val_loss: 0.1651 - val_tp: 60.0000 - val_fp: 876.0000 - val_tn: 44625.0000 - val_fn: 8.0000 - val_accuracy: 0.9806 - val_precision: 0.0641 - val_recall: 0.8824 - val_auc: 0.9737 - val_prc: 0.7388
Epoch 27/1000
20/20 [==============================] - 1s 28ms/step - loss: 0.2015 - tp: 10138.9524 - fp: 493.2381 - tn: 10747.9048 - fn: 1050.3810 - accuracy: 0.9300 - precision: 0.9530 - recall: 0.9044 - auc: 0.9713 - prc: 0.9785 - val_loss: 0.1590 - val_tp: 60.0000 - val_fp: 853.0000 - val_tn: 44648.0000 - val_fn: 8.0000 - val_accuracy: 0.9811 - val_precision: 0.0657 - val_recall: 0.8824 - val_auc: 0.9746 - val_prc: 0.7389
Epoch 28/1000
20/20 [==============================] - 1s 29ms/step - loss: 0.1992 - tp: 10187.8571 - fp: 484.9524 - tn: 10774.2381 - fn: 983.4286 - accuracy: 0.9347 - precision: 0.9538 - recall: 0.9131 - auc: 0.9721 - prc: 0.9794 - val_loss: 0.1538 - val_tp: 60.0000 - val_fp: 854.0000 - val_tn: 44647.0000 - val_fn: 8.0000 - val_accuracy: 0.9811 - val_precision: 0.0656 - val_recall: 0.8824 - val_auc: 0.9750 - val_prc: 0.7390
Epoch 29/1000
20/20 [==============================] - 1s 29ms/step - loss: 0.1939 - tp: 10243.2381 - fp: 447.3810 - tn: 10725.4286 - fn: 1014.4286 - accuracy: 0.9347 - precision: 0.9578 - recall: 0.9099 - auc: 0.9727 - prc: 0.9798 - val_loss: 0.1502 - val_tp: 60.0000 - val_fp: 863.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9809 - val_precision: 0.0650 - val_recall: 0.8824 - val_auc: 0.9755 - val_prc: 0.7388
Restoring model weights from the end of the best epoch.
Epoch 00029: early stopping


### Re-check training history

plot_metrics(resampled_history)


### Evaluate metrics

train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)

resampled_results = resampled_model.evaluate(test_features, test_labels,
                                             batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
  print(name, ': ', value)
print()

plot_cm(test_labels, test_predictions_resampled)

loss :  0.21728233993053436
tp :  106.0
fp :  1097.0
tn :  55751.0
fn :  8.0
accuracy :  0.9806011319160461
precision :  0.08811304718255997
recall :  0.9298245906829834
auc :  0.9770026206970215
prc :  0.7156160473823547

Legitimate Transactions Detected (True Negatives):  55751
Legitimate Transactions Incorrectly Detected (False Positives):  1097
Fraudulent Transactions Missed (False Negatives):  8
Fraudulent Transactions Detected (True Positives):  106
Total Fraudulent Transactions:  114


### Plot the ROC

plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')

plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')

plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')

<matplotlib.legend.Legend at 0x7fe06c1f3550>


### Plot the AUPRC

plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')

plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')

plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')

<matplotlib.legend.Legend at 0x7fe450085358>


## Applying this tutorial to your problem

Imbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data: do your best to collect as many samples as possible, and give substantial thought to which features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade-offs between different types of errors.

[{ "type": "thumb-down", "id": "missingTheInformationINeed", "label":"Missing the information I need" },{ "type": "thumb-down", "id": "tooComplicatedTooManySteps", "label":"Too complicated / too many steps" },{ "type": "thumb-down", "id": "outOfDate", "label":"Out of date" },{ "type": "thumb-down", "id": "samplesCodeIssue", "label":"Samples / code issue" },{ "type": "thumb-down", "id": "otherDown", "label":"Other" }]
[{ "type": "thumb-up", "id": "easyToUnderstand", "label":"Easy to understand" },{ "type": "thumb-up", "id": "solvedMyProblem", "label":"Solved my problem" },{ "type": "thumb-up", "id": "otherUp", "label":"Other" }]