
Noise


Noise is present in modern-day quantum computers. Qubits are susceptible to interference from the surrounding environment, imperfect fabrication, two-level systems (TLS), and sometimes even gamma rays. Until large-scale error correction is reached, today's algorithms must remain functional in the presence of noise. This makes testing under noise an important step for validating that quantum algorithms / models will function on the quantum computers of today.

In this tutorial you will explore the basics of noisy circuit simulation in TFQ via the high-level tfq.layers API.

Setup

pip install tensorflow==2.4.1 tensorflow-quantum
pip install -q git+https://github.com/tensorflow/docs
import random
import cirq
import sympy
import tensorflow_quantum as tfq
import tensorflow as tf
import numpy as np
# Plotting
import matplotlib.pyplot as plt
import tensorflow_docs as tfdocs
import tensorflow_docs.plots

1. Understanding quantum noise

1.1 Basic circuit noise

Noise on a quantum computer impacts the bitstring samples you are able to measure from it. One intuitive way you can start to think about this is that a noisy quantum computer will "insert", "delete" or "replace" gates in random places like the diagram below:

Building off of this intuition, when dealing with noise, you are no longer using a single pure state $|\psi \rangle$ but instead dealing with an ensemble of all possible noisy realizations of your desired circuit: $\rho = \sum_j p_j |\psi_j \rangle \langle \psi_j |$, where $p_j$ gives the probability that the system is in $|\psi_j \rangle$.

Revisiting the above picture, if we knew beforehand that 90% of the time our system executed perfectly and erred 10% of the time with just this one mode of failure, then our ensemble would be:

$\rho = 0.9 |\psi_\text{desired} \rangle \langle \psi_\text{desired}| + 0.1 |\psi_\text{noisy} \rangle \langle \psi_\text{noisy}| $

If there were more than one way that our circuit could error, then the ensemble $\rho$ would contain more than just two terms (one for each new noisy realization that could happen). $\rho$ is referred to as the density matrix describing your noisy system.
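As a quick sanity check (a NumPy sketch; the 90/10 split and the $|11\rangle$ / $|00\rangle$ states mirror the example above), you can build $\rho$ explicitly and confirm it has unit trace:

```python
import numpy as np

# Two-term ensemble matching the formula above:
# 90% the desired |11> state, 10% a noisy |00> state (illustrative only).
psi_desired = np.array([0, 0, 0, 1], dtype=complex)  # |11>
psi_noisy = np.array([1, 0, 0, 0], dtype=complex)    # |00>

# rho = sum_j p_j |psi_j><psi_j|
rho = (0.9 * np.outer(psi_desired, psi_desired.conj())
       + 0.1 * np.outer(psi_noisy, psi_noisy.conj()))

print(np.round(rho.real, 2))
print("trace:", np.trace(rho).real)  # any valid density matrix has trace 1
```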

1.2 Using channels to model circuit noise

Unfortunately in practice it's nearly impossible to know all the ways your circuit might error and their exact probabilities. A simplifying assumption you can make is that after each operation in your circuit there is some kind of channel that roughly captures how that operation might error. You can quickly create a circuit with some noise:

def x_circuit(qubits):
  """Produces an X wall circuit on `qubits`."""
  return cirq.Circuit(cirq.X.on_each(*qubits))

def make_noisy(circuit, p):
  """Add a depolarization channel to all qubits in `circuit` before measurement."""
  return circuit + cirq.Circuit(cirq.depolarize(p).on_each(*circuit.all_qubits()))

my_qubits = cirq.GridQubit.rect(1, 2)
my_circuit = x_circuit(my_qubits)
my_noisy_circuit = make_noisy(my_circuit, 0.5)
my_circuit
my_noisy_circuit

You can examine the noiseless density matrix $\rho$ with:

rho = cirq.final_density_matrix(my_circuit)
np.round(rho, 3)
array([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j]], dtype=complex64)

And the noisy density matrix $\rho$ with:

rho = cirq.final_density_matrix(my_noisy_circuit)
np.round(rho, 3)
array([[0.111+0.j, 0.   +0.j, 0.   +0.j, 0.   +0.j],
       [0.   +0.j, 0.222+0.j, 0.   +0.j, 0.   +0.j],
       [0.   +0.j, 0.   +0.j, 0.222+0.j, 0.   +0.j],
       [0.   +0.j, 0.   +0.j, 0.   +0.j, 0.444+0.j]], dtype=complex64)

Comparing the two different $ \rho $ 's you can see that the noise has impacted the amplitudes of the state (and consequently sampling probabilities). In the noiseless case you would always expect to sample the $ |11\rangle $ state. But in the noisy state there is now a nonzero probability of sampling $ |00\rangle $ or $ |01\rangle $ or $ |10\rangle $ as well:
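The diagonal of $\rho$ directly gives those sampling probabilities. A minimal sketch, reading them off the rounded noisy matrix printed above:

```python
import numpy as np

# Rounded noisy density matrix printed above (depolarize p=0.5 on both qubits).
rho_noisy = np.diag([0.111, 0.222, 0.222, 0.444])

# Diagonal entries are the bitstring sampling probabilities.
probs = np.real(np.diag(rho_noisy))
for bits, prob in zip(['00', '01', '10', '11'], probs):
  print(f"P(|{bits}>) = {prob}")
# They sum to ~1 (only approximately here, because of the rounding above).
```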

"""Sample from my_noisy_circuit."""
def plot_samples(circuit):
  samples = cirq.sample(circuit + cirq.measure(*circuit.all_qubits(), key='bits'), repetitions=1000)
  freqs, _ = np.histogram(samples.data['bits'], bins=[i+0.01 for i in range(-1,2** len(my_qubits))])
  plt.figure(figsize=(10,5))
  plt.title('Noisy Circuit Sampling')
  plt.xlabel('Bitstring')
  plt.ylabel('Frequency')
  plt.bar([i for i in range(2** len(my_qubits))], freqs, tick_label=['00','01','10','11'])

plot_samples(my_noisy_circuit)


Without any noise you will always get $|11\rangle$:

"""Sample from my_circuit."""
plot_samples(my_circuit)


If you increase the noise a little further it will become harder and harder to distinguish the desired behavior (sampling $|11\rangle$ ) from the noise:

my_really_noisy_circuit = make_noisy(my_circuit, 0.75)
plot_samples(my_really_noisy_circuit)


2. Basic noise in TFQ

With this understanding of how noise can impact circuit execution, you can explore how noise works in TFQ. TensorFlow Quantum uses Monte Carlo / trajectory simulation as an alternative to density matrix simulation, because the memory complexity of density matrix simulation limits traditional full density matrix methods to roughly 20 qubits or fewer. Monte Carlo / trajectory simulation trades this memory cost for additional cost in time. The backend='noisy' option is available on tfq.layers.Sample, tfq.layers.SampledExpectation, and tfq.layers.Expectation (in the case of Expectation it adds a required repetitions parameter).
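To see why full density-matrix simulation caps out near 20 qubits, compare memory footprints: a density matrix stores $2^n \times 2^n$ complex amplitudes, while a single trajectory only needs a $2^n$-entry state vector. A back-of-envelope sketch (assuming 8 bytes per complex64 amplitude):

```python
def sim_memory_bytes(n_qubits, bytes_per_amplitude=8):
  """Rough memory footprint for complex64 simulation of `n_qubits` qubits."""
  state_vector = (2 ** n_qubits) * bytes_per_amplitude          # one trajectory
  density_matrix = (2 ** (2 * n_qubits)) * bytes_per_amplitude  # full rho
  return state_vector, density_matrix

for n in [10, 20, 30]:
  sv, dm = sim_memory_bytes(n)
  print(f"{n} qubits: state vector ~{sv / 2**20:.3f} MiB, "
        f"density matrix ~{dm / 2**30:.1f} GiB")
```

At 20 qubits the density matrix already needs terabytes of memory, while one trajectory fits in a few megabytes; the trade-off is that many trajectories must be run and averaged.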

2.1 Noisy sampling in TFQ

To recreate the above plots using TFQ and trajectory simulation you can use tfq.layers.Sample:

"""Draw bitstring samples from `my_noisy_circuit`"""
bitstrings = tfq.layers.Sample(backend='noisy')(my_noisy_circuit, repetitions=1000)
numeric_values = np.einsum('ijk,k->ij', bitstrings.to_tensor().numpy(), [1, 2])[0]
freqs, _ = np.histogram(numeric_values, bins=[i+0.01 for i in range(-1,2** len(my_qubits))])
plt.figure(figsize=(10,5))
plt.title('Noisy Circuit Sampling')
plt.xlabel('Bitstring')
plt.ylabel('Frequency')
plt.bar([i for i in range(2** len(my_qubits))], freqs, tick_label=['00','01','10','11'])
<BarContainer object of 4 artists>


2.2 Noisy sample based expectation

To do noisy sample-based expectation calculation you can use tfq.layers.SampledExpectation:

some_observables = [cirq.X(my_qubits[0]), cirq.Z(my_qubits[0]), 3.0 * cirq.Y(my_qubits[1]) + 1]
some_observables
[cirq.X(cirq.GridQubit(0, 0)),
 cirq.Z(cirq.GridQubit(0, 0)),
 cirq.PauliSum(cirq.LinearDict({frozenset({(cirq.GridQubit(0, 1), cirq.Y)}): (3+0j), frozenset(): (1+0j)}))]

Compute the noiseless expectation estimates via sampling from the circuit:

noiseless_sampled_expectation = tfq.layers.SampledExpectation(backend='noiseless')(
    my_circuit, operators=some_observables, repetitions=10000
)
noiseless_sampled_expectation.numpy()
array([[ 0.007 , -1.    ,  1.0018]], dtype=float32)

Compare those with the noisy versions:

noisy_sampled_expectation = tfq.layers.SampledExpectation(backend='noisy')(
    [my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000
)
noisy_sampled_expectation.numpy()
array([[ 0.0074    , -0.33379996,  0.9166    ],
       [ 0.0012    , -0.0168    ,  1.0024    ]], dtype=float32)

You can see that the noise has particularly impacted the $\langle \psi | Z | \psi \rangle$ accuracy, with my_really_noisy_circuit concentrating very quickly towards 0.
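This drop in the $\langle \psi | Z | \psi \rangle$ estimates matches the algebra of the single-qubit depolarizing channel, which applies each of $X, Y, Z$ with probability $p/3$ and therefore shrinks every Pauli expectation by a factor of $(1 - 4p/3)$. A NumPy-only sketch (independent of TFQ) reproduces the sampled values of roughly $-0.334$ at $p=0.5$ and $\approx 0$ at $p=0.75$:

```python
import numpy as np

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def apply_depolarize(rho, p):
  """Single-qubit depolarizing channel: each Pauli fires with probability p/3."""
  return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

rho_one = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|, so <Z> = -1
for p in [0.5, 0.75]:
  z = np.real(np.trace(apply_depolarize(rho_one, p) @ Z))
  print(f"p={p}: <Z> = {z:.3f}, predicted {(4 * p / 3 - 1):.3f}")
```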

2.3 Noisy analytic expectation calculation

Doing noisy analytic expectation calculations is nearly identical to the above:

noiseless_analytic_expectation = tfq.layers.Expectation(backend='noiseless')(
    my_circuit, operators=some_observables
)
noiseless_analytic_expectation.numpy()
array([[ 1.9106853e-15, -1.0000000e+00,  1.0000002e+00]], dtype=float32)
noisy_analytic_expectation = tfq.layers.Expectation(backend='noisy')(
    [my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000
)
noisy_analytic_expectation.numpy()
array([[ 1.9106857e-15, -3.5119998e-01,  1.0000000e+00],
       [ 1.9106850e-15, -2.3999999e-03,  1.0000000e+00]], dtype=float32)

3. Hybrid models and quantum data noise

Now that you have implemented some noisy circuit simulations in TFQ, you can experiment with how noise impacts quantum and hybrid quantum-classical models by comparing their noisy and noiseless performance. A good first check of whether a model or algorithm is robust to noise is to test it under a circuit-wide depolarizing model, which looks something like this:

Here each time slice of the circuit (sometimes referred to as a moment) has a depolarizing channel appended after each gate operation in that time slice. The depolarizing channel will apply one of $\{X, Y, Z\}$ with probability $p$ or apply nothing (keep the original operation) with probability $1-p$.
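One consequence of this circuit-wide model is that the probability of a trajectory containing no error at all decays exponentially with circuit size, which is why even small $p$ values matter at depth. A quick sketch (pure Python; the 8-qubit, 20-moment shape is illustrative, not taken from the model below):

```python
def p_no_error(n_qubits, n_moments, p):
  """Chance that no depolarizing channel fires anywhere in the circuit,
  assuming one independent channel per qubit per moment."""
  return (1 - p) ** (n_qubits * n_moments)

for p in [0.001, 0.01, 0.05]:
  print(f"p={p}: clean-trajectory probability = {p_no_error(8, 20, p):.3f}")
```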

3.1 Data

For this example you can use some prepared circuits in the tfq.datasets module as training data:

qubits = cirq.GridQubit.rect(1, 8)
circuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')
circuits[0]
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/quantum/spin_systems/XXZ_chain.zip 
184451072/184449737 [==============================] - 2s 0us/step

A small helper function generates the data for the noisy and noiseless cases:

def get_data(qubits, depolarize_p=0.):
  """Return quantum data circuits and labels in `tf.Tensor` form."""
  circuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')
  if depolarize_p >= 1e-5:
    circuits = [circuit.with_noise(cirq.depolarize(depolarize_p)) for circuit in circuits]
  tmp = list(zip(circuits, labels))
  random.shuffle(tmp)
  circuits_tensor = tfq.convert_to_tensor([x[0] for x in tmp])
  labels_tensor = tf.convert_to_tensor([x[1] for x in tmp])

  return circuits_tensor, labels_tensor

3.2 Define a model circuit

Now that you have quantum data in the form of circuits, you need a circuit to model this data. As with the data, you can write a helper function that generates this circuit, optionally containing noise:

def modelling_circuit(qubits, depth, depolarize_p=0.):
  """A simple classifier circuit."""
  dim = len(qubits)
  ret = cirq.Circuit(cirq.H.on_each(*qubits))

  for i in range(depth):
    # Entangle layer.
    ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[::2], qubits[1::2]))
    ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[1::2], qubits[2::2]))
    # Learnable rotation layer.
    param = sympy.Symbol(f'layer-{i}')
    single_qb = cirq.X
    if i % 2 == 1:
      single_qb = cirq.Y
    ret += cirq.Circuit(single_qb(q) ** param for q in qubits)

  if depolarize_p >= 1e-5:
    ret = ret.with_noise(cirq.depolarize(depolarize_p))

  return ret, [op(q) for q in qubits for op in [cirq.X, cirq.Y, cirq.Z]]

modelling_circuit(qubits, 3)[0]

3.3 Model building and training

With your data and model circuit built, the final helper function you will need is one that can assemble either a noisy or a noiseless hybrid quantum-classical tf.keras.Model:

def build_keras_model(qubits, depolarize_p=0.):
  """Prepare a noisy hybrid quantum classical Keras model."""
  spin_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)

  circuit_and_readout = modelling_circuit(qubits, 4, depolarize_p)
  if depolarize_p >= 1e-5:
    quantum_model = tfq.layers.NoisyPQC(*circuit_and_readout, sample_based=False, repetitions=10)(spin_input)
  else:
    quantum_model = tfq.layers.PQC(*circuit_and_readout)(spin_input)

  intermediate = tf.keras.layers.Dense(4, activation='sigmoid')(quantum_model)
  post_process = tf.keras.layers.Dense(1)(intermediate)

  return tf.keras.Model(inputs=[spin_input], outputs=[post_process])

4. Compare performance

4.1 Noiseless baseline

With your data generation and model building code in place, you can now compare model performance in the noiseless and noisy settings. First, run a reference noiseless training:

training_histories = dict()
depolarize_p = 0.
n_epochs = 50
phase_classifier = build_keras_model(qubits, depolarize_p)

phase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
                   loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                   metrics=['accuracy'])


# Show the keras plot of the model
tf.keras.utils.plot_model(phase_classifier, show_shapes=True, dpi=70)


noiseless_data, noiseless_labels = get_data(qubits, depolarize_p)
training_histories['noiseless'] = phase_classifier.fit(x=noiseless_data,
                         y=noiseless_labels,
                         batch_size=16,
                         epochs=n_epochs,
                         validation_split=0.15,
                         verbose=1)
Epoch 1/50
4/4 [==============================] - 1s 236ms/step - loss: 0.7270 - accuracy: 0.4813 - val_loss: 0.6975 - val_accuracy: 0.5000
Epoch 2/50
4/4 [==============================] - 0s 95ms/step - loss: 0.7006 - accuracy: 0.4500 - val_loss: 0.6966 - val_accuracy: 0.5000
Epoch 3/50
4/4 [==============================] - 0s 92ms/step - loss: 0.6887 - accuracy: 0.4271 - val_loss: 0.7002 - val_accuracy: 0.5000
Epoch 4/50
4/4 [==============================] - 0s 86ms/step - loss: 0.6904 - accuracy: 0.4604 - val_loss: 0.6981 - val_accuracy: 0.5000
Epoch 5/50
4/4 [==============================] - 0s 88ms/step - loss: 0.6886 - accuracy: 0.4625 - val_loss: 0.6934 - val_accuracy: 0.5000
Epoch 6/50
4/4 [==============================] - 0s 88ms/step - loss: 0.6849 - accuracy: 0.4604 - val_loss: 0.6882 - val_accuracy: 0.5000
Epoch 7/50
4/4 [==============================] - 0s 85ms/step - loss: 0.6840 - accuracy: 0.4813 - val_loss: 0.6814 - val_accuracy: 0.5000
Epoch 8/50
4/4 [==============================] - 0s 87ms/step - loss: 0.6713 - accuracy: 0.4292 - val_loss: 0.6757 - val_accuracy: 0.5000
Epoch 9/50
4/4 [==============================] - 0s 87ms/step - loss: 0.6789 - accuracy: 0.5104 - val_loss: 0.6680 - val_accuracy: 0.5000
Epoch 10/50
4/4 [==============================] - 0s 88ms/step - loss: 0.6663 - accuracy: 0.4729 - val_loss: 0.6610 - val_accuracy: 0.5000
Epoch 11/50
4/4 [==============================] - 0s 86ms/step - loss: 0.6637 - accuracy: 0.5083 - val_loss: 0.6508 - val_accuracy: 0.5000
Epoch 12/50
4/4 [==============================] - 0s 87ms/step - loss: 0.6445 - accuracy: 0.4708 - val_loss: 0.6380 - val_accuracy: 0.5000
Epoch 13/50
4/4 [==============================] - 0s 140ms/step - loss: 0.6269 - accuracy: 0.4229 - val_loss: 0.6218 - val_accuracy: 0.5000
Epoch 14/50
4/4 [==============================] - 0s 86ms/step - loss: 0.6150 - accuracy: 0.4500 - val_loss: 0.5998 - val_accuracy: 0.5000
Epoch 15/50
4/4 [==============================] - 0s 85ms/step - loss: 0.5996 - accuracy: 0.4979 - val_loss: 0.5809 - val_accuracy: 0.5000
Epoch 16/50
4/4 [==============================] - 0s 86ms/step - loss: 0.5901 - accuracy: 0.4604 - val_loss: 0.5623 - val_accuracy: 0.5000
Epoch 17/50
4/4 [==============================] - 0s 87ms/step - loss: 0.5624 - accuracy: 0.4896 - val_loss: 0.5401 - val_accuracy: 0.5000
Epoch 18/50
4/4 [==============================] - 0s 88ms/step - loss: 0.5380 - accuracy: 0.5167 - val_loss: 0.5153 - val_accuracy: 0.5000
Epoch 19/50
4/4 [==============================] - 0s 86ms/step - loss: 0.5170 - accuracy: 0.6396 - val_loss: 0.4912 - val_accuracy: 0.6667
Epoch 20/50
4/4 [==============================] - 0s 85ms/step - loss: 0.4954 - accuracy: 0.6937 - val_loss: 0.4654 - val_accuracy: 0.7500
Epoch 21/50
4/4 [==============================] - 0s 86ms/step - loss: 0.4666 - accuracy: 0.7708 - val_loss: 0.4402 - val_accuracy: 0.7500
Epoch 22/50
4/4 [==============================] - 0s 88ms/step - loss: 0.4416 - accuracy: 0.7896 - val_loss: 0.4152 - val_accuracy: 0.8333
Epoch 23/50
4/4 [==============================] - 0s 87ms/step - loss: 0.4194 - accuracy: 0.7812 - val_loss: 0.3893 - val_accuracy: 0.8333
Epoch 24/50
4/4 [==============================] - 0s 85ms/step - loss: 0.4102 - accuracy: 0.8187 - val_loss: 0.3660 - val_accuracy: 0.8333
Epoch 25/50
4/4 [==============================] - 0s 85ms/step - loss: 0.3653 - accuracy: 0.8521 - val_loss: 0.3449 - val_accuracy: 0.8333
Epoch 26/50
4/4 [==============================] - 0s 86ms/step - loss: 0.3639 - accuracy: 0.8292 - val_loss: 0.3213 - val_accuracy: 0.8333
Epoch 27/50
4/4 [==============================] - 0s 87ms/step - loss: 0.3576 - accuracy: 0.8063 - val_loss: 0.2992 - val_accuracy: 0.8333
Epoch 28/50
4/4 [==============================] - 0s 89ms/step - loss: 0.3105 - accuracy: 0.8771 - val_loss: 0.2819 - val_accuracy: 0.8333
Epoch 29/50
4/4 [==============================] - 0s 83ms/step - loss: 0.3080 - accuracy: 0.8729 - val_loss: 0.2664 - val_accuracy: 0.8333
Epoch 30/50
4/4 [==============================] - 0s 85ms/step - loss: 0.2714 - accuracy: 0.9042 - val_loss: 0.2495 - val_accuracy: 0.8333
Epoch 31/50
4/4 [==============================] - 0s 83ms/step - loss: 0.2746 - accuracy: 0.9208 - val_loss: 0.2338 - val_accuracy: 0.8333
Epoch 32/50
4/4 [==============================] - 0s 143ms/step - loss: 0.2501 - accuracy: 0.9375 - val_loss: 0.2216 - val_accuracy: 0.8333
Epoch 33/50
4/4 [==============================] - 0s 89ms/step - loss: 0.2728 - accuracy: 0.9021 - val_loss: 0.2106 - val_accuracy: 0.8333
Epoch 34/50
4/4 [==============================] - 0s 86ms/step - loss: 0.2283 - accuracy: 0.9417 - val_loss: 0.2009 - val_accuracy: 0.8333
Epoch 35/50
4/4 [==============================] - 0s 84ms/step - loss: 0.2256 - accuracy: 0.9375 - val_loss: 0.1899 - val_accuracy: 0.9167
Epoch 36/50
4/4 [==============================] - 0s 85ms/step - loss: 0.2068 - accuracy: 0.9479 - val_loss: 0.1827 - val_accuracy: 0.9167
Epoch 37/50
4/4 [==============================] - 0s 85ms/step - loss: 0.2023 - accuracy: 0.9271 - val_loss: 0.1721 - val_accuracy: 0.9167
Epoch 38/50
4/4 [==============================] - 0s 87ms/step - loss: 0.2032 - accuracy: 0.9375 - val_loss: 0.1683 - val_accuracy: 0.9167
Epoch 39/50
4/4 [==============================] - 0s 86ms/step - loss: 0.1936 - accuracy: 0.9667 - val_loss: 0.1636 - val_accuracy: 0.9167
Epoch 40/50
4/4 [==============================] - 0s 86ms/step - loss: 0.1976 - accuracy: 0.9437 - val_loss: 0.1508 - val_accuracy: 0.9167
Epoch 41/50
4/4 [==============================] - 0s 86ms/step - loss: 0.1997 - accuracy: 0.9375 - val_loss: 0.1477 - val_accuracy: 0.9167
Epoch 42/50
4/4 [==============================] - 0s 87ms/step - loss: 0.1912 - accuracy: 0.9479 - val_loss: 0.1415 - val_accuracy: 0.9167
Epoch 43/50
4/4 [==============================] - 0s 87ms/step - loss: 0.1718 - accuracy: 0.9500 - val_loss: 0.1452 - val_accuracy: 0.9167
Epoch 44/50
4/4 [==============================] - 0s 89ms/step - loss: 0.1662 - accuracy: 0.9646 - val_loss: 0.1459 - val_accuracy: 0.9167
Epoch 45/50
4/4 [==============================] - 0s 88ms/step - loss: 0.1564 - accuracy: 0.9437 - val_loss: 0.1322 - val_accuracy: 0.9167
Epoch 46/50
4/4 [==============================] - 0s 83ms/step - loss: 0.1600 - accuracy: 0.9604 - val_loss: 0.1220 - val_accuracy: 1.0000
Epoch 47/50
4/4 [==============================] - 0s 85ms/step - loss: 0.1614 - accuracy: 0.9500 - val_loss: 0.1171 - val_accuracy: 1.0000
Epoch 48/50
4/4 [==============================] - 0s 87ms/step - loss: 0.1360 - accuracy: 0.9583 - val_loss: 0.1162 - val_accuracy: 1.0000
Epoch 49/50
4/4 [==============================] - 0s 141ms/step - loss: 0.1453 - accuracy: 0.9500 - val_loss: 0.1216 - val_accuracy: 0.9167
Epoch 50/50
4/4 [==============================] - 0s 84ms/step - loss: 0.1347 - accuracy: 0.9604 - val_loss: 0.1238 - val_accuracy: 0.9167

And explore the results and accuracy:

loss_plotter = tfdocs.plots.HistoryPlotter(metric = 'loss', smoothing_std=10)
loss_plotter.plot(training_histories)


acc_plotter = tfdocs.plots.HistoryPlotter(metric = 'accuracy', smoothing_std=10)
acc_plotter.plot(training_histories)


4.2 Noisy comparison

Now you can build a new model with noisy structure and compare it to the above; the code is nearly identical:

depolarize_p = 0.001
n_epochs = 50
noisy_phase_classifier = build_keras_model(qubits, depolarize_p)

noisy_phase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
                   loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                   metrics=['accuracy'])


# Show the keras plot of the model
tf.keras.utils.plot_model(noisy_phase_classifier, show_shapes=True, dpi=70)


noisy_data, noisy_labels = get_data(qubits, depolarize_p)
training_histories['noisy'] = noisy_phase_classifier.fit(x=noisy_data,
                         y=noisy_labels,
                         batch_size=16,
                         epochs=n_epochs,
                         validation_split=0.15,
                         verbose=1)
Epoch 1/50
4/4 [==============================] - 9s 2s/step - loss: 0.6921 - accuracy: 0.4833 - val_loss: 0.6732 - val_accuracy: 0.3333
Epoch 2/50
4/4 [==============================] - 7s 2s/step - loss: 0.6741 - accuracy: 0.4604 - val_loss: 0.6911 - val_accuracy: 0.3333
Epoch 3/50
4/4 [==============================] - 7s 2s/step - loss: 0.6738 - accuracy: 0.5229 - val_loss: 0.6924 - val_accuracy: 0.3333
Epoch 4/50
4/4 [==============================] - 7s 2s/step - loss: 0.6628 - accuracy: 0.5458 - val_loss: 0.6868 - val_accuracy: 0.3333
Epoch 5/50
4/4 [==============================] - 7s 2s/step - loss: 0.6607 - accuracy: 0.4667 - val_loss: 0.6527 - val_accuracy: 0.3333
Epoch 6/50
4/4 [==============================] - 7s 2s/step - loss: 0.6416 - accuracy: 0.5208 - val_loss: 0.6230 - val_accuracy: 0.3333
Epoch 7/50
4/4 [==============================] - 7s 2s/step - loss: 0.6289 - accuracy: 0.5250 - val_loss: 0.5937 - val_accuracy: 0.3333
Epoch 8/50
4/4 [==============================] - 7s 2s/step - loss: 0.6145 - accuracy: 0.5813 - val_loss: 0.5995 - val_accuracy: 0.4167
Epoch 9/50
4/4 [==============================] - 7s 2s/step - loss: 0.6097 - accuracy: 0.6021 - val_loss: 0.5835 - val_accuracy: 0.4167
Epoch 10/50
4/4 [==============================] - 7s 2s/step - loss: 0.6058 - accuracy: 0.5667 - val_loss: 0.5851 - val_accuracy: 0.4167
Epoch 11/50
4/4 [==============================] - 7s 2s/step - loss: 0.5769 - accuracy: 0.5792 - val_loss: 0.5693 - val_accuracy: 0.5000
Epoch 12/50
4/4 [==============================] - 7s 2s/step - loss: 0.5572 - accuracy: 0.7104 - val_loss: 0.5322 - val_accuracy: 0.7500
Epoch 13/50
4/4 [==============================] - 7s 2s/step - loss: 0.5390 - accuracy: 0.7625 - val_loss: 0.5406 - val_accuracy: 0.6667
Epoch 14/50
4/4 [==============================] - 7s 2s/step - loss: 0.5166 - accuracy: 0.7708 - val_loss: 0.4989 - val_accuracy: 0.7500
Epoch 15/50
4/4 [==============================] - 7s 2s/step - loss: 0.5098 - accuracy: 0.7229 - val_loss: 0.4833 - val_accuracy: 0.7500
Epoch 16/50
4/4 [==============================] - 7s 2s/step - loss: 0.4990 - accuracy: 0.7229 - val_loss: 0.4949 - val_accuracy: 0.8333
Epoch 17/50
4/4 [==============================] - 7s 2s/step - loss: 0.4601 - accuracy: 0.8292 - val_loss: 0.5033 - val_accuracy: 0.6667
Epoch 18/50
4/4 [==============================] - 7s 2s/step - loss: 0.4583 - accuracy: 0.8292 - val_loss: 0.4675 - val_accuracy: 0.7500
Epoch 19/50
4/4 [==============================] - 7s 2s/step - loss: 0.4360 - accuracy: 0.8479 - val_loss: 0.4227 - val_accuracy: 0.9167
Epoch 20/50
4/4 [==============================] - 7s 2s/step - loss: 0.4342 - accuracy: 0.8500 - val_loss: 0.4027 - val_accuracy: 0.8333
Epoch 21/50
4/4 [==============================] - 7s 2s/step - loss: 0.4313 - accuracy: 0.8917 - val_loss: 0.4099 - val_accuracy: 0.9167
Epoch 22/50
4/4 [==============================] - 7s 2s/step - loss: 0.4012 - accuracy: 0.8146 - val_loss: 0.4077 - val_accuracy: 0.7500
Epoch 23/50
4/4 [==============================] - 7s 2s/step - loss: 0.3696 - accuracy: 0.8375 - val_loss: 0.3895 - val_accuracy: 0.8333
Epoch 24/50
4/4 [==============================] - 7s 2s/step - loss: 0.3742 - accuracy: 0.8667 - val_loss: 0.3538 - val_accuracy: 1.0000
Epoch 25/50
4/4 [==============================] - 7s 2s/step - loss: 0.3497 - accuracy: 0.9062 - val_loss: 0.3352 - val_accuracy: 0.9167
Epoch 26/50
4/4 [==============================] - 7s 2s/step - loss: 0.3300 - accuracy: 0.8938 - val_loss: 0.3857 - val_accuracy: 0.9167
Epoch 27/50
4/4 [==============================] - 7s 2s/step - loss: 0.3245 - accuracy: 0.8771 - val_loss: 0.3425 - val_accuracy: 0.9167
Epoch 28/50
4/4 [==============================] - 7s 2s/step - loss: 0.3189 - accuracy: 0.9062 - val_loss: 0.3165 - val_accuracy: 1.0000
Epoch 29/50
4/4 [==============================] - 7s 2s/step - loss: 0.3047 - accuracy: 0.9604 - val_loss: 0.2984 - val_accuracy: 1.0000
Epoch 30/50
4/4 [==============================] - 7s 2s/step - loss: 0.2899 - accuracy: 0.9271 - val_loss: 0.3225 - val_accuracy: 0.8333
Epoch 31/50
4/4 [==============================] - 7s 2s/step - loss: 0.2648 - accuracy: 0.8854 - val_loss: 0.2659 - val_accuracy: 1.0000
Epoch 32/50
4/4 [==============================] - 7s 2s/step - loss: 0.2505 - accuracy: 0.9437 - val_loss: 0.2462 - val_accuracy: 1.0000
Epoch 33/50
4/4 [==============================] - 7s 2s/step - loss: 0.2762 - accuracy: 0.9729 - val_loss: 0.2345 - val_accuracy: 1.0000
Epoch 34/50
4/4 [==============================] - 7s 2s/step - loss: 0.2273 - accuracy: 0.9500 - val_loss: 0.2556 - val_accuracy: 1.0000
Epoch 35/50
4/4 [==============================] - 7s 2s/step - loss: 0.2646 - accuracy: 0.9083 - val_loss: 0.2678 - val_accuracy: 0.9167
Epoch 36/50
4/4 [==============================] - 7s 2s/step - loss: 0.2238 - accuracy: 0.9604 - val_loss: 0.2108 - val_accuracy: 1.0000
Epoch 37/50
4/4 [==============================] - 7s 2s/step - loss: 0.2451 - accuracy: 0.9375 - val_loss: 0.2701 - val_accuracy: 0.9167
Epoch 38/50
4/4 [==============================] - 7s 2s/step - loss: 0.2205 - accuracy: 0.9563 - val_loss: 0.2186 - val_accuracy: 1.0000
Epoch 39/50
4/4 [==============================] - 7s 2s/step - loss: 0.2530 - accuracy: 0.9437 - val_loss: 0.2028 - val_accuracy: 1.0000
Epoch 40/50
4/4 [==============================] - 7s 2s/step - loss: 0.2204 - accuracy: 0.9062 - val_loss: 0.1502 - val_accuracy: 1.0000
Epoch 41/50
4/4 [==============================] - 7s 2s/step - loss: 0.1769 - accuracy: 0.9417 - val_loss: 0.2355 - val_accuracy: 1.0000
Epoch 42/50
4/4 [==============================] - 7s 2s/step - loss: 0.1965 - accuracy: 0.9542 - val_loss: 0.1879 - val_accuracy: 1.0000
Epoch 43/50
4/4 [==============================] - 7s 2s/step - loss: 0.1789 - accuracy: 0.9563 - val_loss: 0.2359 - val_accuracy: 0.9167
Epoch 44/50
4/4 [==============================] - 7s 2s/step - loss: 0.2002 - accuracy: 0.9208 - val_loss: 0.2459 - val_accuracy: 1.0000
Epoch 45/50
4/4 [==============================] - 7s 2s/step - loss: 0.2178 - accuracy: 0.8979 - val_loss: 0.1523 - val_accuracy: 1.0000
Epoch 46/50
4/4 [==============================] - 7s 2s/step - loss: 0.1845 - accuracy: 0.9104 - val_loss: 0.2397 - val_accuracy: 0.9167
Epoch 47/50
4/4 [==============================] - 7s 2s/step - loss: 0.1579 - accuracy: 0.9437 - val_loss: 0.2535 - val_accuracy: 0.9167
Epoch 48/50
4/4 [==============================] - 7s 2s/step - loss: 0.2078 - accuracy: 0.9458 - val_loss: 0.1270 - val_accuracy: 1.0000
Epoch 49/50
4/4 [==============================] - 7s 2s/step - loss: 0.1839 - accuracy: 0.9250 - val_loss: 0.1836 - val_accuracy: 0.9167
Epoch 50/50
4/4 [==============================] - 7s 2s/step - loss: 0.1835 - accuracy: 0.9083 - val_loss: 0.2158 - val_accuracy: 0.8333
loss_plotter.plot(training_histories)


acc_plotter.plot(training_histories)
