
Human Pose Classification with MoveNet and TensorFlow Lite

This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. The result is a new TensorFlow Lite model that accepts the output from the MoveNet model as its input, and outputs a pose classification, such as the name of a yoga pose.

The procedure in this notebook consists of 3 parts:

  • Part 1: Preprocess the pose classification training data into a CSV file that specifies the landmarks (body keypoints) detected by the MoveNet model, along with the ground truth pose labels.
  • Part 2: Build and train a pose classification model that takes the landmark coordinates from the CSV file as input, and outputs the predicted labels.
  • Part 3: Convert the pose classification model to TFLite.

By default, this notebook uses an image dataset with labeled yoga poses, but we've also included a section in Part 1 where you can upload your own image dataset of poses.


Preparation

In this section, you'll import the necessary libraries and define several functions to preprocess the training images into a CSV file that contains the landmark coordinates and ground truth labels.

Nothing observable happens here, but you can expand the hidden code cells to see the implementation of some of the functions we'll be calling later.

If you only want to create the CSV file without knowing all the details, just run this section and proceed to Part 1.

!pip install -q opencv-python
import csv
import cv2
import itertools
import numpy as np
import pandas as pd
import os
import sys
import tempfile
import tqdm

from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

Code to run pose estimation using MoveNet

Functions to run pose estimation with MoveNet

Cloning into 'examples'...
remote: Enumerating objects: 19571, done.
remote: Counting objects: 100% (1391/1391), done.
remote: Compressing objects: 100% (695/695), done.
remote: Total 19571 (delta 625), reused 1175 (delta 467), pack-reused 18180
Receiving objects: 100% (19571/19571), 32.14 MiB | 30.03 MiB/s, done.
Resolving deltas: 100% (10719/10719), done.
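
The hidden cell above clones the tensorflow/examples repository and defines the pose-estimation helpers. As a rough sketch of what such a helper can look like (an illustrative assumption, not the notebook's exact hidden code; the hidden cell may use the TFLite model rather than the TF Hub SavedModel shown here):

# Illustrative sketch only: run MoveNet Thunder single-pose inference
# via its TF Hub SavedModel (the hidden cell may differ).
movenet = hub.load("https://tfhub.dev/google/movenet/singlepose/thunder/4")
movenet_fn = movenet.signatures['serving_default']

def detect(input_tensor):
  """Returns keypoints of shape [1, 1, 17, 3]; each row is (y, x, score)."""
  # MoveNet Thunder expects a 256x256 int32 RGB image.
  image = tf.image.resize_with_pad(input_tensor, 256, 256)
  image = tf.cast(tf.expand_dims(image, axis=0), dtype=tf.int32)
  return movenet_fn(image)['output_0'].numpy()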

Functions to visualize the pose estimation results.

Code to load the images, detect pose landmarks and save them into a CSV file

(Optional) Code snippet to try out the MoveNet pose estimation logic

--2021-11-02 12:42:20--  https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg
Resolving cdn.pixabay.com (cdn.pixabay.com)... 104.18.20.183, 104.18.21.183, 2606:4700::6812:15b7, ...
Connecting to cdn.pixabay.com (cdn.pixabay.com)|104.18.20.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21743 (21K) [image/jpeg]
Saving to: ‘/tmp/image.jpeg’

/tmp/image.jpeg     100%[===================>]  21.23K  --.-KB/s    in 0s      

2021-11-02 12:42:20 (94.3 MB/s) - ‘/tmp/image.jpeg’ saved [21743/21743]

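The hidden snippet downloads the sample image above and renders the detected pose on it. A rough equivalent, assuming the detect helper sketched in the Preparation section (the notebook's own cell also draws the skeleton edges):

# Illustrative sketch: overlay the detected keypoints on the model input.
image = cv2.cvtColor(cv2.imread('/tmp/image.jpeg'), cv2.COLOR_BGR2RGB)
keypoints = detect(tf.constant(image))[0, 0]  # shape (17, 3): (y, x, score)

# Keypoints are normalized to the padded 256x256 model input, so draw on it.
padded = tf.cast(tf.image.resize_with_pad(image, 256, 256), tf.uint8).numpy()
plt.imshow(padded)
for y, x, score in keypoints:
  if score > 0.3:  # keep only confidently detected landmarks
    plt.scatter(x * 256, y * 256, c='red', s=10)
plt.axis('off')
plt.show()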

Part 1: Preprocess the input images

Because the input for our pose classifier is the output landmarks from the MoveNet model, we need to create our training dataset by running the labeled images through MoveNet and then capturing all the landmark data and ground truth labels into a CSV file.

The dataset we've provided for this tutorial is a CG-generated yoga pose dataset. It contains images of multiple CG-generated models doing 5 different yoga poses. The directory is already split into a train dataset and a test dataset.

So in this section, we'll download the yoga dataset and run it through MoveNet so we can capture all the landmarks into a CSV file. However, it takes about 15 minutes to feed our yoga dataset to MoveNet and generate this CSV file. So as an alternative, you can download a pre-existing CSV file for the yoga dataset by setting the is_skip_step_1 parameter below to True. That way, you'll skip this step and instead download the same CSV file that would be created in this preprocessing step.

On the other hand, if you want to train the pose classifier with your own image dataset, you need to upload your images and run this preprocessing step (leave is_skip_step_1 set to False) and follow the instructions below to upload your own pose dataset.
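
For reference, the parameter cell that these instructions refer to looks roughly like this (in Colab these render as form fields; the exact cell is an assumption based on the parameter names used in this section):

is_skip_step_1 = False  #@param ["False", "True"] {type:"raw"}
use_custom_dataset = False  #@param ["False", "True"] {type:"raw"}
dataset_is_split = False  #@param ["False", "True"] {type:"raw"}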

(Optional) Upload your own pose dataset

If you want to train the pose classifier with your own labeled poses (they can be any poses, not just yoga poses), follow these steps:

  1. Set the use_custom_dataset option above to True.

  2. Prepare an archive file (ZIP, TAR, or other) that includes a folder with your images dataset. The folder must include sorted images of your poses as follows.

    If you've already split your dataset into train and test sets, then set dataset_is_split to True. That is, your images folder must include "train" and "test" directories like this:

    yoga_poses/
    |__ train/
        |__ downdog/
            |______ 00000128.jpg
            |______ ...
    |__ test/
        |__ downdog/
            |______ 00000181.jpg
            |______ ...
    

    Or, if your dataset is NOT split yet, then set dataset_is_split to False and we'll split it up based on a specified split fraction. That is, your uploaded images folder should look like this:

    yoga_poses/
    |__ downdog/
        |______ 00000128.jpg
        |______ 00000181.jpg
        |______ ...
    |__ goddess/
        |______ 00000243.jpg
        |______ 00000306.jpg
        |______ ...
    
  3. Click the Files tab on the left (folder icon) and then click Upload to session storage (file icon).

  4. Select your archive file and wait until it finishes uploading before you proceed.

  5. Edit the following code block to specify the name of your archive file and images directory. (By default, we expect a ZIP file, so you'll also need to modify that part if your archive is in another format.)

  6. Now run the rest of the notebook.

if use_custom_dataset:
  # ATTENTION:
  # You must edit these two lines to match your archive and images folder name:
  # !tar -xf YOUR_DATASET_ARCHIVE_NAME.tar
  !unzip -q YOUR_DATASET_ARCHIVE_NAME.zip
  dataset_in = 'YOUR_DATASET_DIR_NAME'

  # You can leave the rest alone:
  if not os.path.isdir(dataset_in):
    raise Exception("dataset_in is not a valid directory")
  if dataset_is_split:
    IMAGES_ROOT = dataset_in
  else:
    dataset_out = 'split_' + dataset_in
    split_into_train_test(dataset_in, dataset_out, test_split=0.2)
    IMAGES_ROOT = dataset_out
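
The split_into_train_test helper used above is defined in the hidden preparation cells. A minimal sketch of what it does, assuming a plain class-per-subfolder layout (illustrative, not the notebook's exact implementation):

import random
import shutil

def split_into_train_test(images_origin, images_dest, test_split):
  """Sketch: copies each class folder into `train/` and `test/` subfolders
  of `images_dest`, sending a random `test_split` fraction to the test set."""
  for class_name in sorted(os.listdir(images_origin)):
    class_dir = os.path.join(images_origin, class_name)
    if not os.path.isdir(class_dir):
      continue
    files = sorted(os.listdir(class_dir))
    random.shuffle(files)
    split_index = int(len(files) * test_split)
    for subset, subset_files in (('test', files[:split_index]),
                                 ('train', files[split_index:])):
      out_dir = os.path.join(images_dest, subset, class_name)
      os.makedirs(out_dir, exist_ok=True)
      for f in subset_files:
        shutil.copyfile(os.path.join(class_dir, f),
                        os.path.join(out_dir, f))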

Download the yoga dataset

if not is_skip_step_1 and not use_custom_dataset:
  !wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
  !unzip -q yoga_poses.zip -d yoga_cg
  IMAGES_ROOT = "yoga_cg"
--2021-11-02 12:42:22--  http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
Resolving download.tensorflow.org (download.tensorflow.org)... 74.125.142.128, 2607:f8b0:400e:c02::80
Connecting to download.tensorflow.org (download.tensorflow.org)|74.125.142.128|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102517581 (98M) [application/zip]
Saving to: ‘yoga_poses.zip’

yoga_poses.zip      100%[===================>]  97.77M   255MB/s    in 0.4s    

2021-11-02 12:42:23 (255 MB/s) - ‘yoga_poses.zip’ saved [102517581/102517581]

Preprocess the TRAIN dataset

if not is_skip_step_1:
  images_in_train_folder = os.path.join(IMAGES_ROOT, 'train')
  images_out_train_folder = 'poses_images_out_train'
  csvs_out_train_path = 'train_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_train_folder,
      images_out_folder=images_out_train_folder,
      csvs_out_path=csvs_out_train_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/200 [00:00<?, ?it/s]/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/ipykernel_launcher.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
100%|██████████| 200/200 [00:33<00:00,  6.05it/s]
Preprocessing cobra
100%|██████████| 200/200 [00:31<00:00,  6.34it/s]
Preprocessing dog
100%|██████████| 200/200 [00:31<00:00,  6.37it/s]
Preprocessing tree
100%|██████████| 200/200 [00:33<00:00,  5.98it/s]
Preprocessing warrior
100%|██████████| 200/200 [00:30<00:00,  6.57it/s]
Skipped yoga_cg/train/chair/girl3_chair091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair123.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair142.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair144.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair146.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra038.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra128.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra039.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra130.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra034.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog101.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog031.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog033.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog035.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog037.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree161.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree163.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree141.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree147.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior064.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior098.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior109.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior113.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior116.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior120.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior121.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior126.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior148.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior137.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior148.jpg. No pose was confidentlly detected.

Preprocess the TEST dataset

if not is_skip_step_1:
  images_in_test_folder = os.path.join(IMAGES_ROOT, 'test')
  images_out_test_folder = 'poses_images_out_test'
  csvs_out_test_path = 'test_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_test_folder,
      images_out_folder=images_out_test_folder,
      csvs_out_path=csvs_out_test_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/84 [00:00<?, ?it/s]/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/ipykernel_launcher.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
100%|██████████| 84/84 [00:15<00:00,  5.40it/s]
Preprocessing cobra
100%|██████████| 116/116 [00:19<00:00,  6.09it/s]
Preprocessing dog
100%|██████████| 90/90 [00:15<00:00,  5.99it/s]
Preprocessing tree
100%|██████████| 96/96 [00:16<00:00,  5.83it/s]
Preprocessing warrior
100%|██████████| 109/109 [00:17<00:00,  6.36it/s]
Skipped yoga_cg/test/cobra/guy3_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog036.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior044.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior045.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior137.jpg. No pose was confidentlly detected.

Part 2: Train a pose classification model that takes the landmark coordinates as input, and outputs the predicted labels.

You'll build a TensorFlow model that takes the landmark coordinates and predicts the pose class that the person in the input image performs. The model consists of two submodels:

  • Submodel 1 calculates a pose embedding (a.k.a. feature vector) from the detected landmark coordinates.
  • Submodel 2 feeds the pose embedding through several Dense layers to predict the pose class.

You'll then train the model based on the dataset that was preprocessed in Part 1.

(Optional) Download the preprocessed dataset if you didn't run Part 1

# Download the preprocessed CSV files which are the same as the output of step 1
if is_skip_step_1:
  !wget -O train_data.csv http://download.tensorflow.org/data/pose_classification/yoga_train_data.csv
  !wget -O test_data.csv http://download.tensorflow.org/data/pose_classification/yoga_test_data.csv

  csvs_out_train_path = 'train_data.csv'
  csvs_out_test_path = 'test_data.csv'
  is_skipped_step_1 = True

Load the preprocessed CSVs into TRAIN and TEST datasets.

def load_pose_landmarks(csv_path):
  """Loads a CSV created by MoveNetPreprocessor.

  Returns:
    X: Detected landmark coordinates and scores of shape (N, 17 * 3)
    y: Ground truth labels of shape (N, label_count)
    classes: The list of all class names found in the dataset
    dataframe: The CSV loaded as a Pandas dataframe with the features (X) and
      ground truth labels (y), to use later to train a pose classification model.
  """

  # Load the CSV file
  dataframe = pd.read_csv(csv_path)
  df_to_process = dataframe.copy()

  # Drop the file_name column as you don't need it during training.
  df_to_process.drop(columns=['file_name'], inplace=True)

  # Extract the list of class names
  classes = df_to_process.pop('class_name').unique()

  # Extract the labels
  y = df_to_process.pop('class_no')

  # Convert the input features and labels into the correct format for training.
  X = df_to_process.astype('float64')
  y = keras.utils.to_categorical(y)

  return X, y, classes, dataframe

Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%).

# Load the train data
X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path)

# Split training data (X, y) into (X_train, y_train) and (X_val, y_val)
X_train, X_val, y_train, y_val = train_test_split(X, y,
                                                  test_size=0.15)
# Load the test data
X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path)

Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification

Next, convert the landmark coordinates to a feature vector by:

  1. Moving the pose center to the origin.
  2. Scaling the pose so that the pose size becomes 1.
  3. Flattening these coordinates into a feature vector.

Then use this feature vector to train a neural-network based pose classifier.
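
The functions below reference a BodyPart enum defined in the hidden preparation cells. MoveNet uses the standard 17-keypoint ordering, so the enum presumably looks like this (reference sketch):

import enum

class BodyPart(enum.Enum):
  """Sketch of the hidden BodyPart enum (standard MoveNet keypoint order)."""
  NOSE = 0
  LEFT_EYE = 1
  RIGHT_EYE = 2
  LEFT_EAR = 3
  RIGHT_EAR = 4
  LEFT_SHOULDER = 5
  RIGHT_SHOULDER = 6
  LEFT_ELBOW = 7
  RIGHT_ELBOW = 8
  LEFT_WRIST = 9
  RIGHT_WRIST = 10
  LEFT_HIP = 11
  RIGHT_HIP = 12
  LEFT_KNEE = 13
  RIGHT_KNEE = 14
  LEFT_ANKLE = 15
  RIGHT_ANKLE = 16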

def get_center_point(landmarks, left_bodypart, right_bodypart):
  """Calculates the center point of the two given landmarks."""

  left = tf.gather(landmarks, left_bodypart.value, axis=1)
  right = tf.gather(landmarks, right_bodypart.value, axis=1)
  center = left * 0.5 + right * 0.5
  return center


def get_pose_size(landmarks, torso_size_multiplier=2.5):
  """Calculates pose size.

  It is the maximum of two values:
    * Torso size multiplied by `torso_size_multiplier`
    * Maximum distance from pose center to any pose landmark
  """
  # Hips center
  hips_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)

  # Shoulders center
  shoulders_center = get_center_point(landmarks, BodyPart.LEFT_SHOULDER,
                                      BodyPart.RIGHT_SHOULDER)

  # Torso size as the minimum body size
  torso_size = tf.linalg.norm(shoulders_center - hips_center)

  # Pose center
  pose_center_new = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                     BodyPart.RIGHT_HIP)
  pose_center_new = tf.expand_dims(pose_center_new, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to
  # perform subtraction
  pose_center_new = tf.broadcast_to(pose_center_new,
                                    [tf.size(landmarks) // (17*2), 17, 2])

  # Dist to pose center
  d = tf.gather(landmarks - pose_center_new, 0, axis=0,
                name="dist_to_pose_center")
  # Max dist to pose center
  max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0))

  # Normalize scale
  pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist)

  return pose_size


def normalize_pose_landmarks(landmarks):
  """Normalizes the landmarks translation by moving the pose center to (0,0) and
  scaling it to a constant pose size.
  """
  # Move landmarks so that the pose center becomes (0,0)
  pose_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)
  pose_center = tf.expand_dims(pose_center, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to perform
  # subtraction
  pose_center = tf.broadcast_to(pose_center, 
                                [tf.size(landmarks) // (17*2), 17, 2])
  landmarks = landmarks - pose_center

  # Scale the landmarks to a constant pose size
  pose_size = get_pose_size(landmarks)
  landmarks /= pose_size

  return landmarks


def landmarks_to_embedding(landmarks_and_scores):
  """Converts the input landmarks into a pose embedding."""
  # Reshape the flat input into a matrix with shape=(17, 3)
  reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores)

  # Normalize landmarks 2D
  landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2])

  # Flatten the normalized landmark coordinates into a vector
  embedding = keras.layers.Flatten()(landmarks)

  return embedding
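
As a quick optional sanity check, running the embedding on a random batch shows the 51-value input collapsing to a 34-value normalized embedding (this assumes the BodyPart sketch above):

# Sanity check: one random pose (17 landmarks x 3 values) yields a
# 34-dimensional embedding (17 x 2 normalized coordinates).
dummy_input = tf.random.uniform((1, 51))
print(landmarks_to_embedding(dummy_input).shape)  # (1, 34)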

Define a Keras model for pose classification

Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class.

# Define the model
inputs = tf.keras.Input(shape=(51))
embedding = landmarks_to_embedding(inputs)

layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(5, activation="softmax")(layer)

model = keras.Model(inputs, outputs)
model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 51)]         0                                            
__________________________________________________________________________________________________
reshape (Reshape)               (None, 17, 3)        0           input_1[0][0]                    
__________________________________________________________________________________________________
tf.__operators__.getitem (Slici (None, 17, 2)        0           reshape[0][0]                    
__________________________________________________________________________________________________
tf.compat.v1.gather (TFOpLambda (None, 2)            0           tf.__operators__.getitem[0][0]   
__________________________________________________________________________________________________
tf.compat.v1.gather_1 (TFOpLamb (None, 2)            0           tf.__operators__.getitem[0][0]   
__________________________________________________________________________________________________
tf.math.multiply (TFOpLambda)   (None, 2)            0           tf.compat.v1.gather[0][0]        
__________________________________________________________________________________________________
tf.math.multiply_1 (TFOpLambda) (None, 2)            0           tf.compat.v1.gather_1[0][0]      
__________________________________________________________________________________________________
tf.__operators__.add (TFOpLambd (None, 2)            0           tf.math.multiply[0][0]           
                                                                 tf.math.multiply_1[0][0]         
__________________________________________________________________________________________________
tf.compat.v1.size (TFOpLambda)  ()                   0           tf.__operators__.getitem[0][0]   
__________________________________________________________________________________________________
tf.expand_dims (TFOpLambda)     (None, 1, 2)         0           tf.__operators__.add[0][0]       
__________________________________________________________________________________________________
tf.compat.v1.floor_div (TFOpLam ()                   0           tf.compat.v1.size[0][0]          
__________________________________________________________________________________________________
tf.broadcast_to (TFOpLambda)    (None, 17, 2)        0           tf.expand_dims[0][0]             
                                                                 tf.compat.v1.floor_div[0][0]     
__________________________________________________________________________________________________
tf.math.subtract (TFOpLambda)   (None, 17, 2)        0           tf.__operators__.getitem[0][0]   
                                                                 tf.broadcast_to[0][0]            
__________________________________________________________________________________________________
tf.compat.v1.gather_6 (TFOpLamb (None, 2)            0           tf.math.subtract[0][0]           
__________________________________________________________________________________________________
tf.compat.v1.gather_7 (TFOpLamb (None, 2)            0           tf.math.subtract[0][0]           
__________________________________________________________________________________________________
tf.math.multiply_6 (TFOpLambda) (None, 2)            0           tf.compat.v1.gather_6[0][0]      
__________________________________________________________________________________________________
tf.math.multiply_7 (TFOpLambda) (None, 2)            0           tf.compat.v1.gather_7[0][0]      
__________________________________________________________________________________________________
tf.__operators__.add_3 (TFOpLam (None, 2)            0           tf.math.multiply_6[0][0]         
                                                                 tf.math.multiply_7[0][0]         
__________________________________________________________________________________________________
tf.compat.v1.size_1 (TFOpLambda ()                   0           tf.math.subtract[0][0]           
__________________________________________________________________________________________________
tf.compat.v1.gather_4 (TFOpLamb (None, 2)            0           tf.math.subtract[0][0]           
__________________________________________________________________________________________________
tf.compat.v1.gather_5 (TFOpLamb (None, 2)            0           tf.math.subtract[0][0]           
__________________________________________________________________________________________________
tf.compat.v1.gather_2 (TFOpLamb (None, 2)            0           tf.math.subtract[0][0]           
__________________________________________________________________________________________________
tf.compat.v1.gather_3 (TFOpLamb (None, 2)            0           tf.math.subtract[0][0]           
__________________________________________________________________________________________________
tf.expand_dims_1 (TFOpLambda)   (None, 1, 2)         0           tf.__operators__.add_3[0][0]     
__________________________________________________________________________________________________
tf.compat.v1.floor_div_1 (TFOpL ()                   0           tf.compat.v1.size_1[0][0]        
__________________________________________________________________________________________________
tf.math.multiply_4 (TFOpLambda) (None, 2)            0           tf.compat.v1.gather_4[0][0]      
__________________________________________________________________________________________________
tf.math.multiply_5 (TFOpLambda) (None, 2)            0           tf.compat.v1.gather_5[0][0]      
__________________________________________________________________________________________________
tf.math.multiply_2 (TFOpLambda) (None, 2)            0           tf.compat.v1.gather_2[0][0]      
__________________________________________________________________________________________________
tf.math.multiply_3 (TFOpLambda) (None, 2)            0           tf.compat.v1.gather_3[0][0]      
__________________________________________________________________________________________________
tf.broadcast_to_1 (TFOpLambda)  (None, 17, 2)        0           tf.expand_dims_1[0][0]           
                                                                 tf.compat.v1.floor_div_1[0][0]   
__________________________________________________________________________________________________
tf.__operators__.add_2 (TFOpLam (None, 2)            0           tf.math.multiply_4[0][0]         
                                                                 tf.math.multiply_5[0][0]         
__________________________________________________________________________________________________
tf.__operators__.add_1 (TFOpLam (None, 2)            0           tf.math.multiply_2[0][0]         
                                                                 tf.math.multiply_3[0][0]         
__________________________________________________________________________________________________
tf.math.subtract_2 (TFOpLambda) (None, 17, 2)        0           tf.math.subtract[0][0]           
                                                                 tf.broadcast_to_1[0][0]          
__________________________________________________________________________________________________
tf.math.subtract_1 (TFOpLambda) (None, 2)            0           tf.__operators__.add_2[0][0]     
                                                                 tf.__operators__.add_1[0][0]     
__________________________________________________________________________________________________
tf.compat.v1.gather_8 (TFOpLamb (17, 2)              0           tf.math.subtract_2[0][0]         
__________________________________________________________________________________________________
tf.compat.v1.norm (TFOpLambda)  ()                   0           tf.math.subtract_1[0][0]         
__________________________________________________________________________________________________
tf.compat.v1.norm_1 (TFOpLambda (2,)                 0           tf.compat.v1.gather_8[0][0]      
__________________________________________________________________________________________________
tf.math.multiply_8 (TFOpLambda) ()                   0           tf.compat.v1.norm[0][0]          
__________________________________________________________________________________________________
tf.math.reduce_max (TFOpLambda) ()                   0           tf.compat.v1.norm_1[0][0]        
__________________________________________________________________________________________________
tf.math.maximum (TFOpLambda)    ()                   0           tf.math.multiply_8[0][0]         
                                                                 tf.math.reduce_max[0][0]         
__________________________________________________________________________________________________
tf.math.truediv (TFOpLambda)    (None, 17, 2)        0           tf.math.subtract[0][0]           
                                                                 tf.math.maximum[0][0]            
__________________________________________________________________________________________________
flatten (Flatten)               (None, 34)           0           tf.math.truediv[0][0]            
__________________________________________________________________________________________________
dense (Dense)                   (None, 128)          4480        flatten[0][0]                    
__________________________________________________________________________________________________
dropout (Dropout)               (None, 128)          0           dense[0][0]                      
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 64)           8256        dropout[0][0]                    
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 64)           0           dense_1[0][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 5)            325         dropout_1[0][0]                  
==================================================================================================
Total params: 13,061
Trainable params: 13,061
Non-trainable params: 0
__________________________________________________________________________________________________
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

# Add a checkpoint callback to store the checkpoint that has the highest
# validation accuracy.
checkpoint_path = "weights.best.hdf5"
checkpoint = keras.callbacks.ModelCheckpoint(checkpoint_path,
                             monitor='val_accuracy',
                             verbose=1,
                             save_best_only=True,
                             mode='max')
earlystopping = keras.callbacks.EarlyStopping(monitor='val_accuracy', 
                                              patience=20)

# Start training
history = model.fit(X_train, y_train,
                    epochs=200,
                    batch_size=16,
                    validation_data=(X_val, y_val),
                    callbacks=[checkpoint, earlystopping])
Epoch 1/200
37/37 [==============================] - 1s 10ms/step - loss: 1.5122 - accuracy: 0.3979 - val_loss: 1.3203 - val_accuracy: 0.7549

Epoch 00001: val_accuracy improved from -inf to 0.75490, saving model to weights.best.hdf5
Epoch 2/200
37/37 [==============================] - 0s 3ms/step - loss: 1.2401 - accuracy: 0.5225 - val_loss: 0.9707 - val_accuracy: 0.5490

Epoch 00002: val_accuracy did not improve from 0.75490
Epoch 3/200
37/37 [==============================] - 0s 3ms/step - loss: 1.0415 - accuracy: 0.5329 - val_loss: 0.8071 - val_accuracy: 0.7843

Epoch 00003: val_accuracy improved from 0.75490 to 0.78431, saving model to weights.best.hdf5
Epoch 4/200
37/37 [==============================] - 0s 3ms/step - loss: 0.9158 - accuracy: 0.6073 - val_loss: 0.7049 - val_accuracy: 0.8137

Epoch 00004: val_accuracy improved from 0.78431 to 0.81373, saving model to weights.best.hdf5
Epoch 5/200
37/37 [==============================] - 0s 3ms/step - loss: 0.8329 - accuracy: 0.6211 - val_loss: 0.6346 - val_accuracy: 0.7843

Epoch 00005: val_accuracy did not improve from 0.81373
Epoch 6/200
37/37 [==============================] - 0s 3ms/step - loss: 0.7479 - accuracy: 0.7093 - val_loss: 0.5388 - val_accuracy: 0.8922

Epoch 00006: val_accuracy improved from 0.81373 to 0.89216, saving model to weights.best.hdf5
Epoch 7/200
37/37 [==============================] - 0s 3ms/step - loss: 0.6842 - accuracy: 0.7301 - val_loss: 0.4944 - val_accuracy: 0.8725

Epoch 00007: val_accuracy did not improve from 0.89216
Epoch 8/200
37/37 [==============================] - 0s 3ms/step - loss: 0.6161 - accuracy: 0.7543 - val_loss: 0.4425 - val_accuracy: 0.9412

Epoch 00008: val_accuracy improved from 0.89216 to 0.94118, saving model to weights.best.hdf5
Epoch 9/200
37/37 [==============================] - 0s 3ms/step - loss: 0.5525 - accuracy: 0.8080 - val_loss: 0.3835 - val_accuracy: 0.9314

Epoch 00009: val_accuracy did not improve from 0.94118
Epoch 10/200
37/37 [==============================] - 0s 3ms/step - loss: 0.5186 - accuracy: 0.8010 - val_loss: 0.3561 - val_accuracy: 0.9216

Epoch 00010: val_accuracy did not improve from 0.94118
Epoch 11/200
37/37 [==============================] - 0s 3ms/step - loss: 0.4875 - accuracy: 0.8149 - val_loss: 0.3246 - val_accuracy: 0.9412

Epoch 00011: val_accuracy did not improve from 0.94118
Epoch 12/200
37/37 [==============================] - 0s 3ms/step - loss: 0.4780 - accuracy: 0.8201 - val_loss: 0.3037 - val_accuracy: 0.9314

Epoch 00012: val_accuracy did not improve from 0.94118
Epoch 13/200
37/37 [==============================] - 0s 3ms/step - loss: 0.4100 - accuracy: 0.8754 - val_loss: 0.2749 - val_accuracy: 0.9510

Epoch 00013: val_accuracy improved from 0.94118 to 0.95098, saving model to weights.best.hdf5
Epoch 14/200
37/37 [==============================] - 0s 3ms/step - loss: 0.4016 - accuracy: 0.8668 - val_loss: 0.2526 - val_accuracy: 0.9510

Epoch 00014: val_accuracy did not improve from 0.95098
Epoch 15/200
37/37 [==============================] - 0s 3ms/step - loss: 0.3637 - accuracy: 0.8927 - val_loss: 0.2360 - val_accuracy: 0.9510

Epoch 00015: val_accuracy did not improve from 0.95098
Epoch 16/200
37/37 [==============================] - 0s 3ms/step - loss: 0.3389 - accuracy: 0.9014 - val_loss: 0.2149 - val_accuracy: 0.9510

Epoch 00016: val_accuracy did not improve from 0.95098
Epoch 17/200
37/37 [==============================] - 0s 3ms/step - loss: 0.3337 - accuracy: 0.8979 - val_loss: 0.2083 - val_accuracy: 0.9510

Epoch 00017: val_accuracy did not improve from 0.95098
Epoch 18/200
37/37 [==============================] - 0s 4ms/step - loss: 0.3122 - accuracy: 0.9239 - val_loss: 0.1979 - val_accuracy: 0.9510

Epoch 00018: val_accuracy did not improve from 0.95098
Epoch 19/200
37/37 [==============================] - 0s 4ms/step - loss: 0.2708 - accuracy: 0.9239 - val_loss: 0.1775 - val_accuracy: 0.9510

Epoch 00019: val_accuracy did not improve from 0.95098
Epoch 20/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2841 - accuracy: 0.9152 - val_loss: 0.1687 - val_accuracy: 0.9510

Epoch 00020: val_accuracy did not improve from 0.95098
Epoch 21/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2656 - accuracy: 0.9273 - val_loss: 0.1517 - val_accuracy: 0.9510

Epoch 00021: val_accuracy did not improve from 0.95098
Epoch 22/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2637 - accuracy: 0.9066 - val_loss: 0.1465 - val_accuracy: 0.9510

Epoch 00022: val_accuracy did not improve from 0.95098
Epoch 23/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2231 - accuracy: 0.9394 - val_loss: 0.1390 - val_accuracy: 0.9608

Epoch 00023: val_accuracy improved from 0.95098 to 0.96078, saving model to weights.best.hdf5
Epoch 24/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2281 - accuracy: 0.9464 - val_loss: 0.1425 - val_accuracy: 0.9510

Epoch 00024: val_accuracy did not improve from 0.96078
Epoch 25/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2298 - accuracy: 0.9273 - val_loss: 0.1306 - val_accuracy: 0.9510

Epoch 00025: val_accuracy did not improve from 0.96078
Epoch 26/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2233 - accuracy: 0.9377 - val_loss: 0.1160 - val_accuracy: 0.9804

Epoch 00026: val_accuracy improved from 0.96078 to 0.98039, saving model to weights.best.hdf5
Epoch 27/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2064 - accuracy: 0.9256 - val_loss: 0.1145 - val_accuracy: 0.9804

Epoch 00027: val_accuracy did not improve from 0.98039
Epoch 28/200
37/37 [==============================] - 0s 4ms/step - loss: 0.1826 - accuracy: 0.9481 - val_loss: 0.1148 - val_accuracy: 0.9804

Epoch 00028: val_accuracy did not improve from 0.98039
Epoch 29/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1817 - accuracy: 0.9412 - val_loss: 0.1077 - val_accuracy: 0.9804

Epoch 00029: val_accuracy did not improve from 0.98039
Epoch 30/200
37/37 [==============================] - 0s 3ms/step - loss: 0.2035 - accuracy: 0.9464 - val_loss: 0.1040 - val_accuracy: 0.9804

Epoch 00030: val_accuracy did not improve from 0.98039
Epoch 31/200
37/37 [==============================] - 0s 4ms/step - loss: 0.1689 - accuracy: 0.9567 - val_loss: 0.1041 - val_accuracy: 0.9706

Epoch 00031: val_accuracy did not improve from 0.98039
Epoch 32/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1537 - accuracy: 0.9602 - val_loss: 0.0953 - val_accuracy: 0.9804

Epoch 00032: val_accuracy did not improve from 0.98039
Epoch 33/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1910 - accuracy: 0.9377 - val_loss: 0.0996 - val_accuracy: 0.9804

Epoch 00033: val_accuracy did not improve from 0.98039
Epoch 34/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1646 - accuracy: 0.9550 - val_loss: 0.0945 - val_accuracy: 0.9706

Epoch 00034: val_accuracy did not improve from 0.98039
Epoch 35/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1585 - accuracy: 0.9498 - val_loss: 0.0856 - val_accuracy: 0.9804

Epoch 00035: val_accuracy did not improve from 0.98039
Epoch 36/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1643 - accuracy: 0.9464 - val_loss: 0.0842 - val_accuracy: 0.9804

Epoch 00036: val_accuracy did not improve from 0.98039
Epoch 37/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1495 - accuracy: 0.9585 - val_loss: 0.0832 - val_accuracy: 0.9804

Epoch 00037: val_accuracy did not improve from 0.98039
Epoch 38/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1366 - accuracy: 0.9585 - val_loss: 0.0738 - val_accuracy: 0.9804

Epoch 00038: val_accuracy did not improve from 0.98039
Epoch 39/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1362 - accuracy: 0.9654 - val_loss: 0.0712 - val_accuracy: 0.9706

Epoch 00039: val_accuracy did not improve from 0.98039
Epoch 40/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1144 - accuracy: 0.9775 - val_loss: 0.0695 - val_accuracy: 0.9706

Epoch 00040: val_accuracy did not improve from 0.98039
Epoch 41/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1259 - accuracy: 0.9637 - val_loss: 0.0645 - val_accuracy: 0.9804

Epoch 00041: val_accuracy did not improve from 0.98039
Epoch 42/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1066 - accuracy: 0.9723 - val_loss: 0.0575 - val_accuracy: 0.9804

Epoch 00042: val_accuracy did not improve from 0.98039
Epoch 43/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1253 - accuracy: 0.9619 - val_loss: 0.0548 - val_accuracy: 0.9804

Epoch 00043: val_accuracy did not improve from 0.98039
Epoch 44/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1036 - accuracy: 0.9689 - val_loss: 0.0702 - val_accuracy: 0.9804

Epoch 00044: val_accuracy did not improve from 0.98039
Epoch 45/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1221 - accuracy: 0.9567 - val_loss: 0.0570 - val_accuracy: 0.9804

Epoch 00045: val_accuracy did not improve from 0.98039
Epoch 46/200
37/37 [==============================] - 0s 3ms/step - loss: 0.1217 - accuracy: 0.9654 - val_loss: 0.0470 - val_accuracy: 0.9804

Epoch 00046: val_accuracy did not improve from 0.98039
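
Note that the ModelCheckpoint callback writes the best-scoring weights to weights.best.hdf5, but model.fit() leaves the model holding the weights from the last epoch it ran. If you want to evaluate or export the best checkpoint rather than the final one, a minimal sketch (reusing checkpoint_path from above):

# Restore the weights with the highest validation accuracy, as saved by the
# ModelCheckpoint callback during training.
model.load_weights(checkpoint_path)
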
# Visualize the training history to see whether you're overfitting.
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='lower right')
plt.show()

[Figure: model accuracy per epoch for training (TRAIN) and validation (VAL)]
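
Plotting the loss curves in the same way can make overfitting easier to spot. A small companion sketch using the same history object:

# Companion plot: training vs. validation loss from the same History object.
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='upper right')
plt.show()
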

# Evaluate the model using the TEST dataset
loss, accuracy = model.evaluate(X_test, y_test)
14/14 [==============================] - 0s 2ms/step - loss: 0.0426 - accuracy: 0.9953

Plot the confusion matrix to better understand the model's performance.

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
  """Plots the confusion matrix."""
  if normalize:
    cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    print("Normalized confusion matrix")
  else:
    print('Confusion matrix, without normalization')

  plt.imshow(cm, interpolation='nearest', cmap=cmap)
  plt.title(title)
  plt.colorbar()
  tick_marks = np.arange(len(classes))
  plt.xticks(tick_marks, classes, rotation=55)
  plt.yticks(tick_marks, classes)
  fmt = '.2f' if normalize else 'd'
  thresh = cm.max() / 2.
  for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, format(cm[i, j], fmt),
              horizontalalignment="center",
              color="white" if cm[i, j] > thresh else "black")

  plt.ylabel('True label')
  plt.xlabel('Predicted label')
  plt.tight_layout()

# Classify pose in the TEST dataset using the trained model
y_pred = model.predict(X_test)

# Convert the prediction result to class name
y_pred_label = [class_names[i] for i in np.argmax(y_pred, axis=1)]
y_true_label = [class_names[i] for i in np.argmax(y_test, axis=1)]

# Plot the confusion matrix
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))
plot_confusion_matrix(cm,
                      class_names,
                      title ='Confusion Matrix of Pose Classification Model')

# Print the classification report
print('\nClassification Report:\n', classification_report(y_true_label,
                                                          y_pred_label))
Confusion matrix, without normalization

Classification Report:
               precision    recall  f1-score   support

       chair       1.00      1.00      1.00        84
       cobra       0.98      1.00      0.99        93
         dog       1.00      1.00      1.00        84
        tree       1.00      1.00      1.00        96
     warrior       1.00      0.97      0.99        68

    accuracy                           1.00       425
   macro avg       1.00      0.99      0.99       425
weighted avg       1.00      1.00      1.00       425

[Figure: confusion matrix of the pose classification model]
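
The plot_confusion_matrix helper also supports a row-normalized view (normalize=True), which is easier to read when the classes have uneven support. A minimal usage sketch, reusing cm and class_names from the cell above:

# Row-normalized view: each row sums to 1, so cell (i, j) shows the fraction
# of true class i that was predicted as class j.
plot_confusion_matrix(cm,
                      class_names,
                      normalize=True,
                      title='Normalized Confusion Matrix of Pose Classification Model')
plt.show()
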

(Optional) Investigate incorrect predictions

You can look at the poses from the TEST dataset that were incorrectly predicted to see whether the model accuracy can be improved.

# Skip this cell if you didn't run Step 1, as it needs the original test images.
if is_skip_step_1:
  raise RuntimeError('You must have run step 1 to run this cell.')

IMAGE_PER_ROW = 3
MAX_NO_OF_IMAGE_TO_PLOT = 30

# Extract the list of incorrectly predicted poses
false_predict = [id_in_df for id_in_df in range(len(y_test))
                 if y_pred_label[id_in_df] != y_true_label[id_in_df]]
if len(false_predict) > MAX_NO_OF_IMAGE_TO_PLOT:
  false_predict = false_predict[:MAX_NO_OF_IMAGE_TO_PLOT]

# Plot the incorrectly predicted images
row_count = len(false_predict) // IMAGE_PER_ROW + 1
fig = plt.figure(figsize=(10 * IMAGE_PER_ROW, 10 * row_count))
for i, id_in_df in enumerate(false_predict):
  ax = fig.add_subplot(row_count, IMAGE_PER_ROW, i + 1)
  image_path = os.path.join(images_out_test_folder,
                            df_test.iloc[id_in_df]['file_name'])

  image = cv2.imread(image_path)
  plt.title("Predict: %s; Actual: %s"
            % (y_pred_label[id_in_df], y_true_label[id_in_df]))
  plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()

[Figure: test images that were incorrectly predicted, with predicted and actual labels]

Part 3: Convert the pose classification model to TensorFlow Lite

You'll convert the Keras pose classification model to the TensorFlow Lite format so that you can deploy it to mobile apps, web browsers, and IoT devices. When converting the model, you'll apply dynamic range quantization, which reduces the size of the pose classification TensorFlow Lite model by about 4 times with insignificant accuracy loss.

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

print('Model size: %dKB' % (len(tflite_model) / 1024))

with open('pose_classifier.tflite', 'wb') as f:
  f.write(tflite_model)
2021-11-02 12:46:36.839507: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: /tmp/tmp9dovzgpg/assets
Model size: 26KB
2021-11-02 12:46:39.019543: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-11-02 12:46:39.019594: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
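
To sanity-check the roughly 4x size reduction mentioned above, you can convert the same model without quantization and compare the two byte sizes. A quick sketch, assuming model and tflite_model are still in memory from the cells above:

# Convert the same Keras model without dynamic range quantization (no
# converter.optimizations set) and compare the resulting sizes.
converter_fp32 = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model_fp32 = converter_fp32.convert()

print('Float32 model size: %dKB' % (len(tflite_model_fp32) / 1024))
print('Quantized model size: %dKB' % (len(tflite_model) / 1024))
print('Size reduction: %.1fx' % (len(tflite_model_fp32) / len(tflite_model)))
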

Next, you'll write the label file, which contains the mapping from class indexes to human-readable class names.

with open('pose_labels.txt', 'w') as f:
  f.write('\n'.join(class_names))

As you've applied quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable.

def evaluate_model(interpreter, X, y_true):
  """Evaluates the given TFLite model and return its accuracy."""
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on all given poses.
  y_pred = []
  for i in range(len(y_true)):
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = X[i: i + 1].astype('float32')
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the class with highest
    # probability.
    output = interpreter.tensor(output_index)
    predicted_label = np.argmax(output()[0])
    y_pred.append(predicted_label)

  # Compare prediction results with ground truth labels to calculate accuracy.
  y_pred = keras.utils.to_categorical(y_pred)
  return accuracy_score(y_true, y_pred)

# Evaluate the accuracy of the converted TFLite model
classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model)
classifier_interpreter.allocate_tensors()
print('Accuracy of TFLite model: %s' %
      evaluate_model(classifier_interpreter, X_test, y_test))
Accuracy of TFLite model: 0.9976470588235294

Now you can download the TFLite model (pose_classifier.tflite) and the label file (pose_labels.txt) to classify custom poses. See the Android and Python/Raspberry Pi sample apps for an end-to-end example of how to use the TFLite pose classification model.

zip pose_classifier.zip pose_labels.txt pose_classifier.tflite
adding: pose_labels.txt (stored 0%)
  adding: pose_classifier.tflite (deflated 35%)
# Download the zip archive if running on Colab.
try:
  from google.colab import files
  files.download('pose_classifier.zip')
except:
  pass
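
If you want a quick local smoke test before wiring the model into an app, here is a minimal sketch that reloads the two exported files and classifies one test sample. It assumes pose_classifier.tflite and pose_labels.txt are still in the working directory and reuses X_test from Part 2:

# Reload the exported artifacts and run a single inference, mapping the output
# scores back to a human-readable pose name from the label file.
interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

with open('pose_labels.txt') as f:
  labels = f.read().splitlines()

sample = X_test[0:1].astype('float32')  # one row of pose landmarks
interpreter.set_tensor(input_index, sample)
interpreter.invoke()
scores = interpreter.get_tensor(output_index)[0]
print('Predicted pose: %s' % labels[int(np.argmax(scores))])
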