Human pose classification with MoveNet and TensorFlow Lite

This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. The result is a new TensorFlow Lite model that accepts the output from the MoveNet model as its input, and outputs a pose classification, such as the name of a yoga pose.

The procedure in this notebook consists of 3 parts:

  • Part 1: Preprocess the pose classification training data into a CSV file that specifies the landmarks (body keypoints) detected by the MoveNet model, along with the ground truth pose labels.
  • Part 2: Build and train a pose classification model that takes the landmark coordinates from the CSV file as input, and outputs the predicted labels.
  • Part 3: Convert the pose classification model to TFLite.

By default, this notebook uses an image dataset with labeled yoga poses, but we've also included a section in Part 1 where you can upload your own image dataset of poses.


Preparation

In this section, you'll import the necessary libraries and define several functions to preprocess the training images into a CSV file that contains the landmark coordinates and ground truth labels.

Nothing observable happens here, but you can expand the hidden code cells to see the implementation of some of the functions we'll be calling later.

If you only want to create the CSV file without knowing all the details, just run this section and proceed to Part 1.

pip install -q opencv-python
import csv
import cv2
import itertools
import numpy as np
import pandas as pd
import os
import sys
import tempfile
import tqdm

from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

Code to run pose estimation using MoveNet

Functions to run pose estimation with MoveNet

Cloning into 'examples'...
remote: Enumerating objects: 20141, done.
remote: Counting objects: 100% (1961/1961), done.
remote: Compressing objects: 100% (1055/1055), done.
remote: Total 20141 (delta 909), reused 1584 (delta 595), pack-reused 18180
Receiving objects: 100% (20141/20141), 33.15 MiB | 25.83 MiB/s, done.
Resolving deltas: 100% (11003/11003), done.

Functions to visualize the pose estimation results.

Code to load the images, detect pose landmarks, and save them into a CSV file

(Optional) Code snippet to try out the MoveNet pose estimation logic

--2021-12-21 12:07:36--  https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg
Resolving cdn.pixabay.com (cdn.pixabay.com)... 104.18.20.183, 104.18.21.183, 2606:4700::6812:14b7, ...
Connecting to cdn.pixabay.com (cdn.pixabay.com)|104.18.20.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28665 (28K) [image/jpeg]
Saving to: ‘/tmp/image.jpeg’

/tmp/image.jpeg     100%[===================>]  27.99K  --.-KB/s    in 0s      

2021-12-21 12:07:36 (111 MB/s) - ‘/tmp/image.jpeg’ saved [28665/28665]

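The optional snippet above (hidden) downloads a test image and runs the MoveNet pose estimation logic on it. As a rough illustration of that logic, here is a minimal sketch that loads the MoveNet Thunder model from TF Hub and runs single-pose detection on the downloaded image; the detect_pose helper name and the use of the TF Hub SavedModel (rather than the TFLite model that the hidden cells actually wrap) are assumptions made for this sketch.

import tensorflow as tf
import tensorflow_hub as hub

# Load MoveNet Thunder from TF Hub. Its serving signature expects an int32 image
# tensor of shape (1, 256, 256, 3) and returns 17 (y, x, score) keypoints.
movenet = hub.load("https://tfhub.dev/google/movenet/singlepose/thunder/4")

def detect_pose(image_path):
  """Runs MoveNet on a single image file and returns an array of shape (17, 3)."""
  image = tf.io.decode_jpeg(tf.io.read_file(image_path))
  image = tf.expand_dims(image, axis=0)
  image = tf.cast(tf.image.resize_with_pad(image, 256, 256), dtype=tf.int32)
  outputs = movenet.signatures['serving_default'](image)
  return outputs['output_0'].numpy()[0, 0]  # rows are (y, x, confidence)

print(detect_pose('/tmp/image.jpeg')[:3])  # first three keypoints of the test image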

Part 1: Preprocess the input images

Because the input for our pose classifier is the output landmarks from the MoveNet model, we need to generate our training dataset by running the labeled images through MoveNet and then capturing all the landmark data and the ground truth labels into a CSV file.

The dataset we've provided for this tutorial is a CG-generated yoga pose dataset. It contains images of multiple CG-generated models doing 5 different yoga poses. The directory is already split into a train dataset and a test dataset.

So in this section, we'll download the yoga dataset and run it through MoveNet so we can capture all the landmarks into a CSV file... However, it takes about 15 minutes to feed our yoga dataset through MoveNet and generate this CSV file. So as an alternative, you can download a pre-existing CSV file for the yoga dataset by setting the is_skip_step_1 parameter below to True. That way, you'll skip this step and instead download the same CSV file that would be created by this preprocessing step.

On the other hand, if you want to train the pose classifier with your own image dataset, you need to upload your images and run this preprocessing step (leave is_skip_step_1 set to False), following the instructions below to upload your own pose dataset.
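For reference, the three configuration flags mentioned above (is_skip_step_1, use_custom_dataset, dataset_is_split) are set in a hidden cell. A minimal sketch of that configuration, with assumed default values:

# Assumed defaults for the notebook's configuration flags (defined in a hidden cell):
is_skip_step_1 = False      # True: skip preprocessing and download the prebuilt CSVs instead
use_custom_dataset = False  # True: use your own uploaded pose images
dataset_is_split = False    # True: your dataset already contains train/ and test/ folders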

(Optional) Upload your own pose dataset

If you want to train the pose classifier with your own labeled poses (they can be any poses, not just yoga poses), follow these steps:

  1. Set the use_custom_dataset option above to True.

  2. Prepare an archive file (ZIP, TAR, or other) that includes a folder with your image dataset. The folder must contain sorted images of your poses as follows.

    If you've already split your dataset into train and test sets, then set dataset_is_split to True. That is, your images folder must include "train" and "test" directories like this:

    yoga_poses/
    |__ train/
        |__ downdog/
            |______ 00000128.jpg
            |______ ...
    |__ test/
        |__ downdog/
            |______ 00000181.jpg
            |______ ...
    

    Or, if your dataset is NOT split yet, then set dataset_is_split to False and we'll split it up based on a specified split fraction. That is, your uploaded images folder should look like this:

    yoga_poses/
    |__ downdog/
        |______ 00000128.jpg
        |______ 00000181.jpg
        |______ ...
    |__ goddess/
        |______ 00000243.jpg
        |______ 00000306.jpg
        |______ ...
    
  3. Click the Files tab on the left (folder icon) and then click Upload to session storage (file icon).

  4. Select your archive file and wait until it finishes uploading before you proceed.

  5. Edit the code block below to specify the name of your archive file and images directory. (By default, we expect a ZIP file, so you'll need to also modify that part if your archive is in another format.)

  6. Now run the rest of the notebook.

if use_custom_dataset:
  # ATTENTION:
  # You must edit these two lines to match your archive and images folder name:
  # !tar -xf YOUR_DATASET_ARCHIVE_NAME.tar
  !unzip -q YOUR_DATASET_ARCHIVE_NAME.zip
  dataset_in = 'YOUR_DATASET_DIR_NAME'

  # You can leave the rest alone:
  if not os.path.isdir(dataset_in):
    raise Exception("dataset_in is not a valid directory")
  if dataset_is_split:
    IMAGES_ROOT = dataset_in
  else:
    dataset_out = 'split_' + dataset_in
    split_into_train_test(dataset_in, dataset_out, test_split=0.2)
    IMAGES_ROOT = dataset_out
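
The split_into_train_test helper used above is defined in one of the hidden cells. A minimal sketch of what such a splitter could look like, assuming a simple random per-class split (the actual helper may differ in details such as shuffling and file filtering):

import os
import random
import shutil

def split_into_train_test(images_origin, images_dest, test_split):
  """Copies each class folder into 'train' and 'test' subfolders of images_dest."""
  for class_name in sorted(os.listdir(images_origin)):
    class_dir = os.path.join(images_origin, class_name)
    if not os.path.isdir(class_dir):
      continue
    filenames = sorted(os.listdir(class_dir))
    random.shuffle(filenames)
    num_test = int(len(filenames) * test_split)
    splits = {'test': filenames[:num_test], 'train': filenames[num_test:]}
    for split_name, split_files in splits.items():
      out_dir = os.path.join(images_dest, split_name, class_name)
      os.makedirs(out_dir, exist_ok=True)
      for filename in split_files:
        shutil.copyfile(os.path.join(class_dir, filename),
                        os.path.join(out_dir, filename))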

Download the yoga dataset

if not is_skip_step_1 and not use_custom_dataset:
  !wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
  !unzip -q yoga_poses.zip -d yoga_cg
  IMAGES_ROOT = "yoga_cg"
--2021-12-21 12:07:46--  http://download.tensorflow.org/data/pose_classification/yoga_poses.zip
Resolving download.tensorflow.org (download.tensorflow.org)... 172.217.218.128, 2a00:1450:4013:c08::80
Connecting to download.tensorflow.org (download.tensorflow.org)|172.217.218.128|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102517581 (98M) [application/zip]
Saving to: ‘yoga_poses.zip’

yoga_poses.zip      100%[===================>]  97.77M  76.7MB/s    in 1.3s    

2021-12-21 12:07:48 (76.7 MB/s) - ‘yoga_poses.zip’ saved [102517581/102517581]

Preprocess the TRAIN dataset

if not is_skip_step_1:
  images_in_train_folder = os.path.join(IMAGES_ROOT, 'train')
  images_out_train_folder = 'poses_images_out_train'
  csvs_out_train_path = 'train_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_train_folder,
      images_out_folder=images_out_train_folder,
      csvs_out_path=csvs_out_train_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/200 [00:00<?, ?it/s]/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/ipykernel_launcher.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
100%|██████████| 200/200 [00:32<00:00,  6.10it/s]
Preprocessing cobra
100%|██████████| 200/200 [00:31<00:00,  6.27it/s]
Preprocessing dog
100%|██████████| 200/200 [00:32<00:00,  6.15it/s]
Preprocessing tree
100%|██████████| 200/200 [00:33<00:00,  6.00it/s]
Preprocessing warrior
100%|██████████| 200/200 [00:30<00:00,  6.54it/s]
Skipped yoga_cg/train/chair/girl3_chair091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair123.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/girl3_chair142.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair144.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/chair/guy2_chair146.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra038.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra110.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra128.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl1_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra029.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra133.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl2_cobra140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra039.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra130.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/girl3_cobra138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra034.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra078.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/cobra/guy2_cobra139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog030.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl1_dog032.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog101.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog105.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl2_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog027.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog028.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog031.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog033.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog035.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog037.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog040.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog041.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog104.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/girl3_dog111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy1_dog076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/dog/guy2_dog082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree119.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree161.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/girl2_tree163.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree141.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy1_tree143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/tree/guy2_tree147.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior064.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior098.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior099.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior109.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior112.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior113.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior114.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior116.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl1_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior102.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl2_warrior108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior049.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior061.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior067.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior090.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior095.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior096.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior100.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior103.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior107.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior115.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior117.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior140.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/girl3_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior091.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior092.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior093.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior094.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior097.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior120.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior121.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior125.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior126.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior138.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior143.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy1_warrior148.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior111.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior118.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior122.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior129.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior137.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior139.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior145.jpg. No pose was confidentlly detected.
Skipped yoga_cg/train/warrior/guy2_warrior148.jpg. No pose was confidentlly detected.

Preprocess the TEST dataset

if not is_skip_step_1:
  images_in_test_folder = os.path.join(IMAGES_ROOT, 'test')
  images_out_test_folder = 'poses_images_out_test'
  csvs_out_test_path = 'test_data.csv'

  preprocessor = MoveNetPreprocessor(
      images_in_folder=images_in_test_folder,
      images_out_folder=images_out_test_folder,
      csvs_out_path=csvs_out_test_path,
  )

  preprocessor.process(per_pose_class_limit=None)
Preprocessing chair
  0%|          | 0/84 [00:00<?, ?it/s]/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/ipykernel_launcher.py:128: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
100%|██████████| 84/84 [00:15<00:00,  5.51it/s]
Preprocessing cobra
100%|██████████| 116/116 [00:19<00:00,  6.10it/s]
Preprocessing dog
100%|██████████| 90/90 [00:14<00:00,  6.03it/s]
Preprocessing tree
100%|██████████| 96/96 [00:16<00:00,  5.98it/s]
Preprocessing warrior
100%|██████████| 109/109 [00:17<00:00,  6.38it/s]
Skipped yoga_cg/test/cobra/guy3_cobra048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra057.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra058.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra069.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra124.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra131.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra132.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra134.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra135.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/cobra/guy3_cobra136.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog025.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog026.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog036.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog106.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/dog/guy3_dog108.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior042.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior043.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior044.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior045.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior046.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior047.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior048.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior050.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior051.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior052.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior053.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior054.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior055.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior056.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior059.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior060.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior062.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior063.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior065.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior066.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior068.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior070.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior071.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior072.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior073.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior074.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior075.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior076.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior077.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior079.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior080.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior081.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior082.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior083.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior084.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior085.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior086.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior087.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior088.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior089.jpg. No pose was confidentlly detected.
Skipped yoga_cg/test/warrior/guy3_warrior137.jpg. No pose was confidentlly detected.

Part 2: Train a pose classification model that takes the landmark coordinates as input, and outputs the predicted labels.

You'll build a TensorFlow model that takes the landmark coordinates and predicts the pose class that the person in the input image performs. The model consists of two submodels:

  • Submodel 1 calculates a pose embedding (a.k.a. feature vector) from the detected landmark coordinates.
  • Submodel 2 feeds the pose embedding through several Dense layers to predict the pose class.

You'll then train the model based on the dataset that was preprocessed in Part 1.

(Optional) Download the preprocessed dataset if you didn't run Part 1

# Download the preprocessed CSV files which are the same as the output of step 1
if is_skip_step_1:
  !wget -O train_data.csv http://download.tensorflow.org/data/pose_classification/yoga_train_data.csv
  !wget -O test_data.csv http://download.tensorflow.org/data/pose_classification/yoga_test_data.csv

  csvs_out_train_path = 'train_data.csv'
  csvs_out_test_path = 'test_data.csv'
  is_skipped_step_1 = True

Load the preprocessed CSVs into TRAIN and TEST datasets.

def load_pose_landmarks(csv_path):
  """Loads a CSV created by MoveNetPreprocessor.

  Returns:
    X: Detected landmark coordinates and scores of shape (N, 17 * 3)
    y: Ground truth labels of shape (N, label_count)
    classes: The list of all class names found in the dataset
    dataframe: The CSV loaded as a Pandas dataframe features (X) and ground
      truth labels (y) to use later to train a pose classification model.
  """

  # Load the CSV file
  dataframe = pd.read_csv(csv_path)
  df_to_process = dataframe.copy()

  # Drop the file_name column as you don't need it during training.
  df_to_process.drop(columns=['file_name'], inplace=True)

  # Extract the list of class names
  classes = df_to_process.pop('class_name').unique()

  # Extract the labels
  y = df_to_process.pop('class_no')

  # Convert the input features and labels into the correct format for training.
  X = df_to_process.astype('float64')
  y = keras.utils.to_categorical(y)

  return X, y, classes, dataframe

Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%).

# Load the train data
X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path)

# Split training data (X, y) into (X_train, y_train) and (X_val, y_val)
X_train, X_val, y_train, y_val = train_test_split(X, y,
                                                  test_size=0.15)
# Load the test data
X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path)

Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification

Next, convert the landmark coordinates to a feature vector by:

  1. Moving the pose center to the origin.
  2. Scaling the pose so that the pose size becomes 1.
  3. Flattening these coordinates into a feature vector.

Then use this feature vector to train a neural-network based pose classifier.

def get_center_point(landmarks, left_bodypart, right_bodypart):
  """Calculates the center point of the two given landmarks."""

  left = tf.gather(landmarks, left_bodypart.value, axis=1)
  right = tf.gather(landmarks, right_bodypart.value, axis=1)
  center = left * 0.5 + right * 0.5
  return center


def get_pose_size(landmarks, torso_size_multiplier=2.5):
  """Calculates pose size.

  It is the maximum of two values:
    * Torso size multiplied by `torso_size_multiplier`
    * Maximum distance from pose center to any pose landmark
  """
  # Hips center
  hips_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)

  # Shoulders center
  shoulders_center = get_center_point(landmarks, BodyPart.LEFT_SHOULDER,
                                      BodyPart.RIGHT_SHOULDER)

  # Torso size as the minimum body size
  torso_size = tf.linalg.norm(shoulders_center - hips_center)

  # Pose center
  pose_center_new = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                     BodyPart.RIGHT_HIP)
  pose_center_new = tf.expand_dims(pose_center_new, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to
  # perform subtraction
  pose_center_new = tf.broadcast_to(pose_center_new,
                                    [tf.size(landmarks) // (17*2), 17, 2])

  # Dist to pose center
  d = tf.gather(landmarks - pose_center_new, 0, axis=0,
                name="dist_to_pose_center")
  # Max dist to pose center
  max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0))

  # Normalize scale
  pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist)

  return pose_size


def normalize_pose_landmarks(landmarks):
  """Normalizes the landmarks translation by moving the pose center to (0,0) and
  scaling it to a constant pose size.
  """
  # Move landmarks so that the pose center becomes (0,0)
  pose_center = get_center_point(landmarks, BodyPart.LEFT_HIP, 
                                 BodyPart.RIGHT_HIP)
  pose_center = tf.expand_dims(pose_center, axis=1)
  # Broadcast the pose center to the same size as the landmark vector to perform
  # subtraction
  pose_center = tf.broadcast_to(pose_center, 
                                [tf.size(landmarks) // (17*2), 17, 2])
  landmarks = landmarks - pose_center

  # Scale the landmarks to a constant pose size
  pose_size = get_pose_size(landmarks)
  landmarks /= pose_size

  return landmarks


def landmarks_to_embedding(landmarks_and_scores):
  """Converts the input landmarks into a pose embedding."""
  # Reshape the flat input into a matrix with shape=(17, 3)
  reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores)

  # Normalize landmarks 2D
  landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2])

  # Flatten the normalized landmark coordinates into a vector
  embedding = keras.layers.Flatten()(landmarks)

  return embedding
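
As a quick sanity check (not part of the original notebook), you can feed a dummy landmark vector through the embedding function and confirm that the 51 input values (17 landmarks × 3) collapse into a 34-dimensional embedding (17 landmarks × 2 normalized coordinates):

# Hypothetical smoke test: one random "detection" of shape (1, 51).
dummy_landmarks = tf.random.uniform((1, 51))
print(landmarks_to_embedding(dummy_landmarks).shape)  # expected: (1, 34)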

Define a Keras model for pose classification

Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class.

# Define the model
inputs = tf.keras.Input(shape=(51))
embedding = landmarks_to_embedding(inputs)

layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(len(class_names), activation="softmax")(layer)

model = keras.Model(inputs, outputs)
model.summary()
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input_1 (InputLayer)           [(None, 51)]         0           []                               
                                                                                                  
 reshape (Reshape)              (None, 17, 3)        0           ['input_1[0][0]']                
                                                                                                  
 tf.__operators__.getitem (Slic  (None, 17, 2)       0           ['reshape[0][0]']                
 ingOpLambda)                                                                                     
                                                                                                  
 tf.compat.v1.gather (TFOpLambd  (None, 2)           0           ['tf.__operators__.getitem[0][0]'
 a)                                                              ]                                
                                                                                                  
 tf.compat.v1.gather_1 (TFOpLam  (None, 2)           0           ['tf.__operators__.getitem[0][0]'
 bda)                                                            ]                                
                                                                                                  
 tf.math.multiply (TFOpLambda)  (None, 2)            0           ['tf.compat.v1.gather[0][0]']    
                                                                                                  
 tf.math.multiply_1 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_1[0][0]']  
 )                                                                                                
                                                                                                  
 tf.__operators__.add (TFOpLamb  (None, 2)           0           ['tf.math.multiply[0][0]',       
 da)                                                              'tf.math.multiply_1[0][0]']     
                                                                                                  
 tf.compat.v1.size (TFOpLambda)  ()                  0           ['tf.__operators__.getitem[0][0]'
                                                                 ]                                
                                                                                                  
 tf.expand_dims (TFOpLambda)    (None, 1, 2)         0           ['tf.__operators__.add[0][0]']   
                                                                                                  
 tf.compat.v1.floor_div (TFOpLa  ()                  0           ['tf.compat.v1.size[0][0]']      
 mbda)                                                                                            
                                                                                                  
 tf.broadcast_to (TFOpLambda)   (None, 17, 2)        0           ['tf.expand_dims[0][0]',         
                                                                  'tf.compat.v1.floor_div[0][0]'] 
                                                                                                  
 tf.math.subtract (TFOpLambda)  (None, 17, 2)        0           ['tf.__operators__.getitem[0][0]'
                                                                 , 'tf.broadcast_to[0][0]']       
                                                                                                  
 tf.compat.v1.gather_6 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_7 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.math.multiply_6 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_6[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_7 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_7[0][0]']  
 )                                                                                                
                                                                                                  
 tf.__operators__.add_3 (TFOpLa  (None, 2)           0           ['tf.math.multiply_6[0][0]',     
 mbda)                                                            'tf.math.multiply_7[0][0]']     
                                                                                                  
 tf.compat.v1.size_1 (TFOpLambd  ()                  0           ['tf.math.subtract[0][0]']       
 a)                                                                                               
                                                                                                  
 tf.compat.v1.gather_4 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_5 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_2 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.gather_3 (TFOpLam  (None, 2)           0           ['tf.math.subtract[0][0]']       
 bda)                                                                                             
                                                                                                  
 tf.expand_dims_1 (TFOpLambda)  (None, 1, 2)         0           ['tf.__operators__.add_3[0][0]'] 
                                                                                                  
 tf.compat.v1.floor_div_1 (TFOp  ()                  0           ['tf.compat.v1.size_1[0][0]']    
 Lambda)                                                                                          
                                                                                                  
 tf.math.multiply_4 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_4[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_5 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_5[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_2 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_2[0][0]']  
 )                                                                                                
                                                                                                  
 tf.math.multiply_3 (TFOpLambda  (None, 2)           0           ['tf.compat.v1.gather_3[0][0]']  
 )                                                                                                
                                                                                                  
 tf.broadcast_to_1 (TFOpLambda)  (None, 17, 2)       0           ['tf.expand_dims_1[0][0]',       
                                                                  'tf.compat.v1.floor_div_1[0][0]'
                                                                 ]                                
                                                                                                  
 tf.__operators__.add_2 (TFOpLa  (None, 2)           0           ['tf.math.multiply_4[0][0]',     
 mbda)                                                            'tf.math.multiply_5[0][0]']     
                                                                                                  
 tf.__operators__.add_1 (TFOpLa  (None, 2)           0           ['tf.math.multiply_2[0][0]',     
 mbda)                                                            'tf.math.multiply_3[0][0]']     
                                                                                                  
 tf.math.subtract_2 (TFOpLambda  (None, 17, 2)       0           ['tf.math.subtract[0][0]',       
 )                                                                'tf.broadcast_to_1[0][0]']      
                                                                                                  
 tf.math.subtract_1 (TFOpLambda  (None, 2)           0           ['tf.__operators__.add_2[0][0]', 
 )                                                                'tf.__operators__.add_1[0][0]'] 
                                                                                                  
 tf.compat.v1.gather_8 (TFOpLam  (17, 2)             0           ['tf.math.subtract_2[0][0]']     
 bda)                                                                                             
                                                                                                  
 tf.compat.v1.norm (TFOpLambda)  ()                  0           ['tf.math.subtract_1[0][0]']     
                                                                                                  
 tf.compat.v1.norm_1 (TFOpLambd  (2,)                0           ['tf.compat.v1.gather_8[0][0]']  
 a)                                                                                               
                                                                                                  
 tf.math.multiply_8 (TFOpLambda  ()                  0           ['tf.compat.v1.norm[0][0]']      
 )                                                                                                
                                                                                                  
 tf.math.reduce_max (TFOpLambda  ()                  0           ['tf.compat.v1.norm_1[0][0]']    
 )                                                                                                
                                                                                                  
 tf.math.maximum (TFOpLambda)   ()                   0           ['tf.math.multiply_8[0][0]',     
                                                                  'tf.math.reduce_max[0][0]']     
                                                                                                  
 tf.math.truediv (TFOpLambda)   (None, 17, 2)        0           ['tf.math.subtract[0][0]',       
                                                                  'tf.math.maximum[0][0]']        
                                                                                                  
 flatten (Flatten)              (None, 34)           0           ['tf.math.truediv[0][0]']        
                                                                                                  
 dense (Dense)                  (None, 128)          4480        ['flatten[0][0]']                
                                                                                                  
 dropout (Dropout)              (None, 128)          0           ['dense[0][0]']                  
                                                                                                  
 dense_1 (Dense)                (None, 64)           8256        ['dropout[0][0]']                
                                                                                                  
 dropout_1 (Dropout)            (None, 64)           0           ['dense_1[0][0]']                
                                                                                                  
 dense_2 (Dense)                (None, 5)            325         ['dropout_1[0][0]']              
                                                                                                  
==================================================================================================
Total params: 13,061
Trainable params: 13,061
Non-trainable params: 0
__________________________________________________________________________________________________
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

# Add a checkpoint callback to store the checkpoint that has the highest
# validation accuracy.
checkpoint_path = "weights.best.hdf5"
checkpoint = keras.callbacks.ModelCheckpoint(checkpoint_path,
                             monitor='val_accuracy',
                             verbose=1,
                             save_best_only=True,
                             mode='max')
earlystopping = keras.callbacks.EarlyStopping(monitor='val_accuracy', 
                                              patience=20)

# Start training
history = model.fit(X_train, y_train,
                    epochs=200,
                    batch_size=16,
                    validation_data=(X_val, y_val),
                    callbacks=[checkpoint, earlystopping])
Epoch 1/200
19/37 [==============>...............] - ETA: 0s - loss: 1.5703 - accuracy: 0.3684 
Epoch 00001: val_accuracy improved from -inf to 0.64706, saving model to weights.best.hdf5
37/37 [==============================] - 1s 11ms/step - loss: 1.5090 - accuracy: 0.4602 - val_loss: 1.3352 - val_accuracy: 0.6471
Epoch 2/200
20/37 [===============>..............] - ETA: 0s - loss: 1.3372 - accuracy: 0.4844
Epoch 00002: val_accuracy improved from 0.64706 to 0.67647, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 1.2375 - accuracy: 0.5190 - val_loss: 1.0193 - val_accuracy: 0.6765
Epoch 3/200
20/37 [===============>..............] - ETA: 0s - loss: 1.0596 - accuracy: 0.5469
Epoch 00003: val_accuracy improved from 0.67647 to 0.75490, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 1.0096 - accuracy: 0.5796 - val_loss: 0.8397 - val_accuracy: 0.7549
Epoch 4/200
21/37 [================>.............] - ETA: 0s - loss: 0.8922 - accuracy: 0.6220
Epoch 00004: val_accuracy improved from 0.75490 to 0.81373, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.8798 - accuracy: 0.6349 - val_loss: 0.7103 - val_accuracy: 0.8137
Epoch 5/200
20/37 [===============>..............] - ETA: 0s - loss: 0.7895 - accuracy: 0.6875
Epoch 00005: val_accuracy improved from 0.81373 to 0.82353, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.7810 - accuracy: 0.6903 - val_loss: 0.6120 - val_accuracy: 0.8235
Epoch 6/200
20/37 [===============>..............] - ETA: 0s - loss: 0.7324 - accuracy: 0.7250
Epoch 00006: val_accuracy improved from 0.82353 to 0.92157, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.7263 - accuracy: 0.7093 - val_loss: 0.5297 - val_accuracy: 0.9216
Epoch 7/200
19/37 [==============>...............] - ETA: 0s - loss: 0.6852 - accuracy: 0.7467
Epoch 00007: val_accuracy did not improve from 0.92157
37/37 [==============================] - 0s 4ms/step - loss: 0.6450 - accuracy: 0.7595 - val_loss: 0.4635 - val_accuracy: 0.8922
Epoch 8/200
20/37 [===============>..............] - ETA: 0s - loss: 0.6007 - accuracy: 0.7719
Epoch 00008: val_accuracy did not improve from 0.92157
37/37 [==============================] - 0s 4ms/step - loss: 0.5751 - accuracy: 0.7837 - val_loss: 0.4012 - val_accuracy: 0.9216
Epoch 9/200
20/37 [===============>..............] - ETA: 0s - loss: 0.5358 - accuracy: 0.8125
Epoch 00009: val_accuracy improved from 0.92157 to 0.93137, saving model to weights.best.hdf5
37/37 [==============================] - 0s 4ms/step - loss: 0.5272 - accuracy: 0.8097 - val_loss: 0.3547 - val_accuracy: 0.9314
Epoch 10/200
20/37 [===============>..............] - ETA: 0s - loss: 0.5200 - accuracy: 0.8094
Epoch 00010: val_accuracy improved from 0.93137 to 0.98039, saving model to weights.best.hdf5
37/37 [==============================] - 0s 5ms/step - loss: 0.5051 - accuracy: 0.8218 - val_loss: 0.3014 - val_accuracy: 0.9804
Epoch 11/200
19/37 [==============>...............] - ETA: 0s - loss: 0.4413 - accuracy: 0.8322
Epoch 00011: val_accuracy did not improve from 0.98039
37/37 [==============================] - 0s 4ms/step - loss: 0.4509 - accuracy: 0.8374 - val_loss: 0.2786 - val_accuracy: 0.9706
Epoch 12/200
20/37 [===============>..............] - ETA: 0s - loss: 0.4323 - accuracy: 0.8156
Epoch 00012: val_accuracy improved from 0.98039 to 0.99020, saving model to weights.best.hdf5
37/37 [==============================] - 0s 5ms/step - loss: 0.4377 - accuracy: 0.8253 - val_loss: 0.2440 - val_accuracy: 0.9902
Epoch 13/200
20/37 [===============>..............] - ETA: 0s - loss: 0.4037 - accuracy: 0.8719
Epoch 00013: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.4187 - accuracy: 0.8668 - val_loss: 0.2109 - val_accuracy: 0.9804
Epoch 14/200
20/37 [===============>..............] - ETA: 0s - loss: 0.3664 - accuracy: 0.8813
Epoch 00014: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3733 - accuracy: 0.8772 - val_loss: 0.2030 - val_accuracy: 0.9804
Epoch 15/200
20/37 [===============>..............] - ETA: 0s - loss: 0.3708 - accuracy: 0.8781
Epoch 00015: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3684 - accuracy: 0.8754 - val_loss: 0.1765 - val_accuracy: 0.9902
Epoch 16/200
21/37 [================>.............] - ETA: 0s - loss: 0.3238 - accuracy: 0.9137
Epoch 00016: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3213 - accuracy: 0.9100 - val_loss: 0.1662 - val_accuracy: 0.9804
Epoch 17/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2739 - accuracy: 0.9281
Epoch 00017: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3015 - accuracy: 0.9100 - val_loss: 0.1423 - val_accuracy: 0.9804
Epoch 18/200
20/37 [===============>..............] - ETA: 0s - loss: 0.3076 - accuracy: 0.9062
Epoch 00018: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.3022 - accuracy: 0.9048 - val_loss: 0.1407 - val_accuracy: 0.9804
Epoch 19/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2719 - accuracy: 0.9250
Epoch 00019: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2697 - accuracy: 0.9291 - val_loss: 0.1191 - val_accuracy: 0.9902
Epoch 20/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2960 - accuracy: 0.9031
Epoch 00020: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2775 - accuracy: 0.9100 - val_loss: 0.1120 - val_accuracy: 0.9902
Epoch 21/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2590 - accuracy: 0.9250
Epoch 00021: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2537 - accuracy: 0.9273 - val_loss: 0.1022 - val_accuracy: 0.9902
Epoch 22/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2504 - accuracy: 0.9344
Epoch 00022: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2661 - accuracy: 0.9204 - val_loss: 0.0976 - val_accuracy: 0.9902
Epoch 23/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2384 - accuracy: 0.9156
Epoch 00023: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2182 - accuracy: 0.9308 - val_loss: 0.0944 - val_accuracy: 0.9902
Epoch 24/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2157 - accuracy: 0.9375
Epoch 00024: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2031 - accuracy: 0.9412 - val_loss: 0.0844 - val_accuracy: 0.9902
Epoch 25/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1944 - accuracy: 0.9469
Epoch 00025: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2080 - accuracy: 0.9343 - val_loss: 0.0811 - val_accuracy: 0.9902
Epoch 26/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2232 - accuracy: 0.9312
Epoch 00026: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2033 - accuracy: 0.9394 - val_loss: 0.0703 - val_accuracy: 0.9902
Epoch 27/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2120 - accuracy: 0.9281
Epoch 00027: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1845 - accuracy: 0.9481 - val_loss: 0.0708 - val_accuracy: 0.9902
Epoch 28/200
20/37 [===============>..............] - ETA: 0s - loss: 0.2696 - accuracy: 0.9156
Epoch 00028: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.2355 - accuracy: 0.9273 - val_loss: 0.0679 - val_accuracy: 0.9902
Epoch 29/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1794 - accuracy: 0.9531
Epoch 00029: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1938 - accuracy: 0.9498 - val_loss: 0.0623 - val_accuracy: 0.9902
Epoch 30/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1831 - accuracy: 0.9406
Epoch 00030: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1758 - accuracy: 0.9498 - val_loss: 0.0599 - val_accuracy: 0.9902
Epoch 31/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1967 - accuracy: 0.9375
Epoch 00031: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1724 - accuracy: 0.9516 - val_loss: 0.0565 - val_accuracy: 0.9902
Epoch 32/200
20/37 [===============>..............] - ETA: 0s - loss: 0.1868 - accuracy: 0.9219
Epoch 00032: val_accuracy did not improve from 0.99020
37/37 [==============================] - 0s 4ms/step - loss: 0.1676 - accuracy: 0.9360 - val_loss: 0.0503 - val_accuracy: 0.9902
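The ModelCheckpoint callback above saves the weights with the highest validation accuracy to weights.best.hdf5, while the evaluation further below uses whatever weights the model ended training with. If you prefer to evaluate the best checkpoint instead, you can optionally restore it first; a minimal sketch (this step is not part of the original notebook flow):

# Optional: reload the weights saved by the checkpoint callback so that the
# evaluation below uses the best validation-accuracy checkpoint rather than
# the weights from the final training epoch.
model.load_weights(checkpoint_path)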
# Visualize the training history to see whether you're overfitting.
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='lower right')
plt.show()

png

# Evaluate the model using the TEST dataset
loss, accuracy = model.evaluate(X_test, y_test)
14/14 [==============================] - 0s 2ms/step - loss: 0.0612 - accuracy: 0.9976

To better understand the model's performance, plot the confusion matrix.

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
  """Plots the confusion matrix."""
  if normalize:
    cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    print("Normalized confusion matrix")
  else:
    print('Confusion matrix, without normalization')

  plt.imshow(cm, interpolation='nearest', cmap=cmap)
  plt.title(title)
  plt.colorbar()
  tick_marks = np.arange(len(classes))
  plt.xticks(tick_marks, classes, rotation=55)
  plt.yticks(tick_marks, classes)
  fmt = '.2f' if normalize else 'd'
  thresh = cm.max() / 2.
  for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, format(cm[i, j], fmt),
              horizontalalignment="center",
              color="white" if cm[i, j] > thresh else "black")

  plt.ylabel('True label')
  plt.xlabel('Predicted label')
  plt.tight_layout()

# Classify pose in the TEST dataset using the trained model
y_pred = model.predict(X_test)

# Convert the prediction result to class name
y_pred_label = [class_names[i] for i in np.argmax(y_pred, axis=1)]
y_true_label = [class_names[i] for i in np.argmax(y_test, axis=1)]

# Plot the confusion matrix
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))
plot_confusion_matrix(cm,
                      class_names,
                      title ='Confusion Matrix of Pose Classification Model')

# Print the classification report
print('\nClassification Report:\n', classification_report(y_true_label,
                                                          y_pred_label))
Confusion matrix, without normalization

Classification Report:
               precision    recall  f1-score   support

       chair       1.00      1.00      1.00        84
       cobra       0.99      1.00      0.99        93
         dog       1.00      1.00      1.00        84
        tree       1.00      1.00      1.00        96
     warrior       1.00      0.99      0.99        68

    accuracy                           1.00       425
   macro avg       1.00      1.00      1.00       425
weighted avg       1.00      1.00      1.00       425

png

(Optional) Investigate incorrect predictions

You can look at the poses from the TEST dataset that were incorrectly predicted to see whether the model's accuracy can be improved.

if is_skip_step_1:
  raise RuntimeError('You must have run step 1 to run this cell.')

# If step 1 was skipped, skip this step.
IMAGE_PER_ROW = 3
MAX_NO_OF_IMAGE_TO_PLOT = 30

# Extract the list of incorrectly predicted poses
false_predict = [id_in_df for id_in_df in range(len(y_test)) \
                if y_pred_label[id_in_df] != y_true_label[id_in_df]]
if len(false_predict) > MAX_NO_OF_IMAGE_TO_PLOT:
  false_predict = false_predict[:MAX_NO_OF_IMAGE_TO_PLOT]

# Plot the incorrectly predicted images
row_count = len(false_predict) // IMAGE_PER_ROW + 1
fig = plt.figure(figsize=(10 * IMAGE_PER_ROW, 10 * row_count))
for i, id_in_df in enumerate(false_predict):
  ax = fig.add_subplot(row_count, IMAGE_PER_ROW, i + 1)
  image_path = os.path.join(images_out_test_folder,
                            df_test.iloc[id_in_df]['file_name'])

  image = cv2.imread(image_path)
  plt.title("Predict: %s; Actual: %s"
            % (y_pred_label[id_in_df], y_true_label[id_in_df]))
  plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.show()

png

Part 3: Convert the pose classification model to TensorFlow Lite

You will convert the Keras pose classification model to the TensorFlow Lite format so that you can deploy it to mobile apps, web browsers, and IoT devices. When converting the model, you will apply dynamic range quantization to reduce the pose classification TensorFlow Lite model size by about 4 times with negligible accuracy loss.

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

print('Model size: %dKB' % (len(tflite_model) / 1024))

with open('pose_classifier.tflite', 'wb') as f:
  f.write(tflite_model)
2021-12-21 12:12:00.560331: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: /tmp/tmpr1ewa_xj/assets
2021-12-21 12:12:02.324896: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:363] Ignored output_format.
2021-12-21 12:12:02.324941: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:366] Ignored drop_control_dependency.
WARNING:absl:Buffer deduplication procedure will be skipped when flatbuffer library is not properly loaded
Model size: 26KB
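To see the effect of the dynamic range quantization described above, you could also convert the same Keras model without any optimizations and compare the two file sizes; a minimal sketch (the roughly 4x reduction depends on the model, and this comparison is not part of the original notebook):

# Sketch: convert the model again without quantization to compare sizes.
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite_model = float_converter.convert()

print('Float model size: %dKB' % (len(float_tflite_model) / 1024))
print('Quantized model size: %dKB' % (len(tflite_model) / 1024))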

Then you will write the label file, which contains the mapping from class indices to human-readable class names.

with open('pose_labels.txt', 'w') as f:
  f.write('\n'.join(class_names))

As you have applied quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable.

def evaluate_model(interpreter, X, y_true):
  """Evaluates the given TFLite model and return its accuracy."""
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on all given poses.
  y_pred = []
  for i in range(len(y_true)):
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = X[i: i + 1].astype('float32')
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the class with highest
    # probability.
    output = interpreter.tensor(output_index)
    predicted_label = np.argmax(output()[0])
    y_pred.append(predicted_label)

  # Compare prediction results with ground truth labels to calculate accuracy.
  y_pred = keras.utils.to_categorical(y_pred)
  return accuracy_score(y_true, y_pred)

# Evaluate the accuracy of the converted TFLite model
classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model)
classifier_interpreter.allocate_tensors()
print('Accuracy of TFLite model: %s' %
      evaluate_model(classifier_interpreter, X_test, y_test))
Accuracy of TFLite model: 1.0

Now you can download the TFLite model (pose_classifier.tflite) and the label file (pose_labels.txt) to classify custom poses. See the Android and Python/Raspberry Pi sample apps for an end-to-end example of how to use the TFLite pose classification model; a minimal Python inference sketch is also shown at the end of this notebook.

zip pose_classifier.zip pose_labels.txt pose_classifier.tflite
adding: pose_labels.txt (stored 0%)
  adding: pose_classifier.tflite (deflated 35%)
# Download the zip archive if running on Colab.
try:
  from google.colab import files
  files.download('pose_classifier.zip')
except:
  pass
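As a reference for the sample apps mentioned above, here is a minimal sketch of how the exported pose_classifier.tflite and pose_labels.txt could be used for inference in Python. It assumes a landmark vector prepared exactly like the rows of X_test; the sample input taken from X_test is for illustration only.

# Sketch: classify one pre-processed landmark vector with the exported model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='pose_classifier.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

# Load the human-readable class names written earlier.
with open('pose_labels.txt') as f:
  labels = f.read().splitlines()

# Illustrative input: one row from the test set, keeping the batch dimension.
sample = X_test[0:1].astype('float32')
interpreter.set_tensor(input_index, sample)
interpreter.invoke()
scores = interpreter.get_tensor(output_index)[0]
print('Predicted pose:', labels[np.argmax(scores)])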