tflite_model_maker.audio_classifier.YamNetSpec

Model spec that is good at detecting environmental sounds, using the YAMNet embedding.


Args
model_dir The location to save the model checkpoint files.
strategy An instance of TF distribute strategy. If None, it uses the default strategy (either SingleDeviceStrategy or the current scoped strategy).
yamnet_model_handle Path of the TFHub model used for retraining.
frame_length The number of samples in each audio frame. If an audio file is shorter than frame_length, it is ignored.
frame_step The number of samples between two consecutive audio frames. This value should be smaller than frame_length; otherwise, some samples will be ignored.
keep_yamnet_and_custom_heads Boolean that decides whether the final TFLite model contains both the YAMNet and the custom trained classification heads. When set to False, only the trained custom head is preserved.

Attributes
target_sample_rate
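A minimal usage sketch, assuming the typical Model Maker audio workflow; the folder path, the 0.8 split, and the epoch count are illustrative choices, not documented defaults:

    from tflite_model_maker import audio_classifier

    # Spec whose frames span 3 YAMNet windows, hopping one window at a time.
    spec = audio_classifier.YamNetSpec(
        keep_yamnet_and_custom_heads=True,
        frame_length=3 * audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH,
        frame_step=audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH)

    # 'dataset/train' is a hypothetical folder of WAV files grouped by class.
    data = audio_classifier.DataLoader.from_folder(spec, 'dataset/train')
    train_data, validation_data = data.split(0.8)

    model = audio_classifier.create(train_data, spec, validation_data, epochs=10)
    model.export('export_dir')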

Methods

create_model

create_serving_model

Create a model for serving.

export_tflite

Converts the retrained model to tflite format and saves it.

This method overrides the default CustomModel._export_tflite method, and includes the spectrogram extraction in the model.

The exported model has input shape (1, number of wav samples).

Args
model An instance of the Keras classification model to be exported.
tflite_filepath File path to save the TFLite model.
with_metadata Whether the output TFLite model contains metadata.
export_metadata_json_file Whether to export the metadata in a JSON file. If True, the metadata is exported to the same directory as the TFLite model. Used only if with_metadata is True.
index_to_label A list that maps from index to label class name.
quantization_config Configuration for post-training quantization.
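A sketch of calling the method directly with the arguments listed above; in a typical workflow the trained classifier's export() handles this for you. The model.model attribute (the underlying Keras model) and the index_to_label list used here are assumptions for illustration only:

    # Hypothetical direct call: export the Keras head plus spectrogram
    # extraction as a TFLite file with embedded metadata.
    spec.export_tflite(
        model.model,                          # Keras classification model (assumed attribute)
        'export_dir/model.tflite',            # destination .tflite path
        with_metadata=True,
        export_metadata_json_file=True,
        index_to_label=train_data.index_to_label,  # assumed label list from the DataLoader
        quantization_config=None)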

get_default_quantization_config

Gets the default quantization configuration.
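For example (a sketch, assuming the method takes no arguments and that the trained model's export() accepts a quantization_config keyword), the default configuration can be fetched from the spec and applied when exporting:

    # Post-training quantization using the spec's default configuration.
    config = spec.get_default_quantization_config()
    model.export('export_dir', quantization_config=config)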

preprocess_ds

Returns a preprocessed dataset.

run_classifier

Class Variables
EMBEDDING_SIZE 1024
EXPECTED_WAVEFORM_LENGTH 15600