
tflite_model_maker.searcher.ScoreAH


Product Quantization (PQ) based in-partition scoring configuration.


In ScaNN, PQ is used to compress the database embeddings but not the query embedding; this asymmetric treatment is called Asymmetric Hashing. See https://research.google/pubs/pub41694/
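The asymmetric idea can be illustrated with a minimal NumPy sketch (an illustration only, not ScaNN's implementation): database embeddings are compressed with per-block K-Means, the query stays exact, and scoring uses one lookup table per block. The function names and the fixed choice of 16 centers per block are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_pq(db, dims_per_block, num_centers=16, iters=10):
    """Per-block K-Means (Lloyd's algorithm) over the database embeddings.

    Returns one codebook of `num_centers` centers per block, plus the
    per-point center indices (the compressed database). Assumes the
    dimensionality is a multiple of `dims_per_block`.
    """
    blocks = np.split(db, db.shape[1] // dims_per_block, axis=1)
    codebooks, codes = [], []
    for block in blocks:
        centers = block[rng.choice(len(block), num_centers, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest center, then recompute centers.
            assign = np.argmin(((block[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(num_centers):
                members = block[assign == c]
                if len(members):
                    centers[c] = members.mean(axis=0)
        codebooks.append(centers)
        codes.append(assign)
    return codebooks, np.stack(codes, axis=1)

def asymmetric_scores(query, codebooks, codes):
    """Approximate dot-product scores: exact query vs. PQ-compressed database."""
    q_blocks = np.split(query, len(codebooks))
    # One lookup table per block: the query block scored against every center.
    tables = [cb @ qb for cb, qb in zip(codebooks, q_blocks)]
    # Each database point's score is the sum of its blocks' table entries.
    return sum(t[codes[:, b]] for b, t in enumerate(tables))
```

Because only the database side is quantized, the approximation error comes from a single quantization step rather than two, which is the motivation for the asymmetric scheme.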

Args

dimensions_per_block: Number of dimensions in each PQ block. If the embedding dimensionality is a multiple of this value, there will be number_of_dimensions / dimensions_per_block PQ blocks; otherwise, the last block holds the remainder. For example, a 12-dimensional vector with dimensions_per_block of 2 yields six 2-dimension blocks, while a 13-dimensional vector yields six 2-dimension blocks plus one 1-dimension block.
anisotropic_quantization_threshold: If set, the quantization error parallel to the original vector is penalized differently from the orthogonal error. A generally recommended value is 0.2. For details, see ScaNN's ICML 2020 paper https://arxiv.org/abs/1908.10396 and the Google AI Blog post https://ai.googleblog.com/2020/07/announcing-scann-efficient-vector.html
training_sample_size: Number of database points to sample for training the K-Means that finds the PQ centers. A good starting value is 100k, or the whole dataset if it is smaller than that.
training_iterations: Number of K-Means iterations to run for PQ.
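The blocking rule for dimensions_per_block can be sketched as a small helper (a hypothetical function, not part of the Model Maker API):

```python
def pq_block_sizes(num_dimensions, dimensions_per_block):
    """Return the size of each PQ block implied by `dimensions_per_block`."""
    sizes = [dimensions_per_block] * (num_dimensions // dimensions_per_block)
    remainder = num_dimensions % dimensions_per_block
    if remainder:
        sizes.append(remainder)  # the last block holds the leftover dimensions
    return sizes

print(pq_block_sizes(12, 2))  # [2, 2, 2, 2, 2, 2] - six 2-dimension blocks
print(pq_block_sizes(13, 2))  # [2, 2, 2, 2, 2, 2, 1] - plus one 1-dimension block
```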

Methods

__eq__

Return self==value.

Class Variables

anisotropic_quantization_threshold  nan
training_iterations  10
training_sample_size  100000