Product Quantization (PQ)-based in-partition scoring configuration.


In ScaNN, PQ compresses the database embeddings but not the query embedding; we call this scheme Asymmetric Hashing (AH).

dimensions_per_block How many dimensions go into each PQ block. If the embedding dimensionality is a multiple of this value, there will be number_of_dimensions / dimensions_per_block PQ blocks; otherwise, the final block holds the remainder. For example, a 12-dimensional vector with dimensions_per_block = 2 yields six 2-dimensional blocks, while a 13-dimensional vector yields six 2-dimensional blocks plus one 1-dimensional block.
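The block-splitting rule above can be sketched as a small helper (illustrative only; the function name and return format are our own, not ScaNN's API):

```python
def pq_block_layout(num_dimensions, dims_per_block):
    """Return the list of PQ block sizes for a vector of num_dimensions.

    Full blocks have dims_per_block dimensions; if num_dimensions is not
    a multiple of dims_per_block, one final smaller block holds the rest.
    """
    full_blocks, remainder = divmod(num_dimensions, dims_per_block)
    sizes = [dims_per_block] * full_blocks
    if remainder:
        sizes.append(remainder)
    return sizes


# 12 dims, 2 per block -> six 2-dim blocks
print(pq_block_layout(12, 2))  # [2, 2, 2, 2, 2, 2]
# 13 dims, 2 per block -> six 2-dim blocks plus one 1-dim block
print(pq_block_layout(13, 2))  # [2, 2, 2, 2, 2, 2, 1]
```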
anisotropic_quantization_threshold If this value is set, the component of the quantization error parallel to the original vector is penalized more heavily than the orthogonal component. A generally recommended value for this parameter is 0.2. For more details, see ScaNN's 2020 ICML paper ("Accelerating Large-Scale Inference with Anisotropic Vector Quantization") and the accompanying Google AI Blog post.
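The parallel-versus-orthogonal decomposition can be illustrated with a simplified loss. Note this is a sketch of the idea, not ScaNN's implementation: in the paper the parallel weight is derived from the threshold and vector norms, whereas here an explicit weight `eta` is passed directly as a stand-in.

```python
def anisotropic_loss(x, x_quantized, eta):
    """Simplified anisotropic quantization loss.

    Decomposes the residual r = x - x_quantized into the component
    parallel to x and the component orthogonal to x, then weights the
    squared parallel error by eta (eta > 1 penalizes it more heavily).
    """
    r = [a - b for a, b in zip(x, x_quantized)]
    norm_sq = sum(a * a for a in x)
    # Projection coefficient of r onto x.
    coef = sum(a * b for a, b in zip(r, x)) / norm_sq
    r_parallel = [coef * a for a in x]
    r_orthogonal = [a - b for a, b in zip(r, r_parallel)]
    parallel_sq = sum(a * a for a in r_parallel)
    orthogonal_sq = sum(a * a for a in r_orthogonal)
    return eta * parallel_sq + orthogonal_sq


# Residual [0.5, -0.5] splits into [0.5, 0] (parallel to x) and
# [0, -0.5] (orthogonal); with eta = 2 the loss is 2*0.25 + 0.25.
print(anisotropic_loss([1.0, 0.0], [0.5, 0.5], 2.0))  # 0.75
```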
training_sample_size How many database points to sample when training the K-Means for the PQ centers. A good starting value is 100,000, or the whole dataset if it is smaller than that.
training_iterations How many K-Means iterations to run when training the PQ centers.
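How training_sample_size and training_iterations interact can be sketched with a plain Lloyd's K-Means on a single PQ block. This is an illustrative toy, not ScaNN's training code; all names are our own:

```python
import random


def train_block_centers(block_vectors, num_centers, training_sample_size,
                        training_iterations, seed=0):
    """Toy K-Means for one PQ block.

    Samples at most training_sample_size points, then runs
    training_iterations rounds of Lloyd's algorithm.
    """
    rng = random.Random(seed)
    sample = block_vectors
    if len(block_vectors) > training_sample_size:
        sample = rng.sample(block_vectors, training_sample_size)
    # Initialize centers with distinct sampled points.
    centers = rng.sample(sample, num_centers)
    for _ in range(training_iterations):
        # Assignment step: each point goes to its nearest center (L2).
        buckets = [[] for _ in centers]
        for v in sample:
            dists = [sum((a - b) ** 2 for a, b in zip(v, c))
                     for c in centers]
            buckets[dists.index(min(dists))].append(v)
        # Update step: move each center to the mean of its bucket.
        for i, bucket in enumerate(buckets):
            if bucket:
                dim = len(bucket[0])
                centers[i] = [sum(v[d] for v in bucket) / len(bucket)
                              for d in range(dim)]
    return centers
```

With two well-separated clusters, a few iterations suffice for the centers to land on the cluster means, which is why a modest default like 10 iterations is often enough in practice.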



Defaults:
anisotropic_quantization_threshold nan (unset; no anisotropic penalty)
training_iterations 10
training_sample_size 100000
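For context, these parameters surface in ScaNN's Python builder through the score_ah step. The snippet below is a sketch based on ScaNN's public README-style builder API; `normalized_dataset` is a placeholder for your database embedding matrix, and the tree/reorder values are illustrative, not recommendations:

```python
import scann  # assumes the scann pip package is installed

searcher = (
    scann.scann_ops_pybind.builder(normalized_dataset, 10, "dot_product")
    .tree(num_leaves=2000, num_leaves_to_search=100,
          training_sample_size=250000)
    # 2 dimensions per PQ block, with the recommended anisotropic
    # quantization threshold of 0.2.
    .score_ah(2, anisotropic_quantization_threshold=0.2)
    .reorder(100)
    .build()
)
```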