Semantic Search with Approximate Nearest Neighbors and Text Embeddings


This tutorial illustrates how to generate embeddings from a TensorFlow Hub (TF-Hub) module given input data, and how to build an approximate nearest neighbours (ANN) index using the extracted embeddings. The index can then be used for real-time similarity matching and retrieval.

When dealing with a large corpus of data, it is not efficient to perform exact matching by scanning the whole repository to find the items most similar to a given query in real time. Thus, we use an approximate similarity matching algorithm, which lets us trade a little accuracy in finding exact nearest-neighbour matches for a significant boost in speed.
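
To make the cost of exact matching concrete, here is a minimal brute-force baseline, a hypothetical sketch with random vectors standing in for real embeddings: every query requires a full pass over all N corpus vectors.

import numpy as np

# Hypothetical corpus: one million 64-d embeddings (random stand-ins).
corpus = np.random.rand(1_000_000, 64).astype(np.float32)
query = np.random.rand(64).astype(np.float32)

# Exact search: cosine similarity against every row, then a full sort.
# This O(N) scan per query is exactly what the ANN index lets us avoid.
norms = np.linalg.norm(corpus, axis=1) * np.linalg.norm(query)
scores = corpus @ query / norms
top_5 = np.argsort(-scores)[:5]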

In this tutorial, we show an example of real-time text search over a corpus of news headlines to find the headlines that are most similar to a query. Unlike keyword search, this captures the semantic similarity encoded in the text embedding.

The steps of this tutorial are:

  1. Download the sample data.
  2. Generate embeddings for the data using a TF-Hub module.
  3. Build an ANN index for the embeddings.
  4. Use the index for similarity matching.

We use Apache Beam with TensorFlow Transform (TF-Transform) to generate the embeddings from the TF-Hub module. We also use Spotify's ANNOY library to build the approximate nearest neighbours index. You can find benchmarking of ANN frameworks in this GitHub repository.

This tutorial uses TensorFlow 1.0 and works only with TF1 Hub modules from TF-Hub. See the updated TF2 version of this tutorial.

Setup

Install the required libraries.

pip install -q apache_beam
pip install -q scikit-learn
pip install -q annoy

Import the required libraries.

import os
import sys
import pathlib
import pickle
from collections import namedtuple
from datetime import datetime

import numpy as np
import apache_beam as beam
import annoy
# Deprecated in scikit-learn 0.22 and removed in 0.24 (see the FutureWarning below).
from sklearn.random_projection import gaussian_random_matrix

import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
# TFT needs to be installed afterwards
!pip install -q tensorflow_transform==0.24
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
print('TF version: {}'.format(tf.__version__))
print('TF-Hub version: {}'.format(hub.__version__))
print('TF-Transform version: {}'.format(tft.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
TF version: 2.3.1
TF-Hub version: 0.10.0
TF-Transform version: 0.24.0
Apache Beam version: 2.25.0

1. Download sample data

A Million News Headlines dataset contains news headlines published over a period of fifteen years, sourced from the reputable Australian Broadcasting Corporation (ABC). This news dataset provides a summarised historical record of noteworthy events around the globe from early 2003 to the end of 2017, with a more granular focus on Australia.

Format: tab-separated two-column data: 1) publish date and 2) headline text. We are only interested in the headline text.

wget 'https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true' -O raw.tsv
wc -l raw.tsv
head raw.tsv
--2020-12-03 12:12:21--  https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true
Resolving dataverse.harvard.edu (dataverse.harvard.edu)... 206.191.184.198
Connecting to dataverse.harvard.edu (dataverse.harvard.edu)|206.191.184.198|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 57600231 (55M) [text/tab-separated-values]
Saving to: ‘raw.tsv’

raw.tsv             100%[===================>]  54.93M  15.1MB/s    in 4.3s    

2020-12-03 12:12:27 (12.7 MB/s) - ‘raw.tsv’ saved [57600231/57600231]

1103664 raw.tsv
publish_date    headline_text
20030219    "aba decides against community broadcasting licence"
20030219    "act fire witnesses must be aware of defamation"
20030219    "a g calls for infrastructure protection summit"
20030219    "air nz staff in aust strike for pay rise"
20030219    "air nz strike to affect australian travellers"
20030219    "ambitious olsson wins triple jump"
20030219    "antic delighted with record breaking barca"
20030219    "aussie qualifier stosur wastes four memphis match"
20030219    "aust addresses un security council over iraq"

For simplicity, we keep only the headline text and remove the publish date.

!rm -r corpus
!mkdir corpus

with open('corpus/text.txt', 'w') as out_file:
  with open('raw.tsv', 'r') as in_file:
    for line in in_file:
      headline = line.split('\t')[1].strip().strip('"')
      out_file.write(headline+"\n")
rm: cannot remove 'corpus': No such file or directory

tail corpus/text.txt
severe storms forecast for nye in south east queensland
snake catcher pleads for people not to kill reptiles
south australia prepares for party to welcome new year
strikers cool off the heat with big win in adelaide
stunning images from the sydney to hobart yacht
the ashes smiths warners near miss liven up boxing day test
timelapse: brisbanes new year fireworks
what 2017 meant to the kids of australia
what the papodopoulos meeting may mean for ausus
who is george papadopoulos the former trump campaign aide

Helper function to load a TF-Hub module

def load_module(module_url):
  embed_module = hub.Module(module_url)
  placeholder = tf.placeholder(dtype=tf.string)
  embed = embed_module(placeholder)
  session = tf.Session()
  session.run([tf.global_variables_initializer(), tf.tables_initializer()])
  print('TF-Hub module is loaded.')

  def _embeddings_fn(sentences):
    computed_embeddings = session.run(
        embed, feed_dict={placeholder: sentences})
    return computed_embeddings

  return _embeddings_fn
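
For example, a minimal usage sketch, assuming the Universal Sentence Encoder v2 module used later in this tutorial, which emits 512-dimensional vectors:

g = tf.Graph()
with g.as_default():
  embed_fn = load_module('https://tfhub.dev/google/universal-sentence-encoder/2')

# The returned function maps a list of strings to a [batch, 512] array.
print(embed_fn(['Hello world.']).shape)  # (1, 512)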

2. Generate embeddings for the data

In this tutorial, we use the Universal Sentence Encoder to generate embeddings for the headline data. The sentence embeddings can then easily be used to compute sentence-level similarity. We run the embedding generation process using Apache Beam and TF-Transform.
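
Reusing embed_fn from the sketch above, sentence-level similarity reduces to a cosine between embedding vectors (the sentences are illustrative):

vectors = embed_fn(['air nz staff strike for pay rise',
                    'airline employees walk out over wages'])
# Normalize, then the dot product is the cosine similarity; values closer
# to 1.0 mean the sentences are more semantically similar.
vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
print(float(vectors[0] @ vectors[1]))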

Embedding extraction method

encoder = None

def embed_text(text, module_url, random_projection_matrix):
  # Beam will run this function in different processes that need to
  # import hub and load embed_fn (if not previously loaded)
  global encoder
  if not encoder:
    encoder = hub.Module(module_url)
  embedding = encoder(text)
  if random_projection_matrix is not None:
    # Perform random projection for the embedding
    embedding = tf.matmul(
        embedding, tf.cast(random_projection_matrix, embedding.dtype))
  return embedding

Make the TFT preprocess_fn

def make_preprocess_fn(module_url, random_projection_matrix=None):
  '''Makes a tft preprocess_fn'''

  def _preprocess_fn(input_features):
    '''tft preprocess_fn'''
    text = input_features['text']
    # Generate the embedding for the input text
    embedding = embed_text(text, module_url, random_projection_matrix)

    output_features = {
        'text': text, 
        'embedding': embedding
        }

    return output_features

  return _preprocess_fn

Create dataset metadata

def create_metadata():
  '''Creates metadata for the raw data'''
  from tensorflow_transform.tf_metadata import dataset_metadata
  from tensorflow_transform.tf_metadata import schema_utils
  feature_spec = {'text': tf.FixedLenFeature([], dtype=tf.string)}
  schema = schema_utils.schema_from_feature_spec(feature_spec)
  metadata = dataset_metadata.DatasetMetadata(schema)
  return metadata

Beam pipeline

def run_hub2emb(args):
  '''Runs the embedding generation pipeline'''

  options = beam.options.pipeline_options.PipelineOptions(**args)
  args = namedtuple("options", args.keys())(*args.values())

  raw_metadata = create_metadata()
  converter = tft.coders.CsvCoder(
      column_names=['text'], schema=raw_metadata.schema)

  with beam.Pipeline(args.runner, options=options) as pipeline:
    with tft_beam.Context(args.temporary_dir):
      # Read the sentences from the input file
      sentences = ( 
          pipeline
          | 'Read sentences from files' >> beam.io.ReadFromText(
              file_pattern=args.data_dir)
          | 'Convert to dictionary' >> beam.Map(converter.decode)
      )

      sentences_dataset = (sentences, raw_metadata)
      preprocess_fn = make_preprocess_fn(args.module_url, args.random_projection_matrix)
      # Generate the embeddings for the sentence using the TF-Hub module
      embeddings_dataset, _ = (
          sentences_dataset
          | 'Extract embeddings' >> tft_beam.AnalyzeAndTransformDataset(preprocess_fn)
      )

      embeddings, transformed_metadata = embeddings_dataset
      # Write the embeddings to TFRecords files
      embeddings | 'Write embeddings to TFRecords' >> beam.io.tfrecordio.WriteToTFRecord(
          file_path_prefix='{}/emb'.format(args.output_dir),
          file_name_suffix='.tfrecords',
          coder=tft.coders.ExampleProtoCoder(transformed_metadata.schema))

Generate a random projection weight matrix

Random projection is a simple yet powerful technique used to reduce the dimensionality of a set of points that lie in Euclidean space. For a theoretical background, see the Johnson-Lindenstrauss lemma.

Reducing the dimensionality of the embeddings with random projection means less time is needed to build and query the ANN index.

In this tutorial we use Gaussian random projection from the Scikit-learn library.

def generate_random_projection_weights(original_dim, projected_dim):
  random_projection_matrix = None
  if projected_dim and original_dim > projected_dim:
    random_projection_matrix = gaussian_random_matrix(
        n_components=projected_dim, n_features=original_dim).T
    print("A Gaussian random weight matrix was creates with shape of {}".format(random_projection_matrix.shape))
    print('Storing random projection matrix to disk...')
    with open('random_projection_matrix', 'wb') as handle:
      pickle.dump(random_projection_matrix, 
                  handle, protocol=pickle.HIGHEST_PROTOCOL)

  return random_projection_matrix
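
As a quick sanity check of the Johnson-Lindenstrauss intuition, here is a small sketch with random stand-in vectors and a matrix drawn the same way as above (Gaussian entries with standard deviation 1/sqrt(projected_dim)): pairwise distances are approximately preserved after projecting 512-d points down to 64-d.

points = np.random.rand(100, 512)
# Same construction as gaussian_random_matrix(64, 512).T: N(0, 1/64) entries.
projection = np.random.normal(scale=1.0 / np.sqrt(64), size=(512, 64))
projected = points @ projection

def pairwise_dists(x):
  return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

# Ratios of projected to original distances concentrate around 1.0,
# so nearest-neighbour relations survive the dimensionality reduction.
iu = np.triu_indices(len(points), k=1)
ratios = pairwise_dists(projected)[iu] / pairwise_dists(points)[iu]
print('mean ratio: {:.3f}, std: {:.3f}'.format(ratios.mean(), ratios.std()))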

Set parameters

If you want to build an index using the original embedding space without random projection, set the projected_dim parameter to None. Note that this will slow down the indexing step for high-dimensional embeddings. The parameter cell is shown below.
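
The notebook defines the pipeline parameters in a form cell; the values below are recovered from the pipeline arguments printed further down.

module_url = 'https://tfhub.dev/google/universal-sentence-encoder/2'
projected_dim = 64  # Set to None to index the original 512-d embedding space.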

Run the pipeline

import tempfile

output_dir = pathlib.Path(tempfile.mkdtemp())
temporary_dir = pathlib.Path(tempfile.mkdtemp())

g = tf.Graph()
with g.as_default():
  original_dim = load_module(module_url)(['']).shape[1]
  random_projection_matrix = None

  if projected_dim:
    random_projection_matrix = generate_random_projection_weights(
        original_dim, projected_dim)

args = {
    'job_name': 'hub2emb-{}'.format(datetime.utcnow().strftime('%y%m%d-%H%M%S')),
    'runner': 'DirectRunner',
    'batch_size': 1024,
    'data_dir': 'corpus/*.txt',
    'output_dir': output_dir,
    'temporary_dir': temporary_dir,
    'module_url': module_url,
    'random_projection_matrix': random_projection_matrix,
}

print("Pipeline args are set.")
args
INFO:tensorflow:Saver not created because there are no variables in the graph to restore

TF-Hub module is loaded.
A Gaussian random weight matrix was created with shape of (512, 64)
Storing random projection matrix to disk...
Pipeline args are set.

/home/kbuilder/.local/lib/python3.6/site-packages/sklearn/utils/deprecation.py:86: FutureWarning: Function gaussian_random_matrix is deprecated; gaussian_random_matrix is deprecated in 0.22 and will be removed in version 0.24.
  warnings.warn(msg, category=FutureWarning)

{'job_name': 'hub2emb-201203-121305',
 'runner': 'DirectRunner',
 'batch_size': 1024,
 'data_dir': 'corpus/*.txt',
 'output_dir': PosixPath('/tmp/tmp3_9agsp3'),
 'temporary_dir': PosixPath('/tmp/tmp75ty7xfk'),
 'module_url': 'https://tfhub.dev/google/universal-sentence-encoder/2',
 'random_projection_matrix': array([[ 0.21470759, -0.05258816, -0.0972597 , ...,  0.04385087,
         -0.14274348,  0.11220471],
        [ 0.03580492, -0.16426251, -0.14089037, ...,  0.0101535 ,
         -0.22515438, -0.21514454],
        [-0.15639698,  0.01808027, -0.13684782, ...,  0.11841098,
         -0.04303762,  0.00745478],
        ...,
        [-0.18584684,  0.14040793,  0.18339619, ...,  0.13763638,
         -0.13028201, -0.16183348],
        [ 0.20997704, -0.2241034 , -0.12709368, ..., -0.03352462,
          0.11281993, -0.16342795],
        [-0.23761595,  0.00275779, -0.1585855 , ..., -0.08995121,
          0.1475089 , -0.26595401]])}
!rm -r {output_dir}
!rm -r {temporary_dir}

print("Running pipeline...")
%time run_hub2emb(args)
print("Pipeline is done.")
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.

Running pipeline...

Warning:tensorflow:Tensorflow version (2.3.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.

Warning:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:No assets to write.

INFO:tensorflow:SavedModel written to: /tmp/tmp75ty7xfk/tftransform_tmp/0839c04b1a8d4dd0b3d2832fbe9f5904/saved_model.pb

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:218: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.

WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.

CPU times: user 2min 50s, sys: 6.6 s, total: 2min 57s
Wall time: 2min 40s
Pipeline is done.

ls {output_dir}
emb-00000-of-00001.tfrecords

Read some of the generated embeddings...

import itertools

embed_file = os.path.join(output_dir, 'emb-00000-of-00001.tfrecords')
sample = 5
record_iterator =  tf.io.tf_record_iterator(path=embed_file)
for string_record in itertools.islice(record_iterator, sample):
  example = tf.train.Example()
  example.ParseFromString(string_record)
  text = example.features.feature['text'].bytes_list.value
  embedding = np.array(example.features.feature['embedding'].float_list.value)
  print("Embedding dimensions: {}".format(embedding.shape[0]))
  print("{}: {}".format(text, embedding[:10]))
WARNING:tensorflow:From <ipython-input-1-3d6f4d54c65b>:5: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`

Embedding dimensions: 64
[b'headline_text']: [-0.04724706  0.27573067 -0.02340046  0.12461437  0.04809146  0.00246292
  0.15367804 -0.17551982 -0.02778188 -0.185176  ]
Embedding dimensions: 64
[b'aba decides against community broadcasting licence']: [-0.0466345   0.00110549 -0.08875479  0.05938878  0.01933165 -0.05704207
  0.18913773 -0.12833942  0.1816328   0.06035798]
Embedding dimensions: 64
[b'act fire witnesses must be aware of defamation']: [-0.31556517 -0.07618773 -0.14239314 -0.14500496  0.04438541 -0.00983415
  0.01349827 -0.15908629 -0.12947078  0.31871504]
Embedding dimensions: 64
[b'a g calls for infrastructure protection summit']: [ 0.15422247 -0.09829048 -0.16913125 -0.17129296  0.01204466 -0.16008876
 -0.00540507 -0.20552996  0.11388192 -0.03878446]
Embedding dimensions: 64
[b'air nz staff in aust strike for pay rise']: [ 0.13039729 -0.06921542 -0.08830801 -0.09704516 -0.05936369 -0.13036506
 -0.16644046 -0.06228216  0.00742535 -0.13592219]

3. Build the ANN index for the embeddings

ANNOY (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory. It is built and used by Spotify for music recommendations.
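
The core ANNOY API is small. A minimal sketch with toy 3-d vectors and the angular metric used below:

toy_index = annoy.AnnoyIndex(3, metric='angular')
toy_index.add_item(0, [1.0, 0.0, 0.0])
toy_index.add_item(1, [0.9, 0.1, 0.0])
toy_index.add_item(2, [0.0, 0.0, 1.0])
toy_index.build(n_trees=10)  # More trees give better recall at query time.
# Returns the ids of the 2 items nearest to the query vector.
print(toy_index.get_nns_by_vector([1.0, 0.05, 0.0], 2))  # [0, 1]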

def build_index(embedding_files_pattern, index_filename, vector_length, 
    metric='angular', num_trees=100):
  '''Builds an ANNOY index'''

  annoy_index = annoy.AnnoyIndex(vector_length, metric=metric)
  # Mapping between the item and its identifier in the index
  mapping = {}

  embed_files = tf.gfile.Glob(embedding_files_pattern)
  print('Found {} embedding file(s).'.format(len(embed_files)))

  item_counter = 0
  for f, embed_file in enumerate(embed_files):
    print('Loading embeddings in file {} of {}...'.format(
      f+1, len(embed_files)))
    record_iterator = tf.io.tf_record_iterator(
      path=embed_file)

    for string_record in record_iterator:
      example = tf.train.Example()
      example.ParseFromString(string_record)
      text = example.features.feature['text'].bytes_list.value[0].decode("utf-8")
      mapping[item_counter] = text
      embedding = np.array(
        example.features.feature['embedding'].float_list.value)
      annoy_index.add_item(item_counter, embedding)
      item_counter += 1
      if item_counter % 100000 == 0:
        print('{} items loaded to the index'.format(item_counter))

  print('A total of {} items added to the index'.format(item_counter))

  print('Building the index with {} trees...'.format(num_trees))
  annoy_index.build(n_trees=num_trees)
  print('Index is successfully built.')

  print('Saving index to disk...')
  annoy_index.save(index_filename)
  print('Index is saved to disk.')
  print("Index file size: {} GB".format(
    round(os.path.getsize(index_filename) / float(1024 ** 3), 2)))
  annoy_index.unload()

  print('Saving mapping to disk...')
  with open(index_filename + '.mapping', 'wb') as handle:
    pickle.dump(mapping, handle, protocol=pickle.HIGHEST_PROTOCOL)
  print('Mapping is saved to disk.')
  print("Mapping file size: {} MB".format(
    round(os.path.getsize(index_filename + '.mapping') / float(1024 ** 2), 2)))
embedding_files = "{}/emb-*.tfrecords".format(output_dir)
embedding_dimension = projected_dim
index_filename = "index"

!rm {index_filename}
!rm {index_filename}.mapping

%time build_index(embedding_files, index_filename, embedding_dimension)
rm: cannot remove 'index': No such file or directory
rm: cannot remove 'index.mapping': No such file or directory
Found 1 embedding file(s).
Loading embeddings in file 1 of 1...
100000 items loaded to the index
200000 items loaded to the index
300000 items loaded to the index
400000 items loaded to the index
500000 items loaded to the index
600000 items loaded to the index
700000 items loaded to the index
800000 items loaded to the index
900000 items loaded to the index
1000000 items loaded to the index
1100000 items loaded to the index
A total of 1103664 items added to the index
Building the index with 100 trees...
Index is successfully built.
Saving index to disk...
Index is saved to disk.
Index file size: 1.66 GB
Saving mapping to disk...
Mapping is saved to disk.
Mapping file size: 50.61 MB
CPU times: user 6min 10s, sys: 3.7 s, total: 6min 14s
Wall time: 1min 36s

ls
corpus  index.mapping         raw.tsv
index   random_projection_matrix  semantic_approximate_nearest_neighbors.ipynb

4. Use the index for similarity matching

Now we can use the ANN index to find news headlines that are semantically close to an input query.

Load the index and the mapping files

index = annoy.AnnoyIndex(embedding_dimension, metric='angular')
index.load(index_filename, prefault=True)
print('Annoy index is loaded.')
with open(index_filename + '.mapping', 'rb') as handle:
  mapping = pickle.load(handle)
print('Mapping file is loaded.')
Annoy index is loaded.
Mapping file is loaded.

Similarity matching method

def find_similar_items(embedding, num_matches=5):
  '''Finds similar items to a given embedding in the ANN index'''
  ids = index.get_nns_by_vector(
      embedding, num_matches, search_k=-1, include_distances=False)
  items = [mapping[i] for i in ids]
  return items

Extract an embedding from a given query

# Load the TF-Hub module
print("Loading the TF-Hub module...")
g = tf.Graph()
with g.as_default():
  embed_fn = load_module(module_url)
print("TF-Hub module is loaded.")

random_projection_matrix = None
if os.path.exists('random_projection_matrix'):
  print("Loading random projection matrix...")
  with open('random_projection_matrix', 'rb') as handle:
    random_projection_matrix = pickle.load(handle)
  print('random projection matrix is loaded.')

def extract_embeddings(query):
  '''Generates the embedding for the query'''
  query_embedding =  embed_fn([query])[0]
  if random_projection_matrix is not None:
    query_embedding = query_embedding.dot(random_projection_matrix)
  return query_embedding
Loading the TF-Hub module...
INFO:tensorflow:Saver not created because there are no variables in the graph to restore

TF-Hub module is loaded.
TF-Hub module is loaded.
Loading random projection matrix...
random projection matrix is loaded.

extract_embeddings("Hello Machine Learning!")[:10]
array([-0.06277051,  0.14012653, -0.15893948,  0.15775941, -0.1226441 ,
       -0.11202384,  0.07953477, -0.08003543,  0.03763271,  0.0302215 ])

Enter a query to find the most similar items
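
The query itself comes from a form cell; the following is a reconstruction consistent with the output below (the query string is inferred from the top result):

query = "confronting global challenges"

print("Generating embedding for the query...")
%time query_embedding = extract_embeddings(query)

print("")
print("Finding relevant items in the index...")
%time items = find_similar_items(query_embedding, 10)

print("")
print("Results:")
print("=========")
for item in items:
  print(item)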

Generating embedding for the query...
CPU times: user 32.9 ms, sys: 19.8 ms, total: 52.7 ms
Wall time: 6.96 ms

Finding relevant items in the index...
CPU times: user 7.19 ms, sys: 370 µs, total: 7.56 ms
Wall time: 953 µs

Results:
=========
confronting global challenges
downer challenges un to follow aust example
fairfax loses oshane challenge
jericho social media and the border farce
territory on search for raw comedy talent
interview gred jericho
interview: josh frydenberg; environment and energy
interview: josh frydenberg; environment and energy
world science festival music and climate change
interview with aussie bobsledder

Want to learn more?

You can learn more about TensorFlow at tensorflow.org and see the TF-Hub API documentation at tensorflow.org/hub. Find available TensorFlow Hub modules at tfhub.dev, including more text embedding modules and image feature vector modules.

Also check out the Machine Learning Crash Course, Google's fast-paced, practical introduction to machine learning.