Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
BERT Experts from TF-Hub
This colab demonstrates how to:
Load BERT models from TensorFlow Hub that have been trained on different tasks including MNLI, SQuAD, and PubMed
Use a matching preprocessing model to tokenize raw text and convert it to token IDs
Generate the pooled and sequence output from the token input ids using the loaded model
Look at the semantic similarity of the pooled outputs of different sentences
Note: This colab should be run with a GPU runtime
Setup and imports
Sentences
Let's take some sentences from Wikipedia to run through the model.
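For example, a sentence cell might look like the following. These are stand-in, Wikipedia-style sentences; the notebook's actual sentences may differ:

```python
# Stand-in sentences to embed (illustrative only).
sentences = [
    "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
    "BERT is a transformer-based language representation model.",
    "Transformers use self-attention to relate tokens in a sequence.",
    "The Amazon rainforest spans nine countries in South America.",
]
print(len(sentences))
```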
Run the model
We'll load the BERT model from TF-Hub, tokenize our sentences using the matching preprocessing model from TF-Hub, and then feed the tokenized sentences into the model. To keep this colab fast and simple, we recommend running on a GPU.
Go to Runtime → Change runtime type to make sure that GPU is selected
Semantic similarity
Now let's take a look at the pooled_output embeddings of our sentences and compare how similar they are across sentences.
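The comparison boils down to pairwise cosine similarity between the pooled_output vectors. A minimal NumPy sketch on stand-in vectors, so the math is clear; in the notebook you would pass the real `pooled_output` array instead:

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    """Pairwise cosine similarity between the row vectors of `embeddings`."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Stand-in embeddings (3 "sentences", dimension 4) to demonstrate the math.
demo = np.array([[1.0, 0.0, 0.0, 0.0],
                 [1.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
sim = cosine_similarity_matrix(demo)
print(np.round(sim, 2))  # diagonal is 1.0; rows 0 and 1 overlap, row 2 is orthogonal
```

A heatmap of this matrix (e.g. with seaborn) makes it easy to spot which sentence pairs the model considers semantically close.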
Learn more
Find more BERT models on TensorFlow Hub
This notebook demonstrates simple inference with BERT; you can find a more advanced tutorial about fine-tuning BERT at tensorflow.org/official_models/fine_tuning_bert
We used just one GPU chip to run the model; you can learn more about how to load models using tf.distribute at tensorflow.org/tutorials/distribute/save_and_load