GitHub Repository: huggingface/notebooks
Path: blob/main/course/videos/model_api_pt.ipynb

This notebook contains the code samples from the video below, which is part of the Hugging Face course.

#@title
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/AhChOFRegn4?rel=0&amp;controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>')

Install the Transformers and Datasets libraries to run this notebook.

! pip install datasets transformers[sentencepiece]
from transformers import AutoModel

bert_model = AutoModel.from_pretrained("bert-base-cased")
print(type(bert_model))

gpt_model = AutoModel.from_pretrained("gpt2")
print(type(gpt_model))

bart_model = AutoModel.from_pretrained("facebook/bart-base")
print(type(bart_model))
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
<class 'transformers.models.bert.modeling_bert.BertModel'>
<class 'transformers.models.gpt2.modeling_gpt2.GPT2Model'>
Some weights of the model checkpoint at facebook/bart-base were not used when initializing BartModel: ['final_logits_bias']
- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
<class 'transformers.models.bart.modeling_bart.BartModel'>
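The AutoModel class returns the bare model without any task-specific head. As a sketch that goes slightly beyond the video (the class used here is an assumption, not shown in the course code), the same Auto API can instantiate a model with a task head, whose weights are newly initialized when the checkpoint does not contain them:

from transformers import AutoModelForSequenceClassification

# Sketch: load the same checkpoint with a sequence-classification head.
# The head weights are randomly initialized, so a similar warning is expected.
classifier = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
print(type(classifier))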
from transformers import AutoConfig

bert_config = AutoConfig.from_pretrained("bert-base-cased")
print(type(bert_config))

gpt_config = AutoConfig.from_pretrained("gpt2")
print(type(gpt_config))

bart_config = AutoConfig.from_pretrained("facebook/bart-base")
print(type(bart_config))
<class 'transformers.models.bert.configuration_bert.BertConfig'>
<class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'>
<class 'transformers.models.bart.configuration_bart.BartConfig'>
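The configuration object holds the hyperparameters of the architecture. As a quick illustration (not part of the video code), its attributes can be read directly:

# Sketch: configuration attributes expose the architecture hyperparameters.
print(bert_config.hidden_size)        # 768 for bert-base-cased
print(bert_config.num_hidden_layers)  # 12 for bert-base-cased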
from transformers import BertConfig

bert_config = BertConfig.from_pretrained("bert-base-cased")
print(type(bert_config))
<class 'transformers.models.bert.configuration_bert.BertConfig'>
from transformers import GPT2Config

gpt_config = GPT2Config.from_pretrained("gpt2")
print(type(gpt_config))
<class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'>
from transformers import BartConfig

bart_config = BartConfig.from_pretrained("facebook/bart-base")
print(type(bart_config))
<class 'transformers.models.bart.configuration_bart.BartConfig'>
from transformers import BertConfig

bert_config = BertConfig.from_pretrained("bert-base-cased")
print(bert_config)
BertConfig {
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.7.0.dev0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 28996
}
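Because the configuration drives the architecture, its values can be overridden before a model is built from it. A minimal sketch, assuming we want a smaller BERT (the number of layers here is illustrative, not from the video):

from transformers import BertConfig, BertModel

# Sketch: override a hyperparameter while loading the configuration,
# then build a randomly initialized model from it.
small_config = BertConfig.from_pretrained("bert-base-cased", num_hidden_layers=6)
small_model = BertModel(small_config)
print(small_config.num_hidden_layers)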
from transformers import BertConfig, BertModel

bert_config = BertConfig.from_pretrained("bert-base-cased")
bert_model = BertModel(bert_config)
from transformers import BertConfig, BertModel

bert_config = BertConfig.from_pretrained("bert-base-cased")
bert_model = BertModel(bert_config)

# Training code
bert_model.save_pretrained("my_bert_model")
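The folder written by save_pretrained contains both the configuration and the weights, so the model can be reloaded later with from_pretrained. A minimal sketch, reusing the folder name from above:

from transformers import BertModel

# Reload the configuration and weights saved by save_pretrained.
reloaded_model = BertModel.from_pretrained("my_bert_model")
print(type(reloaded_model))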