wandb: Currently logged in as: ayamerushia. Use `wandb login --relogin` to force relogin
env: WANDB_PROJECT=indo-roberta-small-finetune-sentiment-analysis
env: WANDB_WATCH=true
env: WANDB_LOG_MODEL=true
If you’re opening this notebook locally, make sure your environment has the latest version of those libraries installed.
To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.
First you have to store your authentication token from the Hugging Face website (sign up here if you haven’t already!), then execute the following cell and input your token:
from huggingface_hub import notebook_login

notebook_login()
Then you need to install Git-LFS by running the following cell:
!apt install git-lfs
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
git-lfs is already the newest version (3.0.2-1ubuntu0.2).
0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
Make sure your version of Transformers is at least 4.11.0 since the functionality was introduced in that version:
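You can verify this with a quick check, for example:

import transformers

print(transformers.__version__)  # should be 4.11.0 or later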
You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs here.
Fine-tuning a model on a text classification task
In this notebook, we will see how to fine-tune one of the 🤗 Transformers models on a text classification task of the GLUE Benchmark.
The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences which are:
CoLA (Corpus of Linguistic Acceptability) Determine if a sentence is grammatically correct or not.
MNLI (Multi-Genre Natural Language Inference) Determine if a sentence entails, contradicts or is unrelated to a given hypothesis. (This dataset has two versions, one with the validation and test set coming from the same distribution, another called mismatched where the validation and test use out-of-domain data.)
MRPC (Microsoft Research Paraphrase Corpus) Determine if two sentences are paraphrases from one another or not.
QNLI (Question-answering Natural Language Inference) Determine if the answer to a question is in the second sentence or not. (This dataset is built from the SQuAD dataset.)
QQP (Quora Question Pairs2) Determine if two questions are semantically equivalent or not.
RTE (Recognizing Textual Entailment) Determine if a sentence entails a given hypothesis or not.
SST-2 (Stanford Sentiment Treebank) Determine if the sentence has a positive or negative sentiment.
STS-B (Semantic Textual Similarity Benchmark) Determine the similarity of two sentences with a score from 1 to 5.
WNLI (Winograd Natural Language Inference) Determine if a sentence with an anonymous pronoun and a sentence with this pronoun replaced are entailed or not. (This dataset is built from the Winograd Schema Challenge dataset.)
We will see how to easily load the dataset for each one of those tasks and use the Trainer API to fine-tune a model on it. Each task is named by its acronym, with mnli-mm standing for the mismatched version of MNLI (so same training set as mnli but different validation and test sets):
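For reference, the full list of task codes can be written out as follows (the variable name is just a convention, not required by the libraries):

GLUE_TASKS = ["cola", "mnli", "mnli-mm", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"]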
This notebook is built to run on any of the tasks in the list above, with any model checkpoint from the Model Hub as long as that model has a version with a classification head. Depending on your model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those three parameters, then the rest of the notebook should run smoothly:
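For the run shown in this notebook (Indonesian sentiment analysis on the indonlu SmSA dataset with the w11wo/indo-roberta-small checkpoint seen in the logs below), the three parameters might look like this; the values are illustrative, not a record of the exact run:

task = "smsa"                                  # the indonlu SmSA sentiment task used in this run
model_checkpoint = "w11wo/indo-roberta-small"  # any checkpoint with a classification head works
batch_size = 32                                # illustrative; lower it if you run out of GPU memory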
We will use the 🤗 Datasets library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions load_dataset and load_metric.
from datasets import load_dataset, load_metric
Apart from mnli-mm being a special code, we can directly pass our task name to those functions. load_dataset will cache the dataset to avoid downloading it again the next time you run this cell.
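The loading cell itself is not reproduced in this excerpt, but judging from the warnings below it loads the indonlu dataset and the GLUE metric; a minimal sketch could be:

dataset = load_dataset("indonlu", task)  # "smsa" config, per the indonlu warning below
metric = load_metric("glue", "mnli")     # the metric line shown in the deprecation warning below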
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning:
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/datasets/load.py:1461: FutureWarning: The repository for indonlu contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/indonlu
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
<ipython-input-9-777f794771cd>:6: FutureWarning: load_metric is deprecated and will be removed in the next major version of datasets. Use 'evaluate.load' instead, from the new library 🤗 Evaluate: https://huggingface.co/docs/evaluate
metric = load_metric('glue', "mnli")
/usr/local/lib/python3.10/dist-packages/datasets/load.py:756: FutureWarning: The repository for glue contains custom code which must be executed to correctly load the metric. You can inspect the repository content at https://raw.githubusercontent.com/huggingface/datasets/2.18.0/metrics/glue/glue.py
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this metric from the next major release of `datasets`.
warnings.warn(
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

num_labels = 3
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)
Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at w11wo/indo-roberta-small and are newly initialized: ['classifier.dense.bias', 'classifier.dense.weight', 'classifier.out_proj.bias', 'classifier.out_proj.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the batch_size defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay. Since the best model might not be the one at the end of training, we ask the Trainer to load the best model it saved (according to metric_name) at the end of training.
The last argument sets up everything so we can push the model to the Hub regularly during training. Remove it if you didn’t follow the installation steps at the top of the notebook. If you want to save your model locally under a name that is different from the name of the repository it will be pushed to, or if you want to push your model under an organization and not your own namespace, use the hub_model_id argument to set the repo name (it needs to be the full name, including your namespace: for instance "sgugger/bert-finetuned-mrpc" or "huggingface/bert-finetuned-mrpc").
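Putting these pieces together, the training arguments for this run could look roughly like the sketch below. The output directory matches the checkpoint directory in the training logs further down; the learning rate, number of epochs and weight decay are illustrative values, not the exact ones used:

metric_name = "accuracy"
model_name = model_checkpoint.split("/")[-1]

args = TrainingArguments(
    f"{model_name}-finetuned-indonlu-smsa",  # matches the checkpoint directory in the logs below
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,   # illustrative
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=5,   # illustrative
    weight_decay=0.01,    # illustrative
    load_best_model_at_end=True,
    metric_for_best_model=metric_name,
    push_to_hub=True,     # remove this if you skipped the Hub setup above
)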
The last thing to define for our Trainer is how to compute the metrics from the predictions. We need to define a function for this, which will just use the metric we loaded earlier; the only preprocessing we have to do is to take the argmax of our predicted logits (or just squeeze the last axis in the case of STS-B):
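A minimal sketch of such a function, using the metric object loaded earlier, could be:

import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # For classification tasks, take the class with the highest logit.
    # (For STS-B, a regression task, you would squeeze the last axis instead.)
    predictions = np.argmax(predictions, axis=1)
    return metric.compute(predictions=predictions, references=labels)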
You might wonder why we pass along the tokenizer when we already preprocessed our data. This is because we will use it one last time to make all the samples we gather the same length by applying padding, which requires knowing the model’s preferences regarding padding (to the left or right? with which token?). The tokenizer has a pad method that will do all of this right for us, and the Trainer will use it. You can customize this part by defining and passing your own data_collator, which will receive samples like the dictionaries seen above and will need to return a dictionary of tensors.
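The Trainer instantiation is not shown in this excerpt; assuming the tokenized dataset and tokenizer from the preprocessing steps are named encoded_dataset and tokenizer (hypothetical names here), it could look like this:

trainer = Trainer(
    model,
    args,
    train_dataset=encoded_dataset["train"],      # assumed name from preprocessing (not shown)
    eval_dataset=encoded_dataset["validation"],  # assumed name from preprocessing (not shown)
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)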
We can now fine-tune our model by just calling the train method:
import numpy as np

trainer.train()
wandb: Currently logged in as: ayamerushia. Use `wandb login --relogin` to force relogin
Tracking run with wandb version 0.16.4
Run data is saved locally in /content/wandb/run-20240307_033625-cauwc60e
Checkpoint destination directory indo-roberta-small-finetuned-indonlu-smsa/checkpoint-344 already exists and is non-empty. Saving will proceed but saved results may be invalid.
Checkpoint destination directory indo-roberta-small-finetuned-indonlu-smsa/checkpoint-688 already exists and is non-empty. Saving will proceed but saved results may be invalid.
Checkpoint destination directory indo-roberta-small-finetuned-indonlu-smsa/checkpoint-1032 already exists and is non-empty. Saving will proceed but saved results may be invalid.
Checkpoint destination directory indo-roberta-small-finetuned-indonlu-smsa/checkpoint-1376 already exists and is non-empty. Saving will proceed but saved results may be invalid.
Checkpoint destination directory indo-roberta-small-finetuned-indonlu-smsa/checkpoint-1720 already exists and is non-empty. Saving will proceed but saved results may be invalid.