
New canine model card #38631


Merged
merged 59 commits into from Jun 10, 2025

Changes from all commits (59 commits)
d287bf4
Updated BERTweet model card.
RogerSinghChugh May 6, 2025
419007e
Merge branch 'main' into new_bertweet_model_card
RogerSinghChugh May 6, 2025
3b8ceea
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
c613d24
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
b585ebc
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
b5e9d69
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
1ecb15f
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
d1d8f4f
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
67c8954
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
c92b390
updated toctree (EN).
RogerSinghChugh May 24, 2025
d513054
Updated BERTweet model card.
RogerSinghChugh May 6, 2025
3a49a68
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
661bf0c
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
42e9fcf
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
448881a
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
3891562
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
11c05e9
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
61e6218
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
562c3c1
updated toctree (EN).
RogerSinghChugh May 24, 2025
53f72ff
Merge branch 'new_bertweet_model_card' of https://github.com/RogerSin…
RogerSinghChugh May 27, 2025
aafd90b
Updated BERTweet model card.
RogerSinghChugh May 6, 2025
47052bd
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
3daf357
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
241677e
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
ffdfe84
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
3068354
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
f41e66c
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
d850b53
Update docs/source/en/model_doc/bertweet.md
RogerSinghChugh May 22, 2025
3538e9b
updated toctree (EN).
RogerSinghChugh May 24, 2025
aad09a8
Merge branch 'new_bertweet_model_card' of https://github.com/RogerSin…
RogerSinghChugh May 27, 2025
678d8ad
Commit for new_gpt_model_card.
RogerSinghChugh May 31, 2025
e9e5c79
Merge branch 'main' into new_gpt_neo_model_card
RogerSinghChugh May 31, 2025
dca55fc
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 2, 2025
41e7335
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 2, 2025
99ab141
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 2, 2025
91a4e6d
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 2, 2025
0e13ceb
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 2, 2025
ee2c27b
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 2, 2025
5077e11
Merge branch 'main' into new_gpt_neo_model_card
RogerSinghChugh Jun 2, 2025
9b3ff87
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 3, 2025
5556f85
Update docs/source/en/model_doc/gpt_neo.md
RogerSinghChugh Jun 3, 2025
19a002c
Merge branch 'main' into new_gpt_neo_model_card
RogerSinghChugh Jun 3, 2025
bfca14b
Merge branch 'main' into new_gpt_neo_model_card
RogerSinghChugh Jun 4, 2025
a18f55b
Merge branch 'main' into new_gpt_neo_model_card
RogerSinghChugh Jun 4, 2025
ccf8696
Merge branch 'main' into new_gpt_neo_model_card
RogerSinghChugh Jun 4, 2025
4212c48
commit for new canine model card.
RogerSinghChugh Jun 6, 2025
36ca186
Merge branch 'huggingface:main' into new_canine_model_card
RogerSinghChugh Jun 6, 2025
937a822
Merge branch 'main' into new_canine_model_card
RogerSinghChugh Jun 6, 2025
eec352b
Merge branch 'main' into new_canine_model_card
RogerSinghChugh Jun 9, 2025
a8bb78a
Update docs/source/en/model_doc/canine.md
RogerSinghChugh Jun 9, 2025
d495ca3
Update docs/source/en/model_doc/canine.md
RogerSinghChugh Jun 9, 2025
a50bf33
Update docs/source/en/model_doc/canine.md
RogerSinghChugh Jun 9, 2025
15161a3
Update docs/source/en/model_doc/canine.md
RogerSinghChugh Jun 9, 2025
4b13a26
Update docs/source/en/model_doc/canine.md
RogerSinghChugh Jun 9, 2025
51f7446
Merge branch 'huggingface:main' into new_canine_model_card
RogerSinghChugh Jun 10, 2025
98cfd75
Update docs/source/en/model_doc/canine.md
RogerSinghChugh Jun 10, 2025
9cdb64e
implemented suggestion by @stevhliu.
RogerSinghChugh Jun 10, 2025
e193676
Merge branch 'main' into new_canine_model_card
RogerSinghChugh Jun 10, 2025
c38d29b
Update canine.md
stevhliu Jun 10, 2025
121 changes: 50 additions & 71 deletions docs/source/en/model_doc/canine.md
@@ -14,99 +14,78 @@ rendered properly in your Markdown viewer.

-->

# CANINE

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
<div style="float: right;">
<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>
</div>

## Overview

The CANINE model was proposed in [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's
among the first papers that trains a Transformer without using an explicit tokenization step (such as Byte Pair
Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character-level.
Training at a character-level inevitably comes with a longer sequence length, which CANINE solves with an efficient
downsampling strategy, before applying a deep Transformer encoder.

The abstract from the paper is the following:
# CANINE

*Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models
still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword
lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all
languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE,
a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a
pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias.
To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input
sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by
2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.*
[CANINE](https://huggingface.co/papers/2103.06874) is a tokenization-free Transformer. It skips the usual step of splitting text into subwords or wordpieces and processes text character by character. That means it works directly with raw Unicode, making it especially useful for languages with complex or inconsistent tokenization rules and even noisy inputs like typos. Because character-level input produces much longer sequences, CANINE downsamples the input early on so the deep Transformer stack doesn't have to process every character individually, which keeps it fast and efficient.

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/language/tree/master/language/canine).
You can find all the original CANINE checkpoints under the [Google](https://huggingface.co/google?search_models=canine) organization.

## Usage tips
> [!TIP]
> Click on the CANINE models in the right sidebar for more examples of how to apply CANINE to different language tasks.

- CANINE uses no less than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single
layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize
the character embeddings, using local attention. Next, after downsampling, a "deep" encoder is applied. Finally,
after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and
downsampling can be found in the paper.
- CANINE uses a max sequence length of 2048 characters by default. One can use [`CanineTokenizer`]
to prepare text for the model.
- Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token
(which has a predefined Unicode code point). For token classification tasks however, the downsampled sequence of
tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The
details for this can be found in the paper; a minimal sketch of the sequence-level setup is shown below.
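
The following is a minimal sketch of that sequence-level classification setup, assuming the `google/canine-c` checkpoint and a hypothetical two-label linear head on the pooled [CLS] state (the head is illustrative and not part of the original model card):

```py
import torch
from transformers import CanineTokenizer, CanineModel

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
model = CanineModel.from_pretrained("google/canine-c")

# the tokenizer maps each character to its Unicode code point and adds the
# special [CLS]/[SEP] code points
encoding = tokenizer("hello world", return_tensors="pt")
outputs = model(**encoding)

pooled_output = outputs.pooler_output                      # [CLS]-based summary vector
classifier = torch.nn.Linear(model.config.hidden_size, 2)  # hypothetical 2-label head
logits = classifier(pooled_output)
```
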
The example below demonstrates how to generate embeddings with [`Pipeline`], [`AutoModel`], and from the command line.

Model checkpoints:
<hfoptions id="usage">
<hfoption id="Pipeline">

- [google/canine-c](https://huggingface.co/google/canine-c): Pre-trained with autoregressive character loss,
12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).
- [google/canine-s](https://huggingface.co/google/canine-s): Pre-trained with subword loss, 12-layer,
768-hidden, 12-heads, 121M parameters (size ~500 MB).
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="feature-extraction",
    model="google/canine-c",
    device=0,
)

## Usage example
pipeline("Plant create energy through a process known as photosynthesis.")
```

CANINE works on raw characters, so it can be used **without a tokenizer**:
</hfoption>
<hfoption id="AutoModel">

```python
>>> from transformers import CanineModel
>>> import torch
```py
import torch
from transformers import AutoModel

>>> model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss
model = AutoModel.from_pretrained("google/canine-c")

>>> text = "hello world"
>>> # use Python's built-in ord() function to turn each character into its unicode code point id
>>> input_ids = torch.tensor([[ord(char) for char in text]])
text = "Plant create energy through a process known as photosynthesis."
input_ids = torch.tensor([[ord(char) for char in text]])

>>> outputs = model(input_ids) # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
outputs = model(input_ids)
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
```

For batched inference and training, it is however recommended to make use of the tokenizer (to pad/truncate all
sequences to the same length):

```python
>>> from transformers import CanineTokenizer, CanineModel
</hfoption>
<hfoption id="transformers CLI">

>>> model = CanineModel.from_pretrained("google/canine-c")
>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
```bash
echo -e "Plant create energy through a process known as photosynthesis." | transformers-cli run --task feature-extraction --model google/canine-c --device 0
```

>>> inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
>>> encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
</hfoption>
</hfoptions>

>>> outputs = model(**encoding) # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
```
## Notes

## Resources
- CANINE skips tokenization entirely — it works directly on raw characters, not subwords. You can use it with or without a tokenizer. For batched inference and training, it is recommended to use the tokenizer to pad and truncate all sequences to the same length.

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Multiple choice task guide](../tasks/multiple_choice)
```py
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/canine-c")
model = AutoModel.from_pretrained("google/canine-c")

inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
outputs = model(**encoding)
```
- CANINE is primarily designed to be fine-tuned on a downstream task. The pretrained model can be used for either masked language modeling or next sentence prediction.
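
The following is a minimal fine-tuning sketch for sequence classification, assuming the built-in [`CanineForSequenceClassification`] head and the `google/canine-s` checkpoint; the label and single backward pass are illustrative only:

```py
import torch
from transformers import AutoTokenizer, CanineForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=2)

inputs = tokenizer("CANINE works on raw characters.", return_tensors="pt")
labels = torch.tensor([1])  # illustrative label for a hypothetical 2-class task

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # single step shown for illustration; use Trainer or a custom loop in practice
```
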

## CanineConfig
