From f8690b635f40ecc637680e8e000d1e7ed90db0f0 Mon Sep 17 00:00:00 2001
From: "Ashvanth.S"
Date: Sat, 29 Mar 2025 16:11:46 +0530
Subject: [PATCH 1/5] Update Model card for gpt2

---
 docs/source/en/model_doc/gpt2.md | 126 ++++++++++++++++---------------
 1 file changed, 64 insertions(+), 62 deletions(-)

diff --git a/docs/source/en/model_doc/gpt2.md b/docs/source/en/model_doc/gpt2.md
index 89a0429cca41..640043230fa2 100644
--- a/docs/source/en/model_doc/gpt2.md
+++ b/docs/source/en/model_doc/gpt2.md
@@ -14,30 +14,30 @@ rendered properly in your Markdown viewer.
 -->
 
-# OpenAI GPT2
-
-<div class="flex flex-wrap space-x-1">
-<a href="…">
-<img alt="Models" src="…">
-</a>
-<a href="…">
-<img alt="Spaces" src="…">
-</a>
-</div>
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="…">
+        <img alt="TensorFlow" src="…">
+        <img alt="FlashAttention" src="…">
+        <img alt="SDPA" src="…">
+        <a href="…">
+            <img alt="Models" src="…">
+        </a>
+        <a href="…">
+            <img alt="Spaces" src="…">
+        </a>
+    </div>
+</div>
-## Overview -OpenAI GPT-2 model was proposed in [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) by Alec -Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from [OpenAI](https://huggingface.co/openai). It's a causal (unidirectional) -transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. +# GPT2 -The abstract from the paper is the following: +GPT-2 is a causal transformer language model introduced by OpenAI through the paper [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). The model represents a significant scaling up from its predecessor **GPT**, with 10× more parameters and training data. + +GPT-2 was developed with a straightforward objective to predict the next word in a sequence based on all preceding words. By training on a diverse 40GB corpus of web text, this seemingly simple approach enabled the model to develop sophisticated text generation capabilities across multiple domains and writing styles without task-specific training. + +The model architecture uses a unidirectional (causal) attention mechanism where each token can only attend to previous tokens, making it particularly effective for text generation tasks. -*GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million -web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some -text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks -across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than -10X the amount of data.* [Write With Transformer](https://transformer.huggingface.co/doc/gpt2-large) is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five @@ -45,51 +45,58 @@ different sizes: small, medium, large, xl and a distilled version of the small c This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://openai.com/blog/better-language-models/). -## Usage tips - -- GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than - the left. -- GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next - token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be - observed in the *run_generation.py* example script. -- The model can take the *past_key_values* (for PyTorch) or *past* (for TF) as input, which is the previously computed - key/value attention pairs. Using this (*past_key_values* or *past*) value prevents the model from re-computing - pre-computed values in the context of text generation. For PyTorch, see *past_key_values* argument of the - [`GPT2Model.forward`] method, or for TF the *past* argument of the - [`TFGPT2Model.call`] method for more information on its usage. +> [!TIP] +> Click on the GPT models in the right sidebar for more examples of how to apply GPT to different language tasks. 
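The next-word objective described above is visible directly in the language-modeling head: the logits at the last position score every candidate continuation of the prefix. The snippet below is a minimal sketch of that idea; the prompt string is illustrative, and the small `gpt2` checkpoint matches the examples in this card.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Score every vocabulary token as a possible continuation of the prefix.
inputs = tokenizer("The Eiffel Tower is located in the city of", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # [batch, seq_len, vocab_size]

# Each position only sees earlier tokens, so the last position predicts the next word.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))
```

Sampling settings such as `temperature`, `do_sample`, and `max_length` in the generation examples that follow operate on exactly these per-step distributions.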
+ +- GPT-2 has absolute position embeddings, hence advised to pad inputs on the right rather than the left. +- The model was trained with a causal language modeling (CLM) objective, making it excellent at predicting the next token in a sequence. This enables GPT-2 to generate coherent text, as demonstrated in the `run_generation.py` example script. +- For efficient text generation, GPT-2 can reuse previously computed key/value attention pairs. Access this feature via the past_key_values parameter in PyTorch (see [GPT2Model.forward] method) or the past parameter in TensorFlow (see [TFGPT2Model.call] method). - Enabling the *scale_attn_by_inverse_layer_idx* and *reorder_and_upcast_attn* flags will apply the training stability improvements from [Mistral](https://github.com/stanford-crfm/mistral/) (for PyTorch only). -## Usage example +The example below demonstrates how to generate text with [`Pipeline`] and [`Automodel`] class -The `generate()` method can be used to generate text using GPT2 model. + + -```python ->>> from transformers import AutoModelForCausalLM, AutoTokenizer +```py +from transformers import pipeline, set_seed ->>> model = AutoModelForCausalLM.from_pretrained("gpt2") ->>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - ->>> prompt = "GPT2 is a model developed by OpenAI." +generator = pipeline(task='text-generation', model='gpt2') ->>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids +set_seed(42) ->>> gen_tokens = model.generate( -... input_ids, -... do_sample=True, -... temperature=0.9, -... max_length=100, -... ) ->>> gen_text = tokenizer.batch_decode(gen_tokens)[0] +generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) ``` + + + + +```py +from transformers import AutoModelForCausalLM, AutoTokenizer -## Using Flash Attention 2 +model = AutoModelForCausalLM.from_pretrained("gpt2") +tokenizer = AutoTokenizer.from_pretrained("gpt2") -Flash Attention 2 is a faster, optimized version of the attention scores computation which relies on `cuda` kernels. +prompt = "GPT2 is a model developed by OpenAI." -### Installation +input_ids = tokenizer(prompt, return_tensors="pt").input_ids -First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer). +gen_tokens = model.generate( + input_ids, + do_sample=True, + temperature=0.9, + max_length=100, + ) +gen_text = tokenizer.batch_decode(gen_tokens)[0] +``` + + + + +Flash Attention 2 provides significant speedups for transformer models through optimized `CUDA` kernels for attention computation. + +Do check whether your hardware is compatible with Flash Attention 2 before implementation. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer). 
Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: @@ -97,9 +104,7 @@ Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-fe pip install -U flash-attn --no-build-isolation ``` -### Usage - -To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). We'll also load the model in half-precision (e.g. `torch.float16`), since it results in almost no degradation to audio quality but significantly lower memory usage and faster inference: +Enable Flash Attention 2 by specifying `attn_implementation="flash_attention_2"` to to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) when loading your model.For optimal performance, use half-precision (e.g., torch.float16), which maintains quality while reducing memory usage and accelerating inference: ```python >>> import torch @@ -118,9 +123,6 @@ To load a model using Flash Attention 2, we can pass the argument `attn_implemen >>> tokenizer.batch_decode(generated_ids)[0] ``` - -### Expected speedups - Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using `gpt2` checkpoint and the Flash Attention 2 version of the model using a sequence length of 512.
@@ -128,7 +130,6 @@ Below is an expected speedup diagram that compares pure inference time between t
-## Using Scaled Dot Product Attention (SDPA) PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) @@ -141,7 +142,6 @@ SDPA is used by default for `torch>=2.1.1` when an implementation is available, ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="sdpa") -... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). @@ -150,7 +150,8 @@ On a local benchmark (rtx3080ti-16GB, PyTorch 2.2.1, OS Ubuntu 22.04) using `flo [gpt2-large](https://huggingface.co/openai-community/gpt2-large), we saw the following speedups during training and inference. -### Training +The table below shows the training benchmark for GPT2 using Eager and SDPA implementations. + | Batch size | Seq len | Time per batch (Eager - s) | Time per batch (SDPA - s) | Speedup (%) | Eager peak mem (MB) | SDPA peak mem (MB) | Mem saving (%) | |-----------:|--------:|----------------------------:|--------------------------:|------------:|--------------------:|-------------------:|------------------:| | 1 | 128 | 0.039 | 0.032 | 23.042 | 3482.32 | 3494.62 | -0.352 | @@ -166,7 +167,8 @@ following speedups during training and inference. | 4 | 512 | 0.494 | 0.406 | 21.687 | 12466.6 | 8102.64 | 53.858 | | 4 | 1024 | OOM | 0.795 | / | OOM | 14568.2 | SDPA does not OOM | -### Inference +The table below shows the inference time and memory usage for GPT2 using Eager and SDPA implementations. + | Batch size | Seq len | Per token latency Eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem Eager (MB) | Mem SDPA (MB) | Mem saved (%) | |-----------:|--------:|-----------------------------:|----------------------------:|------------:|---------------:|--------------:|--------------:| | 1 | 128 | 7.991 | 6.968 | 14.681 | 1685.2 | 1701.32 | -0.947 | @@ -185,7 +187,7 @@ following speedups during training and inference. -## Resources +## Notes A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. From 8bfd186e1c42717b9285322ff31965def25cb898 Mon Sep 17 00:00:00 2001 From: "Ashvanth.S" Date: Sat, 29 Mar 2025 16:22:34 +0530 Subject: [PATCH 2/5] Update link for gpt2 space --- docs/source/en/model_doc/gpt2.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/model_doc/gpt2.md b/docs/source/en/model_doc/gpt2.md index 640043230fa2..e79bc68cd650 100644 --- a/docs/source/en/model_doc/gpt2.md +++ b/docs/source/en/model_doc/gpt2.md @@ -39,7 +39,7 @@ GPT-2 was developed with a straightforward objective to predict the next word in The model architecture uses a unidirectional (causal) attention mechanism where each token can only attend to previous tokens, making it particularly effective for text generation tasks. 
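A concrete way to check the "each token can only attend to previous tokens" behaviour described above is to inspect the attention weights, which come back lower-triangular. This is an illustrative sketch, assuming the `gpt2` checkpoint and eager attention so that per-head weights are returned.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", attn_implementation="eager")

inputs = tokenizer("Hello, I'm a language model", return_tensors="pt")
with torch.no_grad():
    # tuple with one [batch, num_heads, seq_len, seq_len] tensor per layer
    attentions = model(**inputs, output_attentions=True).attentions

# Weights above the diagonal (future positions) are zeroed out by the causal mask.
first_head = attentions[0][0, 0]
print(first_head.triu(diagonal=1).abs().max().item())  # ~0.0
```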
-[Write With Transformer](https://transformer.huggingface.co/doc/gpt2-large) is a webapp created and hosted by +[Write With Transformer](https://huggingface.co/spaces/merve/write-with-transformer) is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: *distilgpt-2*. From 3679cd7e3f901a6825243758c69326b6a2fce180 Mon Sep 17 00:00:00 2001 From: "Ashvanth.S" Date: Thu, 3 Apr 2025 22:07:36 +0530 Subject: [PATCH 3/5] fixes docs based on suggestions --- docs/source/en/model_doc/gpt2.md | 55 ++++++++++++-------------------- 1 file changed, 21 insertions(+), 34 deletions(-) diff --git a/docs/source/en/model_doc/gpt2.md b/docs/source/en/model_doc/gpt2.md index e79bc68cd650..0120ae87845e 100644 --- a/docs/source/en/model_doc/gpt2.md +++ b/docs/source/en/model_doc/gpt2.md @@ -30,65 +30,46 @@ rendered properly in your Markdown viewer.
-# GPT2 +# GPT-2 -GPT-2 is a causal transformer language model introduced by OpenAI through the paper [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). The model represents a significant scaling up from its predecessor **GPT**, with 10× more parameters and training data. - -GPT-2 was developed with a straightforward objective to predict the next word in a sequence based on all preceding words. By training on a diverse 40GB corpus of web text, this seemingly simple approach enabled the model to develop sophisticated text generation capabilities across multiple domains and writing styles without task-specific training. +[GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) is a scaled up version of GPT, a causal transformer language model, with 10x more parameters and training data. The model was pretrained on a 40GB dataset to predict the next word in a sequence based on all the previous words. This approach enabled the model to perform many downstream tasks in a zero-shot setting. The model architecture uses a unidirectional (causal) attention mechanism where each token can only attend to previous tokens, making it particularly effective for text generation tasks. -[Write With Transformer](https://huggingface.co/spaces/merve/write-with-transformer) is a webapp created and hosted by -Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five -different sizes: small, medium, large, xl and a distilled version of the small checkpoint: *distilgpt-2*. +You can find all the original GPT-2 checkpoints under the [OpenAI community](https://huggingface.co/openai-community) organization.GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: *distilgpt-2*. This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://openai.com/blog/better-language-models/). > [!TIP] -> Click on the GPT models in the right sidebar for more examples of how to apply GPT to different language tasks. +> Click on the GPT-2 models in the right sidebar for more examples of how to apply GPT-2 to different language tasks. -- GPT-2 has absolute position embeddings, hence advised to pad inputs on the right rather than the left. -- The model was trained with a causal language modeling (CLM) objective, making it excellent at predicting the next token in a sequence. This enables GPT-2 to generate coherent text, as demonstrated in the `run_generation.py` example script. -- For efficient text generation, GPT-2 can reuse previously computed key/value attention pairs. Access this feature via the past_key_values parameter in PyTorch (see [GPT2Model.forward] method) or the past parameter in TensorFlow (see [TFGPT2Model.call] method). -- Enabling the *scale_attn_by_inverse_layer_idx* and *reorder_and_upcast_attn* flags will apply the training stability - improvements from [Mistral](https://github.com/stanford-crfm/mistral/) (for PyTorch only). - -The example below demonstrates how to generate text with [`Pipeline`] and [`Automodel`] class +The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModel`], and from the command line. 
```py -from transformers import pipeline, set_seed - -generator = pipeline(task='text-generation', model='gpt2') - -set_seed(42) +import torch +from transformers import pipeline -generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) +pipeline = pipeline(task="text-generation", model="openai-community/gpt2", torch_dtype=torch.float16, device=0) +pipeline("Hellow, I'm a language model") ``` - ```py +import torch from transformers import AutoModelForCausalLM, AutoTokenizer -model = AutoModelForCausalLM.from_pretrained("gpt2") -tokenizer = AutoTokenizer.from_pretrained("gpt2") - -prompt = "GPT2 is a model developed by OpenAI." +model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2", torch_dtype=torch.float16, device_map="autp", attn_implementation="sdpa") +tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") -input_ids = tokenizer(prompt, return_tensors="pt").input_ids +input_ids = tokenzier("GPT2 is a model developed by OpenAI.". return_tensors="pt").to("cuda") -gen_tokens = model.generate( - input_ids, - do_sample=True, - temperature=0.9, - max_length=100, - ) -gen_text = tokenizer.batch_decode(gen_tokens)[0] +output = model.generate(**input_ids, cache_implementation="static") +print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` @@ -189,6 +170,12 @@ The table below shows the inference time and memory usage for GPT2 using Eager a ## Notes +- GPT-2 has absolute position embeddings, hence advised to pad inputs on the right rather than the left. +- The model was trained with a causal language modeling (CLM) objective, making it excellent at predicting the next token in a sequence. This enables GPT-2 to generate coherent text, as demonstrated in the `run_generation.py` example script. +- For efficient text generation, GPT-2 can reuse previously computed key/value attention pairs. Access this feature via the past_key_values parameter in PyTorch (see [GPT2Model.forward] method) or the past parameter in TensorFlow (see [TFGPT2Model.call] method). +- Enabling the *scale_attn_by_inverse_layer_idx* and *reorder_and_upcast_attn* flags will apply the training stability + improvements from [Mistral](https://github.com/stanford-crfm/mistral/) (for PyTorch only). + A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. From 4e92f2eecf9a67b5b0db065e5804344bc0e6903d Mon Sep 17 00:00:00 2001 From: "Ashvanth.S" Date: Thu, 3 Apr 2025 22:51:39 +0530 Subject: [PATCH 4/5] Add transformers-cli and quantization example for GPT-2 --- docs/source/en/model_doc/gpt2.md | 34 ++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/docs/source/en/model_doc/gpt2.md b/docs/source/en/model_doc/gpt2.md index 0120ae87845e..bb7f7359fe08 100644 --- a/docs/source/en/model_doc/gpt2.md +++ b/docs/source/en/model_doc/gpt2.md @@ -72,9 +72,43 @@ output = model.generate(**input_ids, cache_implementation="static") print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` + + + +```bash +transformers-cli chat --model_name_or_path openai-community/gpt2 --torch_dtype auto --device 0 +``` + +Quantization reduces the memory burden of large models by representing the weights in a lower precision. 
Refer to the [Quantization](../quantization/overview) overview for more available quantization backends. + +The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to 4-bits. + +```py +import torch +from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline + +quantization_config = BitsAndBytesConfig( + load_in_4bit=True, + bnb_4bit_quant_type="nf4", + bnb_4bit_compute_dtype="float16", + bnb_4bit_use_double_quant=True +) + +model = AutoModelForCausalLM.from_pretrained( + "openai-community/gpt2-xl", + quantization_config=quantization_config, + device_map="auto" +) + +tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl") +inputs = tokenizer("Once upon a time, there was a magical forest", return_tensors="pt").to("cuda") +outputs = model.generate(**inputs, max_new_tokens=100) +print(tokenizer.decode(outputs[0], skip_special_tokens=True)) +``` + Flash Attention 2 provides significant speedups for transformer models through optimized `CUDA` kernels for attention computation. Do check whether your hardware is compatible with Flash Attention 2 before implementation. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer). From a488d26697663d399abe5c91b68382edd03c2ae0 Mon Sep 17 00:00:00 2001 From: "Ashvanth.S" Date: Sat, 5 Apr 2025 16:12:25 +0530 Subject: [PATCH 5/5] Remove resources and flash attention docs and fix typos --- docs/source/en/model_doc/gpt2.md | 141 ++----------------------------- 1 file changed, 9 insertions(+), 132 deletions(-) diff --git a/docs/source/en/model_doc/gpt2.md b/docs/source/en/model_doc/gpt2.md index bb7f7359fe08..6ce006c9081d 100644 --- a/docs/source/en/model_doc/gpt2.md +++ b/docs/source/en/model_doc/gpt2.md @@ -15,17 +15,11 @@ rendered properly in your Markdown viewer. -->
-
+
@@ -36,10 +30,7 @@ rendered properly in your Markdown viewer.
 
 The model architecture uses a unidirectional (causal) attention mechanism where each token can only attend to previous tokens, making it particularly effective for text generation tasks.
 
-You can find all the original GPT-2 checkpoints under the [OpenAI community](https://huggingface.co/openai-community) organization.GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: *distilgpt-2*.
-
-This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://openai.com/blog/better-language-models/).
+You can find all the original GPT-2 checkpoints under the [OpenAI community](https://huggingface.co/openai-community?search_models=gpt) organization.
 
 > [!TIP]
 > Click on the GPT-2 models in the right sidebar for more examples of how to apply GPT-2 to different language tasks.
@@ -54,7 +45,7 @@
 import torch
 from transformers import pipeline
 
 pipeline = pipeline(task="text-generation", model="openai-community/gpt2", torch_dtype=torch.float16, device=0)
-pipeline("Hellow, I'm a language model")
+pipeline("Hello, I'm a language model")
 ```
 
@@ -63,10 +54,10 @@ pipeline("Hellow, I'm a language model")
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2", torch_dtype=torch.float16, device_map="autp", attn_implementation="sdpa")
+model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2", torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
 tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
 
-input_ids = tokenzier("GPT2 is a model developed by OpenAI.". return_tensors="pt").to("cuda")
+input_ids = tokenizer("Hello, I'm a language model", return_tensors="pt").to("cuda")
 
 output = model.generate(**input_ids, cache_implementation="static")
 print(tokenizer.decode(output[0], skip_special_tokens=True))
 ```
@@ -76,7 +67,7 @@ print(tokenizer.decode(output[0], skip_special_tokens=True))
 
 ```bash
-transformers-cli chat --model_name_or_path openai-community/gpt2 --torch_dtype auto --device 0
+echo -e "Hello, I'm a language model" | transformers-cli run --task text-generation --model openai-community/gpt2 --device 0
 ```
 
@@ -109,125 +100,11 @@ outputs = model.generate(**inputs, max_new_tokens=100)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
-Flash Attention 2 provides significant speedups for transformer models through optimized `CUDA` kernels for attention computation.
-
-Do check whether your hardware is compatible with Flash Attention 2 before implementation. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer).
- -Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: - -```bash -pip install -U flash-attn --no-build-isolation -``` - -Enable Flash Attention 2 by specifying `attn_implementation="flash_attention_2"` to to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) when loading your model.For optimal performance, use half-precision (e.g., torch.float16), which maintains quality while reducing memory usage and accelerating inference: - -```python ->>> import torch ->>> from transformers import AutoModelForCausalLM, AutoTokenizer ->>> device = "cuda" # the device to load the model onto - ->>> model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="flash_attention_2") ->>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - ->>> prompt = "def hello_world():" - ->>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device) ->>> model.to(device) - ->>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) ->>> tokenizer.batch_decode(generated_ids)[0] -``` - -Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using `gpt2` checkpoint and the Flash Attention 2 version of the model using a sequence length of 512. - -
- -
- - -PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function -encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the -[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) -or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) -page for more information. - -SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set -`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. - -```python -from transformers import AutoModelForCausalLM -model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="sdpa") -``` - -For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). - -On a local benchmark (rtx3080ti-16GB, PyTorch 2.2.1, OS Ubuntu 22.04) using `float16` with -[gpt2-large](https://huggingface.co/openai-community/gpt2-large), we saw the -following speedups during training and inference. - -The table below shows the training benchmark for GPT2 using Eager and SDPA implementations. - -| Batch size | Seq len | Time per batch (Eager - s) | Time per batch (SDPA - s) | Speedup (%) | Eager peak mem (MB) | SDPA peak mem (MB) | Mem saving (%) | -|-----------:|--------:|----------------------------:|--------------------------:|------------:|--------------------:|-------------------:|------------------:| -| 1 | 128 | 0.039 | 0.032 | 23.042 | 3482.32 | 3494.62 | -0.352 | -| 1 | 256 | 0.073 | 0.059 | 25.15 | 3546.66 | 3552.6 | -0.167 | -| 1 | 512 | 0.155 | 0.118 | 30.96 | 4230.1 | 3665.59 | 15.4 | -| 1 | 1024 | 0.316 | 0.209 | 50.839 | 8682.26 | 4881.09 | 77.875 | -| 2 | 128 | 0.07 | 0.06 | 15.324 | 3557.8 | 3545.91 | 0.335 | -| 2 | 256 | 0.143 | 0.122 | 16.53 | 3901.5 | 3657.68 | 6.666 | -| 2 | 512 | 0.267 | 0.213 | 25.626 | 7062.21 | 4876.47 | 44.822 | -| 2 | 1024 | OOM | 0.404 | / | OOM | 8096.35 | SDPA does not OOM | -| 4 | 128 | 0.134 | 0.128 | 4.412 | 3675.79 | 3648.72 | 0.742 | -| 4 | 256 | 0.243 | 0.217 | 12.292 | 6129.76 | 4871.12 | 25.839 | -| 4 | 512 | 0.494 | 0.406 | 21.687 | 12466.6 | 8102.64 | 53.858 | -| 4 | 1024 | OOM | 0.795 | / | OOM | 14568.2 | SDPA does not OOM | - -The table below shows the inference time and memory usage for GPT2 using Eager and SDPA implementations. 
- -| Batch size | Seq len | Per token latency Eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem Eager (MB) | Mem SDPA (MB) | Mem saved (%) | -|-----------:|--------:|-----------------------------:|----------------------------:|------------:|---------------:|--------------:|--------------:| -| 1 | 128 | 7.991 | 6.968 | 14.681 | 1685.2 | 1701.32 | -0.947 | -| 1 | 256 | 8.462 | 7.199 | 17.536 | 1745.49 | 1770.78 | -1.428 | -| 1 | 512 | 8.68 | 7.853 | 10.529 | 1907.69 | 1921.29 | -0.708 | -| 1 | 768 | 9.101 | 8.365 | 8.791 | 2032.93 | 2068.12 | -1.701 | -| 2 | 128 | 9.169 | 9.001 | 1.861 | 1803.84 | 1811.4 | -0.418 | -| 2 | 256 | 9.907 | 9.78 | 1.294 | 1907.72 | 1921.44 | -0.714 | -| 2 | 512 | 11.519 | 11.644 | -1.071 | 2176.86 | 2197.75 | -0.951 | -| 2 | 768 | 13.022 | 13.407 | -2.873 | 2464.3 | 2491.06 | -1.074 | -| 4 | 128 | 10.097 | 9.831 | 2.709 | 1942.25 | 1985.13 | -2.16 | -| 4 | 256 | 11.599 | 11.398 | 1.764 | 2177.28 | 2197.86 | -0.937 | -| 4 | 512 | 14.653 | 14.45 | 1.411 | 2753.16 | 2772.57 | -0.7 | -| 4 | 768 | 17.846 | 17.617 | 1.299 | 3327.04 | 3343.97 | -0.506 | - - - - ## Notes -- GPT-2 has absolute position embeddings, hence advised to pad inputs on the right rather than the left. -- The model was trained with a causal language modeling (CLM) objective, making it excellent at predicting the next token in a sequence. This enables GPT-2 to generate coherent text, as demonstrated in the `run_generation.py` example script. -- For efficient text generation, GPT-2 can reuse previously computed key/value attention pairs. Access this feature via the past_key_values parameter in PyTorch (see [GPT2Model.forward] method) or the past parameter in TensorFlow (see [TFGPT2Model.call] method). -- Enabling the *scale_attn_by_inverse_layer_idx* and *reorder_and_upcast_attn* flags will apply the training stability - improvements from [Mistral](https://github.com/stanford-crfm/mistral/) (for PyTorch only). - -A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - - - -- A blog on how to [Finetune a non-English GPT-2 Model with Hugging Face](https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface). -- A blog on [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate) with GPT-2. -- A blog on [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot), a large GPT-2 model. -- A blog on [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate) with GPT-2. -- A blog on [How to train a Language Model with Megatron-LM](https://huggingface.co/blog/megatron-training) with a GPT-2 model. -- A notebook on how to [finetune GPT2 to generate lyrics in the style of your favorite artist](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb). 🌎 -- A notebook on how to [finetune GPT2 to generate tweets in the style of your favorite Twitter user](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb). 
🌎
-- [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course.
-- [`GPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
-- [`TFGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
-- [`FlaxGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb).
-- [Text classification task guide](../tasks/sequence_classification)
-- [Token classification task guide](../tasks/token_classification)
-- [Causal language modeling task guide](../tasks/language_modeling)
+- Pad inputs on the right because GPT-2 uses absolute position embeddings.
+- GPT-2 can reuse previously computed key-value attention pairs. Access this feature with the [past_key_values](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Model.forward.past_key_values) parameter in [`GPT2Model.forward`].
+- Enable the [scale_attn_by_inverse_layer_idx](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.scale_attn_by_inverse_layer_idx) and [reorder_and_upcast_attn](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.reorder_and_upcast_attn) parameters to apply the training stability improvements from [Mistral](./mistral) (see the sketch below).
 
 ## GPT2Config
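The notes above cover right-side padding, key/value cache reuse, and the two training-stability flags in prose only. The sketch below shows one way these options are typically wired up; the prompt strings are illustrative, and the `openai-community/gpt2` checkpoint matches the examples earlier in this card.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GPT2Config

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

# Right-side padding, as recommended for absolute position embeddings.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# First pass: cache the key/value pairs computed for the prefix.
prefix = tokenizer("Hello, I'm a language model", return_tensors="pt")
with torch.no_grad():
    out = model(**prefix, use_cache=True)
past_key_values = out.past_key_values

# Second pass: feed only the new token and reuse the cached pairs.
new_token = tokenizer(" and", return_tensors="pt")
with torch.no_grad():
    out = model(input_ids=new_token.input_ids, past_key_values=past_key_values, use_cache=True)

# Training-stability flags from the last note, set on the config when building a model.
config = GPT2Config(scale_attn_by_inverse_layer_idx=True, reorder_and_upcast_attn=True)
stable_model = AutoModelForCausalLM.from_config(config)
```

[`~GenerationMixin.generate`] manages this cache automatically, so passing `past_key_values` by hand is mainly useful in custom decoding loops.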