# [LLM] Support Gemma model #8082


**Merged**: 44 commits, Apr 12, 2024
## Commits (44)

All commits by Southpika.

- `024236d` add model (Mar 8, 2024)
- `ddabd9c` add cfg&tokenizer (Mar 8, 2024)
- `f8359d0` update weight share (Mar 11, 2024)
- `2165d4b` rm useless notes (Mar 11, 2024)
- `7b10862` rm pdb (Mar 11, 2024)
- `afa538b` add cfg (Mar 12, 2024)
- `325e5f5` fix version (Mar 12, 2024)
- `a9b67ca` fix (Mar 12, 2024)
- `7611088` fix des (Mar 12, 2024)
- `c38c85c` fix des (Mar 12, 2024)
- `01a2ffb` update url (Mar 12, 2024)
- `abd7586` fix some logic (Mar 12, 2024)
- `5082373` add it model (Mar 12, 2024)
- `d1955a3` update_cfg (Mar 18, 2024)
- `4351994` add modeling ut (Mar 18, 2024)
- `73ff43a` Merge remote-tracking branch 'upstream/develop' into gemma-model (Mar 18, 2024)
- `4a7bd0c` add link (Mar 11, 2024)
- `3b950ee` fix typing (Mar 18, 2024)
- `f4ff965` fix typing (Mar 18, 2024)
- `8a379de` add init (Mar 18, 2024)
- `1e2ba5e` add tokenizer ut (Mar 18, 2024)
- `690056c` fix small bug (Mar 15, 2024)
- `acec8fe` add alibi (Mar 18, 2024)
- `492ccac` add addedtoken str (Mar 18, 2024)
- `0479e79` add special attr (Mar 18, 2024)
- `65f0a71` add tp ut (Mar 18, 2024)
- `a99f3fc` merge (Mar 25, 2024)
- `78497e7` Merge remote-tracking branch 'upstream/develop' into gemma-model (Mar 25, 2024)
- `87410c3` apply api change (Mar 25, 2024)
- `eb0c556` rm hard code cfg (Apr 1, 2024)
- `ef7fc3e` fix ut (Apr 1, 2024)
- `aad3b56` fix ut (Apr 1, 2024)
- `f600d02` add zero padding (Apr 1, 2024)
- `e1c1e85` upload bf16 safetensors model (Apr 2, 2024)
- `d4f09c0` fix (Apr 2, 2024)
- `8863852` add gemma in sft data convert (Apr 2, 2024)
- `2f8b5d8` add gemma in sft data convert (Apr 2, 2024)
- `282259c` add gemma tokenizer (Apr 8, 2024)
- `f0e8cd5` fix des (Apr 8, 2024)
- `cbfe50f` rm fast tokenizer (Apr 8, 2024)
- `bcd21b6` update sft cfg (Apr 8, 2024)
- `363b661` cherry-pick pp from develop (Apr 8, 2024)
- `50d81ca` Merge remote-tracking branch 'upstream/develop' into gemma-model (Apr 8, 2024)
- `bdb18ef` add cfg (Apr 8, 2024)
## Files changed
### `llm/data.py` (2 additions, 2 deletions)

```diff
@@ -44,11 +44,11 @@ def get_convert_example(model):

     if base_model_prefix == "chatglm":
         return convert_example_chatglm
-    elif base_model_prefix in ["chatglm_v2", "llama", "bloom", "opt", "qwen", "mixtral"]:
+    elif base_model_prefix in ["chatglm_v2", "llama", "bloom", "opt", "qwen", "mixtral", "gemma"]:
         return convert_example_common
     else:
         raise ValueError(
-            f"Unknown base_model_prefix: {model.base_model_prefix}. Supported base_model_prefix list: chatglm, bloom, llama, qwen, mixtral"
+            f"Unknown base_model_prefix: {model.base_model_prefix}. Supported base_model_prefix list: chatglm, bloom, llama, qwen, mixtral, gemma"
        )
```
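For context, this change routes Gemma through the generic example-conversion path shared with llama, qwen, and mixtral rather than a model-specific converter. A minimal usage sketch, assuming it runs from the `llm/` directory (so `data.py` is importable) and that the Gemma weights resolve through PaddleNLP's model hub:

```python
# Hypothetical sketch of the dispatch above. `AutoModelForCausalLM` is real
# PaddleNLP API; running from llm/ so that `data.py` imports is an assumption.
from paddlenlp.transformers import AutoModelForCausalLM

from data import get_convert_example  # llm/data.py, patched by this PR

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
convert_fn = get_convert_example(model)  # model.base_model_prefix == "gemma"
# convert_fn is convert_example_common, the same path used by llama et al.
```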
### `llm/gemma/README.md` (new file, 18 additions)

```markdown
# Gemma

## 1. Model Introduction

[Gemma](https://blog.google/technology/developers/gemma-open-models/), developed by Google DeepMind and other teams across Google, is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models.

**Supported model weights:**

| Model              |
| ------------------ |
| google/gemma-7b    |
| google/gemma-7b-it |
| google/gemma-2b    |
| google/gemma-2b-it |

## 2. Model Fine-tuning

Please refer to the [LLM end-to-end pipeline introduction](../README.md).
```
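To illustrate what the new model family provides, here is a minimal inference sketch using PaddleNLP's Auto classes. The class names and `return_tensors="pd"` are real PaddleNLP API; the exact `generate` arguments and the availability of the `google/gemma-2b` weights through the hub are assumptions:

```python
# Minimal sketch: load one of the Gemma checkpoints listed above and generate.
from paddlenlp.transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", dtype="bfloat16")

inputs = tokenizer("Why is the sky blue?", return_tensors="pd")  # Paddle tensors
ids, _scores = model.generate(**inputs, max_new_tokens=64)  # PaddleNLP returns (ids, scores)
print(tokenizer.batch_decode(ids, skip_special_tokens=True))
```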
### `llm/gemma/sft_argument.json` (new file, 30 additions)

```json
{
    "model_name_or_path": "google/gemma-2b/",
    "dataset_name_or_path": "./data",
    "output_dir": "./checkpoints/gemma_sft_ckpts",
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 1,
    "per_device_eval_batch_size": 8,
    "eval_accumulation_steps": 16,
    "num_train_epochs": 3,
    "learning_rate": 3e-05,
    "warmup_steps": 30,
    "logging_steps": 1,
    "evaluation_strategy": "epoch",
    "save_strategy": "epoch",
    "src_length": 512,
    "max_length": 1024,
    "fp16": true,
    "fp16_opt_level": "O2",
    "do_train": true,
    "do_eval": true,
    "disable_tqdm": true,
    "load_best_model_at_end": true,
    "eval_with_do_generation": false,
    "metric_for_best_model": "accuracy",
    "recompute": true,
    "save_total_limit": 1,
    "tensor_parallel_degree": 2,
    "zero_padding": false,
    "use_flash_attention": false
}
```
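A usage note: with `"tensor_parallel_degree": 2`, this config shards the model across two GPUs. Assuming the fine-tuning entry point in the `llm/` directory at the time of this PR was `finetune_generation.py` (an assumption worth verifying against the checkout), a typical launch would be `python -m paddle.distributed.launch --gpus "0,1" finetune_generation.py ./gemma/sft_argument.json`.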
### `llm/gemma/sft_argument_7b.json` (new file, 32 additions)

```json
{
    "model_name_or_path": "google/gemma-7b",
    "dataset_name_or_path": "./data",
    "output_dir": "./checkpoints/gemma_sft_ckpts",
    "per_device_train_batch_size": 8,
    "gradient_accumulation_steps": 1,
    "per_device_eval_batch_size": 8,
    "eval_accumulation_steps": 1,
    "num_train_epochs": 3,
    "learning_rate": 3e-06,
    "warmup_steps": 30,
    "logging_steps": 1,
    "evaluation_strategy": "epoch",
    "save_strategy": "epoch",
    "src_length": 512,
    "max_length": 1024,
    "bf16": true,
    "fp16_opt_level": "O2",
    "do_train": true,
    "do_eval": true,
    "do_predict": true,
    "disable_tqdm": true,
    "load_best_model_at_end": true,
    "eval_with_do_generation": false,
    "metric_for_best_model": "accuracy",
    "recompute": true,
    "save_total_limit": 1,
    "tensor_parallel_degree": 8,
    "pipeline_parallel_degree": 1,
    "zero_padding": false,
    "use_flash_attention": false
}
```
### `llm/gemma/sft_argument_7b_sharding.json` (new file, 33 additions)

```json
{
    "model_name_or_path": "google/gemma-7b",
    "dataset_name_or_path": "./data",
    "output_dir": "./checkpoints/gemma_sft_ckpts",
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 1,
    "per_device_eval_batch_size": 8,
    "eval_accumulation_steps": 1,
    "num_train_epochs": 3,
    "learning_rate": 3e-06,
    "warmup_steps": 30,
    "logging_steps": 1,
    "evaluation_strategy": "epoch",
    "save_strategy": "epoch",
    "src_length": 1024,
    "max_length": 2048,
    "fp16": true,
    "fp16_opt_level": "O2",
    "do_train": true,
    "do_eval": true,
    "do_predict": true,
    "disable_tqdm": true,
    "load_best_model_at_end": true,
    "eval_with_do_generation": false,
    "metric_for_best_model": "accuracy",
    "recompute": true,
    "save_total_limit": 1,
    "sharding_parallel_degree": 8,
    "sharding": "stage3",
    "pipeline_parallel_degree": 1,
    "zero_padding": false,
    "use_flash_attention": false
}
```
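Unlike the tensor-parallel configs, this one relies on `"sharding": "stage3"` with `"sharding_parallel_degree": 8`, which partitions parameters, gradients, and optimizer state across eight GPUs (analogous to ZeRO stage 3), letting the 7B model train with a per-device batch size of 1. The launch pattern is the same as above, e.g. `python -m paddle.distributed.launch --gpus "0,1,2,3,4,5,6,7" finetune_generation.py ./gemma/sft_argument_7b_sharding.json` (script name assumed, as noted earlier).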
### `llm/gemma/sft_argument_sharding.json` (new file, 31 additions)

```json
{
    "model_name_or_path": "google/gemma-2b/",
    "dataset_name_or_path": "./data",
    "output_dir": "./checkpoints/gemma_sft_ckpts",
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 1,
    "per_device_eval_batch_size": 1,
    "eval_accumulation_steps": 1,
    "num_train_epochs": 3,
    "learning_rate": 3e-05,
    "warmup_steps": 30,
    "logging_steps": 1,
    "evaluation_strategy": "epoch",
    "save_strategy": "epoch",
    "src_length": 512,
    "max_length": 1024,
    "fp16": true,
    "fp16_opt_level": "O2",
    "do_train": true,
    "do_eval": true,
    "disable_tqdm": true,
    "load_best_model_at_end": true,
    "eval_with_do_generation": false,
    "metric_for_best_model": "accuracy",
    "recompute": true,
    "save_total_limit": 1,
    "sharding_parallel_degree": 2,
    "sharding": "stage3",
    "zero_padding": false,
    "use_flash_attention": false
}
```
### `paddlenlp/transformers/__init__.py` (1 addition)

```diff
@@ -200,6 +200,7 @@
 from .gau_alpha.modeling import *
 from .gau_alpha.tokenizer import *
 from .gau_alpha.configuration import *
+from .gemma import *
 from .roformerv2.modeling import *
 from .roformerv2.tokenizer import *
 from .roformerv2.configuration import *
```
### `paddlenlp/transformers/auto/modeling.py` (1 addition)

```diff
@@ -118,6 +118,7 @@
         ("Bloom", "bloom"),
         ("QWen", "qwen"),
         ("Mixtral", "mixtral"),
+        ("Gemma", "gemma"),
     ]
 )
```
### `paddlenlp/transformers/auto/tokenizer.py` (1 addition)

```diff
@@ -97,6 +97,7 @@
         ("BloomTokenizer", "bloom"),
         ("SpeechT5Tokenizer", "speecht5"),
         ("QWenTokenizer", "qwen"),
+        ("GemmaTokenizer", "gemma"),
     ]
 )
```
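These two one-line mapping entries are what let the Auto factories resolve Gemma checkpoints by name. A short sketch; the Auto classes are real PaddleNLP API, and the resolved class names follow from the mappings plus PaddleNLP's per-model naming convention:

```python
# Sketch: with the mappings above, the Auto factories dispatch to the Gemma classes.
from paddlenlp.transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")     # -> GemmaTokenizer
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")  # -> a Gemma model class
print(type(tokenizer).__name__, type(model).__name__)
```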
### `paddlenlp/transformers/gemma/__init__.py` (new file, 18 additions)

```python
# Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .configuration import *
from .modeling import *
from .modeling_pp import *
from .tokenizer import *
```
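Because the package re-exports everything from its submodules, the Gemma classes are also importable from the top-level `paddlenlp.transformers` namespace. A sketch; the exact exported names (`GemmaConfig`, `GemmaForCausalLM`, `GemmaTokenizer`) follow PaddleNLP's convention for other model families and are assumptions, since the bodies of `configuration.py`, `modeling.py`, and `tokenizer.py` are not shown in this diff view:

```python
# Hypothetical direct imports, assuming PaddleNLP's usual naming convention.
from paddlenlp.transformers import GemmaConfig, GemmaForCausalLM, GemmaTokenizer

config = GemmaConfig()            # defaults defined in configuration.py
model = GemmaForCausalLM(config)  # randomly initialized model
tokenizer = GemmaTokenizer.from_pretrained("google/gemma-2b")
```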