[NPU] support npu llama2-13B export & inference #8442
@@ -0,0 +1,14 @@

# PaddleNLP Custom OPs

This document describes how to build and install the PaddleNLP NPU custom OPs.

# 1. Install PaddleCustomDevice

Follow the [PaddleCustomDevice NPU installation guide](https://github.com/PaddlePaddle/PaddleCustomDevice/blob/develop/backends/npu/README_cn.md) to install it.

> **Review comment:** Is there currently a prebuilt NPU release of CustomDevice?
>
> **Reply:** There is no prebuilt package for high-performance inference yet; building from source is recommended.

# 2. Install paddlenlp_ops

```shell
python setup.py build bdist_wheel
pip install dist/paddlenlp_ops*.whl
```
@@ -0,0 +1,15 @@

```python
# Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from paddle_custom_device.npu.ops import *
```
@@ -0,0 +1,59 @@

```python
# Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
# (Apache License 2.0 header, identical to the one above.)

import os

from setuptools import Distribution, setup

packages = []
package_data = {}


class BinaryDistribution(Distribution):
    def has_ext_modules(self):
        return True


def main():
    setup(
        name="paddlenlp_ops",
        version="0.0.0",
        description="PaddleNLP NPU CustomOps",
        long_description="",
        long_description_content_type="text/markdown",
        author_email="Paddle-better@baidu.com",
        maintainer="PaddlePaddle",
        maintainer_email="Paddle-better@baidu.com",
        project_urls={},
        license="Apache Software License",
        packages=[
            "paddlenlp_ops",
        ],
        include_package_data=True,
        package_data={
            "": ["*.py"],
        },
        package_dir={
            "": "python",
        },
        zip_safe=False,
        distclass=BinaryDistribution,
        entry_points={"console_scripts": []},
        classifiers=[],
        keywords="PaddleNLP NPU CustomOps",
    )


if __name__ == "__main__":
    main()
```
```diff
@@ -40,9 +40,12 @@ def load_inference_model(model_path, model_name, param_name, exe):
     return paddle.static.io.load_inference_model(model_path, exe)
 
 
-def validate_pdmodel(model_path, model_prefix):
+def validate_pdmodel(model_path, model_prefix, device):
     paddle.enable_static()
-    place = paddle.CUDAPlace(0)
+    if device == "gpu":
+        place = paddle.CUDAPlace(0)
+    else:
+        place = paddle.CustomPlace(device, 0)
     exe = paddle.static.Executor(place)
     scope = paddle.static.Scope()
```

```diff
@@ -95,7 +98,12 @@ def main():
 
     if tensor_parallel_degree > 1:
         export_args.output_path = os.path.join(export_args.output_path, f"rank_{tensor_parallel_rank}")
-    validate_pdmodel(export_args.output_path, predictor_args.model_prefix)
+    validate_pdmodel(export_args.output_path, predictor_args.model_prefix, predictor_args.device)
+
+    if predictor_args.device == "npu":
+        from llama.npu.export_utils import process_params
+
+        process_params(os.path.join(export_args.output_path, predictor_args.model_prefix))
 
 
 if __name__ == "__main__":
```

> **Review comment:** What is the reason for modifying the op attributes of the NPU model here?
>
> **Reply:** During high-performance inference on the NPU, …
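The attribute rewrite that `process_params` performs is numerically a no-op: setting `trans_y=True` while also storing the transposed weight leaves the matmul result unchanged, and presumably only changes the memory layout the NPU kernels read. A minimal sketch of that equivalence, using NumPy as a stand-in for Paddle's `matmul_v2` semantics:

```python
import numpy as np

# Toy stand-ins for an activation batch and an [in_features, out_features] weight.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)
w = rng.standard_normal((8, 16)).astype(np.float32)

# Before process_params: matmul_v2(x, w) with trans_y=False.
y_before = x @ w

# After process_params: the stored weight is transposed and trans_y is set,
# so the op computes x @ w_stored.T with w_stored = w.T.
w_stored = w.T.copy()
y_after = x @ w_stored.T

assert np.allclose(y_before, y_after)
```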
@@ -0,0 +1,110 @@

```python
# Copyright (c) 2024 PaddlePaddle Authors. All Rights Reserved.
# (Apache License 2.0 header, identical to the one above.)

import argparse

import numpy as np
import paddle
from tqdm import tqdm

# Weight suffixes whose matmul_v2 ops get trans_y=True and a transposed weight.
WEIGHT_SUFFIXES = ("qkv_weight", "out_proj_weight", "ffn1_weight", "ffn2_weight")
# Dequant-scale vars repacked into the int64 layout the NPU kernels expect.
DEQUANT_SCALE_SUFFIXES = ("qkv_out_scale", "linear_out_scale", "ffn1_out_scale", "ffn2_out_scale")


def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_path", default="inference/model", help="The directory of the exported model.")
    return parser.parse_args()


def trans_weight(var):
    # Transpose a 2-D weight in place: update the shape recorded in the
    # program desc, then overwrite the tensor value with its transpose.
    shape = var.desc.shape()
    var.desc.set_shape([shape[1], shape[0]])

    var_data = np.array(var.get_value())
    var.get_value().set(var_data.T, paddle.CPUPlace())


def convert_dequant_scale(var):
    # Interleave each float32 scale with a zero, then reinterpret every
    # (scale, 0.0) pair of 8 bytes as a single int64.
    deq_scale = np.array(var.get_value()).astype(np.float32)
    new_deq_scale = np.stack([deq_scale.reshape(-1, 1), np.zeros_like(deq_scale).reshape(-1, 1)], axis=-1).reshape(-1)
    var.get_value().set(np.frombuffer(new_deq_scale.tobytes(), dtype=np.int64), paddle.CPUPlace())


def process_params(model_path):
    paddle.enable_static()
    exe = paddle.static.Executor(paddle.CPUPlace())

    prog = paddle.static.Program()
    startup_prog = paddle.static.Program()
    scope = paddle.static.Scope()
    with paddle.base.scope_guard(scope):
        with paddle.base.program_guard(prog, startup_prog):
            [program, feed_target_names, fetch_targets] = paddle.static.io.load_inference_model(model_path, exe)

            feed_targets = [var for var in program.list_vars() if var.name in feed_target_names]

            block = program.global_block()

            for op in tqdm(block.ops, desc="processing the linear layer for NPU"):
                if op.type != "matmul_v2":
                    continue
                w_name = op.input_arg_names[-1]
                if (w_name.endswith(WEIGHT_SUFFIXES) or w_name == "llama_lm_head_0.w_0") and not op.attr("trans_y"):
                    op._set_attr("trans_y", True)
                    trans_weight(block.var(w_name))

            for var_name in tqdm(block.vars, desc="processing the dequant layer for NPU"):
                if var_name.endswith(DEQUANT_SCALE_SUFFIXES):
                    convert_dequant_scale(block.var(var_name))

            paddle.static.save_inference_model(
                model_path, feed_targets, fetch_targets, exe, program=program, skip_prune_program=True
            )


def main():
    args = parse_arguments()
    process_params(args.model_path)


if __name__ == "__main__":
    main()
```
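The dequant-scale repacking above can be checked in isolation: each float32 scale is paired with a 0.0 and the resulting 8-byte pair is reinterpreted as one int64. A small self-contained sketch; `pack_dequant_scale` mirrors the transform in `convert_dequant_scale`, and `unpack_dequant_scale` is a round-trip helper added here purely for illustration:

```python
import numpy as np


def pack_dequant_scale(deq_scale):
    # Interleave each float32 scale with a 0.0, then view every
    # (scale, 0.0) pair's 8 bytes as one int64.
    deq_scale = np.asarray(deq_scale, dtype=np.float32)
    interleaved = np.stack(
        [deq_scale.reshape(-1, 1), np.zeros_like(deq_scale).reshape(-1, 1)], axis=-1
    ).reshape(-1)
    return np.frombuffer(interleaved.tobytes(), dtype=np.int64)


def unpack_dequant_scale(packed):
    # Round-trip: view the int64 buffer as float32 and drop the zero padding.
    floats = np.frombuffer(np.asarray(packed, dtype=np.int64).tobytes(), dtype=np.float32)
    return floats[0::2]


scales = np.array([0.5, 1.25, 2.0], dtype=np.float32)
packed = pack_dequant_scale(scales)
assert packed.dtype == np.int64 and packed.shape == (3,)
assert np.array_equal(unpack_dequant_scale(packed), scales)
```

The packed array has the same element count as the input, but each element now carries the original scale bits in its low 32 bits and zeros in its high 32 bits.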
```diff
@@ -570,7 +570,7 @@
         return ln_out
 
     def compute_qkv_linear(self, ln_out, i):
-        if float(paddle.version.cuda()) < 11.6:
+        if paddle.version.cuda() == "False" or float(paddle.version.cuda()) < 11.6:
             qkv_out = paddle.matmul(ln_out, self.qkv_weights[i], False, True)
             if self.qkv_biases[i] is not None:
                 qkv_out = paddle.add(qkv_out, self.qkv_biases[i])
```

> **Review comment:** Does Kunlun XPU inference also go through this logic? If so, does it affect Kunlun inference?
>
> **Reply:** No impact. This only makes the paddle-cpu build (the NPU flow uses the paddle-cpu build) take the branch above; otherwise `float(paddle.version.cuda())` would raise an error, since the CPU build's `paddle.version.cuda()` returns the string `"False"`.
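The added guard exists because, as the reply notes, CPU-only Paddle builds return the string `"False"` from `paddle.version.cuda()`, so calling `float()` on it unconditionally raises `ValueError`. A hypothetical helper (`parse_cuda_version` is our name, not a Paddle API) sketching the same check without depending on Paddle:

```python
def parse_cuda_version(version_str):
    """Return the CUDA version as a float, or None for non-CUDA builds.

    CPU-only Paddle builds (which the NPU flow uses) report the string
    "False" from paddle.version.cuda(); float("False") would raise.
    """
    if version_str == "False":
        return None
    return float(version_str)


assert parse_cuda_version("False") is None
assert parse_cuda_version("11.6") == 11.6
# A CPU build then takes the fallback matmul branch:
assert (parse_cuda_version("False") or 0) < 11.6
```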
> **Review comment:** Please don't create a separate new csrc directory here; many more hardware backends will be added later, so create an npu subdirectory under the existing csrc directory instead.
>
> **Reply:** Done, renamed csrc_npu -> csrc/npu.