fix model run error when using auto parallel and recompute (use_reentrant=False) #65188
Conversation
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
LGTM
LGTM for Tensor construction
…t=false) (PaddlePaddle#65188)
* fix model run error when using auto parallel and recompute with use_reentrant=False
* solve the defect of TensorWrapper not considering DistTensor
* add unittest
* fix recompute not supporting CPU when use_reentrant is False
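The changes above target recompute with use_reentrant=False applied to DistTensor inputs produced by the auto-parallel API. The snippet below is a minimal sketch of that scenario, not taken from this PR or its unit test; the mesh, layer sizes, and launch command are illustrative assumptions, and it needs two devices.

```python
# Minimal sketch (illustrative, not the PR's own test) of combining auto parallel
# with recompute(use_reentrant=False). Assumes two devices; launch with e.g.
#   python -m paddle.distributed.launch --gpus 0,1 repro.py
import paddle
import paddle.distributed as dist
from paddle.distributed.fleet.utils import recompute

mesh = dist.ProcessMesh([0, 1], dim_names=["x"])


class Net(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.block = paddle.nn.Sequential(
            paddle.nn.Linear(8, 8),
            paddle.nn.ReLU(),
            paddle.nn.Linear(8, 8),
        )

    def forward(self, x):
        # Drop the block's activations in forward and recompute them in backward;
        # use_reentrant=False selects the non-reentrant implementation.
        return recompute(self.block, x, use_reentrant=False)


x = paddle.randn([4, 8])
# shard_tensor turns the input into a DistTensor sharded along the batch dimension.
x = dist.shard_tensor(x, mesh, [dist.Shard(0)])

loss = Net()(x).mean()
loss.backward()  # the combination exercised by this PR's fix
```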
PR Category
Auto Parallel
PR Types
Bug fixes
Description
pcard-84677
The model raises an error and core dumps when both of the following conditions are met: