[LLM] support sparse attention for LLAMA #8592
Changes from all commits: cf69b0b, 73c6740, 7f3014c, 84cf5a8, 61f2314
Code-coverage warnings (changed lines not covered by tests):
- paddlenlp/transformers/llama/fusion_ops.py: L212-L214, L222
- paddlenlp/transformers/llama/modeling.py: L1585
- paddlenlp/transformers/llama/modeling_pp.py: L52-L59, L63, L68, L76-L77, L82, L92-L93, L131, L145, L170, L189, L194, L205, L233, L242, L265-L266, L268-L272, L274-L275, L280-L282
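The PR's subject is sparse attention support for LLaMA. As a rough illustration of the kind of masking such a feature enables, the sketch below implements causal attention with a "start-row" sparse mask in plain NumPy: key column j becomes invisible to every query row at or beyond start_row_indices[j], which is how packed sequences keep later documents from attending to earlier ones. This is a conceptual sketch only; the name `attn_mask_start_row_indices` and its exact semantics are assumptions for illustration, not the PR's verified Paddle-side API.

```python
# Minimal NumPy sketch of causal attention with a "start-row" sparse mask.
# Illustrative only -- attn-mask naming and semantics are assumptions,
# not the actual implementation in paddlenlp/transformers/llama/fusion_ops.py.
import numpy as np

def sparse_causal_attention(q, k, v, start_row_indices):
    """q, k, v: [seq_len, head_dim]; start_row_indices: [seq_len] ints.

    Key j is masked for query rows i >= start_row_indices[j],
    on top of the standard causal mask (j > i).
    """
    seq_len, head_dim = q.shape
    scores = q @ k.T / np.sqrt(head_dim)                 # [seq_len, seq_len]
    i = np.arange(seq_len)[:, None]                      # query rows
    j = np.arange(seq_len)[None, :]                      # key columns
    masked = (j > i) | (i >= start_row_indices[None, :])
    scores = np.where(masked, -np.inf, scores)
    scores -= scores.max(axis=-1, keepdims=True)         # stable softmax
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ v

# Two packed documents of lengths 3 and 2: keys of the first document
# are masked for query rows 3 and 4 (the second document).
seq_len, head_dim = 5, 8
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((seq_len, head_dim)) for _ in range(3))
start_rows = np.array([3, 3, 3, 5, 5])
out = sparse_causal_attention(q, k, v, start_rows)
print(out.shape)  # (5, 8)
```

Expressing the mask as one start index per key column, rather than a full [seq_len, seq_len] boolean matrix, is what lets a fused kernel skip masked blocks entirely instead of materializing the mask.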