
Dsv3 dev #10273


Open · wants to merge 83 commits into develop

Conversation

phlrain (Collaborator) commented Mar 26, 2025

Before submitting

  • Lint code. If there are lint issues, please format the code first.
# Install and register `pre-commit` in the project folder
pip install pre-commit && pre-commit install

# Run pre-commit on the modified code files individually
pre-commit run --file XXXX.py
  • Add test cases into the tests folder. If there are codecov issues, please add test cases first (a minimal sketch follows below).
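
A minimal sketch of the kind of test case expected under the tests folder, comparing a Paddle op against a NumPy reference; the op exercised here (plain SiLU) and the tolerances are placeholders, not the fused ops added in this PR.

# Hypothetical example of a test case for the tests folder: compare a Paddle
# op against a NumPy reference. The op tested here (silu) is a placeholder.
import unittest

import numpy as np
import paddle


class TestSiluAgainstNumpy(unittest.TestCase):
    def test_forward(self):
        x = np.random.rand(4, 16).astype("float32")
        expected = x / (1.0 + np.exp(-x))  # NumPy SiLU reference
        out = paddle.nn.functional.silu(paddle.to_tensor(x))
        np.testing.assert_allclose(out.numpy(), expected, rtol=1e-5, atol=1e-6)


if __name__ == "__main__":
    unittest.main()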

PR types

PR changes

Description

zhangbo9674 and others added 30 commits March 13, 2025 16:24
* add distributed run

* fix topo

* add distributed print
* Add fused_swiglu_act(transpose)_quant op to extern op in gpt-3

* Polishing code.

* remove unnecessary lines.

* remove unnecessary lines in cu

* Add padding function to fused_swiglu_act_quant_op
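
For context on what a fused swiglu_act_quant op computes, here is an unfused conceptual sketch: SwiGLU (SiLU of the gate half times the up half) followed by a simple absmax scaling toward an FP8-like range. The split order and the per-tensor quantization scheme are illustrative assumptions; the actual CUDA kernel in this PR may use a different (e.g. blockwise) scheme and emits a real FP8 tensor.

# Conceptual, unfused sketch of a "SwiGLU activation + quant" computation;
# the per-tensor absmax scaling below is only an illustrative assumption.
import paddle


def swiglu_act_quant_reference(x):
    # x: [..., 2 * d]; assume the first half is the gate, the second half the up projection.
    gate, up = paddle.split(x, 2, axis=-1)
    act = paddle.nn.functional.silu(gate) * up  # SwiGLU activation

    # Per-tensor absmax scaling toward the float8_e4m3 range (max value 448);
    # the fused op would also cast the scaled tensor to an FP8 dtype.
    scale = paddle.max(paddle.abs(act)) / 448.0
    quantized = act / scale
    return quantized, scale


out, scale = swiglu_act_quant_reference(paddle.randn([8, 256], dtype="float32"))
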
* [Distribution] Support DualPipeV for deepseek

* add

* fix

* add

* add
* commit for save

* revert gpu sum in add loss for mtp
* add flag DSV3_USE_FP8_GEMM

* fix
* add flag DSV3_USE_FP8_GEMM

* fix

* add fp8 comm

* fix bug

* fix bug

* fix bug

* fix bug

* fix bug

* fix

* fix bug

* fix bug

* replace index_select with gather (see the sketch after this commit message)

* close fuse_moe

* fix dequant bug
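
A minimal sketch of the index_select → gather replacement mentioned in the commit message above; for a 1-D integer index along an axis the two Paddle APIs select the same rows, so this only illustrates the equivalence, not the performance motivation behind the swap.

# The two calls below select the same rows; the commit swaps the former for
# the latter, and this sketch only shows that the results match.
import paddle

x = paddle.randn([6, 4])
idx = paddle.to_tensor([3, 0, 5], dtype="int64")

a = paddle.index_select(x, idx, axis=0)
b = paddle.gather(x, idx, axis=0)

assert bool(paddle.all(a == b))
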
* fix dequant bug

* fix bug

* fix bug

* fix
…reparation for GroupedGEMM using in training. (#10190)

* Add regroup_tokens op and optest, fix topk_to_multihot setup.py

* Add test file

* fix miscs

* Add tokens_unzip & weighted_zip in preparation for fp8-groupedgemm (sketched below)

* Added expert_idx output to tokens_unzip op.

* Fix prob datatype issue.

* Implemented double input&output regroup op.

* Further fix bf16 issues.

* Fix implicit bug.

* Change the unzip op to save more useful data

* Refactor and combine tokens_unzip_and_zip.

* Fixed concurrent semaphore bug.

* delete synchronize in zip op, and start adding guided unzip kernel.

* Add fp8 support for unzip op, but cannot fake a tensor for testing.

* Added guided_unzip op.

* Modified guided_unzip to satisfy real usage.

* polish code

* Fix typos and polish & tested code
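
A conceptual NumPy reference for the tokens_unzip / weighted_zip pattern introduced above for MoE grouped GEMM: "unzip" lays out one row per (token, selected expert) with rows grouped by expert so per-expert GEMMs can run on contiguous slices, and "weighted zip" scatters the expert outputs back to their owning tokens weighted by the routing probabilities. All names and shapes are illustrative assumptions; the real ops are fused CUDA kernels with FP8 paths.

# Conceptual NumPy reference for tokens_unzip / weighted_zip in MoE routing.
import numpy as np

num_tokens, hidden, num_experts, topk = 8, 16, 4, 2
tokens = np.random.rand(num_tokens, hidden).astype("float32")
expert_idx = np.random.randint(0, num_experts, size=(num_tokens, topk))
probs = np.random.rand(num_tokens, topk).astype("float32")

# "Unzip": one row per (token, selected expert), sorted so rows belonging to
# the same expert are contiguous for the per-expert (grouped) GEMM.
flat_expert = expert_idx.reshape(-1)                  # [num_tokens * topk]
flat_token = np.repeat(np.arange(num_tokens), topk)   # owning token of each row
order = np.argsort(flat_expert, kind="stable")
unzipped = tokens[flat_token[order]]                  # grouped-by-expert input

# Per-expert GEMMs would run here on contiguous slices of `unzipped`;
# the rows are passed through unchanged for illustration.
expert_out = unzipped

# "Weighted zip": scatter expert outputs back to their owning tokens,
# weighted by the routing probabilities.
out = np.zeros_like(tokens)
flat_probs = probs.reshape(-1)[order]
np.add.at(out, flat_token[order], expert_out * flat_probs[:, None])
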
* Added two fused ops, refactored some old swiglu code.

* delete unnecessary print.
* optimize atten impl

* optimize_attention_output_linear_fp8_memory
* Revert "add timer for deepep (#10211)"

This reverts commit a874a9b.

* revert timer
* Support overlap for fusion moe, fix memory leakage of fusion moe

* fix

* fix conflict
phlrain and others added 26 commits April 3, 2025 17:37
Co-authored-by: Pan Zhaowu <panzhaowu@baidu.com>
* First version, passed precision test.

* Add optest.

* restore setup.py

* Adding optional prob for spaq

* Optimized spaq in last-dim 8x cases.

* fix type

* Further improve performance with_prob

* remove unnecessary calculations.
* Add arbitrary expert_num and topk support for unzip and zip.

* Merge bfloat16 zip prob support for flex num_experts and topk
* merge

* Update m_grouped_gemm.py
* support fusion moe

* fix

* fix

* fix
* Add fused swiglu_probs_bwd op (reference math sketched after this commit message)

* add o2s as output

* fix 3d tensor input and add vectorize optimizations.

* fix tests of vec4

* Optimize reduce performance

* delete timeline

* Update setup_fp8.py

fix arch

* Fix multi-dimension issue.
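
As background for the fused swiglu_probs_bwd op, a plain reference of the gradients such an op presumably computes, assuming a forward pass of out = prob * SiLU(gate) * up with prob broadcasting over the hidden dimension; the exact interface and the o2s output of the real op are not reproduced here.

# Reference gradients for out = prob * silu(gate) * up; names, shapes and the
# exact forward definition are assumptions for illustration.
import numpy as np

def swiglu_probs_bwd_reference(grad_out, gate, up, prob):
    # prob is assumed to have shape [..., 1] and broadcast over the hidden dim.
    s = 1.0 / (1.0 + np.exp(-gate))            # sigmoid(gate)
    silu = gate * s                            # SiLU(gate)
    d_up = grad_out * prob * silu
    d_gate = grad_out * prob * up * s * (1.0 + gate * (1.0 - s))  # d SiLU / d gate
    d_prob = np.sum(grad_out * silu * up, axis=-1, keepdims=True)
    return d_gate, d_up, d_prob
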
* control stack use to prevent overflow

* fix
* Disable fast math for fused precision issue.

* Disable TDU fm.
* refine fp8 code

* fix bug

* fix bug

* refine mem

* fix

* refine mem

* add fuse pass

* add fuse config

* refine tma

* add cinn decorate

* refine
CLAassistant commented Jun 12, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
10 out of 11 committers have signed the CLA.

✅ umiswing
✅ zhangbo9674
✅ phlrain
✅ chen2016013
✅ risemeup1
✅ zhangyuqin1998
✅ sneaxiy
✅ lshpku
✅ A-nnonymous
✅ ForFishes
❌ Zhaowu Pan


Zhaowu Pan does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
