Commit d8adc2b

[LLM] disable part of MC2 in lora (#8505)
1 parent: c1cfe63

File tree: 1 file changed (+4, -2)

paddlenlp/peft/lora/lora_layers.py

Lines changed: 4 additions & 2 deletions
@@ -409,7 +409,8 @@ def forward(self, x: paddle.Tensor):

         if not self.merged:
             input_mp = self.lora_dropout(input_mp)
-            if MC2RowSeqParallelCoreLinear is None:
+            # TODO(@gexiao): temporary workaround for deterministic calculation
+            if True or MC2RowSeqParallelCoreLinear is None:
                 input_mp = input_mp @ self.lora_A
                 input_mp = ReduceScatterOp.apply(input_mp)
             else:
@@ -651,7 +652,8 @@ def forward(self, x: paddle.Tensor):

         if not self.merged:
             input_a = self.lora_dropout(x) @ self.lora_A
-            if MC2ColumnSeqParallelCoreLinear is None:
+            # TODO(@gexiao): temporary workaround for deterministic calculation
+            if True or MC2ColumnSeqParallelCoreLinear is None:
                 input_a = AllGatherOp.apply(input_a)
                 delta_mp = (input_a @ self.lora_B) * self.scaling
             else:
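In both hunks the change is the same pattern: prefixing the condition with `True or` short-circuits the check, so the non-fused path (plain matmuls plus the collective op) always runs and the MC2 fused linear is never invoked, even when it is available. Below is a minimal, self-contained sketch of that pattern. The `matmul` helper, `lora_delta` function, and the sample shapes are hypothetical illustrations, not PaddleNLP code; the distributed collectives are omitted.

```python
def matmul(x, w):
    """Naive matrix multiply for 2-D lists (illustration only)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*w)]
            for row in x]

def lora_delta(x, lora_A, lora_B, scaling, fused_linear=None):
    """Hypothetical stand-in for the patched branch in lora_layers.py.

    `True or fused_linear is None` always evaluates to True, so the
    fallback path runs regardless of whether a fused op was provided.
    """
    if True or fused_linear is None:
        input_a = matmul(x, lora_A)              # x @ lora_A
        delta = matmul(input_a, lora_B)          # (x @ lora_A) @ lora_B
        return [[v * scaling for v in row] for row in delta]
    else:
        return fused_linear(x)                   # unreachable after the patch

# Tiny worked example: x is 1x2, lora_A is 2x1, lora_B is 1x2.
x = [[1.0, 2.0]]
A = [[1.0], [1.0]]
B = [[2.0, 0.0]]
print(lora_delta(x, A, B, scaling=0.5))  # -> [[3.0, 0.0]]
```

The likely motivation, as the TODO comment states, is deterministic calculation: forcing the unfused matmul-plus-collective path trades the fused kernel's performance for reproducible results, and the `True or` form keeps the original condition in place so the workaround is easy to revert.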
