
Commit d4f09c0 ("fix"), 1 parent e1c1e85

1 file changed: 0 additions, 1 deletion

paddlenlp/transformers/gemma/modeling.py

@@ -459,7 +459,6 @@ def forward(self, x):
 class GemmaAttention(nn.Layer):
     """Multi-headed attention from 'Attention Is All You Need' paper"""
 
-    # Ignore copy
     def __init__(self, config: GemmaConfig, layerwise_recompute: bool = False):
         super().__init__()
 