
Commit 895a816

[benchmark] close skip_memory_metrics for ips (#7732)

* update mem from B to MB
* fix ft
* fix pretrain
* Revert "update mem from B to MB" (reverts commit 044a88c)

1 parent: f720794

3 files changed: 0 additions, 3 deletions

tests/test_tipc/dygraph/ft/benchmark_common/run_benchmark.sh (0 additions, 1 deletion)

@@ -111,7 +111,6 @@ function _train(){
     --lora ${lora} \
     --prefix_tuning ${prefix_tuning} \
     --benchmark 1 \
-    --skip_memory_metrics 0 \
     --intokens 1 \
     --device gpu"

tests/test_tipc/dygraph/hybrid_parallelism/ce_gpt/benchmark_common/run_benchmark.sh (0 additions, 1 deletion)

@@ -130,7 +130,6 @@ function _train(){
     --max_steps ${max_iter}\
     --save_steps 5000\
     --device gpu\
-    --skip_memory_metrics 0 \
     --warmup_ratio 0.01\
     --scale_loss 32768\
     --per_device_train_batch_size ${micro_batch_size}\

tests/test_tipc/dygraph/hybrid_parallelism/llama/benchmark_common/run_benchmark.sh (0 additions, 1 deletion)

@@ -143,7 +143,6 @@ function _train(){
     --tensor_parallel_config ${tensor_parallel_config} ${pipeline_parallel_config_args} \
     --recompute ${recompute} \
     --recompute_use_reentrant ${recompute_use_reentrant} \
-    --skip_memory_metrics 0 \
     --data_cache ./data_cache"

    if [ ${PADDLE_TRAINER_ID} ]
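All three hunks make the same change: dropping `--skip_memory_metrics 0` lets the trainer fall back to its default of skipping memory measurement, which would otherwise add per-step bookkeeping overhead and depress the reported ips. A minimal sketch of the resulting pattern (the script name and flag semantics here are assumptions based on the diff, not taken from the repo):

```shell
# Hypothetical sketch: build a benchmark launch command without the
# memory-metrics flag. Passing --skip_memory_metrics 0 would have told
# the trainer to record memory stats each step, at a throughput cost.
train_cmd="python run_pretrain.py \
    --benchmark 1 \
    --device gpu"

# Confirm the flag is absent, so the trainer default (skip) applies.
if ! printf '%s' "${train_cmd}" | grep -q "skip_memory_metrics"; then
    echo "memory metrics skipped"
fi
```

In this style of benchmark script the whole command is assembled as a string and evaluated later, so removing one `--flag value \` continuation line is all that is needed to change the trainer's behavior.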
