While doing P-Tuning v2 fine-tuning, I found when running the demo that chatglm-6b can only be trained on a single GPU; multi-GPU training fails with the error below. If I switch the model to chatglm-6b-int8, it runs fine. Has anyone else hit this problem?
```bash
CUDA_VISIBLE_DEVICES=0,1 python3 main.py \
    --do_train \
    --train_file AdvertiseGen/train.json \
    --validation_file AdvertiseGen/dev.json \
    --prompt_column content \
    --response_column summary \
    --overwrite_cache \
    --model_name_or_path /home/ChatGLM-6B/model/chatglm-6b \
    --output_dir output/adgen-chatglm-6b-pt-$PRE_SEQ_LEN-$LR \
    --overwrite_output_dir \
    --max_source_length 64 \
    --max_target_length 64 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --predict_with_generate \
    --max_steps 3000 \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate $LR \
    --pre_seq_len $PRE_SEQ_LEN
```
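Note that `$PRE_SEQ_LEN` and `$LR` are expanded by the shell, so they must be set before this command runs; otherwise `--pre_seq_len` and `--learning_rate` receive empty values. A minimal sketch, assuming the defaults from the repository's `ptuning/train.sh` (adjust for your own run):

```bash
# Assumed defaults taken from ptuning/train.sh; not part of the original report
PRE_SEQ_LEN=128
LR=2e-2
```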
```
Traceback (most recent call last):
  File "/home/ChatGLM-6B/ptuning/main.py", line 433, in <module>
    main()
  File "/home/ChatGLM-6B/ptuning/main.py", line 372, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/ChatGLM-6B/ptuning/trainer.py", line 1635, in train
    return inner_training_loop(
  File "/home/ChatGLM-6B/ptuning/trainer.py", line 1904, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/home/ChatGLM-6B/ptuning/trainer.py", line 2665, in training_step
    loss.backward()
  File "/home/anaconda3/envs/vicuna/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/home/anaconda3/envs/vicuna/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 23.68 GiB total capacity; 22.49 GiB already allocated; 29.31 MiB free; 22.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
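For context: with `CUDA_VISIBLE_DEVICES=0,1` and a plain `python3` launch, the Hugging Face Trainer runs a single process and wraps the model in `torch.nn.DataParallel`, which keeps a full replica plus the gathered outputs on GPU 0, so adding a second card does not reduce GPU 0's memory footprint much. A minimal mitigation sketch, following the allocator hint in the error message itself and the `--quantization_bit` flag exposed by `ptuning/main.py`; the concrete values are assumptions to tune for your hardware:

```bash
# Reduce allocator fragmentation, as the OOM message suggests
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Then re-run the same main.py command, trading batch size for accumulation
# steps (same effective batch of 128) and optionally quantizing the frozen
# weights, e.g.:
#   --per_device_train_batch_size 1 \
#   --gradient_accumulation_steps 128 \
#   --quantization_bit 4
```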
## Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :