train.sh is as follows:

PRE_SEQ_LEN=128
LR=2e-2

CUDA_VISIBLE_DEVICES=2,3 python3 main.py \
    --do_train \
    --train_file data/PanguData/train.json \
    --validation_file data/PanguData/dev.json \
    --prompt_column content \
    --response_column summary \
    --overwrite_cache \
    --model_name_or_path /home/llm_files/chatglm-6b-v1_1 \
    --output_dir output/pangu-chatglm-6b-pt-$PRE_SEQ_LEN-$LR \
    --overwrite_output_dir \
    --max_source_length 128 \
    --max_target_length 128 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --predict_with_generate \
    --max_steps 200 \
    --logging_steps 10 \
    --save_steps 10 \
    --learning_rate $LR \
    --pre_seq_len $PRE_SEQ_LEN
With CUDA_VISIBLE_DEVICES=2,3 the run goes out of memory, but with CUDA_VISIBLE_DEVICES=2 alone it trains fine. Any help would be greatly appreciated!
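One variant I have not tried yet (an assumption on my part, not something taken from the repo): launching one process per visible GPU with torchrun instead of a single python3 process, keeping main.py's arguments exactly as in train.sh above. A minimal sketch:

# Sketch only, assuming main.py is a standard Hugging Face Trainer script that
# picks up the distributed environment variables set by torchrun. Identical to
# train.sh except for the launcher line.
PRE_SEQ_LEN=128
LR=2e-2
CUDA_VISIBLE_DEVICES=2,3 torchrun --nproc_per_node=2 main.py \
    --do_train \
    --train_file data/PanguData/train.json \
    --validation_file data/PanguData/dev.json \
    --prompt_column content \
    --response_column summary \
    --overwrite_cache \
    --model_name_or_path /home/llm_files/chatglm-6b-v1_1 \
    --output_dir output/pangu-chatglm-6b-pt-$PRE_SEQ_LEN-$LR \
    --overwrite_output_dir \
    --max_source_length 128 \
    --max_target_length 128 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --predict_with_generate \
    --max_steps 200 \
    --logging_steps 10 \
    --save_steps 10 \
    --learning_rate $LR \
    --pre_seq_len $PRE_SEQ_LEN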
The error is as follows:

RuntimeError: HIP out of memory. Tried to allocate 128.00 MiB (GPU 0; 31.98 GiB total capacity; 31.60 GiB already allocated; 0 bytes free; 31.64 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
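The error text itself suggests trying max_split_size_mb. A minimal sketch of applying that hint before relaunching (the 128 MiB value is only an illustrative guess, not from the repo; ROCm builds read PYTORCH_HIP_ALLOC_CONF, the CUDA analogue is PYTORCH_CUDA_ALLOC_CONF):

# Apply the allocator hint from the error message, then rerun training.
# The split size value here is an assumption, not a recommended setting.
export PYTORCH_HIP_ALLOC_CONF=max_split_size_mb:128
bash train.sh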
Environment
- OS:
- Python:3.8
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :