[THUDM/ChatGLM-6B] Training on 7 million samples: the loss does not drop and keeps oscillating around 3.x. Why? Also, after training the model does not answer using my data. What is going on?

2024-05-20

I am training on 7 million samples and the loss does not drop; it keeps oscillating around 3.x. What is going on?

数据格式: {"prompt": "心率失常二度房室阻滞脑梗糖尿病心血管狭窄。遗传高血压,心脏病,糖尿病17年,2011年室上速手术后,2015年出现二度二型传导阻滞,2018 年脑梗。", "summary": "是否有症状?", "history": []} {"prompt": "现在有症状,每天有几次跳的跳的停一下,有时候一天有7—8次,10来次,有时候几天出现几次,不规律。", "summary": "有没有头晕,眼前发黑,乏力等症状。", "history": [["心率失常二度房室阻滞脑梗糖尿病心血管狭窄。遗传高血压,心脏病,糖尿病17年,2011年室上速手术后,2015年出现二度二型传导阻滞,2018 年脑梗。", "是否有症状?"]]} {"prompt": "心跳不规则以后有发力,偶尔有头晕,没有眼前发黑症状。", "summary": "可网上预约挂号,把当地检查资料带过来,再做动态心电图看看。", "history": [["心率失常二度房室阻滞脑梗糖尿病心血管狭窄。遗传高血压,心脏病,糖尿病17年,2011年室上速手术后,2015年出现二度二型传导阻滞,2018 年脑梗。", "是否有症状?"], ["现在有症状,每天有几次跳的跳的停一下,有时候一天有7—8次,10来次,有时候几天出现几次,不规律。", "有没有头晕,眼前发黑,乏力等症状。"]]} {"prompt": "最近出现症状比较频繁,所以在当地医院住院检查,想着还是北京心血管还是最权威的,想问问您这种情况怎么办。现在去北京用隔离吗?挂号是挂阜外医院心内科吗?", "summary": "北京现在住不了院,可以门诊查,我下周不出门诊,你可以门诊挂谁的号都可以,先开单子去检查,建议查心脏核素显像,和动态心电图。", "history": [["心率失常二度房室阻滞脑梗糖尿病心血管狭窄。遗传高血压,心脏病,糖尿病17年,2011年室上速手术后,2015年出现二度二型传导阻滞,2018 年脑梗。", "是否有症状?"], ["现在有症状,每天有几次跳的跳的停一下,有时候一天有7—8次,10来次,有时候几天出现几次,不规律。", "有没有头晕,眼前发黑,乏力等症状。"], ["心跳不规则以后有发力,偶尔有头晕,没有眼前发黑症状。", "可网上预约挂号,把当地检查资料带过来,再做动态心电图看看。"]]} {"prompt": "我能网上预约您的号吗?关键我们是张家口的,怕一到北京被隔离了,另外,您看我给你发过去的病历,动态心电图,CT片子严重吗?", "summary": "你的动态心电图是二度房室传导阻滞,冠脉CT不很清楚,但是也没有看到严重问题。", "history": [["心率失常二度房室阻滞脑梗糖尿病心血管狭窄。遗传高血压,心脏病,糖尿病17年,2011年室上速手术后,2015年出现二度二型传导阻滞,2018 年脑梗。", "是否有症状?"], ["现在有症状,每天有几次跳的跳的停一下,有时候一天有7—8次,10来次,有时候几天出现几次,不规律。", "有没有头晕,眼前发黑,乏力等症状。"], ["心跳不规则以后有发力,偶尔有头晕,没有眼前发黑症状。", "可网上预约挂号,把当地检查资料带过来,再做动态心电图看看。"], ["最近出现症状比较频繁,所以在当地医院住院检查,想着还是北京心血管还是最权威的,想问问您这种情况怎么办。现在去北京用隔离吗?挂号是挂阜外医院心内科吗?", "北京现在住不了院,可以门诊查,我下周不出门诊,你可以门诊挂谁的号都可以,先开单子去检查,建议查心脏核素显像,和动态心电图。"]]} {"prompt": "这边医院256心脏CT说是血管堵了60%,但是病人总出现心跳早博,有时会伴有胸闷的症状。", "summary": "60%不严重,心跳是有问题,需要多次查动态心电图看看是否严重。", "history": [["心率失常二度房室阻滞脑梗糖尿病心血管狭窄。遗传高血压,心脏病,糖尿病17年,2011年室上速手术后,2015年出现二度二型传导阻滞,2018 年脑梗。", "是否有症状?"], ["现在有症状,每天有几次跳的跳的停一下,有时候一天有7—8次,10来次,有时候几天出现几次,不规律。", "有没有头晕,眼前发黑,乏力等症状。"], ["心跳不规则以后有发力,偶尔有头晕,没有眼前发黑症状。", "可网上预约挂号,把当地检查资料带过来,再做动态心电图看看。"], ["最近出现症状比较频繁,所以在当地医院住院检查,想着还是北京心血管还是最权威的,想问问您这种情况怎么办。现在去北京用隔离吗?挂号是挂阜外医院心内科吗?", "北京现在住不了院,可以门诊查,我下周不出门诊,你可以门诊挂谁的号都可以,先开单子去检查,建议查心脏核素显像,和动态心电图。"], ["我能网上预约您的号吗?关键我们是张家口的,怕一到北京被隔离了,另外,您看我给你发过去的病历,动态心电图,CT片子严重吗?", "你的动态心电图是二度房室传导阻滞,冠脉CT不很清楚,但是也没有看到严重问题。"]]}

Run training:

PRE_SEQ_LEN=128
LR=1e-2

CUDA_VISIBLE_DEVICES=0 python main.py \
    --do_train \
    --train_file chat/train.json \
    --validation_file chat/dev.json \
    --prompt_column prompt \
    --response_column summary \
    --history_column history \
    --overwrite_cache \
    --model_name_or_path THUDM/chatglm-6b \
    --output_dir output \
    --overwrite_output_dir \
    --max_source_length 255 \
    --max_target_length 2500 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --predict_with_generate \
    --max_steps 400 \
    --logging_steps 10 \
    --save_steps 30 \
    --learning_rate $LR \
    --pre_seq_len $PRE_SEQ_LEN \
    --quantization_bit 4
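For reference, with these flags one optimizer step consumes per_device_train_batch_size × gradient_accumulation_steps = 16 samples, so 400 steps only ever see 6,400 of the roughly 5.3 million examples the tokenizer pass reports in the log below. A back-of-the-envelope check:

```python
# How much of the dataset do max_steps actually cover with these settings?
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
max_steps = 400
dataset_size = 5_321_511  # from the "Running tokenizer on train dataset" log line

samples_per_step = per_device_train_batch_size * gradient_accumulation_steps
samples_seen = max_steps * samples_per_step
print("samples seen:", samples_seen)                                  # 6400
print(f"fraction of one epoch: {samples_seen / dataset_size:.4%}")    # ~0.12%
print("steps for one epoch:", dataset_size // samples_per_step)       # ~332,594
```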

Here is the console output:

05/12/2023 02:30:07 - WARNING - main - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False 05/12/2023 02:30:07 - INFO - main - Training/evaluation parameters Seq2SeqTrainingArguments( _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, do_eval=False, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, generation_max_length=None, generation_num_beams=None, gradient_accumulation_steps=16, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.01, length_column_name=length, load_best_model_at_end=False, local_rank=-1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/runs/May12_02-30-06_localhost.localdomain, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=10, logging_strategy=steps, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=400, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=3.0, optim=adamw_hf, optim_args=None, output_dir=output, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=1, per_device_train_batch_size=1, predict_with_generate=True, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=True, report_to=[], resume_from_checkpoint=None, run_name=output, save_on_each_node=False, save_steps=30, save_strategy=steps, save_total_limit=None, seed=42, sharded_ddp=[], skip_memory_metrics=True, sortish_sampler=False, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0, xpu_backend=None, ) Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-6c9b46862f5c4641/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e... 
Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 4355.46it/s] Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 799.07it/s] Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-6c9b46862f5c4641/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e. Subsequent calls will reuse this data. 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 10.82it/s] [INFO|configuration_utils.py:666] 2023-05-12 02:32:53,416 >> loading configuration file THUDM/chatglm-6b/config.json [WARNING|configuration_auto.py:905] 2023-05-12 02:32:53,417 >> Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision. [INFO|configuration_utils.py:666] 2023-05-12 02:32:53,491 >> loading configuration file THUDM/chatglm-6b/config.json [INFO|configuration_utils.py:720] 2023-05-12 02:32:53,492 >> Model config ChatGLMConfig { "_name_or_path": "THUDM/chatglm-6b", "architectures": [ "ChatGLMModel" ], "auto_map": { "AutoConfig": "configuration_chatglm.ChatGLMConfig", "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration", "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration" }, "bos_token_id": 130004, "eos_token_id": 130005, "gmask_token_id": 130001, "hidden_size": 4096, "inner_hidden_size": 16384, "layernorm_epsilon": 1e-05, "mask_token_id": 130000, "max_sequence_length": 2048, "model_type": "chatglm", "num_attention_heads": 32, "num_layers": 28, "pad_token_id": 3, "position_encoding_2d": true, "pre_seq_len": null, "prefix_projection": false, "quantization_bit": 0, "torch_dtype": "float16", "transformers_version": "4.27.1", "use_cache": true, "vocab_size": 130528 }

[WARNING|tokenization_auto.py:652] 2023-05-12 02:32:53,493 >> Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision. [INFO|tokenization_utils_base.py:1800] 2023-05-12 02:32:53,580 >> loading file ice_text.model [INFO|tokenization_utils_base.py:1800] 2023-05-12 02:32:53,580 >> loading file added_tokens.json [INFO|tokenization_utils_base.py:1800] 2023-05-12 02:32:53,580 >> loading file special_tokens_map.json [INFO|tokenization_utils_base.py:1800] 2023-05-12 02:32:53,581 >> loading file tokenizer_config.json [WARNING|auto_factory.py:456] 2023-05-12 02:32:53,877 >> Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision. [INFO|modeling_utils.py:2400] 2023-05-12 02:32:54,010 >> loading weights file THUDM/chatglm-6b/pytorch_model.bin.index.json [INFO|configuration_utils.py:575] 2023-05-12 02:32:54,011 >> Generate config GenerationConfig { "_from_model_config": true, "bos_token_id": 130004, "eos_token_id": 130005, "pad_token_id": 3, "transformers_version": "4.27.1" }

Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:59<00:00, 7.43s/it] [INFO|modeling_utils.py:3032] 2023-05-12 02:33:54,963 >> All model checkpoint weights were used when initializing ChatGLMForConditionalGeneration.

[WARNING|modeling_utils.py:3034] 2023-05-12 02:33:54,980 >> Some weights of ChatGLMForConditionalGeneration were not initialized from the model checkpoint at THUDM/chatglm-6b and are newly initialized: ['transformer.prefix_encoder.embedding.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [INFO|modeling_utils.py:2690] 2023-05-12 02:33:55,057 >> Generation config file not found, using a generation config created from the model config. Quantized to 4 bit Running tokenizer on train dataset: 0%|▍ | 9000/5321511 [01:02<10:10:42, 144.98 examples/s][WARNING|tokenization_utils_base.py:3561] 2023-05-12 02:37:21,058 >> Token indices sequence length is longer than the specified maximum sequence length for this model (2431 > 2048). Running this sequence through the model will result in indexing errors input_ids [53, 6945, 5, 8, 42, 4, 64286, 12, 87904, 75004, 66500, 6, 98877, 79112, 63841, 65505, 93556, 64600, 64879, 66119, 6, 64152, 64310, 64553, 66431, 64605, 63848, 66119, 63823, 4, 67342, 12, 130001, 130004, 5, 64213, 87527, 6, 63873, 64925, 63881, 70738, 72373, 65219, 63823, 130005, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] inputs [Round 0] 问:强制性脊柱炎,晚上睡觉翻身时腰骶骨区域疼痛,其他身体任何部位均不疼痛。 答: 应该没有问题,但最好把图像上传看看。 labelids [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 130004, 5, 64213, 87527, 6, 63873, 64925, 63881, 70738, 72373, 65219, 63823, 130005, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100] labels <image-100> 应该没有问题,但最好把图像上传看看。 /opt/conda/lib/python3.10/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning warnings.warn( 0%| | 0/400 [00:00<?, ?it/s]05/12/2023 13:42:53 - WARNING - transformers_modules.chatglm-6b.modeling_chatglm - use_cache=True is incompatible with gradient checkpointing. Setting use_cache=False... {'loss': 5.523, 'learning_rate': 0.00975, 'epoch': 0.0}
{'loss': 4.1672, 'learning_rate': 0.0095, 'epoch': 0.0}
{'loss': 3.6686, 'learning_rate': 0.009250000000000001, 'epoch': 0.0}
8%|█████████████████████▍ | 30/400 [30:02<6:08:17, 59.72s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 14:12:55,131 >> Configuration saved in output/checkpoint-30/config.json [INFO|configuration_utils.py:362] 2023-05-12 14:12:55,136 >> Configuration saved in output/checkpoint-30/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 14:12:55,429 >> Model weights saved in output/checkpoint-30/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 14:12:55,431 >> tokenizer config file saved in output/checkpoint-30/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 14:12:55,431 >> Special tokens file saved in output/checkpoint-30/special_tokens_map.json {'loss': 3.5784, 'learning_rate': 0.009000000000000001, 'epoch': 0.0}
{'loss': 3.4801, 'learning_rate': 0.00875, 'epoch': 0.0}
{'loss': 3.7665, 'learning_rate': 0.0085, 'epoch': 0.0}
15%|██████████████████████████████████████████▍ | 60/400 [1:00:00<5:38:21, 59.71s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 14:42:52,341 >> Configuration saved in output/checkpoint-60/config.json [INFO|configuration_utils.py:362] 2023-05-12 14:42:52,344 >> Configuration saved in output/checkpoint-60/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 14:42:52,454 >> Model weights saved in output/checkpoint-60/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 14:42:52,455 >> tokenizer config file saved in output/checkpoint-60/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 14:42:52,455 >> Special tokens file saved in output/checkpoint-60/special_tokens_map.json {'loss': 3.4308, 'learning_rate': 0.00825, 'epoch': 0.0}
{'loss': 3.412, 'learning_rate': 0.008, 'epoch': 0.0}
{'loss': 3.5149, 'learning_rate': 0.007750000000000001, 'epoch': 0.0}
22%|███████████████████████████████████████████████████████████████▋ | 90/400 [1:29:59<5:08:28, 59.71s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 15:12:51,419 >> Configuration saved in output/checkpoint-90/config.json [INFO|configuration_utils.py:362] 2023-05-12 15:12:51,422 >> Configuration saved in output/checkpoint-90/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 15:12:51,530 >> Model weights saved in output/checkpoint-90/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 15:12:51,532 >> tokenizer config file saved in output/checkpoint-90/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 15:12:51,532 >> Special tokens file saved in output/checkpoint-90/special_tokens_map.json {'loss': 3.763, 'learning_rate': 0.0075, 'epoch': 0.0}
{'loss': 3.5365, 'learning_rate': 0.0072499999999999995, 'epoch': 0.0}
{'loss': 3.5289, 'learning_rate': 0.006999999999999999, 'epoch': 0.0}
30%|████████████████████████████████████████████████████████████████████████████████████▌ | 120/400 [1:59:51<4:38:38, 59.71s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 15:42:43,383 >> Configuration saved in output/checkpoint-120/config.json [INFO|configuration_utils.py:362] 2023-05-12 15:42:43,386 >> Configuration saved in output/checkpoint-120/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 15:42:43,494 >> Model weights saved in output/checkpoint-120/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 15:42:43,495 >> tokenizer config file saved in output/checkpoint-120/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 15:42:43,495 >> Special tokens file saved in output/checkpoint-120/special_tokens_map.json {'loss': 3.306, 'learning_rate': 0.006750000000000001, 'epoch': 0.0}
{'loss': 3.4882, 'learning_rate': 0.006500000000000001, 'epoch': 0.0}
{'loss': 3.5012, 'learning_rate': 0.00625, 'epoch': 0.0}
38%|█████████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 150/400 [2:29:46<4:08:40, 59.68s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 16:12:38,227 >> Configuration saved in output/checkpoint-150/config.json [INFO|configuration_utils.py:362] 2023-05-12 16:12:38,230 >> Configuration saved in output/checkpoint-150/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 16:12:38,339 >> Model weights saved in output/checkpoint-150/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 16:12:38,340 >> tokenizer config file saved in output/checkpoint-150/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 16:12:38,340 >> Special tokens file saved in output/checkpoint-150/special_tokens_map.json {'loss': 3.4522, 'learning_rate': 0.006, 'epoch': 0.0}
{'loss': 3.4131, 'learning_rate': 0.00575, 'epoch': 0.0}
{'loss': 3.4352, 'learning_rate': 0.0055000000000000005, 'epoch': 0.0}
45%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 180/400 [2:59:40<3:38:47, 59.67s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 16:42:32,948 >> Configuration saved in output/checkpoint-180/config.json [INFO|configuration_utils.py:362] 2023-05-12 16:42:32,951 >> Configuration saved in output/checkpoint-180/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 16:42:33,058 >> Model weights saved in output/checkpoint-180/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 16:42:33,059 >> tokenizer config file saved in output/checkpoint-180/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 16:42:33,059 >> Special tokens file saved in output/checkpoint-180/special_tokens_map.json {'loss': 3.4539, 'learning_rate': 0.00525, 'epoch': 0.0}
{'loss': 3.4935, 'learning_rate': 0.005, 'epoch': 0.0}
{'loss': 3.5829, 'learning_rate': 0.00475, 'epoch': 0.0}
52%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 210/400 [3:29:37<3:08:58, 59.68s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 17:12:29,360 >> Configuration saved in output/checkpoint-210/config.json [INFO|configuration_utils.py:362] 2023-05-12 17:12:29,363 >> Configuration saved in output/checkpoint-210/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 17:12:29,473 >> Model weights saved in output/checkpoint-210/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 17:12:29,474 >> tokenizer config file saved in output/checkpoint-210/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 17:12:29,474 >> Special tokens file saved in output/checkpoint-210/special_tokens_map.json {'loss': 3.5376, 'learning_rate': 0.0045000000000000005, 'epoch': 0.0}
{'loss': 3.4128, 'learning_rate': 0.00425, 'epoch': 0.0}
{'loss': 3.4999, 'learning_rate': 0.004, 'epoch': 0.0}
60%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 240/400 [3:59:27<2:39:04, 59.65s/it]Saving PrefixEncoder [INFO|configuration_utils.py:457] 2023-05-12 17:42:19,210 >> Configuration saved in output/checkpoint-240/config.json [INFO|configuration_utils.py:362] 2023-05-12 17:42:19,213 >> Configuration saved in output/checkpoint-240/generation_config.json [INFO|modeling_utils.py:1762] 2023-05-12 17:42:19,323 >> Model weights saved in output/checkpoint-240/pytorch_model.bin [INFO|tokenization_utils_base.py:2163] 2023-05-12 17:42:19,324 >> tokenizer config file saved in output/checkpoint-240/tokenizer_config.json [INFO|tokenization_utils_base.py:2170] 2023-05-12 17:42:19,324 >> Special tokens file saved in output/checkpoint-240/special_tokens_map.json 62%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▍

Loading the checkpoint after training (I found it does not answer using my data; I don't know why):

# -*- coding: UTF-8 -*-
import os

import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

CHECKPOINT_PATH = "./output/checkpoint-2010"

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# Load the base model with the same pre_seq_len used for P-tuning v2 training.
config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", config=config, trust_remote_code=True).cuda()

# The checkpoint only contains the prefix encoder; strip the key prefix and load it.
prefix_state_dict = torch.load(os.path.join(CHECKPOINT_PATH, "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    if k.startswith("transformer.prefix_encoder."):
        new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)

print(f"Quantized to 4 bit")
model = model.quantize(4)
model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()

response, history = model.chat(tokenizer, "就是发烧,昨天晚上39度 刘大夫,病人年龄是九岁,这上边改不了", history=[])
print("ChatGLM-6B:\n", response)

In the end I found it does not answer using my data, and I don't know what is going on.

Answers

Hi, have you managed to solve this?

No. If it had been solved, I would have closed this long ago and described the solution below.

mark

Same question here.

I think max_steps is far too small. It counts steps, not epochs, so you never even get through one pass over the data.

Has anyone seen the loss drop to 0 before even one epoch finishes, for example with 10k samples?

Probably not enough training epochs; you need to train longer.

@hjing100 I also hit loss = 0, and inference then raised RuntimeError: probability tensor contains either inf, nan or element < 0. Some issues say it is caused by a mismatch between the model code and the weight versions. Did you manage to solve it?
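If the problem really is a mismatch between the custom modeling code and the weights, one hedge is to pin both to the same repository revision, as the loader warnings in the log above also suggest. A sketch; "main" is only an example, and you would pin a specific tag or commit you trust:

```python
from transformers import AutoModel, AutoTokenizer

# Pin code and weights to one repo revision so the custom modeling code matches
# the checkpoint. "main" is just a placeholder choice here.
REVISION = "main"

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, revision=REVISION)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, revision=REVISION).half().cuda()
```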

@gg22mm Are you training from scratch here rather than fine-tuning? How much hardware did you use? I also have a lot of data on hand but haven't dared to try.

@gg22mm By the way, extrapolating from my LoRA tests to your case, the relationship between batch_size and max_steps looks wrong. Your batch size is 1, so one epoch over 7M samples should be about 7M steps. With my 600 samples at batch size 3, 200 steps is one epoch. So get more GPU memory, increase the batch size, and reduce the step count; you should at least complete one full epoch. I'm not sure my reasoning is correct, just for reference.