[WARNING|tokenization_auto.py:652] 2023-04-03 18:03:23,678 >> Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
[INFO|tokenization_utils_base.py:1800] 2023-04-03 18:03:23,849 >> loading file ice_text.model
[INFO|tokenization_utils_base.py:1800] 2023-04-03 18:03:23,849 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:1800] 2023-04-03 18:03:23,849 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:1800] 2023-04-03 18:03:23,849 >> loading file tokenizer_config.json
[WARNING|auto_factory.py:456] 2023-04-03 18:03:24,851 >> Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
[INFO|modeling_utils.py:2400] 2023-04-03 18:03:24,959 >> loading weights file ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-1000/pytorch_model.bin
[INFO|configuration_utils.py:575] 2023-04-03 18:03:35,970 >> Generate config GenerationConfig {
"_from_model_config": true,
"bos_token_id": 150004,
"eos_token_id": 150005,
"pad_token_id": 20003,
"transformers_version": "4.27.1"
}
[INFO|modeling_utils.py:3032] 2023-04-03 18:05:00,044 >> All model checkpoint weights were used when initializing ChatGLMForConditionalGeneration.
[INFO|modeling_utils.py:3040] 2023-04-03 18:05:00,044 >> All the weights of ChatGLMForConditionalGeneration were initialized from the model checkpoint at ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-1000. If your task is similar to the task the model of the checkpoint was trained on, you can already use ChatGLMForConditionalGeneration for predictions without further training.
[INFO|configuration_utils.py:535] 2023-04-03 18:05:00,056 >> loading configuration file ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-1000/generation_config.json
[INFO|configuration_utils.py:575] 2023-04-03 18:05:00,057 >> Generate config GenerationConfig {
  "_from_model_config": true,
  "bos_token_id": 150004,
  "eos_token_id": 150005,
  "pad_token_id": 20003,
  "transformers_version": "4.27.1"
}
Quantized to 4 bit
/home/weiqiang/.local/lib/python3.8/site-packages/dill/_dill.py:1705: PicklingWarning: Cannot locate reference to <class 'google.protobuf.pyext._message.CMessage'>.
warnings.warn('Cannot locate reference to %r.' % (obj,), PicklingWarning)
/home/weiqiang/.local/lib/python3.8/site-packages/dill/_dill.py:1707: PicklingWarning: Cannot pickle <class 'google.protobuf.pyext._message.CMessage'>: google.protobuf.pyext._message.CMessage has recursive self-references that trigger a RecursionError.
warnings.warn('Cannot pickle %r: %s.%s has recursive self-references that trigger a RecursionError.' % (obj, obj.__module__, obj_name), PicklingWarning)
04/03/2023 18:05:00 - WARNING - datasets.fingerprint - Parameter 'function'=<function main.
Expected behavior: the evaluation should run to completion instead of being interrupted as shown above.
Steps to reproduce: 1. Windows 10, running under WSL2. 2. Followed the P-Tuning tutorial. 3. Training via train.sh completes normally. 4. Evaluation via evaluate.sh fails with the error above.
Environment
- OS: Windows 10 (WSL2, Ubuntu 20.04)
- Python: 3.8
- Transformers: 4.27.1
- PyTorch: 1.13
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`):
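For filling in the remaining environment fields, a small stdlib-only helper like the following sketch can collect installed package versions in one pass (the package list is only an example; a package such as torch may legitimately be absent, in which case it is reported as not installed):

```python
import importlib

def report(pkg_names):
    """Return 'name: version' lines for each package, or note missing ones."""
    lines = []
    for name in pkg_names:
        try:
            mod = importlib.import_module(name)
            lines.append(f"{name}: {getattr(mod, '__version__', 'unknown')}")
        except ImportError:
            lines.append(f"{name}: not installed")
    return lines

for line in report(["torch", "transformers"]):
    print(line)
```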