```python
from transformers import AutoModel, AutoTokenizer

local_model_path = "my_local_checkpoint_path"
model_path = "THUDM/chatglm-6b"  # you can change local_model_path to wherever the local model is stored

model = AutoModel.from_pretrained(local_model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(local_model_path, trust_remote_code=True)

response, history = model.chat(tokenizer, "你好", history=[])
print("\033[1;36m" + "chatGLM:{}".format(response) + "\033[0m")

line = input("Human:")
while line:
    response, history = model.chat(tokenizer, str(line), history=[])
    print("\033[1;36m" + "chatGLM:{}".format(response) + "\033[0m")
    # print("\033[42m" + "chatGLM:{}".format(response) + "\033[0m")  # green-background variant; fixed from the original, which applied .format() to the escape code
    line = input("Human:")
    # print(line)
```
I did full fine-tuning with main.py. After reloading the fine-tuned model, it produces no output at all: every response is empty.
### Expected Behavior

No response
### Steps To Reproduce

I trained on my own dataset using main.py and saved the fine-tuned model with `trainer.save_model()`.
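For context, a minimal sketch of the save step being described. The `Trainer` setup is a stand-in for what main.py builds internally; `output_dir`, the dataset placeholder, and the extra `tokenizer.save_pretrained()` call are assumptions, not the repository's exact code:

```python
from transformers import AutoModel, AutoTokenizer, Trainer, TrainingArguments

output_dir = "my_local_checkpoint_path"  # placeholder: same path later passed to from_pretrained

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir=output_dir),
    train_dataset=None,  # placeholder: main.py builds this from the user's own dataset
)
# trainer.train()  # full fine-tuning happens here in main.py

trainer.save_model(output_dir)         # writes the fine-tuned weights and config
tokenizer.save_pretrained(output_dir)  # assumption: tokenizer files must also be in the dir for reloading
```

After this, the script at the top of the issue reloads the checkpoint from `local_model_path` and every `model.chat()` response comes back empty.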
### Environment

- OS: CentOS 7
- Python: 3.8.13
- Transformers: 4.28.0.dev0
- PyTorch: 1.13.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): 11.7