[THUDM/ChatGLM-6B][BUG/Help] No results returned when testing after training finishes

2024-06-12
from transformers import AutoTokenizer, AutoModel, AutoConfig
import os
import torch

# Load the tokenizer and the base model, with a prefix encoder enabled via pre_seq_len=128
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", cache_dir='/output/ChatGLM-6B/chatglm-6b', trust_remote_code=True)
config = AutoConfig.from_pretrained("THUDM/chatglm-6b", cache_dir='/output/ChatGLM-6B/chatglm-6b', trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", cache_dir='/output/ChatGLM-6B/chatglm-6b', config=config, trust_remote_code=True)

# Load the trained checkpoint and strip the "transformer.prefix_encoder." prefix from every key
prefix_state_dict = torch.load(os.path.join("output/adgen-chatglm-6b-ft-1e-4/checkpoint-1000", "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)

# model = model.quantize(4)
model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()
Running it reports the following warning and error:

Some weights of ChatGLMForConditionalGeneration were not initialized from the model checkpoint at THUDM/chatglm-6b and are newly initialized: ['transformer.prefix_encoder.embedding.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

RuntimeError                              Traceback (most recent call last)
Cell In[4], line 11
      9 for k, v in prefix_state_dict.items():
     10     new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
---> 11 model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)
     12 # model = model.quantize(4)
     13 model = model.half().cuda()

File /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1671, in Module.load_state_dict(self, state_dict, strict)
   1666     error_msgs.insert(
   1667         0, 'Missing key(s) in state_dict: {}. '.format(
   1668             ', '.join('"{}"'.format(k) for k in missing_keys)))
   1670 if len(error_msgs) > 0:
-> 1671     raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
   1672         self.__class__.__name__, "\n\t".join(error_msgs)))
   1673 return _IncompatibleKeys(missing_keys, unexpected_keys)

RuntimeError: Error(s) in loading state_dict for PrefixEncoder:
    Missing key(s) in state_dict: "embedding.weight".
    Unexpected key(s) in state_dict: ".weight", "layernorm.weight", "layernorm.bias", "ttention_layernorm.weight", "ttention_layernorm.bias", "_layernorm.weight", "_layernorm.bias", "attention_layernorm.weight", "attention_layernorm.bias", ".bias".
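A note on the traceback (an observation, not an official fix): the unexpected keys such as "layernorm.weight" and "attention_layernorm.weight" look like full-model weight names with the first 27 characters cut off ("transformer.prefix_encoder." is 27 characters long), which suggests the checkpoint under output/adgen-chatglm-6b-ft-1e-4 contains full-model weights rather than only a P-Tuning v2 prefix encoder. If the checkpoint really is a P-Tuning v2 one, the stripping loop also needs to keep only the prefix-encoder keys. A minimal sketch along those lines, with illustrative paths (CHECKPOINT is a placeholder, not a path from this issue):

```python
import os
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

MODEL_NAME = "THUDM/chatglm-6b"
CHECKPOINT = "output/adgen-chatglm-6b-pt/checkpoint-1000"  # placeholder: your P-Tuning v2 checkpoint dir

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
config = AutoConfig.from_pretrained(MODEL_NAME, trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained(MODEL_NAME, config=config, trust_remote_code=True)

prefix_state_dict = torch.load(os.path.join(CHECKPOINT, "pytorch_model.bin"), map_location="cpu")

# Keep only the prefix-encoder tensors before stripping the name prefix.
# Without this filter, any transformer.layers.* weights in the checkpoint end up
# as truncated keys like "layernorm.weight", which is exactly the
# "Unexpected key(s)" list shown in the error above.
new_prefix_state_dict = {
    k[len("transformer.prefix_encoder."):]: v
    for k, v in prefix_state_dict.items()
    if k.startswith("transformer.prefix_encoder.")
}
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)

model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()
```

If the checkpoint actually came from full fine-tuning, the prefix-encoder loading does not apply at all; in that case the checkpoint directory itself would be loaded directly with AutoModel.from_pretrained, and the prefix_state_dict block above is unnecessary.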

After training completes, testing returns no results.

Environment
- OS: Linux
- Python: 3.8
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :

Answers


I encountered the same issue. After training finished, I ran web_demo.py and it showed the warning: "Some weights of ChatGLMForConditionalGeneration were not initialized from the model checkpoint at d:\airesearch\ChatGLM-6B\models\chatglm-6b and are newly initialized: ['transformer.prefix_encoder.embedding.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference."

And it cannot answer the questions that come from my training data.
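A quick diagnostic that may help here (an illustrative sketch, not part of the repository's scripts; the path is taken from the question and should be replaced with your own) is to check whether the saved pytorch_model.bin actually contains prefix-encoder tensors before expecting the demo to use them:

```python
import torch

# Replace with your own checkpoint file.
ckpt_path = "output/adgen-chatglm-6b-ft-1e-4/checkpoint-1000/pytorch_model.bin"

state_dict = torch.load(ckpt_path, map_location="cpu")
prefix_keys = [k for k in state_dict if k.startswith("transformer.prefix_encoder.")]

print(f"total tensors: {len(state_dict)}")
print(f"prefix-encoder tensors: {len(prefix_keys)}")
print("sample keys:", list(state_dict)[:5])
```

If prefix-encoder tensors are present but web_demo.py still reports transformer.prefix_encoder.embedding.weight as newly initialized, the demo is loading only the base model, so the trained prefix weights are never applied; the filtered loading shown earlier in this thread has to run before inference.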


Could this be caused by multi-GPU training?


I ran into the same problem.