[binary-husky/gpt_academic] [chatglm says]: [Local Message] Call ChatGLM fail: cannot load the ChatGLM parameters properly.

2024-03-29 292 views

Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.

Process GetGLMHandle-1:
Traceback (most recent call last):
  File "C:\Anaconda3\lib\site-packages\transformers\tokenization_utils_base.py", line 1958, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "C:\Users\123、/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\f83182484538e663a03d3f73647f10f89878f438\tokenization_chatglm.py", line 209, in __init__
    self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
  File "C:\Users\123、/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\f83182484538e663a03d3f73647f10f89878f438\tokenization_chatglm.py", line 61, in __init__
    self.text_tokenizer = TextTokenizer(vocab_file)
  File "C:\Users\123、/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\f83182484538e663a03d3f73647f10f89878f438\tokenization_chatglm.py", line 22, in __init__
    self.sp.Load(model_path)
  File "C:\Anaconda3\lib\site-packages\sentencepiece\__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "C:\Anaconda3\lib\site-packages\sentencepiece\__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "C:\Users\123、/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\f83182484538e663a03d3f73647f10f89878f438\ice_text.model": Invalid argument Error #22
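The warning at the top of the log suggests pinning a revision when loading a model with custom code. A minimal sketch of what that would look like, assuming you want to pin the commit hash that appears in the traceback (the function name here is hypothetical):

```python
def load_chatglm_tokenizer(revision: str):
    # Import inside the function so the sketch reads fine even without
    # transformers installed.
    from transformers import AutoTokenizer

    # trust_remote_code is required because chatglm-6b ships its own
    # tokenizer code; pinning `revision` silences the warning and freezes
    # that remote code at one known commit.
    return AutoTokenizer.from_pretrained(
        "THUDM/chatglm-6b", trust_remote_code=True, revision=revision
    )

# E.g. with the hash from the traceback:
# load_chatglm_tokenizer("f83182484538e663a03d3f73647f10f89878f438")
```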

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\123、\Desktop\人工智能\chatgpt_academic-master\request_llm\bridge_chatglm.py", line 40, in run
    self.chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
  File "C:\Anaconda3\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 679, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "C:\Anaconda3\lib\site-packages\transformers\tokenization_utils_base.py", line 1804, in from_pretrained
    return cls._from_pretrained(
  File "C:\Anaconda3\lib\site-packages\transformers\tokenization_utils_base.py", line 1960, in _from_pretrained
    raise OSError(
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Anaconda3\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\123、\Desktop\人工智能\chatgpt_academic-master\request_llm\bridge_chatglm.py", line 54, in run
    raise RuntimeError("不能正常加载ChatGLM的参数!")
RuntimeError: 不能正常加载ChatGLM的参数! (cannot load the ChatGLM parameters properly)

Answers


Has anyone else run into this problem? I can never get Tsinghua's ChatGLM to load.


+1


According to https://github.com/THUDM/ChatGLM-6B/issues/747, you should delete this cache folder:

[screenshot: the cache folder to delete]
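Deleting that folder can be scripted. A hedged sketch, assuming the default Hugging Face hub cache location under the user's home directory (the `HF_HOME` environment variable can move it elsewhere, and the helper names here are my own):

```python
import shutil
from pathlib import Path


def chatglm_cache_dir(home: Path) -> Path:
    # Default hub cache layout: <home>/.cache/huggingface/hub/models--<org>--<name>
    return home / ".cache" / "huggingface" / "hub" / "models--THUDM--chatglm-6b"


def purge_cache(home: Path) -> bool:
    """Delete the possibly-corrupted ChatGLM download; True if something was removed."""
    target = chatglm_cache_dir(home)
    if target.exists():
        shutil.rmtree(target)
        return True
    return False


if __name__ == "__main__":
    purge_cache(Path.home())
```

The next `from_pretrained` call will then re-download the model from scratch.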


@zhanghaoyu92hou @abigkeep


After deleting it, it still doesn't work...


I solved this problem on my machine (NVIDIA GPU, Windows).

After hitting the error, I first tried the Chinese-LangChain project and found that it could download ChatGLM directly and run it normally. Looking into it, I found that what it had downloaded was chatglm-6b-int4-qe.

So I compared the files under "C:\Users\<your username>\.cache\huggingface\hub\models--THUDM--chatglm-6b-int4-qe\snapshots\<a string of characters>" and "C:\Users\<your username>\.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\<a string of characters>", and found that the chatglm-6b directory was missing modeling_chatglm.py, quantization.py, pytorch_model.bin.index.json, and the series of pytorch_model-00001-of-00008.bin weight files (shards 1 through 8).
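The comparison of the two snapshot folders can be sketched as a set difference over file names. A minimal sketch, assuming the default hub cache layout on Windows (the helper names are my own):

```python
from pathlib import Path


def snapshot_names(snapshot_dir: Path) -> set[str]:
    """File names in one snapshots/<hash> directory."""
    return {p.name for p in snapshot_dir.iterdir()}


def missing_files(complete: set[str], broken: set[str]) -> set[str]:
    """Names present in the intact snapshot but absent from the broken one."""
    return complete - broken


if __name__ == "__main__":
    hub = Path.home() / ".cache" / "huggingface" / "hub"
    good = snapshot_names(next((hub / "models--THUDM--chatglm-6b-int4-qe" / "snapshots").iterdir()))
    bad = snapshot_names(next((hub / "models--THUDM--chatglm-6b" / "snapshots").iterdir()))
    print(sorted(missing_files(good, bad)))
```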

The fix is therefore simple: go to the chatglm-6b repo and download

[screenshot: the missing files to download]

these missing files, and place them into "C:\Users\<your username>\.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\<a string of characters>". That solved the problem.
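Instead of downloading by hand, the missing files could be fetched with `huggingface_hub`, which writes into the same `hub/snapshots/<hash>` layout that transformers reads from, so no manual copying should be needed. A hedged sketch (the `MISSING` list is reconstructed from the file names the commenter reported):

```python
# Files the commenter found missing, plus the eight weight shards.
MISSING = [
    "modeling_chatglm.py",
    "quantization.py",
    "pytorch_model.bin.index.json",
] + [f"pytorch_model-{i:05d}-of-00008.bin" for i in range(1, 9)]


def fetch_missing(repo_id: str = "THUDM/chatglm-6b") -> None:
    # Import inside so the sketch reads fine without huggingface_hub installed.
    from huggingface_hub import hf_hub_download

    for name in MISSING:
        # Downloads into the default hub cache (snapshots/<hash>/...).
        hf_hub_download(repo_id=repo_id, filename=name)
```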
