Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
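To silence the warning above, the revision can be pinned explicitly. A minimal sketch, assuming the commit hash that appears in the traceback paths below is the revision you actually want to pin (verify it against the THUDM/chatglm-6b repo yourself):

```python
# Commit hash taken from the traceback's snapshot paths; confirming it is
# the intended revision is an assumption.
PINNED_REVISION = "f83182484538e663a03d3f73647f10f89878f438"

def load_chatglm_tokenizer(revision=PINNED_REVISION):
    # Import lazily so this module can be inspected without transformers
    # installed. Pinning `revision` ensures the remote custom code that
    # `trust_remote_code=True` executes cannot silently change under you.
    from transformers import AutoTokenizer
    return AutoTokenizer.from_pretrained(
        "THUDM/chatglm-6b",
        trust_remote_code=True,
        revision=revision,
    )
```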
Process GetGLMHandle-1:
Traceback (most recent call last):
  File "C:\Anaconda3\lib\site-packages\transformers\tokenization_utils_base.py", line 1958, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "C:\Users\123、/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\f83182484538e663a03d3f73647f10f89878f438\tokenization_chatglm.py", line 209, in __init__
    self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
  File "C:\Users\123、/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\f83182484538e663a03d3f73647f10f89878f438\tokenization_chatglm.py", line 61, in __init__
    self.text_tokenizer = TextTokenizer(vocab_file)
  File "C:\Users\123、/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\f83182484538e663a03d3f73647f10f89878f438\tokenization_chatglm.py", line 22, in __init__
    self.sp.Load(model_path)
  File "C:\Anaconda3\lib\site-packages\sentencepiece\__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "C:\Anaconda3\lib\site-packages\sentencepiece\__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "C:\Users\123、/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\f83182484538e663a03d3f73647f10f89878f438\ice_text.model": Invalid argument Error #22
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Users\123、\Desktop\人工智能\chatgpt_academic-master\request_llm\bridge_chatglm.py", line 40, in run
    self.chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
  File "C:\Anaconda3\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 679, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "C:\Anaconda3\lib\site-packages\transformers\tokenization_utils_base.py", line 1804, in from_pretrained
    return cls._from_pretrained(
  File "C:\Anaconda3\lib\site-packages\transformers\tokenization_utils_base.py", line 1960, in _from_pretrained
    raise OSError(
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Anaconda3\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\123、\Desktop\人工智能\chatgpt_academic-master\request_llm\bridge_chatglm.py", line 54, in run
    raise RuntimeError("不能正常加载ChatGLM的参数!")
RuntimeError: 不能正常加载ChatGLM的参数! (Unable to load ChatGLM's parameters properly!)