[THUDM/ChatGLM-6B][Help] How can I run offline after the first launch, the way Stable Diffusion does?

2024-05-13 463 views

After web_demo.py ran normally the first time, I disabled the network from the Control Panel. Running the web_demo.py script again while offline produces the following:

```
C:\Users\open>E:\1\chatglm-6b\web_demo1232.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\connection.py", line 72, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x0000029E1492B520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 489, in send
    resp = conn.urlopen(
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/THUDM/chatglm-6b (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000029E1492B520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\1\chatglm-6b\web_demo1232.py", line 5, in <module>
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", cache_dir="E:/1/2", trust_remote_code=True)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 663, in from_pretrained
    tokenizer_class = get_class_from_dynamic_module(
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 388, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 286, in get_cached_module_file
    commit_hash = model_info(pretrained_model_name_or_path, revision=revision, token=use_auth_token).sha
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\utils\_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\hf_api.py", line 1623, in model_info
    r = requests.get(path, headers=headers, timeout=timeout, params=params)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\open\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 565, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/THUDM/chatglm-6b (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000029E1492B520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
```

I tried the approach in this screenshot [image], but it then complains that config.json cannot be found, even though I downloaded the complete set of files from Hugging Face [image].

To reproduce: run the script once normally and close it (model and tokenizer fully downloaded and working), then disable the network or unplug the cable and run the same script again.

Environment
- OS: Windows 10
- Python: 3.10
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :

I hope someone who has worked this out can share their solution, or any other approach that achieves the same effect.
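One way to get the Stable-Diffusion-like behaviour asked about here: transformers and huggingface_hub recognize offline-mode environment variables, so once everything is cached you can forbid network lookups entirely. A minimal sketch (`TRANSFORMERS_OFFLINE` and `HF_HUB_OFFLINE` are the variables those libraries actually check; they must be set before transformers is imported):

```python
import os

# Must be set before `import transformers` / `import huggingface_hub`.
# With these set, from_pretrained() uses only locally cached files and
# fails fast instead of trying to contact huggingface.co.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"
```

Alternatively, set them in the shell before launching the demo, e.g. `set TRANSFORMERS_OFFLINE=1` in a Windows command prompt.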

Answers


Try setting a local model path, i.e. load the model locally as described in the README.
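If loading from a local path still fails with a missing config.json (as reported below), the path usually does not point at the directory that directly contains the model files. A small sanity check, assuming the standard file names from the THUDM/chatglm-6b repository (an assumption; adjust the list to match your download):

```python
from pathlib import Path

# File names as found in a THUDM/chatglm-6b download (assumed; adjust
# to match your local copy).
REQUIRED_FILES = ["config.json", "tokenization_chatglm.py", "modeling_chatglm.py"]

def missing_model_files(model_dir):
    """Return the required files absent from model_dir. An empty list
    means from_pretrained(model_dir, ...) should find everything it
    needs without going online."""
    d = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (d / name).is_file()]
```

If the returned list is empty, `AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)` can load entirely from disk.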

> Try setting a local model path, i.e. load the model locally as described in the README.

I'll give it a try.

> Try setting a local model path, i.e. load the model locally as described in the README.

Can a quantized model be loaded the same way, or does that need a different approach?
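Per the repository README, quantization happens after loading, so it works the same with a local path: load from the directory, then call the `.half().quantize(bits).cuda()` chain provided by the repo's custom modeling code. A sketch of that chain as a helper (the call order is what matters; real quantization requires the actual model object):

```python
def load_quantized(model, bits=4):
    """Apply the README's post-load quantization chain:
    model.half().quantize(bits).cuda(). `quantize` is defined by the
    ChatGLM-6B custom modeling code pulled in via trust_remote_code=True."""
    return model.half().quantize(bits).cuda()
```

So `load_quantized(AutoModel.from_pretrained(r"E:\path\to\chatglm-6b", trust_remote_code=True))` mirrors the README's INT4 setup (the path here is a placeholder). There is also a pre-quantized THUDM/chatglm-6b-int4 repository that can be downloaded instead.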

> Try setting a local model path, i.e. load the model locally as described in the README.

Following the README's method I get the error below; I have already downloaded the model files to the E: drive. [screenshots of the error]

> > Try setting a local model path, i.e. load the model locally as described in the README.
>
> Following the README's method I get the error below; I have already downloaded the model files to the E: drive. [screenshots of the error]

Look at your error message: it is actually loading from c:\...\.cache\e:\1\2, i.e. from the cache on the C: drive.
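That reply points at the root cause: `cache_dir` does not redirect loading to that folder. The first argument to `from_pretrained` is still treated as a Hub repo id unless it is an existing local directory, which is why the script keeps phoning home to huggingface.co. A simplified sketch of that dispatch rule (not transformers' real code):

```python
import os

def treated_as_local(path_or_repo_id):
    """Simplified version of how from_pretrained decides between a local
    directory and a Hub repo id: only an existing directory is loaded
    offline; anything else triggers a huggingface.co lookup, which is
    exactly the ConnectionError in the traceback above."""
    return os.path.isdir(path_or_repo_id)
```

The fix, then, is to pass the directory that directly contains config.json as the first argument, e.g. `AutoTokenizer.from_pretrained(r"E:\path\to\chatglm-6b", trust_remote_code=True)` (placeholder path), instead of `"THUDM/chatglm-6b"` plus `cache_dir`.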