Hardware: CPU: 5950X, GPU: RTX 4090. OS: Windows 11, WSL: Ubuntu 22.04
Error raised when running on Windows:
Exception occurred: RuntimeError
Windows not yet supported for torch.compile
  File "E:\git\ChatTTS\ChatTTS\core.py", line 102, in _load
    gpt.gpt.forward = torch.compile(gpt.gpt.forward, backend='inductor', dynamic=True)
  File "E:\git\ChatTTS\ChatTTS\core.py", line 61, in load_models
    self._load(**{k: os.path.join(download_path, v) for k, v in OmegaConf.load(os.path.join(download_path, 'config', 'path.yaml')).items()}, **kwargs)
  File "E:\git\ChatTTS\test.py", line 10, in <module>
    chat.load_models()
RuntimeError: Windows not yet supported for torch.compile
I don't understand why torch.compile needs to be called here at all.
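From the traceback, core.py wraps the GPT forward with torch.compile at load time, and the inductor backend raises this RuntimeError on native Windows. A minimal sketch of a platform guard that skips compilation there (the `compile_fn` below is a stand-in for `torch.compile`, used so the sketch runs without PyTorch; where exactly ChatTTS should apply such a guard is an assumption):

```python
import sys

def compile_fn(fn):
    """Stand-in for torch.compile: raises on native Windows, like PyTorch does."""
    if sys.platform == "win32":
        raise RuntimeError("Windows not yet supported for torch.compile")
    return fn

def maybe_compile(fn):
    # Only attempt compilation where the backend is supported;
    # otherwise fall back to the original eager-mode forward.
    if sys.platform == "win32":
        return fn
    try:
        return compile_fn(fn)
    except RuntimeError:
        return fn

forward = lambda x: x * 2
forward = maybe_compile(forward)
print(forward(3))  # -> 6 on every platform, compiled or not
```

Either branch returns a callable, so the rest of the loading code is unaffected.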
After switching to WSL, performance was poor. My first guess was that inference was not running on the GPU, but torch.cuda.is_available() returns True. The output is as follows:
INFO:ChatTTS.core:Load from cache: /home/xxxx/.cache/huggingface/hub/models--2Noise--ChatTTS/snapshots/c0aa9139945a4d7bb1c84f07785db576f2bb1bfa
INFO:ChatTTS.core:use cuda:0
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
INFO:ChatTTS.core:All initialized.
  0%|▎ | 1/384 [00:48<5:09:58, 48.56s/it]
W0531 07:15:26.417000 140610319078464 torch/_dynamo/exc.py:184] [0/1] Backend compiler failed with a fake tensor exception at
W0531 07:15:26.417000 140610319078464 torch/_dynamo/exc.py:184] [0/1]   File "/home/xxxx/miniconda3/envs/tts/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 998, in forward
W0531 07:15:26.417000 140610319078464 torch/_dynamo/exc.py:184] [0/1]     return BaseModelOutputWithPast(
W0531 07:15:26.417000 140610319078464 torch/_dynamo/exc.py:184] [0/1] Adding a graph break.
W0531 07:15:43.909000 140610319078464 torch/_dynamo/exc.py:184] [0/1_1] Backend compiler failed with a fake tensor exception at
W0531 07:15:43.909000 140610319078464 torch/_dynamo/exc.py:184] [0/1_1]   File "/home/xxxx/miniconda3/envs/tts/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 998, in forward
W0531 07:15:43.909000 140610319078464 torch/_dynamo/exc.py:184] [0/1_1]     return BaseModelOutputWithPast(
W0531 07:15:43.909000 140610319078464 torch/_dynamo/exc.py:184] [0/1_1] Adding a graph break.
W0531 07:15:47.588000 140610319078464 torch/_dynamo/exc.py:184] [3/0] Backend compiler failed with a fake tensor exception at
W0531 07:15:47.588000 140610319078464 torch/_dynamo/exc.py:184] [3/0]   File "/home/xxxx/miniconda3/envs/tts/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 738, in forward
W0531 07:15:47.588000 140610319078464 torch/_dynamo/exc.py:184] [3/0]     return outputs
W0531 07:15:47.588000 140610319078464 torch/_dynamo/exc.py:184] [3/0] Adding a graph break.
  1%|▌ | 2/384 [02:02<6:44:00, 63.46s/it]
W0531 07:16:25.436000 140610319078464 torch/_dynamo/exc.py:184] [3/20] Backend compiler failed with a fake tensor exception at
W0531 07:16:25.436000 140610319078464 torch/_dynamo/exc.py:184] [3/20]   File "/home/xxxx/miniconda3/envs/tts/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 738, in forward
W0531 07:16:25.436000 140610319078464 torch/_dynamo/exc.py:184] [3/20]     return outputs
W0531 07:16:25.436000 140610319078464 torch/_dynamo/exc.py:184] [3/20] Adding a graph break.
  1%|▉ | 3/384 [02:42<5:34:20, 52.65s/it]
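These warnings show TorchDynamo repeatedly failing to compile the LLaMA forward and inserting graph breaks, so the slowdown may come from the compilation attempts themselves rather than from CPU inference (the log already reports "use cuda:0"). One way to test this is to disable Dynamo entirely via the TORCHDYNAMO_DISABLE environment variable, a PyTorch escape hatch that makes torch.compile a no-op; a sketch, assuming the variable is set before torch is imported:

```python
import os

# Must be set before `import torch`: with TORCHDYNAMO_DISABLE=1, TorchDynamo
# is bypassed and torch.compile returns the function unchanged (eager mode),
# so the repeated "Backend compiler failed" attempts should disappear.
os.environ["TORCHDYNAMO_DISABLE"] = "1"

# import torch  # import the rest of the stack only after setting the flag
print(os.environ["TORCHDYNAMO_DISABLE"])  # -> 1
```

If eager-mode inference is then fast, the problem is the failing compilation, not the GPU setup; the same flag can also be exported in the shell before launching the script.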