WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
WARNING:ChatTTS.core:vocos not initialized.
WARNING:ChatTTS.core:gpt not initialized.
WARNING:ChatTTS.core:tokenizer not initialized.
WARNING:ChatTTS.core:dvae not initialized.
WARNING:ChatTTS.core:vocos not initialized.
WARNING:ChatTTS.core:gpt not initialized.
WARNING:ChatTTS.core:tokenizer not initialized.
WARNING:ChatTTS.core:decoder not initialized.
Traceback (most recent call last):
File "/private/tmp/demo.py", line 9, in
[2noise/ChatTTS] Fails to run on a MacBook Pro with an Intel chip
Answers
You need to download the models first, then pass the model path to chat.load_models().
Here is my script that generates audio and writes it to a file:
import wave

import numpy as np

import ChatTTS

model_path = '/Users/zouguodong/.cache/modelscope/hub/pzc163/chatTTS'

chat = ChatTTS.Chat()
chat.load_models(
    vocos_config_path=f"{model_path}/vocos.yaml",
    vocos_ckpt_path=f"{model_path}/Vocos.pt",
    dvae_config_path=f"{model_path}/dvae.yaml",
    dvae_ckpt_path=f"{model_path}/DVAE.pt",
    gpt_config_path=f"{model_path}/gpt.yaml",
    gpt_ckpt_path=f"{model_path}/GPT.pt",
    decoder_config_path=f"{model_path}/decoder.yaml",
    decoder_ckpt_path=f"{model_path}/Decoder.pt",
    tokenizer_path=f"{model_path}/tokenizer.pt",
    device='cpu'
)

texts = ["Complaint"]
wavs = chat.infer(texts, use_decoder=True)

audio_data = wavs[0]
audio_rate = 24000

# Convert the float waveform to 16-bit PCM bytes
audio_data = (audio_data * 32767).astype(np.int16).tobytes()

# Save the audio to a WAV file
output_filename = 'output_audio.wav'
with wave.open(output_filename, 'w') as wav_file:
    wav_file.setnchannels(1)            # mono
    wav_file.setsampwidth(2)            # 2 bytes per sample (16-bit audio)
    wav_file.setframerate(audio_rate)
    wav_file.writeframes(audio_data)

print(f"Audio saved to {output_filename}")
I've noticed that exclamation marks and "-" get read aloud. Has anyone else run into this?
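One workaround (not a ChatTTS feature, just plain text pre-processing) is to normalize the problematic punctuation before passing the text to chat.infer(). A minimal sketch, assuming only '!' and dash characters cause trouble:

```python
import re


def clean_for_tts(text: str) -> str:
    """Replace punctuation that tends to be read aloud with softer separators."""
    text = re.sub(r"[!！]", ",", text)   # exclamation marks (ASCII and full-width) -> comma pause
    text = re.sub(r"[-—–]", " ", text)   # hyphens and dashes -> plain space
    return re.sub(r"\s+", " ", text).strip()


print(clean_for_tts("Well-known words!"))
```

The exact replacement characters are a guess; adjust them to whatever the model renders as a natural pause.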
Where did the file get saved? I can't find it.
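The script above opens 'output_audio.wav' with a relative path, so the file lands in whatever directory the script was launched from, not necessarily next to the script. Printing the absolute path makes it easy to locate:

```python
import os

output_filename = 'output_audio.wav'
# wave.open received a relative path, so the file is created in the
# current working directory of the process that ran the script
print(os.path.abspath(output_filename))
```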
Built following @zou8944's instructions:
Make sure Git LFS is installed on your system, then run git clone https://www.modelscope.cn/pzc163/chatTTS.git
and point to the models as shown below:
import os

import ChatTTS

cwd = os.getcwd()
model_path = cwd + '/chatTTS'

chat = ChatTTS.Chat()
chat.load_models(
    vocos_config_path=f"{model_path}/config/vocos.yaml",
    vocos_ckpt_path=f"{model_path}/asset/Vocos.pt",
    dvae_config_path=f"{model_path}/config/dvae.yaml",
    dvae_ckpt_path=f"{model_path}/asset/DVAE.pt",
    gpt_config_path=f"{model_path}/config/gpt.yaml",
    gpt_ckpt_path=f"{model_path}/asset/GPT.pt",
    decoder_config_path=f"{model_path}/config/decoder.yaml",
    decoder_ckpt_path=f"{model_path}/asset/Decoder.pt",
    tokenizer_path=f"{model_path}/asset/tokenizer.pt",
    device='cpu'
)
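The "not initialized" warnings at the top of this issue are what load_models emits when a checkpoint path is wrong or the clone only contains Git LFS pointer files. A small sketch (check_model_files is a hypothetical helper, assuming the config/ and asset/ layout of the clone above) to verify the files before loading:

```python
import os


def check_model_files(model_path):
    """Return the expected ChatTTS model files missing under model_path."""
    expected = [
        'config/vocos.yaml', 'asset/Vocos.pt',
        'config/dvae.yaml', 'asset/DVAE.pt',
        'config/gpt.yaml', 'asset/GPT.pt',
        'config/decoder.yaml', 'asset/Decoder.pt',
        'asset/tokenizer.pt',
    ]
    return [p for p in expected if not os.path.isfile(os.path.join(model_path, p))]


missing = check_model_files(os.path.join(os.getcwd(), 'chatTTS'))
if missing:
    # If the files exist but are only a few hundred bytes, they are likely
    # unpulled Git LFS pointers; run `git lfs pull` inside the clone
    print('Missing model files:', missing)
```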