(chattts) PS E:\software\ChatTTS> python examples/cmd/run.py "Your text 1." "Your text 2."
C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\vector_quantize_pytorch\vector_quantize_pytorch.py:461: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@autocast(enabled = False)
C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\vector_quantize_pytorch\vector_quantize_pytorch.py:674: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@autocast(enabled = False)
C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\vector_quantize_pytorch\finite_scalar_quantization.py:162: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@autocast(enabled = False)
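
Note: the three FutureWarnings above come from the third-party vector_quantize_pytorch package, not from ChatTTS itself, and are harmless. For reference, this is the API change PyTorch is pointing at (a sketch of the replacement, not a patch for the package):

    import torch
    from torch.amp import autocast

    # Deprecated form (what the package still uses):
    #   @torch.cuda.amp.autocast(enabled=False)
    # Replacement: the device type is now an explicit positional argument.
    @autocast('cuda', enabled=False)
    def forward_fp32(x: torch.Tensor) -> torch.Tensor:
        # the body runs with autocast disabled, so x keeps full precision
        return x * 2.0
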
C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
[+0800 20240817 23:12:38] [INFO] Command | run | Starting ChatTTS commandline demo...
[+0800 20240817 23:12:38] [INFO] Command | run | Text input: ['Your text 1.', 'Your text 2.']
[+0800 20240817 23:12:38] [INFO] Command | run | Initializing ChatTTS...
[+0800 20240817 23:12:38] [INFO] ChatTTS | dl | checking assets...
[+0800 20240817 23:12:39] [INFO] ChatTTS | dl | all assets are already latest.
[+0800 20240817 23:12:39] [INFO] ChatTTS | core | use device cuda:0
[+0800 20240817 23:12:39] [INFO] ChatTTS | core | vocos loaded.
[+0800 20240817 23:12:39] [INFO] ChatTTS | core | dvae loaded.
[+0800 20240817 23:12:41] [INFO] ChatTTS | core | gpt loaded.
[+0800 20240817 23:12:42] [INFO] ChatTTS | core | decoder loaded.
e:\software\chattts\ChatTTS\model\tokenizer.py:24: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
tokenizer: BertTokenizerFast = torch.load(
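
Note: this warning points at ChatTTS/model/tokenizer.py:24, which unpickles the tokenizer object with torch.load. A sketch of the route the warning itself suggests (the asset path and the single allowlisted class are assumptions; in practice the pickle may reference further classes that each need allowlisting, so this is untested against the real assets):

    import torch
    from transformers import BertTokenizerFast

    # Allowlist the pickled class, then load with weights_only=True so
    # arbitrary code in the pickle can no longer execute.
    torch.serialization.add_safe_globals([BertTokenizerFast])
    tokenizer = torch.load("asset/tokenizer.pt", weights_only=True)  # path assumed
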
[+0800 20240817 23:12:42] [INFO] ChatTTS | core | tokenizer loaded.
[+0800 20240817 23:12:42] [INFO] ChatTTS | core | all models has been initialized.
[+0800 20240817 23:12:42] [INFO] Command | run | Models loaded successfully.
[+0800 20240817 23:12:42] [INFO] Command | run | Use speaker:
蘁淰敱欀淃淬繕旑圣牲弊瓙疽嗘艡焌櫣賴各瑥皽索終梑热捳仠喚琓绤肣掆哰烎侍儘莘蘠戙抉緻烣焉璀功嫺猄笏啾篃爾詒沐蜃宥穠癢毆攻确藮耝嶣蛰剞妩灙俉旄殄字呫東幱殓仅曯屬筽妻洛蕧章噄殛坶捿奨樟胙嘋凂湌坖噡怐燡禋焐翗友
竿貣臃褗睾世茤討台奦疴浪礇揌惽狻牙苷忐歖廾跬嘪勂勀犊桗賯亮謭糪瘉嗅屔爰榖诩斻垟媧皁巼娈儁衄柊墪匪汆崀徸貅叹叚贮楰姍緙愶襥旞欙聗纲丬羊淴犲癿螝跴艇幂诓簹则俤脮膚厥婗兝谟殚揅琐忑菸媕缲嘲肉艌矍翚亼珧蠐笵妟漈
痉耉瞖浆蘭櫗现褙佈榖瞧壑偢巑煨犖仄昵傁噜廉刱俰藺謒艠蒲簩能嚻怱偮康搧礘弚扞繜瞡奖樶爫畇脔虅慤婮蔣加藢澎構贄掩嘔廳峠椗耓撂如十枢蛃淥犜嘺曹廊衴梽殇欼侘礨姝橸茪嗅榘佈绫璐豘孞翱窗裫妰翋繣度蔫俒偮犺蕵窑厯嚓贜
瓡葙撒栐畾諎楝擌杠擭嶮赈墺杉淰腖苴誔祤蛯趠薌桤嬙畀芙虤昧甂窇叻僂桶吉壨蒂吐檗偡滺蠂蟿薞戧膝淊腿叧坠按桓脔背呯戽尥囤治栊坞秼剦厦坖璸傋竛艡婦楗刘橸趧谩苪恠觺覥侗碮訮緽蒈殀为濊蕲貒眓嬢昫愽檄淓祣惩毠一螅赈注
濑犜晋屪毨拷絧桁彬熝傅喂死淐呩睌奸娬胞愭秈絴笎赱洶嗾匸岸懲瓦橌犎媡嚈噳厽寄蝬謖荑觔巵槗旍穹珌廎完棿懛檯峇纃蓤謗衸礁箽斂春捃孴訚嬒嬱謇挛腷挤湱纴凢羉櫢秧啰蝗责凫榏垑扎彃尭葮弗樹媴熈臱狚莊蟑蕬濝貭朂倘奂眣痷
剽矛謈戗熜腩磕羕揗胜崅簦貗催嗵蘀慡荻瀁秬畟触貙恶桙圀蔅箨莕継耊屁朔厊搵弝珳欼攽寯燖统荮匾土賙螾庙佝诩洑坢湃屙萆坟揽賥厄潝猩倊苳圤瓒瘾咛垢夘亩儎尋貇嫑潱打秦偟媽跭忧燼暂皩寢纊沂礼舁摧畐皰媔眫垜僀枮奓嶯舸痗
慗儳変裧甇璡樚羽諭篅俎漭荫绪珑懎毮彘绚竭毨廹謧胻嘠趵攒裒獔绵忁瓝乏摱殮茠珕杼徵笮撇蠻摁姢潘剂帀岅墮景堨褕熡蟅唛笎斨繅繽艥灎聎凘竒矡烳咒檺豌绝榻葦耰蘇侐佭巪噄叚紇哌亲矯糯愶徨瓸緺甭嵐讔烓璬柽拢當櫝侀崱裡袕
睑皸蛧覧胫婦坯椤暜蘻嚳棬绞穖肰溏挃坿蒦斲竖瞢朵趾凗緾撽账蝼使疡呃虶磶櫒徑矪梋歊秳諙娦亳惌列卛傁廩爽玠怄灝嫎琽劤亅懋瘄嗎暤浃拾穂蘳譺橃殭羙亼穙楨睟塝荞电愉旹嬚氪疁崶朌汫椗目謇敓敃概疸唱翍識済桅謯焬眏跿欹峯 牒虋唼栊蕮炣剼嚗翫勸譑愽亊莾扺才忷槬塚艷埝蘟嬝晭睹瓜帟倁織罤赹對蜟埧螻薑媦忇畼繅攀礒菛富蚦蘳襅紙熫茼儡楔嵠渀㴃
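
Note: the wall of characters above is the randomly sampled speaker embedding, serialized as a string. A sketch of how it is typically obtained and pinned, assuming the API shown in the project README (check your local version):

    import ChatTTS

    chat = ChatTTS.Chat()
    chat.load(compile=False)  # compile=False also sidesteps torch.compile (see below)

    # Returns an encoded speaker string like the one logged above;
    # save it and reuse it to keep the same voice across runs.
    spk = chat.sample_random_speaker()
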
[+0800 20240817 23:12:42] [INFO] Command | run | Start inference.
[+0800 20240817 23:12:42] [INFO] ChatTTS | core | all models has been initialized.
[+0800 20240817 23:12:42] [WARN] ChatTTS | norm | found invalid characters: {'1'}
[+0800 20240817 23:12:42] [WARN] ChatTTS | norm | found invalid characters: {'2'}
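
Note: the two norm warnings mean the built-in normalizer flags the digits in "Your text 1." / "Your text 2." as characters it cannot synthesize. A hedged pre-processing sketch (the helper is illustrative, not part of ChatTTS) that spells digits out before calling chat.infer:

    # Hypothetical helper: replace ASCII digits with English words so the
    # normalizer never sees characters it treats as invalid.
    DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
              "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

    def spell_out_digits(text: str) -> str:
        return "".join(DIGITS.get(ch, ch) for ch in text)

    texts = [spell_out_digits(t) for t in ["Your text 1.", "Your text 2."]]
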
Traceback (most recent call last):
File "E:\software\ChatTTS\examples\cmd\run.py", line 95, in <module>
main(args.texts, args.spk, args.stream)
File "E:\software\ChatTTS\examples\cmd\run.py", line 47, in main
wavs = chat.infer(
File "e:\software\chattts\ChatTTS\core.py", line 232, in infer
return next(res_gen)
File "e:\software\chattts\ChatTTS\core.py", line 369, in _infer
refined = self._refine_text(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "e:\software\chattts\ChatTTS\core.py", line 574, in _refine_text
emb = gpt(input_ids, text_mask)
File "e:\software\chattts\ChatTTS\model\gpt.py", line 157, in __call__
return super().__call__(input_ids, text_mask)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\nn\modules\module.py", line 1551, in _wrapped_call_impl
return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "e:\software\chattts\ChatTTS\model\gpt.py", line 165, in forward
input_ids[text_mask].narrow(1, 0, 1).squeeze_(1).to(self.device_gpt)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\convert_frame.py", line 1116, in __call__
return self._torchdynamo_orig_callable(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\convert_frame.py", line 948, in __call__
result = self._inner_convert(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\convert_frame.py", line 472, in __call__
return _compile(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_utils_internal.py", line 84, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_strobelight\compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\convert_frame.py", line 636, in compile_inner
out_code = transform_code_object(code, transform)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1185, in transform_code_object
transformations(instructions, code_options)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\convert_frame.py", line 178, in _fn
return fn(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\convert_frame.py", line 582, in transform
tracer.run()
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2451, in run
super().run()
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 893, in run
while self.step():
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 805, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 497, in wrapper
return handle_graph_break(self, inst, speculation.reason)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 566, in handle_graph_break
self.output.compile_subgraph(self, reason=reason)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\output_graph.py", line 1123, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "C:\Users\Administrator\.conda\envs\chattts\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\output_graph.py", line 1318, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\output_graph.py", line 1409, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\output_graph.py", line 1390, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\__init__.py", line 1951, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "C:\Users\Administrator\.conda\envs\chattts\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\compile_fx.py", line 1505, in compile_fx
return aot_autograd(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\backends\common.py", line 69, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_functorch\aot_autograd.py", line 954, in aot_module_simplified
compiled_fn, _ = create_aot_dispatcher_function(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_functorch\aot_autograd.py", line 687, in create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 168, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\compile_fx.py", line 1410, in fw_compiler_base
return inner_compile(
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 84, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\debug.py", line 304, in inner
return fn(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "C:\Users\Administrator\.conda\envs\chattts\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\compile_fx.py", line 527, in compile_fx_inner
compiled_graph = fx_codegen_and_compile(
File "C:\Users\Administrator\.conda\envs\chattts\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\compile_fx.py", line 831, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\graph.py", line 1749, in compile_to_fn
return self.compile_to_module().call
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\graph.py", line 1678, in compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\graph.py", line 1634, in codegen
self.scheduler = Scheduler(self.buffers)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_dynamo\utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\scheduler.py", line 1364, in __init__
self.nodes = [self.create_scheduler_node(n) for n in nodes]
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\scheduler.py", line 1364, in <listcomp>
self.nodes = [self.create_scheduler_node(n) for n in nodes]
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\scheduler.py", line 1462, in create_scheduler_node
return SchedulerNode(self, node)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\scheduler.py", line 731, in __init__
self._compute_attrs()
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\scheduler.py", line 742, in _compute_attrs
group_fn = self.scheduler.get_backend(self.node.get_device()).group_fn
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\scheduler.py", line 2663, in get_backend
self.backends[device] = self.create_backend(device)
File "C:\Users\Administrator\.conda\envs\chattts\lib\site-packages\torch\_inductor\scheduler.py", line 2655, in create_backend
raise RuntimeError(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Cannot find a working triton installation. More information on installing Triton can be found at https://github.com/openai/triton
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
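
Note: the root cause is that backend='inductor' needs Triton, and Triton publishes no official Windows wheels, so torch.compile cannot build GPU kernels here. The quickest fallback is the one the traceback itself suggests; run it before loading the models:

    import torch._dynamo

    # Fall back to eager execution whenever Dynamo/Inductor compilation
    # fails (e.g. no working Triton on Windows). Slower, but it runs.
    torch._dynamo.config.suppress_errors = True

Alternatively, if your ChatTTS version exposes a compile flag on chat.load(...) (as the README suggests), passing compile=False avoids torch.compile entirely.
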
Also, client and main.py under the examples package both have some problems.