I'm continuing training from this author's pretrained model, because my own model is far too small and won't converge. But as soon as I switch to that checkpoint, training errors out.
The screenshot is below; I'm also copy-pasting the log in case the image didn't upload.
E:\数据集制作\MockingBird-main\synthesizer\synthesizer_dataset.py:84: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ..\torch\csrc\utils\tensor_new.cpp:201.)
embeds = torch.tensor(embeds)
C:\Users\11351\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\functional.py:1795: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
Traceback (most recent call last):
File "E:\数据集制作\MockingBird-main\synthesizer_train.py", line 37, in
train(vars(args))
File "E:\数据集制作\MockingBird-main\synthesizer\train.py", line 208, in train
optimizer.step()
File "C:\Users\11351\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\Users\11351\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Users\11351\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\adam.py", line 133, in step
F.adam(params_with_grad,
File "C:\Users\11351\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim_functional.py", line 86, in adam
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3
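
From the traceback my guess is that the optimizer state restored from the pretrained checkpoint still contains Adam's exp_avg buffers shaped for the other model, so they no longer match my model's gradients. If that is the cause, one workaround would be to restore only the network weights and let the optimizer start fresh. A rough sketch of what I mean (the checkpoint key names "model_state" / "optimizer_state", the file name, and the dummy model are just placeholders, not necessarily what MockingBird actually uses):

```python
import torch
from torch import nn, optim

# Stand-in network; in the real script this would be the Tacotron synthesizer.
model = nn.Linear(16, 16)

# Load the pretrained checkpoint (file name is a placeholder).
checkpoint = torch.load("pretrained.pt", map_location="cpu")

# Restore only the network weights from the checkpoint...
model.load_state_dict(checkpoint["model_state"])

# ...and build a brand-new Adam optimizer instead of loading "optimizer_state",
# so its exp_avg / exp_avg_sq buffers are recreated with the current parameter shapes.
optimizer = optim.Adam(model.parameters(), lr=1e-3)
```

Starting the optimizer from scratch loses the Adam momentum statistics, but as far as I understand that should only matter for the first few training steps. Is that the right direction, or does continuing from this checkpoint require the exact same network dimensions?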