
Does export_torch_script.py only work with Chinese? #1

Open
Chi8wah opened this issue Nov 20, 2024 · 4 comments

Comments

@Chi8wah

Chi8wah commented Nov 20, 2024

Hi, a quick question:
When exporting the .pt files I can export bert_model.pt and ssl_model.pt, but the process then errors out. It looks like the reference text is being processed as Chinese, which breaks the arguments passed downstream? (I later tried changing the "all_zh" argument to get_phones_and_bert( to "en", but it still errors.)
Command:
python GPT_SoVITS/export_torch_script.py --device cuda --gpt_model GPT_weights_v2/ddd-long-e15.ckpt --sovits_model SoVITS_weights_v2/ddd-long_e50_s13950.pth --ref_audio /root/workspace/temp-files/ddd-6s-f.wav --ref_text 'Some example texts.' --output_path pt_output --export_common_model
Console output:

Directory created: pt_output
#### exported ssl ####                                                                                                                                                                                           
#### exported bert ####                                                                                                                                                                                          
device: cuda                                                                                                                                                                                                     
['S O M E E X A M P L E T E X T S.  ']                                                                                                   
['en']                                                                                                  
Using g2pw for pinyin inference
Building prefix dict from the default dictionary ...                                                    
DEBUG:jieba_fast:Building prefix dict from the default dictionary ...                                                                                                                                            
Dumping model to file cache /tmp/jieba.cache                                                                                                                                                                     
DEBUG:jieba_fast:Dumping model to file cache /tmp/jieba.cache                                                                                                                                                    
Loading model cost 0.591 seconds.                                                                                                                                                                                
DEBUG:jieba_fast:Loading model cost 0.591 seconds.                                                                                                                                                               
Prefix dict has been built succesfully.                                                                                                                                                                          
DEBUG:jieba_fast:Prefix dict has been built succesfully.                                                                                                                                                         
#### get_raw_t2s_model ####                                                                                                                                                                                      
{'data': {'max_eval_sample': 8, 'max_sec': 54, 'num_workers': 4, 'pad_val': 1024}, 'inference': {'top_k': 15}, 'model': {'EOS': 1024, 'dropout': 0.0, 'embedding_dim': 512, 'head': 16, 'hidden_dim': 512, 'linear_units': 2048, 'n_layer': 24, 'phoneme_vocab_size': 732, 'random_bert': 0, 'vocab_size': 1025}, 'optimizer': {'decay_steps': 40000, 'lr': 0.01, 'lr_end': 0.0001, 'lr_init': 1e-05, 'warmup_steps': 2000}, 'output_dir': 'logs/ddd-long/logs_s1', 'pretrained_s1': 'GPT_SoVITS/pretrained_models/gsv-v2final-pretrained/s1bert25hz-5kh-longer-epoch=12-step=369668.ckpt', 'train': {'batch_size': 48, 'epochs': 50, 'exp_name': 'ddd-long', 'gradient_clip': 1.0, 'half_weights_save_dir': 'GPT_weights_v2', 'if_dpo': True, 'if_save_every_weights': True, 'if_save_latest': True, 'precision': '16-mixed', 'save_every_n_epoch': 5, 'seed': 1234}, 'train_phoneme_path': 'logs/ddd-long/2-name2text.txt', 'train_semantic_path': 'logs/ddd-long/6-name2semantic.tsv'}
Traceback (most recent call last):                                                                                                                                                                               
  File "/root/workspace/GPT-SoVITS/GPT_SoVITS/export_torch_script.py", line 831, in <module>            
    main()                                                                                                                                                                                                       
  File "/root/workspace/GPT-SoVITS/GPT_SoVITS/export_torch_script.py", line 817, in main                
    export(                                                                                                                                                                                                      
  File "/root/workspace/GPT-SoVITS/GPT_SoVITS/export_torch_script.py", line 640, in export
    t2s = torch.jit.script(t2s_m).to(device)                                                                                                                                                                     
  File "/root/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/jit/_script.py", line 1324, in script
    return torch.jit._recursive.create_script_module(                                                                                                                                                            
  File "/root/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/jit/_recursive.py", line 559, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)                                                                                                                                         
  File "/root/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/jit/_recursive.py", line 636, in create_script_module_impl                                                                             
    create_methods_and_properties_from_stubs(                                                                                                                                                                    
  File "/root/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/jit/_recursive.py", line 469, in create_methods_and_properties_from_stubs
    concrete_type._create_methods_and_properties(                                                       
RuntimeError:                                                                                           
                                                    
aten::pad(Tensor self, SymInt[] pad, str mode="constant", float? value=None) -> Tensor:
Expected a value of type 'Optional[float]' for argument 'value' but instead found type 'bool'.
:                                                                                                                                                                                                                
  File "/root/workspace/GPT-SoVITS/GPT_SoVITS/export_torch_script.py", line 458
        bsz = x.shape[0]                                                                                                                                                                                         
        src_len = x_len + y_len                                                                         
        x_attn_mask_pad = F.pad(                                                                        
                          ~~~~~ <--- HERE
            x_attn_mask,                                                                                                                                                                                         
            (0, y_len),  ### extend xx's all-zeros block to xx all-zeros + xy all-ones, i.e. (x, x+y)
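The traceback points at a TorchScript typing issue rather than at the pad logic itself: the `aten::pad` overload types `value` as `Optional[float]`, so a Python bool passed as the fill value fails to script. A minimal sketch of the failure mode and a fix, assuming the original call passed something like `value=True` (the module below is illustrative, not the project's actual code):

```python
import torch
import torch.nn.functional as F


class PadMask(torch.nn.Module):
    # Hypothetical stand-in for the attention-mask padding in
    # export_torch_script.py. Passing value=True here would fail
    # torch.jit.script with "Expected a value of type 'Optional[float]'
    # ... found type 'bool'"; passing a float scripts cleanly.
    def forward(self, x_attn_mask: torch.Tensor, y_len: int) -> torch.Tensor:
        return F.pad(x_attn_mask, (0, y_len), value=1.0)


scripted = torch.jit.script(PadMask())
mask = scripted(torch.zeros(4, 4), 2)  # pads 2 columns of 1.0 on the right
```

If the mask is consumed as a boolean tensor afterwards, casting with `.bool()` after the pad keeps the downstream behavior unchanged.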
@L-jasmine
Collaborator

Someone else ran into the same situation before. That said, my export_torch_script.py does indeed only handle Chinese, because of this line: https://github.com/RVC-Boss/GPT-SoVITS/blob/a70e1ad30c072cdbcfb716962abdc8008fa41cc2/GPT_SoVITS/export_torch_script.py#L619
If you're using English, changing it to en or auto should work.
If you still see the same behavior, try upgrading torch and transformers.

@Chi8wah
Author

Chi8wah commented Nov 20, 2024

Someone else ran into the same situation before. That said, my export_torch_script.py does indeed only handle Chinese, because of this line: https://github.com/RVC-Boss/GPT-SoVITS/blob/a70e1ad30c072cdbcfb716962abdc8008fa41cc2/GPT_SoVITS/export_torch_script.py#L619 If you're using English, changing it to en or auto should work. If you still see the same behavior, try upgrading torch and transformers.

I had already changed it to en and it still failed, but upgrading torch fixed the export. I'll try running it tomorrow. Thanks!

@L-jasmine
Collaborator

L-jasmine commented Nov 21, 2024

Did you get it running?
When exporting, it's best to use cpu: a cpu-exported model can run on both cuda and cpu, and it runs just as fast on cuda as a cuda-exported one does. A cuda-exported model, however, cannot run on cpu.
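The cpu-export advice can be sketched with a toy TorchScript module (a hypothetical stand-in, not GPT-SoVITS's actual exporter): an archive saved from a module scripted on cpu is device-agnostic, so it can be loaded with `map_location="cpu"` and then moved to whichever device is present.

```python
import io

import torch


class Toy(torch.nn.Module):
    # Hypothetical stand-in for an exported model.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0


# Script and save on cpu: the resulting archive stays portable.
buf = io.BytesIO()
torch.jit.save(torch.jit.script(Toy().to("cpu")), buf)

# Load on cpu first, then move to cuda only when it is available;
# the same archive serves both targets.
buf.seek(0)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load(buf, map_location="cpu").to(device)
out = model(torch.ones(3, device=device))
```

Saving from a module that holds cuda tensors bakes the cuda device into the archive, which is why the reverse direction does not work.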

@Chi8wah
Author

Chi8wah commented Nov 21, 2024

Did you get it running?
When exporting, it's best to use cpu: a cpu-exported model can run on both cuda and cpu, and it runs just as fast on cuda as a cuda-exported one does. A cuda-exported model, however, cannot run on cpu.

Thanks for the guidance!
I haven't run it yet today: last night I found that inference on an i9 + 4090 is far faster than on my previous 8-core AMD EPYC 7742 + A100, so I'm re-running QPS load tests now and will probably get back to this in a few days.
Also, I most likely won't run on cpu (unless it won't run on cuda), so exporting with cuda should be fine for me, right?
