[PaddlePaddle/PaddleOCR] Questions about using TensorRT

2024-05-20
  • System Environment: sudo nvidia-docker run --name ppocr -v $PWD:/paddle --shm-size=64G --network=host -it registry.baidubce.com/paddlepaddle/paddle:2.1.3-gpu-cuda11.2-cudnn8 /bin/bash
  • Version: Paddle: PaddleOCR: Related components:
  • Command Code:
  • Complete Error Message:

Following the documentation on PP-OCR model inference with the Python engine, when I run inference with python3 tools/infer/predict_system.py and set use_tensorrt=True, the following error is raised:

Traceback (most recent call last):
  File "tools/infer/predict_system.py", line 232, in <module>
    main(args)
  File "tools/infer/predict_system.py", line 184, in main
    text_sys = TextSystem(args)
  File "tools/infer/predict_system.py", line 45, in __init__
    self.text_detector = predict_det.TextDetector(args)
  File "/paddle/PaddleOCR/tools/infer/predict_det.py", line 107, in __init__
    args, 'det', logger)
  File "/paddle/PaddleOCR/tools/infer/utility.py", line 303, in create_predictor
    predictor = inference.create_predictor(config)
ValueError: (InvalidArgument) Pass tensorrt_subgraph_pass has not been registered. Please use the paddle inference library compiled with tensorrt or disable the tensorrt engine in inference configuration!
  [Hint: Expected Has(pass_type) == true, but received Has(pass_type):0 != true:1.] (at /paddle/paddle/fluid/framework/ir/pass.h:216)

If use_tensorrt is left at its default of False, the error does not occur, but inference is too slow. Since a TensorRT acceleration option is provided here, it should in principle be usable, so why does it fail? Thanks for clarifying.

Answers


Please make sure TensorRT is installed correctly; we also recommend using Paddle 2.2.
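For context, below is a minimal sketch of the Paddle Inference calls that tools/infer/utility.py effectively makes when use_tensorrt=True; the model paths and sizes are placeholders, not taken from this thread. The tensorrt_subgraph_pass is only registered when the installed paddlepaddle-gpu wheel was built against TensorRT and the TensorRT libraries can be loaded, which is why create_predictor raises the error above otherwise.

# Sketch of the predictor setup behind use_tensorrt=True (placeholder paths).
from paddle import inference

config = inference.Config("det_infer/inference.pdmodel",
                          "det_infer/inference.pdiparams")
config.enable_use_gpu(500, 0)          # 500 MB initial GPU memory pool, device 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,            # 1 GB TensorRT workspace (illustrative)
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=inference.PrecisionType.Float32,
    use_static=False,
    use_calib_mode=False)
# This call only succeeds when the paddle wheel was compiled with TensorRT;
# otherwise it raises the tensorrt_subgraph_pass error shown in the traceback.
predictor = inference.create_predictor(config)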


After downloading and installing TensorRT, the error no longer occurs.

Feedback on the experience:
1. Since users are pointed at a Docker container, the container should ship with TensorRT; TensorRT has to be downloaded from an overseas site, and installing and configuring it is fairly troublesome.
2. Don't say the TensorRT package is too large. At roughly 1 GB it is indeed big, but compared with the official container registry.baidubce.com/paddlepaddle/paddle:2.2.2-gpu-cuda11.2-cudnn8 at 12.8 GB, it is negligible.


Thanks for the feedback, we will keep improving this.


About the TensorRT speedup: I saw this raised in other issues as well, and the official reply was that the detection model is expected to speed up by roughly 20-30%, with a smaller gain for the recognition model. In my experiments there is almost no difference. Is that normal?

use_tensorrt=False

root@k8s-m-10-13-84-6:/paddle/PaddleOCR# python3 tools/infer/predict_system.py
[2022/03/01 09:12:58] root DEBUG: Predict time of ./data/5.jpg: 1.800s
[2022/03/01 09:12:59] root DEBUG: Predict time of ./data/2.png: 0.374s
[2022/03/01 09:12:59] root DEBUG: Predict time of ./data/4.jpg: 0.372s
[2022/03/01 09:12:59] root DEBUG: Predict time of ./data/3.png: 0.152s
[2022/03/01 09:13:00] root DEBUG: Predict time of ./data/1.jpg: 0.252s

use_tensorrt=True

root@k8s-m-10-13-84-6:/paddle/PaddleOCR# python3 tools/infer/predict_system.py
[2022/03/01 09:13:58] root DEBUG: Predict time of ./data/5.jpg: 0.921s
[2022/03/01 09:13:59] root DEBUG: Predict time of ./data/2.png: 0.349s
[2022/03/01 09:13:59] root DEBUG: Predict time of ./data/4.jpg: 0.365s
[2022/03/01 09:13:59] root DEBUG: Predict time of ./data/3.png: 0.148s
[2022/03/01 09:14:00] root DEBUG: Predict time of ./data/1.jpg: 0.231s

You could run it in a for loop for 100 consecutive iterations and compute the average prediction time.
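For reference, a rough timing sketch along these lines (not the script the poster used). It assumes it is run from the PaddleOCR root so that tools/infer is importable, and that the model directories, use_tensorrt, and other options are passed on the command line as with predict_system.py. Because TensorRT builds its engines on the first pass, a warm-up round is excluded from the average:

# Rough timing sketch; image paths follow the logs above.
import time
import cv2
from tools.infer import predict_system, utility

args = utility.parse_args()                  # model dirs, use_tensorrt, ... from CLI
text_sys = predict_system.TextSystem(args)
image_paths = ["./data/1.jpg", "./data/2.png", "./data/3.png",
               "./data/4.jpg", "./data/5.jpg"]
images = [cv2.imread(p) for p in image_paths]

# Warm-up pass: the first TensorRT run also builds the engine,
# so it should not be counted in the average.
for img in images:
    text_sys(img)

loops = 100
start = time.time()
for _ in range(loops):
    for img in images:
        dt_boxes, rec_res = text_sys(img)    # detection + recognition
elapsed = time.time() - start
print("avg time per image over %d loops: %.3fs"
      % (loops, elapsed / (loops * len(images))))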


@andyjpaddle Here is the comparison after looping 100 times over the 5 images:

use_tensorrt=False
[2022/03/01 09:58:36] root DEBUG: avg time of 100 loops for 5 images :0.397s
use_tensorrt=True
[2022/03/01 09:54:29] root DEBUG: avg time of 100 loops for 5 images :0.357s

There is really almost no difference. Could you help analyze the possible causes?


Judging from these results you are getting roughly a 10% speedup; the exact gain may depend on the machine configuration.


@andyjpaddle Is there any way to further improve prediction speed?

Parameter settings: det_limit_side_len = 1510, det_limit_type='max'. Some of the images have a fairly high resolution; if the size limit is set smaller, much of the text in them cannot be recognized.

All other parameters are left at their defaults, and I am already using the lightweight models (ch_PP-OCRv2_det_infer, ch_PP-OCRv2_rec_infer). I am hoping to get prediction time down to a few tens of milliseconds, thanks!
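For context on why this setting dominates the cost: with det_limit_type='max', images are resized so that the longer side does not exceed det_limit_side_len before detection, so a 1510-pixel limit still produces a large detector input for high-resolution photos. A simplified sketch of that resize rule (mirroring PaddleOCR's DetResizeForTest preprocessing; the function name here is only illustrative):

# Simplified view of how det_limit_type / det_limit_side_len set the detector input size.
def det_resize_shape(h, w, limit_side_len=1510, limit_type="max"):
    if limit_type == "max":
        # cap the longer side at limit_side_len
        ratio = float(limit_side_len) / max(h, w) if max(h, w) > limit_side_len else 1.0
    else:
        # 'min': upscale so the shorter side reaches limit_side_len
        ratio = float(limit_side_len) / min(h, w) if min(h, w) < limit_side_len else 1.0
    # network input sides are rounded to multiples of 32
    resize_h = max(int(round(h * ratio / 32) * 32), 32)
    resize_w = max(int(round(w * ratio / 32) * 32), 32)
    return resize_h, resize_w

# A 3000x2000 photo is fed to the detector at roughly 1504x992 with these settings,
# so lowering det_limit_side_len is the most direct speed/accuracy trade-off.
print(det_resize_shape(3000, 2000))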


You could consider using a quantized model, although recognition accuracy may drop a little.
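For anyone trying this, a hedged example of pointing the pipeline at the slim (quantized) PP-OCRv2 models through the paddleocr package; the model directory names follow PaddleOCR's released model list and are assumptions here, not taken from this thread:

# Hedged sketch: swap in the quantized (slim) detection/recognition models.
# Directory names are assumptions based on PaddleOCR's released model list.
from paddleocr import PaddleOCR

ocr = PaddleOCR(det_model_dir="./ch_PP-OCRv2_det_slim_quant_infer",
                rec_model_dir="./ch_PP-OCRv2_rec_slim_quant_infer",
                det_limit_side_len=1510,
                det_limit_type="max",
                use_gpu=True)
result = ocr.ocr("./data/1.jpg", cls=False)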


OK, I'll give it a try.


@Biaocsu Hi, did you end up trying the quantized model? Did performance improve?


@inewlife No, because they said the quantized model's recognition accuracy would drop. I'd rather ensure accuracy first and then work on speed.