RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format
even though the capsfilter property is set to RGBA.
Since it's not a detector, I get the output from tensor_meta. I successfully got the frames in other projects that differ from this one only in this line:
current project: osdsinkpad = pgie.get_static_pad("src")
working projects: osdsinkpad = tiler.get_static_pad("sink")
I could not use the working projects' line with this regression model because then the code never enters the l_user loop. The problem is: when I use osdsinkpad = pgie.get_static_pad("src"), the code does loop over l_user but cannot retrieve the frame; when I use osdsinkpad = tiler.get_static_pad("sink"), I can retrieve the frame but the code does not loop over l_user.
• Hardware Platform (GPU)
• DeepStream Version 5.0
• TensorRT Version 7.0.0
Hello,
I did share the code privately with Fiona before
I just sent a new private message to both you and @Fiona.Chen with two links, code and model file
Actually, I mean we support converting an ONNX model to a TensorRT model; it needs some model property settings in the configuration.
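For reference, a minimal nvinfer configuration sketch for an ONNX model whose raw output you read yourself (file names and dimensions here are placeholders, not taken from this thread). `network-type=100` ("other") tells nvinfer not to run detector/classifier parsing, and `output-tensor-meta=1` attaches the raw output tensors as user meta so a pad probe can read them:

```ini
[property]
gpu-id=0
# placeholder file names
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
# 100 = "other": no built-in output parsing
network-type=100
# attach raw output tensors as NvDsInferTensorMeta user meta
output-tensor-meta=1
```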
The code you provided to convert the model is fine, but can you provide the whole code? This seems to be only part of it.
It seems you forgot to put test_config in the directory?
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# python convert_to_onnx.py
Traceback (most recent call last):
File "convert_to_onnx.py", line 25, in <module>
net = CrowdCounter(cfg_GPU_ID, model_net, pretrained=False)
File "/workspace/work/dsuse/convert_cc_onnx/models/CC_LCM.py", line 17, in __init__
from .SCC_Model.VGG16_LCM_REG import VGG16_LCM_REG as net
File "/workspace/work/dsuse/convert_cc_onnx/models/SCC_Model/VGG16_LCM_REG.py", line 9, in <module>
from test_config import cfg
ModuleNotFoundError: No module named 'test_config'
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# vim models/SCC_Model/VGG16_LCM_REG.py
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# grep -Rns test_config
convert_to_onnx.py:5:# from test_config import cfg
Binary file models/__pycache__/CC_DM.cpython-37.pyc matches
Binary file models/__pycache__/CC_LCM.cpython-37.pyc matches
models/CC_DM.py:5:from test_config import cfg
Binary file models/SCC_Model/__pycache__/VGG16_LCM_REG.cpython-36.pyc matches
Binary file models/SCC_Model/__pycache__/VGG16_LCM_REG.cpython-37.pyc matches
models/SCC_Model/VGG16_LCM_REG.py:9:from test_config import cfg
It seems we have some misunderstanding. The code converts the model to an ONNX model successfully now, but what I actually mean is: how are you converting the ONNX model to a TensorRT engine?
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# python convert_to_onnx.py
image shape torch.Size([1, 3, 720, 1280])
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# ls
__pycache__ cfg.py convert_to_onnx.py crowd_dynamic_torch1_4_opset11_RTdocker.onnx datasets misc models sample_720p.jpg test_config.py
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# ll
total 220540
drwxrwxr-x 6 1001 1001 4096 Oct 11 09:05 ./
drwxr-xr-x 9 1001 1001 4096 Oct 11 07:43 ../
drwxr-xr-x 2 root root 4096 Oct 11 09:05 __pycache__/
-rw-rw-r-- 1 1001 1001 684 Oct 11 09:05 cfg.py
-rw-rw-r-- 1 1001 1001 1509 Jan 10 2021 convert_to_onnx.py
-rw-r--r-- 1 root root 225726606 Oct 11 09:05 crowd_dynamic_torch1_4_opset11_RTdocker.onnx
drwxrwxr-x 7 1001 1001 4096 Oct 11 07:19 datasets/
drwxrwxr-x 3 1001 1001 4096 Oct 11 07:19 misc/
drwxrwxr-x 5 1001 1001 4096 Oct 11 09:03 models/
-rw-r--r-- 1 root root 59724 Oct 11 07:44 sample_720p.jpg
-rw-rw-r-- 1 1001 1001 738 Oct 11 09:02 test_config.py
What you have is the script to convert the model to ONNX. I then convert the ONNX model to an engine using the TensorRT 7.0.0 tar file installation; I used this command inside this path: tarfile/TensorRT-7.0.0.11/bin
If you add a probe on the nvinfer src pad, the caps have video format NV12, but the Python bindings only support getting the frame from an NvBufSurface in RGBA format. When you add the probe on the tiler, the format has already been converted from NV12 to RGBA; that's why you can get the frame there.
You can add the following to your script to get the pipeline graph, before the line:
print("Exiting app\n")
Modify the header:
-from os import path, mkdir
+from os import path, mkdir, environ
and add this line after the header imports:
environ['GST_DEBUG_DUMP_DOT_DIR'] = '/tmp'
Then you can get the dot file. Install graphviz to convert the dot file to a PNG for viewing:
dot -Tpng -o /tmp/pipeline.png /tmp/pipeline.dot
About why you cannot loop over l_user: the user metadata is added to the frame metadata or to the object metadata based on process-mode, and it is filled from the inference output tensor data. It depends on your model.