Trying to access stream frames from a project with a regression model

I’m trying to save the frames from a DeepStream project.
The model is the one mentioned in this topic:

How to make a parses function for my regression model - #24 by aya95

I tried to access the frame with the line below:

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

but I got the following error:

RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

even though the capsfilter property is set to RGBA.

Since it’s not a detector, I get the output from tensor_meta. I successfully got the frames in other projects that differ from this one only by this line:
current project: osdsinkpad = pgie.get_static_pad("src")
working projects: osdsinkpad = tiler.get_static_pad("sink")

I could not use that line with this regression model because the code then never enters the l_user loop.

The problem: when I use
osdsinkpad = pgie.get_static_pad("src") to make the code loop over l_user, I cannot retrieve the frame;
but when I use osdsinkpad = tiler.get_static_pad("sink"),
I can retrieve the frame, but then the code does not loop over l_user.

• Hardware Platform (GPU)
• DeepStream Version 5.0
• TensorRT Version 7.0.0


Please update to the latest DeepStream version, 5.1.

Can your app run with the model without frame access? Can you show your complete source code?

Yeah, it can run without the frame-accessing line. I sent you the link to my code in a private message.

Hi @Fiona.Chen
we’re eagerly awaiting your reply.
Thank you

Hi aya95,
I did not find the model you mentioned in
How to make a parses function for my regression model - #24 by aya95
Did you share it privately? Can you share it again?

I did share the code privately with Fiona before.
I just sent a new private message to both you and @Fiona.Chen with two links: the code and the model file.

OK, got it. I got the model.
I will try to reproduce the issue and post an update once there is progress.

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 6.1 got compute 7.5, please rebuild.
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: engine.cpp (1407) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.

Which GPU are you using? I am using a Tesla P4. Is it possible for you to share the model file, not the engine file?

I’m using a 2060 SUPER, but I have also run this project on a system with a 2080.

I replied to the previous message with a link to the original model, before conversion.

It would be better if you share the infer configuration properties you used for converting the ONNX model, thanks.

I don’t actually get what you mean by "the infer configuration property",
but if you mean the code that I converted the model with, here it is:

img_transform = standard_transforms.Compose([
    # (transform list elided in the original post)
])
img = ...  # (image loading elided in the original post)
img = img_transform(img)
dynamic_axes = {"input": {0: "batch_size"}, "output": {0: "batch_size"}}
with torch.no_grad():
    img = Variable(img[None, :, :, :]).cuda()
    print("image shape", img.shape)
    pred_map = net.test_forward(img)

    torch.onnx.export(net, (img, pred_map), 'crowd_dynamic_torch1_4_opset11_RTdocker.onnx',
                      input_names=['input'], do_constant_folding=True, output_names=['output'],
                      export_params=True, opset_version=11, dynamic_axes=dynamic_axes)

@amycao Is that what you want?

Actually I mean that we support converting an ONNX model to a TensorRT model, which needs some model property settings in the configuration.
The code you provided to convert the model is fine, but can you provide the whole code? Is this only a part of it?
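For reference, a minimal Gst-nvinfer [property] section for an ONNX regression model might look like the sketch below. Every value here is a placeholder assumption, not taken from this project; network-type=100 together with output-tensor-meta=1 is the usual way to skip the built-in detector/classifier parsing and attach the raw tensor output as user meta instead:

```ini
[property]
gpu-id=0
onnx-file=crowd_dynamic_torch1_4_opset11_RTdocker.onnx
model-engine-file=dynamic_crowd.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 100 = "other": no built-in bbox/classifier output parsing
network-type=100
# attach raw output tensors as NvDsUserMeta for the probe to read
output-tensor-meta=1
```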

Here’s the code used to convert the model to ONNX;
you’ll find it at this drive path:

Put the two files alongside the other folders (for more clarity, check how they’re imported in the convert Python file).

We’re awaiting your reply ^^

It seems you missed putting test_config in the directory?
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# python
Traceback (most recent call last):
File "", line 25, in
net = CrowdCounter(cfg_GPU_ID, model_net, pretrained=False)
File "/workspace/work/dsuse/convert_cc_onnx/models/", line 17, in __init__
from .SCC_Model.VGG16_LCM_REG import VGG16_LCM_REG as net
File "/workspace/work/dsuse/convert_cc_onnx/models/SCC_Model/", line 9, in
from test_config import cfg
ModuleNotFoundError: No module named 'test_config'
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# vim models/SCC_Model/
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# grep -Rns "from test_config import cfg"
Binary file models/__pycache__/CC_DM.cpython-37.pyc matches
Binary file models/__pycache__/CC_LCM.cpython-37.pyc matches
models/ test_config import cfg
Binary file models/SCC_Model/__pycache__/VGG16_LCM_REG.cpython-36.pyc matches
Binary file models/SCC_Model/__pycache__/VGG16_LCM_REG.cpython-37.pyc matches
models/SCC_Model/ test_config import cfg
models/ test_config import cfg

I’ve updated the drive folder with the needed script, please try it

It seems we have some misunderstanding: the code converts the model to an ONNX model successfully now. But what I actually mean is: how are you converting the ONNX model to a TensorRT engine?

root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# python
image shape torch.Size([1, 3, 720, 1280])
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# ls
__pycache__ crowd_dynamic_torch1_4_opset11_RTdocker.onnx datasets misc models sample_720p.jpg
root@8d5707cdf585:/workspace/work/dsuse/convert_cc_onnx# ll
total 220540
drwxrwxr-x 6 1001 1001 4096 Oct 11 09:05 ./
drwxr-xr-x 9 1001 1001 4096 Oct 11 07:43 ../
drwxr-xr-x 2 root root 4096 Oct 11 09:05 __pycache__/
-rw-rw-r-- 1 1001 1001 684 Oct 11 09:05
-rw-rw-r-- 1 1001 1001 1509 Jan 10 2021
-rw-r--r-- 1 root root 225726606 Oct 11 09:05 crowd_dynamic_torch1_4_opset11_RTdocker.onnx
drwxrwxr-x 7 1001 1001 4096 Oct 11 07:19 datasets/
drwxrwxr-x 3 1001 1001 4096 Oct 11 07:19 misc/
drwxrwxr-x 5 1001 1001 4096 Oct 11 09:03 models/
-rw-r--r-- 1 root root 59724 Oct 11 07:44 sample_720p.jpg
-rw-rw-r-- 1 1001 1001 738 Oct 11 09:02

What you have is the script to convert the model to ONNX. I then convert the ONNX model to an engine using the TensorRT 7.0.0 tar-file installation; I used this command inside this path: tarfile/TensorRT-

./trtexec --onnx=crowd_counting.onnx --explicitBatch --saveEngine=dynamic_crowd.engine --workspace=5120 --fp16 --optShapes=input:3x3x720x1280 --maxShapes=input:5x3x720x1280 --minShapes=input:1x3x720x1280 --shapes=input:3x3x720x1280

If you add the probe on the nvinfer src pad, the caps there have video format NV12, but in the Python bindings we only support getting the frame from an nvbufsurface with RGBA format. When you add the probe on the tiler, the format has already been converted from NV12 to RGBA, which is why you can get the frame there.
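A sketch of one way to reconcile the two requirements (pseudocode, element names and linking details assumed, following the common DeepStream Python app layout): convert to RGBA right after pgie, so a probe attached downstream of the conversion sees both the tensor meta and an RGBA buffer:

```
# pseudocode sketch, not a drop-in: element creation/linking omitted
pgie -> nvvideoconvert -> capsfilter(caps = "video/x-raw(memory:NVMM), format=RGBA") -> tiler

# attach the probe downstream of the capsfilter, e.g. on its src pad;
# the buffer there is RGBA and still carries the batch/tensor meta from pgie
probe_pad = capsfilter.get_static_pad("src")
probe_pad.add_probe(Gst.PadProbeType.BUFFER, probe_callback, 0)
```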
You can add this line to your script to dump the pipeline graph; place it before the line
print("Exiting app\n")

Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")

modify the header:
-from os import path, mkdir
+from os import path, mkdir, environ

add this line after the header imports:
environ['GST_DEBUG_DUMP_DOT_DIR'] = '/tmp'
Then you can get the dot file. Install graphviz to convert the dot file to a PNG file for viewing:
dot -Tpng -o /tmp/pipeline.png /tmp/

About why you cannot loop over l_user: the user meta data is added either to the frame meta data or to the object meta data based on the process-mode, and it is filled with the inference output tensor data; which one it is attached to depends on your model.
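The loop itself is just a linked-list walk. The sketch below uses a plain-Python stand-in for the pyds GList nodes (GListNode and collect_user_meta are illustrative names, not part of pyds); the real cast calls are noted in comments:

```python
# Stand-in sketch of the l_user traversal a DeepStream pad probe performs.
# GListNode mimics the C GList nodes that pyds exposes; in real probe code
# each node's .data must be cast, e.g. pyds.NvDsUserMeta.cast(l_user.data).

class GListNode:
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

def collect_user_meta(frame_user_meta_list):
    """Walk a frame_user_meta_list-style linked list, as the probe's
    'while l_user is not None' loop does."""
    results = []
    l_user = frame_user_meta_list
    while l_user is not None:
        results.append(l_user.data)  # real code: pyds.NvDsUserMeta.cast(...)
        l_user = l_user.next
    return results

# For full-frame inference (nvinfer process-mode=1) the tensor output lands on
# frame_meta.frame_user_meta_list; for object-mode (secondary) inference it
# lands on obj_meta.obj_user_meta_list instead. If the probe is reading the
# wrong list, the loop body never runs, even though the meta exists.
meta_list = GListNode("tensor_output_a", GListNode("tensor_output_b"))
print(collect_user_meta(meta_list))  # → ['tensor_output_a', 'tensor_output_b']
```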

Here’s the graph that resulted from doing this: