Example Graphs work, but remaking them from scratch shows Unknown Type errors for ObjectCounter

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) NVIDIA T400 4GB
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) -
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) 515.65.01
• Issue Type( questions, new requirements, bugs) Unknown type: nvidia::deepstream::NvDsPerClassObjectCounting
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
0. I’m using the devel DeepStream 6.2 Docker container

  1. Open the deepstream-test1.yaml project → replace VideoRenderer with NvRtspOut (I then use GStreamer to read the RTSP stream; see the gst-launch sketch below the error log)
  2. Create a new graph and build the exact same graph as in deepstream-test1.yaml
  3. I get the following error:
2023-05-23 21:32:23.288 ERROR gxf/std/type_registry.cpp@48: Unknown type: nvidia::deepstream::NvDsPerClassObjectCounting

2023-05-23 21:32:23.288 ERROR gxf/std/yaml_file_loader.cpp@351: Could not add component of type 'nvidia::deepstream::NvDsPerClassObjectCounting' to entity.

2023-05-23 21:32:23.288 ERROR gxf/gxe/gxe.cpp@245: LoadApplication Error: GXF_FACTORY_UNKNOWN_CLASS_NAME
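For reference, this is roughly how I read the stream on the client side. It is only a sketch: it assumes NvRtspOut's default H.264 encoding and the rtsp://localhost:8554/ds-test mount point that the pipeline prints at startup.

# Pull the RTSP stream published by NvRtspOut and display it locally
gst-launch-1.0 rtspsrc location=rtsp://localhost:8554/ds-test latency=100 ! \
  rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink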

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I initially discovered this when trying to build up deepstream-test2.yaml one component at a time from the first sample project. But then I noticed that even if I build test1 from scratch, it fails as soon as I add the object tracker.

Why does it work in the sample project, but not in the ones I create?

I’ve made some progress… copying the extension dependency from the deepstream_test1.yaml file and pasting it into my custom file fixed the issue, even though that dependency was automatically generated by Composer. I literally opened my .yaml file → deleted the NvDsInferenceUtilsExt extension entry → pasted the identical text from the test1 file. Both entries have identical text and both were generated by Composer, so I'm not sure why this manual change made the difference.

These are the 3 lines I replaced:

- extension: NvDsInferenceUtilsExt
  uuid: 27856a43-5ad4-4d8e-be36-0ec2cf9bbb58
  version: 1.1.1

I then added an Object Tracker, which was also failing… the same method fixed that too. Could someone please explain this? It makes no sense to me why it worked.
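For anyone checking the same thing, this is the kind of comparison I mean (a sketch; the first path is from my setup, and path/to/deepstream-test1.yaml stands in for wherever your copy of the sample graph lives):

# Print the NvDsInferenceUtilsExt dependency entry from both graphs;
# the uuid and version fields should match exactly.
grep -A2 'extension: NvDsInferenceUtilsExt' \
  /opt/nvidia/deepstream/deepstream-6.2/lukasz_test1.yaml \
  path/to/deepstream-test1.yaml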

I’ve tried replacing VideoRenderer with NvRtspOut in deepstream-test1.yaml, and it works. I can't reproduce your failure.

......
2023-05-24 06:55:09.860 INFO  extensions/nvdsinference/nvinferbin.hpp@56: bin_add: nvinferbin object_detector

2023-05-24 06:55:09.861 INFO  extensions/nvdsvisualization/nvosdbin.hpp@32: create_element: nvosdbin onscreen_display

2023-05-24 06:55:09.862 INFO  extensions/nvdsvisualization/nvosdbin.hpp@56: bin_add: nvosdbin onscreen_display

2023-05-24 06:55:09.862 INFO  extensions/nvdsoutputsink/nvrtspoutsinkbin.hpp@31: create_element: nvrtspoutsinkbin nv_ds_rtsp_out5

2023-05-24 06:55:09.862 INFO  extensions/nvdsoutputsink/nvrtspoutsinkbin.hpp@55: bin_add: nvrtspoutsinkbin nv_ds_rtsp_out5


*** NvDsRtspOut/nv_ds_rtsp_out5: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

0:00:04.760769977  1377 0x7fbc5000b380 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/tmp/ds.deepstream-test1/gxf/sample_models/primary.resnet10.caffemodel_b1_gpu0_int8.engine failed

0:00:04.813198422  1377 0x7fbc5000b380 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/tmp/ds.deepstream-test1/gxf/sample_models/primary.resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild

0:00:04.813696787  1377 0x7fbc5000b380 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files

0:00:37.251546579  1377 0x7fbc5000b380 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /tmp/ds.deepstream-test1/gxf/sample_models/primary.resnet10.caffemodel_b1_gpu0_int8.engine successfully

0:00:37.317667262  1377 0x7fbc5000b380 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer_bin_nvinfer> [UID 1]: Load new model:/tmp/ds.deepstream-test1/gxf/sample_models/config_infer_primary.txt sucessfully


WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /tmp/ds.deepstream-test1/gxf/sample_models/primary.resnet10.caffemodel_b1_gpu0_int8.engine open error
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

Running...
****** NvDsScheduler Runtime Keyboard controls:
p: Pause pipeline
r: Resume pipeline
q: Quit pipeline
2023-05-24 06:55:45.888 INFO  extensions/nvdsbase/nvds_scheduler.cpp@398: NvDsScheduler Pipeline ready

2023-05-24 06:55:46.077 INFO  extensions/nvdsbase/nvds_scheduler.cpp@383: NvDsScheduler Pipeline running

Source 0: Frame Number = 0 Total objects = 10 [ Car:4 Person:6 ]
Source 0: Frame Number = 1 Total objects = 8 [ Car:3 Person:5 ]
Source 0: Frame Number = 2 Total objects = 11 [ Car:5 Person:6 ]
Source 0: Frame Number = 3 Total objects = 10 [ Car:5 Person:5 ]
Source 0: Frame Number = 4 Total objects = 12 [ Car:5 Person:7 ]
Source 0: Frame Number = 5 Total objects = 12 [ Car:5 Person:7 ]
Source 0: Frame Number = 6 Total objects = 8 [ Car:4 Person:4 ]
Source 0: Frame Number = 7 Total objects = 8 [ Car:4 Person:4 ]
Source 0: Frame Number = 8 Total objects = 10 [ Car:5 Person:5 ]
Source 0: Frame Number = 9 Total objects = 8 [ Car:4 Person:4 ]
Source 0: Frame Number = 10 Total objects = 10 [ Car:5 Person:5 ]
Source 0: Frame Number = 11 Total objects = 10 [ Car:5 Person:5 ]
......
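
Side note on the "deserialize engine ... failed" warnings above: they only mean the cached TensorRT engine did not exist yet, so nvinfer builds and serializes it on the first run. To skip the rebuild on later runs, the nvinfer config can point at the generated engine file, roughly like this excerpt of config_infer_primary.txt (a sketch reusing the path from the log above):

[property]
# Reuse the engine serialized on the first run instead of rebuilding it
model-engine-file=/tmp/ds.deepstream-test1/gxf/sample_models/primary.resnet10.caffemodel_b1_gpu0_int8.engine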

What does the following error mean?

Unknown type: nvidia::deepstream::NvDsPerClassObjectCounting

It shows up when executing the graph, right after:

INFO  gxf/std/yaml_file_loader.cpp@129: Loading GXF entities from YAML file '/opt/nvidia/deepstream/deepstream-6.2/lukasz_test1.yaml'...

where “lukasz_test1.yaml” is the built-from-scratch copy of deepstream_test1.yaml.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

It seems there is something wrong with the graph YAML file. Please compare it with the original deepstream_test1.yaml.
