Both deepstream-app -c source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt and
deepstream-yolo-app Tegra sample_720p.h264 config/yolov3.txt run well for me.
My environment is JetPack 4.1.1 and DeepStream 3.0 with deepstream_reference_apps, on Xavier.
Hi,
The two apps run different pipelines. In deepstream-app, we have a primary detector and secondary classifiers. Do you want to replace the existing models, or insert into the pipeline? Please check the application architecture in the document and give more information about your desired pipeline.
Hi DaneLLL,
I understand they are two apps running different pipelines; that is why I want to replace the existing models with a YOLO model and the related pipeline.
After the modification, deepstream-app should hopefully display multiple streams at high FPS (over 30 fps),
based on the YOLO model (it may need a reduced number of labels).
Re-training is possible. Which model do you want to use?
For YOLO, you can find the re-training steps with Darknet on the author’s webpage:
[url]https://pjreddie.com/darknet/yolo/[/url]
If I have a pre-trained model (ResNet), how can I convert it into an input (XXX.engine) for deepstream-app?
Or, what system can re-train the ResNet model and convert the trained model into an input for TensorRT,
such that the result can be transferred into an XXX.engine?
@AastaLLL,
I am sorry; your reply arrived at the same time I posted the new questions.
We hope for a pre-trained or re-trained model that can be used directly by deepstream-app.
ResNet can be used by deepstream-app directly.
Please start from the .prototxt and .caffemodel files located at ‘/home/nvidia/Model/ResNet_18’,
and use our training app DIGITS: https://github.com/NVIDIA/DIGITS.
When the new model is available, updating the model information in the config should be enough.
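For example, the model entries in the nvinfer configuration file referenced by the [primary-gie] group would point at the new files. A sketch only: the exact filenames under /home/nvidia/Model/ResNet_18 are placeholders, and the key names should be checked against the sample configs shipped with DeepStream 3.0:

```ini
# Illustrative nvinfer model settings; filenames are placeholders
model-file=/home/nvidia/Model/ResNet_18/ResNet_18.caffemodel
proto-file=/home/nvidia/Model/ResNet_18/ResNet_18.prototxt
labelfile-path=/home/nvidia/Model/ResNet_18/labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
```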
I run deepstream-app -c deepstream_app_config_yoloV3.txt and get these results:
** WARN: <parse_tiled_display:1018>: Unknown key 'gpu-id' for group [tiled-display]
** WARN: <parse_source:359>: Unknown key 'gpu-id' for group [source0]
** WARN: <parse_streammux:418>: Unknown key 'gpu-id' for group [streammux]
** WARN: <parse_streammux:418>: Unknown key 'cuda-memory-type' for group [streammux]
** WARN: <parse_sink:962>: Unknown key 'gpu-id' for group [sink0]
** WARN: <parse_osd:599>: Unknown key 'gpu-id' for group [osd]
** WARN: <parse_gie:783>: Unknown key 'gpu-id' for group [primary-gie]
Using winsys: x11
Using TRT model serialized engine /home/nvidia/deepstream-plugins/sources/apps/trt-yolo/trt-yolo-app crypto flags(0)
deepstream-app: engine.cpp:868: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
There is no "engine.cpp" file; how can I fix this?
For my deepstream_app_config_yoloV3.txt, please see attachment 1.
For my config_infer_primary_YoloV3.txt, please see attachment 2.
deepstream-app -c deepstream_app_config_yoloV3.txt works well now,
after I changed model-engine-file=/home/nvidia/deepstream-plugins/data/yolo/yolov3-kINT8-batch1.engine in deepstream_app_config_yoloV3.txt.
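For reference, the relevant setting now reads as follows. This assumes the key sits in the [primary-gie] group as in the standard deepstream-app sample configs; the other keys in the group are omitted here:

```ini
[primary-gie]
enable=1
model-engine-file=/home/nvidia/deepstream-plugins/data/yolo/yolov3-kINT8-batch1.engine
```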
As mentioned in the README - https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps#note
Have a look at the deepstream_config_file_parser.c (line 98), deepstream_app_config_parser.c, and deepstream_app.c source files in the SDK. The code has been annotated with comments on how to add the ds-example plugin to the pipeline in deepstream-app.
You will need to implement the corresponding config-file parser code for the NvYolo plugin. The corresponding ds-example references are the deepstream_dsexample.c and deepstream_dsexample.h files in the SDK.
Trace the execution flow of how the ds-example plugin is added to the pipeline and do the same for the NvYolo plugin.
Make the corresponding changes in the deepstream-app config file, similar to what you do to add the ds-example plugin to the pipeline.
This procedure holds for any custom GStreamer plugin you have implemented and want to integrate with deepstream-app.
There are only deepstream_config_file_parser.c and deepstream_app.c in the SDK, but no deepstream_app_config_parser.c; is that OK? (Both deepstream-app and deepstream-yolo-app work well.)
I have looked at both files; they parse the config file and add the related properties to the pipeline. Am I right?
3) Can I modify the corresponding config file and reuse the original parser code for the NvYolo plugin?
deepstream_dsexample.c in the SDK applies dsexample_bin; how do I relate it to NvYolo, so that deepstream_dsexample.c can act as an NvYolo plugin?
Is the "ds-example plugin" deepstream_dsexample.c or deepstream_app.c?
If it is deepstream_app.c, it adds many config-file properties to the pipeline (about 100 pipeline operations); which of them need to be modified for the NvYolo plugin?
You can follow the dsexample code to add the required parser or updates in the deepstream-app.
For example, you can check the attachment on how to enable nvyolo in the deepstream-app in full-frame mode.
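The attachment itself is not reproduced here. As a rough idea only, a config group for a custom plugin usually mirrors the [ds-example] group in the deepstream-app config, so the nvyolo group might look like this (the group and key names are hypothetical, not taken from the actual attachment):

```ini
[nv-yolo]
enable=1
# process the whole frame rather than only detected objects
full-frame=1
```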