When I run deepstream-test2, it stops here:

What should I do to get it going?


Which platform are you running on?

Hi, I am running on a Jetson NX.

There is no error in that output. Did you use the original sample or a modified one?

I used the original code; this is my first time testing it. I tried test1 and test2 after running make, and both hang the same way.

Can you clean the cache and run again?
rm ~/.cache/gstreamer-1.0/ -rf

I just tried that command, and it is the same as before: it stops there, and I waited more than 15 minutes with no response.

I used sample_720p.h264.

Can you share the whole log?

-desktop:~/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1$ ./deepstream-test1-app sample_720p.h264
Now playing: sample_720p.h264

Using winsys: x11
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:12.499178028 304 0x5592014780 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/sgs-001/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:12.499467082 304 0x5592014780 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/sgs-001/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:12.513698160 304 0x5592014780 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

I put the sample_720p.h264 file in the test1 path.

Is it missing a pipeline file? Should I create a pipeline file there?

Where was the engine built? You should build it on the same platform you run on.

I changed nothing; I used the resnet10…engine that ships with DeepStream.

Can you move the engine somewhere else so it gets rebuilt, and then try again?

But I ran a YOLO model before, with my own pipeline in FP16, from another path.
Does that affect things here?

OK, I will move the engine to the test1 path and try again later.

Sorry for not being clear: not the test1 path. Move it anywhere else, or you can just delete it directly.
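For reference, a minimal sketch of that cleanup, assuming the default DeepStream 5.1 install location and the engine path shown in the log above (adjust the paths if your install differs):

```shell
# Delete the cached TensorRT engine so nvinfer rebuilds it on this device.
# -f makes this a no-op if the file is already gone.
rm -f ~/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine

# Also clear the GStreamer registry cache before rerunning the sample.
rm -rf ~/.cache/gstreamer-1.0/
```

The first run after this will take noticeably longer, since TensorRT has to rebuild the engine for the local GPU before the pipeline starts.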

I deleted it and ran the deepstream-test1 command. DeepStream is running now, since it shows the power-mode acknowledgment warning,
but it doesn't show any video…

And it still stops at the same place as before.