Different results executing Deepstream on Jetson Nano and AWS EC2 Instance

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• Issue Type (questions, new requirements, bugs): Question


We have run the deepstream app with the tracker and nvdsanalytics enabled on a Jetson Nano and on an AWS EC2 instance, both using YOLOv4. However, we got worse results on the Jetson Nano.

Do you know what can be causing this?

We think it could be the conversion from .weights to .engine, because the Jetson Nano and EC2 instances have different architectures, but we're not sure.

Thanks in advance.

Sorry for the late response, has this issue been resolved? Thanks

Not yet.

Hi @alexmoreno98
Are you using nvinfer or nvinferserver?
Did you use the same DeepStream config on two platforms?

Are the YOLOv4 outputs the same on the two platforms?

We are using nvinfer and the same configuration on both platforms.
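For reference, here is a sketch of the nvinfer keys most relevant to per-platform accuracy drift (the path is a placeholder; the keys are standard Gst-nvinfer config keys):

```ini
[property]
# Engine is regenerated on each target platform from the same weights
model-engine-file=yolov4.engine
# Precision the engine is built with: 0=FP32, 1=INT8, 2=FP16.
# Even with the same value on both platforms, different GPUs can
# produce slightly different numerics at reduced precision.
network-mode=2
```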

We think the tracker behaviour differs slightly between the two executions; maybe that's why we get different results, but we are not 100% sure.
Moreover, could the process of generating the .engine file be the reason for these results? (To generate the .engine file, we use this repository: GitHub - Tianxiaomo/pytorch-YOLOv4: PyTorch, ONNX and TensorRT implementation of YOLOv4)
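For context, our conversion follows the usual flow from that repository's README (a sketch; file names, input size, and batch size are placeholders):

```shell
# Darknet .weights -> ONNX (script from the Tianxiaomo/pytorch-YOLOv4 repo;
# arguments: cfg file, weights file, a sample image, batch size)
python demo_darknet2onnx.py yolov4.cfg yolov4.weights sample.jpg 1

# ONNX -> TensorRT engine, built on the target platform itself
trtexec --onnx=yolov4_1_3_608_608_static.onnx \
        --saveEngine=yolov4.engine \
        --fp16
```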

The .engine is generated on the target platform, right? If it is, I don't think the .engine caused the issue.
Could you disable the tracker and check again?
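For deepstream-app that is just the tracker group in the app config (sketch):

```ini
[tracker]
enable=0
```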

Sorry for the late response.

Yes, the .engine was generated on the target platform, but what I mean is whether engines created on different platforms can produce non-identical results.
With the tracker disabled, the bounding boxes seem to be the same; the only difference is that there is no tracker ID on the bounding-box labels.

We can observe a difference in accuracy because we use the nvdsanalytics plugin to count when an object crosses a line: in some videos the object is counted on the EC2 instance but not on the Jetson Nano. Since the analytics plugin depends on the tracker, maybe the problem is related to the tracker?
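One way two engines built from the same weights can still disagree is numeric precision. A toy sketch (our own illustration, not DeepStream code; all names are made up) showing that the same weights evaluated in FP16 vs FP32 give slightly different scores, which can flip detections whose confidence sits near the threshold:

```python
# Toy example: same weights, different precision, slightly different score.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in for layer weights
x = rng.standard_normal(1000).astype(np.float32)  # stand-in for activations

score_fp32 = float(np.dot(w, x))
score_fp16 = float(np.dot(w.astype(np.float16), x.astype(np.float16)))

# The two scores are close but not identical
print(score_fp32 != score_fp16)
```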

Do you mean accuracy or latency? For latency, yes; for accuracy, I think it's not expected.

Is it possible to share a repro sample?

I mean accuracy: on the two platforms (EC2 instance and Jetson Nano), with the same tracker, DeepStream, infer, and nvdsanalytics configurations, in some videos the nvdsanalytics plugin counts the line crossing for a specific object on one platform but not on the other.
We checked the analyzed videos and, in one of them for example, we saw this: in the frame in which the object crosses the line, the bounding box is lost on the Jetson Nano, so it is not counted, while on the EC2 instance the bounding box is kept, so it is counted. This happens, as I said, running DeepStream with the same configurations on both platforms. The only thing that differs is the engine file, because it must be generated on the target platform (we used the same weights to generate both engine files).
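For context, our line-crossing definition in the nvdsanalytics config looks roughly like this (a sketch; the coordinates, label, and class id are placeholders):

```ini
[line-crossing-stream-0]
enable=1
# First two points give the crossing direction, last two define the line
line-crossing-Exit=789;672;1084;900;851;773;1203;732
class-id=0
# extended=1 treats the line as infinitely long
extended=0
# loose / balanced / strict controls how strictly a crossing is detected
mode=loose
```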

Is there a way to share some images (or files if needed) with you in private?