Jetson Xavier NX performance with yolov8

Hello! I’m trying to improve performance on an NVIDIA Jetson Xavier NX 16GB while running a yolov8n-seg.engine model. Right now I get 22 ms per frame. Can this Jetson provide more performance?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Please refer to this topic for performance improvement.

Hi!
I am also trying to improve the performance of my simple object-detection application on an NVIDIA Jetson Xavier NX 8GB (for real-time object detection).

I am using a YOLOv8 model for object detection (I have converted the model to a TensorRT engine in .engine format).
JetPack : 5.0.2
TensorRT : 8.4.1.5
CUDA : 11.4.239
CUDNN : 8.4.1.50

I have installed PyTorch and torchvision with CUDA support.
I have also built OpenCV with CUDA support, following this link for the installation: GitHub - mdegans/nano_build_opencv: Build OpenCV on Nvidia Jetson Nano.

The problem I am facing is that the CUDA-enabled OpenCV is installed in “/usr/local/lib/python3.8/site-packages”, and when I add this to the path and try to run the object-detection code, it gives me an error regarding the tensorrt-nvidia installation.

If the path is not added, the normal OpenCV (not compiled with CUDA) is used instead, and no tensorrt-nvidia error occurs.

I want to use OpenCV with CUDA support so that I can upload frames to GPU memory for processing.
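For reference, this is roughly how I am adding the path; a minimal sketch, assuming the clash is just import-path ordering (the directory below comes from my install, and whether this resolves the tensorrt error is untested):

```python
import sys

# Minimal sketch, assuming the tensorrt/cv2 clash is just import-path
# ordering: prepend only the directory that holds the CUDA-enabled cv2
# build instead of replacing the whole path, so packages installed in the
# default dist-packages (e.g. tensorrt) stay importable.
CUDA_CV2_DIR = "/usr/local/lib/python3.8/site-packages"  # my install path

if CUDA_CV2_DIR not in sys.path:
    sys.path.insert(0, CUDA_CV2_DIR)

# import cv2  # would now resolve to the CUDA build if it exists there
```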

Can you help me with this, or guide me on how to use the TensorRT model to get better FPS?

Sorry for the late reply! This is the DeepStream forum. What is your DeepStream version? Is this still a DeepStream issue to support? Thanks!

DeepStream version is 6.1.1

I was able to get good speed when I changed the model from FP32 to INT8, but now I want to customise the code, get the bounding-box coordinates, and process them further according to my logic (with the DeepStream app in C/C++).
How can I do that?
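For reference, the FP32-to-INT8 switch came down to a couple of nvinfer properties; a hedged sketch (the calibration file name is a placeholder):

```ini
# Hedged sketch: nvinfer properties that select the precision
# (network-mode: 0 = FP32, 1 = INT8, 2 = FP16).
# INT8 also needs a calibration cache; the file name is a placeholder.
[property]
network-mode=1
int8-calib-file=calib.table
```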

I have tried to install the deepstream_python_apps repository and run the test application, but I am getting an error while building the app.

The error is related to #include <Python.h> in /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/bindings/…/3rdparty/pybind11/include/pybind11/detail/common.h:112:10,
which appears when I run the following commands:

cd deepstream_python_apps/bindings
mkdir build
cd build
cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=10 \
    -DPIP_PLATFORM=linux_aarch64 -DDS_PATH=/opt/nvidia/deepstream/deepstream/
make -j$(nproc)

Can you help with this error and guide me on custom application development?

  1. You can install the wheel from this link for the DS 6.1.1 Python bindings.
  2. If you want to build the bindings yourself, please refer to this branch for DS 6.1.1.
  3. If you still encounter the error, please share the whole log. Thanks!

Hi,
Thanks, I got the solution using v1.14.
But I am not able to find a way to create my custom application. I want to use my YOLOv8 model for detection. How can I do that?

Please refer to this FAQ for the yolov8 detection sample.

Yes, I have gone through it. It works, but it is in C++. Is there any way to use a YOLO model in Python DeepStream code, and how do I modify and customise it according to my requirements?

You can replace the model with YOLOv8 in this Python sample, deepstream_test_1.py. The nvinfer configuration is ready-made.
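Once the model is swapped in, you can read the detections in a pad probe; a sketch following the pattern used in the deepstream_python_apps samples (imports are guarded only so the sketch can be read and tested off-device):

```python
# Sketch of a pad probe that collects YOLOv8 bounding boxes from DeepStream
# metadata in Python, following the pattern in deepstream_test_1.py.
try:
    from gi.repository import Gst
    import pyds  # available on a DeepStream install with the Python bindings
except ImportError:
    Gst = pyds = None  # keeps the sketch importable off-device

def boxes_in_frame(frame_meta):
    """Return (left, top, width, height, confidence) for each detected object."""
    boxes = []
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        # On a real install the pyds cast is required; off-device it is a no-op.
        obj = pyds.NvDsObjectMeta.cast(l_obj.data) if pyds else l_obj.data
        r = obj.rect_params
        boxes.append((r.left, r.top, r.width, r.height, obj.confidence))
        try:
            l_obj = l_obj.next  # pyds lists signal the end with StopIteration
        except StopIteration:
            break
    return boxes

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Attach to the OSD sink pad, as in deepstream_test_1.py:
    #   osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        for box in boxes_in_frame(frame_meta):
            print(box)  # replace with your own post-processing logic
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```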

Hi,
As you said, the model is defined in the configuration file itself, so changing the config file changes the model. The problem I am facing is that the config file you mentioned refers to the DeepStream C/C++ SDK, and using the same config file is not working. The format of this config file is also very different from that of the DeepStream Python SDK.

Can you please tell me the step-by-step process for this?
Thanks for the support.

Please refer to this topic and my last comment. You need to copy https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/config_infer_primary_yoloV8.txt and nvdsinfer_custom_impl_Yolo into the Python deepstream_test1 app.
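For orientation, the lines in that config that wire the custom YOLOv8 parser into nvinfer look roughly like this (a hedged excerpt; field names follow the DeepStream-Yolo repo, and the file paths depend on where you copied and built things):

```ini
# Hedged excerpt of config_infer_primary_yoloV8.txt: the fields that
# connect nvinfer to the custom YOLOv8 output parser. Paths are examples.
[property]
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
network-mode=0
num-detected-classes=80
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```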

Yes, now I am able to run the yolov8-obb model with the DeepStream Python SDK.

But as I mentioned, I am using a yolov8-obb (oriented bounding boxes) model, so the bounding boxes should be drawn with their respective orientations; when I run inference, the boxes are not oriented. How can I make changes to the bbox parsing?
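For context, what I am after is the usual center/size/angle-to-corners conversion; a minimal sketch, assuming the parser emits boxes as (center x, center y, width, height, angle in radians):

```python
import math

# Minimal sketch: convert an oriented box (cx, cy, w, h, angle in radians)
# into its four corner points, e.g. for drawing with line segments.
# The (cx, cy, w, h, angle) layout is an assumption about the parser output.
def obb_corners(cx, cy, w, h, angle):
    c, s = math.cos(angle), math.sin(angle)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each half-extent offset by the box angle, then shift to the center.
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c) for dx, dy in half]
```

With angle 0 this reduces to the ordinary axis-aligned corners; the four points can then be drawn as line segments in the correct orientation.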

It seems the current question is not related to the original topic. Could you open a new topic? Thanks!

okay, Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.