How to combine poseNet with OpenCV using threads?

Hi,

My program runs smoothly on my PC (Win, i5), but when I run it on the NX the FPS is really low…
The NX only uses 6 cores at around 70%, and only 42% of the 8 GB of memory.
Is there a Python threading example for the NX?

And I also want to add poseNet. Should I use
img = jetson.utils.cudaFromNumpy(frame)
to convert the cv2 image for jetson.utils? Thx
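On the conversion question, note that poseNet expects an RGB/RGBA image while OpenCV frames are BGR NumPy arrays, so the channels need to be reordered before handing the array over. A minimal sketch of the channel swap, using plain NumPy (on the device the result would then go to jetson.utils.cudaFromNumpy, shown only as a comment here since jetson.utils exists only on a Jetson):

```python
import numpy as np

# On the Jetson, the converted array would then be uploaded with:
#   import jetson.utils
#   cuda_img = jetson.utils.cudaFromNumpy(rgb)

def bgr_to_rgb(frame):
    # Reverse the channel axis; equivalent to
    # cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).
    return np.ascontiguousarray(frame[:, :, ::-1])

frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[..., 0] = 255  # pure blue in BGR order
rgb = bgr_to_rgb(frame)
# after conversion, blue sits in the last channel
```

This is just the channel-order part; whether an alpha channel is also needed depends on the jetson.utils format you allocate.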

import threading
import cv2

class T1(threading.Thread):
    def __init__(self, frame):
        threading.Thread.__init__(self)
        self.frame = frame
    def run(self):
        pass  # TODO: process self.frame

class T2(threading.Thread):
    def __init__(self, frame):
        threading.Thread.__init__(self)
        self.frame = frame
    def run(self):
        pass  # TODO: process self.frame

cap = cv2.VideoCapture(0)
while True:
    cap.grab()
    ret, frame = cap.retrieve()
    thread1 = T1(frame)
    thread2 = T2(frame)
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
    cv2.imshow("Demo", frame)
    if cv2.waitKey(1) == 27:  # imshow needs waitKey to refresh; Esc exits
        break

Hi,

Do you use TensorRT for inference?
If yes, could you share the GPU utilization with us first?

Also, please note that you can maximize the performance with the following commands:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

Thanks.

Hi,

Just tried it; it is the same…
The way I create Python threads is OK on the NX, right? Thx

Hi,

It depends.
Could you share a complete source or the GPU utilization with us first?

In your implementation, both threads take the same input, frame.
If you don't copy the buffer (e.g. with memcpy) inside run(), they may try to access the same memory at the same time.
In some cases they will have to wait in turn if the buffer doesn't support concurrent access.
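One way to avoid that contention is to give each thread its own private copy of the frame. A sketch, using a hypothetical Worker class and NumPy's copy() as the Python-side equivalent of memcpy:

```python
import threading
import numpy as np

class Worker(threading.Thread):
    def __init__(self, frame):
        threading.Thread.__init__(self)
        self.frame = frame.copy()  # private copy: no shared-buffer contention
        self.result = None
    def run(self):
        # placeholder work: sum the pixels of our private copy
        self.result = int(self.frame.sum())

frame = np.ones((4, 4), dtype=np.uint8)
t1, t2 = Worker(frame), Worker(frame)
t1.start(); t2.start()
t1.join(); t2.join()
# each thread computed its result from its own buffer
```

The copy costs a memcpy per thread, so it is a trade-off: if the threads only read the frame, sharing a single read-only buffer can also be fine.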

Thanks.

Hi,
I am still writing the code…
My goal is to divide the frame into 4 parts, run each part in its own thread, then combine them back into one frame.
Is there a general approach for doing this? Should I use threads?

Here is my pseudocode:

class T1(threading.Thread):
    def __init__(self, frame):
        threading.Thread.__init__(self)
        self.frame = frame
    def run(self):
        (do the inference and draw rectangles in the main frame)

Note: this combining part does not seem to be the bottleneck.

Use cudaAllocMapped to reserve the memory space, sub_frame1-4

while True:
    (use cudaCrop to crop each part of the frame into sub_frame1-4)
    thread1 = T1(sub_frame1)
    thread2 = T1(sub_frame2)
    thread3 = T1(sub_frame3)
    thread4 = T1(sub_frame4)
    thread1.start()
    thread2.start()
    thread3.start()
    thread4.start()
    thread1.join()
    thread2.join()
    thread3.join()
    thread4.join()
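The pseudocode above can be sketched with plain NumPy slicing standing in for cudaCrop/cudaAllocMapped: slices are views into the main frame, so whatever each thread draws into its quadrant lands directly in the combined image. The names quadrants and QuadWorker are illustrative only, and real per-quadrant inference would replace the placeholder in run() (also note that with pure-Python CPU work the GIL limits the speedup; inference calls that release the GIL parallelize better):

```python
import threading
import numpy as np

def quadrants(frame):
    # Four views (no copies) into the main frame, standing in for
    # cudaCrop on cudaAllocMapped buffers.
    h, w = frame.shape[:2]
    return [frame[:h//2, :w//2], frame[:h//2, w//2:],
            frame[h//2:, :w//2], frame[h//2:, w//2:]]

class QuadWorker(threading.Thread):
    def __init__(self, sub):
        threading.Thread.__init__(self)
        self.sub = sub
    def run(self):
        # Placeholder for "do the inference and draw rectangles";
        # writes through the view straight into the main frame.
        self.sub += 1

frame = np.zeros((480, 640, 3), dtype=np.uint8)
threads = [QuadWorker(q) for q in quadrants(frame)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every quadrant was processed, so the whole frame was touched
```

Because the four views cover disjoint regions, the threads never write to the same pixels, so no locking is needed for the combine step.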

Thx

Hi,

Have you tried the DeepStream library?
It seems that DeepStream can meet your requirement directly.

Thanks.

Hi,

I don't see posenet in DeepStream…

Thx

Hi,

You can update the config to run a customized model.
Or try the sample with OpenPose model below:

Thanks.

DeepStream is not easy to install…
I'm still looking for DeepStream installation instructions…
I have the JetPack 4.6 SD image; is there a link? I downloaded the deb from this link: DeepStream Getting Started | NVIDIA Developer

Here is the README in DeepStream; is it the one I should follow?
I'm stuck at "Installing librdkafka to use kafka protocol adaptor with message broker".

What is the easy way to install DeepStream? Should I use a new SD card and use SDK Manager instead?
Thx

*****************************************************************************
* Copyright (c) 2018-2021 NVIDIA Corporation.  All rights reserved.
*
* NVIDIA Corporation and its licensors retain all intellectual property
* and proprietary rights in and to this software, related documentation
* and any modifications thereto.  Any use, reproduction, disclosure or
* distribution of this software and related documentation without an express
* license agreement from NVIDIA Corporation is strictly prohibited.
*****************************************************************************

***********************************************************************
                              README
                       DEEPSTREAM ON TEGRA SDK
                         QUICK START GUIDE
***********************************************************************
The NVIDIA® DeepStream on Jetson Software Development Kit (SDK) provides a
framework for constructing GPU-accelerated video analytics
applications running within the L4T package on the
Jetson platform.

=======================================================================
Package contents
=======================================================================
The DeepStream packages include:
1. sources - Sources for sample application and plugin
2. samples - Config files, models, streams and tools to run the sample app

=======================================================================
Prerequisites
=======================================================================
JetPack-4.6 is required to successfully run and use the
DeepStream on Jetson SDK. JetPack, including the latest L4T
release, is available for download at:
https://developer.nvidia.com/embedded/jetpack

After downloading the installer, follow the steps mentioned in SDK Manager
document to flash the L4T image to your target system.
The SDK Manager document is available for download at:
https://docs.nvidia.com/sdk-manager/index.html

=======================================================================
Additional components required on the target
=======================================================================
To successfully use the DeepStream SDK, the following additional
components must be installed and set up on the target system:
- CUDA (10.2)
- TensorRT (8.0.1+)
- OpenCV (4.1.1)
- VisionWorks (1.6)
Install these packages using JetPack SDK Manager.

=======================================================================
Installing prerequisite software on Jetson development board:
=======================================================================
Packages to be installed:
$ sudo apt-get install \
    libssl1.0.0 \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    gstreamer1.0-alsa \
    libgstrtspserver-1.0-0 \
    libjansson4

=======================================================================
Installing librdkafka to use kafka protocol adaptor with message broker
=======================================================================
Refer to the README files available under
/opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor
for detailed documentation on prerequisites and usages of kafka protocol
adaptor with the message broker plugin for sending messages to cloud.

=======================================================================
Using AMQP protocol adaptor with message broker
=======================================================================
Refer to the README files available under
/opt/nvidia/deepstream/deepstream/sources/libs/amqp_protocol_adaptor
for detailed documentation on prerequisites and usages of rabbitmq based
amqp protocol adaptor with the message broker plugin for sending messages to cloud.

=======================================================================
Using Azure MQTT protocol adaptor with message broker
=======================================================================
Refer to the README files available under sources/libs/azure_protocol_adaptor
for detailed documentation on prerequisites and usages of azure MQTT protocol
adaptor with the message broker plugin for sending messages to cloud.

Refer to the source code and README of deepstream-test4 available under
sources/apps/sample_apps/deepstream-test4/ to send messages to the cloud.

=======================================================================
Installing libhiredis to use Redis protocol adaptor with message broker
=======================================================================
Refer to the README files available under
/opt/nvidia/deepstream/deepstream/sources/libs/redis_protocol_adaptor
for detailed documentation on prerequisites and usages of redis protocol
adaptor with the message broker plugin for sending messages to cloud.

=======================================================================
[Optional] Uninstall DeepStream 4.0
=======================================================================
To uninstall any previously installed DeepStream 4.0 libraries, use uninstall.sh script.

1. Open the uninstall.sh file which will be present in /opt/nvidia/deepstream/deepstream/
2. Set PREV_DS_VER as 4.0
3. Run the script as sudo ./uninstall.sh

=======================================================================
Installing latest nvv4l2 gstreamer plugin
=======================================================================
To install latest gstreamer nvv4l2 plugin:
1. Open the apt source configuration file in a text editor, for example:
   $ sudo vi /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
2. Change the repository name and download URL in the deb commands as below.
     deb https://repo.download.nvidia.com/jetson/common r32.4 main
     deb https://repo.download.nvidia.com/jetson/<platform> r32.4 main
   Where <platform> identifies the platform’s processor:
   - t186 for Jetson TX2 series
   - t194 for Jetson AGX Xavier series or Jetson Xavier NX
   - t210 for Jetson Nano or Jetson TX1
   If your platform is Jetson Xavier NX, for example:
     deb https://repo.download.nvidia.com/jetson/common r32.4 main
     deb https://repo.download.nvidia.com/jetson/t194 r32.4 main
3. Save and close the source configuration file.
4. Enter the commands:
   $ sudo apt update
   $ sudo apt install --reinstall nvidia-l4t-gstreamer
   If apt prompts you to choose a configuration file, reply Y for yes
   (to use the NVIDIA updated version of the file).

=======================================================================
Installing DeepStream SDK on Jetson
=======================================================================
After using the JetPack SDK Manager, you are ready to install and run
the DeepStream SDK on Jetson.

To install the DeepStream SDK on Jetson:
1. On the Jetson target development board, determine the IP address by
   executing the command:
   ifconfig

2. Copy the DeepStream on Jetson SDK tarball from the host system to the NVIDIA
   home directory on the Jetson development board:
scp deepstream_sdk_<DS_VERSION>_jetson.tbz2 nvidia@<ip_address>:~

   Where:
   - <ip_address> is the IP address as determined earlier.

3. On the Jetson development board, navigate to the DeepStream
   package and extract it to root as follows:
   sudo tar xvf deepstream_sdk_<DS_VERSION>_jetson.tbz2 -C /

4. The install.sh script will now be found in:
   /opt/nvidia/deepstream/deepstream-<DS_REL_VERSION>/
   Run the install.sh script as follows:
   sudo ./install.sh

5. Execute the following command on the Jetson development board:
   sudo ldconfig

=======================================================================
Running the samples
=======================================================================
1. Go to samples directory and run:
   deepstream-app -c <path to config.txt>
2. Application config files included in `configs/deepstream-app/`:
   a. source30_1080p_dec_infer-resnet_tiled_display_int8.txt (30 Decode + Infer)
   b. source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
      (4 Decode + Infer + SGIE + Tracker)
3. Configuration files for "nvinfer" element in `configs/deepstream-app/`:
   a. config_infer_primary.txt (Primary Object Detector)
   b. config_infer_secondary_carcolor.txt (Secondary Car Color Classifier)
   c. config_infer_secondary_carmake.txt (Secondary Car Make Classifier)
   d. config_infer_secondary_vehicletypes.txt (Secondary Vehicle Type Classifier)

=======================================================================
Running the Triton Inference Server samples
=======================================================================
Instructions to prepare and run Triton inference server samples
are provided in samples/configs/deepstream-app-triton/README.

=========================================================================
Downloading and Running the Pre-trained TAO Toolkit Models
=========================================================================
Instructions to download and run the pre-trained TAO Toolkit models
are provided in samples/configs/tao_pretrained_models/README.

=======================================================================
Notes:
=======================================================================
1. If the application runs into errors and cannot create gst elements, try again
after removing gstreamer cache:
   rm ${HOME}/.cache/gstreamer-1.0/registry.aarch64.bin
2. When running deepstream for first time, the following warning might show up:
   "GStreamer-WARNING: Failed to load plugin '...libnvdsgst_inferserver.so':
    libtrtserver.so: cannot open shared object file: No such file or directory"
This is a harmless warning indicating that the DeepStream's nvinferserver plugin
cannot be used since "Triton Inference Server" is not installed.
If required, try DeepStream's TRT-IS docker image or install the Triton Inference
Server manually. For more details, refer to https://github.com/NVIDIA/triton-inference-server.

Hi, I have flashed a new SD card using SDK Manager (SDK Manager crashed easily and it took me a week of trying. :< )
It should have DeepStream 6.0 inside, right? But it does not really match what the README file describes… Do I need to install DeepStream from the deb file again? And when the instructions get to the IP address step I am confused; why does DeepStream need an IP address…

Is there a place where I should ask these questions about the DeepStream posenet? If so, I can close this thread. (I don't see the Issues tab on GitHub…)

May I have clear instructions for installing/running the DeepStream pose demo? I need to evaluate whether this DeepStream posenet is good for my application. Thx

I still cannot get the DeepStream pose estimation working. It seems JetPack 4.6 does not work with it. I'm still looking for a tutorial for 4.6…

nvidia@nvidia-desktop:~/deepstream-pose-estimation$ sudo ./deepstream-pose-estimation-app test.mp4 .

(deepstream-pose-estimation-app:24675): GStreamer-WARNING **: 00:03:23.329: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so': /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so: undefined symbol: nvds_add_user_meta_to_roi
One element could not be created. Exiting.

Can DeepStream pose estimation be used with a USB cam? Is there an example?

thx

Yes, I got the pose estimation working via this link. Thx
Deepstream_pose_estimation - fatal error: gstnvdsmeta.h: No such file or directory - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.