Hi everyone,
I’m implementing zero-shot detection on a Jetson platform using NVIDIA’s Metropolis microservices. The bounding boxes for detected objects in my video stream are not accurately aligning with the actual objects; they’re often offset or incorrectly sized.
Are there any optimization techniques or tips for resolving this? Any advice or recommendations would be greatly appreciated!
Thanks,
Tanisha
kesong
June 21, 2024, 1:18am
Can you share the video to reproduce the issue?
kesong
June 25, 2024, 1:46am
We can reproduce the issue and will update here when the fix is available.
Is this solved?
I'm hitting the same issue when running Zero Shot Detection on Jetson Platform Services. Are any solutions available?
kesong
July 17, 2024, 8:25am
The fix is now available on GitHub. First, clone the JPS (Jetson Platform Services) repository:
git clone --recurse-submodules https://github.com/NVIDIA-AI-IOT/jetson-platform-services
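If the repository was cloned without `--recurse-submodules`, the submodule directories will be present but empty, which commonly causes confusing build or mount failures later. As a hedged sketch (assuming the repo was cloned into the current directory), you can verify and repair the submodules like this:

```shell
# Sketch: confirm the clone actually fetched its submodules.
# Assumes jetson-platform-services was cloned into the current directory.
if [ -d jetson-platform-services ]; then
  cd jetson-platform-services
  git submodule status                      # a leading '-' marks an uninitialized submodule
  git submodule update --init --recursive   # fetch any submodules that were missed
else
  echo "jetson-platform-services not found; run this after cloning."
fi
```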
Then you have two options to run the container with the fixed code:
Option 1) Mount the new code in the pre-built container from NGC
cd jetson-platform-services/inference/zero_shot_detection
sudo docker compose -f compose-dev.yaml up -d
This will automatically pull the prebuilt container from NGC and mount the updated code from the GitHub repository.
Option 2) Rebuild the container with the fix
First, configure Docker for building containers on Jetson by following the “Docker Configuration” section of the README in the NVIDIA-AI-IOT/jetson-platform-services GitHub repository.
Then run these commands to rebuild the container with the latest code:
cd ~/jetson-platform-services/inference/zero_shot_detection
sudo bash build_container.sh
sudo docker compose up -d
These commands are also documented in the README at jetson-platform-services/inference/zero_shot_detection in the GitHub repo.
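Whichever option you choose, a quick sanity check after `docker compose up -d` is to list the compose services and tail the logs. This is a hedged sketch, not part of the official instructions: run it from the `zero_shot_detection` directory, and read the service names from the `docker compose ps` output rather than assuming them (prefix the commands with `sudo` if your user is not in the `docker` group):

```shell
# Sketch: verify the zero-shot detection container came up cleanly.
# Run from jetson-platform-services/inference/zero_shot_detection.
# Guarded so it degrades gracefully where the Docker daemon is unreachable.
if docker info >/dev/null 2>&1; then
  docker compose ps || true                 # the service should show as "running"
  docker compose logs --tail 20 || true     # scan for startup errors
else
  echo "Docker daemon not reachable; run this on the Jetson itself."
fi
```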