No detections when running an object detection model trained in TAO with Isaac ROS inference

We are training a model based on the DetectNet_v2 architecture. Training is complete, and after pruning, retraining, and quantization (as described in the TAO Toolkit Jupyter notebook) we obtain a .etlt model. Running inference from the Jupyter notebook shows valid detections. This .etlt model is exported to Isaac ROS to generate the TensorRT engine plan, and the plan is generated without any errors. (The instructions given here were followed, with the necessary modifications since we are using a custom DetectNet_v2 model.) However, although the engine plan loads successfully, no detections are visualized. We even tried printing the Detection2DArray by modifying the isaac_ros_detectnet_visualizer.py node, but it contains no values.
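For illustration, a minimal subscriber along these lines can be used to confirm whether any Detection2DArray messages arrive at all. This is only a sketch; the topic name /detectnet/detections is an assumption and may differ in your launch configuration, and the exact bounding-box field layout depends on your vision_msgs version.

```python
# Minimal ROS 2 debug subscriber (sketch). Assumes detections are published as
# vision_msgs/Detection2DArray on /detectnet/detections; adjust the topic name
# to match your launch files.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray


class DetectionDebug(Node):
    def __init__(self):
        super().__init__('detection_debug')
        self.subscription = self.create_subscription(
            Detection2DArray, '/detectnet/detections', self.callback, 10)

    def callback(self, msg):
        # Log how many detections arrived and their bounding-box sizes.
        self.get_logger().info(f'Received {len(msg.detections)} detections')
        for det in msg.detections:
            self.get_logger().info(
                f'  bbox size=({det.bbox.size_x:.1f}, {det.bbox.size_y:.1f})')


def main():
    rclpy.init()
    node = DetectionDebug()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

If this node logs zero detections while the raw tensor topics are still being published, the problem is most likely in the decoder configuration (input dimensions, output layer names, or thresholds) rather than in the camera or inference pipeline itself.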

I went through isaac_ros_object_detection/tutorial-custom-model.md at main · NVIDIA-ISAAC-ROS/isaac_ros_object_detection · GitHub. May I know whether you have followed those steps and run them successfully with the official TAO models (such as PeopleNet, mentioned in DetectNet Model)?

Yes, it was successful. Are there any additional steps that need to be followed when using a model generated within the TAO Toolkit, apart from the ones mentioned in the official documentation?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

If the official PeopleNet works, please check the differences between your model and the official PeopleNet.
Pay particular attention to the input width and height.
PeopleNet expects 960 x 544 x 3 (W x H x C).
What are the input dimensions of your model?

Please also check all the commands and the pbtxt configuration files.
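One quick way to confirm the input dimensions actually baked into the generated engine plan is to deserialize it with the TensorRT Python API and print its bindings. This is a sketch under assumptions: the plan path is a placeholder, and the binding calls shown are the pre-TensorRT-8.5 style (newer releases expose num_io_tensors / get_tensor_name / get_tensor_shape instead).

```python
# Sketch: inspect the input/output shapes of a generated TensorRT engine plan.
# PLAN_PATH is a placeholder; the binding API shown is the pre-8.5 style.
import tensorrt as trt

PLAN_PATH = 'model.plan'  # placeholder; point this at your generated plan

logger = trt.Logger(trt.Logger.WARNING)
with open(PLAN_PATH, 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = 'input' if engine.binding_is_input(i) else 'output'
    print(f'{kind}: {engine.get_binding_name(i)} {tuple(engine.get_binding_shape(i))}')
```

The printed input shape should match both the dimensions in your pbtxt configuration and the resolution the network was trained at; a mismatch there can produce an engine that loads without errors but returns empty detections.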
