I have trained an SSD model using the Transfer Learning Toolkit on Ubuntu 20.04 via the Docker container. I have two questions:
-
The development environment is Ubuntu 20.04. In the Jupyter Notebook, I'm able to run inference on images using the following command:
!ssd inference --gpu_index=$GPU_INDEX \
-m $USER_EXPERIMENT_DIR/export/trt.engine \
-e $SPECS_DIR/ssd_retrain_resnet18_kitti.txt \
-i /workspace/examples/ssd/test_samples \
-o $USER_EXPERIMENT_DIR/ssd_infer_images \
-t 0.4
Now, how do I write business logic on top of this by feeding it images from OpenCV? Where is the file that contains the inference code?
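For context, this is roughly the glue I imagine writing around the engine. It is only a sketch under my own assumptions (a 300x300 RGB input, NCHW layout, float32 batch, with the actual TensorRT execute call left out since I don't know where that code lives), not something confirmed by the TLT docs:

```python
import numpy as np

# Hypothetical preprocessing for frames read via cv2.imread / cv2.VideoCapture.
# Assumed input spec: 300x300, RGB, NCHW, float32 -- NOT confirmed by the docs.
INPUT_W, INPUT_H = 300, 300

def preprocess(frame_bgr):
    """Turn an OpenCV-style BGR frame (H, W, 3) uint8 into a (1, 3, H, W) float32 batch."""
    # Nearest-neighbour resize in pure numpy (cv2.resize would do this in practice).
    h, w, _ = frame_bgr.shape
    ys = np.arange(INPUT_H) * h // INPUT_H
    xs = np.arange(INPUT_W) * w // INPUT_W
    resized = frame_bgr[ys][:, xs]
    rgb = resized[:, :, ::-1].astype(np.float32)  # BGR -> RGB
    chw = np.transpose(rgb, (2, 0, 1))            # HWC -> CHW
    return chw[np.newaxis]                        # add batch dimension

# Stand-in for a 480x640 frame coming from an OpenCV capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
batch = preprocess(frame)
print(batch.shape)  # (1, 3, 300, 300)
```

The `batch` array is what I would then hand to the TensorRT engine (via the `tensorrt` Python bindings), if someone can point me at where the toolkit's own inference code does this.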
-
On the Jetson, I have used DeepStream for deployment, but I couldn't find a simple example showing how to deploy a custom model on the Jetson using the Python API. I have the
.tlt, .etlt, and .engine files, but the documentation is poor and scattered, so it's hard to follow. Also, how do I view the output as I would with OpenCV's imshow?
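In case it helps clarify what I'm after, this is the kind of post-processing loop I have in mind (a numpy-only sketch; the detection layout `[class_id, confidence, x1, y1, x2, y2]` is my own guess, and in a real script `cv2.rectangle`/`cv2.imshow` would replace the plain array writes):

```python
import numpy as np

# Hypothetical post-processing: I'm assuming each detection is a row of
# [class_id, confidence, x1, y1, x2, y2] in pixel coordinates -- this layout
# is a guess on my part, not something I found in the docs.
CONF_THRESH = 0.4  # same threshold I pass to `ssd inference` via -t

def filter_and_draw(frame, detections):
    """Keep detections above CONF_THRESH and mark them on the frame.

    Here the "drawing" is a plain numpy fill so the sketch stays
    self-contained; with OpenCV I'd use cv2.rectangle and cv2.imshow.
    """
    keep = detections[detections[:, 1] >= CONF_THRESH]
    for cls, conf, x1, y1, x2, y2 in keep:
        frame[int(y1):int(y2), int(x1):int(x2)] = 255  # stand-in for cv2.rectangle
    return frame, keep

frame = np.zeros((480, 640), dtype=np.uint8)
dets = np.array([
    [0, 0.9, 10, 10, 50, 50],       # kept (above 0.4)
    [1, 0.2, 100, 100, 150, 150],   # dropped (below 0.4)
])
frame, kept = filter_and_draw(frame, dets)
print(len(kept))  # 1
```

What I can't figure out is where in the DeepStream Python pipeline I would hook such a loop in, and how to get the frames and detections out so I can display them.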