How to use onnx_to_tensorrt.py with a webcam
I am using the TensorRT sample `/usr/src/tensorrt/samples/python/yolov3_onnx/onnx_to_tensorrt.py`.
I converted yolov3-tiny.weights to ONNX and built yolov3-tiny.trt.
I want to use a webcam to perform real-time inference. How can I do this?
I'm using a Jetson Nano (JetPack 4.2.2).
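For reference, here is a rough sketch of the loop I have in mind, adapted from the sample's engine-loading code: deserialize yolov3-tiny.trt, then push webcam frames from `cv2.VideoCapture` through the engine. I am assuming a 416x416 input and that binding 0 is the input, and I have left out the sample's YOLO postprocessing (`PostprocessYOLO` in data_processing.py), so this is not a complete detector:

``` python
# Rough sketch: run a serialized TensorRT engine on webcam frames.
# Assumes yolov3-tiny.trt with a 416x416 input and binding 0 as the input.
import cv2
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # initializes the CUDA context on import

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("yolov3-tiny.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pagelocked host buffers and device buffers for every binding.
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

stream = cuda.Stream()
cap = cv2.VideoCapture(0)  # /dev/video0, i.e. the USB webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Plain resize for brevity; the sample letterboxes the input instead.
    img = cv2.resize(frame, (416, 416))
    img = img[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) / 255.0
    np.copyto(host_bufs[0], img.ravel())

    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async(bindings=bindings, stream_handle=stream.handle)
    for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(host, dev, stream)
    stream.synchronize()
    # host_bufs[1:] now hold the raw YOLO output tensors; feed them to the
    # sample's PostprocessYOLO to get boxes, then draw them on `frame`.

    cv2.imshow("yolov3-tiny", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```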
Hi,
Please refer to the samples below in case they help:
# Running the Live Camera Recognition Demo
The [`imagenet.cpp`](../examples/imagenet/imagenet.cpp) / [`imagenet.py`](../python/examples/imagenet.py) samples that we used previously can also be used for realtime camera streaming. The types of supported cameras include:
- MIPI CSI cameras (`csi://0`)
- V4L2 cameras (`/dev/video0`)
- RTP/RTSP streams (`rtsp://username:password@ip:port`)
For more information about video streams and protocols, please see the [Camera Streaming and Multimedia](aux-streaming.md) page.
Below are some typical scenarios for launching the program on a camera feed (run `--help` for more options):
#### C++
``` bash
$ ./imagenet csi://0                            # MIPI CSI camera
$ ./imagenet /dev/video0                        # V4L2 camera
$ ./imagenet rtsp://username:password@ip:port   # RTSP stream
```
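#### Python
The Python sample accepts the same stream URIs, for example:
``` bash
$ ./imagenet.py csi://0        # MIPI CSI camera
$ ./imagenet.py /dev/video0    # V4L2 camera (e.g. a USB webcam)
```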
# Deploying Deep Learning
Welcome to our instructional guide for inference and the realtime [DNN vision](#api-reference) library for NVIDIA **[Jetson Nano/TX1/TX2/Xavier NX/AGX Xavier/AGX Orin](http://www.nvidia.com/object/embedded-systems.html)**.
This repo uses NVIDIA **[TensorRT](https://developer.nvidia.com/tensorrt)** for efficiently deploying neural networks onto the embedded Jetson platform, improving performance and power efficiency using graph optimizations, kernel fusion, and FP16/INT8 precision.
Vision primitives, such as [`imageNet`](docs/imagenet-console-2.md) for image recognition, [`detectNet`](docs/detectnet-console-2.md) for object detection, [`segNet`](docs/segnet-console-2.md) for semantic segmentation, and [`poseNet`](docs/posenet.md) for pose estimation inherit from the shared [`tensorNet`](c/tensorNet.h) object. Examples are provided for streaming from live camera feed and processing images. See the **[API Reference](#api-reference)** section for detailed reference documentation of the C++ and Python libraries.
<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/deep-vision-primitives.jpg">
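For example, a minimal live-camera detection loop with the repo's Python bindings looks roughly like this (the model name and stream URIs below are placeholders; see the camera streaming page referenced above for the supported URIs):

``` python
# Minimal sketch of live object detection with the jetson-inference bindings.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # or "/dev/video0" for a USB webcam
display = jetson.utils.videoOutput("display://0")  # render to an on-screen window

while display.IsStreaming():
    img = camera.Capture()         # grab the next frame
    detections = net.Detect(img)   # run inference and overlay the results
    display.Render(img)
    display.SetStatus("{:.0f} FPS".format(net.GetNetworkFPS()))
```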
Follow the [Hello AI World](#hello-ai-world) tutorial for running inference and transfer learning onboard your Jetson, including collecting your own datasets and training your own models. It covers image classification, object detection, semantic segmentation, pose estimation, and mono depth.
### Table of Contents
* [Hello AI World](#hello-ai-world)
* [Video Walkthroughs](#video-walkthroughs)
* [API Reference](#api-reference)
* [Code Examples](#code-examples)
* [Pre-Trained Models](#pre-trained-models)
Thanks
My jkjung-avt/tensorrt_demos repo includes TensorRT implementations of the yolov3 and yolov4 models. It supports USB webcam, IP cam, or image/video file inputs.
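For example, after building the yolov3-tiny engine by following the repo's README, you could run the demo on a USB webcam with something like the commands below (the exact script name and flags may differ between versions, so please check the README):

``` bash
$ git clone https://github.com/jkjung-avt/tensorrt_demos.git
$ cd tensorrt_demos
$ python3 trt_yolo.py --usb 0 -m yolov3-tiny-416   # --usb 0 reads /dev/video0
```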