I have a model for road segmentation, as shown in the picture below. How can I implement this model using jetson-inference for live camera segmentation? Is there any resource or tutorial to learn from?
You can find the documentation for running the segmentation model with a live camera below:
<img src="https://github.com/dusty-nv/jetson-inference/raw/master/docs/images/deep-vision-header.jpg" width="100%">
# Running the Live Camera Segmentation Demo
The [`segnet.cpp`](../examples/segnet/segnet.cpp) / [`segnet.py`](../python/examples/segnet.py) sample that we used previously can also be used for real-time camera streaming. The supported camera types include:
- MIPI CSI cameras (`csi://0`)
- V4L2 cameras (`/dev/video0`)
- RTP/RTSP streams (`rtsp://username:password@ip:port`)
For more information about video streams and protocols, please see the [Camera Streaming and Multimedia](aux-streaming.md) page.
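For example, once jetson-inference is built and a segmentation model has been downloaded, the demo can be pointed at any of the stream types above. The model name and stream URIs below are illustrative; substitute your own:

```bash
# C++ version on a MIPI CSI camera
$ ./segnet csi://0

# Python version on a V4L2 USB camera, with an example Cityscapes model
$ ./segnet.py --network=fcn-resnet18-cityscapes /dev/video0

# Python version on an RTSP stream, saving the result to a file
$ ./segnet.py rtsp://username:password@ip:port output.mp4
```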
Run the program with `--help` to see a full list of options. Some of the options specific to segNet include:
- optional `--network` flag changes the segmentation model being used (see [available networks](segnet-console-2.md#pre-trained-segmentation-models-available))
- optional `--visualize` flag accepts `mask` and/or `overlay` modes (default is `overlay`)
- optional `--alpha` flag sets the alpha blending value for the overlay (default is `120`)
- optional `--filter-mode` flag accepts `point` or `linear` sampling (default is `linear`)
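If you would rather embed segmentation in your own script than run the sample as-is, the Python API follows the same pattern as `segnet.py`. Below is a minimal sketch that assumes a Jetson device with jetson-inference installed; the model name (`fcn-resnet18-cityscapes`) and camera URI (`csi://0`) are example placeholders:

```python
# Minimal live camera segmentation loop, adapted from the segnet.py sample.
# Requires jetson-inference/jetson-utils installed on a Jetson device.
from jetson_inference import segNet
from jetson_utils import videoSource, videoOutput, cudaAllocMapped

net = segNet("fcn-resnet18-cityscapes")  # example model name
net.SetOverlayAlpha(120.0)               # matches the --alpha default

camera = videoSource("csi://0")          # or /dev/video0, rtsp://...
display = videoOutput("display://0")     # render to the attached display

while display.IsStreaming():
    img = camera.Capture()
    if img is None:                      # timed out waiting for a frame
        continue

    # allocate an output image with the same size/format as the input
    overlay = cudaAllocMapped(width=img.width, height=img.height,
                              format=img.format)

    net.Process(img)                               # run inference
    net.Overlay(overlay, filter_mode="linear")     # blend mask over input
    display.Render(overlay)
    display.SetStatus("segNet {:.0f} FPS".format(net.GetNetworkFPS()))
```

To render the raw class mask instead of the blended overlay, call `net.Mask(...)` in place of `net.Overlay(...)`, mirroring the `--visualize` flag of the sample.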