I come from a Raspberry Pi background, and now I would like to try my hand at the Jetson Nano owing to its AI capabilities. I have a few noob questions:
Is a video file (H.264) captured from the camera feed directly playable, or does one have to convert it into a container format? If conversion is needed, is it doable with GStreamer on the fly?
Can we somehow show the live system time on the captured video? This feature is available with the raspivid command (raspivid -a 12).
What is the status of OverlayFS functionality on the B01 model?
I have researched the other functionalities and they seem to fulfill my needs (RTC, wake alarm, etc.). My plan is to use the Jetson smart-recording feature, i.e. to start recording when certain objects are detected.
If the H.264 stream is valid, it should play by running a gst-launch-1.0 command such as: filesrc ! h264parse ! nvv4l2decoder ! nvoverlaysink
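Regarding the container question: the stream can also be muxed into MP4 on the fly while recording. Below is a hedged sketch, assuming the Raspberry Pi Camera V2 via nvarguscamerasrc and the standard Jetson encoder elements; the filenames and resolution are illustrative placeholders, not a tested reference.

```shell
# Play back a raw H.264 elementary stream (no container) with HW decode.
# <file.h264> is a placeholder for your capture.
gst-launch-1.0 filesrc location=<file.h264> ! h264parse ! nvv4l2decoder ! nvoverlaysink

# Capture, HW-encode, and mux straight into MP4 on the fly.
# "-e" sends EOS on Ctrl+C so qtmux can finalize the container.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
  nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4 -e
```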
Not sure about this. The two multimedia frameworks with hardware acceleration on Jetson platforms are GStreamer and jetson_multimedia_api. You may check whether an element in GStreamer supports this.
By default the sample rootfs is Ubuntu. You would need to customize the rootfs for this.
DeepStream SDK is the recommended solution for running deep learning inference. Please install it and give it a try. One caveat: smart recording is only supported for IP cameras (using rtspsrc). If you use a Bayer sensor such as the Raspberry Pi Camera V2, you are not able to use smart recording.