how to record .h264 compatible with DriveNet / LaneNet / Object tracker from a USB camera

Hi!

Since I don’t have access to a DRIVE PX 2 yet, I wanted to run the examples not with the videos that you provide but with my own recordings. I have been trying to record a video that fits the requirements, but so far I have not been successful.

Can you please let me know if there is an easy way to record a video compatible with your interface using a regular USB camera (either a webcam or a Point Grey camera in my case)? (Without considering the FOV / camera itself.)

I have tried to record some samples and then used ffmpeg to convert them to .h264. The generated video can be viewed in VLC but is not displayed correctly by your sample_camera_replay.
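One possible reason a file plays in VLC but fails in sample_camera_replay is the container: a bitstream replayer typically expects a raw H.264 elementary stream (Annex-B), not an MP4/MKV file that merely contains H.264. That is an assumption about the tool, not confirmed behaviour; a sketch of an ffmpeg conversion under that assumption (input.mp4 stands in for your own recording):

```shell
# Re-encode a recording into a raw H.264 elementary stream.
# Writing to a .h264 output selects ffmpeg's raw h264 muxer,
# which emits Annex-B start codes.
ffmpeg -i input.mp4 \
       -vf "scale=1280:800,fps=30" \
       -c:v libx264 -pix_fmt yuv420p \
       -an \
       output.h264
```

Here -an drops the audio track (an elementary stream carries video only) and -pix_fmt yuv420p forces 4:2:0 chroma, which most hardware decoders require. The 1280x800 @ 30 fps values match the clip described below; whether the sample also constrains profile/level is unknown.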

I have uploaded the file here: https://drive.google.com/file/d/0B6iYmJ9J5PYvRkNia3RqZ291RTA/view?usp=sharing (this is just an example)

The video itself is 1280x800 @ 30 fps, which seems to be the expected input of LaneNet.

In addition, I was able to use a USB webcam with sample_camera_usb. Is there an easy way to feed that video stream directly into any of your sample applications?

thanks for your support!

Jeremy, you would need a DRIVE PX 2 for what you are trying to do. Please contact us @ infodrivepx at NVIDIA dot com.

Hi Shri!

Thanks, and the DRIVE PX 2 has been ordered today!

I am confused why Jeremy would need a DRIVE PX 2 to do what he wants to do. Can someone explain it to me?

There are a variety of demo applications that run on a PC and replay video: object detectors, trackers, DriveNet, LaneNet, etc.

And these applications seem to take flags like: --video1=

Normally these take one of the provided .h264 video files. But if Jeremy created his own .h264 video file, couldn’t he point the program at his video and let the sample application run on that?

I am just wondering what I am missing, and why a DRIVE PX 2 is said to be necessary. If someone could explain it to me, it would help!

Thanks

Update: I see now that a separate part of his question was about feeding a webcam stream directly into the sample applications. Couldn’t SensorIOCuda.cpp be modified, or a new variant created, that copies the RGB data from the webcam into the cudaTextureObject rather than pulling it from the video file?

If something like that exists on the DRIVE PX 2, couldn’t it be used on a PC with a USB webcam?
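In plain CUDA terms, "copying the RGB data from the webcam into the cudaTextureObject" would amount to uploading each frame into a cudaArray that backs a texture object. The sketch below is generic CUDA runtime code, not actual DriveWorks / SensorIOCuda.cpp code; the function name and the assumption of an interleaved RGBA host frame are mine:

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Hypothetical helper: wrap one RGBA webcam frame (host memory) in a
// CUDA texture object. 'frame' is assumed to hold width*height*4 bytes
// of interleaved 8-bit RGBA.
cudaTextureObject_t uploadFrameAsTexture(const uint8_t* frame,
                                         int width, int height,
                                         cudaArray_t* outArray)
{
    // Allocate a 2D CUDA array with a 4x8-bit channel format (RGBA8).
    cudaChannelFormatDesc ch = cudaCreateChannelDesc<uchar4>();
    cudaMallocArray(outArray, &ch, width, height);

    // Copy the host frame into the array (row pitch = width * 4 bytes).
    cudaMemcpy2DToArray(*outArray, 0, 0, frame,
                        width * 4, width * 4, height,
                        cudaMemcpyHostToDevice);

    // Describe the resource and sampling behaviour, then create the texture.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = *outArray;

    cudaTextureDesc tex = {};
    tex.addressMode[0] = cudaAddressModeClamp;
    tex.addressMode[1] = cudaAddressModeClamp;
    tex.filterMode     = cudaFilterModePoint;
    tex.readMode       = cudaReadModeElementType;

    cudaTextureObject_t texObj = 0;
    cudaCreateTextureObject(&texObj, &res, &tex, nullptr);
    return texObj;
}
```

For a live stream you would keep the array and texture alive and just repeat the cudaMemcpy2DToArray per frame. Whether the DriveWorks samples would accept a texture produced this way in place of their decoded-video path is an open question.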