When I run detectnet-camera, I see a green screen.

After updating to the new version, when I run the program there is a green screen. What's the solution?

I edited gstCamera.cpp:

ss << "filesrc location=/home/user/test.avi ! decodebin ! videoconvert ! appsink name=mysink";

log:

[OpenGL] glDisplay – X screen 0 resolution: 640x480
[OpenGL] glDisplay – display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> typefind
[gstreamer] gstreamer changed state from NULL to READY ==> decodebin0
[gstreamer] gstreamer changed state from NULL to READY ==> filesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> typefind
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesrc0
[gstreamer] gstreamer stream status ENTER ==> sink
[gstreamer] gstreamer changed state from NULL to READY ==> avidemux0
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> avidemux0
[gstreamer] gstreamer stream status ENTER ==> sink
detectnet-camera: camera open for streaming
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260
[gstreamer] gstCamera onPreroll
[gstreamer] gstCamera – allocated 16 ringbuffers, 976 bytes each
[gstreamer] gstreamer stream status CREATE ==> src_0
[gstreamer] gstreamer stream status ENTER ==> src_0
[gstreamer] gstreamer changed state from NULL to READY ==> mpeg4vparse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> mpeg4vparse0
[gstreamer] gstreamer changed state from NULL to READY ==> nvv4l2decoder0
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvv4l2decoder0
[gstreamer] gstreamer msg duration-changed ==> mpeg4vparse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> decodebin0
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg async-done ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvv4l2decoder0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mpeg4vparse0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> multiqueue0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> avidemux0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> typefind
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> decodebin0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> filesrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstCamera – allocated 16 RGBA ringbuffers
[OpenGL] creating 640x480 texture

help me :(

Hi barca105, what is the line number of gstCamera.cpp that you edited? You would need to make sure that the data being passed to appsink is in NV12 format, which is what the nvarguscamerasrc element that is already in the pipeline outputs. The nvomx decoder elements can output NV12 - see the L4T GStreamer User Guide for examples.

Alternatively, you may find it easier to use this fork of jetson-inference with detectnet-video sample that plays back video file:
https://github.com/gcjyzdd/jetson-inference/blob/master/detectnet-video/detectnet-video.cpp

I edited lines 399~446:

// buildLaunchStr
bool gstCamera::buildLaunchStr( gstCameraSrc src )
{
	// gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
	// nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! fakesink silent=false -v
	// #define CAPS_STR "video/x-raw(memory:NVMM), width=(int)2592, height=(int)1944, format=(string)I420, framerate=(fraction)30/1"
	// #define CAPS_STR "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1"
	std::ostringstream ss;

	// if( csiCamera() && src != GST_SOURCE_V4L2 )
	// {
	// 	mSource = src;	 // store camera source method

	// #if NV_TENSORRT_MAJOR > 1 && NV_TENSORRT_MAJOR < 5	// if JetPack 3.1-3.3 (different flip-method)
	// 	const int flipMethod = 0;					// Xavier (w/TRT5) camera is mounted inverted
	// #else
	// 	const int flipMethod = 2;
	// #endif	

	// 	if( src == GST_SOURCE_NVCAMERA )
	// 		ss << "nvcamerasrc fpsRange=\"30.0 30.0\" ! video/x-raw(memory:NVMM), width=(int)" << mWidth << ", height=(int)" << mHeight << ", format=(string)NV12 ! nvvidconv flip-method=" << flipMethod << " ! "; //'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! ";
	// 	else if( src == GST_SOURCE_NVARGUS )
	// 		ss << "nvarguscamerasrc sensor-id=" << mSensorCSI << " ! video/x-raw(memory:NVMM), width=(int)" << mWidth << ", height=(int)" << mHeight << ", framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=" << flipMethod << " ! ";
		
	// 	ss << "video/x-raw ! appsink name=mysink";
	// }
	// else
	// {
	// 	ss << "v4l2src device=" << mCameraStr << " ! ";
	// 	ss << "video/x-raw, width=(int)" << mWidth << ", height=(int)" << mHeight << ", "; 
		
	// #if NV_TENSORRT_MAJOR >= 5
	// 	ss << "format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert !";
	// #else
	// 	ss << "format=RGB ! videoconvert ! video/x-raw, format=RGB ! videoconvert !";
	// #endif

	// 	ss << "appsink name=mysink";

	// 	mSource = GST_SOURCE_V4L2;
	// }
	ss << "filesrc location=/home/user/test.avi ! decodebin ! videoconvert ! appsink name=mysink";
	mLaunchStr = ss.str();

	printf(LOG_GSTREAMER "gstCamera pipeline string:\n");
	printf("%s\n", mLaunchStr.c_str());
	return true;
}

It would need more editing of the pipeline to use the NVIDIA omx decoder element and get the output into NV12 format, and mSource should still be set to GST_SOURCE_NVARGUS so that it knows to convert from NV12 to RGBA in ConvertRGBA().
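For example, a minimal sketch of what that edit might look like, assuming the file is MPEG-4 video in an AVI container (which the mpeg4vparse0 and nvv4l2decoder0 elements in your log suggest); the demuxer, parser, and caps would need to match your actual file, and element availability varies by JetPack release:

	// hypothetical replacement for the pipeline string in buildLaunchStr():
	// demux the AVI, parse the MPEG-4 stream, decode with the hardware
	// decoder, then use nvvidconv to hand NV12 buffers to appsink
	ss << "filesrc location=/home/user/test.avi ! avidemux ! mpeg4videoparse ! ";
	ss << "nvv4l2decoder ! nvvidconv ! ";
	ss << "video/x-raw, format=(string)NV12 ! appsink name=mysink";

	mSource = GST_SOURCE_NVARGUS;	// so ConvertRGBA() expects NV12 input

That way appsink receives NV12 and the NV12->RGBA conversion path in ConvertRGBA() is exercised, instead of whatever format decodebin happens to negotiate, which is the likely cause of the green screen.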

For video playback, try using cv::VideoCapture like in this example:
https://github.com/gcjyzdd/jetson-inference/blob/master/detectnet-video/detectnet-video.cpp
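
In rough outline, the frame-reading loop in that sample works like this (a sketch assuming a standard OpenCV install, not the exact code from the fork; the file path is a placeholder):

#include <cstdio>
#include <opencv2/opencv.hpp>

int main()
{
	// open the video file (placeholder path)
	cv::VideoCapture capture("/home/user/test.avi");

	if( !capture.isOpened() )
	{
		printf("failed to open video file\n");
		return -1;
	}

	cv::Mat frame;

	// read frames until end-of-file
	while( capture.read(frame) )
	{
		// frame holds BGR8 data in CPU memory here; the detectnet-video
		// sample converts it to float RGBA and copies it to GPU memory
		// before running detection
		cv::imshow("video", frame);

		if( cv::waitKey(1) == 27 )	// ESC to quit
			break;
	}

	return 0;
}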