jetson-inference code breaks when running on my other Jetson

Hi,

@dusty_nv I've been using the jetson-inference library successfully for a while now on two Jetson Nanos: Jetson 1 (4GB B01, IMX219-83 CSI camera, 128GB SD card) and Jetson 2 (4GB A01 I think, with only one CSI camera connector, IMX219-160 CSI camera, 64GB SD card). The detectnet example by itself works on both Jetsons, and my project code that uses detectNet works on Jetson 1 and used to work on Jetson 2. Below I've provided:

  1. Error log from my code on Jetson 2
  2. Modified detectnet.cpp
  3. My project code using the modified detectnet (works on Jetson 1, used to work on Jetson 2)

Can you help me pinpoint the issue? Thank you in advance! :)

  1. Here is my console output with the errors:
[WARNING] This channel is already in use, continuing anyway. Use setwarnings(false) to disable warnings. channel: 19
[WARNING] This channel is already in use, continuing anyway. Use setwarnings(false) to disable warnings. channel: 18
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM) ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video]  created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: csi://0
     - protocol:  csi
     - location:  0
  -- deviceType: csi
  -- ioType:     input
  -- width:      1280
  -- height:     720
  -- frameRate:  30
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: rotate-180
------------------------------------------------

detectNet -- loading detection network model from:
          -- model        ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
          -- input_blob   'Input'
          -- output_blob  'NMS'
          -- output_count 'NMS_1'
          -- class_labels ../networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 8.2.1
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::ScatterND version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    completed loading NVIDIA plugins.
[TRT]    detected model format - UFF  (extension '.uff')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    [MemUsageChange] Init CUDA: CPU +229, GPU +0, now: CPU 252, GPU 3888 (MiB)
[TRT]    [MemUsageSnapshot] Begin constructing builder kernel library: CPU 252 MiB, GPU 3882 MiB
[TRT]    [MemUsageSnapshot] End constructing builder kernel library: CPU 282 MiB, GPU 3899 MiB
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    found engine cache file ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8201.GPU.FP16.engine
[TRT]    found model checksum ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum
[TRT]    echo "$(cat ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum) ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff" | sha256sum --check --status
[TRT]    model matched checksum ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.sha256sum
[TRT]    loading network plan from engine cache... ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8201.GPU.FP16.engine
[TRT]    device GPU, loaded ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 286, GPU 3897 (MiB)
[TRT]    Loaded engine size: 33 MiB
[TRT]    Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT]    Using cublas as a tactic source
[TRT]    [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU -50, now: CPU 463, GPU 3857 (MiB)
[TRT]    Using cuDNN as a tactic source
[TRT]    [MemUsageChange] Init cuDNN: CPU +240, GPU +8, now: CPU 703, GPU 3865 (MiB)
[TRT]    Deserialization required 10033012 microseconds.
[TRT]    [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +33, now: CPU 0, GPU 33 (MiB)
[TRT]    Using cublas as a tactic source
[TRT]    [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1, GPU +2, now: CPU 704, GPU 3893 (MiB)
[TRT]    Using cuDNN as a tactic source
[TRT]    [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 704, GPU 3893 (MiB)
[TRT]    Total per-runner device persistent memory is 22143488
[TRT]    Total per-runner host persistent memory is 135488
[TRT]    Allocated activation device memory of size 13360640
[TRT]    [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +34, now: CPU 0, GPU 67 (MiB)
[TRT]    
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       124
[TRT]       -- maxBatchSize 1
[TRT]       -- deviceMemory 13360640
[TRT]       -- bindings     3
[TRT]       binding 0
                -- index   0
                -- name    'Input'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  3
                -- dim #0  3
                -- dim #1  300
                -- dim #2  300
[TRT]       binding 1
                -- index   1
                -- name    'NMS'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1
                -- dim #1  100
                -- dim #2  7
[TRT]       binding 2
                -- index   2
                -- name    'NMS_1'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1
                -- dim #1  1
                -- dim #2  1
[TRT]    
[TRT]    binding to input 0 Input  binding index:  0
[TRT]    binding to input 0 Input  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT]    binding to output 0 NMS  binding index:  1
[TRT]    binding to output 0 NMS  dims (b=1 c=1 h=100 w=7) size=2800
[TRT]    binding to output 1 NMS_1  binding index:  2
[TRT]    binding to output 1 NMS_1  dims (b=1 c=1 h=1 w=1) size=4
[TRT]    
[TRT]    device GPU, ../networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT]    W = 7  H = 100  C = 1
[TRT]    detectNet -- maximum bounding boxes:   100
[TRT]    loaded 91 class labels
[TRT]    detectNet -- number of object classes:  91
[TRT]    loaded 0 class colors
[TRT]    didn't load expected number of class colors  (0 of 91)
[TRT]    filling in remaining 91 class colors with default colors
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
[gstreamer] gstreamer message stream-start ==> pipeline0
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 5 
   Output Stream W = 1280 H = 720 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[TRT]    detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager recieve caps:  video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1008
[gstreamer] gstBufferManager -- recieved NVMM memory
[cuda]   allocated 4 ring buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[cuda]   allocated 4 ring buffers (2764800 bytes each, 11059200 bytes total)
[cuda]   cudaGetLastError()
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaYUV-NV12.cu:154
[cuda]   cudaNV12ToRGB(input, (uchar3*)output, width, height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaColorspace.cpp:43
[cuda]   cudaConvertColor(latestYUV, mFormatYUV, nextRGB, format, mOptions->width, mOptions->height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/codec/gstBufferManager.cpp:445
[gstreamer] gstBufferManager -- unsupported image format (rgb8)
[gstreamer]                     supported formats are:
[gstreamer]                        * rgb8
[gstreamer]                        * rgba8
[gstreamer]                        * rgb32f
[gstreamer]                        * rgba32f
[gstreamer] gstCamera::Capture() -- an error occurred retrieving the next image buffer
[TRT]    detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda]   cudaGetLastError()
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaYUV-NV12.cu:154
[cuda]   cudaNV12ToRGB(input, (uchar3*)output, width, height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaColorspace.cpp:43
[cuda]   cudaConvertColor(latestYUV, mFormatYUV, nextRGB, format, mOptions->width, mOptions->height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/codec/gstBufferManager.cpp:445
[gstreamer] gstBufferManager -- unsupported image format (rgb8)
[gstreamer]                     supported formats are:
[gstreamer]                        * rgb8
[gstreamer]                        * rgba8
[gstreamer]                        * rgb32f
[gstreamer]                        * rgba32f
[gstreamer] gstCamera::Capture() -- an error occurred retrieving the next image buffer
[TRT]    detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
  2. Modified detectnet.cpp (working)
/*
 * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include "jetson-inference/detectNet.h"
#include "jetson-utils/videoSource.h"
#include "jetson-utils/videoOutput.h"
#include "jetson-inference/detectNet.h"
#include "jetson-inference/objectTracker.h"

#include <signal.h>

detectNet *net;
bool signal_recieved = false;

void sig_handler(int signo)
{
    if (signo == SIGINT)
    {
        LogVerbose("received SIGINT\n");
        signal_recieved = true;
    }
}

int usage()
{
    printf("usage: detectnet [--help] [--network=NETWORK] [--threshold=THRESHOLD] ...\n");
    printf("                 input [output]\n\n");
    printf("Locate objects in a video/image stream using an object detection DNN.\n");
    printf("See below for additional arguments that may not be shown above.\n\n");
    printf("positional arguments:\n");
    printf("    input           resource URI of input stream  (see videoSource below)\n");
    printf("    output          resource URI of output stream (see videoOutput below)\n\n");

    printf("%s", detectNet::Usage());
    printf("%s", objectTracker::Usage());
    printf("%s", videoSource::Usage());
    printf("%s", videoOutput::Usage());
    printf("%s", Log::Usage());

    return 0;
}

int main(int argc, char **argv)
{
    /*
     * parse command line
     */
    commandLine cmdLine(argc, argv);

    if (cmdLine.GetFlag("help"))
        return usage();

    /*
     * attach signal handler
     */
    if (signal(SIGINT, sig_handler) == SIG_ERR)
        LogError("can't catch SIGINT\n");

    /*
     * create input stream
     */
    videoOptions options1;

    options1.resource = URI("csi://0");
    options1.resource.protocol = "csi";
    options1.resource.location = "0";
    options1.deviceType = videoOptions::DeviceType::DEVICE_CSI;
    options1.ioType = videoOptions::IoType::INPUT;
    options1.width = 1280;
    options1.height = 720;
    options1.frameRate = 30;
    options1.numBuffers = 4;
    options1.zeroCopy = true;
    options1.flipMethod = videoOptions::FlipMethod::FLIP_NONE;

    videoSource *input = videoSource::Create(options1);

    if (!input)
    {
        LogError("detectnet:  failed to create input stream\n");
        return 1;
    }

    /*
     * create output stream
     */
    videoOptions options2;

    options2.resource = "display://0"; // Specify the display URI
    options2.resource.protocol = "display";
    options2.resource.location = "0";
    options2.deviceType = videoOptions::DeviceType::DEVICE_DISPLAY;
    options2.ioType = videoOptions::IoType::OUTPUT;
    options2.width = 1920;
    options2.height = 1080;
    options2.frameRate = 30; // Adjust as needed
    options2.numBuffers = 4;
    options2.zeroCopy = true;

    videoOutput *output = videoOutput::Create(options2);

    if (!output)
    {
        LogError("detectnet:  failed to create output stream\n");
        return 1;
    }

    /*
     * create detection network
     */

    float detection_thresh = (float)0.7;    // currently unused
    uint32_t max_batch_size = (uint32_t)4;  // currently unused
    uint32_t min_frames = (uint32_t)25;     // params for the IOU tracker (creation commented out below)
    uint32_t drop_frames = (uint32_t)50;
    float overlap_thresh = (float)0.5;

    const char *model = "../../../../SquirrelDefender/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff";
    const char *class_labels = "../../../../SquirrelDefender/networks/SSD-Mobilenet-v2/ssd_coco_labels.txt";
    float thresh = (float)0.5;
    const char *input_blob = "Input";
    const char *output_blob = "NMS";
    Dims3 inputDims(3, 720, 1280);
    const char *output_count = "NMS_1";

    // Validate input/output blobs
    if (input_blob == nullptr || output_blob == nullptr)
    {
        LogError("Invalid input or output blob names\n");
        return 1;
    }

    net = detectNet::Create(model, class_labels, thresh, input_blob, inputDims, output_blob, output_count);

    if (!net)
    {
        LogError("detectnet:  failed to load detectNet model\n");
        return 1;
    }

    // net->SetTracker(objectTrackerIOU::Create(min_frames, drop_frames, overlap_thresh));

    // parse overlay flags
    const uint32_t overlayFlags = detectNet::OverlayFlagsFromStr(cmdLine.GetString("overlay", "box,labels,conf"));

    /*
     * processing loop
     */
    while (!signal_recieved)
    {
        // capture next image
        uchar3 *image = NULL;
        int status = 0;

        if (!input->Capture(&image, &status))
        {
            if (status == videoSource::TIMEOUT)
                continue;

            break; // EOS
        }

        // detect objects in the frame
        detectNet::Detection *detections = NULL;

        const int numDetections = net->Detect(image, input->GetWidth(), input->GetHeight(), &detections, overlayFlags);

        if (numDetections > 0)
        {
            LogVerbose("%i objects detected\n", numDetections);

            for (int n = 0; n < numDetections; n++)
            {
                LogVerbose("\ndetected obj %i  class #%u (%s)  confidence=%f\n", n, detections[n].ClassID, net->GetClassDesc(detections[n].ClassID), detections[n].Confidence);
                LogVerbose("bounding box %i  (%.2f, %.2f)  (%.2f, %.2f)  w=%.2f  h=%.2f\n", n, detections[n].Left, detections[n].Top, detections[n].Right, detections[n].Bottom, detections[n].Width(), detections[n].Height());

                if (detections[n].TrackID >= 0) // is this a tracked object?
                    LogVerbose("tracking  ID %i  status=%i  frames=%i  lost=%i\n", detections[n].TrackID, detections[n].TrackStatus, detections[n].TrackFrames, detections[n].TrackLost);
            }
        }

        // render outputs
        if (output != NULL)
        {
            output->Render(image, input->GetWidth(), input->GetHeight());

            // update the status bar
            char str[256];
            sprintf(str, "TensorRT %i.%i.%i | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, precisionTypeToStr(net->GetPrecision()), net->GetNetworkFPS());
            output->SetStatus(str);

            // check if the user quit
            if (!output->IsStreaming())
                break;
        }

        // print out timing info
        net->PrintProfilerTimes();
    }

    /*
     * destroy resources
     */
    LogVerbose("detectnet:  shutting down...\n");

    SAFE_DELETE(input);
    SAFE_DELETE(output);
    SAFE_DELETE(net);

    LogVerbose("detectnet:  shutdown complete.\n");
    return 0;
}
  3. My project code (same as detectnet.cpp; works on Jetson 1, but gives the errors above on Jetson 2)

Hi @crose72, it looks like videoSource got a timeout capturing the first frame from the camera (this more typically happens with RTP/RTSP network streams, as it takes time to negotiate the connection), and the error handling for it in detectnet.cpp is not working…

In that case, normally the videoSource::TIMEOUT status code is returned, and then that image is not processed. But from the detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters message in the log, it appears that is not occurring and the NULL image is still being passed to detectNet.

Can you also add a check for if( image == NULL) continue; after the capture call?

Hopefully then the next frame (or in a couple frames), the nvarguscamerasrc starts delivering frames and they start flowing through the pipeline.
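
For reference, a minimal sketch of that guard dropped into detectnet.cpp's capture loop (mirroring the processing loop shown above) could look like this:

while (!signal_recieved)
{
    // capture next image
    uchar3 *image = NULL;
    int status = 0;

    if (!input->Capture(&image, &status))
    {
        if (status == videoSource::TIMEOUT)
            continue;

        break; // EOS
    }

    // guard against a NULL frame slipping past the status check
    if (image == NULL)
        continue;

    // ... Detect() / Render() as before ...
}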

My function is called once per loop iteration, so I can't put a continue statement in there (it isn't inside a loop, so it won't compile). But your suggestion did make me look closely at where I call the Capture() function and the Detect() function. The changes I made below did fix this issue: [TRT] detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters.

Summary of changes:
1) Only update the global image variable if the local image isn't NULL.
2) Only call the Detect() function if the global image variable isn't NULL.

bool Video::capture_image(void)
{
    int status = 0;
    uchar3 *temp_image = NULL;

    if (!input->Capture(&temp_image, &status))
    {
        if (status != videoSource::TIMEOUT)
        {
            return false;
        }
    }

    // Checking for valid image, Capture may return true while image may still be NULL
    if (temp_image == NULL)
    {
        valid_image_rcvd = false;
        return false; // Return false if the image is not valid
    }

    image = temp_image;
    valid_image_rcvd = true;

    return true;
}

void Detection::detect_objects(void)
{
    uint32_t overlay_flags = 0;

    overlay_flags = overlay_flags | detectNet::OVERLAY_LABEL | detectNet::OVERLAY_CONFIDENCE | detectNet::OVERLAY_TRACKING | detectNet::OVERLAY_LINES;

    if (overlay_flags > 0 && image != NULL)
    {
        numDetections = net->Detect(image, input->GetWidth(), input->GetHeight(), &detections, overlay_flags);
    }
    else if (image != NULL)
    {
        numDetections = net->Detect(image, input->GetWidth(), input->GetHeight(), &detections);
    }
    else
    {
        // image is NULL -- skip detection this frame
    }
}
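
For context, a rough sketch of how these two pieces get called each iteration (the instance names here are placeholders, not from the actual project):

while (!signal_recieved)
{
    if (video.capture_image())      // only updates the shared image pointer on a valid frame
    {
        detection.detect_objects(); // Detect() is only reached when image != NULL
    }
}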

@dusty_nv Now that the invalid parameters issue is resolved, I'm left with these:

CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager recieve caps:  video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1008
[gstreamer] gstBufferManager -- recieved NVMM memory
[cuda]   allocated 4 ring buffers (8 bytes each, 32 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
nvbuf_utils: dmabuf_fd 1221 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
NvBufferGetParams failed for dst_dmabuf_fd
nvbuffer_transform Failed
[cuda]   allocated 4 ring buffers (2764800 bytes each, 11059200 bytes total)
[cuda]   cudaGetLastError()
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaYUV-NV12.cu:154
[cuda]   cudaNV12ToRGB(input, (uchar3*)output, width, height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaColorspace.cpp:43
[cuda]   cudaConvertColor(latestYUV, mFormatYUV, nextRGB, format, mOptions->width, mOptions->height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/codec/gstBufferManager.cpp:445
[gstreamer] gstBufferManager -- unsupported image format (rgb8)
[gstreamer]                     supported formats are:
[gstreamer]                        * rgb8
[gstreamer]                        * rgba8
[gstreamer]                        * rgb32f
[gstreamer]                        * rgba32f
[gstreamer] gstCamera::Capture() -- an error occurred retrieving the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer

The

nvbuf_utils: dmabuf_fd 1221 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
NvBufferGetParams failed for dst_dmabuf_fd
nvbuffer_transform Failed

and the cudaGetLastError(), cudaNV12ToRGB(), and cudaConvertColor() errors are new to me. Thoughts on these? The "waiting for the next image buffer" timeouts are also new, but I think the errors above are the cause of those.

@crose72 this is an error happening in the NVMM mapping for zero-copy. Can you try recompiling jetson-inference with cmake ../ -DENABLE_NVMM=off?

@dusty_nv your suggestion worked for that issue; now all that's left is this:

[cuda]   cudaGetLastError()
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaYUV-NV12.cu:154
[cuda]   cudaNV12ToRGB(input, (uchar3*)output, width, height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/cuda/cudaColorspace.cpp:43
[cuda]   cudaConvertColor(latestYUV, mFormatYUV, nextRGB, format, mOptions->width, mOptions->height, stream)
[cuda]      invalid resource handle (error 400) (hex 0x190)
[cuda]      /home/crose72/Documents/GitHub/jetson-inference/utils/codec/gstBufferManager.cpp:445
[gstreamer] gstBufferManager -- unsupported image format (rgb8)
[gstreamer]                     supported formats are:
[gstreamer]                        * rgb8
[gstreamer]                        * rgba8
[gstreamer]                        * rgb32f
[gstreamer]                        * rgba32f
[gstreamer] gstCamera::Capture() -- an error occurred retrieving the next image buffer

The only difference between the previous error logs and this one is that I'm only seeing one gstreamer buffer error at the end of these CUDA errors, and then they repeat.

Hmm, the fact that detectnet runs fine for you, coupled with this invalid resource handle (which typically seems to be related to CUDA contexts), makes me think that it could be threading related. Is your application using multiple threads, or running videoSource/detectNet/videoOutput in different threads?

You can try using cudaSetDevice() at the beginning of thread execution to see if that helps. I take it there were no other CUDA errors higher up in the log?
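
For anyone following along, a minimal sketch of that suggestion (the worker thread and shutdown flag here are hypothetical; cudaSetDevice() is the only part from the suggestion above):

#include <cuda_runtime.h>
#include <thread>
#include <atomic>

std::atomic<bool> stop_requested(false); // hypothetical shutdown flag

void capture_worker() // hypothetical worker thread
{
    cudaSetDevice(0); // bind this thread to GPU 0 before it makes any CUDA calls
    while (!stop_requested)
    {
        // ... Capture() / Detect() / Render() work owned by this thread ...
    }
}

int main()
{
    std::thread worker(capture_worker);
    // ... main thread work ...
    stop_requested = true;
    worker.join();
    return 0;
}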

My videoSource/detectNet/videoOutput logic is spread out across different functions that share global variables. For example, videoSource *input is shared across files with detectNet, detectNet *net gets passed to the function that outputs the video, and videoOutput *output does consume net. Is this what you're talking about? The only other thing related to threads that I can think of is this bit where I try to maintain a specific loop rate:

auto loop_duration = std::chrono::duration_cast<std::chrono::milliseconds>(loop_end_time - loop_start_time);
while (loop_duration < std::chrono::milliseconds(25))
{
    std::this_thread::sleep_for(std::chrono::milliseconds(1)); // wait in small increments
    loop_end_time = std::chrono::steady_clock::now();
    loop_duration = std::chrono::duration_cast<std::chrono::milliseconds>(loop_end_time - loop_start_time);
}

But really, the only difference between my application and detectnet is that instead of everything being local variables in the main function, I've separated out the creation of the video input/output and the detection network, and created some global variables that are passed between them. Do you want to see the logic? I don't want to make you do a deep dive of my code, but it's pretty straightforward if you want me to include it here.

And you are correct that there are no other errors higher up in the log; there's a warning about class colors falling back to defaults, and a warning about using an engine plan across different devices (it's the same device, though).

I have some more information that might be related: I saw nvbuf_utils: dmabuf_fd 1243 mapped entry NOT found (or maybe it was 1224, slightly different from the 1221 above) whenever I either had my camera physically upside down, or used the FLIP_ROTATE_180 option when creating the videoSource input/output.

I sporadically get that too, and although it seems to be a warning and not cause issues, in your case I wanted to eliminate the possibility. You can always turn it back to -DENABLE_NVMM=on like you have been using and see if it makes any difference.

Cool project! I would like a drone like that for sure. In this case, I would recommend increasing the options.numBuffers of the videoSource (to 30 or so) and seeing if that causes any change, or making a copy of the frame to use, or temporarily trying a different camera resolution. You can also add debug printf's into jetson-inference/jetson-utils to make sure the pointers are what you expect. It seems related to the memory accesses, but that's tricky to put a finger on by eyeballing the code.
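
A rough sketch of the first two suggestions, assuming cudaAllocMapped() from jetson-utils for the copy buffer (the imageCopy buffer and its placement are illustrative, not from the project):

#include "jetson-utils/cudaMappedMemory.h" // for cudaAllocMapped (assumed header)

// 1) larger capture ring buffer when creating the videoSource
options1.numBuffers = 30; // default in the code above was 4

// 2) work from a private copy of each frame so the ring-buffer slot can be recycled
//    (imageCopy is a hypothetical buffer, allocated once and reused every frame)
uchar3* imageCopy = NULL;
const size_t imageSize = input->GetWidth() * input->GetHeight() * sizeof(uchar3);

if (!cudaAllocMapped((void**)&imageCopy, imageSize))
    LogError("failed to allocate frame copy buffer\n");

// after each successful Capture():
CUDA(cudaMemcpy(imageCopy, image, imageSize, cudaMemcpyDeviceToDevice));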
