"Cuda Error in NCHWTONCHHW2: 33 (invalid resource handle) ",How to solve it?

But this applies to the entire session; I want to ask how I can target multiple models within the same session?

I tried the methods used for x86, and they didn't seem to work.

Or, if both TRT and a pb file are enabled, how do you make the pb file run on the CPU? (on a Jetson Nano)

Hi,

Our official TensorFlow package also has CPU implementation.
So you just need to apply the config to make TensorFlow run on CPU.

TensorFlow also supports model/layer device placement:

with tf.device('/cpu:0'):
    ...
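
For example, a minimal sketch of both options, assuming a TF 1.x package (the constants below are only placeholders for your own model):

import tensorflow as tf

# Option 1: hide the GPU for the whole session (TF 1.x style config)
config = tf.ConfigProto(device_count={'GPU': 0})
sess = tf.Session(config=config)

# Option 2: pin specific ops/layers to the CPU via device placement
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b  # this op is placed on the CPU

print(sess.run(c))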

You can check the tutorial for some details.

Thanks.

"with tf.device('/cpu:0'):"
has no effect; I still get the error shown in the title:
Cuda Error in NCHWToNCHHW2: 33 (invalid resource handle)

Hi,

Have you fixed this issue?
Since this topic has been open for a while, would you mind sharing the current status with us?

Thanks.

Hi!

I'm trying to use the mars-small model with TRT, but it seems some layers are not supported.
Any guidance on how I can solve this? It causes memory overflows on a Jetson NX.

Thanks for your help

Hi!
We seem to be facing the same problem. I am also trying TensorRT and TensorFlow (which use PyCUDA) with the mars-small model.
I wonder whether TensorFlow-CPU works? Is there another good way?
If you have one, please share it with me.
Thank you!

Hi, both

Do you still meet the same error as in the original post?
If yes, would you mind sharing a simple reproducible source with us?
If not, please file a new topic for your issue.

Thanks.

Hi @AastaLLL, @kayccc,

I have just run into the same problem as well!

ERROR LOG:

[TensorRT] ERROR: …/rtSafe/cuda/reformat.cu (925) - Cuda Error in NCHWToNCHHW2: 400 (invalid resource handle)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception

Let me brief you on what I have done!

I have converted a YOLOv3 model to ONNX, and from ONNX to a TRT model.
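
For reference, the ONNX-to-TRT step looked roughly like this (a minimal sketch using a TensorRT 6/7-style Python API; the file names are just placeholders for my model):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def build_engine(onnx_path, engine_path):
    # Parse the ONNX file and build a serialized TensorRT engine
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(EXPLICIT_BATCH) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28  # 256 MB, small enough for a Jetson
        builder.fp16_mode = True              # use FP16 where supported
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        engine = builder.build_cuda_engine(network)
        with open(engine_path, 'wb') as f:
            f.write(engine.serialize())
        return engine

build_engine('yolov3-custom-416.onnx', 'yolov3-custom-416.trt')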

Now I am trying to run inference on the received images using Socket.IO. Here is my code:

“”"trt_yolo.py

This script demonstrates how to do real-time object detection with
TensorRT optimized YOLO engine.
“”"

import the necessary packages

import os
import time
import argparse
import numpy as np
import cv2
import pycuda.autoinit # This is needed for initializing CUDA driver
import socketio
import base64
from utils.yolo_classes import get_cls_dict
from utils.camera import add_camera_args, Camera
from utils.display import open_window, set_display, show_fps
from utils.visualization import BBoxVisualization
from utils.yolo_with_plugins import TrtYOLO

global conf_th

conf_th = 0.3
‘’’
Loading Model
‘’’

global trt_yolo

trt_yolo = TrtYOLO(“yolov3-custom-416”, (416, 416), 3)

print (“trt_yolo ==>”, trt_yolo )

‘’’
Shinobi Plugin Variables
‘’’
PLuginName = “abcd”
PluginKey = “abcd123123”
Host = ‘http://192.168.0.109:9090

‘’’
Socker IO Connection with Reconnection
‘’’
sio = socketio.Client(reconnection=True,reconnection_delay=1,ssl_verify = False)
sio.connect(Host,transports=‘websocket’)
sio.emit(‘ocv’,
{‘f’:‘init’,‘plug’:PLuginName,‘type’:‘detector’,‘connectionType’:‘websocket’,‘pluginKey’:PluginKey})

#Socket IO Connection Event , Built in Reconneciton Logic
@sio.event
def connect():
print(‘connection established :’)
sio.emit(‘ocv’,
{‘f’:‘init’,‘plug’:PLuginName,‘type’:‘detector’,‘connectionType’:‘websocket’,‘pluginKey’:PluginKey})

#Socket IO Re Connection Event
@sio.event
def reconnect():
print (“Reconnection established :”)
sio.emit(‘ocv’,
{‘f’:‘init’,‘plug’:PLuginName,‘type’:‘detector’,‘connectionType’:‘websocket’,‘pluginKey’:PluginKey})

#Socket IO Disconnect Event
@sio.event
def disconnect():
print(‘disconnected from server’)

def yolo_detection(img_np,trt_yolo,recvdImg,height, width,shinobiId,shonibiKe):
frame = img_np
trt_yolo = trt_yolo
print (“trt_yolo_YOLODETECTION”, trt_yolo)
(h, w) = frame.shape[:2]

recvdImg = recvdImg
boxes, confs, clss = trt_yolo.detect(frame, conf_th)
print ("boxes ", boxes)
print ("confs", confs)
print ("clss" , clss)

#f event ! , Frame will be revived in this fucntion
@sio.event
def f(data):
# print (“on_f”)
#print (“Data”,data)
# print (“type(data)”,type(data))
# print (“data[ke]”,data.get(“ke”))
# print (“data[f]”,data.get(“f”))
# print (“data[id]”,data.get(“id”))
Id = data.get(“id”)
Ke = data.get(“ke”)
#print (“data[frame]”,data.get(“frame”))
recvdImg = data.get(“frame”)
#print (“Type of Image”, type(recvdImg))
#print (“Length of Image”,len(recvdImg))
nparr = np.fromstring(recvdImg, np.uint8)
print (“trt_yolo ON F!! ==>”, trt_yolo )
img_np = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
#img_np = cv2.resize(img_np,inputShape,interpolation = cv2.INTER_AREA)
#print (“Image Recieved !!!”)
#cv2.imwrite(‘recvdImg.jpg’,img_np)
yolo_detection(img_np,trt_yolo,recvdImg,img_np.shape[0],img_np.shape[1],shinobiId,shonibiKe)

It will wait for socket io events!!

sio.wait()

Hi bhargav.ravat,

Please open a new topic for your issue. Thanks.