Hi Dusty,
I did a complete re-install of JetPack. I have PyTorch up and running, but I am having a problem with a program I am running. This is the part that raises the error:
net=jetson.inference.imageNet('alexnet',['--model= /home/jetson/Downloads/jetson-inference/python/training/classfication/myModel/resnet18.onnx','--input_blob=input_0','--output_blob=output_0','--labels= /home/jetson/Downloads/jetson-inference/myTrain/labels.txt'])
This is what I get when running the program:
jetson@jetson-desktop:~/Desktop/pyPro$ /usr/bin/python3 /home/jetson/Desktop/pyPro/NVIDIA/deepLearning-10.py
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module source reported: Could not read from resource.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
jetson.inference -- imageNet loading network using argv command line params
imageNet -- loading classification network model from:
-- prototxt (null)
-- model /home/jetson/Downloads/jetson-inference/python/training/classfication/myModel/resnet18.onnx
-- class_labels /home/jetson/Downloads/jetson-inference/myTrain/labels.txt
-- input_blob 'input_0'
-- output_blob 'output_0'
-- batch_size 1
[TRT] TensorRT version 8.0.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +203, GPU +0, now: CPU 234, GPU 3874 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .1.1.8001.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
error: model file ' /home/jetson/Downloads/jetson-inference/python/training/classfication/myModel/resnet18.onnx' was not found.
if loading a built-in model, maybe it wasn't downloaded before.
Run the Model Downloader tool again and select it for download:
$ cd /tools
$ ./download-models.sh
[TRT] failed to load /home/jetson/Downloads/jetson-inference/python/training/classfication/myModel/resnet18.onnx
[TRT] imageNet -- failed to initialize.
jetson.inference -- imageNet failed to load built-in network 'alexnet'
Traceback (most recent call last):
File "/home/jetson/Desktop/pyPro/NVIDIA/deepLearning-10.py", line 17, in <module>
net=jetson.inference.imageNet('alexnet',['--model= /home/jetson/Downloads/jetson-inference/python/training/classfication/myModel/resnet18.onnx','--input_blob=input_0','--output_blob=output_0','--labels= /home/jetson/Downloads/jetson-inference/myTrain/labels.txt'])
Exception: jetson.inference -- imageNet failed to load network
jetson@jetson-desktop:~/Desktop/pyPro$
I have checked the path many times to see if I have it right, and it looks good to me. I am just trying to recognize one model. It seemed to train OK, and the export from PyTorch to ONNX with onnx_export.py also seemed to run fine.
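In case it helps, here is a small sketch of the check I can run from Python to verify the model file at that exact path string (the string below is copied from my script, including any whitespace; the stripped variant is checked too, just in case stray whitespace from copy/paste matters):

```python
import os

# Exact string passed to --model= in my script, including any leading whitespace.
model_path = " /home/jetson/Downloads/jetson-inference/python/training/classfication/myModel/resnet18.onnx"

# os.path.exists is sensitive to stray spaces, so checking the raw string and
# the stripped string separately shows whether whitespace is part of the problem.
print("raw string exists:     ", os.path.exists(model_path))
print("stripped string exists:", os.path.exists(model_path.strip()))
```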
I am running PyTorch 1.10.0
L4T 32.6.1
TensorRT 8.0.1.6
I hope that I have given you enough information to go on.
Thank you for your time & help,
Brent