Hi … I have trained a detectnet_v2 model with the TLT training toolkit on my custom dataset, and I am able to get a .etlt file.
My next step is to write a Python module that loads that .etlt model (file) and runs predictions on sample images.
How can I do this?
Which device will you run inference on?
For example, if you want to run inference in Jetson Nano, you should download tlt-converter jetson version, copy etlt model into Nano and then generate trt engine in Nano.
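Once a .trt engine exists on the device, the "run predictions from Python" half of the question can be sketched with the TensorRT Python API. This is a hedged sketch, not a verified script: the binding names (output_bbox/BiasAdd, output_cov/Sigmoid) and the NCHW float32 input follow the detectnet_v2 dims mentioned later in this thread, and the calls follow the TRT 7-era pycuda/TensorRT API.

```python
import numpy as np

try:
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401  creates a CUDA context
    import pycuda.driver as cuda
    HAVE_TRT = True
except ImportError:
    HAVE_TRT = False  # TensorRT/pycuda exist only on the target device


def load_engine(path, logger=None):
    """Deserialize a .trt engine produced by tlt-converter."""
    logger = logger or trt.Logger(trt.Logger.WARNING)
    with open(path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())


def infer(engine, batch):
    """One synchronous inference; batch is a contiguous float32 NCHW
    array matching the -d dims passed to tlt-converter (3,384,1248)."""
    context = engine.create_execution_context()
    # Allocate host/device buffers for every binding (1 input plus the
    # 2 detectnet_v2 outputs: output_bbox/BiasAdd, output_cov/Sigmoid).
    bindings, host_bufs = [], []
    for i in range(engine.num_bindings):
        size = trt.volume(engine.get_binding_shape(i))
        host = cuda.pagelocked_empty(size, np.float32)
        dev = cuda.mem_alloc(host.nbytes)
        bindings.append(int(dev))
        host_bufs.append((host, dev))
    # Copy the input in, run, and copy the outputs back out.
    np.copyto(host_bufs[0][0], batch.ravel())
    cuda.memcpy_htod(host_bufs[0][1], host_bufs[0][0])
    context.execute_v2(bindings)
    outs = []
    for host, dev in host_bufs[1:]:
        cuda.memcpy_dtoh(host, dev)
        outs.append(host.copy())
    return outs
```

On a Jetson you would call `engine = load_engine("resnet18_detector.trt")` and then `infer(engine, preprocessed_batch)`; the bbox/coverage outputs still need detectnet_v2 post-processing (grid-cell decoding and clustering) before they become boxes.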
@Morganh
After referring to the documentation you suggested, I am getting this error.
./tlt-converter -k OGQ4ZGw5cXN2M3QwNDJsNGxpbnRsNXJuOHY6OTQ4ZmU2ZTAtZDcyYy00MzE0LTk1ZjEtMTgyMDJkYWFIMDgw -d 3,384,1248 -o output_bbox/BiasAdd,output_cov/Sigmoid -e /home/senquire-nano/Documents -b 1 input_file /home/senquire-nano/Documents/tlt_exp_niharika
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped).
You are missing the etlt model file in your command line.
Besides the tlt user guide, you can also launch the tlt docker and run its jupyter notebook to see how to run tlt-converter.
@Morganh
Still on the same page; I am not sure why exactly this is happening.
senquire-nano@senquire-nano:~/Downloads$ ./tlt-converter ./resnet18_detector.etlt \
-k OGQ4ZGw5cXN2M3QwNDJsNGxpbnRsNXJuOHY6OTQ4ZmU2ZTAtZDcyYy00MzE0LTk1ZjEtMTgyMDJkYWFIMDgw \
-o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,384,1248 -e resnet18_detector.trt -b 1
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
thanks
@Morganh
Now that error has gone, but when I start the conversion I get this:
[WARNING] Int8 support requested on hardware without native Int8 support, performance will be negatively affected.
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Starting Calibration with batch size 1.
[INFO] Post Processing Calibration data in 9.636e-06 seconds.
[INFO] Calibration completed in 96.0879 seconds.
[ERROR] Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
[ERROR] Builder failed while configuring INT8 mode.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
The command which I ran is:
./tlt-converter ./resnet18_detector.etlt \
-k OGQ4ZGw5cXN2M3QwNDJsNGxpbnRsNXJuOHY6OTQ4ZmU2ZTAtZDcyYy00MzE0LTk1ZjEtMTgyMDJkYWFlMDgw \
-o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,384,1248 -i nchw -m 64 -t int8 \
-e resnet18_detector.trt -b 1
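The calibration failure above usually means tlt-converter was asked for an int8 engine (`-t int8`) without a calibration cache; the tool's options include `-c` for supplying one (produced during tlt-export). On hardware without native int8 support (exactly what the warning says), fp16 is the usual fallback. A hedged variant of the command, with `$NGC_KEY` and the file names as placeholders:

```shell
# Hedged sketch, not a verified fix: fp16 (-t fp16) skips the int8
# calibration step entirely; for -t int8 a calibration cache must be
# supplied with -c.  Guarded so it is harmless off the target device.
ENGINE=resnet18_detector_fp16.trt
if command -v tlt-converter >/dev/null 2>&1; then
    tlt-converter ./resnet18_detector.etlt \
        -k "$NGC_KEY" \
        -o output_bbox/BiasAdd,output_cov/Sigmoid \
        -d 3,384,1248 -i nchw \
        -t fp16 \
        -e "$ENGINE" -b 1
else
    echo "tlt-converter not on PATH; run this on the target device"
fi
```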
thanks
@Morganh that error has gone; I am finally able to generate an engine (.trt) file using tlt-converter. Now I am trying to load the engine file in the peoplenet script, but I am getting errors like …
Traceback (most recent call last):
  File "tlt.py", line 250, in <module>
    detection_out, keepCount_out = predict(image, model_w, model_h)
  File "tlt.py", line 115, in predict
    img = process_image(image, model_w, model_h)
  File "tlt.py", line 96, in process_image
    image = Image.fromarray(np.int(arr))
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
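The 'NoneType' in the last frame means `arr` is already None before it reaches `np.int`, which typically happens when the image loader (e.g. `cv2.imread` on a wrong path) silently returned None. There is a second bug waiting behind it: `np.int(arr)` cannot convert a whole array; the element-wise cast is `arr.astype(np.uint8)`. A minimal sketch of a guarded version, with the function and argument names mirroring the traceback (not the thread's actual script):

```python
import numpy as np


def process_image(arr, model_w, model_h):
    # cv2.imread returns None on a bad path; fail loudly here instead
    # of letting it surface later as the NoneType error above.
    if arr is None:
        raise ValueError("image failed to load; check the file path")
    # np.int(arr) raises TypeError for arrays; cast element-wise instead.
    return arr.astype(np.uint8)


# usage with a dummy frame at the detectnet_v2 input resolution
img = process_image(np.zeros((384, 1248, 3)), 1248, 384)
```

With the array cast fixed, `Image.fromarray(...)` then receives a uint8 HWC array, which is what PIL expects.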