Just reminding everyone that JetPack 4.4 sucks

What errors are you getting? If it happens when loading the networks, you might need to remove the .engine files from under jetson-inference/data/networks. Or just delete all the networks and run the Model Downloader tool again.
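
If you want to clear them all at once, here is a minimal Python sketch (assuming jetson-inference was cloned into your home directory; adjust the path to wherever your copy lives):

import pathlib

# assumed install location; change this if you cloned jetson-inference elsewhere
networks = pathlib.Path.home() / "jetson-inference" / "data" / "networks"

# remove any cached TensorRT engines so they get rebuilt on the next run
for engine in networks.rglob("*.engine"):
    print("removing stale engine:", engine)
    engine.unlink()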

Do you mean that you had problems installing the TensorFlow 2.2 wheel, or that your application had problems running with TensorFlow 2?

If the latter, TensorFlow 2 has API changes that some applications need to be updated for. Otherwise, those programs can continue to use TensorFlow 1.x.
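
For example, a TF1-style script can usually keep running on the TensorFlow 2.x wheel through the compatibility module; here is a minimal sketch (not specific to your application):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # keep graph/session semantics on a TF2 install

# build a tiny TF1-style graph
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 4], name="input")
    y = tf.layers.dense(x, units=2)

# run it the TF1 way, with an explicit session and feed_dict
with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))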

Forgot to mention that I already re-downloaded the networks. I don’t see any .engine files in that folder. Here’s the error I’m receiving:

[TRT] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TRT] INVALID_STATE: std::exception
[TRT] INVALID_CONFIG: Deserialize the cuda engine failed.
[TRT] device GPU, failed to create CUDA engine
detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load built-in network 'ssd-mobilenet-v2'
PyTensorNet_Dealloc()
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "jetson2.py", line 210, in NET_SETUP
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=thresh)
Exception: jetson.inference -- detectNet failed to load network

Thanks for your help.

If you go into jetson-inference/data/networks/SSD-Mobilenet-v2, check whether there is a *.engine file in there. If so, delete it and re-run the program, and it will re-generate the TensorRT engine to match the new version.

I had tried to work around this by adding the TensorRT version to the .engine filename, so that on a different version it would automatically re-generate the engine, but I think I need to add additional logic so that if the engine fails to load, it does the re-generation then instead.
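
To illustrate the idea, here is a rough Python sketch of that caching logic (not the actual jetson-inference C++ code; build_engine and deserialize_engine are hypothetical stand-ins for the real TensorRT calls):

import os
import tensorrt as trt  # the TensorRT Python module that ships with JetPack

def load_or_build(model_path, build_engine, deserialize_engine):
    # tag the cached engine with the TensorRT version, so upgrading JetPack
    # changes the filename and naturally triggers a rebuild
    engine_path = "%s.%s.engine" % (model_path, trt.__version__)

    if os.path.isfile(engine_path):
        try:
            return deserialize_engine(engine_path)
        except Exception:
            os.remove(engine_path)  # stale/incompatible cache, rebuild below

    # no usable cached engine, so build one and save it for next time
    engine = build_engine(model_path)
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())
    return engine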


That fixed it, thank you!
Are there any performance gains with JP4.4 on the Nano?

The 2.2 wheel found here would not install/work

@vondalej
Your feedback is appreciated. In the future, in addition to Docker images of OpenCV, I may also publish apt packages so running the build script is not necessary. In any case, the release will need to be upgraded when underlying dependencies change. That’s mostly unavoidable on any Linux platform with this sort of packaging.

For now, you must build OpenCV from master (4.4.0-pre), since unpatched 4.3.0 has an issue with cuDNN. With my script, as mentioned on GitHub, that's just ./build_opencv.sh master

The JetsonHacks version may work as well, but I am not sure when it was last updated, so things like cuDNN support may not be available.

In the future, a fully containerized Docker approach may help to mitigate this, since the underlying libraries won't be tied to the release. That still leaves driver compatibility issues, but it's an improvement, and the same issues apply on x86-64. FWIW, I currently provide releases of the newest OpenCV for JetPack 4.3, 4.4 preview, and 4.4 GA.

https://hub.docker.com/r/mdegans/tegra-opencv/tags

Example run on 4.4 GA:

(sudo) docker run --user $(id -u):$(cut -d: -f3 < <(getent group video)) --runtime nvidia -it --rm mdegans/tegra-opencv:latest

I have an NX with JetPack 4.4 DP, with OpenCV compiled with contrib, CUDA, DNN, and cuDNN. For the upgrade to JetPack 4.4 GA, I thought I would just recompile my OpenCV 4.3. Should I upgrade to 4.4-pre or later, if one is available?
Thanks

@fpsychosis
You will have to rebuild when you upgrade to 4.4 GA, or use a pre-built docker image. To rebuild using my script, just do ./build_opencv.sh master. This will build the current version of OpenCV that’s in master (4.4.0-pre). 4.3.0 does not build due to changes in cuDNN.


We are trying to run the pretrained PeopleNet model on DeepStream 5.0, but it fails to run successfully. What can we do to run the pretrained model on DeepStream 5.0?

Please give a link to download JetPack 4.4 DP.

What is the use of the .engine file when we try to run a pretrained model on DeepStream 5.0? Is the .engine file already present in the folder, or not?

Hi @shankarjadhav232, here is the link to Nano SD card image for JetPack 4.4 DP: https://developer.nvidia.com/jetson-nano-sd-card-image-44-dp

Please give me the link to download JetPack 4.4 DP for the Jetson Nano Developer Kit.

Please give me the link to download DeepStream 5.0 DP for the Jetson Nano Developer Kit.

https://developer.nvidia.com/jetson-nano-sd-card-image-44-dp

https://developer.nvidia.com/deepstream-getting-started

Add TensorRT to the list of things JetPack 4.4 bricked…

Can you provide information about the issue you are having?

I was trying out one of the featured projects seen here…

When running the script to compile the MobileNet SSD model [compile_ssd_mobilenet.py], I get this:

2020-07-15 10:40:40.450457: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:486] There are 1849 ops of 28 different types in the graph that are not converted to TensorRT: Fill, Merge, Switch, Range, ConcatV2, ZerosLike, Identity, NonMaxSuppressionV3, Squeeze, Mul, ExpandDims, Unpack, TopKV2, Cast, Transpose, Placeholder, Sub, Const, Greater, Shape, Where, Reshape, NoOp, GatherV2, AddV2, Pack, Minimum, StridedSlice, (For more information see https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#supported-ops).
2020-07-15 10:40:41.872395: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:633] Number of TensorRT candidate segments: 2
2020-07-15 10:40:42.116825: F tensorflow/core/util/device_name_utils.cc:92] Check failed: IsJobName(job)
Aborted (core dumped)

The developer says it worked on previous versions, so this appears to be another 4.4 issue.

I posted some related topics/workarounds to your GitHub issue here:

I also saw the way to fix this on JK Jung’s blog about JetPack 4.3, and it appears this project hadn’t been updated/tested since JetPack 4.2.