I would like to ask if I can train a new model on the NX?

I am trying to train a YOLOv4 model on the NX, but it shows:
used slow CUDNN algo without workspace
Can the NX be used for training or not?


Hi @wilicyy

Training would be a real challenge for the NX. The NX can train models, but it takes a long time, so it is not recommended to use the NX for training. To save time, you should train on a separate host machine instead.

It might help you to take a look at this training documentation.

Best wishes

Here are some training examples that run on Jetson using PyTorch:

Thank you a lot,
I will study these PyTorch examples immediately.
Also, can you help me find an example of darknet/YOLOv4 training on the Jetson NX?

I have already read jetson-inference and am trying it on the NX now,
but cmake fails when I run it.

But I did that already.
What should I do?

Hi @wilicyy, you need to run git submodule update --init in your top-level jetson-inference directory:

$ cd ~/jetson-inference
$ git submodule update --init
$ cd build
$ cmake ../
$ make
$ sudo make install

Or you can use the jetson-inference container instead. If you use the container, you won’t need to build the project yourself or wait while it installs PyTorch.
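For reference, cloning the project and starting the pre-built container looks like this (the run script selects the right container tag for your JetPack version automatically):

```shell
# Clone the project and launch the pre-built container
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
docker/run.sh
```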

Thank you very much,
that was my problem and now it is working.

When I run pip3 install pycuda,
it shows this:

What can I do?

I don’t see the part that is actually causing the error. If you want to use PyCUDA, the jetson-inference container already has it installed, so you may want to try that. Or you can look at the steps I followed to install it in the container:


I still failed to install pycuda.
When I did:
ENV PATH="/usr/local/cuda/bin:{PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:{LD_LIBRARY_PATH}"
RUN echo "$PATH" && echo "$LD_LIBRARY_PATH"

Should I do that for both root and my normal user? I think it should be cuda-10.2, not cuda.
Can you list the full commands to try again on the NX?

Thank you a lot!

And another question:
I found some models already inside jetson-inference, and they would save a lot of time on video.
Could I put a YOLOv4 model inside it to do training? Is that possible?

You can put those in your user’s ~/.bashrc file and re-open your terminal. If they aren’t already in your ~/.bashrc, you can add these to that file:

export PATH="/usr/local/cuda/bin:${PATH}"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
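
After re-opening your terminal (or running `source ~/.bashrc`), you can quickly check that the export took effect; the export is repeated here only so the snippet is self-contained:

```shell
# Verify the CUDA path is active in the current shell
export PATH="/usr/local/cuda/bin:${PATH}"
echo "$PATH"    # should now begin with /usr/local/cuda/bin
```

Once the path is set, `which nvcc` should resolve to /usr/local/cuda/bin/nvcc if CUDA is installed.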

/usr/local/cuda is a symbolic link to /usr/local/cuda-10.2, so that difference in the path shouldn’t matter.

YOLOv4 isn’t supported in jetson-inference; instead it uses the SSD-Mobilenet DNN architecture for object detection. However, you can see how to run YOLOv4 with TensorRT here: TensorRT YOLOv4

Thank you very much. SSD-Mobilenet-v2 is OK for me, and I may use it together with YOLO since my project is urgent.
Can you help me build a .py file for SSD-Mobilenet with RTSP as both input and output?
My images are stored like this: home/images/1,2,3…
Please help me finish the .py so I can start training quickly!

Thanks a lot

Hi @wilicyy, here is the tutorial for Training SSD-Mobilenet. In the second step of the tutorial, you train the model on your own data - it needs to be in Pascal VOC format.
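To give a rough idea of that training step, running train_ssd.py on a Pascal VOC dataset looks like this; the dataset path, model directory, and hyperparameter values below are placeholders you would substitute with your own:

```shell
# Run from jetson-inference/python/training/detection/ssd
python3 train_ssd.py --dataset-type=voc --data=data/my_dataset \
    --model-dir=models/my_model --batch-size=4 --epochs=30
```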

The jetson-inference programs can do RTSP input, but not RTSP output. If you require RTSP output, you may want to look into using DeepStream instead. I believe DeepStream can also run YOLO (although not sure of which version(s) of YOLO exactly)
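For RTSP input with SSD-Mobilenet-v2, you can use the existing detectnet sample rather than writing a new .py from scratch; a sketch, where the RTSP URL and credentials are placeholders for your camera and the output goes to the local display (since RTSP output isn’t supported):

```shell
# Run SSD-Mobilenet-v2 detection on an RTSP stream, rendering to the local display
detectnet.py --network=ssd-mobilenet-v2 \
    rtsp://username:password@192.168.1.10:554/stream display://0
```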

Yes, I’ve noted that now. Another colleague is working on DeepStream 5.1 right now; we are embedding the Chinese national standard for cameras instead of RTSP, so we may work on many government projects in the future.
It is a little complicated, but we have to do it for an urgent project.
I have to solve the training things to save his time right now.


It was a disaster after I did that:
(screenshot: 2021-04-15 01-44-36)

What should I do now? Urgently…

echo $PATH

What should I do next to restore it?

Sorry about that, you need to add the $ in your ~/.bashrc, like below:

export PATH="/usr/local/cuda/bin:${PATH}"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
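
If your current shell has already lost its PATH (so even nano and ls aren’t found), you can restore a working PATH for that session by hand and then fix ~/.bashrc; a sketch assuming the standard Ubuntu/L4T directory layout:

```shell
# Restore the default system PATH for the current session only
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Commands are found again now, so the broken lines in ~/.bashrc can be edited:
#   nano ~/.bashrc
echo "$PATH"
```

Opening a fresh terminal after fixing ~/.bashrc gives you a clean environment again.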

But I can’t do anything now and all commands are lost. How can I restore that?