Core dumps when tweaking DeepStream config file (Solved)

Question regarding configuration changes to DeepStream 1.5 on the Jetson TX2:

I was able to run the demo application nvgstiva-app with the provided config file. Next, I would like to make some simple changes to the DeepStream config file (no custom plugins, etc.): keep just the primary inference engine (no secondary inference engines), but use an object detection network for it.

In particular, I'd like to use the DetectNet models from the “Two Days to a Demo: Getting familiar with Jetson TX2” tutorial for detecting dogs, pedestrians, and faces.

I modified the config file to point at the trained DetectNet prototxt and caffemodel, but now I see segfaults and core dumps.

Any suggestions on changing the DeepStream configuration in this simple way?

Thanks

What did you modify?

Can you share the error log?

ChrisDing: Thanks for responding.

Here's the modified, simplified config.txt file; the error log is also provided below. Your comments are welcome. Thanks.


[application]
enable-perf-measurement=0
roi-marking=1
app-mode=0

[source0]
enable=1
#Type - 1=CameraCSi 2=CameraV4L2 3=URI
type=3
uri=file:///home/nvidia/sample_720p.mp4

[sink0]
enable=1
#Type - 1=FakeSink 2=OverlaySink 3=EglSink 4=XvImageSink 5=File
type=3
display-id=0
offset-x=0
offset-y=0
width=0
height=0
sync=1
#overlay-index=1
source-id=0

[osd]
enable=0
osd-mode=2
border-width=3
text-size=10
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial

[primary-gie]
enable=1
net-scale-factor=0.0039215697906911373
model-file=file:///home/nvidia/Model/detect-facenet-120/snapshot_iter_24000.caffemodel
proto-file=file:///home/nvidia/Model/detect-facenet-120/deploy.prototxt
model-cache=file:///home/nvidia/Model/detect-facenet-120/face.caffemodel_b2_fp16.cache

net-stride=16
batch-size=2
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
#num-classes=4
#class-thresholds=0.2;0.2;0.2;0.2
#class-eps=0.1;0.1;0.1;0.1
#class-group-thresholds=3;3;3;3
color-format=0
roi-top-offset=0;0;0;0
roi-bottom-offset=0;0;0;0
detected-min-w=0;0;0;0
detected-min-h=0;0;0;0
detected-max-w=1920;100;1920;1920
detected-max-h=1080;1080;1080;1080
interval=1

#-2 for all; -1 for none;
#To set multiple class id's use format as "1;2;0"
detect-color-class-ids=0;
gie-unique-id=1
parse-func=4
#is-classifier=0
output-bbox-name=Layer11_bbox
output-blob-names=Layer11_cov

#Uncomment below lines for DBSCAN. EPS and minBoxes can be tuned for DBSCAN
#enable-dbscan=1
#class-minBoxes=4;4;4;4
#class-eps=0.7;0.7;0.7;0.7

#Bit 0: Model decryption required
crypto-flags=0

[tests]
file-loop-count=0
#0=send overlaps; 1=do not send overlaps
server-overlap-mode=1
#Fixed to 1 for display color in GUI mode
color-mode=1


Error log when running nvgstiva-app -c config.txt:

------------> -----------------
------------> -----------------
------------> -----------------
------------> -----------------
------------> -----------------

Using winsys: x11
Deploy Name : /home/nvidia/Model/detect-facenet-120/deploy.prototxt
Model Name : /home/nvidia/Model/detect-facenet-120/snapshot_iter_24000.caffemodel
Model Cache Name : /home/nvidia/Model/detect-facenet-120/face.caffemodel_b2_fp16.cache
Batch_Size 2

Error. Could not open model cache file /home/nvidia/Model/detect-facenet-120/face.caffemodel_b2_fp16.cache
Generating new GIE model cache
forced_fp32 has been set to 0(using fp16)
Segmentation fault (core dumped)

What are your detectnet output layers? Are they as below?
output-bbox-name=Layer11_bbox
output-blob-names=Layer11_cov
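
(For reference: one way to double-check the output layer names is to scan deploy.prototxt for blobs that are produced by a layer but never consumed by a later one. The snippet below is only a rough sketch that uses plain regex parsing rather than Caffe itself, and it reuses the model path from the config above; adjust it for your own setup.)

import re

# The path is just the one from the config above; change it for your own model.
PROTO = "/home/nvidia/Model/detect-facenet-120/deploy.prototxt"

text = open(PROTO).read()
tops = set(re.findall(r'top:\s*"([^"]+)"', text))        # blobs the layers produce
bottoms = set(re.findall(r'bottom:\s*"([^"]+)"', text))  # blobs consumed as inputs

# The network outputs are the blobs that are produced but never read back in.
print("Likely output blobs:", sorted(tops - bottoms))

Whatever this prints should match output-bbox-name (the bounding-box blob) and output-blob-names (the coverage blob) in [primary-gie].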

This was indeed the problem with the file. After changing these parameters to the correct output layer names for my model, the crash is gone.

Thanks very much!
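
For anyone else hitting the same segfault: a quick pre-flight check is to confirm that the blob names in config.txt really exist in the prototxt before launching nvgstiva-app. The helper below is hypothetical (it is not part of DeepStream) and assumes the cleaned-up config.txt shown earlier in this thread; the paths are placeholders.

import configparser

CONFIG = "config.txt"  # placeholder paths; point these at your own files
PROTO = "/home/nvidia/Model/detect-facenet-120/deploy.prototxt"

cfg = configparser.ConfigParser(strict=False)
cfg.read(CONFIG)
gie = cfg["primary-gie"]

# Gather every output blob name the primary GIE is configured to read back.
expected = []
for key in ("output-bbox-name", "output-blob-names"):
    if key in gie:
        expected += [n.strip() for n in gie[key].split(";") if n.strip()]

proto_text = open(PROTO).read()
for name in expected:
    status = "OK" if '"%s"' % name in proto_text else "NOT found in deploy.prototxt"
    print("%s: %s" % (name, status))

If any name comes back as not found, fix output-bbox-name / output-blob-names before running the app, since a mismatch here is what produced the segfault in this thread.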