Jetson Nano not supporting INT8

I am running deepstream-app on a Jetson Nano in MAXN mode. However, I am unable to get primary inference on 8 channels of 720p video as advertised, possibly because it is running in FP16 mode rather than INT8. The latest JetPack 4.2.1 (rev 1) lists new beta features such as DLA support for INT8 in TensorRT, but I am unable to run the demo in INT8 precision in real time.
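From what I understand, TensorRT's fast INT8 path needs DP4A instructions, which first appeared in GPU compute capability 6.1, while the Nano's Maxwell GPU is sm_53; sm_53 does support fast FP16, which would explain the fallback. A rough sketch of that capability gate (the function names are mine for illustration, not a TensorRT API):

```python
# Fast INT8 inference (DP4A) requires compute capability >= 6.1;
# fast FP16 is available from compute capability 5.3 onward.
def supports_fast_int8(major, minor):
    """True if the GPU architecture has fast INT8 (DP4A) support."""
    return (major, minor) >= (6, 1)

def supports_fast_fp16(major, minor):
    """True if the GPU architecture has fast FP16 support."""
    return (major, minor) >= (5, 3)

# Jetson Nano: Maxwell, sm_53
print(supports_fast_int8(5, 3))   # False -> TensorRT falls back to FP16
print(supports_fast_fp16(5, 3))   # True  -> an FP16 engine is built instead

# Jetson AGX Xavier: Volta, sm_72
print(supports_fast_int8(7, 2))   # True
```

This matches the "INT8 not supported by platform. Trying FP16 mode." warning in the log below.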

Here is my output when running the demo:

$ deepstream-app -c source30_720p_dec_infer-resnet_tiled_display_int8.txt 
** WARN: <parse_streammux:492>: Unknown key 'config-file' for group [streammux]
Unknown key 'parse-func' for group [property]

Using winsys: x11 
Creating LL OSD context new
0:00:00.902930920 10646     0x2afaf150 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:00.903027006 10646     0x2afaf150 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:00.903254332 10646     0x2afaf150 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:01:56.060009037 10646     0x2afaf150 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/jetNano/deepstream_sdk_on_jetson_partner/deepstream_sdk_on_jetson/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)	FPS 1 (Avg)	FPS 2 (Avg)	FPS 3 (Avg)	
**PERF: 0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	
** INFO: <bus_callback:163>: Pipeline ready

Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
Creating LL OSD context new
** INFO: <bus_callback:149>: Pipeline running

**PERF: 8.36 (8.36)	8.17 (8.17)	8.15 (8.15)	8.34 (8.34)	
**PERF: 7.50 (7.89)	7.50 (7.80)	7.49 (7.79)	7.50 (7.88)	
**PERF: 7.50 (7.75)	7.50 (7.69)	7.50 (7.68)	7.50 (7.75)	
**PERF: 7.50 (7.69)	7.50 (7.64)	7.51 (7.64)	7.50 (7.68)	
**PERF: 7.50 (7.65)	7.50 (7.61)	7.51 (7.61)	7.50 (7.65)	
**PERF: 7.50 (7.62)	7.50 (7.59)	7.50 (7.59)	7.50 (7.62)	
**PERF: 7.50 (7.61)	7.50 (7.58)	7.50 (7.58)	7.50 (7.60)	
**PERF: 7.50 (7.59)	7.50 (7.57)	7.50 (7.57)	7.50 (7.59)	
**PERF: 7.50 (7.58)	7.50 (7.56)	7.50 (7.56)	7.50 (7.58)	
** INFO: <bus_callback:186>: Received EOS. Exiting ...

Quitting
App run successful

My config file is like this:

# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
#(5): nvbuf-mem-handle - Allocate Surface Handle memory, applicable for Jetson
#(6): nvbuf-mem-system - Allocate Surface System memory, allocated using calloc
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=2
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
config-file=config_mux_source30.txt

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
labelfile-path=../../models/Primary_Detector/labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tests]
file-loop=0

Hi xhuv_NV, INT8 support refers to Xavier only.

For the Nano demo config, see ‘source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt’
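As a side note, the precision a GIE runs at is selected by the network-mode key in the nvinfer config file referenced by config-file (0=FP32, 1=INT8, 2=FP16). A minimal fragment pinning FP16 on the Nano (exact key layout may differ slightly between releases):

```ini
[property]
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```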

I had been using the DeepStream EA release, where this config file is not present. The latest GA release contains this file, and it runs fine. Thanks.