For the same frame I get different output using .tlt and .engine

Hi,

I am using FP32.

[property]
gpu-id=0
net-scale-factor=1

model-engine-file=classify.engine
batch-size=2
# 0=FP32 and 1=INT8 mode
network-mode=0
process-mode=2
model-color-format=0
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
output-blob-names=predictions/Softmax
#offsets = 104.0;177.0;123.0
## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=1
# Enable tensor metadata output
output-tensor-meta=1
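
A note on preprocessing: nvinfer computes net-scale-factor * (pixel - offset) per channel, so with offsets commented out and net-scale-factor=1 the network receives raw 0-255 values. The Python sketch below compares that against the ImageNet mean subtraction used in the TLT classification sample configs; the offset values are taken from those samples and may not match this particular model.

import numpy as np

# nvinfer preprocessing: y = net-scale-factor * (pixel - offset), per channel.
def nvinfer_preprocess(pixels, net_scale_factor, offsets):
    return net_scale_factor * (pixels - np.asarray(offsets, dtype=np.float32))

# A dummy 224x224 crop in HWC layout, just to compare the two settings.
crop = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)

# As configured above (net-scale-factor=1, offsets commented out):
as_configured = nvinfer_preprocess(crop, 1.0, [0.0, 0.0, 0.0])

# As in the TLT classification sample configs (ImageNet channel means):
as_in_samples = nvinfer_preprocess(crop, 1.0, [123.67, 116.28, 103.53])

# Mean absolute difference is ~114, i.e. the network inputs are far apart.
print(np.abs(as_configured - as_in_samples).mean())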

DeepStream log

With tracker
Now playing:video.h264
Creating LL OSD context new
Deserialize yoloLayerV3 plugin: yolo_17
Deserialize yoloLayerV3 plugin: yolo_24
Running...
Creating LL OSD context new

There is nothing useful to debug in the DeepStream log.

Can you paste your command line too? Thanks.

Can you refer to the “Integrating a Classification model” part of the TLT user guide https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#deepstream_deployment ?

Your config file is missing “labelfile-path”, “input-dims”, and “uff-input-blob-name”.
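
For example, the classification sample config in that guide contains entries along these lines (the paths, key, and dims below are placeholders to adapt, not your actual values):

labelfile-path=labels.txt
tlt-encoded-model=final_model.etlt
tlt-model-key=<your key>
input-dims=3;224;224;0
uff-input-blob-name=input_1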

Hi,

I am using deepstream-infer-tensor-meta-app, so I specify the labels inside the .cpp file. Even after adding “input-dims” and “uff-input-blob-name”, my output remains the same.

my command line

./deepstream-infer-tensor-meta-app video.h264

Can you use deepstream_test1_app to check too? Thanks.

I already tried with the deepstream-test4 app; I get the same results.

How many frames are in your h264 file? What’s the accuracy rate when you run DeepStream? Do you mean you generated the h264 file from the same images as used in tlt-infer?

Hi sathiez,

Have you managed to get the issue resolved? Any results you can share?

I encountered the same problem.

@neos2008,
Could you please elaborate on your problem?

I followed the example notebook to train the resnet_10 two-class classifier. When I run inference with tlt-infer, I get accuracy around 92%. But after converting it to an .etlt and then a TRT engine with tlt-converter and using it in DeepStream, I get different output for the same frame, which classifies incorrectly. I also tried using the .etlt file directly in DeepStream, but the model can’t be converted to an engine.
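
For reference, a sketch of a typical tlt-converter invocation for a classification model, assuming the standard flags (-k key, -o output nodes, -d input dims, -e engine path); the key and file names below are placeholders:

./tlt-converter -k <your key> \
                -o predictions/Softmax \
                -d 3,224,224 \
                -t fp32 \
                -m 4 \
                -e final_model.engine \
                final_model.etlt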

The output error log:

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
Device Number: 0
Device name: GeForce RTX 2080 Ti
Device Version 7.5
Device Supports Optical Flow Functionality
0:00:02.658735384 11321 0x55f8fcb61100 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]:useEngineFile(): Failed to read from model engine file
0:00:02.658763326 11321 0x55f8fcb61100 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
0:00:02.980378146 11321 0x55f8fcb61100 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]:log(): UffParser: Output error: Output predictions/Softmax #output node name for classification not found
NvDsInferCudaEngineGetFromTltModel: Failed to parse UFF model
0:00:02.983341364 11321 0x55f8fcb61100 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]:generateTRTModel(): Failed to create network using custom network creation function
0:00:02.983369856 11321 0x55f8fcb61100 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]:initialize(): Failed to create engine from model files
0:00:02.983433550 11321 0x55f8fcb61100 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:02.983444342 11321 0x55f8fcb61100 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /deepstream/deepstream_sdk_v4.0.1_x86_64/sources/objectDetector_DetectNet_v2/resnet10_cls/config_cls_file.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
** ERROR: main:1294: Failed to set pipeline to PAUSED
Quitting

model config:

[property]
gpu-id=0
# preprocessing parameters: These are the same for all classification models generated by TLT.
net-scale-factor=1.0
offsets=123.67;116.28;103.53
model-color-format=1
batch-size=4
# Model specific paths. These need to be updated for every classification model.
int8-calib-file=resnet10_class/final_model_int8_cache.bin
labelfile-path=resnet10_class/labels.txt
tlt-encoded-model=resnet10_class/final_model.etlt
tlt-model-key=######### hidden
input-dims=3;224;224;0 # where c = number of channels, h = height of the model input, w = width of model input, 0: implies CHW format.
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax #output node name for classification
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
process-mode=2
interval=0
network-type=1 # defines that the model is a classifier.
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
classifier-threshold=0.2
is-classifier=1
classifier-async-mode=1
gie-mode=2

Please modify below

output-blob-names=predictions/Softmax #output node name for classification

to

output-blob-names=predictions/Softmax

and retry. The config parser reads everything after “=” as the value, so the trailing comment becomes part of the blob name, which is exactly the node name reported in the UffParser error above.

After I modified the model config file, the engine was successfully generated by DeepStream. But the result is still wrong.

What’s your latest problem? Can you elaborate?

I followed the example notebook to train the resnet_10 two-class classifier. When I run inference with tlt-infer, I get accuracy around 92%. But after converting it to .etlt and then an engine and using it in DeepStream, I get different output for the same frame, which classifies incorrectly. I also tried using the .etlt file in DeepStream and still got the same incorrect result.

@neos2008
Please paste your

  1. training spec
  2. training log
  3. resnet10_class/labels.txt

Training spec

model_config {
  arch: "resnet",
  n_layers: 10
  # Setting these parameters to true to match the template downloaded from NGC.
  use_bias: true
  use_batch_norm: true
  all_projections: true
  freeze_blocks: 0
  freeze_blocks: 1
  input_image_size: "3,224,224"
}
train_config {
  train_dataset_path: "/data/jiajia_warehouse_dataset/jt_project/split/train"
  val_dataset_path: "/data/jiajia_warehouse_dataset/jt_project/split/val"
  pretrained_model_path: "/data/tlt-streamanalytics/pretrain_model/pretrained_resnet10/resnet10.hdf5"
  optimizer: "sgd"
  batch_size_per_gpu: 64
  n_epochs: 80
  n_workers: 16

  # regularizer
  reg_config {
    type: "L2"
    scope: "Conv2D,Dense"
    weight_decay: 0.00005
  }

  # learning_rate
  lr_config {
    scheduler: "step"
    learning_rate: 0.006
    #soft_start: 0.056
    #annealing_points: "0.3, 0.6, 0.8"
    #annealing_divider: 10
    step_size: 10
    gamma: 0.1
  }
}
eval_config {
  eval_dataset_path: "/data/jiajia_warehouse_dataset/jt_project/split/test"
  model_path: "/data/tlt-streamanalytics/config/jt_resnet10_cls/output/weights/resnet_080.tlt"
  top_k: 3
  batch_size: 256
  n_workers: 8
}

Training log

Using TensorFlow backend.
2020-06-16 17:18:52.391924: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-16 17:18:55.016624: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x616ad40 executing computations on platform CUDA. Devices:
2020-06-16 17:18:55.016666: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-06-16 17:18:55.016676: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (1): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-06-16 17:18:55.016683: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (2): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-06-16 17:18:55.042645: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2500020000 Hz
2020-06-16 17:18:55.047296: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x6360980 executing computations on platform Host. Devices:
2020-06-16 17:18:55.047335: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2020-06-16 17:18:55.047632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:05:00.0
totalMemory: 10.76GiB freeMemory: 10.60GiB
2020-06-16 17:18:55.047666: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2020-06-16 17:18:55.053634: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-16 17:18:55.053664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2020-06-16 17:18:55.053675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2020-06-16 17:18:55.053809: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10312 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
2020-06-16 17:18:55,063 [INFO] iva.makenet.scripts.train: Loading experiment spec at /data/tlt-streamanalytics/config/jt_resnet10_cls/specs/classification_spec.cfg.
2020-06-16 17:18:55,065 [INFO] iva.makenet.spec_handling.spec_loader: Merging specification from /data/tlt-streamanalytics/config/jt_resnet10_cls/specs/classification_spec.cfg
2020-06-16 17:18:55,197 [INFO] iva.makenet.scripts.train: Processing dataset (train): /data/jiajia_warehouse_dataset/jt_project/split/train
2020-06-16 17:18:55,318 [INFO] iva.makenet.scripts.train: Processing dataset (validation): /data/jiajia_warehouse_dataset/jt_project/split/val
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2020-06-16 17:18:55,329 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Found 2481 images belonging to 2 classes.
Found 355 images belonging to 2 classes.
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 3, 224, 224)  0                                            
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 64, 112, 112) 9472        input_1[0][0]                    
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 64, 112, 112) 256         conv1[0][0]                      
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 64, 112, 112) 0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
block_1a_conv_1 (Conv2D)        (None, 64, 56, 56)   36928       activation_1[0][0]               
__________________________________________________________________________________________________
block_1a_bn_1 (BatchNormalizati (None, 64, 56, 56)   256         block_1a_conv_1[0][0]            
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 64, 56, 56)   0           block_1a_bn_1[0][0]              
__________________________________________________________________________________________________
block_1a_conv_2 (Conv2D)        (None, 64, 56, 56)   36928       activation_2[0][0]               
__________________________________________________________________________________________________
block_1a_conv_shortcut (Conv2D) (None, 64, 56, 56)   4160        activation_1[0][0]               
__________________________________________________________________________________________________
block_1a_bn_2 (BatchNormalizati (None, 64, 56, 56)   256         block_1a_conv_2[0][0]            
__________________________________________________________________________________________________
block_1a_bn_shortcut (BatchNorm (None, 64, 56, 56)   256         block_1a_conv_shortcut[0][0]     
__________________________________________________________________________________________________
add_1 (Add)                     (None, 64, 56, 56)   0           block_1a_bn_2[0][0]              
                                                                 block_1a_bn_shortcut[0][0]       
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 64, 56, 56)   0           add_1[0][0]                      
__________________________________________________________________________________________________
block_2a_conv_1 (Conv2D)        (None, 128, 28, 28)  73856       activation_3[0][0]               
__________________________________________________________________________________________________
block_2a_bn_1 (BatchNormalizati (None, 128, 28, 28)  512         block_2a_conv_1[0][0]            
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 128, 28, 28)  0           block_2a_bn_1[0][0]              
__________________________________________________________________________________________________
block_2a_conv_2 (Conv2D)        (None, 128, 28, 28)  147584      activation_4[0][0]               
__________________________________________________________________________________________________
block_2a_conv_shortcut (Conv2D) (None, 128, 28, 28)  8320        activation_3[0][0]               
__________________________________________________________________________________________________
block_2a_bn_2 (BatchNormalizati (None, 128, 28, 28)  512         block_2a_conv_2[0][0]            
__________________________________________________________________________________________________
block_2a_bn_shortcut (BatchNorm (None, 128, 28, 28)  512         block_2a_conv_shortcut[0][0]     
__________________________________________________________________________________________________
add_2 (Add)                     (None, 128, 28, 28)  0           block_2a_bn_2[0][0]              
                                                                 block_2a_bn_shortcut[0][0]       
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 128, 28, 28)  0           add_2[0][0]                      
__________________________________________________________________________________________________
block_3a_conv_1 (Conv2D)        (None, 256, 14, 14)  295168      activation_5[0][0]               
__________________________________________________________________________________________________
block_3a_bn_1 (BatchNormalizati (None, 256, 14, 14)  1024        block_3a_conv_1[0][0]            
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 256, 14, 14)  0           block_3a_bn_1[0][0]              
__________________________________________________________________________________________________
block_3a_conv_2 (Conv2D)        (None, 256, 14, 14)  590080      activation_6[0][0]               
__________________________________________________________________________________________________
block_3a_conv_shortcut (Conv2D) (None, 256, 14, 14)  33024       activation_5[0][0]               
__________________________________________________________________________________________________
block_3a_bn_2 (BatchNormalizati (None, 256, 14, 14)  1024        block_3a_conv_2[0][0]            
__________________________________________________________________________________________________
block_3a_bn_shortcut (BatchNorm (None, 256, 14, 14)  1024        block_3a_conv_shortcut[0][0]     
__________________________________________________________________________________________________
add_3 (Add)                     (None, 256, 14, 14)  0           block_3a_bn_2[0][0]              
                                                                 block_3a_bn_shortcut[0][0]       
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 256, 14, 14)  0           add_3[0][0]                      
__________________________________________________________________________________________________
block_4a_conv_1 (Conv2D)        (None, 512, 14, 14)  1180160     activation_7[0][0]               
__________________________________________________________________________________________________
block_4a_bn_1 (BatchNormalizati (None, 512, 14, 14)  2048        block_4a_conv_1[0][0]            
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 512, 14, 14)  0           block_4a_bn_1[0][0]              
__________________________________________________________________________________________________
block_4a_conv_2 (Conv2D)        (None, 512, 14, 14)  2359808     activation_8[0][0]               
__________________________________________________________________________________________________
block_4a_conv_shortcut (Conv2D) (None, 512, 14, 14)  131584      activation_7[0][0]               
__________________________________________________________________________________________________
block_4a_bn_2 (BatchNormalizati (None, 512, 14, 14)  2048        block_4a_conv_2[0][0]            
__________________________________________________________________________________________________
block_4a_bn_shortcut (BatchNorm (None, 512, 14, 14)  2048        block_4a_conv_shortcut[0][0]     
__________________________________________________________________________________________________
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2020-06-16 17:19:02,131 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2020-06-16 17:19:08.189790: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
__________________________________________________________________________________________________
add_4 (Add)                     (None, 512, 14, 14)  0           block_4a_bn_2[0][0]              
                                                                 block_4a_bn_shortcut[0][0]       
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 512, 14, 14)  0           add_4[0][0]                      
__________________________________________________________________________________________________
avg_pool (AveragePooling2D)     (None, 512, 1, 1)    0           activation_9[0][0]               
__________________________________________________________________________________________________
flatten (Flatten)               (None, 512)          0           avg_pool[0][0]                   
__________________________________________________________________________________________________
predictions (Dense)             (None, 2)            1026        flatten[0][0]                    
==================================================================================================
Total params: 4,919,874
Trainable params: 4,826,498
Non-trainable params: 93,376
__________________________________________________________________________________________________
Epoch 1/80
39/39 [==============================] - 12s 300ms/step - loss: 0.7114 - acc: 0.7462 - val_loss: 0.6007 - val_acc: 0.8366
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/horovod/tensorflow/__init__.py:91: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
2020-06-16 17:19:17,619 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/horovod/tensorflow/__init__.py:91: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
Epoch 2/80
39/39 [==============================] - 8s 202ms/step - loss: 0.4755 - acc: 0.8573 - val_loss: 0.5061 - val_acc: 0.8310
Epoch 3/80
39/39 [==============================] - 7s 192ms/step - loss: 0.4594 - acc: 0.8700 - val_loss: 0.4980 - val_acc: 0.8423
Epoch 4/80
39/39 [==============================] - 8s 201ms/step - loss: 0.4227 - acc: 0.8844 - val_loss: 0.5126 - val_acc: 0.8366
Epoch 5/80
39/39 [==============================] - 8s 200ms/step - loss: 0.4295 - acc: 0.8810 - val_loss: 0.4705 - val_acc: 0.8535
Epoch 6/80
39/39 [==============================] - 8s 199ms/step - loss: 0.4146 - acc: 0.8872 - val_loss: 0.4840 - val_acc: 0.8423
Epoch 7/80
39/39 [==============================] - 8s 196ms/step - loss: 0.4080 - acc: 0.8911 - val_loss: 0.5009 - val_acc: 0.8479
Epoch 8/80
39/39 [==============================] - 8s 195ms/step - loss: 0.4019 - acc: 0.8983 - val_loss: 0.4733 - val_acc: 0.8479
Epoch 9/80
39/39 [==============================] - 7s 186ms/step - loss: 0.3960 - acc: 0.8988 - val_loss: 0.5027 - val_acc: 0.8479
Epoch 10/80
39/39 [==============================] - 7s 187ms/step - loss: 0.3953 - acc: 0.8877 - val_loss: 0.4883 - val_acc: 0.8451
Epoch 11/80
39/39 [==============================] - 7s 187ms/step - loss: 0.4002 - acc: 0.8963 - val_loss: 0.5026 - val_acc: 0.8423
Epoch 12/80
39/39 [==============================] - 8s 197ms/step - loss: 0.3911 - acc: 0.9054 - val_loss: 0.4949 - val_acc: 0.8394
Epoch 13/80
39/39 [==============================] - 7s 192ms/step - loss: 0.3851 - acc: 0.8972 - val_loss: 0.4870 - val_acc: 0.8423
Epoch 14/80
39/39 [==============================] - 7s 182ms/step - loss: 0.3968 - acc: 0.8918 - val_loss: 0.4820 - val_acc: 0.8451
Epoch 15/80
39/39 [==============================] - 7s 178ms/step - loss: 0.3877 - acc: 0.8994 - val_loss: 0.4957 - val_acc: 0.8423
Epoch 16/80
39/39 [==============================] - 7s 189ms/step - loss: 0.3968 - acc: 0.8959 - val_loss: 0.4913 - val_acc: 0.8394
Epoch 17/80
39/39 [==============================] - 7s 190ms/step - loss: 0.3864 - acc: 0.8969 - val_loss: 0.4861 - val_acc: 0.8451
Epoch 18/80
39/39 [==============================] - 8s 199ms/step - loss: 0.3972 - acc: 0.8904 - val_loss: 0.4856 - val_acc: 0.8451
Epoch 19/80
39/39 [==============================] - 8s 198ms/step - loss: 0.3849 - acc: 0.8969 - val_loss: 0.4842 - val_acc: 0.8451
Epoch 20/80
39/39 [==============================] - 8s 198ms/step - loss: 0.3945 - acc: 0.8876 - val_loss: 0.4912 - val_acc: 0.8394
Epoch 21/80
39/39 [==============================] - 8s 202ms/step - loss: 0.3933 - acc: 0.8919 - val_loss: 0.4834 - val_acc: 0.8423
Epoch 22/80
39/39 [==============================] - 8s 201ms/step - loss: 0.3873 - acc: 0.9014 - val_loss: 0.4788 - val_acc: 0.8451
Epoch 23/80
39/39 [==============================] - 8s 202ms/step - loss: 0.3844 - acc: 0.9009 - val_loss: 0.4860 - val_acc: 0.8423
Epoch 24/80
39/39 [==============================] - 8s 194ms/step - loss: 0.3861 - acc: 0.8931 - val_loss: 0.4857 - val_acc: 0.8451
Epoch 25/80
39/39 [==============================] - 8s 195ms/step - loss: 0.3906 - acc: 0.8941 - val_loss: 0.4871 - val_acc: 0.8394
Epoch 26/80
39/39 [==============================] - 7s 190ms/step - loss: 0.3944 - acc: 0.8945 - val_loss: 0.4848 - val_acc: 0.8451
Epoch 27/80
39/39 [==============================] - 8s 197ms/step - loss: 0.3846 - acc: 0.9027 - val_loss: 0.4769 - val_acc: 0.8451
Epoch 28/80
39/39 [==============================] - 8s 192ms/step - loss: 0.3831 - acc: 0.8997 - val_loss: 0.4939 - val_acc: 0.8423
Epoch 29/80
39/39 [==============================] - 7s 192ms/step - loss: 0.3869 - acc: 0.8967 - val_loss: 0.4870 - val_acc: 0.8451
Epoch 30/80
39/39 [==============================] - 7s 188ms/step - loss: 0.3878 - acc: 0.8979 - val_loss: 0.4837 - val_acc: 0.8451
Epoch 31/80
39/39 [==============================] - 7s 186ms/step - loss: 0.3920 - acc: 0.8922 - val_loss: 0.4781 - val_acc: 0.8451
Epoch 32/80
39/39 [==============================] - 7s 185ms/step - loss: 0.3896 - acc: 0.8900 - val_loss: 0.4855 - val_acc: 0.8423
Epoch 33/80
39/39 [==============================] - 8s 194ms/step - loss: 0.3896 - acc: 0.9013 - val_loss: 0.4810 - val_acc: 0.8451
Epoch 34/80
39/39 [==============================] - 7s 192ms/step - loss: 0.3902 - acc: 0.8927 - val_loss: 0.4876 - val_acc: 0.8423
Epoch 35/80
39/39 [==============================] - 8s 194ms/step - loss: 0.3930 - acc: 0.8943 - val_loss: 0.4930 - val_acc: 0.8451
Epoch 36/80
39/39 [==============================] - 8s 194ms/step - loss: 0.3874 - acc: 0.9020 - val_loss: 0.4877 - val_acc: 0.8394
Epoch 37/80
39/39 [==============================] - 8s 198ms/step - loss: 0.3921 - acc: 0.8957 - val_loss: 0.4803 - val_acc: 0.8451
Epoch 38/80
39/39 [==============================] - 7s 189ms/step - loss: 0.3947 - acc: 0.8920 - val_loss: 0.4840 - val_acc: 0.8451
Epoch 39/80
39/39 [==============================] - 7s 184ms/step - loss: 0.3963 - acc: 0.8960 - val_loss: 0.4905 - val_acc: 0.8394
Epoch 40/80
39/39 [==============================] - 7s 176ms/step - loss: 0.3853 - acc: 0.9009 - val_loss: 0.4897 - val_acc: 0.8423
Epoch 41/80
39/39 [==============================] - 6s 150ms/step - loss: 0.3919 - acc: 0.8959 - val_loss: 0.4897 - val_acc: 0.8394
Epoch 42/80
39/39 [==============================] - 8s 195ms/step - loss: 0.3930 - acc: 0.8894 - val_loss: 0.4906 - val_acc: 0.8394
Epoch 43/80
39/39 [==============================] - 8s 195ms/step - loss: 0.3926 - acc: 0.8947 - val_loss: 0.4787 - val_acc: 0.8451
Epoch 44/80
39/39 [==============================] - 8s 195ms/step - loss: 0.3968 - acc: 0.8908 - val_loss: 0.4873 - val_acc: 0.8451
Epoch 45/80
39/39 [==============================] - 7s 190ms/step - loss: 0.3907 - acc: 0.8900 - val_loss: 0.4878 - val_acc: 0.8451
Epoch 46/80
39/39 [==============================] - 7s 186ms/step - loss: 0.3974 - acc: 0.8986 - val_loss: 0.4797 - val_acc: 0.8451
Epoch 47/80
39/39 [==============================] - 7s 188ms/step - loss: 0.3960 - acc: 0.8877 - val_loss: 0.4926 - val_acc: 0.8423
Epoch 48/80
39/39 [==============================] - 7s 188ms/step - loss: 0.3854 - acc: 0.8993 - val_loss: 0.4903 - val_acc: 0.8423
Epoch 49/80
39/39 [==============================] - 7s 189ms/step - loss: 0.3883 - acc: 0.8976 - val_loss: 0.4825 - val_acc: 0.8451
Epoch 50/80
39/39 [==============================] - 7s 186ms/step - loss: 0.3848 - acc: 0.9090 - val_loss: 0.4873 - val_acc: 0.8394
Epoch 51/80
39/39 [==============================] - 8s 194ms/step - loss: 0.3866 - acc: 0.8984 - val_loss: 0.4934 - val_acc: 0.8423
Epoch 52/80
39/39 [==============================] - 7s 189ms/step - loss: 0.3929 - acc: 0.8969 - val_loss: 0.4906 - val_acc: 0.8394
Epoch 53/80
39/39 [==============================] - 7s 192ms/step - loss: 0.3924 - acc: 0.8913 - val_loss: 0.4883 - val_acc: 0.8394
Epoch 54/80
39/39 [==============================] - 7s 189ms/step - loss: 0.3888 - acc: 0.9011 - val_loss: 0.4947 - val_acc: 0.8451
Epoch 55/80
39/39 [==============================] - 8s 194ms/step - loss: 0.3938 - acc: 0.9030 - val_loss: 0.4885 - val_acc: 0.8423
Epoch 56/80
39/39 [==============================] - 8s 196ms/step - loss: 0.3858 - acc: 0.9003 - val_loss: 0.4866 - val_acc: 0.8451
Epoch 57/80
39/39 [==============================] - 8s 193ms/step - loss: 0.3917 - acc: 0.9005 - val_loss: 0.4904 - val_acc: 0.8423
Epoch 58/80
39/39 [==============================] - 7s 187ms/step - loss: 0.3822 - acc: 0.8985 - val_loss: 0.4835 - val_acc: 0.8423
Epoch 59/80
39/39 [==============================] - 8s 199ms/step - loss: 0.3852 - acc: 0.9022 - val_loss: 0.4861 - val_acc: 0.8423
Epoch 60/80
39/39 [==============================] - 8s 196ms/step - loss: 0.3916 - acc: 0.8926 - val_loss: 0.4782 - val_acc: 0.8451
Epoch 61/80
39/39 [==============================] - 8s 193ms/step - loss: 0.3899 - acc: 0.9013 - val_loss: 0.4754 - val_acc: 0.8451
Epoch 62/80
39/39 [==============================] - 7s 186ms/step - loss: 0.4002 - acc: 0.8879 - val_loss: 0.4911 - val_acc: 0.8394
Epoch 63/80
39/39 [==============================] - 7s 183ms/step - loss: 0.3851 - acc: 0.8980 - val_loss: 0.4844 - val_acc: 0.8423
Epoch 64/80
39/39 [==============================] - 7s 191ms/step - loss: 0.3951 - acc: 0.8984 - val_loss: 0.4880 - val_acc: 0.8423
Epoch 65/80
39/39 [==============================] - 8s 195ms/step - loss: 0.3876 - acc: 0.9012 - val_loss: 0.4876 - val_acc: 0.8423
Epoch 66/80
39/39 [==============================] - 8s 197ms/step - loss: 0.3893 - acc: 0.8964 - val_loss: 0.4903 - val_acc: 0.8423
Epoch 67/80
39/39 [==============================] - 7s 178ms/step - loss: 0.3933 - acc: 0.8945 - val_loss: 0.4848 - val_acc: 0.8423
Epoch 68/80
39/39 [==============================] - 5s 137ms/step - loss: 0.3913 - acc: 0.8933 - val_loss: 0.4830 - val_acc: 0.8451
Epoch 69/80
39/39 [==============================] - 5s 136ms/step - loss: 0.3833 - acc: 0.8921 - val_loss: 0.4860 - val_acc: 0.8423
Epoch 70/80
39/39 [==============================] - 6s 144ms/step - loss: 0.3895 - acc: 0.8967 - val_loss: 0.4857 - val_acc: 0.8423
Epoch 71/80
39/39 [==============================] - 5s 137ms/step - loss: 0.3945 - acc: 0.8906 - val_loss: 0.4910 - val_acc: 0.8423
Epoch 72/80
39/39 [==============================] - 5s 140ms/step - loss: 0.3840 - acc: 0.9029 - val_loss: 0.4822 - val_acc: 0.8451
Epoch 73/80
39/39 [==============================] - 6s 154ms/step - loss: 0.3908 - acc: 0.8995 - val_loss: 0.4809 - val_acc: 0.8451
Epoch 74/80
39/39 [==============================] - 7s 174ms/step - loss: 0.3971 - acc: 0.8932 - val_loss: 0.4855 - val_acc: 0.8423
Epoch 75/80
39/39 [==============================] - 5s 140ms/step - loss: 0.3947 - acc: 0.8951 - val_loss: 0.4864 - val_acc: 0.8394
Epoch 76/80
39/39 [==============================] - 6s 148ms/step - loss: 0.3899 - acc: 0.8963 - val_loss: 0.4879 - val_acc: 0.8423
Epoch 77/80
39/39 [==============================] - 6s 143ms/step - loss: 0.3915 - acc: 0.9019 - val_loss: 0.4866 - val_acc: 0.8423
Epoch 78/80
39/39 [==============================] - 5s 140ms/step - loss: 0.3886 - acc: 0.8947 - val_loss: 0.4810 - val_acc: 0.8451
Epoch 79/80
39/39 [==============================] - 6s 156ms/step - loss: 0.3946 - acc: 0.8977 - val_loss: 0.4832 - val_acc: 0.8423
Epoch 80/80
39/39 [==============================] - 5s 139ms/step - loss: 0.3949 - acc: 0.8934 - val_loss: 0.4809 - val_acc: 0.8451
2020-06-16 17:30:05,392 [INFO] iva.makenet.scripts.train: Total Val Loss: 0.486088007689
2020-06-16 17:30:05,393 [INFO] iva.makenet.scripts.train: Total Val accuracy: 0.842253506184
2020-06-16 17:30:05,393 [INFO] iva.makenet.scripts.train: Training finished successfully.

resnet10_class/labels.txt

empty
full

How about the accuracy when running inference with DeepStream?
You mentioned an “incorrect result”; does it mean all frames are wrong?

Hi Morganh. All frames are wrong.

How about changing

empty
full

to

full
empty
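
If the order is the issue, here is one quick way to confirm it, assuming the TLT classification trainer follows the Keras flow_from_directory convention of assigning class indices in alphabetical folder order; you can print the expected order from the training directory (path taken from the spec above):

import os

# One subfolder per class, as pointed to by train_dataset_path in the spec.
train_dir = "/data/jiajia_warehouse_dataset/jt_project/split/train"

# Keras-style loaders assign class indices alphabetically by folder name,
# so the Softmax outputs should follow this order.
for index, name in enumerate(sorted(os.listdir(train_dir))):
    print(index, name)

labels.txt should then list the class names in exactly that index order.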