The performance of YoloV3 in test3 doesn't match the performance of deepstream-app

Hello,
I am running a custom YoloV3 via DeepStream and it runs OK.
My system configuration: Ubuntu 18.04 + CUDA 10.1 + DeepStream SDK 4.0 + RTX 2080 Ti.
I am using this command: deepstream-app -c deepstream_app_config_yoloV3.txt in the directory /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo. I am using 8 sources (local files, not RTSP) as a test, and the result is good: the FPS for each source is 24 and GPU utilization can reach 90%.
But when I use the YoloV3 configuration in test3 (that is, I run test3 with YoloV3, replacing the config file and model), the FPS for each source is only 15 and GPU utilization is 60%~70%. When I add more sources, the FPS drops further and GPU utilization stays at 60%~70%.
The performance of YoloV3 in test3 doesn't match the performance of deepstream-app.
Can you give me some useful advice? Thank you very much.

Hi xmyzy123,
Could you share more information about the pgie configuration for YoloV3 in test3?
With deepstream_app_config_yoloV3.txt, it runs at INT8 precision by default; how about YoloV3 in test3? Did you also copy the INT8 calibration file into test3? Without the INT8 calibration, it should fall back to FP16 or FP32 mode.

Hi @mchi,
This is the pgie configuration for YoloV3 in test3:

```
[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3.cfg
model-file=yolov3_10000.weights
model-engine-file=model_b8_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt5.1
#0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=5
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

Yes, I copied the INT8 calibration file into test3.
I copied the .cfg, .weights, yolov3-calibration.table.trt5.1, and the directory nvdsinfer_custom_impl_Yolo where libnvdsinfer_custom_impl_Yolo.so is generated. I checked it carefully many times.

Hi @xmyyzy123,
Since both generated model_b8_int8.engine, could you use the TensorRT tool trtexec to profile it? The profile command is:

$ trtexec --batch=2 --useSpinWait --loadEngine=yolo_resnet18.etlt_b2_gpu0_fp16.engine --verbose --plugins=$PATH/libnvdsinfer_custom_impl_Yolo.so

Hi @mchi,
I ran this command; this is the result:

/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3$ trtexec --batch=2 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
&&&& RUNNING TensorRT.trtexec # trtexec --batch=2 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[03/30/2020-22:12:19] [I] === Model Options ===
[03/30/2020-22:12:19] [I] Format: *
[03/30/2020-22:12:19] [I] Model:
[03/30/2020-22:12:19] [I] Output:
[03/30/2020-22:12:19] [I] === Build Options ===
[03/30/2020-22:12:19] [I] Max batch: 2
[03/30/2020-22:12:19] [I] Workspace: 16 MB
[03/30/2020-22:12:19] [I] minTiming: 1
[03/30/2020-22:12:19] [I] avgTiming: 8
[03/30/2020-22:12:19] [I] Precision: FP32
[03/30/2020-22:12:19] [I] Calibration:
[03/30/2020-22:12:19] [I] Safe mode: Disabled
[03/30/2020-22:12:19] [I] Save engine:
[03/30/2020-22:12:19] [I] Load engine: model_b8_int8.engine
[03/30/2020-22:12:19] [I] Inputs format: fp32:CHW
[03/30/2020-22:12:19] [I] Outputs format: fp32:CHW
[03/30/2020-22:12:19] [I] Input build shapes: model
[03/30/2020-22:12:19] [I] === System Options ===
[03/30/2020-22:12:19] [I] Device: 0
[03/30/2020-22:12:19] [I] DLACore:
[03/30/2020-22:12:19] [I] Plugins: /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[03/30/2020-22:12:19] [I] === Inference Options ===
[03/30/2020-22:12:19] [I] Batch: 2
[03/30/2020-22:12:19] [I] Iterations: 10 (200 ms warm up)
[03/30/2020-22:12:19] [I] Duration: 10s
[03/30/2020-22:12:19] [I] Sleep time: 0ms
[03/30/2020-22:12:19] [I] Streams: 1
[03/30/2020-22:12:19] [I] Spin-wait: Enabled
[03/30/2020-22:12:19] [I] Multithreading: Enabled
[03/30/2020-22:12:19] [I] CUDA Graph: Disabled
[03/30/2020-22:12:19] [I] Skip inference: Disabled
[03/30/2020-22:12:19] [I] Input inference shapes: model
[03/30/2020-22:12:19] [I] === Reporting Options ===
[03/30/2020-22:12:19] [I] Verbose: Enabled
[03/30/2020-22:12:19] [I] Averages: 10 inferences
[03/30/2020-22:12:19] [I] Percentile: 99
[03/30/2020-22:12:19] [I] Dump output: Disabled
[03/30/2020-22:12:19] [I] Profile: Disabled
[03/30/2020-22:12:19] [I] Export timing to JSON file:
[03/30/2020-22:12:19] [I] Export profile to JSON file:
[03/30/2020-22:12:19] [I]
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[03/30/2020-22:12:19] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT
[03/30/2020-22:12:19] [I] Loading supplied plugin library: /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107
[03/30/2020-22:12:22] [V] [TRT] Deserialize required 2635305 microseconds.
[03/30/2020-22:12:22] [I] Average over 10 runs is 11.4627 ms (host walltime is 11.4718 ms, 99% percentile time is 11.7373).
[03/30/2020-22:12:22] [I] Average over 10 runs is 11.4114 ms (host walltime is 11.4192 ms, 99% percentile time is 11.4443).
[03/30/2020-22:12:22] [I] Average over 10 runs is 10.2514 ms (host walltime is 10.2586 ms, 99% percentile time is 11.3842).
[03/30/2020-22:12:22] [I] Average over 10 runs is 9.51382 ms (host walltime is 9.52058 ms, 99% percentile time is 9.52349).
[03/30/2020-22:12:23] [I] Average over 10 runs is 9.49303 ms (host walltime is 9.4999 ms, 99% percentile time is 9.51546).
[03/30/2020-22:12:23] [I] Average over 10 runs is 9.54339 ms (host walltime is 9.5521 ms, 99% percentile time is 9.56621).
[03/30/2020-22:12:23] [I] Average over 10 runs is 9.52087 ms (host walltime is 9.52778 ms, 99% percentile time is 9.54106).
[03/30/2020-22:12:23] [I] Average over 10 runs is 9.54488 ms (host walltime is 9.55209 ms, 99% percentile time is 9.57946).
[03/30/2020-22:12:23] [I] Average over 10 runs is 9.55783 ms (host walltime is 9.56574 ms, 99% percentile time is 9.58851).
[03/30/2020-22:12:23] [I] Average over 10 runs is 9.53126 ms (host walltime is 9.53838 ms, 99% percentile time is 9.57037).
&&&& PASSED TensorRT.trtexec # trtexec --batch=2 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_cus

Sorry! Need to change `--batch=2` to `--batch=8`.
How about the profiling result of the TRT engine generated under /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo?

Thanks!

Hi @mchi,
No problem.
This is the result of test3:

/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3$ trtexec --batch=8 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
&&&& RUNNING TensorRT.trtexec # trtexec --batch=8 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[03/30/2020-22:51:31] [I] === Model Options ===
[03/30/2020-22:51:31] [I] Format: *
[03/30/2020-22:51:31] [I] Model:
[03/30/2020-22:51:31] [I] Output:
[03/30/2020-22:51:31] [I] === Build Options ===
[03/30/2020-22:51:31] [I] Max batch: 8
[03/30/2020-22:51:31] [I] Workspace: 16 MB
[03/30/2020-22:51:31] [I] minTiming: 1
[03/30/2020-22:51:31] [I] avgTiming: 8
[03/30/2020-22:51:31] [I] Precision: FP32
[03/30/2020-22:51:31] [I] Calibration:
[03/30/2020-22:51:31] [I] Safe mode: Disabled
[03/30/2020-22:51:31] [I] Save engine:
[03/30/2020-22:51:31] [I] Load engine: model_b8_int8.engine
[03/30/2020-22:51:31] [I] Inputs format: fp32:CHW
[03/30/2020-22:51:31] [I] Outputs format: fp32:CHW
[03/30/2020-22:51:31] [I] Input build shapes: model
[03/30/2020-22:51:31] [I] === System Options ===
[03/30/2020-22:51:31] [I] Device: 0
[03/30/2020-22:51:31] [I] DLACore:
[03/30/2020-22:51:31] [I] Plugins: /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[03/30/2020-22:51:31] [I] === Inference Options ===
[03/30/2020-22:51:31] [I] Batch: 8
[03/30/2020-22:51:31] [I] Iterations: 10 (200 ms warm up)
[03/30/2020-22:51:31] [I] Duration: 10s
[03/30/2020-22:51:31] [I] Sleep time: 0ms
[03/30/2020-22:51:31] [I] Streams: 1
[03/30/2020-22:51:31] [I] Spin-wait: Enabled
[03/30/2020-22:51:31] [I] Multithreading: Enabled
[03/30/2020-22:51:31] [I] CUDA Graph: Disabled
[03/30/2020-22:51:31] [I] Skip inference: Disabled
[03/30/2020-22:51:31] [I] Input inference shapes: model
[03/30/2020-22:51:31] [I] === Reporting Options ===
[03/30/2020-22:51:31] [I] Verbose: Enabled
[03/30/2020-22:51:31] [I] Averages: 10 inferences
[03/30/2020-22:51:31] [I] Percentile: 99
[03/30/2020-22:51:31] [I] Dump output: Disabled
[03/30/2020-22:51:31] [I] Profile: Disabled
[03/30/2020-22:51:31] [I] Export timing to JSON file:
[03/30/2020-22:51:31] [I] Export profile to JSON file:
[03/30/2020-22:51:31] [I]
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[03/30/2020-22:51:31] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT
[03/30/2020-22:51:31] [I] Loading supplied plugin library: /opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107
[03/30/2020-22:51:34] [V] [TRT] Deserialize required 2664446 microseconds.
[03/30/2020-22:51:34] [I] Average over 10 runs is 37.0724 ms (host walltime is 37.085 ms, 99% percentile time is 40.6074).
[03/30/2020-22:51:35] [I] Average over 10 runs is 35.3318 ms (host walltime is 35.3395 ms, 99% percentile time is 35.4345).
[03/30/2020-22:51:35] [I] Average over 10 runs is 35.363 ms (host walltime is 35.3704 ms, 99% percentile time is 35.4407).
[03/30/2020-22:51:35] [I] Average over 10 runs is 35.4296 ms (host walltime is 35.4368 ms, 99% percentile time is 35.4865).
[03/30/2020-22:51:36] [I] Average over 10 runs is 35.4277 ms (host walltime is 35.4359 ms, 99% percentile time is 35.48).
[03/30/2020-22:51:36] [I] Average over 10 runs is 35.4205 ms (host walltime is 35.428 ms, 99% percentile time is 35.4627).
[03/30/2020-22:51:36] [I] Average over 10 runs is 35.4039 ms (host walltime is 35.4121 ms, 99% percentile time is 35.4674).
[03/30/2020-22:51:37] [I] Average over 10 runs is 35.4249 ms (host walltime is 35.4329 ms, 99% percentile time is 35.4598).
[03/30/2020-22:51:37] [I] Average over 10 runs is 35.4241 ms (host walltime is 35.4333 ms, 99% percentile time is 35.4591).
[03/30/2020-22:51:38] [I] Average over 10 runs is 35.4412 ms (host walltime is 35.4675 ms, 99% percentile time is 35.5886).
&&&& PASSED TensorRT.trtexec # trtexec --batch=8 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test3/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

this is the result of yolo in the directory ‘/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo’:

/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo$ trtexec --batch=8 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
&&&& RUNNING TensorRT.trtexec # trtexec --batch=8 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[03/30/2020-22:55:45] [I] === Model Options ===
[03/30/2020-22:55:45] [I] Format: *
[03/30/2020-22:55:45] [I] Model:
[03/30/2020-22:55:45] [I] Output:
[03/30/2020-22:55:45] [I] === Build Options ===
[03/30/2020-22:55:45] [I] Max batch: 8
[03/30/2020-22:55:45] [I] Workspace: 16 MB
[03/30/2020-22:55:45] [I] minTiming: 1
[03/30/2020-22:55:45] [I] avgTiming: 8
[03/30/2020-22:55:45] [I] Precision: FP32
[03/30/2020-22:55:45] [I] Calibration:
[03/30/2020-22:55:45] [I] Safe mode: Disabled
[03/30/2020-22:55:45] [I] Save engine:
[03/30/2020-22:55:45] [I] Load engine: model_b8_int8.engine
[03/30/2020-22:55:45] [I] Inputs format: fp32:CHW
[03/30/2020-22:55:45] [I] Outputs format: fp32:CHW
[03/30/2020-22:55:45] [I] Input build shapes: model
[03/30/2020-22:55:45] [I] === System Options ===
[03/30/2020-22:55:45] [I] Device: 0
[03/30/2020-22:55:45] [I] DLACore:
[03/30/2020-22:55:45] [I] Plugins: /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[03/30/2020-22:55:45] [I] === Inference Options ===
[03/30/2020-22:55:45] [I] Batch: 8
[03/30/2020-22:55:45] [I] Iterations: 10 (200 ms warm up)
[03/30/2020-22:55:45] [I] Duration: 10s
[03/30/2020-22:55:45] [I] Sleep time: 0ms
[03/30/2020-22:55:45] [I] Streams: 1
[03/30/2020-22:55:45] [I] Spin-wait: Enabled
[03/30/2020-22:55:45] [I] Multithreading: Enabled
[03/30/2020-22:55:45] [I] CUDA Graph: Disabled
[03/30/2020-22:55:45] [I] Skip inference: Disabled
[03/30/2020-22:55:45] [I] Input inference shapes: model
[03/30/2020-22:55:45] [I] === Reporting Options ===
[03/30/2020-22:55:45] [I] Verbose: Enabled
[03/30/2020-22:55:45] [I] Averages: 10 inferences
[03/30/2020-22:55:45] [I] Percentile: 99
[03/30/2020-22:55:45] [I] Dump output: Disabled
[03/30/2020-22:55:45] [I] Profile: Disabled
[03/30/2020-22:55:45] [I] Export timing to JSON file:
[03/30/2020-22:55:45] [I] Export profile to JSON file:
[03/30/2020-22:55:45] [I]
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[03/30/2020-22:55:45] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT
[03/30/2020-22:55:45] [I] Loading supplied plugin library: /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107
[03/30/2020-22:55:49] [V] [TRT] Deserialize required 2657630 microseconds.
[03/30/2020-22:55:49] [I] Average over 10 runs is 36.8808 ms (host walltime is 36.8897 ms, 99% percentile time is 40.5483).
[03/30/2020-22:55:49] [I] Average over 10 runs is 35.3183 ms (host walltime is 35.3261 ms, 99% percentile time is 35.3889).
[03/30/2020-22:55:50] [I] Average over 10 runs is 35.3498 ms (host walltime is 35.3568 ms, 99% percentile time is 35.4459).
[03/30/2020-22:55:50] [I] Average over 10 runs is 35.3862 ms (host walltime is 35.3953 ms, 99% percentile time is 35.4588).
[03/30/2020-22:55:50] [I] Average over 10 runs is 35.3438 ms (host walltime is 35.3555 ms, 99% percentile time is 35.4122).
[03/30/2020-22:55:51] [I] Average over 10 runs is 35.3589 ms (host walltime is 35.3663 ms, 99% percentile time is 35.4877).
[03/30/2020-22:55:51] [I] Average over 10 runs is 35.3515 ms (host walltime is 35.5379 ms, 99% percentile time is 35.4017).
[03/30/2020-22:55:51] [I] Average over 10 runs is 35.3773 ms (host walltime is 35.4952 ms, 99% percentile time is 35.4102).
[03/30/2020-22:55:52] [I] Average over 10 runs is 35.3525 ms (host walltime is 35.3599 ms, 99% percentile time is 35.3956).
[03/30/2020-22:55:52] [I] Average over 10 runs is 35.3775 ms (host walltime is 35.4216 ms, 99% percentile time is 35.4101).
&&&& PASSED TensorRT.trtexec # trtexec --batch=8 --useSpinWait --loadEngine=model_b8_int8.engine --verbose --plugins=/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

OK, they have the same inference performance (~28 fps/stream = 1000 ms / (35.36 ms/batch), since each batch of 8 carries one frame from each of the 8 streams).
So something else is affecting the performance.
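As a sanity check on the arithmetic above, the per-stream figure follows directly from the trtexec batch latency. A minimal sketch (the helper name is mine; it assumes each batch of 8 carries one frame from each of the 8 streams):

```python
# Convert a trtexec batch latency into per-stream throughput.
def per_stream_fps(batch_latency_ms: float, batch_size: int, num_streams: int) -> float:
    total_fps = batch_size * 1000.0 / batch_latency_ms  # frames/s over all streams
    return total_fps / num_streams

# ~35.36 ms/batch was measured for both engines above.
print(round(per_stream_fps(35.36, batch_size=8, num_streams=8), 1))  # 28.3
```

With batch size equal to the number of streams, this reduces to 1000 / latency_ms, matching the ~28 fps/stream figure quoted above.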

@mchi, could you give me some useful advice? "Something else", like what? I can't figure out the cause of the difference between the two situations. Or could you reproduce this issue?

Can somebody help me?
Thank you very much for any suggestion.