Nvdspreprocess seems to process twice in DS 6.2

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Version 6.2
• Issue Type(BUG)

I have used the nvdspreprocess plugin to preprocess images for my model, but it seems to process each frame twice.

This is the preprocess config file:

[property]
enable=1
target-unique-ids=1
# network input order 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0

process-on-frame=1
# input shape; an error occurs when batch is less than 7, reason unknown for now
network-input-shape=3;3;720;1280
processing-width=1280
processing-height=720
scaling-buf-pool-size=6
tensor-buf-pool-size=6

# input color format 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
# input data type 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
# input layer name
tensor-name=input.1
# 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED 4=NVBUF_MEM_SURFACE_ARRAY
scaling-pool-memory-type=0
# 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
    # Scaling Interpolation method
    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
    # 6=NvBufSurfTransformInter_Default
scaling-filter=0
# custom preprocessing implementation (no custom one needed for yolov7)
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
#custom-lib-path=/lib/x86_64-linux-gnu/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0;1;2;3;4;5;6
#src-ids=0
custom-input-transformation-function=CustomTransformation
# whether to enable ROI processing
process-on-roi=0
# multiple boxes allowed: left;top;width;height
roi-params-src-0=0;540;900;500;100;100;100;100

#[group-1]
#src-ids=2
#custom-input-transformation-function=CustomAsyncTransformation
#process-on-roi=1
#roi-params-src-2=0;540;900;500;960;0;900;500

#[group-2]
#src-ids=3
#custom-input-transformation-function=CustomAsyncTransformation
#process-on-roi=0
#roi-params-src-3=0;540;900;500;960;0;900;500

[user-configs]
# scale factor 1/255 = 0.003921568
pixel-normalization-factor=0.003921568
#offset=113.766;104.426;100.746
#channel-scale-factors=73.4925;70.9269;72.2931
#channel-mean-offsets=113.766;104.426;100.746

And the infer config file:

[property]
# GPU ID to run on     ===> replace in program
#gpu-id=0
# scale factor 1/255
#net-scale-factor=0.0039215697906911373
# onnx model path
onnx-file= ./models/hrnet_b3.onnx
# tensorrt engine path
model-engine-file=./models/hrnet_ds62_b3_sm75.int8.cache
# a label file must be provided for rendering to work
#labelfile-path=./models/labels_city.txt
# calibration table path
int8-calib-file=./models/hrnet_ds62_b3_sm75.table.int8
# batch size         ===> replace in program
#batch-size=3
# processing mode 1=Primary 2=Secondary   ===> replace in program
#process-mode=1
# model color format Integer 0: RGB 1: BGR 2: GRAY
#model-color-format=0
# network mode
## 0=FP32, 1=INT8, 2=FP16 mode
#network-mode=1
# number of detected classes
num-detected-classes=1

#input-tensor-meta=1

#interval=0

gie-unique-id=1
# output layer names "output", "528", "546"
#output-blob-names=onnx::Resize_3600;3646

#force-implicit-batch-dim=1
# bbox parsing function (yolov7 requires custom post-processing)
parse-bbox-func-name=NvDsInferParseCustomHRNet
custom-lib-path=/lib/x86_64-linux-gnu/libnvds_infercustomparser.so
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
# clustering mode
#cluster-mode=2
#scaling-filter=0
#scaling-compute-hw=0

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3

# configuration applied to all classes
#Use the config params below for NMS clustering mode
#[class-attrs-all]
#topk=20
#nms-iou-threshold=0.1
#pre-cluster-threshold=0.1

## Per-class configurations
#[class-attrs-0]
#topk=20
#nms-iou-threshold=0.45
#pre-cluster-threshold=0.25

#[class-attrs-1]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

#[class-attrs-2]
#pre-cluster-threshold=0.1
#eps=0.6
#dbscan-min-score=0.95

#[class-attrs-3]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

This is the custom parser code used by the nvinfer low-level library:

#include <cassert>
#include <iostream>
#include <vector>
#include <opencv2/imgproc.hpp>
#include "nvdsinfer_custom_impl.h"

bool NvDsInferParseCustomHRNet (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    assert(outputLayersInfo.size() == 2);
    // pred-map
    float* pred_map = (float *)outputLayersInfo[0].buffer;
    // threshold
    float *threshold = (float*)outputLayersInfo[1].buffer;

    cv::Mat mask(networkInfo.height, networkInfo.width, CV_8U, cv::Scalar(0));
    for(int y = 0; y < networkInfo.height; y++) {
        for(int x = 0; x < networkInfo.width; x++) {
            float val1 = threshold[y * networkInfo.width + x];
            float val2 = pred_map[y * networkInfo.width + x];
            if(val2 > val1) {
                mask.at<uchar>(y,x) = 1;
            }
        }
    }

    cv::Mat labels, stats, centroids;
    cv::connectedComponentsWithStats(mask, labels, stats, centroids, 4);

    std::cout << "********************************************************" << std::endl;
    for(int i = 1; i < stats.rows; i++) {
        if(stats.at<int>(i, 4) > 3) {
            NvDsInferObjectDetectionInfo res;
            res.classId = 0;
            res.detectionConfidence = 1.0;
            res.left = stats.at<int>(i, 0);
            res.top = stats.at<int>(i, 1);
            res.width = stats.at<int>(i, 2);
            res.height = stats.at<int>(i, 3);

            if (res.width && res.height) {
                std::cout << "i: " << i << " x: " << res.left << " y: " << res.top << " w: " << res.width << " h: " << res.height << std::endl;
                objectList.emplace_back(res);
            }
        }
    }

  return true;
}

This is the log:

********************************************************
i: 1 x: 905 y: 170 w: 5 h: 5
i: 2 x: 769 y: 190 w: 5 h: 6
i: 3 x: 146 y: 279 w: 9 h: 11
i: 4 x: 900 y: 343 w: 12 h: 13
i: 5 x: 951 y: 422 w: 23 h: 23
i: 6 x: 1023 y: 451 w: 14 h: 15
#####$###########################################
********************************************************
i: 1 x: 905 y: 170 w: 5 h: 5
i: 2 x: 769 y: 190 w: 5 h: 6
i: 3 x: 146 y: 279 w: 9 h: 11
i: 4 x: 900 y: 343 w: 12 h: 13
i: 5 x: 951 y: 422 w: 23 h: 23
i: 6 x: 1023 y: 451 w: 14 h: 15
#####$###########################################
I0921 13:43:27.149148    65 imagesavebroker.cpp:301] i: 0 x: 1357 y: 255 w: 7 h: 7
I0921 13:43:27.149204    65 imagesavebroker.cpp:301] i: 1 x: 1153 y: 285 w: 7 h: 9
I0921 13:43:27.149230    65 imagesavebroker.cpp:301] i: 2 x: 219 y: 418 w: 13 h: 16
I0921 13:43:27.149242    65 imagesavebroker.cpp:301] i: 3 x: 1350 y: 514 w: 18 h: 19
I0921 13:43:27.149255    65 imagesavebroker.cpp:301] i: 4 x: 1426 y: 633 w: 34 h: 34
I0921 13:43:27.149266    65 imagesavebroker.cpp:301] i: 5 x: 1534 y: 676 w: 21 h: 22
I0921 13:43:27.149277    65 imagesavebroker.cpp:301] i: 6 x: 1357 y: 255 w: 7 h: 7
I0921 13:43:27.149288    65 imagesavebroker.cpp:301] i: 7 x: 1153 y: 285 w: 7 h: 9
I0921 13:43:27.149299    65 imagesavebroker.cpp:301] i: 8 x: 219 y: 418 w: 13 h: 16
I0921 13:43:27.149312    65 imagesavebroker.cpp:301] i: 9 x: 1350 y: 514 w: 18 h: 19
I0921 13:43:27.149322    65 imagesavebroker.cpp:301] i: 10 x: 1426 y: 633 w: 34 h: 34
I0921 13:43:27.149334    65 imagesavebroker.cpp:301] i: 11 x: 1534 y: 676 w: 21 h: 22

You can see it processes twice and gets the same result, and the doubled results then flow downstream to the broker plugin.

I have tested it on DeepStream 6.3, and it does not happen there.

  1. What is the input source? If two consecutive frames are the same, the two printouts will also be the same.
  2. To narrow down this issue, you can use one JPEG as the source; then you can check whether it processes twice.

The input source is a local mp4 file.

You can see the boxes printed in the broker; they are from the same frame, and index 0 is identical to index 5, 1 to 6, 2 to 7, 3 to 8, and 4 to 9.

I0921 14:54:51.502107    57 imagesavebroker.cpp:301] i: 0 x: 1153 y: 286 w: 3 h: 4
I0921 14:54:51.502154    57 imagesavebroker.cpp:301] i: 1 x: 274 y: 399 w: 12 h: 13
I0921 14:54:51.502166    57 imagesavebroker.cpp:301] i: 2 x: 219 y: 420 w: 12 h: 13
I0921 14:54:51.502197    57 imagesavebroker.cpp:301] i: 3 x: 1290 y: 477 w: 19 h: 18
I0921 14:54:51.502223    57 imagesavebroker.cpp:301] i: 4 x: 1302 y: 594 w: 27 h: 27
I0921 14:54:51.502246    57 imagesavebroker.cpp:301] i: 5 x: 1153 y: 286 w: 3 h: 4
I0921 14:54:51.502256    57 imagesavebroker.cpp:301] i: 6 x: 274 y: 399 w: 12 h: 13
I0921 14:54:51.502264    57 imagesavebroker.cpp:301] i: 7 x: 219 y: 420 w: 12 h: 13
I0921 14:54:51.502271    57 imagesavebroker.cpp:301] i: 8 x: 1290 y: 477 w: 19 h: 18
I0921 14:54:51.502279    57 imagesavebroker.cpp:301] i: 9 x: 1302 y: 594 w: 27 h: 27
This is the broker code that prints those boxes:

    cv::Mat image(info->imageHeight, info->imageWidth, CV_8UC(info->channel), (uchar *)info->imageCpuData);
    std::vector<cv::Rect> boxes;
    for (size_t i = 0; i < info->info.size(); i++) {
        cv::Rect rect = cv::Rect(info->info[i].bbox.left, info->info[i].bbox.top,
                                 info->info[i].bbox.width, info->info[i].bbox.height);
        boxes.push_back(rect);
        LOG(INFO) << "i: " << i << " x: " << rect.x << " y: " << rect.y << " w: " << rect.width << " h: " << rect.height;
    }

I have commented out the “input-tensor-meta” property, which means the preprocess tensor is not used:

        g_object_set (G_OBJECT (pgie),
                      // set detection parameters
                      "config-file-path", config_file_path.c_str(),
                      "process-mode", 1,
                      "batch-size", MAX_NUM_SRCS,
                      "gpu-id", gpu_id,
                    //  "input-tensor-meta", true,
                      NULL);

Then it behaves normally, so I think the issue occurs in nvdspreprocess and not in nvinfer…

How do I test with a JPEG source?

  1. deepstream-image-decode-test is a sample that uses JPEG files as the source (see the command sketch after this list).
  2. Which sample are you testing? What is streammux's batch-size? How many sources are you testing? If the source count is 1, network-input-shape should be 1;xx. What is MAX_NUM_SRCS?
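A command sketch for the JPEG test, assuming the stock sample under sources/apps/sample_apps/deepstream-image-decode-test with its default binary name and arguments (paths here are illustrative; adjust to your install):

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-decode-test
CUDA_VER=11.8 make
# pass one or more JPEG files as sources (the file path below is a placeholder)
./deepstream-image-decode-app /workspace/test_frame.jpg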

This is my own project, not a sample. The model batch size is 3, so I set network-input-shape=3;3;720;1280, and MAX_NUM_SRCS is 3; when I set the batch dimension of network-input-shape to 1, an error occurs. I have also tested my project on DeepStream 6.3 and it works normally there; I only regenerated the engine from the onnx model, everything else is the same.

        g_object_set(G_OBJECT(streammux),
                     "width", image_width,
                     "height", image_height,
                     "batch-size", MAX_NUM_SRCS,
                     "live-source", true,
                     "enable-padding", true,
                     "batched-push-timeout", 40,
                     "nvbuf-memory-type", 0,
                     "gpu-id", gpu_id,
                     "drop-pipeline-eos", TRUE, NULL);

        g_object_set(G_OBJECT(preprocess),
                     "config-file", preprocess_config_file.c_str(),
                     "gpu-id", gpu_id,
                     //"pixel-normalization-factor", 0.003921568,
                     NULL);

        g_object_set (G_OBJECT (pgie),
                      // set detection parameters
                      "config-file-path", config_file_path.c_str(),
                      "process-mode", 1,
                      "batch-size", MAX_NUM_SRCS,
                      "gpu-id", gpu_id,
                      "input-tensor-meta", true,
                      NULL);

Can you use deepstream-preprocess-test to reproduce this issue? It supports mp4 input, nvdspreprocess, and nvinfer. To narrow down the issue, you can add a log in a probe function instead of sending to msgbroker, for example as in the sketch below.
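A minimal probe sketch, assuming a pgie variable for the nvinfer element; it uses the standard DeepStream metadata API (gst_buffer_get_nvds_batch_meta from gstnvdsmeta.h) to print the per-frame object count on the nvinfer src pad:

#include <gst/gst.h>
#include "gstnvdsmeta.h"

// Count objects per frame on the nvinfer src pad.
static GstPadProbeReturn
pgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
    if (!batch_meta)
        return GST_PAD_PROBE_OK;

    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
        guint num_objs = 0;
        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
            num_objs++;
        g_print ("source %u frame %d objects %u\n",
                 frame_meta->source_id, frame_meta->frame_num, num_objs);
    }
    return GST_PAD_PROBE_OK;
}

// Attach once after creating the elements (pgie is assumed to be the nvinfer element):
//   GstPad *pad = gst_element_get_static_pad (pgie, "src");
//   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_src_pad_probe, NULL, NULL);
//   gst_object_unref (pad);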

deepstream-preprocess-test.zip (60.6 MB)

I have used deepstream-preprocess-test to reproduce this issue. You can run it on DS 6.2.0 with a T4 GPU.

First unzip the file, then run:

sudo nvidia-docker run -it --rm -v/your/code/file:/workspace nvcr.io/nvidia/deepstream:6.2-devel /bin/bash

Then cd /workspace/xx/ and run:

cp libopencv* /usr/local/lib/
ldconfig

Then build the code; the only change I made is adding the log.

CUDA_VER=11.8 make

Check the dependencies:

ldd -r libnvds_infercustomparser.so

and run:

./deepstream-preprocess-test config_preprocess_hrnet.txt config_infer_primary_hrnet.txt file:///workspace/1.mp4

The stream should contain people, because my model is a head detector.

Can you reproduce this issue?

Could you share the model via forum private message?
The hrnet_ds62_b3_sm75.int8.cache engine is not usable on a different GPU.

hrnet_b3.onnx
https://forums.developer.nvidia.com/t/nvdspreprocess-seems-do-process-twice-in-ds6-2/267101/11

This is the onnx model.

I can’t access it; it shows “your request was approved”.

You can try again; I have added access.

You can convert the model first with:

/usr/src/tensorrt/bin/trtexec --onnx=hrnet_b3.onnx --fp16 --saveEngine=xxxx.engine

FP16 can also reproduce the issue.
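If needed, a follow-up sketch for using the generated engine in the nvinfer config (the engine filename here is illustrative; network-mode=2 selects FP16, per the comments in the config above):

model-engine-file=./models/hrnet_b3_fp16.engine
network-mode=2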

1.zip (744.8 KB)

2.zip (914.2 KB)

Different results between DeepStream 6.2 and 6.3.

I also tested a yolov7 model and can reproduce it as well, so it seems not to be caused by the model.

You can also test the demo that DeepStream provides, which uses the resnet10.caffemodel model.

Turn off the parameter:
process-on-roi=0

################################################################################
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

# The values in the config file are overridden by values set through GObject
# properties.

[property]
enable=1
    # list of component gie-id for which tensor is prepared
target-unique-ids=1
    # 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
    # 0=process on objects 1=process on frames
process-on-frame=1
    #uniquely identify the metadata generated by this element
unique-id=5
    # gpu-id to be used
gpu-id=0
    # if enabled maintain the aspect ratio while scaling
maintain-aspect-ratio=1
    # if enabled pad symmetrically with maintain-aspect-ratio enabled
symmetric-padding=1
    # processing width/height at which the image is scaled
processing-width=640
processing-height=368
    # max buffer in scaling buffer pool
scaling-buf-pool-size=6
    # max buffer in tensor buffer pool
tensor-buf-pool-size=6
    # tensor shape based on network-input-order
network-input-shape= 8;3;368;640
    # 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
    # tensor name same as input layer name
tensor-name=input_1
    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
    # Scaling Interpolation method
    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
    # 6=NvBufSurfTransformInter_Default
scaling-filter=0
    # custom library .so path having custom functionality
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
    # custom tensor preparation function name having predefined input/outputs
    # check the default custom library nvdspreprocess_lib for more info
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
   # Below parameters get used when using default custom library nvdspreprocess_lib
   # network scaling factor
pixel-normalization-factor=0.003921568
   # mean file path in ppm format
#mean-file=
   # array of offsets for each channel
#offsets=

[group-0]
src-ids=0;1
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=0
roi-params-src-0=300;200;700;800;1300;300;600;700
roi-params-src-1=860;300;900;500;50;300;500;700

[group-1]
src-ids=2
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-2=50;300;500;700;650;300;500;500;1300;300;600;700

[group-2]
src-ids=3
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=0
draw-roi=0
roi-params-src-3=0;540;900;500;960;0;900;500

You can see in the log that the number of objects is in fact 23, not 46:

iiii: 0 x: 644.656 y: 290.297 w: 52.3813 h: 117.73
iiii: 1 x: 517.853 y: 302.932 w: 52.4032 h: 99.4169
iiii: 2 x: 167.995 y: 442.855 w: 52.4998 h: 144.871
iiii: 3 x: 1658.08 y: 590.869 w: 115.259 h: 267.46
iiii: 4 x: 1028.61 y: 170.349 w: 45.9144 h: 102.528
iiii: 5 x: 888.896 y: 209.353 w: 57.103 h: 157.307
iiii: 6 x: 273.864 y: 390.414 w: 48.3133 h: 107.746
iiii: 7 x: 138.787 y: 459.272 w: 50.0018 h: 148.888
iiii: 8 x: 469.277 y: 317.707 w: 58.1757 h: 156.463
iiii: 9 x: 110.082 y: 464.013 w: 63.2619 h: 196.44
iiii: 10 x: 193.672 y: 422.537 w: 46.4035 h: 114
iiii: 11 x: 1306.46 y: 231.13 w: 74.6251 h: 193.86
iiii: 12 x: 1201.79 y: 293.267 w: 61.3132 h: 166.374
iiii: 13 x: 32.7113 y: 466.358 w: 78.0839 h: 223.679
iiii: 14 x: 247.345 y: 405.755 w: 41.8941 h: 97.9709
iiii: 15 x: 1627.73 y: 654.434 w: 115.09 h: 337.037
iiii: 16 x: 127.234 y: 479.992 w: 65.0468 h: 196.715
iiii: 17 x: 738.942 y: 266.197 w: 76.928 h: 186.086
iiii: 18 x: 184.646 y: 479.617 w: 53.5917 h: 70.5521
iiii: 19 x: 241.501 y: 439.492 w: 57.6707 h: 72.2919
iiii: 20 x: 1312.74 y: 322.354 w: 76.8167 h: 123.218
iiii: 21 x: 1362.67 y: 141.214 w: 45.4559 h: 55.7235
iiii: 22 x: 850.902 y: 708.977 w: 202.461 h: 148.749
iiii: 23 x: 644.656 y: 290.297 w: 52.3813 h: 117.73
iiii: 24 x: 517.853 y: 302.932 w: 52.4032 h: 99.4169
iiii: 25 x: 167.995 y: 442.855 w: 52.4998 h: 144.871
iiii: 26 x: 1658.08 y: 590.869 w: 115.259 h: 267.46
iiii: 27 x: 1028.61 y: 170.349 w: 45.9144 h: 102.528
iiii: 28 x: 888.896 y: 209.353 w: 57.103 h: 157.307
iiii: 29 x: 273.864 y: 390.414 w: 48.3133 h: 107.746
iiii: 30 x: 138.787 y: 459.272 w: 50.0018 h: 148.888
iiii: 31 x: 469.277 y: 317.707 w: 58.1757 h: 156.463
iiii: 32 x: 110.082 y: 464.013 w: 63.2619 h: 196.44
iiii: 33 x: 193.672 y: 422.537 w: 46.4035 h: 114
iiii: 34 x: 1306.46 y: 231.13 w: 74.6251 h: 193.86
iiii: 35 x: 1201.79 y: 293.267 w: 61.3132 h: 166.374
iiii: 36 x: 32.7113 y: 466.358 w: 78.0839 h: 223.679
iiii: 37 x: 247.345 y: 405.755 w: 41.8941 h: 97.9709
iiii: 38 x: 1627.73 y: 654.434 w: 115.09 h: 337.037
iiii: 39 x: 127.234 y: 479.992 w: 65.0468 h: 196.715
iiii: 40 x: 738.942 y: 266.197 w: 76.928 h: 186.086
iiii: 41 x: 184.646 y: 479.617 w: 53.5917 h: 70.5521
iiii: 42 x: 241.501 y: 439.492 w: 57.6707 h: 72.2919
iiii: 43 x: 1312.74 y: 322.354 w: 76.8167 h: 123.218
iiii: 44 x: 1362.67 y: 141.214 w: 45.4559 h: 55.7235
iiii: 45 x: 850.902 y: 708.977 w: 202.461 h: 148.749
Source ID = 0 Frame Number = 668 Number of objects = 46

Yes. Currently you can work around it with this method:
In config_preprocess_hrnet.txt, change
roi-params-src-0=0;540;900;500;100;100;100;100
to
roi-params-src-0=0;540;900;500

OK, this resolves the issue. And if I want to set multiple ROIs, how should I set them?

After roi-params-src is set, please set process-on-roi to 1; see the config sketch below.
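A config sketch of a multi-ROI group, assuming a single source for illustration (each group of four values in roi-params-src is one ROI; the batch dimension of network-input-shape may need to be large enough to cover the total number of ROIs across all sources):

[group-0]
src-ids=0
custom-input-transformation-function=CustomTransformation
# enable ROI processing so the ROI list below is honored
process-on-roi=1
# two ROIs for source 0, each as left;top;width;height
roi-params-src-0=0;540;900;500;100;100;100;100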
