How to run the LDPC encoder standalone test

I am trying to run the following LDPC_Encoder standalone test. Is this possible?

I get the following error concerning the dataset TbCbsUncoded:

sbaker@ubuntusim12:~/zodiacArtemis2/zodiac/trunk/Prototypes/Artemis2/cuda_test_3/build/cuPHY/examples/ldpc_encode$ ./ldpc_encode /home/sbaker/HDF5/standalone_LDPC_encoder_test/mat_gen_data_1_1144_752_0.hdf5
AERIAL_LOG_PATH unset
Using default log path
Log file set to /tmp/ldpc_encode.log
HDF5-DIAG: Error detected in HDF5 (1.10.7) thread 1:
#000: …/…/…/src/H5D.c line 298 in H5Dopen2(): unable to open dataset
major: Dataset
minor: Can't open object
#001: …/…/…/src/H5Dint.c line 1429 in H5D__open_name(): not found
major: Dataset
minor: Object not found
#002: …/…/…/src/H5Gloc.c line 420 in H5G_loc_find(): can't find object
major: Symbol table
minor: Object not found
#003: …/…/…/src/H5Gtraverse.c line 848 in H5G_traverse(): internal path traversal failed
major: Symbol table
minor: Object not found
#004: …/…/…/src/H5Gtraverse.c line 624 in H5G__traverse_real(): traversal operator failed
major: Symbol table
minor: Callback failed
#005: …/…/…/src/H5Gloc.c line 376 in H5G__loc_find_cb(): object 'TbCbsUncoded' doesn't exist
major: Symbol table
minor: Object not found
H5Dopen(): Unable to open file TbCbsUncoded
terminate called after throwing an instance of 'hdf5hpp::hdf5_exception'
what(): HDF5 Error
Aborted (core dumped)

Hi @stuart.baker,

It seems there is an issue opening the input test vector mat_gen_data_1_1144_752_0.hdf5.

Can you check that this test vector file exists at the path you provided and is not corrupted?

Thank you.

The file does exist, e.g.:

$ ll /home/sbaker/HDF5/standalone_LDPC_encoder_test/mat_gen_data_1_1144_752_0.hdf5
-rw-rw-r-- 1 root root 3521504 Aug 29 23:18 /home/sbaker/HDF5/standalone_LDPC_encoder_test/mat_gen_data_1_1144_752_0.hdf5

Can you please check if the test vector contains the field TbCbsUncoded?

h5disp("./mat_gen_data_1_1144_752_0.hdf5")
HDF5 mat_gen_data_1_1144_752_0.hdf5
Group '/'
Dataset 'encodedData'
Size: 3536x752
MaxSize: 3536x752
Datatype: H5T_STD_U8LE (uint8)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'sourceData'
Size: 1144x752
MaxSize: 1144x752
Datatype: H5T_STD_U8LE (uint8)
ChunkSize:
Filters: none
FillValue: 0

I am not sure how to answer your questions, so I just tried MATLAB's h5disp() to display the file. I don't see TbCbsUncoded.

Yes, it seems the h5 file does not include this field.

You can also check the same with h5ls mat_gen_data_1_1144_752_0.hdf5 in the Aerial container.
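If Python with h5py is available, a small sketch along these lines can also list the datasets in a test vector and flag a missing one (the helper name and the required-dataset list are my own, inferred from the error trace above, not a documented cuPHY API):

```python
import h5py

def check_test_vector(path, required=("TbCbsUncoded",)):
    """List the datasets in an HDF5 test vector and return any datasets
    from `required` that are missing (e.g. the one ldpc_encode failed on)."""
    missing = []
    with h5py.File(path, "r") as f:
        # Print each top-level dataset with its shape and dtype,
        # similar to what h5ls / h5disp show.
        for name, dset in f.items():
            print(name, dset.shape, dset.dtype)
        for req in required:
            if req not in f:
                missing.append(req)
    return missing
```

Running this on the mat_gen_data file from this thread should print only encodedData and sourceData and return ["TbCbsUncoded"], matching the failure above.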

How did you generate the test vector mat_gen_data_1_1144_752_0.hdf5 ?

NOTE: the ldpc_encode example is currently not maintained for external release.

HDF5 ldpc_BG-1_Zc- 36_C-1_R-0.89.h5
Group '/'
Dataset 'BG'
Size: 1x1
MaxSize: 1x1
Datatype: H5T_STD_U8LE (uint8)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'C'
Size: 1x1
MaxSize: 1x1
Datatype: H5T_STD_U32LE (uint32)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'K'
Size: 1x1
MaxSize: 1x1
Datatype: H5T_STD_U32LE (uint32)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'K_b'
Size: 1x1
MaxSize: 1x1
Datatype: H5T_STD_U32LE (uint32)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'PuschCfgAlloc'
Size: 1
MaxSize: 1
Datatype: H5T_COMPOUND
Member 'nprb_alloc': H5T_IEEE_F64LE (double)
Member 'N_data': H5T_IEEE_F64LE (double)
ChunkSize:
Filters: none
FillValue: H5T_COMPOUND
Dataset 'PuschCfgCoding'
Size: 1
MaxSize: 1
Datatype: H5T_COMPOUND
Member 'mcsTable': H5T_IEEE_F64LE (double)
Member 'mcs': H5T_IEEE_F64LE (double)
Member 'qamstr': H5T_IEEE_F64LE (double)
Member 'qam': H5T_IEEE_F64LE (double)
Member 'codeRate': H5T_IEEE_F64LE (double)
Member 'TBS': H5T_IEEE_F64LE (double)
Member 'BGN': H5T_IEEE_F64LE (double)
Member 'CRC': H5T_IEEE_F64LE (double)
Member 'B': H5T_IEEE_F64LE (double)
Member 'C': H5T_IEEE_F64LE (double)
Member 'Zc': H5T_IEEE_F64LE (double)
Member 'i_LS': H5T_IEEE_F64LE (double)
Member 'K': H5T_IEEE_F64LE (double)
Member 'K_b': H5T_IEEE_F64LE (double)
Member 'F': H5T_IEEE_F64LE (double)
Member 'K_prime': H5T_IEEE_F64LE (double)
Member 'nV_parity': H5T_IEEE_F64LE (double)
ChunkSize:
Filters: none
FillValue: H5T_COMPOUND
Dataset 'PuschCfgMimo'
Size: 1
MaxSize: 1
Datatype: H5T_COMPOUND
Member 'nl': H5T_IEEE_F64LE (double)
ChunkSize:
Filters: none
FillValue: H5T_COMPOUND
Dataset 'R'
Size: 1x1
MaxSize: 1x1
Datatype: H5T_IEEE_F64LE (double)
ChunkSize:
Filters: none
FillValue: 0.000000
Dataset 'TbCbsCoded'
Size: 2376x1
MaxSize: 2376x1
Datatype: H5T_STD_U8LE (uint8)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'TbCbsUncoded'
Size: 792x1
MaxSize: 792x1
Datatype: H5T_STD_U8LE (uint8)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'Z'
Size: 1x1
MaxSize: 1x1
Datatype: H5T_STD_U32LE (uint32)
ChunkSize:
Filters: none
FillValue: 0
Dataset 'nV_parity'
Size: 1x1
MaxSize: 1x1
Datatype: H5T_STD_U8LE (uint8)
ChunkSize:
Filters: none
FillValue: 0

If I input the GPU_test_input file to the standalone ldpc_encode example, I get the following:

./ldpc_encode "./GPU_test_input/ldpc_BG-1_Zc- 36_C-1_R-0.89.h5"
AERIAL_LOG_PATH set to /home/sbaker/AERIAL_LOG
Log file set to /home/sbaker/AERIAL_LOG/ldpc_encode.log
10:39:02.159077 WRN 46088 0 [NVLOG.CPP] Using /home/sbaker/zodiacArtemis2/zodiac/trunk/Prototypes/Artemis2/cuda_test_3/cuPHY/nvlog/config/nvlog_config.yaml for nvlog configuration
10:39:02.260586 ERR 46088 0 [AERIAL_CUPHY_EVENT] [CUPHY.PDSCH_TX] ERROR: the wrongly structured reference output: 2448 vs. 2376

Note that the GPU input file does include TbCbsCoded:

Dataset 'TbCbsCoded'
Size: 2376x1
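One possible (unconfirmed) explanation for the "2448 vs. 2376" mismatch: for base graph 1, the encoder's raw output is 68*Zc bits, while the transmitted codeword after puncturing the first 2*Zc systematic columns is 66*Zc bits. With Zc = 36 those are exactly the two numbers in the error, so the reference output in the file may be the punctured length while the example expects the unpunctured one. A quick check of the arithmetic:

```python
# Hypothesis check: do the two lengths in the error message correspond to
# the unpunctured vs. punctured BG1 codeword sizes for Zc = 36?
Zc = 36
raw_codeword = 68 * Zc   # systematic + parity bits, before puncturing
punctured = 66 * Zc      # first 2*Zc systematic bits punctured
print(raw_codeword, punctured)   # 2448 2376
```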

I think there is some issue with the generated data. Do you agree?

I ran the following MATLAB script:

HDF5/standalone_LDPC_encoder_test/test_encoder_main.m

Yes, I agree.

Can you please provide a spec for the expected HDF5 data file input for both the encoder and decoder?

For the standalone error-correction LDPC decoder, this is what I get from the command-line help, e.g.:

-i input_filename Input HDF5 file name, which must contain the following datasets:
sourceData: uint8 data set with source information bits
inputLLR: Log-likelihood ratios for coded, modulated symbols
inputCodeWord: uint8 data set with encoded bits (optional)
(Initial bits are sourceData. No puncturing assumed.)

Is the input LLR before or after de-rate-matching?

This is an example I would like to try in standalone testing:

TrBlk size = 966920 (966896 info bits + 24 bit CRC)
Number of CBs = 115
CB size = 8408 + 24 bit CB CRC = 8432 (115*8408=966920)
n_cb = 25344
Zc = 384
BG = BG1
rv = 0
Qm = 8
numFillerBits = 16
Rate-matched size Er = 9088 for CBs 0-14, 9120 for CBs 15-114
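The segmentation numbers above can be cross-checked against the code block segmentation rules of 3GPP TS 38.212 Section 5.2.2 for BG1. A sketch (variable names are mine, not from the cuPHY example):

```python
# Cross-check the example's segmentation numbers against TS 38.212
# code block segmentation for BG1 with Zc = 384.
B = 966920              # transport block size incl. 24-bit TB CRC
C = 115                 # number of code blocks
Zc = 384
K = 22 * Zc             # BG1 info length per code block
N_cb = 66 * Zc          # BG1 circular buffer length (n_cb above)
B_prime = B + C * 24    # total bits after adding per-CB 24-bit CRCs
K_prime = B_prime // C  # bits per CB incl. CB CRC
F = K - K_prime         # filler bits per CB
print(K, N_cb, K_prime, F)   # 8448 25344 8432 16
```

This reproduces the CB size of 8432, n_cb = 25344, and numFillerBits = 16 quoted above.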

If I want to decode just the first CB:
sourceData 8432 = the CB plus CRC
inputLLR 9088 ? = rate matched size
BG = 1
mb = 44 ?
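Given the -i help text quoted above, a minimal decoder input file for the first CB might be assembled like this. This is only a sketch of the file layout: the dataset names come from the help text, but the data here are placeholders, and whether the LLRs should be before or after de-rate-matching is exactly the open question in this thread.

```python
import numpy as np
import h5py

# Sketch of a decoder input HDF5 file per the -i help text above.
# sourceData: the 8432 systematic bits of CB 0 (incl. CB CRC).
# inputLLR: one LLR per rate-matched bit (Er = 9088 for CB 0).
source_bits = np.zeros(8432, dtype=np.uint8)   # placeholder info bits
llrs = np.zeros(9088, dtype=np.float32)        # placeholder LLRs

with h5py.File("ldpc_decode_cb0.h5", "w") as f:
    f.create_dataset("sourceData", data=source_bits)
    f.create_dataset("inputLLR", data=llrs)
```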

Thanks for your post above. The HDF5 tools (e.g. h5ls and h5dump) are useful; with them we can guess the GPU input format.
I understand there is no support for this standalone testing? If so, do you advise that we stop this activity and find some other way to run the encoder in a different fixture?

Hi @stuart.baker ,

I recommend using pyAerial for this purpose. pyAerial invokes the same CUDA implementation of the LDPC decoder.

Please check Using pyAerial for LDPC encoding-decoding chain - NVIDIA Docs .

“The pyAerial library provides a Python-callable bit-accurate GPU-accelerated library for all of the signal processing CUDA kernels in the NVIDIA cuBB layer-1 PDSCH and PUSCH pipelines. In other words, the pyAerial Python classes behave in a numerically identical manner to the kernels employed in cuBB because a pyAerial class employs the exact same CUDA code as the corresponding cuBB kernel: it is the CUDA kernel but with a Python API.”

Thank you.

Understood, thanks. We can close this issue.

