UnicodeDecodeError while performing custom detection training

When I execute the command python3 train_ssd.py --dataset-type=voc --data=data/test/ --model-dir=models/test --batch-size=2 --workers=1 --epochs=1
I get this error:
/home/xyz/.local/lib/python3.6/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension:
warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "train_ssd.py", line 26, in <module>
    from vision.datasets.open_images import OpenImagesDataset
  File "/home/xyz/jetson-inference/python/training/detection/ssd/vision/datasets/open_images.py", line 4, in <module>
    import pandas as pd
  File "/usr/lib/python3/dist-packages/pandas/__init__.py", line 58, in <module>
    from pandas.io.api import *
  File "/usr/lib/python3/dist-packages/pandas/io/api.py", line 19, in <module>
    from pandas.io.packers import read_msgpack, to_msgpack
  File "/usr/lib/python3/dist-packages/pandas/io/packers.py", line 68, in <module>
    from pandas.util._move import (
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfd in position 0: invalid start byte

Hi,

It looks like you have hit the same issue as the topic linked below.
Please check the comments there for suggestions:

Thanks.

Yes, I am getting exactly the same error, but I don't want to run my code inside a Docker container, and that post does not have a specific answer for it.

Hi,

Sorry for the late update.

This error is related to the file encoding format.
Is your data saved as UTF-8?
If not, could you re-save it as UTF-8 and try again?

Thanks.
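In case it helps, here is a minimal sketch of re-saving a text file as UTF-8 in Python. The file names and the source encoding (Latin-1) are assumptions for illustration; replace them with your actual dataset annotation file and its real encoding. The sketch creates a small sample file first so it runs on its own:

```python
# Hedged sketch: re-encode a text file to UTF-8.
# File names and the 'latin-1' source encoding are assumptions --
# substitute your dataset's actual file and encoding.
src = "labels_latin1.txt"
dst = "labels_utf8.txt"

# Create a small sample file in Latin-1 just for demonstration.
with open(src, "w", encoding="latin-1") as f:
    f.write("café\n")

# Read using the original encoding, then write back as UTF-8.
with open(src, "r", encoding="latin-1") as f:
    text = f.read()
with open(dst, "w", encoding="utf-8") as f:
    f.write(text)

# The new file now decodes cleanly as UTF-8.
with open(dst, "r", encoding="utf-8") as f:
    print(f.read().strip())  # café
```

If you are unsure of the original encoding, opening the file with `errors="replace"` can at least show where the undecodable bytes are.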

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.