bug in parallel-forall/code-samples.git

Hi all,
I tried the demo from [u]https://devblogs.nvidia.com/speed-up-inference-tensorrt/[/u], but I ran into some problems.

First, I installed the onnx library as indicated in the devblog and compiled SampleOnnx_1.cpp, but the compiler complained that it couldn’t find IParser. So I cloned onnx-tensorrt from GitHub. Then the compiler complained that IParser has no member “parseFromFile”, and I can’t find this member function in the header file either. Any suggestions? This is the IParser declaration I have:

class IParser
{
public:
/** \brief Parse a serialized ONNX model into the TensorRT network.
 *
 * \param serialized_onnx_model Pointer to the serialized ONNX model
 * \param serialized_onnx_model_size Size of the serialized ONNX model
 *        in bytes
 * \return true if the model was parsed successfully
 * \see getNbErrors() getError()
 */
virtual bool parse(void const *serialized_onnx_model,
                   size_t serialized_onnx_model_size) = 0;

/** \brief Check whether TensorRT supports a particular ONNX model
 *
 * \param serialized_onnx_model Pointer to the serialized ONNX model
 * \param serialized_onnx_model_size Size of the serialized ONNX model
 *        in bytes
 * \return true if the model is supported
 */
virtual bool supportsModel(void const *serialized_onnx_model,
                           size_t serialized_onnx_model_size) = 0;

/** \brief Parse a serialized ONNX model into the TensorRT network
 * with consideration of user provided weights
 *
 * \param serialized_onnx_model Pointer to the serialized ONNX model
 * \param serialized_onnx_model_size Size of the serialized ONNX model
 *        in bytes
 * \param weight_count number of user provided weights
 * \param weight_descriptors pointer to user provided weight array
 * \return true if the model was parsed successfully
 * \see getNbErrors() getError()
 */
virtual bool parseWithWeightDescriptors(
    void const *serialized_onnx_model, size_t serialized_onnx_model_size,
    uint32_t weight_count,
    onnxTensorDescriptorV1 const *weight_descriptors) = 0;

/** \brief Returns whether the specified operator may be supported by the
 *         parser.
 *
 * Note that a result of true does not guarantee that the operator will be
 * supported in all cases (i.e., this function may return false-positives).
 *
 * \param op_name The name of the ONNX operator to check for support
 */
virtual bool supportsOperator(const char* op_name) const = 0;
/** \brief destroy this object
 */
virtual void destroy() = 0;
/** \brief Get the number of errors that occurred during prior calls to
 *         \p parse
 *
 * \see getError() clearErrors() IParserError
 */
virtual int  getNbErrors() const = 0;
/** \brief Get an error that occurred during prior calls to \p parse
 *
 * \see getNbErrors() clearErrors() IParserError
 */
virtual IParserError const* getError(int index) const = 0;
/** \brief Clear errors from prior calls to \p parse
 *
 * \see getNbErrors() getError() IParserError
 */
virtual void clearErrors() = 0;

protected:
virtual ~IParser() {}
};

Hello, can you provide details on the platforms you are using?

The demo you referenced is for TRT 5.0.2.

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version

Hello,

We are reviewing the details with the devblog authors and will keep you updated.

Hi NVES,

My platform information:
Ubuntu 16.04
GeForce 980 (I also have a TX2 with JetPack 3.3, but I haven’t tried it yet)
Driver 390.48
CUDA 9.0
cuDNN 7
Python 2.7/3.5
No TensorFlow
TensorRT 4.0

Hi.

Unfortunately, the blog post sample is based on TensorRT 5.0, which appears to be the release that adds the parseFromFile function. Is it possible for you to update your TensorRT library?

Please find below the declaration of the parseFromFile function from the NvOnnxParser.h file.

Best regards,
Piotr Wojciechowski

/** \brief Parse an ONNX model file, which can be a binary protobuf or a text
  *        ONNX model; calls the parse method internally.
  *
  * \param onnxModelFile File name
  * \param verbosity Verbosity level
  *
  * \return true if the model was parsed successfully
  */
  virtual bool parseFromFile(const char* onnxModelFile, int verbosity) = 0;
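
For context, a minimal sketch of how parseFromFile is typically used with TensorRT 5.x is shown below. It is not tested here; the simple Logger class and the file name model.onnx are placeholders, not taken from the blog sample.

#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger required by the TensorRT builder and parser.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

    // parseFromFile reads the .onnx file itself and calls parse() internally.
    if (!parser->parseFromFile("model.onnx",
                               static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse the ONNX model" << std::endl;
        return 1;
    }

    // ... build and run the engine from the populated network ...

    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}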

Hmm, I use TensorRT 4.0 because I need my code to run on the TX2.
Could you please recommend some other examples like the devblog for TensorRT 4.0?

Hello,

TensorRT 4 ships with many C++ and Python samples that show how to use TensorRT in numerous use cases. Please reference
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#samples
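
If you need to stay on TensorRT 4 with the onnx-tensorrt IParser interface quoted earlier in this thread, one possible workaround is to read the serialized model into memory yourself and hand the buffer to parse(). This is an untested sketch; it assumes the parser lives in the nvonnxparser namespace as in NvOnnxParser.h, and model.onnx stands in for your file name.

#include <fstream>
#include <iostream>
#include <vector>
#include "NvOnnxParser.h"

// Read the serialized ONNX protobuf into memory and pass it to IParser::parse(),
// which is available even without parseFromFile.
bool parseOnnxFromFile(nvonnxparser::IParser& parser, const char* filename)
{
    std::ifstream file(filename, std::ios::binary | std::ios::ate);
    if (!file)
    {
        std::cerr << "Could not open " << filename << std::endl;
        return false;
    }
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> buffer(static_cast<size_t>(size));
    if (!file.read(buffer.data(), size))
        return false;

    return parser.parse(buffer.data(), buffer.size());
}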