1. I have successfully built the TRT model on x86, but building the same model on ARM64 (Xavier NX) fails.
The error is as follows:
GRU_75: inputs to IRecurrenceLayer mismatched
Builder failed while analyzing shapes.
x86 config: TensorRT version 7.2.3.4
ARM64 config: TensorRT version 7.1.3.4
Xavier NX software version: JetPack 4.5
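For reference, a minimal sketch of how such a build can be reproduced with the TensorRT 7.x Python API (the file name model.onnx, the explicit-batch flag, and the workspace size are assumptions, not taken from the original report). The messages above suggest the failure happens at engine build time rather than during ONNX parsing:

# Minimal TensorRT 7.x build sketch (assumed setup): parse an ONNX file, then build an engine.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(EXPLICIT_BATCH)
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:          # hypothetical file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))       # parser-level failures show up here
        raise SystemExit("ONNX parsing failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28          # 256 MB, arbitrary choice
engine = builder.build_engine(network, config)   # "Builder failed while analyzing shapes" is reported here
print("engine built:", engine is not None)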
2. Trying other networks: I exported ONNX with torch.onnx.export and ran inference with PyTorch, OnnxRuntime, and TensorRT. PyTorch and OnnxRuntime give the same results, while TensorRT does not.
It looks like the TRT engine built from the ONNX model is the problem. I also have other segmentation models that load fine, but their results do not match either. I have tried polygraphy on the segmentation model: PyTorch and OnnxRuntime give the same results, but the TRT engine does not.
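A minimal sketch of this kind of cross-runtime comparison, using a hypothetical small model in place of the real network (the layer, shapes, input/output names, and tolerance are placeholders):

import numpy as np
import torch
import onnxruntime as ort

# Hypothetical model and input; replace with the real network and input shape.
model = torch.nn.Conv2d(3, 8, 3, padding=1).eval()
x = torch.randn(1, 3, 32, 32)

torch.onnx.export(model, x, "model.onnx", opset_version=11,
                  input_names=["input"], output_names=["output"])

with torch.no_grad():
    ref = model(x).numpy()                      # PyTorch reference output

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
out = sess.run(None, {"input": x.numpy()})[0]   # ONNX Runtime output

# Compare the two runtimes; a TRT engine's output can be checked the same way.
print("max abs diff:", np.abs(ref - out).max())
print("allclose:", np.allclose(ref, out, atol=1e-5))

The polygraphy tool mentioned above automates the same kind of comparison across ONNX Runtime and TensorRT.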
3. Upgrading nvonnxparser: for the JetPack 4.5.1 environment, this problem can be fixed by upgrading nvonnxparser to v7.2. The specific steps are as follows:
Install cmake-3.13.5
$ sudo apt-get install -y protobuf-compiler libprotobuf-dev openssl libssl-dev libcurl4-openssl-dev
$ wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
$ tar xvf cmake-3.13.5.tar.gz
$ cd cmake-3.13.5/
$ ./bootstrap --system-curl
$ make -j$(nproc)
$ echo "export PATH=${PWD}/bin/:\$PATH" >> ~/.bashrc
$ source ~/.bashrc
Build onnx-tensorrt
$ git clone https://github.com/onnx/onnx-tensorrt.git
$ cd onnx-tensorrt/
$ git submodule update --init --recursive
$ mkdir build && cd build
$ cmake ../
$ make -j
$ sudo mv /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.7.1.3 libnvonnxparser.so.7.1.3_bk
$ sudo cp libnvonnxparser.so.7.2.2 /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.7.2.2
$ sudo rm /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.7
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.7.2.2 /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.7
$ sudo ldconfig
In fact, on x86_64 Ubuntu, using TensorRT 8.0 + CUDA 11.3 solved all of these problems.
The root cause is in the ONNX parser, not in the TensorRT library itself. For compatibility you still need to use TensorRT v7.1.3 on the Jetson, but upgrading the ONNX parser resolves this problem.
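As a quick sanity check after the library swap (a minimal sketch; the path is the one used in the steps above, and ctypes is just one convenient way to confirm the symlink resolves):

import ctypes
import os

lib = "/usr/lib/aarch64-linux-gnu/libnvonnxparser.so.7"
print("symlink target:", os.path.realpath(lib))   # should now end in .so.7.2.2

# Make sure the dynamic loader can actually resolve the replacement library.
ctypes.CDLL(lib)
print("libnvonnxparser loaded OK")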
4. Testing after the update: after updating the parser on the ARM64 NX as described above, a new problem appeared. TensorRT 7.1.3 was installed via JetPack 4.5.1, and the ONNX parser was installed following the instructions above. The problem is as follows:
/home/onnx-tensorrt/onnx2trt_utils.cpp:291: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
/home/onnx-tensorrt/ModelImporter.cpp:703: While parsing node number 37 [Resize -> "300"]:
/home/onnx-tensorrt/ModelImporter.cpp:704: --- Begin node ---
/home/onnx-tensorrt/ModelImporter.cpp:705: input: "295" input: "299" input: "526" output: "300" name: "Resize_37" op_type: "Resize" attribute { name: "coordinate_transformation_mode" s: "align_corners" type: STRING } attribute { name: "cubic_coeff_a" f: -0.75 type: FLOAT } attribute { name: "mode" s: "linear" type: STRING } attribute { name: "nearest_mode" s: "floor" type: STRING }
/home/onnx-tensorrt/ModelImporter.cpp:706: --- End node ---
/home/onnx-tensorrt/ModelImporter.cpp:709: ERROR: /home/onnx-tensorrt/builtin_op_importers.cpp:3074 In function importResize:
[8] Assertion failed: (transformationMode == "asymmetric" || transformationMode == "pytorch_half_pixel" || transformationMode == "half_pixel") && "TensorRT only supports half pixel, pytorch half_pixel, and asymmetric tranformation mode for linear resizes when scales are provided!"
kError: Assertion failed: (transformationMode == "asymmetric" || transformationMode == "pytorch_half_pixel" || transformationMode == "half_pixel") && "TensorRT only supports half pixel, pytorch half_pixel, and asymmetric tranformation mode for linear resizes when scales are provided!"
Network must have at least one output
Network validation failed.
The error above is caused by an unsupported combination: a Resize node with linear interpolation and the align_corners coordinate transformation mode.
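For context, a hedged illustration of where this attribute usually comes from (the module and shapes below are made up, not taken from the failing model): exporting a bilinear upsample from PyTorch with align_corners=True at opset 11 produces a Resize node with coordinate_transformation_mode set to align_corners, while align_corners=False exports as pytorch_half_pixel, which the parser accepts.

import torch

class Up(torch.nn.Module):
    def forward(self, x):
        # align_corners=True is what ends up as
        # coordinate_transformation_mode = "align_corners" in the ONNX Resize node.
        return torch.nn.functional.interpolate(
            x, scale_factor=2, mode="bilinear", align_corners=True)

x = torch.randn(1, 3, 16, 16)
torch.onnx.export(Up(), x, "resize_align_corners.onnx", opset_version=11)

# Re-exporting with align_corners=False instead produces
# coordinate_transformation_mode = "pytorch_half_pixel", which TensorRT supports.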
The check comes from builtin_op_importers.cpp in onnx-tensorrt:

// alignCorners = 0: HALF_PIXEL
// alignCorners = 1: ASYMMETRIC
else
{
    if (mode == "nearest")
    {
        ASSERT(transformationMode == "asymmetric"
                && "TensorRT only supports asymmetric tranformation mode for nearest neighbor resizes when scales are provided!",
            ErrorCode::kUNSUPPORTED_NODE);
    }
    else if (mode == "linear")
    {
        ASSERT((transformationMode == "asymmetric" || transformationMode == "pytorch_half_pixel"
                   || transformationMode == "half_pixel")
                && "TensorRT only supports half pixel, pytorch half_pixel, and asymmetric tranformation mode for linear resizes when scales are provided!",
            ErrorCode::kUNSUPPORTED_NODE);
        if (transformationMode == "asymmetric")
        {
            layer->setAlignCorners(true);
        }
    }
}
// For opset 10 resize, the only supported mode is asymmetric resize with scales.
else

Support for more interpolation modes would have to be added to the Resize importer. For now, the Resize mode can be changed to "nearest" to skip this error.
This update can also be done with the ONNX GraphSurgeon API:
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))

# Switch every Resize node to nearest-neighbor interpolation so the
# unsupported align_corners + linear combination is no longer hit.
resize_nodes = [node for node in graph.nodes if node.op == "Resize"]
for n in resize_nodes:
    n.attrs["mode"] = "nearest"

onnx.save(gs.export_onnx(graph), "updated_model.onnx")
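A quick check that the rewrite took effect before rebuilding the engine (a minimal sketch; updated_model.onnx is the file produced above). Keep in mind that switching from linear to nearest interpolation changes the network's behaviour, so the outputs should be re-compared against PyTorch/OnnxRuntime afterwards.

import onnx

# Reload the patched model and print each Resize node's attributes to make
# sure "mode" is now "nearest" everywhere.
model = onnx.load("updated_model.onnx")
for node in model.graph.node:
    if node.op_type == "Resize":
        attrs = {a.name: onnx.helper.get_attribute_value(a) for a in node.attribute}
        mode = attrs.get("mode", b"").decode()
        ctm = attrs.get("coordinate_transformation_mode", b"").decode()
        print(node.name, mode, ctm)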