
Relay is essentially TVM's IR for describing models: models from various frameworks are first converted into Relay, and further optimization is then performed on the Relay module.
Example usage: tvm.relay.frontend.from_onnx(...) converts an ONNX model (loaded with onnx.load rather than passed as a raw file path) into Relay, as shown in the sketch below.
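A minimal end-to-end sketch of this workflow, assuming an ONNX model on disk; the file path is a placeholder, and the input name "input" and shape (1, 3, 224, 224) are illustrative assumptions, not values from any particular model:

```python
import onnx
import tvm
from tvm import relay

# Load the ONNX protobuf first; from_onnx expects a ModelProto, not a path
onnx_model = onnx.load("/path/to/onnx/model")

# Map the graph's input name to its shape (both assumed here)
shape_dict = {"input": (1, 3, 224, 224)}

# Convert the ONNX graph into a Relay module plus its weights
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Relay-level optimization and compilation for a CPU target
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```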
tvm.relay.frontend
Functions:
from_caffe(init_net, predict_net, …) | Convert from a Caffe model into a compatible Relay Function.
from_caffe2(init_net, predict_net[, shape, …]) | Load a Caffe2 graph, which contains init_net and predict_net, into a Relay Function.
from_coreml(model[, shape]) | Convert from a CoreML model into a Relay Function.
from_darknet(net[, shape, dtype]) | Convert from Darknet's model into a compatible Relay Function.
from_keras(model[, shape, layout]) | Convert a Keras model to a Relay Function.
from_mxnet(symbol[, shape, dtype, …]) | Convert from MXNet's model into a compatible Relay Function.
from_onnx(model[, shape, dtype, opset, …]) | Convert an ONNX model into an equivalent Relay Function.
from_pytorch(script_module, input_infos[, …]) | Load a PyTorch model in the form of a scripted PyTorch model and convert it into Relay.
from_tensorflow(graph[, layout, shape, outputs]) | Load a TensorFlow graph (a Python TensorFlow graph object) into Relay.
from_tflite(model[, shape_dict, dtype_dict]) | Convert from a TFLite model into a compatible Relay Function.
quantize_conv_bias_mkldnn_from_var(bias_var, …) | Quantized conv2d bias.
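As a second illustration of the table above, a hedged sketch of the from_pytorch path: torchvision's resnet18 is used purely as a stand-in model, and the input name "input0" and shape are assumptions. Unlike the ONNX case, PyTorch models must first be turned into a TorchScript module (e.g. via torch.jit.trace) before conversion.

```python
import torch
import torchvision
import tvm
from tvm import relay

# Stand-in model for illustration only
model = torchvision.models.resnet18(pretrained=True).eval()
example_input = torch.randn(1, 3, 224, 224)

# from_pytorch expects a TorchScript module, so trace the model first
scripted = torch.jit.trace(model, example_input).eval()

# input_infos is a list of (input_name, shape) pairs; the name is assumed
mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3, 224, 224))])

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```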
