In my previous blog post, “ONNX Runtime C++ Inference”, we discussed how to run inference using the ONNX Runtime C++ API. In this blog post, we will discuss how to run inference using the ONNX Runtime Python API instead.
The models and images used in this example are exactly the same as the ones used in the ONNX Runtime C++ inference example. The only differences are that this time we used a new Docker container in which the ONNX Runtime Python library was installed via pip, and that the Python implementation is much simpler and more readable than the C++ one. The Dockerfile, scripts, models, and images are all available on my GitHub.
```python
import numpy as np

def preprocess_image(image_data):
    # image_data is expected in CHW layout with pixel values in [0, 255].
    mean_vec = np.array([0.485, 0.456, 0.406])
    stddev_vec = np.array([0.229, 0.224, 0.225])
    norm_image_data = np.zeros(image_data.shape).astype('float32')
    for i in range(image_data.shape[0]):
        # For each pixel in each channel,
        # divide the value by 255 to get a value in [0, 1] and then normalize.
        norm_image_data[i, :, :] = (image_data[i, :, :] / 255 -
                                    mean_vec[i]) / stddev_vec[i]
    return norm_image_data
```
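The per-channel loop above can equivalently be written with NumPy broadcasting: reshaping the per-channel statistics to `(3, 1, 1)` lets them align with a CHW image of shape `(3, H, W)`, so no explicit loop is needed. A sketch (the function name `preprocess_image_vectorized` is hypothetical, introduced only for this comparison):

```python
import numpy as np

mean_vec = np.array([0.485, 0.456, 0.406], dtype=np.float32)
stddev_vec = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess_image_vectorized(image_data):
    # Reshape (3,) statistics to (3, 1, 1) so they broadcast over a
    # CHW image of shape (3, H, W).
    mean = mean_vec.reshape(3, 1, 1)
    stddev = stddev_vec.reshape(3, 1, 1)
    return ((image_data / 255 - mean) / stddev).astype(np.float32)

# Sanity check: the vectorized version matches the explicit per-channel loop
# on random CHW data.
image = np.random.randint(0, 256, size=(3, 4, 4)).astype(np.float32)
looped = np.zeros(image.shape, dtype=np.float32)
for i in range(3):
    looped[i] = (image[i] / 255 - mean_vec[i]) / stddev_vec[i]
vectorized = preprocess_image_vectorized(image)
```

The broadcast form computes the same values while letting NumPy handle the channel iteration internally, which is both shorter and typically faster on large images.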