Technology
OpenVINO
Intel’s open-source toolkit for optimizing and deploying high-performance AI inference across heterogeneous hardware (CPUs, GPUs, and NPUs).
OpenVINO (Open Visual Inference and Neural Network Optimization) streamlines the transition from model training to real-world deployment. It converts models from frameworks such as PyTorch and TensorFlow, and from the ONNX format, into an Intermediate Representation (IR) to maximize throughput on Intel silicon. Developers use the runtime to execute LLM, computer vision, and generative AI workloads with minimal code changes. Through automated device discovery and load balancing, the toolkit delivers consistent performance whether running on a Core i7 laptop, an integrated Arc GPU, or dedicated AI accelerators.
Recent Talks & Demos