Unlock the full potential of your AI PC. A professional-grade benchmarking framework designed to validate, test, and optimize AI inference performance on Intel NPU, CPU, and GPU.
Comprehensive tools to measure, compare, and optimize AI inference across Intel CPUs, GPUs, and NPUs.
Seamlessly benchmark across Intel CPUs, Arc and integrated GPUs, and the AI Boost NPU, with direct side-by-side performance comparisons.
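A side-by-side comparison boils down to timing the same inference call on each device and reporting a robust latency statistic. A minimal sketch of such a harness, assuming zero-argument inference callables (the `stubs` below are hypothetical stand-ins for real OpenVINO compiled models on each device):

```python
import time
from statistics import median

def benchmark(infer, warmup=3, iters=20):
    """Time a zero-argument inference callable; return median latency in ms."""
    for _ in range(warmup):          # warm-up iterations stabilize clocks and caches
        infer()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return median(samples)           # median resists outlier spikes

# Hypothetical stand-ins for compiled models on each device.
stubs = {"CPU": lambda: None, "GPU": lambda: None, "NPU": lambda: None}
results = {dev: benchmark(fn) for dev, fn in stubs.items()}
for dev, ms in results.items():
    print(f"{dev}: {ms:.3f} ms")
```

The median over repeated iterations, after a warm-up phase, is the standard way to get stable latency numbers from noisy hardware timers.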
Stunning glassmorphic web UI to run tests and visualize performance in real time with animated charts.
Generate detailed HTML reports with hardware specifications, speedup metrics, and downloadable results.
Curated collection of industry-standard models including ResNet, YOLO, BERT, and Vision Transformers, all pre-configured for NPU inference.
Compress models for faster NPU inference with negligible accuracy loss, at up to a 4x compression ratio.
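The 4x figure is consistent with quantizing FP32 weights to INT8 (32 bits down to 8 bits per parameter) — an assumption here, since the suite does not name the method above. A back-of-envelope sketch with a hypothetical parameter count:

```python
# Assumption: "4x compression" = INT8 quantization of FP32 weights.
params = 25_600_000            # hypothetical parameter count (roughly ResNet-50 scale)
fp32_mb = params * 4 / 1e6     # 4 bytes per FP32 weight
int8_mb = params * 1 / 1e6     # 1 byte per INT8 weight
print(f"FP32: {fp32_mb:.1f} MB  INT8: {int8_mb:.1f} MB  ratio: {fp32_mb / int8_mb:.0f}x")
# → FP32: 102.4 MB  INT8: 25.6 MB  ratio: 4x
```

The 4x ratio applies to the weights themselves; on-disk model files carry some format overhead, so real savings land slightly below that bound.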
Automatically find the optimal batch size for maximum throughput on each device type.
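Such a search amounts to measuring throughput (samples per second) at each candidate batch size and keeping the best. A sketch of the sweep, using a hypothetical latency model in place of real device timings (`fake_latency` and its constants are illustrative, not measured):

```python
def throughput(batch, latency_s):
    """Samples per second for a batch processed in latency_s seconds."""
    return batch / latency_s

def fake_latency(batch, overhead=0.004, per_sample=0.001, saturate=16):
    """Hypothetical latency: fixed launch overhead + per-sample cost,
    with a penalty once the batch exceeds the device's sweet spot."""
    penalty = 0.002 * max(0, batch - saturate)
    return overhead + batch * per_sample + penalty

candidates = [1, 2, 4, 8, 16, 32, 64]
best = max(candidates, key=lambda b: throughput(b, fake_latency(b)))
print("optimal batch size:", best)  # → 16 under this latency model
```

Larger batches amortize the fixed per-invocation overhead until the device saturates, which is why throughput peaks at an intermediate batch size rather than growing without bound.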
Launch the beautiful web interface with a single command. Run benchmarks, visualize results, and download reports all from your browser.
$ npu-benchmark web
http://127.0.0.1:5000
Actual performance measurements from an Intel Core Ultra 7 255H system with Intel AI Boost NPU.
Industry-standard models automatically downloaded, converted, and optimized for NPU.
Simple installation with pip. Works on Windows 10/11 and Linux with Python 3.10+.
git clone https://github.com/singhraghvendra2104/OpenVINO-NPU-Inference-Benchmark-Suite.git
cd OpenVINO-NPU-Inference-Benchmark-Suite
pip install -e .
npu-benchmark web
Open http://127.0.0.1:5000 in your browser
Discover the true AI performance of your Intel Core Ultra processor. Start benchmarking today with our open-source suite.