This project is the compatibility engine behind the SOTA LabVIEW Deep Learning Toolkit, ensuring that every ONNX operator behaves consistently across hardware targets. It validates each node against multiple execution providers to guarantee reliable and predictable AI deployment.

Welcome to the ONNX Runtime – Execution Provider Coverage Tester

This open source initiative, led by Graiphic, provides a detailed, real-world coverage map of ONNX operator support for each Execution Provider (EP) in ONNX Runtime.

It is part of our broader effort to democratize AI deployment through SOTA — an ONNX-native orchestration framework designed for engineers, researchers, and industrial use cases.

🎯 Project Objectives

  • Systematically test and report ONNX operator coverage per Execution Provider.
  • Deliver up-to-date insights to guide industrial and academic ONNX Runtime adoption.
  • Help developers, maintainers, and hardware vendors prioritize missing or broken operator support.

🧪 What’s Tested

  • Each ONNX operator is tested in isolation using a minimal single-node model.
  • Status per operator: SUCCESS, FALLBACK, FAIL, NOT TESTED, SKIPPED, UNKNOWN.
  • Per-EP datasets include logs, optimized models (when applicable), and a README.

📐 How It’s Tested

Inference

Each operator is tested with a minimal ONNX graph. For EPs like OpenVINO/TensorRT, a complexification pass can add a small chain of Mul/And nodes (type-dependent) to make the backend compile more of the graph and reveal actual EP coverage.

Training

When ONNX Runtime Training is available, a trainable scalar __train_C is injected via a Mul on the first input of the tested node (initialized to 1.0). We generate artifacts (AdamW) and run a single optimization step with an MSE loss on the first output. Operators that complete this step are marked SUCCESS; explicitly skipped or unsupported patterns are SKIPPED; others are FAIL.

For detailed results and EP lists, see the per-opset dashboards.

🧭 Related Tools

For a complementary and more aggregated perspective on backend compliance, we encourage you to also visit the official ONNX Backend Scoreboard.

While the Scoreboard provides a high-level view of backend support based on ONNX's internal test suite, our initiative focuses on operator-level validation and runtime behavior analysis — especially fallback detection — across Execution Providers. Together, both efforts help build a clearer, more actionable picture of ONNX Runtime capabilities.

🤝 Maintainer

This project is maintained by Graiphic as part of the SOTA initiative.

We welcome collaboration, community feedback, and open contribution to make ONNX Runtime stronger and more widely adopted.

📬 Contact: [email protected]
🌐 Website: graiphic.io
🧠 Learn more about SOTA: graiphic.io/download
