On March 26, 2025, Polyhedra launched zkPyTorch, a new compiler designed to transform machine learning models into zero-knowledge proofs. By bringing cryptographic assurance to AI’s normally opaque processes, the release makes it possible to run AI models as usual while verifying the integrity of their outputs.
The zkPyTorch compiler produces faster, more efficient zero-knowledge proofs for machine learning by converting PyTorch and ONNX models into secure, field-efficient zero-knowledge circuits. Key to its appeal is that it preserves existing development workflows rather than requiring engineers to learn new systems. “zkPyTorch gives AI agents an identity,” explained Tiancheng Xie, co-founder of Polyhedra Network. “It’s a trusted and scalable way to guarantee the integrity of an AI agent without rewriting your AI stack,” Xie added.
Ordinary machine learning models need no customization to produce zero-knowledge proofs: zkPyTorch plugs into the standard PyTorch development workflow and generates native, ready-to-deploy circuits for ZKP engines such as Expander, Polyhedra’s high-speed prover. Previously, this required retraining or building bespoke models. In effect, zkPyTorch lets developers share and verify model outputs, and reason about model behavior, without exposing the sensitive details of the underlying data.
The zkPyTorch compilation pipeline improves efficiency through three steps:
- Graph preprocessing: Analyzes the structure of a machine learning model and converts it into a form that yields efficient zero-knowledge circuits on ZKP verification platforms.
- Quantization: Tunes the variables in a model for field arithmetic, balancing accuracy and performance during zero-knowledge proof (ZKP) verification.
- Circuit optimization: Applies established optimization techniques to render the underlying model as circuits that remain efficient in both proving performance and computational cost.
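The quantization step above can be illustrated with a generic fixed-point scheme — a sketch of the general technique, not Polyhedra’s actual encoding: real-valued weights are scaled and rounded to integers so that circuit arithmetic stays within a prime field.

```python
# Generic fixed-point quantization sketch. ZK circuits compute over a
# finite field, so float weights are encoded as integers mod a prime.
# The scale factor and modulus below are illustrative choices only,
# not zkPyTorch's actual parameters.
PRIME = 2**31 - 1  # a small Mersenne prime standing in for the circuit field
SCALE = 2**16      # fixed-point scale: 16 fractional bits

def quantize(w: float) -> int:
    """Encode a float as a field element (negatives wrap around mod PRIME)."""
    return round(w * SCALE) % PRIME

def dequantize(q: int) -> float:
    """Recover the approximate float, reading large residues as negatives."""
    signed = q if q <= PRIME // 2 else q - PRIME
    return signed / SCALE

weights = [0.5, -1.25, 0.0078125]
encoded = [quantize(w) for w in weights]
decoded = [dequantize(q) for q in encoded]
```

The design tension this illustrates is the one the article names: a larger scale preserves more accuracy, but pushes values toward the field modulus and makes circuit arithmetic costlier.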
Benchmarks published with the zkPyTorch release show the following performance numbers:
- VGG-16: A 15-million-parameter model proves a single image inference in approximately 2.2 seconds, with the proof matching the model’s exact output.
- Llama-3: An 8-billion-parameter model takes roughly 150 seconds to prove each generated token.
Performance was measured on a single-core CPU using the Expander backend.
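To put the Llama-3 figure in perspective, a back-of-the-envelope calculation using the reported single-core rate (the response length is an illustrative assumption):

```python
# Rough cost of proving a full response at the reported rate.
SECONDS_PER_TOKEN = 150   # reported Llama-3 (8B) single-core proof time per token
tokens = 100              # illustrative response length, not from the article

total_seconds = SECONDS_PER_TOKEN * tokens
total_hours = total_seconds / 3600  # ~4.2 hours on one core
```

Since the benchmark is single-core, proving independent tokens or proof subtasks in parallel across many cores would be expected to cut wall-clock time substantially.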
A second, and key, advantage is that zkPyTorch makes inference correctness cryptographically verifiable, sparing developers from manual checks and from bolting on extra security tools that tax system efficiency and cost. Possible applications include:
- Identity standards: A fully verifiable AI stack guarantees that results are the product of trustworthy AI agents, yielding tamper-evident outputs developers and users can rely on.
- Financial and healthcare AI: These critical fields can build responsive AI systems that share insights while remaining secure enough to prevent leaking sensitive data.
- Continuous compliance: Machine learning models can demonstrate compliance with new regulations without disclosing key business logic.
Developers can quickly adopt the new standard through its Python and Rust software development kits (SDKs). Full documentation and quick-start guides detail how to transition seamlessly from traditional machine learning workflows to zero-knowledge integration. Polyhedra, building on expertise from industry leaders in blockchain security and AI, stands as a groundbreaking force in this new field.
Papers, research details, and source code are available at https://eprint.iacr.org/2025/535.
Polyhedra’s zkPyTorch represents a new cornerstone in machine learning security: popular models can achieve cryptographic integrity without radical overhauls, giving developers a smooth path to integrating a trust layer into their offerings.