TeleSparse banner
A zero-knowledge-friendly sparsification technique for neural networks, designed to optimize SNARK proof generation without compromising accuracy.

🚀 TeleSparse: Practical Zero-Knowledge Proofs for Deep Neural Networks

Deep neural networks (DNNs) have transformed AI, powering breakthroughs in image recognition and language processing. But verifying these models without revealing sensitive details poses a significant privacy challenge. 🌐 TeleSparse makes zero-knowledge verification practical for today's powerful neural networks!


๐Ÿ•ต๏ธโ€โ™‚๏ธ Threat Model: Ensuring Privacy and Security

TeleSparse considers potentially malicious provers and verifiers, focusing on protecting sensitive model weights and inputs while the correctness of inference is verified. Even parties that follow the protocol may try to infer private information indirectly; TeleSparse provides robust guarantees against such privacy leakage.

Threat Model


โ“ Challenges Addressed by TeleSparse

  1. โš™๏ธ High Computational Overhead: Large models lead to extensive constraints, increasing computation and memory demands.
  2. ๐Ÿ“š Extensive Lookup Tables: Non-linear activations need massive lookup tables, further raising resource usage.
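To see why activation ranges drive lookup-table cost, here is a minimal illustrative sketch (not the actual Halo2 circuit; `build_activation_lut` and the scale/range values are hypothetical) of tabulating a quantized non-linear activation. Every possible quantized input needs a table row, so the table grows linearly with the width of the activation range:

```python
import numpy as np

def build_activation_lut(fn, lo, hi, scale):
    """Tabulate a non-linear activation over the quantized range [lo, hi].

    In a lookup-argument-based proof system, every quantized input the
    circuit may see needs a table row, so table size is proportional to
    the width of the activation range.
    """
    inputs = np.arange(lo, hi + 1)  # every quantized input value
    outputs = np.round(fn(inputs / scale) * scale).astype(np.int64)
    return dict(zip(inputs.tolist(), outputs.tolist()))

# tanh-approximated GELU, a common transformer activation
gelu = lambda x: 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

wide = build_activation_lut(gelu, -4096, 4095, 256.0)   # wide activation range
narrow = build_activation_lut(gelu, -512, 511, 256.0)   # narrowed range

print(len(wide), len(narrow))  # 8192 vs 1024 table rows
```

Halving the activation range halves the table; this is the resource that teleportation (below) targets.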

📊 System Overview

The figure below provides an overview of the TeleSparse system, showcasing its integration with the Halo2 proving system:

System Overview


💡 Key Innovations of TeleSparse

🌳 Sparse-aware ZK Proof Generation

TeleSparse employs sparse-aware pruning, reducing unnecessary constraints by strategically removing weights. This approach maintains high accuracy while drastically cutting prover memory use by 67% and proof generation time by 46%.
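A minimal sketch of the underlying idea, using global magnitude pruning (the function name and the 70% sparsity level are illustrative assumptions, not TeleSparse's exact procedure). Zeroed weights correspond to multiplications that can be skipped, which in a SNARK arithmetization means fewer constraints:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Each zeroed weight removes a multiplication from the layer, so the
    arithmetized circuit needs correspondingly fewer constraints.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, 0.7)
print(1 - np.count_nonzero(pruned) / pruned.size)  # ~0.7 achieved sparsity
```

In practice such pruning is followed by fine-tuning to recover accuracy; the point here is only that sparsity translates directly into a smaller constraint system.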

🔄 Neural Network Teleportation – 📈 Activation Range Optimization

Neural network teleportation re-parameterizes the network without changing the function it computes, narrowing activation ranges and thereby shrinking the lookup tables that non-linear activations require in zero-knowledge proofs. The distribution below illustrates this for ResNet20:

Activation Range Distribution for ResNet20
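The mechanism can be sketched with the simplest case: ReLU is positively homogeneous (relu(t·x) = t·relu(x) for t > 0), so dividing a layer's incoming weights by a factor τ and multiplying its outgoing weights by τ leaves the network's output unchanged while shrinking the hidden activations by τ. The `teleport` helper and the choice τ = 4 below are illustrative assumptions, not TeleSparse's actual optimization:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, w1, w2):
    return relu(x @ w1) @ w2

def teleport(w1, w2, tau):
    """Rescale a hidden ReLU layer by tau > 0.

    relu(x @ (w1 / tau)) == relu(x @ w1) / tau, and multiplying w2 by
    tau cancels the factor, so the network function is preserved while
    the hidden activation range shrinks by a factor of tau.
    """
    return w1 / tau, w2 * tau

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))
w1 = rng.normal(size=(16, 32))
w2 = rng.normal(size=(32, 4))

w1t, w2t = teleport(w1, w2, tau=4.0)

# same function, 4x narrower hidden activations
assert np.allclose(forward(x, w1, w2), forward(x, w1t, w2t))
print(relu(x @ w1t).max() / relu(x @ w1).max())  # ~0.25
```

TeleSparse searches over such function-preserving re-parameterizations to minimize the activation range, and hence the lookup-table size, across the whole network.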


🧪 Evaluation and Impressive Results

TeleSparse has been rigorously tested on popular architectures (Vision Transformers, ResNet, MobileNet) and datasets (CIFAR-10, CIFAR-100, ImageNet).

  • 🚀 Faster Proof Generation: Substantial reductions in prover memory usage (67%) and proof generation time (46%).
  • 🎯 Accuracy Retention: Only about a 1% accuracy drop, minimal compared to the large efficiency gains.

Evaluation Framework


🛠 TeleSparse in Action

Built on Halo2, a proving system known for efficient zero-knowledge proofs, TeleSparse combines lightweight pruning and teleportation to deliver a scalable, efficient verification pipeline.


🌟 Why TeleSparse Matters

TeleSparse revolutionizes zero-knowledge proofs, enabling scalable, secure, and privacy-preserving AI. Dive deeper into TeleSparse on our GitHub repository or explore our arXiv paper! 🌐