ICICLE for Researchers: Grants & Challenges

Published on: Dec 5, 2023

In which we invite researchers and practitioners to advance ZK acceleration together with Ingonyama. For this purpose we are allocating $100,000 to be distributed as grants. Read on for details.


As more industry players port their ZKP infrastructure to GPUs (e.g. Scroll, Starknet, zkSync, Polygon zkEVM, RISC Zero, etc.), we believe it is time for academia to catch up. The CPU-targeted toolchains used until now are inefficient compared with specialized hardware. GPUs accelerate computation by exploiting parallelism, and we believe a dialogue between industry and academia will encourage the development of protocols designed to be as parallel as possible.

To have a profound impact on future production systems, applied ZK research should speak the same language as industry in the implementation and benchmarking sections of papers. As an analogy, try to imagine AI researchers implementing a new model today using CPUs only.

ZK GPU Ecosystem

Adding GPU support to existing CPU-based ecosystems and frameworks would impose tremendous overhead on researchers, and it would take significant effort to promote GPU support from a second-class citizen: in general, we aim to move as much computation as possible to the device. A good alternative, therefore, is to implement new protocols directly in a GPU-first framework. Enter Icicle.

Icicle has been in development for over a year, and recently a number of projects have begun using it in production. The library is maintained and actively developed by a team of professionals, with a visible roadmap, detailed documentation, and responsive support. With a security audit scheduled for Q1 2024, we humbly posit that Icicle has matured enough to be trusted by researchers.

Bounties & Grants

We offer the following grant tracks:

  1. Students only: work with Icicle in any way you see fit as part of your research
  2. Using algorithmic innovation, improve the performance of any of the accelerated primitives implemented in Icicle
  3. Port an existing ZK protocol to Icicle
  4. Add a new primitive to Icicle
  5. Compare your ZK benchmarks against Icicle

In addition to financial support, we commit to providing all grantees with technical guidance and to spotlighting their work. Grantees will automatically be eligible for our GPU grant program, which provides access to our GPUs and lab.

Questions? Submissions? Shoot us an email at grants@ingonyama.com


What does it mean to “work with Icicle in any way you see fit”?

Answer: It is difficult to define the scope of future research in advance, but one example would be using Icicle in the implementation of a new ZK protocol.

Another example is a paper on GPU acceleration itself. Suppose I have one H100 running LLM inference with latency x seconds. Now I want to use the same H100 to run a ZK proof of inference (a circuit of the same size as the LLM model), and the proof takes y seconds. Is there a way to achieve latency better than x + y, or is that the lower bound? In other words, can we run ZK on the H100 with minimal impact on LLM inference quality of service?
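One way to make the question concrete is a back-of-the-envelope scheduling model. The sketch below compares the naive sequential baseline x + y against an idealized interleaved schedule in which proving work first fills the GPU capacity left idle during inference (memory-bound phases, bubbles between kernels). All numbers and the `idle_fraction` parameter are hypothetical placeholders, not measurements; real co-scheduling behavior would have to be benchmarked.

```python
def sequential_latency(x: float, y: float) -> float:
    """Naive baseline: run inference (x s), then proving (y s)."""
    return x + y

def interleaved_latency(x: float, y: float, idle_fraction: float) -> float:
    """Idealized schedule: proving first absorbs the GPU's idle capacity
    during inference (idle_fraction of x seconds' worth of work), and any
    leftover proving work runs afterwards at full speed."""
    hidden = min(y, idle_fraction * x)  # proving time hidden inside inference
    return x + (y - hidden)

if __name__ == "__main__":
    x, y = 2.0, 1.5  # hypothetical: 2 s inference, 1.5 s proving
    for idle in (0.0, 0.3, 0.8):
        print(f"idle={idle:.1f}  sequential={sequential_latency(x, y):.2f}s  "
              f"interleaved={interleaved_latency(x, y, idle):.2f}s")
```

Under this toy model, x + y is the lower bound only when the GPU is fully utilized by inference (idle_fraction = 0); any idle capacity leaves room to beat it, which is exactly the kind of claim such a paper would need to validate empirically.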

What are the expectations from projects, and when should I involve Ingonyama?

Answer: We can get involved at any stage of the research. We will reward the final product (as you know, with research it is hard to predict where you will end up) based on the amount of individual labor put into it, along with other factors.

Are the grants awarded in fiat or cryptocurrency?

Answer: Fiat.

