Hardware Review: GPUs, FPGAs and Zero Knowledge Proofs

Published on: May 4, 2023

The Story So Far

Zero Knowledge Proofs (ZKPs) are of growing importance for decentralized systems. Initially gaining popularity with projects such as ZCash, they are most widely known today as a scaling solution for Ethereum.

Utilizing ZKPs, Layer 2s (Scroll and Polygon, for example), also known as Rollups, are developing zkEVMs (Zero Knowledge Ethereum Virtual Machines). These zkEVMs process a bundle of transactions and produce a single, small proof (kilobytes in size). This proof can be used to verify to the entire network that all the transactions in the bundle are valid and correct.

However, their utility doesn’t stop there. ZKPs allow for all sorts of interesting applications.

  • Private Layer 1 - Mina, for example, hides the details of transactions and data on its blockchain, allowing users to prove the validity of their transactions without revealing the specifics of the transactions themselves. neptune.cash and Aleo are also very interesting projects.
  • Decentralized Cloud - FedML and together.xyz are allowing ML (Machine learning) models to be trained in a decentralized way, outsourcing computation to the network and validating the computation using ZKPs. Druglike is building a more open drug discovery platform utilizing similar techniques.
  • Decentralized Storage - Filecoin is revolutionizing cloud storage and ZKPs are at its core, allowing storage providers to prove that they have stored the correct data over a period of time.
  • Gaming - More advanced versions of Darkforest are likely to emerge, requiring real-time proving. ZKPs may also scale games and reduce the ability to cheat, and perhaps, combined with the decentralized cloud, games could even pay for their own hosting fees!
  • Identity - Decentralized identity is also now possible with ZKPs. A lot of interesting projects are being built around this idea. Some of my favorites are notebook and zkemail.

The implications of ZKPs and decentralized systems are tremendous.

However, there are still many roadblocks and challenges. The process of generating a proof is extremely resource-intensive, requiring complex mathematical operations to be performed many times. As developers seek to build ever more ambitious and complex protocols and systems utilizing ZKP technologies, they demand larger proofs, faster performance, and better optimizations, both for proof generation and verification.

In this article, we shall explore the current state of hardware acceleration and how it is going to play a key role in advancing ZKP applications.

The different types of Zero-Knowledge Proofs

There are two popular zero-knowledge technologies available today: zk-STARKs and zk-SNARKs (we are ignoring Bulletproofs in this article).

As illustrated above, both have differences and tradeoffs. The most important to highlight is that the security of a zk-SNARK based system relies on the assumption that at least one honest participant took part in the trusted setup ceremony and destroyed their key.

When it comes to on-chain verification, zk-SNARKs are considerably cheaper in terms of gas. zk-SNARK proofs are also significantly smaller, making them cheaper to store on chain.

Currently, zk-SNARKs are more popular than zk-STARKs, most likely because zk-SNARKs are considerably older. Another reason may be that Zcash, one of the earliest ZK blockchains, utilized zk-SNARKs, so most of the development tools and documentation currently revolve around the zk-SNARK ecosystem.

What are we accelerating?

Let's first understand how a developer would approach building a ZK application.

They would need two components:

  1. A way to express the computation in a ZK-friendly manner (a DSL or a low-level library).
  2. A proving system, which should implement two functions: Prove and Verify.

DSLs (domain-specific languages) and low-level libraries

When it comes to DSLs there are many: Circom, Halo2, Noir and more. Today, Circom is perhaps the most popular. A popular example of a low-level library is Arkworks; others, such as ICICLE, a CUDA-accelerated library, are under active development.

These DSLs are designed to output constraint systems. Unlike a normal program, which would be evaluated as x := y * z, the same program in constraint form looks more like x - y * z = 0. When programs are represented as constraint systems, simple operations such as addition or multiplication can be expressed as a single constraint, while more complex operations such as bit manipulation require hundreds of constraints!
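To make the constraint view concrete, below is a minimal sketch in plain Python (illustrative only; a real DSL such as Circom compiles a whole program into a large system of such constraints over a finite field) of checking the single constraint x - y * z = 0 against a witness:

```python
# Minimal sketch: checking the single R1CS-style constraint x - y*z = 0 over a
# prime field. The modulus below is the BN254 scalar field order used by Circom;
# any large prime would do for this illustration.
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def constraint_satisfied(x: int, y: int, z: int) -> bool:
    """The program `x := y * z` expressed as the constraint x - y*z = 0 (mod P)."""
    return (x - y * z) % P == 0

witness = {"y": 7, "z": 6, "x": 42}                       # a satisfying assignment
assert constraint_satisfied(witness["x"], witness["y"], witness["z"])
assert not constraint_satisfied(41, 7, 6)                 # a bad witness fails
```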

This is why certain hash functions are complex to implement as ZK-friendly programs.

Proving systems

A proving system is a piece of software that fulfills two main roles: generating a proof and verifying a proof. Proving systems normally take as inputs a circuit, a witness, and public/private parameters.
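As a rough sketch of that interface (hypothetical names only, not the API of any particular library), a proving system exposes something along these lines:

```python
# Hypothetical shape of a proving system's interface -- a sketch, not a real library.
from dataclasses import dataclass

@dataclass
class Proof:
    data: bytes      # succinct: typically a few hundred bytes to a few KB

def prove(circuit, witness, proving_key) -> Proof:
    """The expensive step: evaluate the circuit over the witness and produce a
    succinct proof (this is where the MSM/FFT time discussed below is spent)."""
    raise NotImplementedError

def verify(verifying_key, public_inputs, proof: Proof) -> bool:
    """The cheap step: accept or reject the proof without ever seeing the witness."""
    raise NotImplementedError
```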

Two of the most popular proving systems are Groth16 and PLONK (HyperPlonk, UltraPlonk).

They differ from each other in many ways.

Groth16 requires a lengthy, circuit-specific trusted setup, but provides faster proving and verifying times as well as a constant proof size.

PLONK offers a universal setup; a preprocessing phase is executed for each circuit you run. Verification time, however, remains constant despite supporting many circuits.

Both also differ when it comes to optimizing your constraints: Groth16 gives you little control and constraint sizes grow linearly, while PLONK gives you more flexibility.

Overall, it's important to understand that performance is directly tied to the number of constraints your ZKP application has: the more constraints, the more operations the prover must compute.

Bottlenecks

Proving systems contain two main mathematical operations: MSM and FFT. Today we will be exploring systems where both are present; in such systems, MSM may take up ~60% of runtime while FFT takes up ~30%.

MSM requires a high number of memory accesses, which in most cases consumes a significant amount of a machine's memory, as well as many scalar multiplications.

FFT algorithms often require rearranging the input data into a specific order to perform the transform. This can require a large amount of data movement and can be a bottleneck, especially for large input sizes. Cache utilization may also be an issue if memory access patterns do not fit well into the cache hierarchy.
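To make that reordering concrete, here is a small sketch (plain Python, illustrative only) of the bit-reversal permutation an in-place radix-2 FFT/NTT typically applies before its butterfly stages; the scattered index pattern is exactly what strains the cache hierarchy for large inputs:

```python
# Sketch: the bit-reversal permutation used by in-place radix-2 FFT/NTT.
# Element i is swapped with the element whose index is i's bit pattern reversed,
# a scattered, cache-unfriendly access pattern for large n.
def bit_reverse_permute(a):
    n = len(a)                                    # assumed to be a power of two
    bits = n.bit_length() - 1
    for i in range(n):
        j = int(format(i, f"0{bits}b")[::-1], 2)  # reverse the bit pattern of i
        if i < j:
            a[i], a[j] = a[j], a[i]
    return a

print(bit_reverse_permute(list(range(8))))        # [0, 4, 2, 6, 1, 5, 3, 7]
```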

Beyond Software Optimizations

Hardware acceleration is a must to fully optimize ZKP performance. This is reminiscent of how ASICs and GPUs took over Bitcoin and Ethereum mining.

While software optimizations are extremely important and valuable, they have limitations. Hardware can improve parallelism, for example by designing FPGAs with multiple processing units, which sidesteps issues inherent to software-based parallelism such as thread synchronization or vectorization. Memory access can also be optimized more efficiently by improving the memory hierarchy and access patterns.

Today in ZKP there are two major trends: GPUs and FPGAs.

In the short term, GPUs have the price advantage, especially after ETH moved over to Proof of Stake, leaving massive numbers of GPUs without a use. GPUs also have the advantage of short development times and provide developers with massive parallelism “out of the box”.

FPGAs, however, should catch up, especially when comparing form factor and power consumption. FPGAs also provide better latency for high-speed data streams, and clusters of FPGAs provide better performance than clusters of GPUs.

GPU vs FPGA

GPU

  • GPUs provide fast development times. Frameworks such as CUDA and OpenCL are well documented and popular.
  • GPUs are extremely “power hungry”. Even when a developer is exploiting data level parallelism and thread level parallelism, GPUs still consume a lot of power.
  • GPUs are readily available; consumer-grade GPUs are currently cheap and abundant.

FPGA

  • FPGAs have a far more complex development cycle and require more specialized engineers. In exchange, FPGAs allow an engineer to implement a lot of “low level” optimizations not possible with GPUs.
  • FPGAs provide lower latency especially when processing large data streams.
  • FPGAs are more expensive than GPUs and not as readily available.

ZPrize

ZPrize is one of the most important initiatives currently in the ZK space. ZPrize is a competition encouraging teams to develop hardware acceleration for key ZKP bottlenecks (MSM, NTT, Poseidon and more) on a wide range of platforms (FPGA, GPU, mobile and browser), with the judging criteria being latency, throughput and efficiency.

The results are interesting. GPU improvements were huge; for example:

  • MSM was improved by 131%, from 5.86 seconds to 2.52 seconds.

On the other hand, FPGA solutions often lacked any prior benchmarks. Unlike the GPU results, which had prior benchmarks to compare against, most of the FPGA solutions were the first time such algorithms were open sourced, so future improvement is expected.

I think these results also map out where the majority of developers feel comfortable developing their hardware acceleration solutions. Many teams competed in the GPU category, but only a very small group competed in the FPGA category. I'm not sure if this is due to a lack of FPGA development skills or because the majority of FPGA companies and teams are developing in secret in order to sell their solutions commercially.

ZPrize has made some extremely interesting research work available to the community. As more rounds of ZPrize commence, I think we will see increasingly interesting solutions and problems arise.

Hardware acceleration in action

Let's take Groth16 as an example. If we inspect the rapidsnark implementation of Groth16, we can see that two main operations are occurring: FFT (Fast Fourier Transform) and MSM (Multi-Scalar Multiplication); both primitives are common to many proving systems. The number of times they are executed is directly correlated with the number of constraints a circuit has, and naturally, as more complex circuits are written, the number of constraints grows.

To visualize how dominant these operations can be, let's have a look at a benchmark performed by Ingonyama on a Groth16 circuit (the vanilla C2 process from Filecoin executed on a 32 GB sector):

On MSM

As seen above, MSM can take up a huge chunk of runtime for many provers and serves as a serious bottleneck; this makes MSM a vital primitive to accelerate.


What is MSM?
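As general background (the formal treatment is in the paper): a multi-scalar multiplication takes n scalars k_i and n elliptic-curve points P_i and computes Q = k_1*P_1 + k_2*P_2 + ... + k_n*P_n, where n can run into the millions for large circuits. Computed naively, this means one full double-and-add scalar multiplication per point, plus the memory traffic of streaming all the points and scalars, the two cost centers noted earlier. A toy Python sketch of the naive approach, written over a generic additive group (plain integers mod p stand in for curve points here):

```python
# Sketch: naive multi-scalar multiplication Q = sum_i k_i * P_i over any
# additive group, instantiated with integers mod p as a toy stand-in for
# elliptic-curve points.
def scalar_mul(k, P, add, zero):
    """Double-and-add: compute k * P using about 2*log2(k) group additions."""
    acc, base = zero, P
    while k:
        if k & 1:
            acc = add(acc, base)
        base = add(base, base)
        k >>= 1
    return acc

def naive_msm(scalars, points, add, zero):
    """One scalar multiplication per point, then sum the partial results."""
    acc = zero
    for k, P in zip(scalars, points):
        acc = add(acc, scalar_mul(k, P, add, zero))
    return acc

p = 2**61 - 1
add_mod = lambda a, b: (a + b) % p
print(naive_msm([3, 5, 2], [10, 20, 30], add_mod, 0))   # 3*10 + 5*20 + 2*30 = 190
```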

Both of these are addressed in Ingonyama’s PipeMSM paper in detail.

MSM's role in Groth16

MSM's role in Groth16 can be understood quite easily at a very high level.

Given:

GPU vs FPGA in terms of MSM

As a case study of two leading MSM implementations, we will compare PipeMSM and Sppark, with PipeMSM implemented on FPGA and Sppark on GPU.

PipeMSM vs Sppark

Sppark and PipeMSM both implement MSM using the state-of-the-art Pippenger algorithm, also known as the bucket algorithm; plenty of articles have been written on this subject (example).
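For intuition, here is a toy sketch of the bucket method (plain Python over the same generic group interface as the naive MSM sketch above; it is not taken from either implementation). Scalars are split into c-bit windows; within each window, points sharing a digit are first accumulated into buckets, and the buckets are then combined with a running sum, so most scalar multiplications are replaced by plain group additions:

```python
# Sketch: Pippenger / bucket-method MSM over a generic additive group.
def pippenger_msm(scalars, points, add, zero, c=4):
    nbits = max(s.bit_length() for s in scalars)
    windows = (nbits + c - 1) // c
    result = zero
    for w in reversed(range(windows)):            # most-significant window first
        for _ in range(c):                        # result = 2^c * result
            result = add(result, result)
        buckets = [zero] * (1 << c)
        for s, P in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            if digit:
                buckets[digit] = add(buckets[digit], P)
        # running-sum trick: window_sum = sum_b b * buckets[b] using only additions
        running, window_sum = zero, zero
        for b in range(len(buckets) - 1, 0, -1):
            running = add(running, buckets[b])
            window_sum = add(window_sum, running)
        result = add(result, window_sum)
    return result

p = 2**61 - 1
add_mod = lambda a, b: (a + b) % p
assert pippenger_msm([3, 5, 2], [10, 20, 30], add_mod, 0) == 190
```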

Sppark is implemented using CUDA; their version is highly parallel and achieves impressive results running on the latest GPUs.

PipeMSM, however, optimizes not only the algorithm but also the underlying mathematical primitives of modular multiplication and EC addition. PipeMSM addresses the whole “MSM stack”, implementing a range of mathematical tricks and optimizations aimed at making MSM more hardware friendly, together with a hardware design aimed at lowering latency and exploiting heavy parallelization.

Let's quickly review PipeMSM's implementation.

Low latency

PipeMSM is designed to provide low latency when performing many MSMs on large numbers of inputs. GPUs cannot provide deterministic low latency due to dynamic frequency scaling, where the GPU adjusts its clock speed based on the workload.

The FPGA implementation can provide deterministic performance and latency for specific workloads, due to an optimized hardware design.

Addition implementation

EC addition is implemented as a low-latency, highly parallel and complete formula (complete means it correctly computes the sum of any two points in the EC group). EC addition is used in the EC Adder module in a pipelined fashion to reduce latency.

We choose to represent our coordinates as projective coordinates because fewer divisions and multiplications are involved in the computation. To demonstrate this:

As seen above, we perform only a single division, while affine coordinates would involve three division operations.
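For contrast, here is what textbook affine addition looks like (a generic short-Weierstrass chord formula in Python, illustrative only): the slope computation needs a modular inverse, i.e. a field division, on every single addition, and the simple formula also breaks down for edge cases such as P = Q or P = -Q.

```python
# Sketch: affine point addition P + Q on y^2 = x^3 + a*x + b over F_p, valid only
# for distinct points with x1 != x2 (edge cases deliberately ignored). The
# pow(..., -1, p) call is the modular inverse -- the per-addition "division"
# that projective coordinates avoid.
def affine_add(P, Q, p):
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * pow(x2 - x1, -1, p) % p    # slope of the chord through P and Q
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Tiny toy curve y^2 = x^3 + 7 over F_97 (illustrative only).
p = 97
P, Q = (1, 28), (5, 36)                          # two points on the toy curve
print(affine_add(P, Q, p))                       # (95, 75)
```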

We use a complete formula for EC addition. The main advantage here is that we don't have to handle edge cases. The complete formula:

We implement this addition formula in a pipelined and parallel fashion; our FPGA EC Adder circuit can evaluate it with only 12 multiplications, 17 additions, and 3 multiplications by 3, while the original requires 15 modular multiplications, 13 additions, and 4 multiplications by 3.


EC Adder module - a highly parallel, low latency circuit performing Addition and Subtraction

Modular Multiplication Implementation

We utilize Barrett Reduction and Karatsuba algorithms.

Barrett reduction is a technique to efficiently compute x mod n when n is fixed. Barrett reduction replaces division (an expensive operation) with an approximation of division. An error tolerance can be chosen to describe the range within which we are looking for a correct result. We found that a large error tolerance allows a smaller reduction constant to be used; this reduces the size of the intermediate values used in the modular reduction, which in turn reduces the number of multiplications required to compute the final result.

In the blue square below we can see that due to our high error tolerance we must perform multiple checks to find the accurate result.
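A compact sketch of the basic idea (classic Barrett reduction with a final correction loop, in plain Python; the fixed-error-tolerance variant used in PipeMSM differs in its constants and checks):

```python
# Sketch: classic Barrett reduction of x mod n for a fixed modulus n. The division
# by n is replaced by a multiplication with a precomputed constant and a shift,
# followed by a few correction subtractions -- the "multiple checks" noted above.
def barrett_setup(n):
    k = n.bit_length()
    mu = (1 << (2 * k)) // n        # precomputed reduction constant, floor(4^k / n)
    return k, mu

def barrett_reduce(x, n, k, mu):
    q = (x * mu) >> (2 * k)         # approximate quotient, never larger than x // n
    r = x - q * n                   # congruent to x mod n, but possibly too large
    while r >= n:                   # small number of correction steps
        r -= n
    return r

n = 0xFFFFFFFB                      # example 32-bit modulus
k, mu = barrett_setup(n)
x = 123456789123456789
assert barrett_reduce(x, n, k, mu) == x % n
```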

In short, the Karatsuba algorithm is used to optimize the multiplications we perform in Barrett Reduction. Karatsuba reduces the multiplication of two n-digit numbers to three multiplications of n/2-digit numbers; these multiplications can be reduced to the level where they are narrow enough to be computed by the hardware directly.
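And a sketch of Karatsuba's three-multiplication recursion (the textbook version on Python integers; in hardware the recursion bottoms out at the width of the native multipliers):

```python
# Sketch: Karatsuba multiplication -- two n-bit operands are multiplied using
# three multiplications of roughly n/2-bit operands plus additions and shifts.
def karatsuba(x, y, cutoff_bits=32):
    n = max(x.bit_length(), y.bit_length())
    if n <= cutoff_bits:                          # narrow enough for the base multiplier
        return x * y
    half = n // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    hi = karatsuba(x_hi, y_hi, cutoff_bits)                           # 1st half-size multiply
    lo = karatsuba(x_lo, y_lo, cutoff_bits)                           # 2nd half-size multiply
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo, cutoff_bits) - hi - lo  # 3rd multiply
    return (hi << (2 * half)) + (mid << half) + lo

a = 0x1234567890ABCDEF1234567890ABCDEF
b = 0xFEDCBA0987654321FEDCBA0987654321
assert karatsuba(a, b) == a * b
```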

MSM

Using the primitives described above, we developed a low-latency, highly parallel implementation of MSM.

I will provide a brief explanation (for a complete explanation please refer here) of how this system works.

for exact steps refer here

Results

Above we see a comparison between Sppark and PipeMSM.

What stands out most is the significant power savings FPGAs offer; this could be extremely important in the future for large-scale proof generation operations.
We should also note that our implementation was prototyped and benchmarked at 125 MHz; increasing the clock speed to 500 MHz could improve calculation times by up to 4x or more.

Another advantage of our FPGAs is that they can fit into small form-factor servers, as they consume less energy and produce less heat.

We are in the early days of FPGA engineering for ZKP, and further algorithmic improvements should be expected. Additionally, it's important to note that while the FPGA is calculating MSM, the CPU is idle; it may be possible to achieve even faster times by combining the FPGA with other system resources (CPU/GPU).

ZPUs

Zero Knowledge Processing Units (ZPUs) are likely to arise as a result of the growing demand for ZKP solutions. As we start seeing ZKPs integrated into databases, applications, gaming, and IoT, and as blockchain adoption becomes more widespread, demand for solutions more efficient than FPGAs and GPUs will arise.

ZPUs, like ASICs, will outperform all existing solutions in terms of power and computation once produced. Unlike FPGAs and GPUs, they require even more resources and research to produce, since once manufactured they cannot be modified.

Perhaps in the future we may even see ZPUs integrated into consumer devices, similar to Apple's Neural Engine in its Mac devices.

Conclusion

Zero knowledge proofs (ZKPs) have emerged as a crucial technology for decentralized systems.

ZKPs are going to expand far beyond just scaling Ethereum: they will allow compute to be outsourced to untrusted third parties and enable the development of systems that were impossible before, such as decentralized cloud computing, identity systems and much more.

Optimizations for ZKP have traditionally focused on software-level improvements, but as demand for better performance grows, we will see more advanced acceleration solutions involving both hardware and software.

Currently we mostly see GPUs being used, due to the short development time with CUDA and the fact that consumer-grade GPUs are currently very cheap and abundant. GPUs also offer incredible performance and are not likely to vanish from this landscape anytime soon.

As more sophisticated development teams enter the space, we will most likely witness FPGAs taking the lead in terms of power efficiency and performance. FPGAs provide more low-level customization than GPUs as well as more configuration options, and they offer faster development speed and more flexibility than ASICs. With the world of ZKPs evolving at a rapid pace, FPGAs can be re-flashed with different programs for different systems and updated solutions.

However, producing near real-time proofs will likely require combining GPUs, FPGAs and ASICs, depending on the system we are generating proofs for.

ZPUs are also likely to evolve, providing large-scale proof generators and low-power devices with an extremely efficient solution (I will write about this in an upcoming article).
