
Transaction performance testing setup

Yannik Goldgräbe edited this page Dec 2, 2021 · 2 revisions

These tests are purely for achieving consensus on transaction order and timestamps, and, as in prior work, they assume that nodes are honest. They do not include the time to process transactions. For example, if every transaction is digitally signed, a great deal of processing power might be needed to verify hundreds of thousands of digital signatures per second.


Where should the nodes be located?

US East, US West, Canada, São Paulo, Japan, Australia, South Korea, and Germany.

How performant should the nodes be?

Lowest: AWS t2.medium instances with 2 virtual CPUs, 4 GB memory, and up to 1 Gbps network performance

Highest: AWS m4.4xlarge instances with 16 virtual CPUs, 64 GB memory, and up to 2 Gbps network performance

What size should the transactions be?

100 bytes
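Generating such payloads is trivial; a minimal sketch, assuming the payload contents are irrelevant for a pure ordering benchmark and only the size matters:

```python
import os

TX_SIZE = 100  # bytes per transaction, as fixed above

def make_transaction() -> bytes:
    """Return a random 100-byte payload; for consensus-order
    benchmarks only the size matters, not the contents."""
    return os.urandom(TX_SIZE)

tx = make_transaction()
print(len(tx))  # 100
```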

Where do we measure?

At the event level, i.e. when a node receives a new event via the gossip protocol.
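Event-level measurement could be hooked in roughly like this (a sketch; `on_event_received` is a hypothetical gossip callback, not existing code):

```python
import time

event_log = []  # (event_id, receive_time) pairs

def on_event_received(event_id: str) -> None:
    # Hypothetical callback fired when the gossip protocol delivers
    # a new event; we record only the wall-clock arrival time here.
    event_log.append((event_id, time.perf_counter()))

on_event_received("evt-1")
on_event_received("evt-2")
print(len(event_log))  # 2
```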

Which tools do we use to test the performance and how should they be configured?

Usually the Python client, but it is unclear whether it is efficient enough, especially considering transaction creation including the signatures. To be comparable with other research papers, we might need to strip this down and modify a node to trigger these transactions directly.
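A stripped-down load generator that bypasses signing could look like the following (a hypothetical sketch; `submit_raw` stands in for whatever direct ingest hook we would add to a modified node, here stubbed with a plain list):

```python
import os
import time

def submit_raw(node, payload: bytes) -> None:
    # Hypothetical direct hook into the node, bypassing the Python
    # client and signature handling; real integration is TBD.
    node.append(payload)

def generate_load(node, n_transactions: int, tx_size: int = 100) -> float:
    """Submit n_transactions random payloads; return elapsed seconds."""
    start = time.perf_counter()
    for _ in range(n_transactions):
        submit_raw(node, os.urandom(tx_size))
    return time.perf_counter() - start

node = []  # stand-in for a modified node's ingest queue
elapsed = generate_load(node, 10_000)
print(f"{len(node)} transactions in {elapsed:.3f} s")
```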

How do we define performance?

Transactions per second (TPS) and latency between nodes

How do we define latency?

Latency is measured as the average number of seconds from when a client first submits a transaction to a node until when the node knows the transaction’s consensus order and timestamp.
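The definition above can be computed directly from recorded timestamps; a minimal sketch, assuming we log per-transaction submission and consensus times as plain dicts:

```python
def average_latency(submit_times: dict, consensus_times: dict) -> float:
    """Average seconds from client submission to known consensus
    timestamp, per the definition above. Both dicts map a
    transaction id to a wall-clock time in seconds."""
    deltas = [consensus_times[tx] - submit_times[tx] for tx in submit_times]
    return sum(deltas) / len(deltas)

# Toy example with two transactions:
submits = {"tx1": 0.0, "tx2": 0.5}
finals = {"tx1": 0.8, "tx2": 1.5}
print(average_latency(submits, finals))  # 0.9
```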

What should the goal be?

Small goal: 2 nodes, 250k TPS, <0.02 seconds latency

Big goal: 128 nodes, 500k TPS, <10 seconds latency
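A simple pass/fail check for measured results against these two targets (a sketch; the goal table just restates the numbers above):

```python
GOALS = {
    "small": {"nodes": 2, "tps": 250_000, "max_latency_s": 0.02},
    "big": {"nodes": 128, "tps": 500_000, "max_latency_s": 10.0},
}

def meets_goal(goal: str, measured_tps: float, measured_latency_s: float) -> bool:
    """True if a run reaches the goal's TPS target while staying
    under its latency bound."""
    g = GOALS[goal]
    return measured_tps >= g["tps"] and measured_latency_s < g["max_latency_s"]

print(meets_goal("small", 260_000, 0.015))  # True
print(meets_goal("big", 480_000, 5.0))  # False: TPS below target
```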

References:

https://hedera.com/hh-ieee_coins_paper-200516.pdf

https://hedera.com/hh_whitepaper_v2.1-20200815.pdf