eth/consensus : implement eccpow consensus engine #10
Open
mmingyeomm wants to merge 3096 commits into cryptoecc:worldland from
Conversation
This fixes a regression introduced in #32518. In that PR, we removed the slowdown logic that throttled lookups when the table runs empty; that logic was originally added in #20389. Removing it is usually fine, but there are pathological cases, such as hive tests, where the node can only discover one other node, so it can only ever query that node and won't get any results. In cases like these, we need to throttle the creation of lookups to avoid excessive CPU usage.
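For reference, a minimal sketch of the throttling idea, using illustrative names rather than the actual `p2p/discover` internals (the back-off value is an assumption):

```go
// Back off between lookup rounds when the previous round produced no new nodes.
package sketch

import "time"

const slowdownDelay = time.Second // assumed back-off; the real value may differ

// lookupLoop runs lookup rounds via run, which reports how many nodes it found.
func lookupLoop(run func() int, stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		default:
		}
		if run() == 0 {
			// Only one reachable peer (or an empty table): wait before the
			// next round instead of spinning at full CPU.
			select {
			case <-time.After(slowdownDelay):
			case <-stop:
				return
			}
		}
	}
}
```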
Fix logging in the verkle dump path to report the actual key being processed. Previously, the loop always logged keylist[0], which misled users when expanding multiple keys and made debugging harder. This change aligns the log with the key passed to root.Get, improving traceability and diagnostics.
Adds ethclient support for the eth_simulateV1 RPC method, which allows simulating transactions on top of a base state without making changes to the blockchain. --------- Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
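For illustration, the method can also be exercised over the raw RPC client; the typed `ethclient` wrapper added by the PR has its own signature, and the payload fields below are assumptions following the `eth_simulateV1` spec:

```go
// Illustration only: calling eth_simulateV1 over the raw RPC client.
package sketch

import (
	"context"
	"encoding/json"

	"github.com/ethereum/go-ethereum/rpc"
)

func simulate(ctx context.Context, c *rpc.Client) (json.RawMessage, error) {
	payload := map[string]any{
		"blockStateCalls": []map[string]any{
			{"calls": []map[string]any{{
				"from":  "0x0000000000000000000000000000000000000001",
				"to":    "0x0000000000000000000000000000000000000002",
				"value": "0x1",
			}}},
		},
		"validation": false,
	}
	var result json.RawMessage
	// Simulate on top of the latest block without touching the chain.
	err := c.CallContext(ctx, &result, "eth_simulateV1", payload, "latest")
	return result, err
}
```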
New RPC method eth_sendRawTransactionSync(rawTx, timeoutMs?) that submits a signed tx and blocks until a receipt is available or a timeout elapses. Two CLI flags tune the server-side limits: --rpc.txsync.defaulttimeout (default wait window) and --rpc.txsync.maxtimeout (upper bound; requests are clamped). Closes #32094 --------- Co-authored-by: aodhgan <gawnieg@gmail.com> Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
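A hedged usage sketch via the raw RPC client (the typed client API may differ); note the optional timeout is passed as a plain integer number of milliseconds:

```go
// Illustration only: submit a transaction and block until the receipt arrives.
package sketch

import (
	"context"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rpc"
)

func sendAndWait(ctx context.Context, c *rpc.Client, rawTx []byte) (*types.Receipt, error) {
	var receipt types.Receipt
	// 2000 ms client-side wait; the server clamps this to --rpc.txsync.maxtimeout.
	err := c.CallContext(ctx, &receipt, "eth_sendRawTransactionSync",
		hexutil.Encode(rawTx), 2000)
	if err != nil {
		return nil, err
	}
	return &receipt, nil
}
```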
This change addresses critical issues in the state object duplication process specific to Verkle trie implementations. Without these modifications, updates to state objects fail to propagate correctly through the trie structure after a statedb copy operation, leading to inaccuracies in the computation of the state root hash. --------- Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
This PR introduces two new metrics to monitor slow peers: one tracks the number of slow peers, the other measures the time it takes for those peers to become "unfrozen". These metrics help with monitoring and with evaluating the need for future optimization of the transaction fetcher and peer management, for example in peer scoring and prioritization. Additionally, this PR moves the fetcher metrics into a separate file, `eth/fetcher/metrics.go`.
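A sketch of how such metrics are typically registered with go-ethereum's `metrics` package; the metric names below are placeholders, the real ones live in `eth/fetcher/metrics.go`:

```go
package sketch

import "github.com/ethereum/go-ethereum/metrics"

var (
	// Number of peers currently considered slow (placeholder name).
	slowPeerGauge = metrics.NewRegisteredGauge("eth/fetcher/transaction/peers/slow", nil)
	// Time a slow peer spends frozen before it becomes usable again (placeholder name).
	unfreezeTimer = metrics.NewRegisteredTimer("eth/fetcher/transaction/peers/unfreeze", nil)
)
```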
This PR removes dangling peers from the `alternates` map. In the current code, a dropped peer is removed from `alternates` only for the specific transaction hash it was requesting. If that peer is also listed as an alternate for other transaction hashes, those entries remain in `alternates`/`announced` even though the peer has already been dropped.
Fixes TestCorruptedKeySection flaky test failure. https://github.com/ethereum/go-ethereum/actions/runs/18600235182/job/53037084761?pr=32920
…t return anything (#32916) The code change is a noop here, and the tracing hook shouldn't be invoked if the account code doesn't actually change.
Uses the go module's `replace` directive to delegate keccak computation to precompiles. This is still in draft because it needs more testing. It also relies on a PR I created that hasn't been merged yet. _Note that this PR doesn't implement the stateful keccak state structure and reverts to the current behavior. This is a bit silly, since that structure is what the tree root computation uses. The runtime doesn't currently export the sponge; I will see if I can fix that in a further PR, but it will take more time. In the meantime, this is a useful first step._
Enables the Osaka fork in dev mode --------- Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Previously, the journal writer was nil until the first rejournal interval elapsed (default 1h), which means that during this period, txs submitted to this node were not written into the journal file (transactions.rlp). If the node shut down before the first rejournal, txs in pending or queue would be lost. This PR initializes the journal writer right after launch to solve this issue. --------- Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR prevents the SetCode hook from being called when the contract code remains unchanged. This situation can occur in the following cases:
- The deployed runtime code has zero length
- An EIP-7702 authorization attempt tries to unset a non-delegated account
- An EIP-7702 authorization attempt tries to delegate to the same account
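A minimal sketch of the guard, with illustrative names rather than the actual statedb/tracing internals:

```go
package sketch

import "bytes"

// setCode only fires the tracing hook when the code actually changes.
func setCode(prev, next []byte, hook func(prev, next []byte)) []byte {
	if bytes.Equal(prev, next) {
		// Zero-length deployments, 7702 un-delegation of a non-delegated
		// account, or re-delegation to the same target all end up here.
		return prev
	}
	if hook != nil {
		hook(prev, next)
	}
	return next
}
```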
In this PR, the database batch for writing the history index data is pre-allocated. It was observed that the batch repeatedly grows the underlying buffer of the mega-batch, causing significant memory allocation pressure. Pre-allocating the batch effectively mitigates this overhead.
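A sketch of the pre-allocation idea using `ethdb`'s `NewBatchWithSize`; the size estimate and function name are assumptions, not the actual history-index code:

```go
package sketch

import "github.com/ethereum/go-ethereum/ethdb"

// writeIndex sizes the batch up front instead of letting it grow repeatedly.
func writeIndex(db ethdb.KeyValueStore, entries map[string][]byte) error {
	estimated := 0
	for k, v := range entries {
		estimated += len(k) + len(v)
	}
	batch := db.NewBatchWithSize(estimated) // avoids repeated buffer growth
	for k, v := range entries {
		if err := batch.Put([]byte(k), v); err != nil {
			return err
		}
	}
	return batch.Write()
}
```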
This PR is an alternative to #32556. Instead of trying to be smart and reuse `geth init`, we can introduce a new flag `--genesis` that loads the `genesis.json` from file into the `Genesis` object, in the same path that the other network flags currently use. Question: is something like `--genesis` enough to start deprecating `geth init`?

```console
$ geth --datadir data --hoodi
..
INFO [10-06|22:37:11.202] - BPO2: @1762955544
..
$ geth --datadir data --genesis genesis.json
..
INFO [10-06|22:37:27.988] - BPO2: @1862955544
..
```

Pull the genesis [from the specs](https://raw.githubusercontent.com/eth-clients/hoodi/refs/heads/main/metadata/genesis.json) and modify one of the BPO timestamps to simulate a shadow fork. --------- Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
The var `ErrInvalidTxType` is never used in the code base.
This PR fixes some docs for the devp2p suite and uses the CLI library's required value instead of manually checking if required flags are passed.
Co-authored-by: Felix Lange <fjl@twurst.com>
Using the `IsHexAddress` method leaves no gaps in the address-verification logic and makes it simpler.
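For example, `common.IsHexAddress` checks both length and hex content in one call (with or without the 0x prefix):

```go
package sketch

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// parseAddress rejects anything that isn't a well-formed hex address.
func parseAddress(s string) (common.Address, error) {
	if !common.IsHexAddress(s) {
		return common.Address{}, fmt.Errorf("invalid address: %q", s)
	}
	return common.HexToAddress(s), nil
}
```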
#32816 only used the keccak precompile for some minor task. This PR implements a keccak state, which is what is used for hashing the tree.
Found in https://github.com/ethereum/go-ethereum/actions/runs/17803828253/job/50611300621?pr=32585
```
--- FAIL: TestClientCancelWebsocket (0.33s)
panic: read tcp 127.0.0.1:36048->127.0.0.1:38643: read: connection reset by peer [recovered, repanicked]
goroutine 15 [running]:
testing.tRunner.func1.2({0x98dd20, 0xc0005b0100})
    /opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1872 +0x237
testing.tRunner.func1()
    /opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1875 +0x35b
panic({0x98dd20?, 0xc0005b0100?})
    /opt/actions-runner/_work/_tool/go/1.25.1/x64/src/runtime/panic.go:783 +0x132
github.com/ethereum/go-ethereum/rpc.httpTestClient(0xc0001dc1c0?, {0x9d5e40, 0x2}, 0xc0002bc1c0)
    /opt/actions-runner/_work/go-ethereum/go-ethereum/rpc/client_test.go:932 +0x2b1
github.com/ethereum/go-ethereum/rpc.testClientCancel({0x9d5e40, 0x2}, 0xc0001dc1c0)
    /opt/actions-runner/_work/go-ethereum/go-ethereum/rpc/client_test.go:356 +0x15f
github.com/ethereum/go-ethereum/rpc.TestClientCancelWebsocket(0xc0001dc1c0?)
    /opt/actions-runner/_work/go-ethereum/go-ethereum/rpc/client_test.go:319 +0x25
testing.tRunner(0xc0001dc1c0, 0xa07370)
    /opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1934 +0xea
created by testing.(*T).Run in goroutine 1
    /opt/actions-runner/_work/_tool/go/1.25.1/x64/src/testing/testing.go:1997 +0x465
FAIL github.com/ethereum/go-ethereum/rpc 0.371s
```
In `testClientCancel` we wrap the server listener in `flakeyListener`, which schedules an unconditional close of every accepted connection after a random delay. If the random delay is zero, the timer fires immediately and the HTTP client panics with "connection reset by peer". Here we add a minimum of 10ms to ensure the timeout won't fire immediately. Signed-off-by: jsvisa <delweng@gmail.com>
fix #33014 --------- Co-authored-by: lightclient <lightclient@protonmail.com>
Preallocate hashes slice with known length instead of using append in a loop. This avoids multiple reallocations during transaction indexing.
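The general pattern behind this and the other pre-allocation fixes in this list, sketched with a hypothetical helper:

```go
package sketch

import "github.com/ethereum/go-ethereum/common"

// collectHashes sizes the result slice once instead of growing it via append.
func collectHashes(txs [][]byte, hashFn func([]byte) common.Hash) []common.Hash {
	hashes := make([]common.Hash, 0, len(txs)) // single allocation
	for _, tx := range txs {
		hashes = append(hashes, hashFn(tx))
	}
	return hashes
}
```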
Based on [EIP-7864](https://eips.ethereum.org/EIPS/eip-7864), the tree index should be 32 bytes instead of 31 bytes.
```
def get_tree_key(address: Address32, tree_index: int, sub_index: int):
    # Assumes STEM_SUBTREE_WIDTH = 256
    return tree_hash(address + tree_index.to_bytes(32, "little"))[:31] + bytes(
        [sub_index]
    )
```
…3655) Implement a standardized JSON format for slow block logging to enable cross-client performance analysis and protocol research. This change is part of the Cross-Client Execution Metrics initiative proposed by Gary Rong: https://hackmd.io/dg7rizTyTXuCf2LSa2LsyQ The standardized metrics enable data-driven analysis like the EIP-7907 research: https://ethresear.ch/t/data-driven-analysis-on-eip-7907/23850 The JSON format includes:
- block: number, hash, gas_used, tx_count
- timing: execution_ms, total_ms
- throughput: mgas_per_sec
- state_reads: accounts, storage_slots, bytecodes, code_bytes
- state_writes: accounts, storage_slots, bytecodes
- cache: account/storage/code hits, misses, hit_rate

This should come after merging #33522 --------- Co-authored-by: Gary Rong <garyrong0905@gmail.com>
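A hedged sketch of how the listed fields could map onto a Go structure; the authoritative schema is the one defined by the proposal, and the cache section is omitted here for brevity:

```go
package sketch

// slowBlockLog mirrors the field list above; names are assumptions.
type slowBlockLog struct {
	Block struct {
		Number  uint64 `json:"number"`
		Hash    string `json:"hash"`
		GasUsed uint64 `json:"gas_used"`
		TxCount int    `json:"tx_count"`
	} `json:"block"`
	Timing struct {
		ExecutionMs float64 `json:"execution_ms"`
		TotalMs     float64 `json:"total_ms"`
	} `json:"timing"`
	Throughput struct {
		MgasPerSec float64 `json:"mgas_per_sec"`
	} `json:"throughput"`
	StateReads struct {
		Accounts     int `json:"accounts"`
		StorageSlots int `json:"storage_slots"`
		Bytecodes    int `json:"bytecodes"`
		CodeBytes    int `json:"code_bytes"`
	} `json:"state_reads"`
	StateWrites struct {
		Accounts     int `json:"accounts"`
		StorageSlots int `json:"storage_slots"`
		Bytecodes    int `json:"bytecodes"`
	} `json:"state_writes"`
}
```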
A recent pprof from our validator shows ~6% of all allocations coming from the gas price oracle. This PR reduces that.
Preallocate the proof slice with the known size instead of growing it via append in a loop. The length is already known from the source slice.
This PR restores the previous Pebble configuration, disabling seek compaction. Hash-mode archive nodes still need this setting to mitigate the overhead of frequent compaction.
…33704) Heartbeats are used to drop non-executable transactions from the queue. The timeout mechanism was not clearly documented, and the heartbeat was also updated when it wasn't necessary.
Fix ECIES invalid-curve handling in the RLPx handshake (reject invalid ephemeral pubkeys early).
- Add curve validation in crypto/ecies.GenerateShared to reject invalid public keys before ECDH.
- Update the RLPx PoC test to assert that invalid curve points fail with ErrInvalidPublicKey.

Motivation / Context: The RLPx handshake runs ECIES decryption on unauthenticated network input. Prior to this change, an invalid-curve ephemeral public key would proceed into ECDH and only fail at MAC verification, returning ErrInvalidMessage. This provides an oracle on decrypt success/failure and leaves the code path vulnerable to invalid-curve/small-subgroup attacks. The fix enforces IsOnCurve validation up front.
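A sketch of the up-front check, assuming a public key that carries its curve and coordinates as `crypto/ecies` keys do; this is an illustration, not the exact `GenerateShared` diff:

```go
package sketch

import (
	"crypto/elliptic"
	"errors"
	"math/big"
)

var errInvalidPublicKey = errors.New("ecies: invalid public key")

// validatePub rejects points off the curve before any ECDH computation,
// closing the invalid-curve / small-subgroup path.
func validatePub(curve elliptic.Curve, x, y *big.Int) error {
	if x == nil || y == nil || !curve.IsOnCurve(x, y) {
		return errInvalidPublicKey
	}
	return nil
}
```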
core/state: add bounds check in heap eviction loop Add len(h) > 0 check before accessing h[0] to prevent potential panic and align with existing heap access patterns in txpool, p2p, and mclock packages.
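The pattern in isolation, shown on a toy heap rather than the actual core/state type:

```go
package sketch

import "container/heap"

type intHeap []int

func (h intHeap) Len() int           { return len(h) }
func (h intHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h intHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *intHeap) Push(x any)        { *h = append(*h, x.(int)) }
func (h *intHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// evictBelow pops entries until the smallest element reaches the threshold.
// The len(*h) > 0 check comes first so (*h)[0] is never touched on an empty heap.
func evictBelow(h *intHeap, threshold int) {
	for len(*h) > 0 && (*h)[0] < threshold {
		heap.Pop(h)
	}
}
```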
Fix timeout parameter in eth_sendRawTransactionSync to be an integer instead of hex. The spec has now been clarified on this point.
Preallocate capacity for `keyOffsets` and `valOffsets` slices in `decodeRestartTrailer` since the exact size (`nRestarts`) is known upfront. --------- Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Adds support for cell proofs in blob transactions in the signer --------- Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Preallocates slices with known capacity in `stateSet.encode()` and `StateSetWithOrigin.encode()` methods to eliminate redundant reallocations during serialization.
Implements ethereum/execution-apis#729 and fixes #33491. It adds blockTimestamp to transaction objects returned by the RPC.
I recently went on a longer flight and started profiling the geth block production pipeline. This PR contains a bunch of individual fixes split into separate commits. I can drop some if necessary.

Benchmarking is not super easy; the benchmark I wrote is a bit non-deterministic. I will try to write a better benchmark later.
```
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/miner
cpu: Intel(R) Core(TM) Ultra 7 155U
│ /tmp/old.txt │ /tmp/new.txt │
│ sec/op │ sec/op vs base │
BuildPayload-14 141.5µ ± 3% 146.0µ ± 6% ~ (p=0.346 n=200)
│ /tmp/old.txt │ /tmp/new.txt │
│ B/op │ B/op vs base │
BuildPayload-14 188.2Ki ± 4% 177.4Ki ± 4% -5.71% (p=0.018 n=200)
│ /tmp/old.txt │ /tmp/new.txt │
│ allocs/op │ allocs/op vs base │
BuildPayload-14 2.703k ± 4% 2.453k ± 5% -9.25% (p=0.000 n=200)
```
The `Witness` method was not implemented for the binary tree, which caused `debug_executionWitness` to panic. This PR fixes that. Note that the `TransitionTrie` version isn't implemented, and that's on purpose: more thought must be given to what should go in the global witness.
Adds support for the 0x0008 / 0x8000 product ID (Ledger Apex | Nano Gen5).
…B blobs in range loops (#33717) kzg4844.Blob is 131072 bytes. Using `for _, blob := range` copies the entire blob on each iteration. With up to 6 blobs per transaction, this wastes ~768KB of memory copies. Switch to index-based iteration and pass pointers directly.
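The pattern applied, with `processBlob` standing in for whatever consumes each blob:

```go
package sketch

import "github.com/ethereum/go-ethereum/crypto/kzg4844"

// forEachBlob iterates by index and takes the element's address instead of
// copying each 131072-byte blob into a loop variable.
func forEachBlob(blobs []kzg4844.Blob, processBlob func(*kzg4844.Blob)) {
	for i := range blobs {
		processBlob(&blobs[i]) // no per-iteration blob copy
	}
}
```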
The upstream library has removed the assembly-based implementation of keccak. We need to maintain our own library to avoid a performance regression. --------- Co-authored-by: lightclient <lightclient@protonmail.com>
…3701) Not many allocations are saved here, but it also cleans up the logic and should speed up the process a tiny bit.
GetAll did not return GaugeInfo metrics, which affects "chain/info" and "geth/info".
This adds a new type wrapper that decodes as a list, but does not actually decode the contents of the list. The type parameter exists as a marker, and enables decoding the elements lazily. RawList can also be used for building a list incrementally.
The error code for revert should be consistent with eth_call and be 3.
Clear was only used in tests, but it was missing some of the cleanup. Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Follow-up to #33748 Same issue - ResettingTimer can be registered via loadOrRegister() but GetAll() silently drops it during JSON export. The prometheus exporter handles it fine (collector.go:70), so this is just an oversight in the JSON path. Note: ResettingTimer.Snapshot() resets the timer by design, which is consistent with how the prometheus exporter uses it.
### Problem
`HasBody` and `HasReceipts` returned `true` for pruned blocks because they only checked `isCanon()`, which verifies the hash table. However, hash/header tables have `prunable: false` while body/receipt tables have `prunable: true`, so after `TruncateTail()` the hashes still exist but the bodies/receipts are gone. This caused an inconsistency: `HasBody()` returns `true`, but `ReadBody()` returns `nil`.

### Changes
Both functions now check `db.Tail()` when the block is in the ancient store. If `number < tail`, the data has been pruned and the function correctly returns `false`. This aligns the `HasBody`/`HasReceipts` behavior with `ReadBody`/`ReadReceipts` and fixes potential issues in `skeleton.linked()`, which relies on these checks during sync.
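A sketch of the added guard with illustrative names, not the actual rawdb accessors:

```go
package sketch

// hasBody treats anything below the freezer tail as pruned when the block
// lives in the ancient store, even though the non-prunable hash table still
// has an entry for it.
func hasBody(inAncient bool, number, tail uint64, hasHash bool) bool {
	if inAncient {
		return number >= tail
	}
	return hasHash
}
```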
Implements the ECCPoW consensus engine for the Worldland Network.