feat: High-Security Integration for Multisig Pallet #365
base: testnet/planck
Conversation
Gemini Review:

**Security:** The implementation includes good defense-in-depth measures, such as checking the call size before decoding. The documentation regarding the "Risk Window" during migration is clear and helpful.

**Code Quality:** The refactoring in `transaction_extensions.rs` to share the whitelist logic with the multisig configuration is excellent.

**1. Architecture & Dependency Management**

Observation: …

Recommendation: Invert this dependency to keep …
It kinda makes sense ^^^
ethan-crypto left a comment:
One issue I noticed: in the benchmarks we aren't actually computing the worst case for the cleanup process, because we aren't setting the call data to the max size for the expired proposals. This could lead to underweighting.
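A minimal, self-contained sketch of why this matters, modeled in plain Rust rather than the pallet's benchmarking framework (the names `MAX_CALL_SIZE` and `cleanup_weight` are illustrative, not the pallet's real API): if per-proposal cleanup cost scales with the stored call length, a benchmark that stores tiny calls underestimates the true worst case.

```rust
// Toy model of the underweighting risk. Assumed constants and cost
// shape are illustrative only.
const MAX_CALL_SIZE: usize = 10 * 1024;

/// Pretend per-proposal cleanup weight: a fixed base cost plus a
/// per-byte decode/removal cost.
fn cleanup_weight(call_len: usize) -> u64 {
    const BASE: u64 = 10_000;
    const PER_BYTE: u64 = 50;
    BASE + PER_BYTE * call_len as u64
}

fn main() {
    // Benchmark as currently written: small fixed-size calls.
    let benchmarked = cleanup_weight(32);
    // Actual worst case: every expired proposal stores a max-size call.
    let worst_case = cleanup_weight(MAX_CALL_SIZE);
    // The gap between these two is the amount by which real cleanup
    // work can exceed what users were charged for.
    assert!(worst_case > benchmarked);
    println!("benchmarked={benchmarked} worst_case={worst_case}");
}
```

The fix in the actual benchmark would be to populate each expired proposal's call data at `T::MaxCallSize` before measuring.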
ethan-crypto left a comment:
- Right now we aren't making effective use of the `r` regression coefficient. I suggested some changes that would account for this by defining a `cleaned_target = i.min(r)`, which will only clean a subset of the total proposals, resulting in a more accurate weighting and a greater refund for `propose` calls.
- The cleanup path remains expensive per proposal as long as the call lives inside the same storage value. We should make the cleanup iteration cost depend only on the small fields we care about. To fix this we could either go with a split-storage approach:
  - `ProposalMeta` (small: proposer/status/expiry/deposit/…) used by cleanup iteration
  - `ProposalCall` (big) only touched on execute / remove

  OR store the call as a preimage / hash (`call_hash`, `call_len`) and require the bytes only when executing.
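A hedged sketch of the split-storage idea, using plain maps in place of pallet storage; the struct and field names (`ProposalMeta`, `Proposals`, etc.) are illustrative, not the pallet's actual types. The point is that cleanup iterates only the small records and never decodes the large call bytes.

```rust
use std::collections::HashMap;

/// Small record: everything cleanup needs, and nothing it doesn't.
struct ProposalMeta {
    proposer: u64,
    expiry: u32,
    deposit: u128,
}

struct Proposals {
    /// Small records: iterated by cleanup.
    meta: HashMap<u32, ProposalMeta>,
    /// Large encoded calls: touched only on execute / remove.
    call: HashMap<u32, Vec<u8>>,
}

impl Proposals {
    /// Cleanup reads only the meta records to find expired proposals,
    /// then removes both entries. The big call value is dropped
    /// without ever being decoded.
    fn cleanup_expired(&mut self, now: u32) -> Vec<(u64, u128)> {
        let expired: Vec<u32> = self
            .meta
            .iter()
            .filter(|(_, m)| m.expiry <= now)
            .map(|(id, _)| *id)
            .collect();
        let mut refunds = Vec::new();
        for id in expired {
            if let Some(m) = self.meta.remove(&id) {
                refunds.push((m.proposer, m.deposit));
            }
            self.call.remove(&id);
        }
        refunds
    }
}

fn main() {
    let mut p = Proposals { meta: HashMap::new(), call: HashMap::new() };
    p.meta.insert(1, ProposalMeta { proposer: 7, expiry: 10, deposit: 100 });
    p.call.insert(1, vec![0u8; 10_000]);
    p.meta.insert(2, ProposalMeta { proposer: 8, expiry: 99, deposit: 100 });
    p.call.insert(2, vec![0u8; 10_000]);

    // At block 50 only proposal 1 has expired.
    let refunds = p.cleanup_expired(50);
    assert_eq!(refunds, vec![(7, 100)]);
    assert!(p.meta.contains_key(&2) && p.call.contains_key(&2));
}
```

In an actual FRAME pallet these would be two `StorageMap`s keyed by the same proposal id, so cleanup's weight depends only on the (small, bounded) meta size.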
Proposal meta is a really good optimization @ethan-crypto. Basically we don't have to decode the call just to find the expiry date. There are many ways to do this.
Finally, we decided to simplify the scope again, so benchmarks and weights are less sophisticated now.
Here's Claude 4.6's review of PR #365: High-Security Integration for Multisig Pallet.

**Summary**

This PR introduces a unified `HighSecurityInspector` abstraction for the Multisig pallet.

**Architecture & Design (Looks Good)**

The trait-based approach is clean.
**Issues Found**

**1. Duplicate Call Size Check (Minor, DRY Violation)**

The diff adds an early size check:

```diff
+ let call_size = call.len() as u32;
+ ensure!(call_size <= T::MaxCallSize::get(), Error::<T>::CallTooLarge);
```

But the original check at line 545 of … remains, so the call is now validated twice. The early check is correctly motivated (prevent decoding of oversized payloads), but the original should be removed.

**2. Weight Anomaly**
| # | Severity | Issue |
|---|---|---|
| 1 | Low | Remove duplicate call size check |
| 2 | Medium | Re-run benchmarks — `propose` base weight looks anomalous |
| 3 | Low | Verify `c` scaling for the high-security decode path |
| 4 | Low | Move `MockHighSecurity` to `mock.rs` |
| 5 | Nit | Trim verbose comments |
Item #2 is the most impactful since it directly affects user fees. The rest are cleanup. Overall a solid PR — the unified `HighSecurityInspector` abstraction is a welcome improvement.
Gemini PR Review: High-Security Integration for Multisig Pallet

**Summary**

This PR integrates High-Security (HS) mode into the Multisig pallet via a new `HighSecurityInspector` trait.

**Critical Issues (Must Fix)**

**1. Weight Anomaly (Performance/Economics)**

Severity: High

The HS path does more work (decode + whitelist), so it should be more expensive. The 10x lower weight for HS suggests a flawed benchmarking environment or a regression. This must be re-benchmarked to avoid undercharging HS calls or overcharging standard ones.

**2. Unused …**
Overall I really like the new design introducing the `execute` method and confining all cleanup operations to extrinsic calls. However, there are still some places where underweighting can occur:
- `cancel`, `remove_expired` and `claim_deposits` are all benchmarked with a small fixed call size. This will inevitably lead to underweighting. We should parameterize the `cancel` weight by stored call length, as we do with the other methods that touch proposal storage. For `claim_deposits` we would need to compute the average `call` data size over the set of proposals in storage.
- Even though the `approve_dissolve` weight function prices it as constant, it has two different execution paths: threshold reached and not reached. We should adopt a pattern that accounts for this, like we did with `propose`: (1) charge the worst case up front, then (2) refund to the cheaper path via `actual_weight`, and (3) benchmark both paths.
- Even though the `approvals` field is much cheaper to decode than `call`, its decode cost still scales with the approvals length. The benchmarks currently do not parameterize the approvals length and don't cover the worst case (100 approvals), so this could lead to minor underweighting unless we fix approvals at 100 or parameterize it.
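The charge-then-refund pattern from the second bullet can be sketched without Substrate types. Here weight is a plain `u64` and the two constants stand in for the benchmarked weights of each path; in the pallet, the refund would be reported via `actual_weight` in the dispatch result rather than returned directly.

```rust
// Assumed, illustrative weights for the two execution paths of an
// approve-style extrinsic. In the real pallet these come from the
// benchmarked weight functions.
const WEIGHT_THRESHOLD_REACHED: u64 = 90_000; // worst case: executes the call
const WEIGHT_THRESHOLD_NOT_REACHED: u64 = 20_000; // cheap: just records approval

/// Charge the worst case up front, then report the actual weight of
/// the path taken so the difference is refunded to the caller.
/// Returns (pre-dispatch charge, refund).
fn approve_dissolve(threshold_reached: bool) -> (u64, u64) {
    let charged = WEIGHT_THRESHOLD_REACHED;
    let actual = if threshold_reached {
        WEIGHT_THRESHOLD_REACHED
    } else {
        WEIGHT_THRESHOLD_NOT_REACHED
    };
    (charged, charged - actual)
}

fn main() {
    // Threshold reached: worst case was the real cost, nothing refunded.
    assert_eq!(approve_dissolve(true), (90_000, 0));
    // Threshold not reached: the cheap path runs, the difference comes back.
    assert_eq!(approve_dissolve(false), (90_000, 70_000));
}
```

Benchmarking both paths keeps the refund honest: the "not reached" benchmark pins the cheap path's weight, and the "reached" benchmark pins the up-front charge.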