feat: add split_k support for block scale gemm bquant mode. #3653
base: develop
Conversation
Pull request overview
This PR implements split-K support for BQuantGrouped quantized GEMM operations, enabling parallel processing of the K dimension across multiple workgroups. The implementation handles packed data types (fp8i4, bf8i4) and enforces alignment constraints to ensure correct quantization scale application.
Changes:
- Adds split-K offset calculation for packed data types with proper byte-boundary alignment
- Implements BQ (quantization scale) pointer offsetting for split-K batches
- Adds validation to ensure split-K batch sizes align with quantization groups and packed element boundaries
- Updates example configurations from Prefill (128x128 tiles) to Decode (16x64 tiles) for better split-K performance
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| include/ck_tile/ops/gemm_quant/kernel/gemm_quant_kernel.hpp | Core kernel changes: split-K offset calculation, BQ pointer offsetting, tensor view creation, and validation logic |
| example/ck_tile/38_block_scale_gemm/run_gemm_quant_example.inc | Removes premature split-K rejection, delegates validation to kernel |
| example/ck_tile/38_block_scale_gemm/gemm_bquant_quantgrouped_*.cpp | Changes config from Prefill to Decode for fp8, bf8, fp8i4, bf8i4 variants |
```cpp
// Split-K validation is handled by Kernel::IsSupportedArgument
// Split-K is only supported for BQuantGrouped without preshuffle
```
Copilot AI, Jan 26, 2026
The PR adds split-K support for BQuantGrouped mode but does not add any tests that exercise this functionality with k_batch > 1. The existing tests in test/ck_tile/gemm_block_scale only test with k_batch=1 (the default). Tests should be added to verify split-K correctness for various configurations, especially with packed types (pk_int4_t) and different quantization group sizes to ensure the alignment constraints work correctly.
@AviralGoelAMD LGTM, please solve the CI
This PR implements split-K support for BQuantGrouped mode across all data types (fp8, bf8, fp8i4, bf8i4), with proper handling of packed data types.
Key Changes
1. Split-K Offset Calculation for Packed Types
- `(KRead / BPackedSize)` bytes for int4 types

2. Validation Constraints
- `KRead % BPackedSize == 0` to ensure split-K batches align on byte boundaries
- `KRead % QuantGroupSize::kK == 0` for proper scale application

ASCII Visualization: pk_int4_t Split-K Alignment
Checklist
Please put an `x` into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.
- [ ] Run `clang-format` on all changed files

Discussion
If this is a relatively large or complex change, feel free to start a discussion by explaining why you chose the solution you did and what alternatives you considered.