
[Hexagon] Optimize Softmax with DMA and VTCM#49

Open
max-krasnyansky wants to merge 7 commits into master from hexagon-softmax-dma-9331496204151167248

Conversation

@max-krasnyansky
Copy link
Owner

This change refactors the Hexagon backend's Softmax implementation to make full use of the DMA engine and the VTCM scratchpad. Staging rows in VTCM aligns all memory accesses, so the optimized HVX vector kernels (hvx_fast_softmax_prep_f32, hvx_fast_softmax_f32) can be used exclusively, eliminating the scalar fallback paths and the unaligned-memory handling. The implementation uses a double-buffering scheme, similar to the other activation operations (act-ops.c), to overlap computation with memory transfers, and it correctly handles optional mask (src1) broadcasting and ALiBi slope calculations within the DMA pipeline.


PR created automatically by Jules for task 9331496204151167248 started by @max-krasnyansky

max-krasnyansky and others added 6 commits February 16, 2026 13:57
Improves performance slightly by precomputing values and caching them in the context.
- Replace scalar implementation with optimized HVX kernels and DMA pipeline
- Add support for double-buffered DMA for src0, src1, and dst
- Implement dynamic block sizing to handle row boundaries and ALiBi slope logic
- Update `htp_softmax_context` to support new DMA fields
- Remove unused legacy code

Co-authored-by: max-krasnyansky <1380796+max-krasnyansky@users.noreply.github.com>
@google-labs-jules
Copy link
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@github-actions github-actions bot added the ggml label Feb 17, 2026