fix: handle None padding_to in get_padding_to() for fused attention #8002

Merged
Jintao-Huang merged 1 commit into modelscope:main from Mr-Neutr0n:fix-padding-to-none-comparison on Feb 7, 2026

Conversation

@Mr-Neutr0n (Contributor)

Summary

Fixed a TypeError ('>' not supported between instances of 'int' and 'NoneType') in get_padding_to() when using fused attention with tp=1.

Problem

When using Megatron training with fused attention (attention_backend='fused') and tensor_model_parallel_size=1, the function get_padding_to() raises a TypeError:

padding_to = max(padding_to, ((origin_padding_to) or 1) * 64)
# TypeError: '>' not supported between instances of 'int' and 'NoneType'
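
This matches standard CPython behavior: max() compares candidates with >, so when the first argument is None the comparison int > None fails with exactly this message. A minimal reproduction (illustrative, outside the training stack):

max(None, 64)
# TypeError: '>' not supported between instances of 'int' and 'NoneType'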

This occurs because padding_to can remain None when all of the following hold (see the sketch after the list):

  • tensor_model_parallel_size == 1 or sequence_parallel is False
  • context_parallel_size <= 1
  • fp8_recipe != 'blockwise' and fp8_format is None
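
For intuition, here is a simplified, hypothetical skeleton of the control flow. Branch bodies are elided, and the argument and field names are taken from this description rather than from the actual source in swift/megatron/utils/utils.py:

def get_padding_to(args, origin_padding_to):
    padding_to = None  # stays None unless one of the branches below assigns it
    if args.tensor_model_parallel_size > 1 and args.sequence_parallel:
        padding_to = ...  # hypothetical: align to the sequence-parallel split
    if args.context_parallel_size > 1:
        padding_to = ...  # hypothetical: align to the context-parallel split
    if args.fp8_recipe == 'blockwise' or args.fp8_format is not None:
        padding_to = ...  # hypothetical: align for fp8 kernels
    if args.attention_backend == 'fused':
        # with tp=1 and the other branches skipped, padding_to is still
        # None here, so this max() call raises the TypeError
        padding_to = max(padding_to, (origin_padding_to or 1) * 64)
    return padding_to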

Solution

Applied the same 'or 1' pattern used elsewhere in the function:

# Before
padding_to = max(padding_to, ((origin_padding_to) or 1) * 64)

# After
padding_to = max(padding_to or 1, ((origin_padding_to) or 1) * 64)
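
A quick sanity check of the guarded expression in plain Python (origin_padding_to is taken as None here for illustration): when padding_to is None the left operand falls back to 1, and a real value passes through unchanged:

padding_to = None
max(padding_to or 1, (None or 1) * 64)   # -> 64, no TypeError
padding_to = 128
max(padding_to or 1, (None or 1) * 64)   # -> 128, existing values are preserved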

Fixes #8000

When using fused attention with tp=1, padding_to can be None, causing:
TypeError: '>' not supported between instances of 'int' and 'NoneType'

The fix uses 'padding_to or 1' to handle the None case, consistent
with the pattern used elsewhere in the same function.

Fixes modelscope#8000
@gemini-code-assist (Contributor)

Summary of Changes

Hello @Mr-Neutr0n, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a runtime error in the get_padding_to() utility function. The error, a TypeError related to comparing int and NoneType, arose under specific conditions involving fused attention and tensor model parallelism. The fix ensures robust handling of potential None values in padding calculations, preventing crashes and improving the stability of the system.

Highlights

  • Bug Fix: TypeError in get_padding_to(): Resolved a TypeError that occurred when padding_to was None in the get_padding_to() function, specifically when using fused attention with tensor_model_parallel_size=1. The error arose from attempting to compare an integer with NoneType.


Changelog
  • swift/megatron/utils/utils.py
    • Modified the get_padding_to function to correctly handle cases where padding_to might be None when args.attention_backend == 'fused', by applying an or 1 operation to padding_to before using it in a max comparison.

gemini-code-assist bot left a comment

Code Review

This pull request addresses a TypeError in get_padding_to() that occurs when using fused attention with a tensor model parallel size of 1. The issue arises because padding_to can be None, leading to an invalid comparison in max(). The fix correctly handles the None case by using padding_to or 1, which is consistent with how this variable is handled elsewhere in the function. The change is correct and effectively resolves the bug.

Jintao-Huang merged commit b7ff53b into modelscope:main on Feb 7, 2026
2 of 3 checks passed