
Conversation


@nick-y-snyk commented Jan 12, 2026

Summary

Refactors LanguageServer notification handlers to use a NotificationQueue for sequential async processing, eliminating race conditions and enabling proper async/await handling.

Changes

NotificationQueue (new)

  • Implements sequential FIFO processing of async operations
  • Uses promise chaining to ensure notifications are processed in order (see the sketch after this list)
  • Handles errors gracefully without stopping the queue
  • Supports graceful shutdown in LanguageServer.stop()
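
For reference, a minimal sketch of the promise-chaining idea. This is not the exact implementation in this PR; the `QueueItem` shape matches the enqueue signature shown in the review below, but the internal field names and the `drain()` method name are illustrative:

```typescript
interface QueueItem<T> {
  id: string;
  handler: () => Promise<T>;
}

class NotificationQueue {
  // Tail of the promise chain; every new item runs after the previous one settles.
  private tail: Promise<unknown> = Promise.resolve();
  private stopped = false;

  enqueue<T>(item: QueueItem<T>): void {
    if (this.stopped) return;
    this.tail = this.tail
      .then(() => item.handler())
      // Swallow the error so one failing handler does not break the chain for later items.
      .catch(err => console.error(`Notification ${item.id} failed`, err));
  }

  // Resolves once everything queued so far has finished; used during shutdown.
  async drain(): Promise<void> {
    this.stopped = true;
    await this.tail;
  }
}
```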

LanguageServer

  • Wraps SNYK_HAS_AUTHENTICATED, SNYK_FOLDERCONFIG, and SNYK_ADD_TRUSTED_FOLDERS notifications with NotificationQueue
  • Makes handleOrgSettingsFromFolderConfigs async and processes folder configs sequentially with a for loop instead of forEach
  • Replaces .then()/.catch() chains with proper async/await
  • Removes TODOs about awaiting calls when LS client supports async handlers

scopeDetectionService

  • Renames isSettingsWithDefaultValue to shouldSkipSettingUpdate for clarity
  • Adds a check to skip updates when the new value equals the current effective value, preventing redundant writes (sketched below)
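
As an illustration of the new check (the function shape and the exact configuration lookup are assumptions; the real logic lives in scopeDetectionService):

```typescript
import * as vscode from 'vscode';

// Hypothetical sketch: skip a write when it would not change the effective configuration.
function shouldSkipSettingUpdate(section: string, key: string, newValue: unknown): boolean {
  const config = vscode.workspace.getConfiguration(section);
  const inspected = config.inspect(key);

  // New value already equals the current effective value -> redundant write.
  if (JSON.stringify(newValue) === JSON.stringify(config.get(key))) {
    return true;
  }

  // New value is the default and the setting was never explicitly set -> nothing to do.
  const explicitlySet =
    inspected?.globalValue !== undefined ||
    inspected?.workspaceValue !== undefined ||
    inspected?.workspaceFolderValue !== undefined;
  return !explicitlySet && JSON.stringify(newValue) === JSON.stringify(inspected?.defaultValue);
}
```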

Tests

  • Adds comprehensive NotificationQueue tests (288 lines)
  • Updates LanguageServer tests to await async handleOrgSettingsFromFolderConfigs calls
  • Adds missing outputChannel mocks and restores sinon.verify() calls

Benefits

  • Race condition prevention: Sequential processing ensures notifications don't interfere with each other
  • Proper async handling: Can now await async operations in notification handlers
  • Better error handling: Errors in one notification don't prevent others from processing
  • Graceful shutdown: Queue drains before LS stops (see the snippet below)
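
A hypothetical shutdown path, building on the queue sketch above (the `drain()` name and the client's `stop()` signature are assumptions, not the actual LanguageServer code):

```typescript
async function stopLanguageServer(
  queue: NotificationQueue,
  languageClient: { stop(): Promise<void> },
): Promise<void> {
  await queue.drain(); // let in-flight notifications finish first
  await languageClient.stop(); // then stop the language client
}
```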

Test Plan

  • Unit tests pass (NotificationQueue + LanguageServer)
  • Integration tests pass
  • Manual testing of folder config notifications
  • Manual testing of authentication flow

@nick-y-snyk nick-y-snyk requested review from a team as code owners January 12, 2026 18:55

snyk-io bot commented Jan 12, 2026

Snyk checks have passed. No issues have been found so far.

| Status | Scanner | Critical | High | Medium | Low | Total (0) |
| --- | --- | --- | --- | --- | --- | --- |
| ✅ | Open Source Security | 0 | 0 | 0 | 0 | 0 issues |
| ✅ | Licenses | 0 | 0 | 0 | 0 | 0 issues |
| ✅ | Code Security | 0 | 0 | 0 | 0 | 0 issues |

💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.

@gemini-code-assist

Summary of Changes

Hello @nick-y-snyk, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug where rapid interaction with the auto organization setting in the HTML settings UI could lead to an infinite loop of configuration updates. This was due to a race condition between different components attempting to write to the same configuration simultaneously. The fix introduces a robust asynchronous processing queue and refined logic for configuration updates to prevent these circular dependencies and ensure stable behavior, significantly improving the reliability of configuration management.

Highlights

  • Infinite Loop Fix: Resolved an infinite loop issue occurring when rapidly toggling the auto organization checkbox in the HTML settings page, caused by circular updates between the HTML autosave and ConfigurationWatcher.
  • Asynchronous Notification Queue: Introduced a new NotificationQueue class to process asynchronous Language Server notification handlers sequentially, preventing race conditions and ensuring stable configuration updates.
  • Sequential Folder Configuration Processing: Refactored handleOrgSettingsFromFolderConfigs to be asynchronous and process folder configurations one by one, eliminating potential race conditions during settings application.
  • Optimized Setting Updates: Enhanced scopeDetectionService.shouldSkipSettingUpdate to prevent unnecessary configuration writes by checking if the new value is identical to the current effective value or if it's the default and not explicitly set.
  • Comprehensive Testing: Added extensive unit tests for the new NotificationQueue to validate its sequential processing, error handling, and graceful shutdown capabilities.


@gemini-code-assist left a comment

Code Review

This pull request introduces a robust solution to prevent an infinite loop when toggling settings. The core of the fix is the new NotificationQueue, which serializes asynchronous operations coming from the language server, preventing race conditions. The implementation of the queue is well-designed and comes with a comprehensive set of unit tests. Additionally, the logic for updating settings has been improved to avoid redundant writes, further contributing to stability. The related refactoring from promise-based chains to async/await improves code readability and maintainability. I have a couple of suggestions to further improve the code quality, but overall this is a great set of changes.

```typescript
  .updateTokenAndEndpoint(token, apiUrl)
  .then(() => {
    void this.notificationQueue.enqueue({
      id: `authenticated-${Date.now()}`,
```

Severity: medium

Using Date.now() for unique IDs is not fully reliable, as multiple notifications could arrive within the same millisecond, leading to duplicate IDs. This can make tracing and debugging difficult. To improve uniqueness, consider appending a random number. A similar change should be applied to other enqueue calls in this file (lines 252 and 270).

Suggested change

```diff
-      id: `authenticated-${Date.now()}`,
+      id: `authenticated-${Date.now()}-${Math.random()}`,
```

```typescript
   * Enqueues an item for processing. Items are processed sequentially in FIFO order.
   * Uses a promise chain to ensure sequential processing without explicit locks or void.
   */
  async enqueue<T>(item: QueueItem<T>): Promise<void> {
```

Severity: medium

The enqueue method is marked as async but doesn't use await, which can be slightly misleading as it resolves immediately. Since the method's logic is synchronous, you can remove the async keyword and change the return type to void to more accurately reflect its behavior. Note that this would require removing the await from calls to this method in the corresponding test file.

Suggested change

```diff
-  async enqueue<T>(item: QueueItem<T>): Promise<void> {
+  enqueue<T>(item: QueueItem<T>): void {
```

@nick-y-snyk changed the title from "fix: prevent infinite loop when toggling auto org checkbox [IDE-1661]" to "refactor: use NotificationQueue for async LS notification handlers [IDE-1661]" on Jan 12, 2026
When rapidly toggling the auto organization checkbox in the HTML settings page,
an infinite loop of settings.json updates occurred. This was caused by both the
HTML autosave and ConfigurationWatcher writing to FOLDER_CONFIGS simultaneously.

Changes:
- Add flag-based prevention in ConfigurationWatcher to skip processing when HTML
  settings page is updating configuration
- Refactor LanguageServer notification handlers to use NotificationQueue for
  sequential async processing, preventing race conditions
- Make handleOrgSettingsFromFolderConfigs async and process folder configs
  sequentially
- Update scopeDetectionService to check if values changed before writing
- Add comprehensive tests for NotificationQueue

The flag approach cleanly separates HTML-initiated updates from VS Code UI
updates, preventing circular writes while maintaining normal functionality.
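
A minimal sketch of the flag-guard pattern described above (member and method names are illustrative, not the actual ConfigurationWatcher API):

```typescript
import * as vscode from 'vscode';

class ConfigurationWatcher {
  private htmlSettingsUpdateInProgress = false;

  // Called by the HTML settings page around its write to FOLDER_CONFIGS.
  async runHtmlSettingsUpdate(update: () => Promise<void>): Promise<void> {
    this.htmlSettingsUpdateInProgress = true;
    try {
      await update();
    } finally {
      this.htmlSettingsUpdateInProgress = false;
    }
  }

  // Registered via vscode.workspace.onDidChangeConfiguration.
  private onDidChangeConfiguration(_event: vscode.ConfigurationChangeEvent): void {
    // Skip processing while the HTML settings page is writing configuration,
    // so the watcher does not echo the change back and restart the loop.
    if (this.htmlSettingsUpdateInProgress) {
      return;
    }
    // ...normal handling of VS Code UI-initiated changes...
  }
}
```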