Conversation

pan-x-c (Collaborator) commented Feb 11, 2026

Description

As the title says

Checklist

Please check the following items before the code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @pan-x-c, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue with logger handling in Ray workflows by introducing a mechanism for process-global logger reuse within Ray actors. This change ensures that logging configurations, particularly log levels, are consistently applied across different concurrent workflow runners. The modifications are thoroughly validated with new tests that confirm the correct behavior of log message filtering and file output.

Highlights

  • Logger Reuse in Ray Actors: Implemented a process-global logger (_ray_logger) in trinity/utils/log.py so that Ray actors reuse a single logger instance within the same process, preventing redundant logger creation and keeping logging behavior consistent (see the sketch after this list).
  • Enhanced Workflow Logging Tests: Updated tests/explorer/workflow_test.py to include comprehensive tests for the logging mechanism in concurrent workflow runners. These tests now verify that log messages are correctly filtered based on configured log levels and written to designated log files.
  • Chat Template Refactoring: Refactored the CHAT_TEMPLATE definition by moving it from tests/common/vllm_test.py to tests/tools.py to centralize common test utilities.
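
The logger-reuse highlight can be sketched roughly as follows. This is a minimal sketch, assuming a simple stream handler and a hypothetical TRINITY_LOG_LEVEL environment variable; the actual configuration in trinity/utils/log.py may differ.

```python
# Minimal sketch of process-global logger reuse; handler setup and the
# TRINITY_LOG_LEVEL variable are assumptions, not the actual repository code.
import logging
import os
from typing import Optional

_ray_logger: Optional[logging.Logger] = None  # one shared logger per actor process


def get_logger(name: str = "trinity", level: Optional[str] = None) -> logging.Logger:
    """Return the process-global logger, creating and caching it on first use."""
    global _ray_logger
    if _ray_logger is not None:
        # Later calls inside the same Ray actor process reuse the cached logger,
        # so its level and handlers stay consistent across workflow runs.
        return _ray_logger

    logger = logging.getLogger(name)
    logger.setLevel(level or os.environ.get("TRINITY_LOG_LEVEL", "INFO"))
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    logger.addHandler(handler)

    _ray_logger = logger
    return _ray_logger
```
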
Changelog
  • tests/common/vllm_test.py
    • Updated imports to reflect the new location of CHAT_TEMPLATE.
    • Removed the CHAT_TEMPLATE definition from this file.
  • tests/explorer/workflow_test.py
    • Added imports for os and shutil for file system operations.
    • Imported LOG_DIR_ENV_VAR and LOG_LEVEL_ENV_VAR for configuring logging environment variables.
    • Refactored TestConcurrentWorkflowRunner to include a setUp method for initializing configuration and creating log directories.
    • Modified WorkflowRunner instantiation to use ray.remote and configure runtime environment variables for the log directory and level (see the sketch after this changelog).
    • Added debug, info, and warning log messages within ConcurrentTestWorkflow to be captured by the new logging setup.
    • Included new assertions to verify the presence and count of log messages in generated log files, ensuring correct log level filtering.
    • Implemented a tearDown method to clean up created log directories after tests complete.
  • tests/tools.py
    • Moved the CHAT_TEMPLATE definition from tests/common/vllm_test.py to this file, centralizing it for reuse.
  • trinity/utils/log.py
    • Introduced _ray_logger, a process-global logger variable, to ensure logger instances are reused across Ray actor processes.
    • Modified get_logger function to first check for an existing process-global logger before creating a new one, improving efficiency and consistency.
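
The test wiring described in the WorkflowRunner changelog item (ray.remote plus runtime-environment variables for the log directory and level) could look roughly like the sketch below. The environment-variable names, the toy WorkflowRunner, and the paths are illustrative assumptions rather than the repository's actual code.

```python
import logging
import os

import ray

# Assumed environment-variable names; the repository's LOG_DIR_ENV_VAR and
# LOG_LEVEL_ENV_VAR constants may hold different values.
LOG_DIR_ENV_VAR = "TRINITY_LOG_DIR"
LOG_LEVEL_ENV_VAR = "TRINITY_LOG_LEVEL"


@ray.remote
class WorkflowRunner:
    """Toy runner that stands in for the real WorkflowRunner actor."""

    def __init__(self) -> None:
        log_dir = os.environ.get(LOG_DIR_ENV_VAR, "/tmp")
        level = os.environ.get(LOG_LEVEL_ENV_VAR, "INFO")
        os.makedirs(log_dir, exist_ok=True)
        self.logger = logging.getLogger("workflow_runner")
        self.logger.setLevel(level)
        self.logger.addHandler(
            logging.FileHandler(os.path.join(log_dir, "runner.log"))
        )

    def run(self) -> None:
        self.logger.debug("debug message")    # dropped when the level is INFO
        self.logger.info("info message")
        self.logger.warning("warning message")


if __name__ == "__main__":
    ray.init()
    # Inject the log directory and level into the actor's environment.
    runner = WorkflowRunner.options(
        runtime_env={
            "env_vars": {
                LOG_DIR_ENV_VAR: "/tmp/trinity_test_logs",
                LOG_LEVEL_ENV_VAR: "INFO",
            }
        }
    ).remote()
    ray.get(runner.run.remote())
```
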
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request addresses an issue with logger instantiation in Ray workflows by introducing a process-global cache for the logger in trinity/utils/log.py. This ensures a single logger instance is reused within a Ray actor process. The changes are accompanied by a comprehensive test in tests/explorer/workflow_test.py that validates the logging behavior at different levels for remote actors. Additionally, the CHAT_TEMPLATE constant has been refactored from a test file into the shared tests/tools.py module for better organization.

My review focuses on improving the new test code. I've suggested correcting a typo in the log messages and using unittest's assertion methods for better test failure diagnostics. The core logic for the logger fix appears sound.
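
As a concrete illustration of the assertion suggestion: unittest's helpers such as assertEqual report both the expected and the actual values when they fail, which a bare assert does not. The log lines and counts below are placeholders, not the PR's actual test code.

```python
import unittest


class LogFileAssertions(unittest.TestCase):
    """Placeholder test illustrating unittest assertion helpers."""

    def test_log_level_filtering(self):
        # Pretend these lines were read from the runner's log file.
        lines = [
            "2026-02-11 10:00:00 INFO workflow_runner: info message",
            "2026-02-11 10:00:00 WARNING workflow_runner: warning message",
        ]
        info_count = sum("info message" in line for line in lines)
        debug_count = sum("debug message" in line for line in lines)
        # Unlike `assert info_count == 1`, these print both values on failure.
        self.assertEqual(info_count, 1)
        self.assertEqual(debug_count, 0)


if __name__ == "__main__":
    unittest.main()
```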

pan-x-c (Collaborator, Author) commented Feb 11, 2026

/unittest-module-explorer

@github-actions commented

Summary

Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️
49 | 48 | 1 | 0 | 0 | 0 | 13m 14s

Failed Tests

Failed Test ❌ | Fail Message
tests/explorer/workflow_test.py::TestConcurrentWorkflowRunner::test_concurrent_workflow_runner | The test failed in the call phase due to an assertion error

Tests

Test Name Status Flaky Duration
tests/explorer/explorer_test.py::TestExplorerCountdownEval::test_explorer 1m 52s
tests/explorer/explorer_test.py::TestExplorerEvalDetailedStats::test_explorer 1m 21s
tests/explorer/explorer_test.py::TestExplorerGSM8KRULERNoEval::test_explorer 56.0s
tests/explorer/explorer_test.py::TestExplorerGSM8k::test_explorer 3m
tests/explorer/explorer_test.py::ServeTest::test_serve 1m 1s
tests/explorer/proxy_test.py::RecorderTest::test_recorder 62ms
tests/explorer/scheduler_test.py::SchedulerTest::test_async_workflow 5.1s
tests/explorer/scheduler_test.py::SchedulerTest::test_concurrent_operations 4.6s
tests/explorer/scheduler_test.py::SchedulerTest::test_dynamic_timeout 12.7s
tests/explorer/scheduler_test.py::SchedulerTest::test_get_results 20.3s
tests/explorer/scheduler_test.py::SchedulerTest::test_metric_calculation_with_non_repeatable_workflow_0 4.6s
tests/explorer/scheduler_test.py::SchedulerTest::test_metric_calculation_with_non_repeatable_workflow_1 4.6s
tests/explorer/scheduler_test.py::SchedulerTest::test_metric_calculation_with_repeatable_workflow_0 4.5s
tests/explorer/scheduler_test.py::SchedulerTest::test_metric_calculation_with_repeatable_workflow_1 4.9s
tests/explorer/scheduler_test.py::SchedulerTest::test_multi_step_execution 5.2s
tests/explorer/scheduler_test.py::SchedulerTest::test_non_repeatable_workflow 4.6s
tests/explorer/scheduler_test.py::SchedulerTest::test_over_rollout_min_wait 8.5s
tests/explorer/scheduler_test.py::SchedulerTest::test_scheduler_all_methods 14.7s
tests/explorer/scheduler_test.py::SchedulerTest::test_scheduler_restart_after_stop 8.6s
tests/explorer/scheduler_test.py::SchedulerTest::test_split_tasks 7.7s
tests/explorer/scheduler_test.py::SchedulerTest::test_stepwise_experience_eid 24.9s
tests/explorer/scheduler_test.py::SchedulerTest::test_wait_all 7.8s
tests/explorer/scheduler_test.py::SchedulerTest::test_wait_all_timeout_with_multi_batch 13.2s
tests/explorer/scheduler_test.py::TestRunnerStateCollection::test_runner_state_collection 9.8s
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_reward_propagation_workflow_0 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_reward_propagation_workflow_1 604ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_step_wise_reward_workflow_0 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_step_wise_reward_workflow_1 1.0s
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_workflows_raise_error 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_workflows_stop_at_max_env_steps 1.0s
tests/explorer/workflow_test.py::WorkflowTest::test_gsm8k_workflow 26ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_boxed_workflow 16ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_complex_workflow 446ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_eval_workflow 4ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_fraction_workflow 11ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_workflow 7ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_repeatable_0 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_repeatable_1 101ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_resettable_0 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_resettable_1 201ms
tests/explorer/workflow_test.py::MultiTurnWorkflowTest_0::test_multi_turn_workflow 23.2s
tests/explorer/workflow_test.py::MultiTurnWorkflowTest_1::test_multi_turn_workflow 23.4s
tests/explorer/workflow_test.py::TestWorkflowStateRecording::test_workflow_state_recording 4.0s
tests/explorer/workflow_test.py::TestAgentScopeWorkflowAdapter::test_adapter_v0 774ms
tests/explorer/workflow_test.py::TestAgentScopeWorkflowAdapter::test_adapter_v1 14ms
tests/explorer/workflow_test.py::TestWorkflowRunner::test_workflow_runner 139ms
tests/explorer/workflow_test.py::TestWorkflowRunner::test_workflow_runner_get_state 8.1s
tests/explorer/workflow_test.py::TestWorkflowRunner::test_workflow_with_openai 26.8s
tests/explorer/workflow_test.py::TestConcurrentWorkflowRunner::test_concurrent_workflow_runner 43.6s

Github Test Reporter by CTRF 💚

pan-x-c (Collaborator, Author) commented Feb 11, 2026

/unittest-pattern-TestConcurrentWorkflowRunner

@github-actions commented

Summary

Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️
1 | 1 | 0 | 0 | 0 | 0 | 59.1s

Tests

Test Name Status Flaky Duration
tests/explorer/workflow_test.py::TestConcurrentWorkflowRunner::test_concurrent_workflow_runner 51.6s

Github Test Reporter by CTRF 💚

pan-x-c (Collaborator, Author) commented Feb 11, 2026

/unittest-module-cli

@github-actions commented

Summary

Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️
5 | 4 | 1 | 0 | 0 | 0 | 1m 14s

Failed Tests

Failed Test ❌ | Fail Message
tests/cli/launcher_test.py::TestLauncherMain::test_debug_mode | The test failed in the call phase due to an assertion error

Tests

Test Name Status Flaky Duration
tests/cli/launcher_test.py::TestLauncherMain::test_debug_mode 50.5s
tests/cli/launcher_test.py::TestLauncherMain::test_main_run_command 6.2s
tests/cli/launcher_test.py::TestLauncherMain::test_main_run_in_dlc 1.6s
tests/cli/launcher_test.py::TestLauncherMain::test_main_studio_command 1.1s
tests/cli/launcher_test.py::TestLauncherMain::test_multi_stage_run 14.2s

Github Test Reporter by CTRF 💚

pan-x-c (Collaborator, Author) commented Feb 11, 2026

/unittest-pattern-test_debug_mode

@github-actions commented

Summary

Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️
1 | 1 | 0 | 0 | 0 | 0 | 54.8s

Tests

Test Name Status Flaky Duration
tests/cli/launcher_test.py::TestLauncherMain::test_debug_mode 47.3s

Github Test Reporter by CTRF 💚

hiyuchang (Collaborator) commented

/unittest-module-utils

@github-actions commented

Summary

Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️
28 | 27 | 0 | 1 | 0 | 0 | 42.5s

Skipped

Test | Status
tests/utils/swanlab_test.py::TestSwanlabMonitor::test_swanlab_monitor_smoke | skipped ⏭️

Tests

Test Name Status Flaky Duration
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_equivalent 11ms
tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_ground_truth 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_empty_solution_string 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_multiple_boxed_answers_in_solution 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_not_equivalent 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_not_boxed 1ms
tests/utils/eval_utils_test.py::TestComputeScore::test_solution_raw_and_ground_truth_boxed_equivalent 1ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_extract_answer 4ms
tests/utils/eval_utils_test.py::TestMathEvalUtils::test_verify_math_answer 72ms
tests/utils/eval_utils_test.py::TestEvalUtils::test_is_equiv 4ms
tests/utils/log_test.py::LogTest::test_actor_log 5.8s
tests/utils/log_test.py::LogTest::test_group_by_node 2.0s
tests/utils/log_test.py::LogTest::test_no_actor_log 823ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_0__workspace_tests_utils_plugins 310ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins 305ms
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_0__workspace_tests_utils_plugins 8.8s
tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_1_tests_utils_plugins 8.9s
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins 5.0s
tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins 5.1s
tests/utils/registry_test.py::TestRegistryWithRay::test_dynamic_import 2.1s
tests/utils/registry_test.py::TestRegistry::test_algorithm_registry_mapping 25ms
tests/utils/registry_test.py::TestRegistry::test_buffer_module_registry_mapping 19ms
tests/utils/registry_test.py::TestRegistry::test_common_module_registry_mapping 229ms
tests/utils/registry_test.py::TestRegistry::test_register_module 1ms
tests/utils/registry_test.py::TestRegistry::test_utils_module_registry_mapping 1ms
tests/utils/swanlab_test.py::TestSwanlabMonitor::test_swanlab_monitor_smoke ⏭️ 1ms

Github Test Reporter by CTRF 💚

hiyuchang merged commit 2fbdab7 into agentscope-ai:main on Feb 11, 2026
1 check passed