feat: add streaming tool activity messages during task execution #999
base: main
Conversation
@a7m-1st @fengju0213, could you please review the changes and let me know your feedback? Thanks!
Force-pushed from d81f18c to 35f4ca2
The camel library throws `'AsyncChatCompletionStreamManager' object has no attribute 'choices'` when streaming is enabled for agents with tools attached.

Changes:
- Disable streaming for all `astep` calls in `ListenChatAgent` to avoid the error
- Preserve `model_config_dict` when cloning agents (for future streaming support)
- Add streaming infrastructure (`ActionStreamingAgentOutputData`) for when camel library support is available
- Handle `AsyncStreamingChatAgentResponse` in `single_agent_worker.py`

Note: Streaming text output for worker agents is blocked by an upstream camel library limitation. Task decomposition streaming still works via `decompose_text`.
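The "Handle `AsyncStreamingChatAgentResponse`" item above amounts to draining a response that may or may not be an async stream. A minimal sketch of that pattern, using a stand-in class since camel's actual `AsyncStreamingChatAgentResponse` API may differ:

```python
import asyncio

class FakeStreamingResponse:
    """Hypothetical stand-in for an async-iterable streaming response."""
    def __init__(self, chunks):
        self._chunks = chunks

    def __aiter__(self):
        return self._gen()

    async def _gen(self):
        # Yield pre-canned chunks, simulating tokens arriving over time.
        for chunk in self._chunks:
            yield chunk

async def resolve_response(response):
    """Return the final text whether the response is streaming or not."""
    if hasattr(response, "__aiter__"):
        # Streaming case: concatenate chunks as they arrive.
        parts = []
        async for chunk in response:
            parts.append(chunk)
        return "".join(parts)
    # Non-streaming case: the response already holds the full text.
    return response

print(asyncio.run(resolve_response(FakeStreamingResponse(["Hel", "lo"]))))  # Hello
print(asyncio.run(resolve_response("plain"))) # plain
```

The worker can then treat both response shapes uniformly downstream.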
Shows real-time `[Tool] Shell Exec...`, `[Tool] Write File...` etc. messages in the UI while tools are executing, giving users feedback on agent activity.

Changes:
- Add `ActionStreamingAgentOutputData` events in the `@listen_toolkit` decorator
- Add streaming events in the `_execute_tool` and `_aexecute_tool` methods
- Disable model streaming for worker agents (camel library limitation)
- Replace emoji with `[Tool]` text for better compatibility

Closes eigent-ai#87
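The decorator behavior described in this commit can be sketched as follows. This is an illustrative simplification, not the project's actual `@listen_toolkit` implementation; `emit` and the label scheme are assumptions:

```python
import functools

def listen_toolkit(emit):
    """Wrap a tool function so its start/finish is reported via emit()."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Derive a readable label, e.g. shell_exec -> "Shell Exec"
            label = func.__name__.replace("_", " ").title()
            emit(f"[Tool] {label}...")  # shown in the UI while the tool runs
            try:
                return func(*args, **kwargs)
            finally:
                emit(f"[Tool] {label} done")
        return wrapper
    return decorator

events = []

@listen_toolkit(events.append)
def shell_exec(cmd):
    return f"ran {cmd}"

shell_exec("ls")
print(events)  # ['[Tool] Shell Exec...', '[Tool] Shell Exec done']
```

In the PR the emitted payload is an `ActionStreamingAgentOutputData` event rather than a plain string, but the wrapping shape is the same.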
@a7m-1st I added some changes and updated the description. This PR adds real-time feedback while tasks are running. Before, you'd just see the task start and then wait until it finished; now you'll see tool activity messages as the agent works. I originally wanted to stream the model's actual thinking text, but that's blocked by an upstream camel limitation.
I see, awesome then. Thanks @MkDev11, I need some time as I'm a little occupied with other tasks. I will catch up with this PR once I get the chance.
Cool!
Hi there @Wendong-Fan, I think this would be a cool feature. Maybe Douglas can confirm whether the UI meets expectations?
@MkDev11 thanks for your contribution! I noticed you mentioned that the camel library throws `AsyncChatCompletionStreamManager` errors when streaming with tools; however, camel normally supports tool calls in streaming mode. Could you please tell me under what circumstances you encounter the error?
Hey @fengju0213, good catch. My comment in the code was a bit misleading: it's not streaming + tools that's the issue, it's streaming + `response_format` (structured output). Here's what happens:

```python
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict={'stream': True}
)
agent = ChatAgent(system_message='...', model=model)

# this breaks
response = await agent.astep('query', response_format=SomePydanticModel)
async for chunk in response:
    print(chunk)
```

You get: `'AsyncChatCompletionStreamManager' object has no attribute 'choices'`

The problem is how camel handles `response_format` when streaming is enabled. So I just disable streaming when we need structured output:

```python
worker_agent.model_backend.model_config_dict["stream"] = False
response = await worker_agent.astep(prompt, response_format=TaskResult)
```

Let me know what you think.
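A tidier way to express this workaround (a sketch, not code from this PR) is a context manager that forces `stream=False` only around the structured-output call and restores the previous setting afterwards:

```python
from contextlib import contextmanager

@contextmanager
def no_streaming(model_config_dict):
    """Temporarily force stream=False, restoring the previous value on exit."""
    prev = model_config_dict.get("stream")
    model_config_dict["stream"] = False
    try:
        yield
    finally:
        if prev is None:
            # key was absent before; remove it again
            model_config_dict.pop("stream", None)
        else:
            model_config_dict["stream"] = prev

cfg = {"stream": True}
with no_streaming(cfg):
    # the structured-output astep() call would go here
    print(cfg["stream"])  # False
print(cfg["stream"])  # True -- restored on exit
```

This avoids leaving the agent's config permanently mutated once the upstream fix makes streaming + structured output viable.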
The issue is streaming + response_format (structured output), not streaming + tools
…enerating functions

- Add `is_final` field to `AgentMessage` type in `chatbox.d.ts`
- Simplify `chatStore.ts` streaming handler (remove type cast, use `currentTaskId`)
- Use deep copy for `model_config_dict` to ensure isolation between cloned agents
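The deep-copy point matters because clones sharing one `model_config_dict` would see each other's mutations. A minimal illustration with a plain nested dict:

```python
import copy

base_config = {"stream": True, "extra": {"temperature": 0.7}}

shallow_clone = dict(base_config)        # nested dict still shared
deep_clone = copy.deepcopy(base_config)  # fully independent

shallow_clone["extra"]["temperature"] = 0.0
print(base_config["extra"]["temperature"])  # 0.0 -- leaked into the original

deep_clone["extra"]["temperature"] = 1.0
print(base_config["extra"]["temperature"])  # still 0.0 -- deep clone is isolated
```

With a shallow copy, one cloned agent flipping `stream` (or any nested setting) would silently change its siblings' configs too.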
…treaming

Per a7m-1st's review: `activate_toolkit` already provides sufficient tool activity info. Streaming infrastructure preserved in `ListenChatAgent` for future model output streaming once CAMEL PR #3743 is merged.
@a7m-1st please review the changes |
Thank you very much! I will investigate this issue.
Per Douglasymlai's review - improves text readability for users
The fix was merged upstream in camel-ai/camel#3743. Once camel-ai is updated to include this fix, streaming with structured output will work correctly without disabling streaming.
…gle_agent_worker

The fix was merged upstream in camel-ai/camel#3743.
Includes fix for AsyncChatCompletionStreamManager from PR #3743
I believe we can simplify the configs now because it has been fixed @MkDev11, or are they unrelated? i.e. the manual setting of stream mode, and on clone too.
Remove specific mention of stream setting since the workaround for AsyncChatCompletionStreamManager is no longer needed.
Yep, I've simplified it! Removed the workarounds from both files.
@a7m-1st please review the changes again |
All right sure, let me run it through the debugger to reconfirm the flow. |
hello @Wendong-Fan, can you review the changes again and let me know your feedback? I'm still waiting on your review.
Force-pushed from 800cebd to 523ba97
hello @Wendong-Fan, I'm really sorry for tagging you again, but could you please give me an update?

Streaming Tool Activity During Task Execution
What's the problem?
Users couldn't see what agents were doing in real-time during task execution.
What this PR does
Shows real-time tool activity messages while agents work - letting users see what tools are being executed.
Note: This does NOT fully address #87, which requests streaming model text output. Model text streaming is blocked by a camel library limitation (`AsyncChatCompletionStreamManager` error when streaming with tools). This PR provides tool activity feedback as a partial improvement.

What you'll see
While tasks run, you'll see messages like:
This gives users real-time feedback on what the agent is doing.
What did I change?
Added streaming tool activity events:
- `@listen_toolkit` decorator sends `ActionStreamingAgentOutputData` events when tools start
- Streaming events in the `_execute_tool` and `_aexecute_tool` methods
- UI receives `streaming_agent_output` events and displays them

Disabled model streaming for worker agents:
- Prevents `AsyncChatCompletionStreamManager` errors when streaming with tools

Commits
e1ee5c1 - fix: disable streaming for worker agents to prevent camel library error

abae378 - feat: add streaming tool activity messages during task execution
- `@listen_toolkit` decorator
- `_execute_tool` and `_aexecute_tool`
- `[Tool]` text for compatibility

Files changed
- `backend/app/utils/agent.py` - Add streaming events in tool execution, disable model streaming
- `backend/app/utils/listen/toolkit_listen.py` - Add streaming events in `@listen_toolkit` decorator
- `backend/app/utils/single_agent_worker.py` - Handle streaming responses properly

Testing
- `[Tool] Shell Exec...` messages appear in real-time