
Conversation

@MkDev11 (Contributor) commented Jan 21, 2026

Streaming Tool Activity During Task Execution

What's the problem?

Users couldn't see what agents were doing in real-time during task execution.

What this PR does

Shows real-time tool activity messages while agents work - letting users see what tools are being executed.

Note: This does NOT fully address #87, which requests streaming model text output. Model text streaming is blocked by a camel library limitation (AsyncChatCompletionStreamManager error when streaming with tools). This PR provides tool activity feedback as a partial improvement.

What you'll see

While tasks run, you'll see messages like:

[Tool] Shell Exec...
[Tool] Write Content To File...
[Tool] List Running Server...

This gives users real-time feedback on what the agent is doing.
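For illustration, labels like the ones above can be derived from snake_case tool names with a small formatter. This is a hedged sketch, not the PR's actual code; `tool_activity_label` is a hypothetical name:

```python
def tool_activity_label(tool_name: str) -> str:
    """Turn a snake_case tool name into a '[Tool] ...' activity message."""
    return "[Tool] " + " ".join(
        word.capitalize() for word in tool_name.strip().split("_")
    ) + "..."

print(tool_activity_label("shell_exec"))             # [Tool] Shell Exec...
print(tool_activity_label("write_content_to_file"))  # [Tool] Write Content To File...
```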

What did I change?

Added streaming tool activity events:

  • Modified @listen_toolkit decorator to send ActionStreamingAgentOutputData events when tools start
  • Added streaming events in _execute_tool and _aexecute_tool methods
  • Frontend already handles streaming_agent_output events and displays them in the UI

Disabled model streaming for worker agents:

  • The camel library throws AsyncChatCompletionStreamManager errors when streaming with tools
  • Disabled to prevent crashes while still providing tool activity feedback
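The decorator change described above can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not the project's actual `@listen_toolkit`; `emit_stream_event` is a hypothetical stand-in for the ActionStreamingAgentOutputData plumbing:

```python
import functools

events: list[str] = []

def emit_stream_event(message: str) -> None:
    # Stand-in for the real streaming pipeline, which would push an
    # ActionStreamingAgentOutputData event to the frontend.
    events.append(message)

def listen_toolkit(func):
    """Emit a tool-activity message just before the wrapped tool runs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        name = func.__name__.replace("_", " ").title()
        emit_stream_event(f"[Tool] {name}...")
        return func(*args, **kwargs)
    return wrapper

@listen_toolkit
def shell_exec(command: str) -> str:
    # Dummy tool body for the sketch.
    return f"ran: {command}"

shell_exec("echo hi")
print(events)  # ['[Tool] Shell Exec...']
```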

Commits

  1. e1ee5c1 - fix: disable streaming for worker agents to prevent camel library error

    • Disable model streaming for agents with tools (prevents crash)
    • Add streaming infrastructure for future use
    • Handle AsyncStreamingChatAgentResponse in single_agent_worker.py
  2. abae378 - feat: add streaming tool activity messages during task execution

    • Add streaming events in @listen_toolkit decorator
    • Add streaming events in _execute_tool and _aexecute_tool
    • Replace emoji with [Tool] text for compatibility

Files changed

  • backend/app/utils/agent.py - Add streaming events in tool execution, disable model streaming
  • backend/app/utils/listen/toolkit_listen.py - Add streaming events in @listen_toolkit decorator
  • backend/app/utils/single_agent_worker.py - Handle streaming responses properly

Testing

  1. Submit a task like "Create a simple HTML page with a button that shows an alert when clicked, save it as index.html, then use Python's http.server to serve it on port 8080 and tell me the URL"
  2. Watch the Developer Agent card while it's running
  3. You should see [Tool] Shell Exec... messages appear in real-time
  4. Task completes successfully

@Wendong-Fan Wendong-Fan requested review from a7m-1st and fengju0213 and removed request for a7m-1st January 21, 2026 13:35
@MkDev11 (Contributor, author) commented Jan 21, 2026

@a7m-1st @fengju0213 could you please review the changes and let me know your feedback? thanks!

@a7m-1st (Collaborator) commented Jan 21, 2026

Thanks for the PR @MkDev11, I don't think I can manage to test this by today (UTC+3). But have you taken a look at #767? It's kind of similar.

UI-wise, can you attach a screenshot of which part you are exactly streaming? Just to get a faster picture!

@MkDev11 MkDev11 closed this Jan 22, 2026
@MkDev11 MkDev11 force-pushed the feature/streaming-agent-output branch from d81f18c to 35f4ca2 Compare January 22, 2026 02:08
@MkDev11 MkDev11 reopened this Jan 22, 2026
Commit message:

The camel library throws 'AsyncChatCompletionStreamManager object has no
attribute choices' when streaming is enabled for agents with tools attached.

Changes:
- Disable streaming for all astep calls in ListenChatAgent to avoid the error
- Preserve model_config_dict when cloning agents (for future streaming support)
- Add streaming infrastructure (ActionStreamingAgentOutputData) for when
  camel library support is available
- Handle AsyncStreamingChatAgentResponse in single_agent_worker.py

Note: Streaming text output for worker agents is blocked by an upstream camel
library limitation. Task decomposition streaming still works via decompose_text.

Commit message:

Shows real-time '[Tool] Shell Exec...', '[Tool] Write File...' etc. messages
in the UI while tools are executing, giving users feedback on agent activity.

Changes:
- Add ActionStreamingAgentOutputData events in @listen_toolkit decorator
- Add streaming events in _execute_tool and _aexecute_tool methods
- Disable model streaming for worker agents (camel library limitation)
- Replace emoji with [Tool] text for better compatibility

Closes eigent-ai#87
@MkDev11 MkDev11 changed the title feat: implement streaming output for agent responses feat: add streaming tool activity messages during task execution Jan 22, 2026
@MkDev11 (Contributor, author) commented Jan 22, 2026

@a7m-1st I added some changes and updated the description. This PR adds some real-time feedback while tasks are running.

Before, you'd just see the task start and then wait until it finished. Now you'll see messages like [Tool] Shell Exec... pop up as the agent works, so you know what's actually happening behind the scenes.

I originally wanted to stream the model's actual thinking text (like ChatGPT does), but hit a wall - the camel library we use crashes when you try to stream with tools enabled. Since pretty much every agent uses tools, I pivoted to showing tool activity instead. Not quite what issue #87 asked for, but it's still a nice improvement.

@MkDev11 (Contributor, author) commented Jan 22, 2026

[Screenshot: streaming messages]

@a7m-1st (Collaborator) commented Jan 22, 2026

I see, awesome then. Thanks @MkDev11, I need some time as I'm a little occupied with other tasks. I will catch up with this PR once I get the chance.

@MkDev11 (Contributor, author) commented Jan 22, 2026 via email

@a7m-1st (Collaborator) commented Jan 22, 2026

Hi there @Wendong-Fan, I think this would be a cool feature. Maybe Douglas can confirm whether the UI meets expectations?

@fengju0213 (Collaborator) commented

@MkDev11 thanks for your contribution! I noticed you mentioned that the camel library throws AsyncChatCompletionStreamManager errors when streaming with tools; however, camel normally supports tool calls in streaming mode. Could you please tell me under what circumstances you encountered the error?

@MkDev11 (Contributor, author) commented Jan 23, 2026

> @MkDev11 thanks for your contribution! I noticed you mentioned that the camel library throws AsyncChatCompletionStreamManager errors when streaming with tools; however, camel normally supports tool calls in streaming mode. Could you please tell me under what circumstances you encountered the error?

Hey @fengju0213, good catch - my comment in the code was a bit misleading. It's not streaming + tools that's the issue, it's streaming + response_format (structured output).

Here's what happens:

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict={'stream': True}
)
agent = ChatAgent(system_message='...', model=model)

# this breaks
response = await agent.astep('query', response_format=SomePydanticModel)
async for chunk in response:
    print(chunk)

You get:

AttributeError: 'AsyncChatCompletionStreamManager' object has no attribute 'choices'

The problem is in _handle_batch_response (chat_agent.py:3743) - it tries to do for choice in response.choices but OpenAI returns an AsyncChatCompletionStreamManager when you combine streaming with structured output, and that object doesn't have .choices.

So I just disable streaming when we need structured output:

worker_agent.model_backend.model_config_dict["stream"] = False
response = await worker_agent.astep(prompt, response_format=TaskResult)
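A slightly safer variant of that workaround restores the previous flag afterwards instead of leaving streaming off. This is an illustrative sketch only; `streaming_disabled` and the dummy agent below are not code from this PR, they just mimic the `model_backend.model_config_dict` attribute shape:

```python
from contextlib import contextmanager
from types import SimpleNamespace

@contextmanager
def streaming_disabled(agent):
    """Temporarily force stream=False on the agent's model config."""
    cfg = agent.model_backend.model_config_dict
    previous = cfg.get("stream", False)
    cfg["stream"] = False
    try:
        yield agent
    finally:
        cfg["stream"] = previous  # restore the original setting

# Dummy object with the same attribute shape as a camel ChatAgent backend.
agent = SimpleNamespace(
    model_backend=SimpleNamespace(model_config_dict={"stream": True})
)

with streaming_disabled(agent):
    # Inside the block, a structured-output astep call would be safe.
    assert agent.model_backend.model_config_dict["stream"] is False

print(agent.model_backend.model_config_dict["stream"])  # True
```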

Let me know what you think

a7m-1st and others added 3 commits January 24, 2026 02:23
- Add is_final field to AgentMessage type in chatbox.d.ts
- Simplify chatStore.ts streaming handler (remove type cast, use currentTaskId)
- Use deep copy for model_config_dict to ensure isolation between cloned agents
…treaming

Per a7m-1st's review: activate_toolkit already provides sufficient tool activity info.
Streaming infrastructure preserved in ListenChatAgent for future model output
streaming once CAMEL PR #3743 is merged.
@MkDev11 (Contributor, author) commented Jan 23, 2026

@a7m-1st please review the changes

@fengju0213 (Collaborator) commented

> (Quoting the full exchange with @MkDev11 above.)

Thank you very much! I will investigate this issue.

@MkDev11 MkDev11 requested a review from a7m-1st January 25, 2026 00:17
MkDev11 and others added 6 commits January 26, 2026 07:45
Per Douglasymlai's review - improves text readability for users
The fix was merged upstream in camel-ai/camel#3743.
Once camel-ai is updated to include this fix, streaming with
structured output will work correctly without disabling streaming.
Includes fix for AsyncChatCompletionStreamManager from PR #3743
@a7m-1st (Collaborator) commented Jan 26, 2026

Hopefully we can resolve this in camel-ai/camel#3743.

I believe we can simplify the configs now because it has been fixed @MkDev11, or are they unrelated? i.e. the manual setting of stream mode, and on clone too

Remove specific mention of stream setting since the workaround
for AsyncChatCompletionStreamManager is no longer needed.
@MkDev11 (Contributor, author) commented Jan 26, 2026

> Hopefully we can resolve this in camel-ai/camel#3743.
>
> I believe we can simplify the configs now because it has been fixed @MkDev11, or are they unrelated? i.e. the manual setting of stream mode, and on clone too

yep, I've simplified it! Removed the workarounds from both agent.py and single_agent_worker.py, and updated camel-ai to 0.2.85 which includes the fix. The stream mode settings and clone preservation are intentional config, not workarounds, so those stay.

@MkDev11 (Contributor, author) commented Jan 26, 2026

@a7m-1st please review the changes again

@a7m-1st (Collaborator) commented Jan 27, 2026

All right sure, let me run it through the debugger to reconfirm the flow.
Aside from this, if @Wendong-Fan and @Douglasymlai find it a cool feature, I think it should be good 👍

@MkDev11 (Contributor, author) commented Jan 30, 2026

Hello @Wendong-Fan, could you review the changes again and let me know your feedback? I'm still waiting on your review.

@MkDev11 MkDev11 force-pushed the feature/streaming-agent-output branch from 800cebd to 523ba97 Compare January 30, 2026 22:08
@MkDev11 (Contributor, author) commented Feb 1, 2026

Hello @Wendong-Fan, I'm really sorry for tagging you again, but could you please give me an update?

