LCORE-487: Fix /conversations endpoint #359
Conversation
…rrectly
- Updated the endpoint handler to first list agent sessions and then retrieve the specific session data using the session ID.
- Modified unit tests to mock the session retrieval process, ensuring the model_dump method is called correctly.

This change improves the accuracy of conversation data retrieval.
Walkthrough

The conversation session retrieval logic was updated to first obtain a session list, extract the session ID, and then explicitly fetch the full session details using a separate API call. Corresponding test cases were updated to mock the additional retrieval step and ensure proper interface alignment.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Endpoint
    participant SessionAPI
    Client->>Endpoint: Request conversation session
    Endpoint->>SessionAPI: List sessions for agent
    SessionAPI-->>Endpoint: Return list of sessions
    Endpoint->>SessionAPI: Retrieve session details (by session_id)
    SessionAPI-->>Endpoint: Return full session data
    Endpoint->>Client: Return session data as dict
```
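The flow above corresponds roughly to the following Python sketch, which mirrors the diff excerpt reviewed further down; the wrapping function, and how `client` and `agent_id` are obtained, are assumptions rather than the endpoint's actual structure:

```python
# Hedged sketch of the two-step retrieval; `client` and `agent_id` are assumed
# to be provided by the endpoint's existing setup.
async def fetch_session_data(client, agent_id: str) -> dict:
    # Step 1: list the agent's sessions and pick out the session ID.
    agent_sessions = (await client.agents.session.list(agent_id=agent_id)).data
    session_id = str(agent_sessions[0].get("session_id"))

    # Step 2: retrieve the full session (including its turns) by ID.
    session_response = await client.agents.session.retrieve(
        agent_id=agent_id, session_id=session_id
    )
    return session_response.model_dump()
```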
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/app/endpoints/conversations.py (1 hunks)
- tests/unit/app/endpoints/test_conversations.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: eranco74
PR: lightspeed-core/lightspeed-stack#348
File: src/utils/endpoints.py:91-94
Timestamp: 2025-08-06T06:02:21.060Z
Learning: The direct assignment to `agent._agent_id` in `src/utils/endpoints.py` is a necessary workaround for the missing agent rehydration feature in the LLS client SDK. This allows preserving conversation IDs when handling existing agents.
📚 Learning: the direct assignment to `agent._agent_id` in `src/utils/endpoints.py` is a necessary workaround for...
Applied to files:
- src/app/endpoints/conversations.py
- tests/unit/app/endpoints/test_conversations.py
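For context, a minimal illustration of the workaround that learning describes; the real code lives in src/utils/endpoints.py and is not part of this diff, so the function name, and the assumption that the conversation ID doubles as the agent ID, are illustrative placeholders only:

```python
def rehydrate_agent(agent, conversation_id: str):
    """Re-attach an agent to an existing conversation (illustrative only)."""
    # The LLS client SDK has no agent rehydration yet, so the private
    # attribute is assigned directly to preserve the existing conversation ID
    # (assumed here to serve as the agent ID, per the learning above).
    agent._agent_id = conversation_id
    return agent
```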
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (1)
tests/unit/app/endpoints/test_conversations.py (1)
309-312: LGTM! Test mocks properly align with the updated implementation.

The additional mock for `session.retrieve()` correctly simulates the new two-step retrieval process, ensuring the test validates both the `list()` and `retrieve()` calls with proper `model_dump()` method behavior.
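A minimal sketch of what such a mock setup can look like with `unittest.mock`; the actual fixture names and patch targets in tests/unit/app/endpoints/test_conversations.py may differ:

```python
from unittest.mock import AsyncMock, Mock

# Hypothetical mock wiring that mirrors the calls discussed above.
mock_client = Mock()
mock_client.agents.session.list = AsyncMock(
    return_value=Mock(data=[{"session_id": "session-123"}])
)

mock_session = Mock()
mock_session.model_dump.return_value = {"session_id": "session-123", "turns": []}
mock_client.agents.session.retrieve = AsyncMock(return_value=mock_session)

# The endpoint under test would then see:
#   (await client.agents.session.list(agent_id=...)).data   -> [{"session_id": ...}]
#   await client.agents.session.retrieve(agent_id=..., session_id=...) -> mock_session
# and call .model_dump() on the retrieved session.
```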
The relevant hunk in src/app/endpoints/conversations.py:

```python
agent_sessions = (await client.agents.session.list(agent_id=agent_id)).data
session_id = str(agent_sessions[0].get("session_id"))

session_response = await client.agents.session.retrieve(
    agent_id=agent_id, session_id=session_id
)
session_data = session_response.model_dump()
```
🛠️ Refactor suggestion
Good fix for the llama-stack upgrade issue, but add an empty-sessions check.
The two-step approach correctly addresses the issue where the upgraded llama-stack no longer returns conversation turns in the list response. However, there is a potential `IndexError` if no sessions exist.
Add a check for empty sessions:
```diff
 agent_sessions = (await client.agents.session.list(agent_id=agent_id)).data
+if not agent_sessions:
+    logger.warning("No sessions found for conversation %s", conversation_id)
+    return ConversationResponse(
+        conversation_id=conversation_id,
+        chat_history=[],
+    )
 session_id = str(agent_sessions[0].get("session_id"))
```

🤖 Prompt for AI Agents
In src/app/endpoints/conversations.py around lines 137 to 143, add a check to
verify that agent_sessions is not empty before accessing the first element to
avoid an IndexError. If agent_sessions is empty, handle this case appropriately,
such as returning early or raising a clear exception. This ensures the code
safely handles cases where no sessions exist.
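The prompt above also mentions raising a clear exception as an alternative to returning an empty history. A hedged sketch of that variant, assuming the endpoint is a FastAPI handler (which the repository layout suggests but this excerpt does not confirm), with an illustrative helper name:

```python
from fastapi import HTTPException, status


async def get_session_id_or_404(client, agent_id: str, conversation_id: str) -> str:
    """Illustrative helper: pick the first session or fail with a clear 404."""
    agent_sessions = (await client.agents.session.list(agent_id=agent_id)).data
    if not agent_sessions:
        # Surface the missing-session case as an explicit client-facing error
        # instead of letting agent_sessions[0] raise an IndexError.
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"No sessions found for conversation {conversation_id}",
        )
    return str(agent_sessions[0].get("session_id"))
```

Whether to return an empty history or fail with a 404 is a product decision; the suggestion above opts for the quieter empty-history behavior.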
/lgtm
tisnik left a comment
It looks good, thank you.
Description
The `/conversations` endpoint was returning an empty chat history when there should have been messages in the history. I think the upgraded version of llama-stack broke the `GET /v1/agents/{agent_id}/sessions` endpoint so that the turns are no longer returned in the response. This fix instead uses `GET /v1/agents/{agent_id}/sessions` (`list()`) to get the `session_id` and then uses `GET /v1/agents/{agent_id}/sessions/{session_id}` (`retrieve()`) to get the `session_data`.

Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Bug Fixes
Tests