
LCORE-1223: Add e2e tests for rlsapi v1 /infer endpoint#1067

Open
major wants to merge 1 commit into lightspeed-core:main from major:LCORE-1223-rlsapi-v1-e2e-tests

Conversation

@major (Contributor) commented Jan 27, 2026

Description

Add end-to-end test coverage for the rlsapi v1 /infer endpoint, which serves RHEL Lightspeed Command Line Assistant (CLA) clients.

This PR adds 7 high-value test scenarios covering:

  • Basic inference with minimal request (question only)
  • Inference with full context (systeminfo populated)
  • Authentication enforcement (401 for missing/empty auth)
  • Input validation (422 for empty/whitespace question)
  • Response structure validation (data.text, data.request_id)
  • Statelessness validation (unique request_ids)
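The response-structure and statelessness checks listed above could be sketched as a single validation helper; only the data.text and data.request_id field names come from this description, and everything else about the payload shape is an assumption:

```python
# Sketch of the response-shape assertions the scenarios above describe.
# Only data.text and data.request_id are taken from the PR description;
# any other detail of the payload is an assumption.

def validate_infer_response(payload: dict) -> None:
    """Assert the rlsapi v1 /infer response has the expected structure."""
    data = payload.get("data")
    assert isinstance(data, dict), "response must contain a 'data' object"
    assert isinstance(data.get("text"), str) and data["text"], (
        "data.text must be a non-empty string"
    )
    assert isinstance(data.get("request_id"), str) and data["request_id"], (
        "data.request_id must be a non-empty string"
    )
```

The statelessness scenario would then store one response's request_id and assert a second response's request_id differs.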

Type of change

  • End to end tests improvement

Tools used to create PR

  • Assisted-by: Claude
  • Generated by: N/A

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Summary by CodeRabbit

  • New Features

    • Added inference configuration with OpenAI as default provider and gpt-4o-mini as default model.
    • Granted rlsapi_v1_infer permission to the user role.
  • Tests

    • Expanded end-to-end coverage for the inference API (/infer): success and error scenarios, auth/permission checks, response structure and request_id uniqueness; added test step implementations and updated test listings.

@coderabbitai bot commented Jan 27, 2026

Walkthrough

Adds an inference top-level config (defaults: default_provider: openai, default_model: gpt-4o-mini) to multiple e2e YAML configs, introduces new BDD tests and step definitions for the rlsapi v1 /infer endpoint, updates RBAC to allow rlsapi_v1_infer, and registers the new features in the e2e test list.
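Based on the walkthrough, the block added to each e2e YAML config presumably looks something like the following; the key names default_provider and default_model come from the walkthrough, while the exact nesting is an assumption:

```yaml
# Hypothetical sketch of the added top-level inference block.
inference:
  default_provider: openai
  default_model: gpt-4o-mini
```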

Changes

Cohort / File(s) / Summary

  • E2E Configuration Files — tests/e2e/configuration/library-mode/lightspeed-stack-auth-noop-token.yaml, tests/e2e/configuration/library-mode/lightspeed-stack.yaml, tests/e2e/configuration/server-mode/lightspeed-stack-auth-noop-token.yaml, tests/e2e/configuration/server-mode/lightspeed-stack.yaml
    Added a top-level inference block with default_provider: openai and default_model: gpt-4o-mini.
  • E2E RBAC Configs — tests/e2e/configuration/library-mode/lightspeed-stack-rbac.yaml, tests/e2e/configuration/server-mode/lightspeed-stack-rbac.yaml
    Added rlsapi_v1_infer to the user role actions.
  • RLS API v1 Feature Tests — tests/e2e/features/rlsapi_v1.feature, tests/e2e/features/rlsapi_v1_errors.feature
    Added BDD scenarios for /v1/infer: success cases, auth/permission checks (401/403), validation (422), service-unavailable (503), and response structure/request_id checks.
  • RLS API v1 Step Definitions — tests/e2e/features/steps/rlsapi_v1.py
    Added step implementations check_rlsapi_response_structure(context), store_rlsapi_request_id(context), and check_rlsapi_request_id_different(context), with validations and request_id storage/comparison.
  • Test Registry — tests/e2e/test_list.txt
    Registered the new feature files (features/rlsapi_v1.feature, features/rlsapi_v1_errors.feature) in the e2e test list.
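A plausible shape for the three step implementations named above, sketched as plain functions over a behave-like context; the real file's step decorators, response attributes, and error messages may differ:

```python
import json

# Hedged sketch of the three step implementations; the behave decorators
# and the context/response attribute names are assumptions, not copied
# from the PR.

def check_rlsapi_response_structure(context):
    """Validate that the /infer response carries data.text and data.request_id."""
    response_json = json.loads(context.response.text)
    data = response_json.get("data", {})
    assert "text" in data, "response is missing data.text"
    assert "request_id" in data, "response is missing data.request_id"

def store_rlsapi_request_id(context):
    """Remember this response's request_id so a later step can compare against it."""
    response_json = json.loads(context.response.text)
    context.stored_request_id = response_json["data"]["request_id"]

def check_rlsapi_request_id_different(context):
    """Statelessness check: each request should receive a fresh request_id."""
    response_json = json.loads(context.response.text)
    current_request_id = response_json["data"]["request_id"]
    assert current_request_id != context.stored_request_id, (
        "request_id should be unique per request"
    )
```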

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

ok-to-test

Suggested reviewers

  • tisnik
  • radofuchs
🚥 Pre-merge checks | ✅ 3 passed

  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title clearly and specifically describes the main change: adding end-to-end tests for the rlsapi v1 /infer endpoint, which aligns with the primary focus of the changeset.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


@major major force-pushed the LCORE-1223-rlsapi-v1-e2e-tests branch 2 times, most recently from 50ac720 to 19e699c Compare January 27, 2026 14:49
@major major marked this pull request as ready for review January 27, 2026 15:13
@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@tests/e2e/features/steps/rlsapi_v1.py`:
- Around lines 30-56: In both store_rlsapi_request_id and check_rlsapi_request_id_different, tighten validation so that request_id is a non-empty string. In store_rlsapi_request_id, after extracting response_json and before assigning context.stored_request_id, assert that response_json["data"]["request_id"] is an instance of str and not empty; in check_rlsapi_request_id_different, assert the same for current_request_id before comparing. Use clear assertion messages, and keep referencing response_json, current_request_id, and context.stored_request_id to locate the checks.
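One way to apply that suggestion is a small shared helper; the helper name assert_request_id is hypothetical, not taken from the PR:

```python
def assert_request_id(value: object) -> None:
    """Fail fast unless request_id is a non-empty string.

    Hypothetical helper implementing the review suggestion above:
    store_rlsapi_request_id would call it right after extracting
    response_json["data"]["request_id"], and
    check_rlsapi_request_id_different would call it on
    current_request_id before comparing.
    """
    assert isinstance(value, str), (
        f"request_id must be a string, got {type(value).__name__}"
    )
    assert value, "request_id must not be empty"
```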
🧹 Nitpick comments (1)
tests/e2e/features/rlsapi_v1.feature (1)

65-73: Consider consolidating with the basic inference scenario.

This scenario validates the same assertions as "Basic inference with minimal request" (lines 8-16): both check for a 200 status code and valid response structure. Unless there's a specific reason to test with different question content, consider removing this scenario to reduce test duplication.

@major major force-pushed the LCORE-1223-rlsapi-v1-e2e-tests branch from 19e699c to 1d14117 Compare January 27, 2026 15:26
@@ -0,0 +1,89 @@
@Authorized
Feature: rlsapi v1 /infer endpoint API tests
@radofuchs commented Feb 3, 2026

I am missing the cases for the 403 and 429 status codes here; you can achieve 403 by using a different bearer token. For 429, the quota is configurable in the configuration file. Also, add a test for 503, with a broken llama-stack connection.

@major (Author) replied:

@radofuchs At the moment, we don't have 429 responses because we don't currently limit via a quota (it's not in our API spec). Should I do the 403 and 503 errors?

- Add rlsapi_v1.feature with 7 test scenarios
- Add rlsapi_v1.py step definitions for response validation
- Update test_list.txt to include new feature file

Implements LCORE-1223

Signed-off-by: Major Hayden <major@redhat.com>
@major major force-pushed the LCORE-1223-rlsapi-v1-e2e-tests branch from 1d14117 to 820ed10 Compare February 4, 2026 22:46