Open
Labels
bug (Something isn't working), documentation (Improvements or additions to documentation)
Description
Describe the bug
The tutorial suggests:
For gpt-oss models like gpt-oss-20b and gpt-oss-120b, you can control the reasoning effort using the extra_body parameter:
```python
inference_parameters = ChatCompletionInferenceParams(
    extra_body={"reasoning_effort": "high"}
)
```
But this doesn't work for any of the gpt-oss models, even though the official docs describe a `reasoning={"effort": "medium"}` parameter (please see here).
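To make the mismatch concrete, here is a small sketch (plain dicts, no DataDesigner dependency) contrasting the two parameter shapes mentioned above: the flat `reasoning_effort` key the tutorial suggests versus the nested `reasoning` object from the official docs. The `merged_request_body` helper is hypothetical; it just illustrates how `extra_body` keys would end up merged into a chat-completion payload, so the server sees a different top-level key depending on which shape is used.

```python
# Shape suggested by the DataDesigner tutorial: a flat key inside extra_body.
tutorial_shape = {"extra_body": {"reasoning_effort": "high"}}

# Shape described in the official gpt-oss docs: a nested "reasoning" object.
docs_shape = {"extra_body": {"reasoning": {"effort": "medium"}}}


def merged_request_body(base: dict, extra: dict) -> dict:
    """Hypothetical helper: merge extra_body keys into a chat-completion payload."""
    body = dict(base)
    body.update(extra["extra_body"])
    return body


base = {
    "model": "openai/gpt-oss-20b",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# The two shapes produce payloads with different top-level keys, which is
# presumably why an endpoint expecting one rejects the other with a 400.
print(merged_request_body(base, tutorial_shape))
print(merged_request_body(base, docs_shape))
```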
Steps/Code to reproduce bug
I obtained the model ID and provider by running:
```python
config_builder.info.display(InfoType.MODEL_CONFIGS)
```
────────────────────────────────────────────────── Model Configs ──────────────────────────────────────────────────
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Alias ┃ Model ┃ Provider ┃ Inference Parameters ┃
┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ nvidia-text │ nvidia/nemotron-3-nano-30b-a3b │ nvidia │ generation_type=chat-completion, │
│ │ │ │ max_parallel_requests=4, temperature=1.00, │
│ │ │ │ top_p=1.00 │
│ nvidia-reasoning │ openai/gpt-oss-20b │ nvidia │ generation_type=chat-completion, │
│ │ │ │ max_parallel_requests=4, temperature=0.35, │
│ │ │ │ top_p=0.95 │
│ nvidia-vision │ nvidia/nemotron-nano-12b-v2-vl │ nvidia │ generation_type=chat-completion, │
│ │ │ │ max_parallel_requests=4, temperature=0.85, │
│ │ │ │ top_p=0.95 │
│ nvidia-embedding │ nvidia/llama-3.2-nv-embedqa-1b-v2 │ nvidia │ generation_type=embedding, │
│ │ │ │ max_parallel_requests=4, │
│ │ │ │ extra_body={'input_type': 'query'}, │
│ │ │ │ encoding_format=float │
│ openai-text │ gpt-4.1 │ openai │ generation_type=chat-completion, │
│ │ │ │ max_parallel_requests=4, temperature=0.85, │
│ │ │ │ top_p=0.95 │
│ openai-reasoning │ gpt-5 │ openai │ generation_type=chat-completion, │
│ │ │ │ max_parallel_requests=4, temperature=0.35, │
│ │ │ │ top_p=0.95 │
│ openai-vision │ gpt-5 │ openai │ generation_type=chat-completion, │
│ │ │ │ max_parallel_requests=4, temperature=0.85, │
│ │ │ │ top_p=0.95 │
│ openai-embedding │ text-embedding-3-large │ openai │ generation_type=embedding, │
│ │ │ │ max_parallel_requests=4, │
│ │ │ │ encoding_format=float │
└──────────────────┴───────────────────────────────────┴──────────┴───────────────────────────────────────────────┘
Then used the following configs:
```python
MODEL_PROVIDER = "openai"
MODEL_ID = "openai/gpt-oss-20b"
MODEL_ALIAS = "openai-text-abc"

model_configs = [
    ModelConfig(
        alias=MODEL_ALIAS,
        model=MODEL_ID,
        provider=MODEL_PROVIDER,
        inference_parameters=ChatCompletionInferenceParams(
            temperature=1.0,
            top_p=1.0,
            max_tokens=2048,
            extra_body={"reasoning_effort": "high"},
        ),
    )
]
```
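As a debugging aid, a minimal sketch for bisecting which parameter triggers the 400: rebuild the kwargs with and without `extra_body` and pass each variant to `ChatCompletionInferenceParams`. The `build_params` helper is hypothetical (not part of DataDesigner); if the health check passes with `include_reasoning=False`, that isolates `extra_body={"reasoning_effort": "high"}` as the culprit.

```python
def build_params(include_reasoning: bool) -> dict:
    """Hypothetical helper: return inference-parameter kwargs,
    optionally including the reasoning-effort extra_body."""
    params = {"temperature": 1.0, "top_p": 1.0, "max_tokens": 2048}
    if include_reasoning:
        params["extra_body"] = {"reasoning_effort": "high"}
    return params


# Variant without extra_body, to check whether the health check passes:
#   ChatCompletionInferenceParams(**build_params(False))
# Variant reproducing the failure:
#   ChatCompletionInferenceParams(**build_params(True))
print(build_params(False))
print(build_params(True))
```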
But when I run a preview:
```python
preview = data_designer.preview(config_builder, num_records=1)
```
I get this error:
[02:42:35] [INFO] 🖼️ Preview generation in progress
[02:42:35] [INFO] ✅ Validation passed
[02:42:35] [INFO] ⛓️ Sorting column configs into a Directed Acyclic Graph
[02:42:35] [INFO] 🩺 Running health checks for models...
[02:42:35] [INFO] |-- 👀 Checking 'openai/gpt-oss-20b' in provider named 'openai' for model alias 'openai-text-abc'...
[02:42:43] [ERROR] |-- ❌ Failed!
---------------------------------------------------------------------------
ModelBadRequestError Traceback (most recent call last)
/usr/local/lib/python3.12/dist-packages/data_designer/interface/data_designer.py in preview(self, config_builder, num_records)
234 try:
--> 235 raw_dataset = builder.build_preview(num_records=num_records)
236 processed_dataset = builder.process_preview(raw_dataset)
7 frames
/usr/local/lib/python3.12/dist-packages/data_designer/engine/dataset_builders/column_wise_builder.py in build_preview(self, num_records)
124 def build_preview(self, *, num_records: int) -> pd.DataFrame:
--> 125 self._run_model_health_check_if_needed()
126
/usr/local/lib/python3.12/dist-packages/data_designer/engine/dataset_builders/column_wise_builder.py in _run_model_health_check_if_needed(self)
204 if any(column_type_is_model_generated(config.column_type) for config in self.single_column_configs):
--> 205 self._resource_provider.model_registry.run_health_check(
206 list(set(config.model_alias for config in self.llm_generated_column_configs))
/usr/local/lib/python3.12/dist-packages/data_designer/engine/models/registry.py in run_health_check(self, model_aliases)
130 logger.error(" |-- ❌ Failed!")
--> 131 raise e
132
/usr/local/lib/python3.12/dist-packages/data_designer/engine/models/registry.py in run_health_check(self, model_aliases)
116 elif model.model_generation_type == GenerationType.CHAT_COMPLETION:
--> 117 model.generate(
118 prompt="Hello!",
/usr/local/lib/python3.12/dist-packages/data_designer/engine/models/errors.py in wrapper(model_facade, *args, **kwargs)
253 )
--> 254 handle_llm_exceptions(
255 e, model_facade.model_name, model_facade.model_provider_name, purpose=kwargs.get("purpose")
/usr/local/lib/python3.12/dist-packages/data_designer/engine/models/errors.py in handle_llm_exceptions(exception, model_name, model_provider_name, purpose)
160 case BadRequestError():
--> 161 raise err_msg_parser.parse_bad_request_error(exception) from None
162
ModelBadRequestError: |----------
| Cause: The request for model 'openai/gpt-oss-20b' was found to be malformed or missing required parameters while running health checks.
| Solution: Check your request parameters and try again.
|----------
During handling of the above exception, another exception occurred:
DataDesignerGenerationError Traceback (most recent call last)
/tmp/ipython-input-4124882777.py in <cell line: 0>()
----> 1 preview = data_designer.preview(config_builder, num_records=1)
/usr/local/lib/python3.12/dist-packages/data_designer/interface/data_designer.py in preview(self, config_builder, num_records)
236 processed_dataset = builder.process_preview(raw_dataset)
237 except Exception as e:
--> 238 raise DataDesignerGenerationError(f"🛑 Error generating preview dataset: {e}")
239
240 dropped_columns = raw_dataset.columns.difference(processed_dataset.columns)
DataDesignerGenerationError: 🛑 Error generating preview dataset: |----------
| Cause: The request for model 'openai/gpt-oss-20b' was found to be malformed or missing required parameters while running health checks.
| Solution: Check your request parameters and try again.
|----------
Expected behavior
Expected the `reasoning_effort` setting to be accepted and thinking mode to be enabled for the model, rather than the health check failing with a bad-request error.
Additional context
I am running this on Google Colab.