
GLOW-QA: Leveraging LLM-GNN Integration for Open-World Question Answering over Knowledge Graphs

Abstract

Open-world Question Answering (OW-QA) over knowledge graphs (KGs) aims to answer questions over incomplete or evolving KGs. Traditional KGQA assumes a closed world in which answers must exist in the KG, limiting real-world applicability. In contrast, open-world QA requires inferring missing knowledge from graph structure and context. Large language models (LLMs) excel at language understanding but lack structured reasoning; graph neural networks (GNNs) model graph topology but struggle with semantic interpretation. Existing systems integrate LLMs with GNNs or graph retrievers. Some support open-world QA but rely on structural embeddings without semantic grounding, and most assume observed paths or complete graphs, making them unreliable under missing links or multi-hop reasoning. We present GLOW, a hybrid system that combines a pre-trained GNN and an LLM for open-world KGQA. The GNN predicts top-k candidate answers from the graph structure. These, along with relevant KG facts, are serialized into a structured prompt (e.g., triples and candidates) to guide the LLM's reasoning. This enables joint reasoning over symbolic and semantic signals without relying on retrieval or fine-tuning. To evaluate generalization, we introduce GLOW-Bench, a 1,000-question benchmark over incomplete KGs across diverse domains. GLOW outperforms existing LLM–GNN systems on standard benchmarks and GLOW-Bench, achieving improvements of up to 53.3% and 38% on average.
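To make the prompting step concrete, below is a minimal Python sketch (not the repository's implementation) of how top-k GNN candidates and retrieved KG triples might be serialized into a structured prompt for the LLM. The names (build_prompt, Candidate) and the toy biomedical facts are illustrative assumptions, not part of GLOW's code.

# Minimal sketch, assuming the GNN has already produced scored candidate entities
# and a retriever has supplied a handful of relevant triples.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Candidate:
    entity: str   # candidate answer entity predicted by the GNN
    score: float  # GNN confidence score

def build_prompt(question: str,
                 triples: List[Tuple[str, str, str]],
                 candidates: List[Candidate],
                 top_k: int = 3) -> str:
    """Serialize KG facts and the top-k GNN candidates into one prompt string."""
    fact_lines = [f"({h}, {r}, {t})" for h, r, t in triples]
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)[:top_k]
    cand_lines = [f"{i + 1}. {c.entity} (score={c.score:.2f})"
                  for i, c in enumerate(ranked)]
    return (
        "You are answering a question over an incomplete knowledge graph.\n"
        "Known facts:\n" + "\n".join(fact_lines) + "\n"
        "Candidate answers from a graph model:\n" + "\n".join(cand_lines) + "\n"
        f"Question: {question}\n"
        "Pick the most plausible answer, even if it is not directly stated."
    )

if __name__ == "__main__":
    # Toy example where the KG is missing the direct answer link.
    triples = [("Aspirin", "interacts_with", "Warfarin"),
               ("Aspirin", "category", "NSAID")]
    candidates = [Candidate("Pain", 0.91),
                  Candidate("Fever", 0.74),
                  Candidate("Stroke", 0.33)]
    print(build_prompt("What condition does Aspirin treat?", triples, candidates))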

Fig. 1: The GLOW-QA pipeline phases.

Table 1: Average accuracy (%) on open-world KGQA tasks, grouped by reasoning hop count.
GLOW-GN significantly outperforms the baseline methods on both 1-hop and 2-hop questions.
All methods use Qwen3-8B as the underlying LLM.


Scripts

To run the GLOW-QA pipeline (a sketch for sweeping multiple configurations follows the option list below):

python src/GLOW.py --llm_model qwen3:8b --dataset_name biokg --runs 3 --glow-v All --top-k 3
  • llm_model choices=[gpt-4o-mini,deepseek-chat,deepseek-r1,granite3.3,gemini-1.5-flash,llama3.2:3b,qwen3:8b,phi4-mini]

  • dataset_name choices=[biokg,linkedIMDB,yago4-person,yago4-creativwork,crunchbase,arxiv2023,ogbnArxiv,ogbnProduct]

  • glow-v choices=[L,GCR,GN,G,N,LLM,All]
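
To run several configurations back to back, the following hypothetical Python helper (not part of the repository) simply loops over the CLI flags documented above; the chosen datasets and variants are examples only.

# Hypothetical sweep script: invokes src/GLOW.py once per dataset/variant pair,
# reusing only the command-line flags shown in this README.
import subprocess

LLM = "qwen3:8b"
DATASETS = ["biokg", "linkedIMDB"]
VARIANTS = ["GN", "LLM", "All"]

for dataset in DATASETS:
    for variant in VARIANTS:
        cmd = ["python", "src/GLOW.py",
               "--llm_model", LLM,
               "--dataset_name", dataset,
               "--runs", "3",
               "--glow-v", variant,
               "--top-k", "3"]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)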

About

Inductive Graph RAG for Knowledge Graphs.
