Enable local TXT2KG + uncertainty-aware LLM with EDFL/ISR #10480
Fixed import path
Corrected torch_geometric.nn.LLM → torch_geometric.llm.models.LLM to match the upstream module layout and avoid ImportError.
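For reference, the corrected import looks like the following; the exact module path can shift between torch_geometric releases, so treat this as a sketch of the fix described above.

```python
# Before: raised ImportError, since LLM is not exported from torch_geometric.nn
# from torch_geometric.nn import LLM

# After: import from the module layout referenced in this PR
from torch_geometric.llm.models import LLM
```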
Local TXT2KG support
Added --use_local_txt2kg flag so users can choose between OpenAI/NIM APIs and local Hugging Face models for knowledge graph extraction. Enables fully offline pipelines, avoids API costs, and improves reproducibility.
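A minimal sketch of how the flag can select the extraction backend in the example script; the TXT2KG import path and keyword arguments shown here are assumptions for illustration, not the exact diff.

```python
import argparse

from torch_geometric.llm.models import TXT2KG  # module path may vary by release

parser = argparse.ArgumentParser()
parser.add_argument('--use_local_txt2kg', action='store_true',
                    help='Use a local Hugging Face model instead of the '
                         'OpenAI/NIM APIs for triple extraction')
args = parser.parse_args()

if args.use_local_txt2kg:
    # Fully offline: triples are extracted by a locally hosted HF model.
    kg_maker = TXT2KG(local_LM=True)            # kwarg name assumed
else:
    # Hosted extraction via an API key, as before.
    kg_maker = TXT2KG(NVIDIA_API_KEY='<key>')   # kwarg name assumed
```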
Checkpoint validation
Validates cached triples when switching between local and API models, preventing stale or mismatched KG state from silently propagating into training or inference.
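The idea is roughly the following; the helper and cache-field names are hypothetical and only illustrate the validation described above.

```python
import os

import torch


def load_cached_triples(path: str, use_local_txt2kg: bool):
    """Return cached triples only if they were produced by the requested backend."""
    if not os.path.exists(path):
        return None
    cache = torch.load(path)
    if cache.get('use_local_txt2kg') != use_local_txt2kg:
        # Cache was built with the other backend (local vs. API); treat it as
        # stale and force re-extraction instead of silently reusing it.
        return None
    return cache['triples']
```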
GPU utilization
Passes n_gpus parameter to TXT2KG’s internal LLM, ensuring multi-GPU hardware is actually used.
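For example (a sketch assuming the local TXT2KG path accepts these keyword arguments; only n_gpus is named in this PR):

```python
import torch

from torch_geometric.llm.models import TXT2KG  # module path may vary by release

# Forward the available GPU count so that TXT2KG's internal LLM is spread
# across all devices instead of defaulting to a single one.
kg_maker = TXT2KG(local_LM=True, n_gpus=torch.cuda.device_count())
```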
Uncertainty-aware LLM
Extended LLM(..., uncertainty_estim=True) to compute EDFL/B2T/ISR metrics before generation (see the sketch after this list):
Returns per-item uncertainty metrics: ISR, Δ̄, B2T, RoH bound, priors.
Optionally abstains with "[ABSTAIN]" if ISR falls below a threshold.
During training, masks labels for low-ISR items, enabling GNN+LLM fine-tuning with calibrated refusal rather than random guessing.
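A usage sketch of the new path: the constructor flag and metric names come from this PR, while the model name, return structure, and metric keys are assumptions to be checked against the updated LLM docstring.

```python
from torch_geometric.llm.models import LLM

llm = LLM(model_name='meta-llama/Meta-Llama-3.1-8B-Instruct',
          uncertainty_estim=True)

# With uncertainty estimation enabled, generation is assumed to also return
# per-item EDFL/B2T/ISR metrics alongside the decoded answers.
answers, metrics = llm.inference(question=['Who wrote the PyG library?'])
for answer, m in zip(answers, metrics):
    # Low-ISR items come back as "[ABSTAIN]" instead of a guessed answer.
    print(answer, m['ISR'], m['B2T'])
```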
Tests
RAG_TEST=1 pytest test/llm/models/test_llm.py::test_llm_uncertainty -v