Description
🔖 Feature description
Implement a Jinja2-based template rendering system that allows prompts to dynamically inject context at runtime using organized namespaces.
🎤 Why is this feature needed?
Currently, prompts are static strings with basic placeholder replacement ({summaries}, {query}). This severely limits dynamic context injection: there is no way to pass user information, system metadata, or pre-computed data into a prompt, so the system cannot know the current date/time, the request ID, or who is asking the question. Additionally, when a tool needs to be used (such as fetching memory), the LLM has to decide to call it after generating its initial text, requiring a second LLM call just to use that context - a wasteful round trip.
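For context, a rough sketch of the limitation described above; the PROMPT string and values are invented for illustration, not taken from the codebase:

```python
# Assumed illustration of today's static placeholder replacement (str.format-style).
PROMPT = "Answer using these summaries:\n{summaries}\n\nQuestion: {query}"

rendered = PROMPT.format(summaries="...", query="Who is on call today?")
# There is no slot for the current date, the request ID, or the user's identity,
# so that context can only reach the model via an extra tool-calling round trip.
```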
✌️ How do you aim to achieve this?
A proper template system with namespaces enables:
- Dynamic Context at Runtime: Inject user info ({{ passthrough.user_name }}), system data ({{ system.date }}), RAG results ({{ source.documents }}), and pre-fetched tool data ({{ tools.memory.fetch }}) directly into prompts before the LLM sees them
- Single LLM Call: Pre-execute tools and render all context upfront, eliminating the need for the LLM to request data - it already has it
- Organized Variable Injection: Four namespaces (system, passthrough, source, tools) prevent naming conflicts and keep prompts readable (see the sketch after this list)
- Better Widget Integration: Web requests can pass custom parameters (user context, auth tokens, custom fields) that flow directly into prompts - enabling truly dynamic chatbot behavior
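A minimal sketch of what namespace-scoped rendering could look like with Jinja2; the render_prompt helper and the namespace payloads below are assumptions for illustration, not the final API:

```python
# A minimal sketch of namespace-scoped prompt rendering with Jinja2.
# render_prompt and the example payloads are assumptions, not the project's actual API.
from datetime import datetime, timezone

from jinja2 import Environment, StrictUndefined


def render_prompt(template_str: str, *, passthrough=None, source=None, tools=None) -> str:
    """Render a prompt template against the four proposed namespaces."""
    env = Environment(undefined=StrictUndefined)  # fail loudly on missing variables
    context = {
        # system: values the framework computes itself at request time
        "system": {"date": datetime.now(timezone.utc).isoformat()},
        # passthrough: caller-supplied data, e.g. custom fields from a widget/web request
        "passthrough": passthrough or {},
        # source: retrieval results such as RAG documents
        "source": source or {},
        # tools: outputs of tools pre-executed before the single LLM call
        "tools": tools or {},
    }
    return env.from_string(template_str).render(**context)


prompt = render_prompt(
    "Today is {{ system.date }}.\n"
    "{{ passthrough.user_name }} asks: {{ passthrough.query }}\n"
    "Documents: {{ source.documents | join('; ') }}\n"
    "Memory: {{ tools.memory.fetch }}",
    passthrough={"user_name": "Alice", "query": "What changed since my last visit?"},
    source={"documents": ["doc-1 summary", "doc-2 summary"]},
    tools={"memory": {"fetch": "User last visited 3 days ago."}},
)
print(prompt)
```

Because every tool result is rendered into the template before the request is sent, the model receives all of this context in a single LLM call instead of asking for it via a tool-calling round trip.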
🔄️ Additional Information
No response
👀 Have you spent some time checking whether this feature request has been raised before?
- I checked and didn't find a similar issue
Are you willing to submit a PR?
Yes, I am willing to submit a PR!