Would there be any drawbacks to pulling some of the data and functions out into different files for readability?
For example, I would want to move the prompts into a separate file called 'agent_prompts.py'.
Or pull other functions out into a 'helper.py' file.
Turning this:
````python
def exec(self, inputs):
    """Call the LLM to decide whether to search or answer."""
    question, context = inputs

    print(f"🤔 Agent deciding what to do next...")

    # Create a prompt to help the LLM decide what to do next with proper yaml formatting
    prompt = f"""
### CONTEXT
You are a research assistant that can search the web.
Question: {question}
Previous Research: {context}

### ACTION SPACE
[1] search
  Description: Look up more information on the web
  Parameters:
    - query (str): What to search for

[2] answer
  Description: Answer the question with current knowledge
  Parameters:
    - answer (str): Final answer to the question

## NEXT ACTION
Decide the next action based on the context and available actions.
Return your response in this format:

```yaml
thinking: |
    <your step-by-step reasoning process>
action: search OR answer
reason: <why you chose this action>
answer: <if action is answer>
search_query: <specific search query if action is search>
```

IMPORTANT: Make sure to:
1. Use proper indentation (4 spaces) for all multi-line fields
2. Use the | character for multi-line text fields
3. Keep single-line fields without the | character
"""

    # Call the LLM to make a decision
    response = call_llm(prompt)

    # Parse the response to get the decision
    yaml_str = response.split("```yaml")[1].split("```")[0].strip()
    decision = yaml.safe_load(yaml_str)

    return decision
````
Into this:
````python
from agent_prompts import decide_agent_prompt

def exec(self, inputs):
    """Call the LLM to decide whether to search or answer."""
    question, context = inputs

    print(f"🤔 Agent deciding what to do next...")

    # Create a prompt to help the LLM decide what to do next with proper yaml formatting
    prompt = decide_agent_prompt(question, context)

    # Call the LLM to make a decision
    response = call_llm(prompt)

    # Parse the response to get the decision
    yaml_str = response.split("```yaml")[1].split("```")[0].strip()
    decision = yaml.safe_load(yaml_str)

    return decision
````
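The new agent_prompts.py would then just hold the template. A minimal sketch of what I have in mind (the file name and function signature are my proposal, not existing code):

````python
# agent_prompts.py -- proposed new module for prompt templates (hypothetical)
def decide_agent_prompt(question: str, context: str) -> str:
    """Build the decide-next-action prompt from the question and prior research."""
    return f"""
### CONTEXT
You are a research assistant that can search the web.
Question: {question}
Previous Research: {context}

### ACTION SPACE
[1] search
  Description: Look up more information on the web
  Parameters:
    - query (str): What to search for

[2] answer
  Description: Answer the question with current knowledge
  Parameters:
    - answer (str): Final answer to the question

## NEXT ACTION
Decide the next action based on the context and available actions.
Return your response in this format:

```yaml
thinking: |
    <your step-by-step reasoning process>
action: search OR answer
reason: <why you chose this action>
answer: <if action is answer>
search_query: <specific search query if action is search>
```

IMPORTANT: Make sure to:
1. Use proper indentation (4 spaces) for all multi-line fields
2. Use the | character for multi-line text fields
3. Keep single-line fields without the | character
"""
````

The prompt text itself is unchanged; it just moves out of the node.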
This concept could be expanded on further if you would like more examples, maybe with functions pulled out into a helper file, as sketched below. If you would be interested in this, I could work on a pull request.
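For instance, the YAML-parsing step at the end of `exec` could move into a hypothetical helper.py (again, the module and function names are just illustrative):

````python
# helper.py -- hypothetical module for shared utilities
import yaml

def parse_yaml_response(response: str) -> dict:
    """Extract the fenced yaml block from an LLM response and parse it."""
    yaml_str = response.split("```yaml")[1].split("```")[0].strip()
    return yaml.safe_load(yaml_str)
````

With that, the last three lines of `exec` would collapse to `decision = parse_yaml_response(response)`.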
Let me know your thoughts.