LLM-Evaluation-s-Always-Fatiguing
Repositories
- demo-research-goal-confirmation-agent (Public): a demo of a dynamic context engineering pattern.
- vibe-evaluator (Public)
- leaf-playground-hub (Public)
- smolagents (Public, forked from huggingface/smolagents): 🤗 smolagents, a barebones library for agents in which agents write Python code to call tools and orchestrate other agents (see the sketch after this list).
- aider-solver-template (Public)
- open-webui (Public, forked from open-webui/open-webui): a user-friendly WebUI for LLMs (formerly Ollama WebUI).
- leaf-playground (Public): a framework for building scenario simulation projects in which both humans and LLM-based agents can participate, with a user-friendly web UI to visualize simulations and support for automatic evaluation at the agent-action level.
- ChainForge (Public, forked from ianarawjo/ChainForge): an open-source visual programming environment for battle-testing prompts to LLMs.
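Since the smolagents description centers on agents that act by writing code, a minimal sketch of that pattern may help. This follows the upstream README's documented API (CodeAgent, DuckDuckGoSearchTool, HfApiModel); exact class names vary across library versions, and a Hugging Face API token is assumed to be configured in the environment. Treat it as an illustration of the pattern, not this fork's specific usage.

```python
# Minimal smolagents sketch: a CodeAgent solves tasks by generating
# and executing Python snippets that call its tools.
# Assumes smolagents is installed and HF_TOKEN is set in the environment;
# class names follow the upstream README and may differ by version.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # tools the generated code may invoke
    model=HfApiModel(),              # default Hugging Face inference model
)

# The agent loops: write Python, run it, observe the result, then answer.
print(agent.run("Summarize the latest news about LLM evaluation."))
```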