Welcome to the Simple Agent API: a robust, production-ready application for serving Agents as an API. It includes:
- A FastAPI server for handling API requests.
- A PostgreSQL database for storing Agent sessions, knowledge, and memories.
- A set of pre-built Agents to use as a starting point.
For more information, check out Agno and give it a ⭐️.
Follow these steps to get your Agent API up and running:
Prerequisites: Docker Desktop should be installed and running.
```sh
git clone https://github.com/agno-agi/agent-api.git
cd agent-api
```

We use GPT-4.1 as the default model. Please export the OPENAI_API_KEY environment variable to get started:

```sh
export OPENAI_API_KEY="YOUR_API_KEY_HERE"
```

Note: You can use any model provider; just update the agents in the /agents folder (see the sketch below).
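For illustration, here is a minimal sketch of what swapping the model on one of the agents might look like. It assumes Agno's `Agent` and model classes; the exact module paths, constructor arguments, and agent definitions should be checked against the Agno docs and the files in /agents.

```python
# Hypothetical sketch: swap the default OpenAI model for another provider.
# Module paths and parameter names are assumptions; check the Agno docs
# and the existing files in /agents for the exact API used by this repo.
from agno.agent import Agent
from agno.models.anthropic import Claude  # any supported provider works

web_agent = Agent(
    name="Web Search Agent",
    model=Claude(id="claude-sonnet-4-20250514"),  # replaces the default GPT-4.1
    markdown=True,
)
```

If you switch providers, remember to export the matching API key (e.g., ANTHROPIC_API_KEY) instead of, or in addition to, OPENAI_API_KEY.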
Run the application using docker compose:
```sh
docker compose up -d
```

This command starts:
- The FastAPI server, running on http://localhost:8000.
- The PostgreSQL database, accessible on localhost:5432.
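If you want to inspect the local database directly, a sketch like the following works; the credentials shown are placeholders, and the actual values are defined in docker-compose.yml (this example assumes the psycopg driver is installed in your environment).

```python
# Sketch: connect to the local Postgres container started by docker compose.
# The credentials below are placeholders; use the values from docker-compose.yml.
import psycopg

conn = psycopg.connect(
    host="localhost",
    port=5432,
    user="ai",        # placeholder, check docker-compose.yml
    password="ai",    # placeholder, check docker-compose.yml
    dbname="ai",      # placeholder, check docker-compose.yml
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
```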
Once started, you can:
- Test the API at http://localhost:8000/docs (see the quick check after this list).
- Open the Agno Playground.
- Add http://localhost:8000 as a new endpoint. You can name it Agent API (or any name you prefer).
- Select your newly added endpoint and start chatting with your Agents.
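As a quick programmatic check that the server is up, you can fetch the OpenAPI schema that FastAPI serves at /openapi.json and list the available routes. This only assumes the requests package; the route names will match whatever you see at /docs.

```python
# Quick check: list the routes exposed by the running Agent API.
# FastAPI always serves its OpenAPI schema at /openapi.json.
import requests

resp = requests.get("http://localhost:8000/openapi.json", timeout=10)
resp.raise_for_status()
schema = resp.json()

print(f"API: {schema['info']['title']} {schema['info'].get('version', '')}")
for path, methods in schema["paths"].items():
    print(f"  {', '.join(m.upper() for m in methods)}  {path}")
```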
(Demo video: agent-api-demo.mp4)
When you're done, stop the application using:
```sh
docker compose down
```

The /agents folder contains pre-built agents that you can use as a starting point.
- Web Search Agent: A simple agent that can search the web.
- Agno Assist: An agent that can help answer questions about Agno.
  - Important: Make sure to load the agno_assist knowledge base before using this agent (see the sketch after this list).
- Finance Agent: An agent that uses the YFinance API to get stock prices and financial data.
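As referenced above, here is a minimal sketch of loading the Agno Assist knowledge base before first use. The import path, factory name, and loading call are assumptions based on Agno's knowledge API; check the /agents folder for the actual module and any provided load script.

```python
# Hypothetical sketch: load the agno_assist knowledge base before first use.
# The import path `agents.agno_assist` and the factory name are assumptions;
# check the /agents folder for the actual module and loading mechanism.
from agents.agno_assist import get_agno_assist  # placeholder factory name

agent = get_agno_assist()
if agent.knowledge is not None:
    # `recreate=False` keeps existing embeddings if the knowledge was loaded before.
    agent.knowledge.load(recreate=False)
```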
To set up your local virtual environment:
We use uv for Python environment and package management. Install it by following the uv documentation, or use the command below for Unix-like systems:

```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Run the dev_setup.sh script. This will create a virtual environment and install project dependencies:

```sh
./scripts/dev_setup.sh
```

Activate the created virtual environment:

```sh
source .venv/bin/activate
```

(On Windows, the command might differ, e.g., .venv\Scripts\activate.)
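To sanity-check the environment, a small sketch like this can be run inside the activated virtualenv. The package names are assumptions; adjust them to match the [dependencies] section of pyproject.toml.

```python
# Sketch: confirm key dependencies are installed in the active virtualenv.
# The package names are assumptions; adjust them to match pyproject.toml.
from importlib.metadata import PackageNotFoundError, version

for package in ("fastapi", "agno", "sqlalchemy"):
    try:
        print(f"{package}: {version(package)}")
    except PackageNotFoundError:
        print(f"{package}: not installed")
```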
If you need to add or update Python dependencies:
Add or update your desired Python package dependencies in the [dependencies] section of the pyproject.toml file.
The requirements.txt file is used to build the application image. After modifying pyproject.toml, regenerate requirements.txt using:
```sh
./scripts/generate_requirements.sh
```

To upgrade all existing dependencies to their latest compatible versions, run:

```sh
./scripts/generate_requirements.sh upgrade
```

Rebuild your Docker images to include the updated dependencies:

```sh
docker compose up -d --build
```

Need help, have a question, or want to connect with the community?
- 📚 Read the Agno Docs for more in-depth information.
- 💬 Chat with us on Discord for live discussions.
- ❓ Ask a question on Discourse for community support.
- 🐛 Report an Issue on GitHub if you find a bug or have a feature request.
This repository includes a Dockerfile for building a production-ready container image of the application.
The general process to run in production is:
- Update the scripts/build_image.sh file and set your IMAGE_NAME and IMAGE_TAG variables.
- Build and push the image to your container registry:

  ```sh
  ./scripts/build_image.sh
  ```

- Run in your cloud provider of choice.
- Configure for Production
  - Ensure your production environment variables (e.g., OPENAI_API_KEY, database connection strings) are securely managed. Most cloud providers offer a way to set these as environment variables for your deployed service.
  - Review the agent configurations in the /agents directory and ensure they are set up for your production needs (e.g., correct model versions, any production-specific settings).
- Build Your Production Docker Image
  - Update the scripts/build_image.sh script to set your desired IMAGE_NAME and IMAGE_TAG (e.g., your-repo/agent-api:v1.0.0).
  - Run the script to build and push the image: ./scripts/build_image.sh
- Deploy to a Cloud Service

  With your image in a registry, you can deploy it to various cloud services that support containerized applications. Some common options include:

  - Serverless Container Platforms:
    - Google Cloud Run: A fully managed platform that automatically scales your stateless containers. Ideal for HTTP-driven applications.
    - AWS App Runner: Similar to Cloud Run, AWS App Runner makes it easy to deploy containerized web applications and APIs at scale.
    - Azure Container Apps: Build and deploy modern apps and microservices using serverless containers.
  - Container Orchestration Services:
    - Amazon Elastic Container Service (ECS): A highly scalable, high-performance container orchestration service that supports Docker containers. Often used with AWS Fargate for serverless compute or EC2 instances for more control.
    - Google Kubernetes Engine (GKE): A managed Kubernetes service for deploying, managing, and scaling containerized applications using Google infrastructure.
    - Azure Kubernetes Service (AKS): A managed Kubernetes service for deploying and managing containerized applications in Azure.
  - Platform as a Service (PaaS) with Docker Support:
    - Railway.app: Offers a simple way to deploy applications from a Dockerfile. It handles infrastructure, scaling, and networking.
    - Render: Another platform that simplifies deploying Docker containers, databases, and static sites.
    - Heroku: While traditionally known for buildpacks, Heroku also supports deploying Docker containers.
  - Specialized Platforms:
    - Modal: A platform designed for running Python code (including web servers like FastAPI) in the cloud, often with a focus on batch jobs, scheduled functions, and model inference, but can also serve web endpoints.
The specific deployment steps will vary depending on the chosen provider. Generally, you'll point the service to your container image in the registry and configure aspects like port mapping (the application runs on port 8000 by default inside the container), environment variables, scaling parameters, and any necessary database connections.
- Database Configuration
- The default docker-compose.yml sets up a PostgreSQL database for local development. In production, you will typically use a managed database service provided by your cloud provider (e.g., AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL) for better reliability, scalability, and manageability.
- Ensure your deployed application is configured with the correct database connection URL for your production database instance. This is usually set via an environment variable (see the sketch below).
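For example, the application might read its connection settings from the environment along these lines. The variable names (DATABASE_URL, DB_USER, etc.) and the helper are hypothetical; check the repo's database settings module for the names it actually uses.

```python
# Hypothetical sketch: build the production database URL from environment variables.
# All variable names below are placeholders; the repo's own settings module
# defines the names it actually reads.
import os

def get_database_url() -> str:
    # Prefer a single DATABASE_URL if the platform provides one...
    url = os.getenv("DATABASE_URL")
    if url:
        return url
    # ...otherwise assemble it from individual parts (all placeholders).
    user = os.environ["DB_USER"]
    password = os.environ["DB_PASS"]
    host = os.environ["DB_HOST"]
    port = os.getenv("DB_PORT", "5432")
    name = os.environ["DB_DATABASE"]
    return f"postgresql+psycopg://{user}:{password}@{host}:{port}/{name}"

if __name__ == "__main__":
    print(get_database_url())
```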