A powerful, modern TypeScript CLI development kit powered by Bun, designed to help you create robust command-line applications with ease. The toolkit provides a clean, structured way to build CLI commands using TypeScript, Zod for validation, and Bun for fast execution. It also includes a comprehensive AI system supporting multiple model providers (OpenAI, Anthropic, Ollama), so you can add artificial-intelligence capabilities to your CLI commands simply and efficiently.
- TypeScript First: Built with TypeScript for maximum type safety and developer experience
- Bun Powered: Leverages Bun's speed and modern features
- Clean Architecture: Implements hexagonal architecture with domain-driven design principles
- Data Validation: Built-in Zod schema validation for robust command handling
- Multi-AI Support: Native integration with OpenAI (GPT), Anthropic (Claude), and Ollama (open source models)
- Developer Experience: Includes ESLint and Prettier configuration out of the box
- Type Safety: Strict TypeScript configuration for reliable code
- Modern Patterns: Implements SOLID principles and clean code practices
- Advanced Logging: Flexible logging system with multiple output options
BunCLI-Kit includes a powerful logging system, built around the `LoggerService`, that helps you track and debug your application:
- Flexible Output: Support for console and file logging
- Log Levels: Different log levels (INFO, ERROR, DEBUG, etc.)
- Clean Interface: Implements the `LoggerPort` interface for easy extension
- Dependency Injection: Follows clean architecture principles
Example usage:
```typescript
// Inject the logger service
constructor(private readonly logger: LoggerPort) {}

// Use in your code
this.logger.info('Command executed successfully');
this.logger.error('An error occurred', error);
this.logger.debug('Debug information');
```

```bash
# Clone the repository
git clone https://github.com/offroadlabs/buncli-kit.git
cd buncli-kit

# Install dependencies
bun install

# See available commands
bun run help

# Create a new CLI command
bun run command:create <command-name>

# Remove a CLI command
bun run command:remove <command-name>
```

- Create a new command using the generator:
```bash
bun run command:create my-command
```

This will automatically:

- Create a new command file in `src/infrastructure/commands/MyCommandCommand.ts`
- Update `src/index.ts` to register the command
- Add a script to `package.json`
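The registration code in `src/index.ts` is not reproduced in this README. Under the assumption that it keeps a plain array of `CommandPort` implementations and dispatches on the first CLI argument (all names below are illustrative, not the generator's actual output), the wiring might look like:

```typescript
// Hypothetical sketch of src/index.ts wiring; the real generated code
// may differ. CommandPort mirrors the interface shown later in this README.
interface CommandPort {
  getName(): string;
  getDescription(): string;
  execute(args: string[]): Promise<void>;
}

class MyCommandCommand implements CommandPort {
  getName(): string {
    return 'my-command';
  }
  getDescription(): string {
    return 'Description of the my-command command';
  }
  async execute(args: string[]): Promise<void> {
    console.log('my-command command executed');
  }
}

// The registry is a plain array; the generator appends new entries here.
const commands: CommandPort[] = [new MyCommandCommand()];

// Dispatch picks the command whose name matches the first CLI argument.
export async function run(argv: string[]): Promise<string> {
  const [name, ...rest] = argv;
  const command = commands.find((c) => c.getName() === name);
  if (command === undefined) {
    return `Unknown command: ${name ?? '(none)'}`;
  }
  await command.execute(rest);
  return `Executed: ${command.getName()}`;
}
```

In a real entry point, `run` would presumably be invoked with `process.argv.slice(2)`.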
The generated command will have this structure:
```typescript
import { CommandPort } from '../../domain/ports/CommandPort';

export class MyCommandCommand implements CommandPort {
  getName(): string {
    return 'my-command';
  }

  getDescription(): string {
    return 'Description of the my-command command';
  }

  async execute(args: string[]): Promise<void> {
    // Implement your command logic here
    console.log('my-command command executed');
  }
}
```

To remove a command from your CLI:

```bash
bun run command:remove my-command
```

This will:

- Remove the command file
- Clean up the imports in `src/index.ts`
- Remove the script from `package.json`
- Use Zod schemas for command argument validation
- Follow the hexagonal architecture pattern:
  - `domain/`: Core business logic and interfaces
  - `infrastructure/`: Command implementations
  - `application/`: Application services
- Write clean, maintainable code following SOLID principles
- Use the provided ESLint and Prettier configuration
- Add tests for your commands using Bun's test runner
This project uses ESLint and Prettier to ensure consistent code style and catch potential issues early. The configuration is designed to enforce TypeScript best practices and maintain high code quality.
The project uses a modern flat configuration (eslint.config.js) with strict TypeScript rules:
```javascript
// Key ESLint rules:
{
  '@typescript-eslint/explicit-function-return-type': 'error',
  '@typescript-eslint/no-explicit-any': 'error',
  '@typescript-eslint/strict-boolean-expressions': 'error',
  '@typescript-eslint/no-floating-promises': 'error',
  '@typescript-eslint/no-misused-promises': 'error',
  'eqeqeq': 'error',
  'no-var': 'error',
  'prefer-const': 'error'
}
```

Code formatting is handled by Prettier with the following settings:
```json
{
  "semi": true,
  "trailingComma": "es5",
  "singleQuote": true,
  "printWidth": 100,
  "tabWidth": 2,
  "useTabs": false,
  "bracketSpacing": true,
  "arrowParens": "avoid"
}
```

```bash
# Run ESLint
bun run lint

# Fix auto-fixable issues
bun run lint --fix
```

BunCLI-Kit integrates a flexible system for interacting with different AI models through a clean, extensible architecture.
- AiModelService: Core service managing AI model interactions and validation
- IAiModel Interface: Base interface for all AI models
- Formatters: Formatting system to parse AI responses
- Factory Pattern: AI model creation via the `AiModelFactory` singleton
- Streaming Support: Built-in streaming capabilities for AI responses
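The `IAiModel` interface itself is not reproduced in this README. The following sketch is inferred from the usage examples further down (method names and option fields are assumptions, and the kit's real definitions may differ), paired with a trivial in-memory implementation to show the contract in action:

```typescript
// Shape inferred from the usage examples in this README; assumptions only.
interface AiResponse<T = string> {
  content: T | null;
}

interface GenerateOptions<T = string> {
  temperature?: number;
  systemPrompt?: string;
  schema?: unknown; // e.g. a Zod schema, when structured output is expected
  formatter?: (content: string) => T;
}

interface IAiModel {
  generate<T = string>(prompt: string, options?: GenerateOptions<T>): Promise<AiResponse<T>>;
  // Optional: models that support streaming yield partial responses.
  streamGenerate?<T = string>(
    prompt: string,
    options?: GenerateOptions<T>
  ): AsyncIterable<AiResponse<T>>;
}

// Trivial echo model demonstrating the contract, including formatters.
const echoModel: IAiModel = {
  async generate<T = string>(
    prompt: string,
    options?: GenerateOptions<T>
  ): Promise<AiResponse<T>> {
    const raw = `echo: ${prompt}`;
    const content = options?.formatter ? options.formatter(raw) : (raw as unknown as T);
    return { content };
  },
};
```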
BunCLI-Kit supports multiple AI model providers:
OpenAI:

- GPT models (GPT-3.5 Turbo, GPT-4, etc.)
- Full streaming support
- Compatible with Azure OpenAI and other OpenAI-compatible APIs
- Configuration via `OPENAI_API_KEY` and `OPENAI_BASE_URL`

Anthropic:

- Claude models (Claude 3 Opus, Sonnet, Haiku)
- Streaming support
- Configuration via `ANTHROPIC_API_KEY`

Ollama:

- Open source models (Mistral, Llama, CodeLlama, etc.)
- Local model execution
- Streaming support
- Configuration via `OLLAMA_BASE_URL`
Example usage with different models:
```typescript
// Using OpenAI
const openaiModel = this.aiModelService.createModel('openai', 'gpt-4');

// Using Anthropic
const anthropicModel = this.aiModelService.createModel('anthropic', 'claude-3-opus-20240229');

// Using Ollama
const ollamaModel = this.aiModelService.createModel('ollama', 'mistral');
```

The AI system provides a centralized service for managing AI models. Here's a complete example:
```typescript
import { CommandPort } from '@/domain/ports/CommandPort';
import { LoggerService } from '@/application/services/LoggerService';
import { AiModelService } from '@/application/services/AiModelService';
import { z } from 'zod';

// Define your schemas
const WeatherDataSchema = z.object({
  temperature: z.number(),
  conditions: z.string(),
  location: z.string(),
});

export class MyAiCommand implements CommandPort {
  private readonly logger;
  private readonly aiModelService;

  constructor() {
    this.logger = LoggerService.getInstance().getLogger({
      prefix: 'my-ai-command',
      timestamp: false,
    });
    this.aiModelService = AiModelService.getInstance();
  }

  async execute(): Promise<void> {
    const model = this.aiModelService.createModel('ollama', 'mistral');

    try {
      // Example with schema validation
      const response = await model.generate(
        'Give me the weather in Paris. For temperature, write 9 for 9°C, 10 for 10°C, etc.',
        {
          temperature: 0.7,
          systemPrompt: 'You are a weather reporter. Write in Spanish.',
          schema: WeatherDataSchema,
        }
      );

      // Validate response using AiModelService
      const isValid = await this.aiModelService.validateModelResponse(
        response.content,
        WeatherDataSchema
      );

      if (isValid) {
        this.logger.info('Weather data:');
        this.logger.info('- Temperature:', response.content?.temperature ?? 'N/A');
        this.logger.info('- Conditions:', response.content?.conditions ?? 'N/A');
        this.logger.info('- Location:', response.content?.location ?? 'N/A');
      } else {
        this.logger.error('Invalid response format');
      }

      // Example with streaming and transformation
      this.logger.info('\nStreaming response with transformation:');
      const upperCaseFormatter = (content: string): string => content.toUpperCase();

      if (model.streamGenerate) {
        for await (const chunk of model.streamGenerate<string>('Tell me a short story.', {
          temperature: 0.7,
          formatter: upperCaseFormatter,
          systemPrompt: 'in french.',
        })) {
          process.stdout.write(chunk.content ?? 'N/A');
        }
      }
    } catch (error) {
      this.logger.error('Error:', error);
    }
  }
}
```

- Centralized AI model management through `AiModelService`
- Strong typing with Zod schemas for AI responses
- Built-in response validation
- Support for multiple AI model types
- Streaming support with transformation
- Flexible configuration (temperature, system prompt)
- Comprehensive logging and error handling
In addition to schema validation, you can use custom formatters to transform responses. Here's how to use them with AiModelService:
```typescript
import { CommandPort } from '@/domain/ports/CommandPort';
import { LoggerService } from '@/application/services/LoggerService';
import { AiModelService } from '@/application/services/AiModelService';

export class MyFormatterCommand implements CommandPort {
  private readonly logger;
  private readonly aiModelService;

  constructor() {
    this.logger = LoggerService.getInstance().getLogger({
      prefix: 'my-formatter-command',
      timestamp: false,
    });
    this.aiModelService = AiModelService.getInstance();
  }

  async execute(): Promise<void> {
    const model = this.aiModelService.createModel('ollama', 'mistral');

    try {
      // Example with a simple formatter that converts text to uppercase
      const upperCaseFormatter = (content: string): string => content.toUpperCase();

      const response = await model.generate('Tell me a short story.', {
        temperature: 0.7,
        formatter: upperCaseFormatter,
        systemPrompt: 'You are a storyteller.',
      });

      this.logger.info('Uppercase story:', response.content);

      // Example with a formatter that adds a prefix and a suffix
      const wrapFormatter = (content: string): string => {
        return `✨ ${content} ✨`;
      };

      const wrappedResponse = await model.generate('Give me an inspiring quote.', {
        temperature: 0.7,
        formatter: wrapFormatter,
        systemPrompt: 'You are a motivational coach.',
      });

      this.logger.info('Decorated quote:', wrappedResponse.content);

      // Example with streaming and transformation
      this.logger.info('\nStreaming response with transformation:');

      if (model.streamGenerate) {
        for await (const chunk of model.streamGenerate<string>('Tell me a joke.', {
          temperature: 0.7,
          formatter: upperCaseFormatter,
          systemPrompt: 'You are a comedian.',
        })) {
          process.stdout.write(chunk.content ?? 'N/A');
        }
      }
    } catch (error) {
      this.logger.error('Error:', error);
    }
  }
}
```

Formatters can be used to:
- Transform text (uppercase, lowercase, etc.)
- Add decorations or formatting
- Clean or normalize responses
- Apply custom transformations
- Process responses before validation
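As one concrete instance of "clean or normalize responses", a formatter can strip the Markdown code fences that models often wrap JSON answers in, so the result can be parsed or schema-validated downstream. The `stripCodeFences` helper below is a hypothetical example, not shipped with the kit:

```typescript
// Hypothetical cleanup formatter: removes a leading/trailing Markdown
// code fence (with an optional language tag) and trims whitespace.
export function stripCodeFences(content: string): string {
  return content
    .replace(/^\s*```[a-zA-Z]*\s*/, '') // leading fence, possibly with a language tag
    .replace(/\s*```\s*$/, '') // trailing fence
    .trim();
}
```

Pass it as `formatter: stripCodeFences` in the `generate` options, the same way `upperCaseFormatter` is used above.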
I offer development and consulting services in the following areas:
- Modern Web Applications (Next.js, React, TypeScript)
- APIs and Microservices (Symfony, Node.js)
- Software Architecture and DevOps
- Technical Training and Support
- Custom Application Development
- Legacy System Migration and Modernization
- Performance Optimization
- Technical Consulting
- Frontend: TypeScript, React, Next.js, Tailwind
- Backend: PHP/Symfony, Node.js
- Mobile: Flutter, React Native
- DevOps: Docker, CI/CD, AWS
- Databases: PostgreSQL, MySQL, MongoDB
For any collaboration or custom development requests:
- 📧 Email: [email protected]
- 📝 Blog: https://timoner.com
- 🌐 Website: https://offroadlabs.com
- 📅 Calendar: Schedule a meeting
- 📍 Location: Aix-en-Provence, France
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Feel free to open an issue or submit a pull request.
Developed by SΓ©bastien TIMONER
BunCLI-Kit uses environment variables to manage sensitive configuration such as API keys. A `.env.dist` template file is provided. To configure your environment:
- Copy the template file:

```bash
cp .env.dist .env
```

- Modify the `.env` file with your configurations:

```bash
# OpenAI Configuration
OPENAI_API_KEY=your-openai-api-key
OPENAI_BASE_URL=https://api.openai.com/v1 # Optional, can be modified to use other compatible services

# Anthropic Configuration
ANTHROPIC_API_KEY=your-anthropic-api-key

# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434 # Optional, modify if Ollama is on a different host
```

The system supports different services compatible with the OpenAI API. Here are some configuration examples:
```bash
# Standard OpenAI
OPENAI_BASE_URL=https://api.openai.com/v1

# Azure OpenAI
OPENAI_BASE_URL=https://your-resource.openai.azure.com

# LocalAI
OPENAI_BASE_URL=http://localhost:8080/v1

# Other compatible services
OPENAI_BASE_URL=https://api.your-service.com/v1
```

- The `.env` file is ignored by Git to protect your sensitive information
- Never commit your real API keys to the repository
- Keep a secure backup of your API keys
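Since Bun loads `.env` automatically, the variables above are available on `process.env` at runtime. A small guard can fail fast with a clear message instead of sending an empty API key to a provider; the `requireEnv` helper below is illustrative, not part of the kit:

```typescript
// Read a required environment variable, throwing when it is missing or empty.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Optional variables can fall back to their documented defaults:
export const ollamaBaseUrl = process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434';
```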