# npm-chatbot

A powerful, flexible, and type-safe AI chatbot library for TypeScript/JavaScript applications, with support for OpenAI, Anthropic Claude, Google Gemini, Meta Llama, xAI Grok, DeepSeek, and Ollama (local models).
This is the TypeScript/JavaScript implementation of our multi-language chatbot library:
- npm-chatbot - TypeScript/JavaScript (this package)
- php-chatbot - PHP implementation
- go-chatbot - Go implementation
All implementations share the same API design and features, making it easy to switch between languages or maintain consistency across polyglot projects.
## Supported Providers

- OpenAI - GPT-4o, GPT-4 Turbo, o1, GPT-4o-mini
- Anthropic - Claude Sonnet 4.5, Haiku 4.5, Opus 4.1
- Google AI - Gemini 2.0 Flash, Gemini 1.5 Pro/Flash
- Meta Llama - Llama 3.3 70B, Llama 3.2 90B Vision, Llama 3.1 405B
- xAI Grok - Grok 2, Grok 2 Vision, Grok Beta
- DeepSeek - DeepSeek Chat, DeepSeek Reasoner
- Ollama - Any local model (llama3, mistral, codellama, etc.)
## Features

- Type-Safe - Full TypeScript support with comprehensive type definitions
- Conversation Memory - Built-in conversation history management
- Streaming Support - Real-time response streaming for all providers
- Error Handling - Comprehensive error handling with retry logic
- Usage Tracking - Token usage and cost tracking
- Message Filtering - Profanity, aggression, and link filtering
- Content Moderation - AI-powered content safety via the OpenAI Moderation API plus custom rules
- Violence Detection - Pattern-based harmful content detection
- Hate Speech Detection - Multi-pattern hate speech filtering
- Risk Assessment - Real-time content risk level analysis
- Caching & Rate Limiting - Efficient moderation with result caching
- Extensively Tested - 94% test coverage with 965+ tests
- Universal - Works in Node.js and modern browsers
- Tree-Shakeable - Optimized bundle size with ESM support
- Comprehensive Examples - 10+ usage examples included
## Installation

```bash
# Using npm
npm install @rumenx/chatbot
# Using yarn
yarn add @rumenx/chatbot
# Using pnpm
pnpm add @rumenx/chatbot
```

Install the AI provider SDK(s) you plan to use:

```bash
# For OpenAI (required for OpenAI provider and content moderation)
npm install openai
# For Anthropic Claude
npm install @anthropic-ai/sdk
# For Google Gemini
npm install @google/generative-ai
```

Note: Meta Llama, xAI Grok, DeepSeek, and Ollama providers use OpenAI-compatible APIs, so they only require the `openai` package (already installed above).
All provider dependencies are optional peer dependencies, so you only install what you need.
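If you want your application to fail fast when an optional SDK is missing, you can probe for it with a dynamic import. A minimal sketch (the `ensureOpenAiSdk` helper is illustrative, not part of the library):

```typescript
// Illustrative helper: verify the optional "openai" peer dependency is
// installed before constructing an OpenAI-backed chatbot.
async function ensureOpenAiSdk(): Promise<void> {
  try {
    await import('openai');
  } catch {
    throw new Error(
      'The OpenAI provider needs the "openai" package: npm install openai'
    );
  }
}
```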
## Quick Start

```typescript
import { Chatbot } from '@rumenx/chatbot';
// Initialize chatbot with OpenAI
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o', // Latest: 'gpt-4o', 'gpt-4-turbo', 'o1-preview'
},
temperature: 0.7,
maxTokens: 150,
});
// Send a message
const response = await chatbot.chat({
message: 'Hello! How are you?',
metadata: {
sessionId: 'user-123',
userId: 'user-123',
},
});
console.log(response.content);
// Output: "Hello! I'm doing well, thank you for asking. How can I help you today?"
```

## Advanced Configuration

```typescript
import { Chatbot } from '@rumenx/chatbot';
import type { ChatbotConfig } from '@rumenx/chatbot';
const config: ChatbotConfig = {
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o', // Latest: 'gpt-4o', 'gpt-4-turbo', 'o1-preview' (Note: gpt-3.5-turbo deprecated)
},
systemPrompt: 'You are a helpful AI assistant.',
temperature: 0.7,
maxTokens: 500,
enableMemory: true,
maxHistory: 20,
security: {
enableInputFilter: true,
enableOutputFilter: true,
maxInputLength: 4000,
},
rateLimit: {
enabled: true,
requestsPerMinute: 10,
requestsPerHour: 100,
},
};
const chatbot = new Chatbot(config);
// Simple chat
const response = await chatbot.chat({
message: 'What is the capital of France?',
metadata: {
sessionId: 'session-1',
userId: 'user-1',
},
});
console.log(response.content); // "The capital of France is Paris."
console.log(response.metadata.usage); // { promptTokens: 15, completionTokens: 8, totalTokens: 23 }
```

## Anthropic Claude

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'anthropic',
apiKey: process.env.ANTHROPIC_API_KEY!,
model: 'claude-sonnet-4-5-20250929', // Latest: Claude Sonnet 4.5 (Sep 2025), Opus 4.1, Haiku 4.5
},
temperature: 0.8,
maxTokens: 1000,
});
const response = await chatbot.chat({
message: 'Explain quantum computing in simple terms.',
metadata: {
sessionId: 'session-2',
userId: 'user-2',
},
});
console.log(response.content);
```

## Google Gemini

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'google',
apiKey: process.env.GOOGLE_API_KEY!,
model: 'gemini-2.0-flash-exp', // Latest: 'gemini-2.0-flash-exp', 'gemini-1.5-pro', 'gemini-1.5-flash'
},
temperature: 0.9,
maxTokens: 800,
});
const response = await chatbot.chat({
message: 'Write a haiku about programming.',
metadata: {
sessionId: 'session-3',
userId: 'user-3',
},
});
console.log(response.content);
```

## Meta Llama

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'meta',
apiKey: process.env.TOGETHER_API_KEY!, // Together AI API key
model: 'llama-3.3-70b-instruct', // Latest: 'llama-3.3-70b-instruct', 'llama-3.2-90b-vision-instruct', 'llama-3.1-405b-instruct'
endpoint: 'https://api.together.xyz/v1', // Optional, defaults to Together AI
},
temperature: 0.7,
maxTokens: 500,
});
const response = await chatbot.chat({
message: 'Explain the concept of neural networks.',
metadata: {
sessionId: 'session-4',
userId: 'user-4',
},
});
console.log(response.content);
```

## xAI Grok

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'xai',
apiKey: process.env.XAI_API_KEY!,
model: 'grok-2-latest', // Latest: 'grok-2-latest', 'grok-2-vision-latest', 'grok-beta'
},
temperature: 0.8,
maxTokens: 800,
});
const response = await chatbot.chat({
message: 'What are the latest trends in AI?',
metadata: {
sessionId: 'session-5',
userId: 'user-5',
},
});
console.log(response.content);
```

## DeepSeek

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'deepseek',
apiKey: process.env.DEEPSEEK_API_KEY!,
model: 'deepseek-chat', // Options: 'deepseek-chat', 'deepseek-reasoner'
},
temperature: 0.7,
maxTokens: 1000,
});
const response = await chatbot.chat({
message: 'Write a function to calculate fibonacci numbers.',
metadata: {
sessionId: 'session-6',
userId: 'user-6',
},
});
console.log(response.content);
```

## Ollama (Local Models)

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'ollama',
model: 'llama3', // Any Ollama model: 'llama3', 'mistral', 'codellama', etc.
endpoint: 'http://localhost:11434/v1', // Your Ollama server URL
},
temperature: 0.7,
maxTokens: 500,
});
const response = await chatbot.chat({
message: 'Hello! Tell me about yourself.',
metadata: {
sessionId: 'session-7',
userId: 'user-7',
},
});
console.log(response.content);
```

## Message Filtering

Protect your application from inappropriate content:

```typescript
import { Chatbot, MessageFilterMiddleware } from '@rumenx/chatbot';
// Create filter with custom configuration
const filter = new MessageFilterMiddleware({
profanities: ['badword1', 'badword2'], // Custom word list
aggressionPatterns: ['hate', 'kill', 'attack'], // Patterns to detect
linkPattern: 'https?:\\/\\/[\\w\\.-]+', // URL detection
enableRephrasing: true, // Rephrase instead of blocking
replacementText: '[filtered]', // Replacement text
instructions: [
'You must maintain a respectful tone.',
'Do not share external links.',
'Use appropriate language at all times.',
],
});
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o',
},
});
// Filter user message before sending
const userMessage = 'This is a fucking great product!';
const filterResult = filter.filter(userMessage);
if (filterResult.shouldBlock) {
console.log('Message blocked:', filterResult.filterReasons);
} else {
console.log('Filtered message:', filterResult.message);
// Output: "This is a [filtered] great product!"
const response = await chatbot.chat({
message: filterResult.message,
context: filterResult.context, // Includes system instructions
});
}
```

## Content Moderation

AI-powered content safety with OpenAI's Moderation API:

```typescript
import { ContentModerationService } from '@rumenx/chatbot';
// Create moderation service
const moderation = new ContentModerationService({
apiKey: process.env.OPENAI_API_KEY, // For OpenAI moderation API
useOpenAI: true, // Use OpenAI's moderation API
useCustomRules: true, // Also use custom pattern matching
threshold: 0.5, // Block threshold (0-1)
enableCache: true, // Cache results
cacheTTL: 300000, // 5 minutes
});
// Check content before processing
const message = 'I want to hurt someone';
const result = await moderation.moderate(message);
if (result.shouldBlock) {
console.log('Content blocked!');
console.log('Flagged categories:', result.categories);
// Output: ['violence', 'violence/graphic']
console.log('Scores:', result.scores);
} else {
// Safe to process
const response = await chatbot.chat({ message });
}
// Get risk statistics
const stats = await moderation.getStats(message);
console.log('Risk Level:', stats.riskLevel); // 'low', 'medium', 'high', or 'critical'
console.log('Is Safe:', stats.isSafe);
console.log('Highest Risk:', stats.highestCategory);
```

## Combined Protection

Use both filtering and moderation for maximum protection:

```typescript
import {
Chatbot,
MessageFilterMiddleware,
ContentModerationService,
} from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o',
},
});
const filter = new MessageFilterMiddleware();
const moderation = new ContentModerationService({
apiKey: process.env.OPENAI_API_KEY,
useOpenAI: true,
useCustomRules: true,
});
async function safeChat(userMessage: string) {
// 1. Filter profanity and links
const filterResult = filter.filter(userMessage);
if (filterResult.shouldBlock) {
return 'Your message contains inappropriate language.';
}
// 2. Check for harmful content
if (await moderation.shouldBlock(filterResult.message)) {
return 'Your message violates our content policy.';
}
// 3. Safe to send to AI
const response = await chatbot.chat({
message: filterResult.message,
context: filterResult.context,
});
return response.content;
}
const reply = await safeChat('Tell me about AI safety');
console.log(reply);
```

## Streaming Responses

Stream responses in real-time for better UX:

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o',
},
});
// Stream responses
const stream = chatbot.chatStream({
message: 'Tell me a story about a robot.',
metadata: {
sessionId: 'session-4',
userId: 'user-4',
},
});
// Process the stream
for await (const chunk of stream) {
process.stdout.write(chunk); // Print each chunk as it arrives
}
console.log('\n✅ Stream complete!');
```

## Conversation Memory

The chatbot automatically maintains conversation history:

```typescript
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini', // Cost-effective model for conversations
},
enableMemory: true,
maxHistory: 10, // Keep last 10 messages
});
const sessionId = 'user-session-123';
// First message
await chatbot.chat({
message: 'My name is Alice.',
metadata: { sessionId, userId: 'alice' },
});
// Second message - chatbot remembers the context
const response = await chatbot.chat({
message: 'What is my name?',
metadata: { sessionId, userId: 'alice' },
});
console.log(response.content); // "Your name is Alice."
// Get conversation history
const history = chatbot.getConversationHistory(sessionId);
console.log(history); // Array of all messages in the session
```
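Session histories can also be cleared explicitly, for example when a user asks to start over. A small sketch using the library's `clearConversationHistory` method (the `/reset` command is just an example convention):

```typescript
import { Chatbot } from '@rumenx/chatbot';

// Illustrative: wipe a session's history when the user types "/reset".
function maybeResetSession(
  chatbot: Chatbot,
  sessionId: string,
  text: string
): string | null {
  if (text.trim() === '/reset') {
    chatbot.clearConversationHistory(sessionId);
    return 'Conversation cleared. What would you like to talk about?';
  }
  return null; // not a reset command; send to chatbot.chat(...) as usual
}
```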
## Using with React

Here's an example of integrating the chatbot into a React application:

```tsx
import React, { useState } from 'react';
import { Chatbot } from '@rumenx/chatbot';
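// Note: this example calls the provider directly from the browser, which
// exposes your API key. In production, proxy chat requests through a backend
// (see the Express example below).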
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
},
});
function ChatComponent() {
const [messages, setMessages] = useState<
Array<{ role: string; content: string }>
>([]);
const [input, setInput] = useState('');
const [loading, setLoading] = useState(false);
const sendMessage = async () => {
if (!input.trim()) return;
// Add user message
setMessages((prev) => [...prev, { role: 'user', content: input }]);
setLoading(true);
try {
const response = await chatbot.chat({
message: input,
metadata: {
sessionId: 'react-session',
userId: 'react-user',
},
});
// Add assistant response
setMessages((prev) => [
...prev,
{ role: 'assistant', content: response.content },
]);
} catch (error) {
console.error('Chat error:', error);
} finally {
setLoading(false);
setInput('');
}
};
return (
<div className="chat-container">
<div className="messages">
{messages.map((msg, idx) => (
<div key={idx} className={`message ${msg.role}`}>
<strong>{msg.role}:</strong> {msg.content}
</div>
))}
</div>
<div className="input-area">
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && sendMessage()}
placeholder="Type a message..."
disabled={loading}
/>
<button onClick={sendMessage} disabled={loading}>
{loading ? 'Sending...' : 'Send'}
</button>
</div>
</div>
);
}
export default ChatComponent;
```

## Using with Express.js

Here's an example of using the chatbot in an Express.js API:

```typescript
import express from 'express';
import { Chatbot } from '@rumenx/chatbot';
const app = express();
app.use(express.json());
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
},
enableMemory: true,
});
// Chat endpoint
app.post('/api/chat', async (req, res) => {
try {
const { message, sessionId, userId } = req.body;
if (!message || !sessionId || !userId) {
return res.status(400).json({ error: 'Missing required fields' });
}
const response = await chatbot.chat({
message,
metadata: { sessionId, userId },
});
res.json({
content: response.content,
metadata: response.metadata,
});
} catch (error) {
console.error('Chat error:', error);
res.status(500).json({ error: 'Internal server error' });
}
});
// Streaming endpoint
app.post('/api/chat/stream', async (req, res) => {
try {
const { message, sessionId, userId } = req.body;
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
const stream = chatbot.chatStream({
message,
metadata: { sessionId, userId },
});
for await (const chunk of stream) {
res.write(`data: ${JSON.stringify({ chunk })}\n\n`);
}
res.write('data: [DONE]\n\n');
res.end();
} catch (error) {
console.error('Stream error:', error);
// Once streaming has started the headers are already sent, so we can only
// end the stream; otherwise return a proper 500 response.
if (res.headersSent) {
res.end();
} else {
res.status(500).json({ error: 'Internal server error' });
}
}
});
// Get conversation history
app.get('/api/chat/history/:sessionId', (req, res) => {
try {
const { sessionId } = req.params;
const history = chatbot.getConversationHistory(sessionId);
res.json({ history });
} catch (error) {
console.error('History error:', error);
res.status(500).json({ error: 'Internal server error' });
}
});
// Health check
app.get('/health', (req, res) => {
res.json({ status: 'ok', version: '1.0.0' });
});
app.listen(3000, () => {
console.log('Chatbot API running on http://localhost:3000');
});
```
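On the client side, the stream emitted by the endpoint above can be consumed with `fetch` and a stream reader. A rough sketch (assumes Node 18+; for simplicity it ignores SSE events split across network chunks):

```typescript
// Hypothetical client for the /api/chat/stream endpoint above.
async function readChatStream(message: string): Promise<void> {
  const response = await fetch('http://localhost:3000/api/chat/stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, sessionId: 's1', userId: 'u1' }),
  });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each event looks like: data: {"chunk":"..."}\n\n
    for (const line of decoder.decode(value).split('\n')) {
      if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;
      const { chunk } = JSON.parse(line.slice('data: '.length));
      process.stdout.write(chunk);
    }
  }
}
```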
## Model Recommendations

### OpenAI Models

| Model | Status | Use Case |
|---|---|---|
| `gpt-4o` | ✅ Recommended | Latest flagship model, best for complex tasks |
| `gpt-4o-mini` | ✅ Recommended | Cost-effective, great for most use cases |
| `gpt-4-turbo` | ✅ Supported | High performance, large context window |
| `o1-preview` | ✅ Supported | Advanced reasoning model |
| `o1-mini` | ✅ Supported | Faster reasoning model |
| `gpt-4` | ⚠️ Legacy | Still supported, but consider upgrading |
| `gpt-3.5-turbo` | ❌ Deprecated | Will be retired June 2025; use `gpt-4o-mini` instead |
### Anthropic Models

| Model | Status | Use Case |
|---|---|---|
| `claude-sonnet-4-5-20250929` | ✅ Recommended | Latest; smartest model for complex agents and coding |
| `claude-haiku-4-5-20251001` | ✅ Recommended | Fastest model with near-frontier intelligence |
| `claude-opus-4-1-20250805` | ✅ Recommended | Exceptional model for specialized reasoning |
| `claude-3-5-sonnet-20241022` | ✅ Supported | Previous generation (legacy) |
| `claude-3-5-sonnet-20240620` | ⚠️ Legacy | Consider upgrading to 4.5 |
| `claude-3-opus-20240229` | ⚠️ Legacy | Consider upgrading to 4.1 |
### Google Models

| Model | Status | Use Case |
|---|---|---|
| `gemini-2.0-flash-exp` | ✅ Recommended | Latest experimental model |
| `gemini-1.5-pro` | ✅ Recommended | Production-ready, 2M token context |
| `gemini-1.5-flash` | ✅ Recommended | Fast and efficient |
| `gemini-1.5-flash-8b` | ✅ Supported | Smallest, fastest, most affordable |
| `gemini-pro` | ⚠️ Legacy | Consider upgrading to 1.5 or 2.0 |
Note: Model availability and naming may change. Check your provider's documentation for the latest model names.
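One way to stay resilient to model renames is to read the model name from configuration instead of hard-coding it. A minimal sketch (the `CHATBOT_MODEL` variable name is an arbitrary choice):

```typescript
import { Chatbot } from '@rumenx/chatbot';

// Fall back to a sensible default when no model is configured.
const model = process.env.CHATBOT_MODEL ?? 'gpt-4o-mini';

const chatbot = new Chatbot({
  provider: {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model,
  },
});
```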
## Configuration Reference

```typescript
import type { ChatbotConfig } from '@rumenx/chatbot';
const config: ChatbotConfig = {
// Provider configuration (required)
provider: {
provider: 'openai', // 'openai' | 'anthropic' | 'google' | 'meta' | 'xai' | 'deepseek' | 'ollama'
apiKey: 'your-api-key',
model: 'gpt-4',
endpoint: 'https://api.openai.com/v1', // Optional: custom API endpoint
organizationId: 'org-123', // Optional: OpenAI organization ID
},
// Generation options
temperature: 0.7, // 0.0 to 1.0 (higher = more creative)
maxTokens: 500, // Maximum tokens in response
topP: 0.9, // Optional: nucleus sampling
frequencyPenalty: 0, // Optional: -2.0 to 2.0
presencePenalty: 0, // Optional: -2.0 to 2.0
stop: ['###'], // Optional: stop sequences
// System configuration
systemPrompt: 'You are a helpful AI assistant.', // Optional: system message
enableMemory: true, // Enable conversation history
maxHistory: 20, // Maximum messages to keep in memory
timeout: 30000, // Request timeout in ms
// Security settings
security: {
enableInputFilter: true, // Filter user input
enableOutputFilter: true, // Filter AI responses
maxInputLength: 4000, // Maximum input length
blockedPatterns: [/password/i], // Regex patterns to block
},
// Rate limiting
rateLimit: {
enabled: true,
requestsPerMinute: 10,
requestsPerHour: 100,
requestsPerDay: 1000,
},
// Logging
logLevel: 'info', // 'debug' | 'info' | 'warn' | 'error'
// Custom metadata
metadata: {
appName: 'My Chatbot App',
version: '1.0.0',
},
};
```

## Supported Models

### OpenAI

- `gpt-4o` - Latest GPT-4 Omni model (Oct 2024)
- `gpt-4o-mini` - Fast and cost-effective GPT-4 Omni mini
- `gpt-4-turbo` - GPT-4 Turbo (Apr 2024)
- `o1-preview` - Advanced reasoning model (preview)
- `o1-mini` - Fast reasoning model

### Anthropic

- `claude-sonnet-4-5-20250929` - Claude Sonnet 4.5 (Sep 2025) - Latest
- `claude-haiku-4-5-20251001` - Claude Haiku 4.5 (Oct 2025) - Fast & efficient
- `claude-opus-4-1-20250805` - Claude Opus 4.1 (Aug 2025) - Most capable

### Google AI

- `gemini-2.0-flash-exp` - Gemini 2.0 Flash (Experimental) - Latest
- `gemini-1.5-pro` - Gemini 1.5 Pro - High capability
- `gemini-1.5-flash` - Gemini 1.5 Flash - Fast & efficient

### Meta Llama

- `llama-3.3-70b-instruct` - Llama 3.3 70B (Latest)
- `llama-3.2-90b-vision-instruct` - Llama 3.2 90B with vision
- `llama-3.1-405b-instruct` - Llama 3.1 405B (Largest)

### xAI Grok

- `grok-2-latest` - Grok 2 (Latest)
- `grok-2-vision-latest` - Grok 2 with vision capabilities
- `grok-beta` - Grok Beta (Experimental features)

### DeepSeek

- `deepseek-chat` - DeepSeek Chat model
- `deepseek-reasoner` - DeepSeek Reasoner (Advanced reasoning)

### Ollama (Local Models)

- `llama3` - Llama 3 (8B, 70B variants)
- `mistral` - Mistral 7B
- `codellama` - Code Llama (7B, 13B, 34B, 70B)
- `phi3` - Microsoft Phi-3
- `gemma` - Google Gemma
- `qwen` - Alibaba Qwen
- Any other Ollama model - see the Ollama Model Library

Note: Ollama always requires an `endpoint` (your local server URL). The Meta, xAI, and DeepSeek providers use OpenAI-compatible APIs with default endpoints that can be overridden via `endpoint`. See the examples above.
## API Reference

### Constructor

```typescript
new Chatbot(config: ChatbotConfig)
```

### chat(options)

Send a message and get a response.

```typescript
const response = await chatbot.chat({
message: 'Hello!',
metadata: {
sessionId: 'session-id',
userId: 'user-id',
},
});
```

### chatStream(options)

Stream a response in real-time.

```typescript
for await (const chunk of chatbot.chatStream({ message: 'Hello!' })) {
console.log(chunk);
}
```

### getConversationHistory(sessionId)

Get the conversation history for a session.

```typescript
const history = chatbot.getConversationHistory('session-id');
```

### clearConversationHistory(sessionId)

Clear the conversation history for a session.

```typescript
chatbot.clearConversationHistory('session-id');
```

### updateConfig(config)

Update the chatbot configuration at runtime.

```typescript
chatbot.updateConfig({
temperature: 0.9,
maxTokens: 1000,
});
```

### getProviderInfo()

Get information about the current provider.

```typescript
const info = chatbot.getProviderInfo();
console.log(info); // { name: 'openai', model: 'gpt-4', ... }
```

## Type Definitions

```typescript
interface ChatbotConfig {
provider: AiProviderConfig;
temperature?: number;
maxTokens?: number;
enableMemory?: boolean;
maxHistory?: number;
security?: SecurityConfig;
rateLimit?: RateLimitConfig;
// ... more options
}
interface ChatOptions {
message: string;
metadata?: {
sessionId?: string;
userId?: string;
[key: string]: unknown;
};
}
interface ChatResponse {
content: string;
metadata: {
provider: string;
model: string;
usage?: {
promptTokens: number;
completionTokens: number;
totalTokens: number;
};
responseTime?: number;
};
}
```
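Because the options and response shapes are exported as types, you can build strongly typed wrappers around the client. A sketch (assuming `ChatOptions` and `ChatResponse` are exported as the definitions above suggest; `askOnce` is an illustrative name):

```typescript
import { Chatbot } from '@rumenx/chatbot';
import type { ChatOptions, ChatResponse } from '@rumenx/chatbot';

// Illustrative wrapper: forwards options and logs token usage when present.
async function askOnce(
  chatbot: Chatbot,
  options: ChatOptions
): Promise<ChatResponse> {
  const response = await chatbot.chat(options);
  if (response.metadata.usage) {
    console.log(`Total tokens used: ${response.metadata.usage.totalTokens}`);
  }
  return response;
}
```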
## Framework Integration

This library is framework-agnostic and can be used with any JavaScript framework or library. Here are examples for popular frameworks:

### React

```bash
npm install @rumenx/chatbot react
```

See the Using with React example above for a complete integration.
### Vue 3

The library works seamlessly with Vue 3. Here's a basic example:

```bash
npm install @rumenx/chatbot vue
```

```vue
<template>
<div class="chat-container">
<div
v-for="(msg, idx) in messages"
:key="idx"
:class="`message ${msg.role}`"
>
<strong>{{ msg.role }}:</strong> {{ msg.content }}
</div>
<input
v-model="input"
@keyup.enter="sendMessage"
placeholder="Type a message..."
/>
<button @click="sendMessage" :disabled="loading">
{{ loading ? 'Sending...' : 'Send' }}
</button>
</div>
</template>
<script setup lang="ts">
import { ref } from 'vue';
import { Chatbot } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: import.meta.env.VITE_OPENAI_API_KEY,
model: 'gpt-4o-mini',
},
});
const messages = ref<Array<{ role: string; content: string }>>([]);
const input = ref('');
const loading = ref(false);
const sendMessage = async () => {
if (!input.value.trim()) return;
messages.value.push({ role: 'user', content: input.value });
loading.value = true;
try {
const response = await chatbot.chat({
message: input.value,
metadata: { sessionId: 'vue-session', userId: 'vue-user' },
});
messages.value.push({ role: 'assistant', content: response.content });
} catch (error) {
console.error('Chat error:', error);
} finally {
loading.value = false;
input.value = '';
}
};
</script>
```

### Next.js

```bash
npm install @rumenx/chatbot next
```

```typescript
// app/api/chat/route.ts
import { Chatbot } from '@rumenx/chatbot';
import { NextRequest, NextResponse } from 'next/server';
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
},
});
export async function POST(request: NextRequest) {
const { message, sessionId, userId } = await request.json();
try {
const response = await chatbot.chat({
message,
metadata: { sessionId, userId },
});
return NextResponse.json(response);
} catch (error) {
return NextResponse.json({ error: 'Chat failed' }, { status: 500 });
}
}
```

## Error Handling

The library provides comprehensive error handling:

```typescript
import { Chatbot, ChatbotError } from '@rumenx/chatbot';
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4',
},
});
try {
const response = await chatbot.chat({
message: 'Hello!',
metadata: { sessionId: 'session-1', userId: 'user-1' },
});
console.log(response.content);
} catch (error) {
if (error instanceof ChatbotError) {
console.error('Error category:', error.category);
console.error('Error severity:', error.severity);
console.error('Is retryable:', error.isRetryable);
console.error('Retry delay:', error.retryDelay);
// Handle specific error types
switch (error.category) {
case 'authentication':
console.error('Authentication failed. Check your API key.');
break;
case 'rate_limit':
console.error('Rate limit exceeded. Retry after:', error.retryDelay);
break;
case 'validation':
console.error('Invalid input:', error.userMessage);
break;
case 'network':
console.error('Network error. Check your connection.');
break;
default:
console.error('Unexpected error:', error.message);
}
} else {
console.error('Unknown error:', error);
}
```

### Error Categories

- `authentication` - API key or authentication issues
- `rate_limit` - Rate limit exceeded
- `validation` - Invalid input or configuration
- `network` - Network connectivity issues
- `provider` - Provider-specific errors
- `timeout` - Request timeout
- `unknown` - Unexpected errors
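Because `ChatbotError` exposes `isRetryable` and `retryDelay`, a thin wrapper can retry transient failures (rate limits, network blips) automatically. A sketch, assuming `retryDelay` is a duration in milliseconds:

```typescript
import { Chatbot, ChatbotError } from '@rumenx/chatbot';

// Illustrative helper: retry chat calls that fail with a retryable error.
async function chatWithRetry(
  chatbot: Chatbot,
  message: string,
  maxAttempts = 3
): Promise<string> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const response = await chatbot.chat({
        message,
        metadata: { sessionId: 'retry-demo', userId: 'retry-demo' },
      });
      return response.content;
    } catch (error) {
      lastError = error;
      const retryable = error instanceof ChatbotError && error.isRetryable;
      if (!retryable || attempt === maxAttempts) break;
      // Honor the suggested delay if the error provides one.
      const delay = (error as ChatbotError).retryDelay ?? 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```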
## Security Best Practices

### Input and Output Filtering

```typescript
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
},
security: {
enableInputFilter: true, // Filter malicious input
enableOutputFilter: true, // Filter inappropriate responses
maxInputLength: 4000, // Prevent oversized inputs
blockedPatterns: [/password/i, /credit card/i, /social security/i],
},
});
```

### Rate Limiting

```typescript
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
},
rateLimit: {
enabled: true,
requestsPerMinute: 10,
requestsPerHour: 100,
requestsPerDay: 1000,
},
});
```

For more details, see SECURITY.md.
## Roadmap

We're constantly working to improve and expand the library. Here's what's planned for future releases:
### Planned Providers

The following providers are planned but not yet implemented. Community contributions are welcome!

| Provider | Status | Target Version | Notes |
|---|---|---|---|
| Mistral AI | Planned | v2.x | Mistral Large, Medium, Small models |
| Cohere | Planned | v2.x | Command R+, Command R models |
| Perplexity | Planned | v3.x | pplx-7b-online, pplx-70b-online |
### Planned Framework Integrations

| Framework | Status | Target Version | Description |
|---|---|---|---|
| React Components | Planned | v2.x | Pre-built `<ChatWidget />`, `<ChatInput />`, `<MessageList />` |
| Vue 3 Components | Planned | v2.x | Composition API components |
| Angular Components | Planned | v3.x | Standalone components for Angular 15+ |
| Svelte Components | Planned | v3.x | Svelte 5 components |
| Express Middleware | Planned | v2.x | `app.use(chatbot.middleware())` |
| Next.js Integration | Planned | v2.x | Server actions and route handlers |
| Fastify Plugin | Planned | v2.x | `fastify.register(chatbotPlugin)` |
### Planned Features

- Function Calling / Tool Use - Support for OpenAI functions, Anthropic tools
- Multi-Modal Support - Image, audio, and video inputs
- RAG Integration - Vector database integration for retrieval-augmented generation
- Prompt Templates - Pre-built templates for common use cases
- Agent Framework - Build autonomous agents with planning and execution
- Fine-tuning Support - Train and deploy custom models
- Cost Optimization - Automatic model selection based on budget
- A/B Testing - Test different models and prompts
### Currently Supported

- ✅ OpenAI (GPT-4o, GPT-4 Turbo, o1, GPT-4o-mini)
- ✅ Anthropic (Claude Sonnet 4.5, Haiku 4.5, Opus 4.1)
- ✅ Google AI (Gemini 2.0, Gemini 1.5 Pro/Flash)
- ✅ Meta Llama, xAI Grok, DeepSeek, and Ollama (OpenAI-compatible APIs)
- ✅ Streaming support for all providers
- ✅ Conversation memory management
- ✅ Type-safe TypeScript APIs
- ✅ Error handling with retries
- ✅ Rate limiting and security
- ✅ Token usage tracking
- ✅ 94% test coverage
Want to help implement these features? Check out our Contributing Guide and:
- Pick a feature from the roadmap
- Open an issue to discuss implementation
- Submit a PR with your implementation
- Get recognized as a contributor!
Priority is given to features with community interest. Open an issue to vote on features you'd like to see!
## Testing

The library has 94% test coverage with 965+ tests.

```bash
# Run all tests
npm test
# Run tests with coverage
npm run test:coverage
# Run tests in watch mode
npm run test:watch
# Run fast tests only
npm run test:fast
```

Example test:

```typescript
import { Chatbot } from '@rumenx/chatbot';
describe('Chatbot', () => {
it('should send a message and receive a response', async () => {
const chatbot = new Chatbot({
provider: {
provider: 'openai',
apiKey: 'test-key',
model: 'gpt-4o-mini',
},
});
const response = await chatbot.chat({
message: 'Hello!',
metadata: { sessionId: 'test', userId: 'test' },
});
expect(response.content).toBeDefined();
expect(response.metadata.provider).toBe('openai');
});
});
```

## Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

```bash
# Clone the repository
git clone https://github.com/RumenDamyanov/npm-chatbot.git
cd npm-chatbot
# Install dependencies
npm install
# Run tests
npm test
# Build the project
npm run build
# Run examples
npm run example:openai
```

Please read our Code of Conduct before contributing.
## License

This project is licensed under the MIT License - see the LICENSE.md file for details.
## Support

If you find this library helpful, please consider:
- Starring the repository
- Reporting bugs via GitHub Issues
- Suggesting new features
- Improving documentation
- Sponsoring the project
## Changelog

See CHANGELOG.md for version history and release notes.
Made with ❤️ by Rumen Damyanov