Complete Guide to Kepler AI SDK with MCP Integration

This guide covers everything you need to know about using the Kepler AI SDK, from basic usage to advanced MCP (Model Context Protocol) integration.

Table of Contents

  1. Overview
  2. Installation & Setup
  3. Basic Usage
  4. MCP Integration
  5. Advanced Patterns
  6. Provider-Specific Features
  7. Production Considerations
  8. Migration from Other SDKs
  9. Troubleshooting

Overview

The Kepler AI SDK provides:

  • Unified Interface: Single API across multiple LLM providers
  • MCP Integration: External tool capabilities via Model Context Protocol
  • Production Ready: Built-in error handling, token tracking, cost calculation
  • TypeScript First: Complete type safety and excellent DX
  • Official SDKs: Uses native provider SDKs under the hood

Architecture

┌─────────────────────────────────────────────────────────────┐
│                        Kepler                               │
│  ┌─────────────────┐    ┌─────────────────────────────────┐ │
│  │   ModelManager  │    │         MCPManager              │ │
│  │                 │    │  ┌───────────┐ ┌───────────┐    │ │
│  │ ┌─────────────┐ │    │  │ MCP Server│ │ MCP Server│    │ │
│  │ │Provider     │ │    │  │     1     │ │     2     │    │ │
│  │ │Adapters     │ │    │  └───────────┘ └───────────┘    │ │
│  │ └─────────────┘ │    └─────────────────────────────────┘ │
│  └─────────────────┘                                        │
└─────────────────────────────────────────────────────────────┘

Installation & Setup

1. Install the SDK

bun add kepler-ai-sdk

2. Install Provider SDKs

# Install the providers you need
bun add openai @anthropic-ai/sdk @google/generative-ai @mistralai/mistralai cohere-ai

3. Install MCP Dependencies

# Core MCP SDK (already a dependency of kepler-ai-sdk; install directly only if you import it yourself)
bun add @modelcontextprotocol/sdk

# Popular MCP servers
npm install -g @modelcontextprotocol/server-filesystem
npm install -g @modelcontextprotocol/server-git
npm install -g @modelcontextprotocol/server-sqlite
npm install -g @modelcontextprotocol/server-memory

4. Set Environment Variables

# LLM Provider API Keys
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key" 
export GOOGLE_GENERATIVE_AI_API_KEY="your-gemini-key"
export MISTRAL_API_KEY="your-mistral-key"
export COHERE_API_KEY="your-cohere-key"

# Optional: For MCP servers that need API access
export EXA_API_KEY="your-exa-key"  # For web search servers

Basic Usage

Simple Text Completion

import { Kepler, AnthropicProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ]
});

const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { role: "user", content: "Explain quantum computing in simple terms" }
  ],
  temperature: 0.7,
  maxTokens: 500
});

console.log(response.content);
console.log(`Tokens used: ${response.usage.totalTokens}`);

Multi-Provider Setup

import { 
  Kepler, 
  OpenAIProvider, 
  AnthropicProvider, 
  GeminiProvider 
} from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: process.env.OPENAI_API_KEY!
      })
    },
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    },
    {
      provider: new GeminiProvider({
        apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY!
      })
    }
  ]
});

// Use any model from any provider
const gptResponse = await kepler.generateCompletion({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello GPT!" }]
});

const claudeResponse = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620", 
  messages: [{ role: "user", content: "Hello Claude!" }]
});

const geminiResponse = await kepler.generateCompletion({
  model: "gemini-2.0-flash-exp",
  messages: [{ role: "user", content: "Hello Gemini!" }]
});

Streaming Responses

for await (const chunk of kepler.streamCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { role: "user", content: "Write a short story about AI" }
  ]
})) {
  if (chunk.delta) {
    process.stdout.write(chunk.delta);
  }
  
  if (chunk.finished) {
    console.log(`\n\nCompleted! Tokens: ${chunk.usage?.totalTokens}`);
  }
}

Tool Calling (Without MCP)

const response = await kepler.generateCompletion({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What's the weather in New York?" }
  ],
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] }
        },
        required: ["city"]
      }
    }
  ]
});

// Handle tool calls manually
if (response.toolCalls) {
  for (const call of response.toolCalls) {
    console.log(`Tool: ${call.name}`);
    console.log(`Args:`, call.arguments);
    
    // Execute your tool logic here (fetchWeather is your own function)
    if (call.name === "get_weather") {
      const weatherData = await fetchWeather(call.arguments.city as string);
      // Continue the conversation with the results (see the sketch below)
    }
  }
}
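
To close the loop, feed the tool output back to the model in a follow-up request. A minimal sketch, assuming tool results are passed as role "tool" messages keyed by the originating call's ID (the toolCallId field, call.id, and the assistant-message shape are assumptions; check the message types exported by kepler-ai-sdk):

// Hypothetical continuation: the field names below are assumptions, not confirmed SDK API
const toolMessages = await Promise.all(
  (response.toolCalls ?? []).map(async (call) => ({
    role: "tool" as const,
    toolCallId: call.id,
    content: JSON.stringify(
      call.name === "get_weather"
        ? await fetchWeather(call.arguments.city as string)
        : { error: `unknown tool ${call.name}` }
    )
  }))
);

const followUp = await kepler.generateCompletion({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What's the weather in New York?" },
    { role: "assistant", content: response.content, toolCalls: response.toolCalls },
    ...toolMessages
  ]
});

console.log(followUp.content);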

MCP Integration

MCP (Model Context Protocol) allows you to connect external servers that provide tools, making LLMs capable of interacting with filesystems, databases, APIs, and more.

Basic MCP Setup

import { Kepler, AnthropicProvider, MCPServerConfig } from 'kepler-ai-sdk';

const mcpServers: MCPServerConfig[] = [
  {
    id: "filesystem", 
    name: "File System Server",
    command: "npx",
    args: ["@modelcontextprotocol/server-filesystem", process.cwd()],
    env: {} // Optional environment variables
  }
];

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ],
  mcpServers,
  autoDiscoverTools: true // Default: true
});

// LLM automatically has access to filesystem tools
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { role: "user", content: "List the files in the current directory and show me the contents of package.json" }
  ]
});

console.log(response.content);

Multi-Server MCP Setup

const mcpServers: MCPServerConfig[] = [
  // File system access
  {
    id: "filesystem",
    name: "File System Server",
    command: "npx", 
    args: ["@modelcontextprotocol/server-filesystem", process.cwd()]
  },
  
  // Git repository access
  {
    id: "git",
    name: "Git Server",
    command: "npx",
    args: ["@modelcontextprotocol/server-git", "--repository", process.cwd()]
  },
  
  // SQLite database access
  {
    id: "database",
    name: "SQLite Server", 
    command: "npx",
    args: ["@modelcontextprotocol/server-sqlite", "/path/to/database.db"]
  },
  
  // Memory/knowledge management
  {
    id: "memory",
    name: "Memory Server",
    command: "npx", 
    args: ["@modelcontextprotocol/server-memory"]
  }
];

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ],
  mcpServers
});

// LLM can now access files, git, database, and persistent memory
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { 
      role: "user", 
      content: "Check the git status, read the README file, and store a summary of this project in memory for future reference" 
    }
  ]
});

Custom MCP Servers

const customMCPServers: MCPServerConfig[] = [
  {
    id: "custom-api",
    name: "Custom API Server",
    command: "python",
    args: ["/path/to/your/mcp_server.py"],
    env: {
      API_KEY: process.env.CUSTOM_API_KEY!,
      DEBUG: "false"
    },
    cwd: "/path/to/server/directory"
  }
];
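
The server itself can be written in any language that speaks the protocol. Here is a minimal stdio server in TypeScript using the official @modelcontextprotocol/sdk (API shown as of recent SDK versions; check the MCP documentation if signatures have changed):

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "custom-api", version: "1.0.0" });

// Expose one tool; Kepler discovers it automatically on connect
server.tool(
  "add_numbers",
  { a: z.number(), b: z.number() },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }]
  })
);

// Communicate over stdio, matching the command/args config above
await server.connect(new StdioServerTransport());

Point the command/args config at this file (for example command: "bun", args: ["./mcp_server.ts"]) and Kepler launches it like any other server.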

Adding User-Defined Tools with MCP

import { ToolDefinition, ToolHandler } from 'kepler-ai-sdk';

// Define your custom tool
const getCurrentTime: ToolDefinition = {
  name: "get_current_time",
  description: "Get the current date and time", 
  parameters: {
    type: "object",
    properties: {
      timezone: {
        type: "string",
        description: "Timezone (e.g., 'UTC', 'America/New_York')"
      }
    },
    required: []
  }
};

// Define the handler function
const timeHandler: ToolHandler = async (args) => {
  const timezone = (args.timezone as string) || 'UTC';
  return new Date().toLocaleString('en-US', {
    timeZone: timezone,
    dateStyle: 'full',
    timeStyle: 'long'
  });
};

// Add to Kepler (combines with MCP tools automatically)
await kepler.addUserTool(getCurrentTime, timeHandler);

// Now LLM has access to both MCP tools AND your custom tools
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { 
      role: "user", 
      content: "What time is it now, and also show me the files in the current directory?" 
    }
  ]
});

Dynamic MCP Server Management

// Add servers dynamically
await kepler.addMCPServer({
  id: "new-server",
  name: "New Server",
  command: "npx",
  args: ["@modelcontextprotocol/server-brave-search"],
  env: { BRAVE_API_KEY: process.env.BRAVE_API_KEY! }
});

// Check server status
const status = kepler.getMCPServerStatus();
for (const server of status) {
  console.log(`${server.config.name}: ${server.connected ? '✅' : '❌'}`);
  console.log(`  Tools: ${server.toolCount}, Resources: ${server.resourceCount}`);
}

// Get all available tools
const tools = await kepler.getAllTools();
console.log(`Total tools available: ${tools.length}`);

// Refresh tools from all servers
await kepler.refreshAllTools();

// Remove a server
await kepler.removeMCPServer("new-server");

Advanced Patterns

Streaming with Tool Calls

const request = {
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { 
      role: "user", 
      content: "Search for recent AI news and save a summary to a file called ai_news.txt" 
    }
  ]
};

for await (const chunk of kepler.streamCompletion(request)) {
  if (chunk.delta) {
    process.stdout.write(chunk.delta);
  }
  
  // Tool calls happen automatically within the stream
  if (chunk.toolCalls && chunk.toolCalls.length > 0) {
    console.log('\n🔧 Executing tools:', chunk.toolCalls.map(tc => tc.name));
  }
  
  if (chunk.finished) {
    console.log('\n✅ Completed with tool execution!');
    break;
  }
}

Model Discovery and Selection

// List all available models
const allModels = await kepler.listModels();
console.log(`Found ${allModels.length} models`);

// Advanced model management
const modelManager = kepler.getModelManager();

// Find models with specific capabilities
const visionModels = await modelManager.findModelsByCapability('vision');
const toolModels = await modelManager.findModelsByCapability('functionCalling');

// Get the cheapest model with streaming support
const cheapest = await modelManager.getCheapestModel(['streaming']);

// Get the most capable model
const best = await modelManager.getMostCapableModel(['vision', 'functionCalling']);

console.log(`Best vision model: ${best?.id}`);
console.log(`Cheapest streaming model: ${cheapest?.id}`);

Multimodal with MCP

const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Analyze this image and save the analysis to a file" },
        { 
          type: "image_url", 
          imageUrl: "data:image/jpeg;base64,/9j/4AAQ...",
          mimeType: "image/jpeg"
        }
      ]
    }
  ]
});

// Claude can analyze the image AND use MCP tools to save the result

Error Handling and Resilience

import { LLMError } from 'kepler-ai-sdk';

try {
  const response = await kepler.generateCompletion({
    model: "claude-3-5-sonnet-20240620",
    messages: [{ role: "user", content: "Hello!" }]
  });
} catch (error) {
  if (error instanceof LLMError) {
    console.log('Provider:', error.provider);
    console.log('Status:', error.statusCode);
    console.log('Retryable:', error.isRetryable());
    console.log('User message:', error.getUserMessage());
    
    if (error.isRetryable()) {
      // Implement retry logic
      console.log('Retrying in 5 seconds...');
      await new Promise(resolve => setTimeout(resolve, 5000));
      // Retry the request
    }
  }
}

// MCP server failures are handled gracefully
const serverStatus = kepler.getMCPServerStatus();
const failedServers = serverStatus.filter(s => !s.connected);

if (failedServers.length > 0) {
  console.log('Some MCP servers are down:', failedServers.map(s => s.config.name));
  // The SDK continues to work with available servers
}
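
The retry branch above generalizes into a small helper. A sketch with exponential backoff built on the isRetryable() check (the delays and attempt cap are arbitrary choices, not SDK defaults):

import { LLMError } from 'kepler-ai-sdk';

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const retryable = error instanceof LLMError && error.isRetryable();
      if (!retryable || attempt >= maxAttempts) throw error;
      // Exponential backoff: 1s, 2s, 4s, ...
      const delayMs = 1000 * 2 ** (attempt - 1);
      console.log(`Attempt ${attempt} failed, retrying in ${delayMs}ms...`);
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}

const response = await withRetry(() => kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [{ role: "user", content: "Hello!" }]
}));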

Provider-Specific Features

For specialized APIs that aren't part of the standard completion interface, access providers directly:

Image Generation (OpenAI DALL-E)

const modelManager = kepler.getModelManager();
const openai = modelManager.getProvider('openai');

const images = await openai.generateImage({
  prompt: "A futuristic city at sunset",
  model: "dall-e-3", 
  size: "1024x1024",
  quality: "hd",
  n: 1
});

console.log('Generated:', images.images[0].url);

Text-to-Speech (OpenAI/Gemini)

// OpenAI TTS
const openai = modelManager.getProvider('openai');
const audio = await openai.generateAudio({
  text: "Hello, this is a test",
  model: "tts-1",
  voice: "alloy",
  format: "mp3"
});

// Gemini TTS  
const gemini = modelManager.getProvider('gemini');
const geminiAudio = await gemini.generateAudio({
  text: "Hello from Gemini",
  model: "gemini-2.5-flash-preview-tts",
  voice: "leda"
});
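
What the returned audio object contains depends on the provider wrapper. Assuming it exposes the raw encoded bytes under a hypothetical data field (verify against the actual return type), writing it to disk is one line with Bun:

// audio.data is an assumed field holding the raw MP3 bytes
await Bun.write("speech.mp3", audio.data);
console.log("Saved speech.mp3");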

Embeddings

const cohere = modelManager.getProvider('cohere');
const embeddings = await cohere.generateEmbedding({
  model: "embed-english-v3.0",
  input: ["Hello world", "How are you?"],
  inputType: "search_document"
});

console.log('Embedding dimension:', embeddings.embeddings[0].length);
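
A typical next step is comparing vectors, for example to rank documents against a query. Cosine similarity needs no extra dependencies:

// Cosine similarity between two embedding vectors
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const [first, second] = embeddings.embeddings;
console.log('Similarity:', cosineSimilarity(first, second).toFixed(4));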

Production Considerations

Cost Tracking

import { PricingCalculator, UsageTracker } from 'kepler-ai-sdk';

const pricing = new PricingCalculator();
const usage = new UsageTracker();

const response = await kepler.generateCompletion({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }]
});

// Calculate and track costs
const cost = await pricing.calculateCost(response.usage, response.model);
usage.trackUsage(response.model, response.usage, cost?.totalCost);

console.log(`Cost: $${cost?.totalCost.toFixed(6)}`);

// Get usage statistics
const stats = usage.getUsage("gpt-4o");
if (stats && !Array.isArray(stats)) {
  console.log(`Total requests: ${stats.totalRequests}`);
  console.log(`Total cost: $${stats.totalCost.toFixed(4)}`);
}

Resource Management

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ],
  mcpServers: [
    {
      id: "filesystem",
      name: "File System", 
      command: "npx",
      args: ["@modelcontextprotocol/server-filesystem", process.cwd()]
    }
  ]
});

// Always cleanup when done
process.on('SIGINT', async () => {
  console.log('Cleaning up...');
  await kepler.cleanup(); // Disconnects all MCP servers
  process.exit(0);
});

Configuration Management

// Environment-specific configuration
const isDevelopment = process.env.NODE_ENV === 'development';

const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: process.env.OPENAI_API_KEY!,
        // Use different base URL in development
        baseURL: isDevelopment ? 'http://localhost:8080/v1' : undefined
      })
    }
  ],
  mcpServers: isDevelopment ? [
    // More servers in development
    { id: "fs", name: "FileSystem", command: "npx", args: ["@modelcontextprotocol/server-filesystem", "."] },
    { id: "git", name: "Git", command: "npx", args: ["@modelcontextprotocol/server-git", "--repository", "."] }
  ] : [
    // Minimal servers in production
    { id: "fs", name: "FileSystem", command: "npx", args: ["@modelcontextprotocol/server-filesystem", "/app/data"] }
  ],
  autoDiscoverTools: true
});

Migration from Other SDKs

From OpenAI SDK

// Before (OpenAI SDK)
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);

// After (Kepler SDK)
import { Kepler, OpenAIProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: process.env.OPENAI_API_KEY!
      })
    }
  ]
});

const response = await kepler.generateCompletion({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content);

From Anthropic SDK

// Before (Anthropic SDK)
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20240620',
  max_tokens: 1000,
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content[0].text);

// After (Kepler SDK)  
import { Kepler, AnthropicProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ]
});

const response = await kepler.generateCompletion({
  model: 'claude-3-5-sonnet-20240620',
  maxTokens: 1000,
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content);

Troubleshooting

Common Issues

1. MCP Server Connection Failures

// Check server status
const status = kepler.getMCPServerStatus();
const disconnected = status.filter(s => !s.connected);

if (disconnected.length > 0) {
  console.log('Disconnected servers:', disconnected.map(s => ({
    name: s.config.name,
    error: s.lastError
  })));
  
  // Try reconnecting
  for (const server of disconnected) {
    try {
      await kepler.removeMCPServer(server.config.id);
      await kepler.addMCPServer(server.config);
      console.log(`Reconnected: ${server.config.name}`);
    } catch (error) {
      console.error(`Failed to reconnect ${server.config.name}:`, error);
    }
  }
}

2. Model Not Found Errors

try {
  const response = await kepler.generateCompletion({
    model: "some-unknown-model",
    messages: [{ role: "user", content: "Hello" }]
  });
} catch (error) {
  if (error instanceof Error && error.message.includes('not found')) {
    console.log('Available models:');
    const models = await kepler.listModels();
    models.slice(0, 10).forEach(model => {
      console.log(`- ${model.id} (${model.provider})`);
    });
  }
}

3. Tool Execution Issues

// Debug tool discovery
const tools = await kepler.getAllTools();
console.log('Available tools:', tools.map(t => t.name));

// Check if specific tools are available
const hasFileTools = tools.some(t => t.name.includes('file'));
const hasGitTools = tools.some(t => t.name.includes('git'));

console.log(`File tools available: ${hasFileTools}`);
console.log(`Git tools available: ${hasGitTools}`);

4. Environment Variable Issues

// Validate required environment variables
const requiredEnvVars = [
  'ANTHROPIC_API_KEY',
  'OPENAI_API_KEY', 
  'GOOGLE_GENERATIVE_AI_API_KEY'
];

const missing = requiredEnvVars.filter(envVar => !process.env[envVar]);

if (missing.length > 0) {
  console.error('Missing environment variables:', missing);
  console.error('Please set them in your .env file or environment');
  process.exit(1);
}

Debug Mode

// Enable debug mode
const kepler = new Kepler({
  providers: [...],
  mcpServers: [...],
  debug: process.env.KEPLER_DEBUG === 'true' // Custom debug flag
});

// Or check debug info manually
console.log('Kepler Debug Info:');
console.log('- Providers:', kepler.getModelManager().getProviders().map(p => p.constructor.name));
console.log('- MCP Servers:', kepler.getMCPServerStatus().map(s => s.config.name));
console.log('- Available Tools:', (await kepler.getAllTools()).length);

Performance Monitoring

// Track request performance
const startTime = Date.now();

const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [{ role: "user", content: "Hello" }]
});

const duration = Date.now() - startTime;
console.log(`Request completed in ${duration}ms`);
console.log(`Tokens/second: ${response.usage.totalTokens / (duration / 1000)}`);
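
For streaming requests, time-to-first-token is often the more telling metric. The same streamCompletion API from earlier makes it easy to measure:

const t0 = Date.now();
let firstTokenMs: number | null = null;

for await (const chunk of kepler.streamCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [{ role: "user", content: "Hello" }]
})) {
  if (chunk.delta && firstTokenMs === null) {
    firstTokenMs = Date.now() - t0; // Latency until the first visible output
  }
  if (chunk.finished) {
    console.log(`First token: ${firstTokenMs}ms, total: ${Date.now() - t0}ms`);
  }
}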

This guide should cover most use cases for the Kepler AI SDK. The combination of unified multi-provider access and MCP integration makes it a powerful tool for building AI applications that can interact with external systems and tools.