# Complete Guide to Kepler AI SDK with MCP Integration

This guide covers everything you need to know about using the Kepler AI SDK, from basic usage to advanced MCP (Model Context Protocol) integration.

## Table of Contents

1. [Overview](#overview)
2. [Installation & Setup](#installation--setup)
3. [Basic Usage](#basic-usage)
4. [MCP Integration](#mcp-integration)
5. [Advanced Patterns](#advanced-patterns)
6. [Provider-Specific Features](#provider-specific-features)
7. [Production Considerations](#production-considerations)
8. [Migration from Other SDKs](#migration-from-other-sdks)
9. [Troubleshooting](#troubleshooting)

## Overview

The Kepler AI SDK provides:

- **Unified Interface**: Single API across multiple LLM providers
- **MCP Integration**: External tool capabilities via Model Context Protocol
- **Production Ready**: Built-in error handling, token tracking, cost calculation
- **TypeScript First**: Complete type safety and excellent DX
- **Official SDKs**: Uses native provider SDKs under the hood

### Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                           Kepler                            │
│  ┌─────────────────┐  ┌─────────────────────────────────┐   │
│  │  ModelManager   │  │           MCPManager            │   │
│  │                 │  │  ┌───────────┐  ┌───────────┐   │   │
│  │ ┌─────────────┐ │  │  │ MCP Server│  │ MCP Server│   │   │
│  │ │  Provider   │ │  │  │     1     │  │     2     │   │   │
│  │ │  Adapters   │ │  │  └───────────┘  └───────────┘   │   │
│  │ └─────────────┘ │  └─────────────────────────────────┘   │
│  └─────────────────┘                                        │
└─────────────────────────────────────────────────────────────┘
```

## Installation & Setup

### 1. Install the SDK

```bash
bun add kepler-ai-sdk
```

### 2. Install Provider SDKs

```bash
# Install the providers you need
bun add openai @anthropic-ai/sdk @google/generative-ai @mistralai/mistralai cohere-ai
```

### 3. Install MCP Dependencies

```bash
# Core MCP SDK (included automatically)
bun add @modelcontextprotocol/sdk

# Popular MCP servers
npm install -g @modelcontextprotocol/server-filesystem
npm install -g @modelcontextprotocol/server-git
npm install -g @modelcontextprotocol/server-sqlite
npm install -g @modelcontextprotocol/server-memory
```

### 4. Set Environment Variables

```bash
# LLM Provider API Keys
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_GENERATIVE_AI_API_KEY="your-gemini-key"
export MISTRAL_API_KEY="your-mistral-key"
export COHERE_API_KEY="your-cohere-key"

# Optional: For MCP servers that need API access
export EXA_API_KEY="your-exa-key"  # For web search servers
```

## Basic Usage

### Simple Text Completion

```typescript
import { Kepler, AnthropicProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ]
});

const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { role: "user", content: "Explain quantum computing in simple terms" }
  ],
  temperature: 0.7,
  maxTokens: 500
});

console.log(response.content);
console.log(`Tokens used: ${response.usage.totalTokens}`);
```

### Multi-Provider Setup

```typescript
import {
  Kepler,
  OpenAIProvider,
  AnthropicProvider,
  GeminiProvider
} from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: process.env.OPENAI_API_KEY!
      })
    },
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    },
    {
      provider: new GeminiProvider({
        apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY!
      })
    }
  ]
});

// Use any model from any provider
const gptResponse = await kepler.generateCompletion({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello GPT!" }]
});

const claudeResponse = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [{ role: "user", content: "Hello Claude!" }]
});

const geminiResponse = await kepler.generateCompletion({
  model: "gemini-2.0-flash-exp",
  messages: [{ role: "user", content: "Hello Gemini!" }]
});
```

### Streaming Responses

```typescript
for await (const chunk of kepler.streamCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { role: "user", content: "Write a short story about AI" }
  ]
})) {
  if (chunk.delta) {
    process.stdout.write(chunk.delta);
  }

  if (chunk.finished) {
    console.log(`\n\nCompleted! Tokens: ${chunk.usage?.totalTokens}`);
  }
}
```

### Tool Calling (Without MCP)

```typescript
const response = await kepler.generateCompletion({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What's the weather in New York?" }
  ],
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] }
        },
        required: ["city"]
      }
    }
  ]
});

// Handle tool calls manually
if (response.toolCalls) {
  for (const call of response.toolCalls) {
    console.log(`Tool: ${call.name}`);
    console.log(`Args:`, call.arguments);

    // Execute your tool logic here
    if (call.name === "get_weather") {
      const weatherData = await fetchWeather(call.arguments.city);
      // Continue conversation with results...
    }
  }
}
```
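The snippet above stops at "Continue conversation with results...". The follow-up step is to append the assistant's tool call and your tool's result to the message history before calling the model again. The sketch below uses plain data objects only; the `tool` role and `toolCallId` field are assumptions about Kepler's message shape, not confirmed API:

```typescript
// Hypothetical message shapes, for illustration only.
type ToolCall = { id: string; name: string; arguments: Record<string, unknown> };
type Message =
  | { role: "user" | "assistant"; content: string; toolCalls?: ToolCall[] }
  | { role: "tool"; toolCallId: string; content: string };

// Append the assistant's tool call and the tool's result to the history,
// producing the message list for the follow-up completion request.
function withToolResult(history: Message[], call: ToolCall, result: string): Message[] {
  return [
    ...history,
    { role: "assistant", content: "", toolCalls: [call] },
    { role: "tool", toolCallId: call.id, content: result },
  ];
}

const call: ToolCall = { id: "tc_1", name: "get_weather", arguments: { city: "New York" } };
const messages = withToolResult(
  [{ role: "user", content: "What's the weather in New York?" }],
  call,
  JSON.stringify({ tempC: 21, condition: "sunny" })
);
// `messages` now carries the tool result; pass it back via generateCompletion
// so the model can phrase its final answer around the real weather data.
```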

## MCP Integration

MCP (Model Context Protocol) allows you to connect external servers that provide tools, making LLMs capable of interacting with filesystems, databases, APIs, and more.

### Basic MCP Setup

```typescript
import { Kepler, AnthropicProvider, MCPServerConfig } from 'kepler-ai-sdk';

const mcpServers: MCPServerConfig[] = [
  {
    id: "filesystem",
    name: "File System Server",
    command: "npx",
    args: ["@modelcontextprotocol/server-filesystem", process.cwd()],
    env: {} // Optional environment variables
  }
];

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ],
  mcpServers,
  autoDiscoverTools: true // Default: true
});

// LLM automatically has access to filesystem tools
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    { role: "user", content: "List the files in the current directory and show me the contents of package.json" }
  ]
});

console.log(response.content);
```

### Multi-Server MCP Setup

```typescript
const mcpServers: MCPServerConfig[] = [
  // File system access
  {
    id: "filesystem",
    name: "File System Server",
    command: "npx",
    args: ["@modelcontextprotocol/server-filesystem", process.cwd()]
  },

  // Git repository access
  {
    id: "git",
    name: "Git Server",
    command: "npx",
    args: ["@modelcontextprotocol/server-git", "--repository", process.cwd()]
  },

  // SQLite database access
  {
    id: "database",
    name: "SQLite Server",
    command: "npx",
    args: ["@modelcontextprotocol/server-sqlite", "/path/to/database.db"]
  },

  // Memory/knowledge management
  {
    id: "memory",
    name: "Memory Server",
    command: "npx",
    args: ["@modelcontextprotocol/server-memory"]
  }
];

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ],
  mcpServers
});

// LLM can now access files, git, database, and persistent memory
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    {
      role: "user",
      content: "Check the git status, read the README file, and store a summary of this project in memory for future reference"
    }
  ]
});
```

### Custom MCP Servers

```typescript
const customMCPServers: MCPServerConfig[] = [
  {
    id: "custom-api",
    name: "Custom API Server",
    command: "python",
    args: ["/path/to/your/mcp_server.py"],
    env: {
      API_KEY: process.env.CUSTOM_API_KEY!,
      DEBUG: "false"
    },
    cwd: "/path/to/server/directory"
  }
];
```

### Adding User-Defined Tools with MCP

```typescript
import { ToolDefinition, ToolHandler } from 'kepler-ai-sdk';

// Define your custom tool
const getCurrentTime: ToolDefinition = {
  name: "get_current_time",
  description: "Get the current date and time",
  parameters: {
    type: "object",
    properties: {
      timezone: {
        type: "string",
        description: "Timezone (e.g., 'UTC', 'America/New_York')"
      }
    },
    required: []
  }
};

// Define the handler function
const timeHandler: ToolHandler = async (args) => {
  const timezone = (args.timezone as string) || 'UTC';
  return new Date().toLocaleString('en-US', {
    timeZone: timezone,
    dateStyle: 'full',
    timeStyle: 'long'
  });
};

// Add to Kepler (combines with MCP tools automatically)
await kepler.addUserTool(getCurrentTime, timeHandler);

// Now the LLM has access to both MCP tools AND your custom tools
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    {
      role: "user",
      content: "What time is it now, and also show me the files in the current directory?"
    }
  ]
});
```

### Dynamic MCP Server Management

```typescript
// Add servers dynamically
await kepler.addMCPServer({
  id: "new-server",
  name: "New Server",
  command: "npx",
  args: ["@modelcontextprotocol/server-brave-search"],
  env: { BRAVE_API_KEY: process.env.BRAVE_API_KEY! }
});

// Check server status
const status = kepler.getMCPServerStatus();
for (const server of status) {
  console.log(`${server.config.name}: ${server.connected ? '✅' : '❌'}`);
  console.log(`  Tools: ${server.toolCount}, Resources: ${server.resourceCount}`);
}

// Get all available tools
const tools = await kepler.getAllTools();
console.log(`Total tools available: ${tools.length}`);

// Refresh tools from all servers
await kepler.refreshAllTools();

// Remove a server
await kepler.removeMCPServer("new-server");
```

## Advanced Patterns

### Streaming with Tool Calls

```typescript
const request = {
  model: "claude-3-5-sonnet-20240620",
  messages: [
    {
      role: "user",
      content: "Search for recent AI news and save a summary to a file called ai_news.txt"
    }
  ]
};

for await (const chunk of kepler.streamCompletion(request)) {
  if (chunk.delta) {
    process.stdout.write(chunk.delta);
  }

  // Tool calls happen automatically within the stream
  if (chunk.toolCalls && chunk.toolCalls.length > 0) {
    console.log('\n🔧 Executing tools:', chunk.toolCalls.map(tc => tc.name));
  }

  if (chunk.finished) {
    console.log('\n✅ Completed with tool execution!');
    break;
  }
}
```

### Model Discovery and Selection

```typescript
// List all available models
const allModels = await kepler.listModels();
console.log(`Found ${allModels.length} models`);

// Advanced model management
const modelManager = kepler.getModelManager();

// Find models with specific capabilities
const visionModels = await modelManager.findModelsByCapability('vision');
const toolModels = await modelManager.findModelsByCapability('functionCalling');

// Get the cheapest model with streaming support
const cheapest = await modelManager.getCheapestModel(['streaming']);

// Get the most capable model
const best = await modelManager.getMostCapableModel(['vision', 'functionCalling']);

console.log(`Best vision model: ${best?.id}`);
console.log(`Cheapest streaming model: ${cheapest?.id}`);
```

### Multimodal with MCP

```typescript
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Analyze this image and save the analysis to a file" },
        {
          type: "image_url",
          imageUrl: "data:image/jpeg;base64,/9j/4AAQ...",
          mimeType: "image/jpeg"
        }
      ]
    }
  ]
});

// Claude can analyze the image AND use MCP tools to save the result
```

### Error Handling and Resilience

```typescript
import { LLMError } from 'kepler-ai-sdk';

try {
  const response = await kepler.generateCompletion({
    model: "claude-3-5-sonnet-20240620",
    messages: [{ role: "user", content: "Hello!" }]
  });
} catch (error) {
  if (error instanceof LLMError) {
    console.log('Provider:', error.provider);
    console.log('Status:', error.statusCode);
    console.log('Retryable:', error.isRetryable());
    console.log('User message:', error.getUserMessage());

    if (error.isRetryable()) {
      // Implement retry logic
      console.log('Retrying in 5 seconds...');
      await new Promise(resolve => setTimeout(resolve, 5000));
      // Retry the request
    }
  }
}

// MCP server failures are handled gracefully
const serverStatus = kepler.getMCPServerStatus();
const failedServers = serverStatus.filter(s => !s.connected);

if (failedServers.length > 0) {
  console.log('Some MCP servers are down:', failedServers.map(s => s.config.name));
  // The SDK continues to work with available servers
}
```
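The "Implement retry logic" placeholder above can be filled in with a small, SDK-agnostic backoff helper. This is a sketch: it wraps any async call and retries when a caller-supplied predicate says the error is transient (the predicate would typically delegate to `LLMError.isRetryable()`):

```typescript
// Retry an async operation with exponential backoff plus random jitter.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Give up on the final attempt or on non-transient errors.
      if (attempt === maxAttempts || !isRetryable(err)) throw err;
      // 1s, 2s, 4s, ... plus up to 250ms of jitter to avoid thundering herds.
      const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

With the `try/catch` above, the retry branch collapses to something like `await withRetry(() => kepler.generateCompletion(request), (e) => e instanceof LLMError && e.isRetryable())`.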

## Provider-Specific Features

For specialized APIs that aren't part of the standard completion interface, access providers directly:

### Image Generation (OpenAI DALL-E)

```typescript
const modelManager = kepler.getModelManager();
const openai = modelManager.getProvider('openai');

const images = await openai.generateImage({
  prompt: "A futuristic city at sunset",
  model: "dall-e-3",
  size: "1024x1024",
  quality: "hd",
  n: 1
});

console.log('Generated:', images.images[0].url);
```

### Text-to-Speech (OpenAI/Gemini)

```typescript
// OpenAI TTS
const openai = modelManager.getProvider('openai');
const audio = await openai.generateAudio({
  text: "Hello, this is a test",
  model: "tts-1",
  voice: "alloy",
  format: "mp3"
});

// Gemini TTS
const gemini = modelManager.getProvider('gemini');
const geminiAudio = await gemini.generateAudio({
  text: "Hello from Gemini",
  model: "gemini-2.5-flash-preview-tts",
  voice: "leda"
});
```

### Embeddings

```typescript
const cohere = modelManager.getProvider('cohere');
const embeddings = await cohere.generateEmbedding({
  model: "embed-english-v3.0",
  input: ["Hello world", "How are you?"],
  inputType: "search_document"
});

console.log('Embeddings shape:', embeddings.embeddings[0].length);
```

## Production Considerations

### Cost Tracking

```typescript
import { PricingCalculator, UsageTracker } from 'kepler-ai-sdk';

const pricing = new PricingCalculator();
const usage = new UsageTracker();

const response = await kepler.generateCompletion({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }]
});

// Calculate and track costs
const cost = await pricing.calculateCost(response.usage, response.model);
usage.trackUsage(response.model, response.usage, cost?.totalCost);

console.log(`Cost: $${cost?.totalCost.toFixed(6)}`);

// Get usage statistics
const stats = usage.getUsage("gpt-4o");
if (stats && !Array.isArray(stats)) {
  console.log(`Total requests: ${stats.totalRequests}`);
  console.log(`Total cost: $${stats.totalCost.toFixed(4)}`);
}
```

### Resource Management

```typescript
const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ],
  mcpServers: [
    {
      id: "filesystem",
      name: "File System",
      command: "npx",
      args: ["@modelcontextprotocol/server-filesystem", process.cwd()]
    }
  ]
});

// Always clean up when done
process.on('SIGINT', async () => {
  console.log('Cleaning up...');
  await kepler.cleanup(); // Disconnects all MCP servers
  process.exit(0);
});
```

### Configuration Management

```typescript
// Environment-specific configuration
const isDevelopment = process.env.NODE_ENV === 'development';

const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: process.env.OPENAI_API_KEY!,
        // Use a different base URL in development
        baseURL: isDevelopment ? 'http://localhost:8080/v1' : undefined
      })
    }
  ],
  mcpServers: isDevelopment ? [
    // More servers in development
    { id: "fs", name: "FileSystem", command: "npx", args: ["@modelcontextprotocol/server-filesystem", "."] },
    { id: "git", name: "Git", command: "npx", args: ["@modelcontextprotocol/server-git", "--repository", "."] }
  ] : [
    // Minimal servers in production
    { id: "fs", name: "FileSystem", command: "npx", args: ["@modelcontextprotocol/server-filesystem", "/app/data"] }
  ],
  autoDiscoverTools: true
});
```

## Migration from Other SDKs

### From OpenAI SDK

```typescript
// Before (OpenAI SDK)
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);

// After (Kepler SDK)
import { Kepler, OpenAIProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: process.env.OPENAI_API_KEY!
      })
    }
  ]
});

const response = await kepler.generateCompletion({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content);
```

### From Anthropic SDK

```typescript
// Before (Anthropic SDK)
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20240620',
  max_tokens: 1000,
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content[0].text);

// After (Kepler SDK)
import { Kepler, AnthropicProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new AnthropicProvider({
        apiKey: process.env.ANTHROPIC_API_KEY!
      })
    }
  ]
});

const response = await kepler.generateCompletion({
  model: 'claude-3-5-sonnet-20240620',
  maxTokens: 1000,
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content);
```

## Troubleshooting

### Common Issues

#### 1. MCP Server Connection Failures

```typescript
// Check server status
const status = kepler.getMCPServerStatus();
const disconnected = status.filter(s => !s.connected);

if (disconnected.length > 0) {
  console.log('Disconnected servers:', disconnected.map(s => ({
    name: s.config.name,
    error: s.lastError
  })));

  // Try reconnecting
  for (const server of disconnected) {
    try {
      await kepler.removeMCPServer(server.config.id);
      await kepler.addMCPServer(server.config);
      console.log(`Reconnected: ${server.config.name}`);
    } catch (error) {
      console.error(`Failed to reconnect ${server.config.name}:`, error);
    }
  }
}
```

#### 2. Model Not Found Errors

```typescript
try {
  const response = await kepler.generateCompletion({
    model: "some-unknown-model",
    messages: [{ role: "user", content: "Hello" }]
  });
} catch (error) {
  if (error instanceof Error && error.message.includes('not found')) {
    console.log('Available models:');
    const models = await kepler.listModels();
    models.slice(0, 10).forEach(model => {
      console.log(`- ${model.id} (${model.provider})`);
    });
  }
}
```

#### 3. Tool Execution Issues

```typescript
// Debug tool discovery
const tools = await kepler.getAllTools();
console.log('Available tools:', tools.map(t => t.name));

// Check if specific tools are available
const hasFileTools = tools.some(t => t.name.includes('file'));
const hasGitTools = tools.some(t => t.name.includes('git'));

console.log(`File tools available: ${hasFileTools}`);
console.log(`Git tools available: ${hasGitTools}`);
```

#### 4. Environment Variable Issues

```typescript
// Validate required environment variables
const requiredEnvVars = [
  'ANTHROPIC_API_KEY',
  'OPENAI_API_KEY',
  'GOOGLE_GENERATIVE_AI_API_KEY'
];

const missing = requiredEnvVars.filter(envVar => !process.env[envVar]);

if (missing.length > 0) {
  console.error('Missing environment variables:', missing);
  console.error('Please set them in your .env file or environment');
  process.exit(1);
}
```

### Debug Mode

```typescript
// Enable debug mode
const kepler = new Kepler({
  providers: [...],
  mcpServers: [...],
  debug: process.env.KEPLER_DEBUG === 'true' // Custom debug flag
});

// Or check debug info manually
console.log('Kepler Debug Info:');
console.log('- Providers:', kepler.getModelManager().getProviders().map(p => p.constructor.name));
console.log('- MCP Servers:', kepler.getMCPServerStatus().map(s => s.config.name));
console.log('- Available Tools:', (await kepler.getAllTools()).length);
```

### Performance Monitoring

```typescript
// Track request performance
const startTime = Date.now();

const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [{ role: "user", content: "Hello" }]
});

const duration = Date.now() - startTime;
console.log(`Request completed in ${duration}ms`);
console.log(`Tokens/second: ${response.usage.totalTokens / (duration / 1000)}`);
```
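The timing pattern above can be wrapped in a small reusable helper. This sketch is generic and SDK-agnostic: it accepts any async call whose result exposes a `usage.totalTokens` field (matching the response shape used throughout this guide) and derives throughput from the wall-clock duration:

```typescript
// Time an async call and derive tokens/second from its reported usage.
async function timed<T extends { usage: { totalTokens: number } }>(
  fn: () => Promise<T>
): Promise<{ response: T; durationMs: number; tokensPerSecond: number }> {
  const start = Date.now();
  const response = await fn();
  const durationMs = Date.now() - start;
  // Clamp to 1ms so an instant response doesn't divide by zero.
  const seconds = Math.max(durationMs, 1) / 1000;
  return { response, durationMs, tokensPerSecond: response.usage.totalTokens / seconds };
}
```

Usage would look like `const { response, durationMs, tokensPerSecond } = await timed(() => kepler.generateCompletion(request));`, which keeps the measurement logic out of every call site.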

This guide should cover most use cases for the Kepler AI SDK. The combination of unified multi-provider access and MCP integration makes it a powerful tool for building AI applications that can interact with external systems and tools.