Diagnose and fix TwinMind common errors and exceptions.
TwinMind Common Errors
Overview
Quick reference for the most common TwinMind errors and their solutions.
Prerequisites
- TwinMind extension or API configured
- Access to error logs or console
- API credentials for testing
Instructions
Step 1: Identify the Error
Check the error message in the console, extension popup, or API response.
Step 2: Find Matching Error Below
Match your error to one of the documented cases.
Step 3: Apply Solution
Follow the solution steps for your specific error.
Error Reference
Authentication Failed
Error Message:
Error: Authentication failed - Invalid or expired API key
Status: 401 Unauthorized
Cause: API key is missing, expired, or incorrect.
Solution:
set -euo pipefail
# Verify API key is set correctly
echo $TWINMIND_API_KEY
# Test authentication
curl -H "Authorization: Bearer $TWINMIND_API_KEY" \
https://api.twinmind.com/v1/health
# Regenerate key if expired
# Visit: https://twinmind.com/settings/api
Microphone Access Denied
Error Message:
Error: Microphone permission denied
NotAllowedError: Permission denied
Cause: Browser or OS hasn't granted microphone access.
Solution:
Chrome:
1. Click lock icon in address bar
2. Site Settings > Microphone > Allow
3. Reload the page
macOS:
# Reset Chrome's microphone permission (tccutil supports only reset; Chrome will re-prompt)
tccutil reset Microphone com.google.Chrome
Windows:
Settings > Privacy > Microphone > Allow apps to access microphone
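For web apps that embed recording, you can trigger the permission prompt and classify failures directly with the standard getUserMedia API. A hedged sketch; the guidance strings are illustrative suggestions, not TwinMind error text:

```typescript
// Request microphone access and map common getUserMedia failures to guidance.
// Message strings are illustrative suggestions, not TwinMind error text.
export function classifyMicError(name: string): string {
  switch (name) {
    case "NotAllowedError":
      return "Permission denied - re-enable the mic in browser/OS settings";
    case "NotFoundError":
      return "No microphone detected - check the input device";
    case "NotReadableError":
      return "Microphone busy - close other apps using it";
    default:
      return `Unexpected microphone error: ${name}`;
  }
}

export async function requestMic(): Promise<unknown> {
  const nav = (globalThis as any).navigator; // browser-only API
  try {
    return await nav.mediaDevices.getUserMedia({ audio: true });
  } catch (e: any) {
    throw new Error(classifyMicError(e?.name ?? String(e)));
  }
}
```

The error names (`NotAllowedError`, `NotFoundError`, `NotReadableError`) are the standard rejection names defined for getUserMedia.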
Transcription Timeout
Error Message:
Error: Transcription timeout after 300000ms
RequestTimeoutError: Request exceeded timeout
Cause: Audio file too large or network issues.
Solution:
// Increase timeout for large files
const client = new TwinMindClient({
apiKey: process.env.TWINMIND_API_KEY,
timeout: 600000, // 10 minutes
});
// Or use async processing with webhooks
const response = await client.post('/transcribe', {
audio_url: audioUrl,
async: true,
webhook_url: 'https://your-server.com/webhook/twinmind',
});
Rate Limit Exceeded
Error Message:
Error: Rate limit exceeded. Please retry after 60 seconds.
Status: 429 Too Many Requests
X-RateLimit-Remaining: 0
Cause: Too many requests sent within the rate-limit window.
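A common client-side mitigation for 429 responses is exponential backoff that honors the `Retry-After` header. A hedged sketch; `fetchWithBackoff` is illustrative, not part of a TwinMind SDK:

```typescript
// Retry a request with exponential backoff, honoring Retry-After when present.
export function backoffDelayMs(attempt: number, retryAfterSec?: number): number {
  if (retryAfterSec !== undefined) return retryAfterSec * 1000;
  return Math.min(60_000, 1000 * 2 ** attempt); // 1s, 2s, 4s... capped at 60s
}

export async function fetchWithBackoff(
  url: string,
  init: RequestInit,
  maxRetries = 5,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const retryAfter = res.headers.get("Retry-After");
    const delay = backoffDelayMs(attempt, retryAfter ? Number(retryAfter) : undefined);
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```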
Execute TwinMind primary workflow: Meeting transcription and summary generation.
TwinMind Core Workflow A: Meeting Transcription & Summary
Contents
Overview
Primary workflow for capturing meetings, generating transcripts with speaker diarization, and creating AI summaries with action items.
Prerequisites
- Completed twinmind-install-auth setup
- TwinMind Pro/Enterprise for API access
- Valid API credentials configured
- Audio source available (live or file)
Instructions
Step 1: Initialize Meeting Capture
Build a MeetingCapture class with startLiveCapture() for real-time recording and transcribeRecording() for file-based transcription. Use Ear-3 model with auto language detection and speaker diarization.
Step 2: Generate AI Summary
Create a SummaryGenerator with generateSummary() (brief/detailed/bullet-points formats), generateFollowUpEmail(), and generateMeetingNotes() methods.
Step 3: Handle Speaker Identification
Build a SpeakerManager that extracts speakers from transcript segments, calculates speaking time per speaker, and optionally matches speakers to calendar attendees.
Step 4: Orchestrate Complete Workflow
Wire everything together in processMeeting(): transcribe audio, then generate summary and identify speakers in parallel, optionally produce follow-up email and meeting notes.
See detailed implementation for complete MeetingCapture, SummaryGenerator, SpeakerManager, and orchestration code.
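The orchestration in Step 4 can be sketched as follows; the `MeetingClient` interface is an assumption for illustration, not the actual TwinMind SDK surface:

```typescript
// Hedged sketch: transcribe first, then run summary and speaker ID in parallel.
export interface MeetingClient {
  transcribe(audioUrl: string): Promise<{ text: string; segments: { speaker: string; text: string }[] }>;
  summarize(text: string): Promise<string>;
}

// Collect unique speakers from transcript segments (order of first appearance).
export function speakersOf(segments: { speaker: string }[]): string[] {
  return [...new Set(segments.map((s) => s.speaker))];
}

export async function processMeeting(client: MeetingClient, audioUrl: string) {
  const transcript = await client.transcribe(audioUrl); // Step 1
  const [summary, speakers] = await Promise.all([       // Steps 2-3 in parallel
    client.summarize(transcript.text),
    Promise.resolve(speakersOf(transcript.segments)),
  ]);
  return { transcript, summary, speakers };
}
```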
Output
- Complete meeting transcript with timestamps
- Speaker-labeled segments
- AI-generated summary
- Extracted action items with assignees
- Optional follow-up email draft
- Optional formatted meeting notes
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Transcription timeout | Large audio file | Increase maxWaitMs or use async callback |
| Speaker match failed | No calendar data | Provide attendees list manually |
| Summary generation failed | Transcript too short | Ensure minimum 30s of audio |
| Audio format unsupported | Wrong codec | Convert to MP3/WAV/M4A |
| Rate limit exceeded | Too many requests | Implement queue-based processing |
Examples
Execute TwinMind secondary workflow: Action item extraction and follow-up automation.
TwinMind Core Workflow B: Action Items & Follow-ups
Contents
Overview
Secondary workflow for extracting action items with priority/assignee inference, automating follow-up emails, and syncing tasks to project management tools (Asana, Linear, Jira).
Prerequisites
- Completed twinmind-core-workflow-a (transcription)
- Valid transcript or summary available
- Integration tokens for external services (optional)
Instructions
Step 1: Extract Action Items
Build ActionItemExtractor that calls TwinMind's /extract/action-items endpoint with options for context inclusion, speaker-based assignment, and due date inference. Auto-classify priority (high/medium/low) from keywords and categorize items (Review, Development, Communication, Meetings, Documentation).
Step 2: Automate Follow-up Emails
Create FollowUpAutomation with generateFollowUp() (AI-generated email with summary + action items), sendFollowUp() (immediate send), and scheduleFollowUp() (delayed send).
Step 3: Integrate with Task Management
Implement a TaskIntegration interface with createTask() and updateTask(). Build concrete integrations for Asana (REST API) and Linear (GraphQL) with priority mapping. Use factory pattern via getTaskIntegration().
Step 4: Orchestrate Complete Follow-up
Wire everything in runFollowUpWorkflow(): extract action items, create tasks in external system, then send or schedule follow-up email to attendees.
See detailed implementation for complete ActionItemExtractor, FollowUpAutomation, task integrations, and orchestration code.
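The keyword-based priority classification in Step 1 might look like this; the keyword lists are illustrative assumptions, not TwinMind's actual heuristics:

```typescript
// Hedged sketch: infer action-item priority from keywords (keyword lists are assumptions).
const HIGH_KEYWORDS = ["urgent", "asap", "blocker", "today"];
const LOW_KEYWORDS = ["someday", "eventually", "nice to have"];

export function classifyPriority(text: string): "high" | "medium" | "low" {
  const t = text.toLowerCase();
  if (HIGH_KEYWORDS.some((k) => t.includes(k))) return "high";
  if (LOW_KEYWORDS.some((k) => t.includes(k))) return "low";
  return "medium";
}
```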
Output
- Extracted action items with assignees and due dates
- Tasks created in project management tool
- Follow-up email sent or scheduled
- Complete audit trail
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| No action items found | Transcript too vague | Verify meeting had clear action items |
| Task creation failed | Invalid project/team ID | Check integration credentials |
| Email send failed | Invalid recipients | Verify email addresses |
| Assignee not found | Name mismatch | Map speakers to user accounts |
Optimize TwinMind costs across Free, Pro ($10/mo), and Enterprise tiers with usage monitoring and tier selection guidance.
TwinMind Cost Tuning
Overview
Optimize TwinMind costs across Free, Pro ($10/mo), and Enterprise tiers with usage monitoring and tier selection guidance. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
// TwinMind configuration
const config = {
apiKey: process.env.TWINMIND_API_KEY,
model: "ear-3", // Transcription model
aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Implementation
// TwinMind Cost Tuning implementation
// Core TwinMind integration
const twinmind = {
transcriptionModel: "ear-3",
languages: ["en", "es", "ko", "ja", "fr"],
features: ["transcription", "summary", "action-items"],
privacyMode: "on-device", // Audio never stored
};
// Check transcription capabilities
async function verify() {
const health = await fetch("https://api.twinmind.com/v1/health");
console.log("TwinMind status:", await health.json());
}
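A simple tier-selection helper based on expected monthly usage can make the trade-off explicit. The Free-tier hour threshold below is an assumption for illustration; check current TwinMind pricing before relying on it:

```typescript
// Hedged sketch: suggest a TwinMind tier from expected monthly usage.
// The 20-hour Free-tier threshold is an assumption, not a published limit.
export function suggestTier(
  hoursPerMonth: number,
  needsApi: boolean,
  needsSso: boolean,
): "free" | "pro" | "enterprise" {
  if (needsSso) return "enterprise";          // SSO is Enterprise-only
  if (needsApi || hoursPerMonth > 20) return "pro"; // API access requires Pro ($10/mo)
  return "free";
}
```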
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Cost Tuning configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
Handle TwinMind meeting data with GDPR compliance: transcript storage, memory vault management, data export, and deletion policies.
TwinMind Data Handling
Overview
Handle TwinMind meeting data with GDPR compliance: transcript storage, memory vault management, data export, and deletion policies. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
// TwinMind configuration
const config = {
apiKey: process.env.TWINMIND_API_KEY,
model: "ear-3", // Transcription model
aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Implementation
// TwinMind Data Handling implementation
// Core TwinMind integration
const twinmind = {
transcriptionModel: "ear-3",
languages: ["en", "es", "ko", "ja", "fr"],
features: ["transcription", "summary", "action-items"],
privacyMode: "on-device", // Audio never stored
};
// Check transcription capabilities
async function verify() {
const health = await fetch("https://api.twinmind.com/v1/health");
console.log("TwinMind status:", await health.json());
}
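GDPR-style export and deletion helpers could be sketched as follows. The `/export` and `/memories/:id` endpoints are hypothetical placeholders, not documented TwinMind API routes:

```typescript
// Hedged sketch: data export and deletion helpers for GDPR workflows.
// The /export and /memories/:id endpoints are HYPOTHETICAL, not documented TwinMind API.
export function exportUrl(base: string, format: "json" | "txt"): string {
  return `${base}/export?format=${format}`;
}

export async function deleteMemory(base: string, id: string, apiKey: string): Promise<boolean> {
  const res = await fetch(`${base}/memories/${encodeURIComponent(id)}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.ok; // true when the server confirms deletion
}
```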
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Data Handling configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
Collect comprehensive diagnostic information for TwinMind issues.
TwinMind Debug Bundle
Current State
!node --version 2>/dev/null || echo 'N/A'
!python3 --version 2>/dev/null || echo 'N/A'
!uname -a
Overview
Collect comprehensive diagnostic data to troubleshoot TwinMind issues.
Prerequisites
- TwinMind extension or API configured
- Access to browser developer tools
- Command-line access (for API debugging)
Instructions
Step 1: Create Debug Bundle Script
// scripts/twinmind-debug-bundle.ts
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
interface DebugBundle {
timestamp: string;
environment: EnvironmentInfo;
apiStatus: ApiStatus;
recentErrors: ErrorEntry[];
configuration: ConfigSnapshot;
networkTests: NetworkTest[];
}
interface EnvironmentInfo {
nodeVersion: string;
platform: string;
arch: string;
osRelease: string;
timezone: string;
memory: {
total: number;
free: number;
used: number;
};
}
interface ApiStatus {
healthy: boolean;
latencyMs: number;
endpoint: string;
responseHeaders?: Record<string, string>;
}
interface ErrorEntry {
timestamp: string;
type: string;
message: string;
stack?: string;
context?: Record<string, any>;
}
interface ConfigSnapshot {
apiKeyPresent: boolean;
apiKeyPrefix: string;
baseUrl: string;
timeout: number;
environment: string;
}
interface NetworkTest {
endpoint: string;
reachable: boolean;
latencyMs?: number;
error?: string;
}
export async function generateDebugBundle(): Promise<DebugBundle> {
const bundle: DebugBundle = {
timestamp: new Date().toISOString(),
environment: getEnvironmentInfo(),
apiStatus: await checkApiStatus(),
recentErrors: collectRecentErrors(),
configuration: getConfigSnapshot(),
networkTests: await runNetworkTests(),
};
return bundle;
}
function getEnvironmentInfo(): EnvironmentInfo {
return {
nodeVersion: process.version,
platform: os.platform(),
arch: os.arch(),
osRelease: os.release(),
timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
memory: {
total: os.totalmem(),
free: os.freemem(),
used: os.totalmem() - os.freemem(),
},
};
}
async function checkApiStatus(): Promise<ApiStatus> {
const endpoint = process.env.TWINMIND_API_URL || 'https://api.twinmind.com/v1';
const start = Date.now();
try {
const response = await fetch(`${endpoint}/health`, {
headers: {
'Authorization': `Bearer ${process.env.TWINMIND_API_KEY}`,
},
});
return {
healthy: response.ok,
latencyMs: Date.now() - start,
endpoint,
responseHeaders: Object.fromEntries(response.headers.entries()),
};
} catch (error: any) {
return {
healthy: false,
latencyMs: Date.now() - start,
endpoint,
};
}
}
Deploy TwinMind integrations to production environments with Chrome extension deployment, mobile app configuration, and API access setup.
TwinMind Deploy Integration
Overview
Deploy TwinMind integrations to production environments with Chrome extension deployment, mobile app configuration, and API access setup. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
# TwinMind API access (Pro/Enterprise)
export TWINMIND_API_KEY="your-api-key"
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health
# Expected: {"status": "ok"}
Step 2: Implementation
// TwinMind Deploy Integration implementation
// Core TwinMind integration
const twinmind = {
transcriptionModel: "ear-3",
languages: ["en", "es", "ko", "ja", "fr"],
features: ["transcription", "summary", "action-items"],
privacyMode: "on-device", // Audio never stored
};
// Check transcription capabilities
async function verify() {
const health = await fetch("https://api.twinmind.com/v1/health");
console.log("TwinMind status:", await health.json());
}
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Deploy Integration configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
Configure TwinMind Enterprise with on-premise deployment, custom AI models, SSO integration, and team-wide transcript sharing.
TwinMind Enterprise RBAC
Overview
Configure TwinMind Enterprise with on-premise deployment, custom AI models, SSO integration, and team-wide transcript sharing. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- API access (Pro/Enterprise tier)
Instructions
Step 1: Enterprise Configuration
TwinMind Enterprise supports on-premise deployment, custom AI models, and unlimited context tokens.
// TwinMind configuration
const config = {
apiKey: process.env.TWINMIND_API_KEY,
model: "ear-3", // Transcription model
aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Role Configuration
// TwinMind Enterprise RBAC implementation
// Enterprise tier features
const twinmind = {
ssoProvider: "okta",
teamSharing: true,
customModels: ["gpt-4-turbo"],
onPremise: true,
};
// Configure team access (assumes `client` is an initialized Enterprise API client;
// the plain config object above does not expose createTeam)
async function configureTeam(client) {
const team = await client.createTeam({ name: "Engineering", members: ["user1", "user2"] });
console.log("Team configured:", team.id);
}
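Role-to-permission mapping for team-wide transcript sharing might be sketched as follows; the role names and permission strings are illustrative assumptions, not TwinMind's documented RBAC model:

```typescript
// Hedged sketch: map enterprise roles to transcript permissions (names are assumptions).
type Role = "admin" | "member" | "viewer";

const PERMISSIONS: Record<Role, string[]> = {
  admin: ["read", "share", "delete", "manage-team"],
  member: ["read", "share"],
  viewer: ["read"],
};

export function can(role: Role, action: string): boolean {
  return PERMISSIONS[role].includes(action);
}
```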
Step 3: Verification
# Verify TwinMind enterprise setup
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/team/members | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Enterprise RBAC configured and verified
- TwinMind integration operational
- Enterprise features enabled
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
| Sync failed | Network interruption | |
Create your first TwinMind meeting transcription and AI summary.
TwinMind Hello World
Overview
Create your first meeting transcription with AI-generated summary and action items.
Prerequisites
- Completed twinmind-install-auth setup
- Chrome extension authenticated
- Microphone permissions granted
- Active internet connection
Instructions
Step 1: Start a Test Meeting
Option A - Browser Meeting:
- Open Google Meet, Zoom, or Teams in browser
- Start or join a test call
- Click TwinMind extension icon
- Click "Start Transcribing"
Option B - Voice Memo:
- Click TwinMind extension icon
- Select "Voice Memo" mode
- Click the microphone button
- Start speaking
Step 2: Speak Test Content
For a meaningful test, speak for 30-60 seconds covering:
"Welcome to today's project status meeting.
We have three items on the agenda.
First, the mobile app launch is scheduled for next Friday.
Sarah will handle the App Store submission.
Second, we need to review the Q1 budget.
John, please send the spreadsheet by Wednesday.
Third, the customer feedback survey shows 85% satisfaction.
We should schedule a follow-up meeting next week to discuss improvements."
Step 3: Stop and Generate Summary
- Click "Stop Transcribing" button
- Wait for processing (5-10 seconds)
- TwinMind automatically generates:
- Full transcript with timestamps
- Meeting summary
- Action items with owners
- Key discussion points
Step 4: Review Output
Expected transcript output:
[00:00] Welcome to today's project status meeting...
[00:05] We have three items on the agenda...
[00:12] First, the mobile app launch is scheduled for next Friday...
[00:18] Sarah will handle the App Store submission...
Expected AI summary:
## Meeting Summary
Project status meeting covering mobile app launch, Q1 budget review,
and customer feedback analysis.
## Action Items
- [ ] Sarah: Submit app to App Store (Due: Friday)
- [ ] John: Send Q1 budget spreadsheet (Due: Wednesday)
- [ ] Team: Schedule follow-up meeting for feedback discussion
## Key Points
- Mobile app launching next Friday
- Customer satisfaction at 85%
- Budget review pending
Step 5: Access Memory Vault
After the meeting:
// TwinMind stores transcripts in your Memory Vault
// Access via extension sidebar or ask AI:
"What did we discuss about the mobile app launch?"
"Who is responsible for the budget spreadsheet?"
"When is the next meeting scheduled?"
Step 6: Test Cross-Platform Sync (Optional)
- Open TwinMind mobile app
- Sign in with same account
- Verify transcript appears in app
- T
Incident response for TwinMind failures: transcription not starting, audio not captured, sync failures, and calendar disconnect.
TwinMind Incident Runbook
Overview
Incident response for TwinMind failures: transcription not starting, audio not captured, sync failures, and calendar disconnect. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
# TwinMind API access (Pro/Enterprise)
export TWINMIND_API_KEY="your-api-key"
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health
# Expected: {"status": "ok"}
Step 2: Implementation
// TwinMind Incident Runbook implementation
// Core TwinMind integration
const twinmind = {
transcriptionModel: "ear-3",
languages: ["en", "es", "ko", "ja", "fr"],
features: ["transcription", "summary", "action-items"],
privacyMode: "on-device", // Audio never stored
};
// Check transcription capabilities
async function verify() {
const health = await fetch("https://api.twinmind.com/v1/health");
console.log("TwinMind status:", await health.json());
}
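A first-response triage helper can map the symptoms named in this runbook's title to an initial action. The mapping below summarizes this page's error table; it is a sketch, not an official escalation policy:

```typescript
// Hedged sketch: map an incident symptom to a first-response action.
// Mapping summarizes this page's error table; not an official escalation policy.
const TRIAGE: Record<string, string> = {
  "transcription-not-starting": "Check microphone selection and extension permissions",
  "audio-not-captured": "Verify the tab/system audio source and reload the page",
  "sync-failed": "Check network connectivity, then retry sync",
  "calendar-disconnect": "Re-authorize the calendar in Settings > Integrations",
};

export function triage(symptom: string): string {
  return TRIAGE[symptom] ?? "Escalate: collect a debug bundle and contact support";
}
```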
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Incident Runbook configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
Install and configure TwinMind Chrome extension, mobile app, and API access.
TwinMind Install & Auth
Overview
Set up TwinMind meeting AI across Chrome extension, mobile apps, and API integration.
Prerequisites
- Chrome browser (latest version) for extension
- iOS 15+ or Android 10+ for mobile apps
- Google account for calendar integration
- TwinMind account (Free, Pro, or Enterprise)
Instructions
Step 1: Install Chrome Extension
- Visit Chrome Web Store:
https://chromewebstore.google.com/detail/twinmind-chat-with-tabs-m/agpbjhhcmoanaljagpoheldgjhclepdj
- Click "Add to Chrome" and confirm permissions
- Pin the extension to your toolbar for quick access
Step 2: Create Account & Authenticate
- Click the TwinMind extension icon
- Sign up with Google or email
- Complete onboarding questionnaire for personalization
Step 3: Configure Calendar Integration
// TwinMind automatically syncs with authorized calendars
// After OAuth, meetings appear in the extension sidebar
// Calendar integration provides:
// - Meeting participant names
// - Meeting agenda/description
// - Automatic transcription start/stop
Authorize calendars in Settings > Integrations:
- Google Calendar (recommended)
- Microsoft Outlook
- Apple Calendar
Step 4: Configure Audio Permissions
Grant microphone access when prompted:
- Chrome: Settings > Privacy > Microphone
- macOS: System Preferences > Security > Microphone
- Windows: Settings > Privacy > Microphone
Step 5: Install Mobile App (Optional)
iOS:
https://apps.apple.com/us/app/twinmind-ai-notes-memory/id6504585781
Android:
https://play.google.com/store/apps/details?id=ai.twinmind.android
Step 6: Configure API Access (Pro/Enterprise)
set -euo pipefail
# Set environment variable for API access
export TWINMIND_API_KEY="your-api-key"
# Or create .env file
echo 'TWINMIND_API_KEY=your-api-key' >> .env
# Verify API access
curl -H "Authorization: Bearer $TWINMIND_API_KEY" \
https://api.twinmind.com/v1/health
Step 7: Verify Installation
// Test transcription with a short recording
// 1. Start a test meeting or voice memo
// 2. Click TwinMind extension
// 3. Click "Start Transcribing"
// 4. Speak for 10-15 seconds
// 5. Click "Stop" and verify transcript appears
Output
- Chrome extension installed and authenticated
- Calendar integration configured
- Mobile apps installed (optional)
- API key configured (Pro/Enterprise)
- Test transcription successful
Set up local development workflow with TwinMind API integration.
TwinMind Local Dev Loop
Overview
Configure a productive local development environment for TwinMind API integration.
Prerequisites
- TwinMind Pro or Enterprise account (API access)
- Node.js 18+ or Python 3.10+
- API key from TwinMind dashboard
- Local development environment
Instructions
Step 1: Project Setup
set -euo pipefail
# Create project directory
mkdir twinmind-integration && cd twinmind-integration
# Initialize Node.js project
npm init -y
# Install dependencies
npm install dotenv axios zod typescript ts-node @types/node
# Initialize TypeScript
npx tsc --init
Step 2: Configure Environment
# Create environment file
cat > .env << 'EOF'
TWINMIND_API_KEY=your-api-key-here
TWINMIND_API_URL=https://api.twinmind.com/v1
TWINMIND_WEBHOOK_SECRET=your-webhook-secret
NODE_ENV=development
EOF
# Add to .gitignore
echo ".env" >> .gitignore
echo "node_modules" >> .gitignore
Step 3: Create TwinMind Client
// src/twinmind/client.ts
import axios, { AxiosInstance } from 'axios';
import { z } from 'zod';
// Response schemas
const TranscriptSchema = z.object({
id: z.string(),
text: z.string(),
duration_seconds: z.number(),
language: z.string(),
speakers: z.array(z.object({
id: z.string(),
name: z.string().optional(),
segments: z.array(z.object({
start: z.number(),
end: z.number(),
text: z.string(),
confidence: z.number(),
})),
})),
created_at: z.string(),
});
const SummarySchema = z.object({
id: z.string(),
transcript_id: z.string(),
summary: z.string(),
action_items: z.array(z.object({
text: z.string(),
assignee: z.string().optional(),
due_date: z.string().optional(),
})),
key_points: z.array(z.string()),
});
export type Transcript = z.infer<typeof TranscriptSchema>;
export type Summary = z.infer<typeof SummarySchema>;
export class TwinMindClient {
private client: AxiosInstance;
constructor(apiKey: string, baseUrl?: string) {
this.client = axios.create({
baseURL: baseUrl || 'https://api.twinmind.com/v1',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json',
},
timeout: 30000, // 30 seconds
});
}
async healthCheck(): Promise<boolean> {
const response = await this.client.get('/health');
return response.status === 200; // HTTP 200 OK
}
async transcribe(audioUrl: string, options?: {
language?: string;
diarization?: boolean;
model?: 'ear-3' | 'ear-2';
}): Promise<Transcript> {
const response = await this.client.post('/transcribe', {
audio_url: audioUrl,
language: options?.language,
diarization: options?.diarization,
model: options?.model,
});
return TranscriptSchema.parse(response.data);
}
}
Migrate from other meeting AI tools (Otter.ai, Fireflies, Grain) to TwinMind with transcript import and workflow adaptation.
TwinMind Migration Deep Dive
Overview
Migrate from other meeting AI tools (Otter.ai, Fireflies, Grain) to TwinMind with transcript import and workflow adaptation. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- API access (Pro/Enterprise tier)
Instructions
Step 1: Enterprise Configuration
TwinMind Enterprise supports on-premise deployment, custom AI models, and unlimited context tokens.
// TwinMind configuration
const config = {
apiKey: process.env.TWINMIND_API_KEY,
model: "ear-3", // Transcription model
aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Role Configuration
// TwinMind Migration Deep Dive implementation
// Enterprise tier features
const twinmind = {
ssoProvider: "okta",
teamSharing: true,
customModels: ["gpt-4-turbo"],
onPremise: true,
};
// Configure team access (assumes `client` is an initialized Enterprise API client;
// the plain config object above does not expose createTeam)
async function configureTeam(client) {
const team = await client.createTeam({ name: "Engineering", members: ["user1", "user2"] });
console.log("Team configured:", team.id);
}
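Mapping an exported Otter.ai-style transcript into a TwinMind-style segment list could look like this. Both field layouts are illustrative assumptions for the import step, not published schemas:

```typescript
// Hedged sketch: convert an Otter-style export to TwinMind-style segments.
// Both field layouts are ASSUMPTIONS for illustration, not published schemas.
interface OtterSegment {
  speaker_name: string;
  start_time: number;
  end_time: number;
  transcript: string;
}

interface TwinMindSegment {
  speaker: string;
  start: number;
  end: number;
  text: string;
}

export function convertSegments(otter: OtterSegment[]): TwinMindSegment[] {
  return otter.map((s) => ({
    speaker: s.speaker_name || "Unknown",
    start: s.start_time,
    end: s.end_time,
    text: s.transcript.trim(),
  }));
}
```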
Step 3: Verification
# Verify TwinMind enterprise setup
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/team/members | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Migration Deep Dive configured and verified
- TwinMind integration operational
- Enterprise features enabled
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
| Sync failed | Network interruption | |
Configure TwinMind across development, staging, and production environments with separate accounts and API key management.
TwinMind Multi-Environment Setup
Overview
Configure TwinMind across development, staging, and production environments with separate accounts and API key management. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- API access (Pro/Enterprise tier)
Instructions
Step 1: Enterprise Configuration
TwinMind Enterprise supports on-premise deployment, custom AI models, and unlimited context tokens.
// TwinMind configuration
const config = {
apiKey: process.env.TWINMIND_API_KEY,
model: "ear-3", // Transcription model
aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
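Separate keys per environment can be selected at startup. The `TWINMIND_API_KEY_<ENV>` naming is a suggested convention for this setup, not something TwinMind requires:

```typescript
// Hedged sketch: pick per-environment TwinMind credentials.
// The TWINMIND_API_KEY_<ENV> naming is a suggested convention, not required by TwinMind.
export function resolveConfig(
  env: string,
  vars: Record<string, string | undefined>,
): { apiKey: string; baseUrl: string } {
  const key = vars[`TWINMIND_API_KEY_${env.toUpperCase()}`] ?? vars.TWINMIND_API_KEY;
  if (!key) throw new Error(`No TwinMind API key configured for ${env}`);
  return {
    apiKey: key,
    baseUrl: vars.TWINMIND_API_URL ?? "https://api.twinmind.com/v1",
  };
}
```

Call it as `resolveConfig(process.env.NODE_ENV ?? "development", process.env)` so staging and production never share a key.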
Step 2: Role Configuration
// TwinMind Multi-Environment Setup implementation
// Enterprise tier features
const twinmind = {
ssoProvider: "okta",
teamSharing: true,
customModels: ["gpt-4-turbo"],
onPremise: true,
};
// Configure team access
async function configureTeam() {
const team = await twinmind.createTeam({ name: "Engineering", members: ["user1", "user2"] });
console.log("Team configured:", team.id);
}
Step 3: Verification
# Verify TwinMind enterprise setup
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/team/members | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Multi-Environment Setup configured and verified
- TwinMind integration operational
- Enterprise features enabled
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
| Sync failed | Network interruption | |
TwinMind Observability
Overview
Monitor TwinMind transcription quality, meeting coverage, action item extraction rates, and memory vault health. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
# TwinMind API access (Pro/Enterprise)
export TWINMIND_API_KEY="your-api-key"
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health
# Expected: {"status": "ok"}
Step 2: Implementation
// TwinMind Observability implementation
// Core TwinMind integration
const twinmind = {
  transcriptionModel: "ear-3",
  languages: ["en", "es", "ko", "ja", "fr"],
  features: ["transcription", "summary", "action-items"],
  privacyMode: "on-device", // Audio never stored
};

// Check transcription capabilities
async function verify() {
  const health = await fetch("https://api.twinmind.com/v1/health");
  console.log("TwinMind status:", await health.json());
}
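The observability goals named in the overview are meeting coverage and action item extraction rates. A minimal in-process sketch of those two ratios, with illustrative field names (not TwinMind API output):

```typescript
// Track the two ratios this section cares about: what fraction of
// meetings produced a transcript, and action items per transcribed meeting.
interface MeetingStats {
  meetings: number;
  transcribedMeetings: number;
  actionItems: number;
}

class ObservabilityTracker {
  private stats: MeetingStats = { meetings: 0, transcribedMeetings: 0, actionItems: 0 };

  record(meeting: { transcribed: boolean; actionItemCount: number }): void {
    this.stats.meetings += 1;
    if (meeting.transcribed) this.stats.transcribedMeetings += 1;
    this.stats.actionItems += meeting.actionItemCount;
  }

  // Fraction of meetings that produced a transcript.
  coverage(): number {
    return this.stats.meetings === 0 ? 0 : this.stats.transcribedMeetings / this.stats.meetings;
  }

  // Average action items extracted per transcribed meeting.
  extractionRate(): number {
    return this.stats.transcribedMeetings === 0
      ? 0
      : this.stats.actionItems / this.stats.transcribedMeetings;
  }
}
```

In production these counters would feed whatever metrics backend you already run rather than living in memory.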
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Observability configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
TwinMind Performance Tuning
Overview
Optimize TwinMind transcription accuracy and speed with Ear-3 model configuration, audio quality tuning, and caching strategies. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
// TwinMind configuration
const config = {
  apiKey: process.env.TWINMIND_API_KEY,
  model: "ear-3", // Transcription model
  aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Implementation
// TwinMind Performance Tuning implementation
// Core TwinMind integration
const twinmind = {
  transcriptionModel: "ear-3",
  languages: ["en", "es", "ko", "ja", "fr"],
  features: ["transcription", "summary", "action-items"],
  privacyMode: "on-device", // Audio never stored
};

// Check transcription capabilities
async function verify() {
  const health = await fetch("https://api.twinmind.com/v1/health");
  console.log("TwinMind status:", await health.json());
}
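One caching strategy from the overview can be sketched as a content-addressed transcript cache, so identical audio is never transcribed twice. The `transcribe` callback here is a stand-in, not a documented TwinMind SDK function:

```typescript
import { createHash } from "node:crypto";

// Cache transcripts by a hash of the audio bytes; identical input
// skips the (slow, billed) transcription call entirely.
const transcriptCache = new Map<string, string>();

function audioKey(audio: Buffer): string {
  return createHash("sha256").update(audio).digest("hex");
}

async function transcribeCached(
  audio: Buffer,
  transcribe: (audio: Buffer) => Promise<string>
): Promise<string> {
  const key = audioKey(audio);
  const hit = transcriptCache.get(key);
  if (hit !== undefined) return hit; // Cache hit: no API call
  const text = await transcribe(audio);
  transcriptCache.set(key, text);
  return text;
}
```

A real deployment would bound the cache (LRU, TTL) and likely persist it, but the hash-keyed lookup is the core idea.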
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Performance Tuning configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
TwinMind Production Checklist
Overview
Comprehensive checklist for deploying TwinMind integrations to production.
Prerequisites
- Development and staging environments tested
- API credentials for production
- Infrastructure provisioned
- Team roles assigned
Production Readiness Checklist
1. Authentication & Security
## Authentication
- [ ] Production API key generated (separate from dev/staging)
- [ ] API key stored in secrets manager (not env vars)
- [ ] API key rotation procedure documented
- [ ] Webhook secrets configured
- [ ] All OAuth tokens refreshed and valid
## Security
- [ ] HTTPS enforced on all endpoints
- [ ] Webhook signature verification enabled
- [ ] CORS configured correctly
- [ ] Rate limiting implemented
- [ ] Input validation on all endpoints
- [ ] SQL injection protection verified
- [ ] XSS protection enabled
- [ ] CSP headers configured
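The "webhook signature verification" item above can be sketched as a constant-time HMAC comparison. The SHA-256 scheme and hex encoding are assumptions to confirm against TwinMind's webhook documentation before relying on this:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a webhook payload against an HMAC-SHA256 signature.
// Scheme and encoding are assumptions, not confirmed TwinMind behavior.
function verifyWebhookSignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so check first.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(received, expected);
}
```

Always compare against the raw request body, not a re-serialized JSON object, since re-serialization can reorder keys and break the signature.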
2. Data & Privacy
## Data Protection
- [ ] Transcripts encrypted at rest (AES-256)
- [ ] PII redaction enabled and tested
- [ ] Data retention policies configured
- [ ] Backup encryption verified
- [ ] Data residency requirements met
## Privacy Compliance
- [ ] GDPR compliance verified (if applicable)
- [ ] User consent flow implemented
- [ ] Data deletion API integrated
- [ ] Privacy policy updated
- [ ] Cookie consent banner (if applicable)
## Audit Trail
- [ ] Audit logging enabled for all operations
- [ ] Log retention configured
- [ ] Sensitive data excluded from logs
- [ ] Log access restricted
3. Infrastructure
## Compute
- [ ] Auto-scaling configured
- [ ] Health checks enabled
- [ ] Graceful shutdown implemented
- [ ] Resource limits set (CPU, memory)
- [ ] Container security scanned
## Networking
- [ ] Load balancer configured
- [ ] TLS 1.3 enforced
- [ ] DNS records verified
- [ ] CDN configured (if applicable)
- [ ] Firewall rules reviewed
## Storage
- [ ] Database backups automated
- [ ] Storage encryption enabled
- [ ] Disaster recovery plan tested
- [ ] Data migration scripts ready
4. Monitoring & Observability
## Metrics
- [ ] Prometheus/Datadog metrics configured
- [ ] Custom TwinMind metrics added:
- [ ] twinmind_transcriptions_total
- [ ] twinmind_transcription_duration_seconds
- [ ] twinmind_errors_total
- [ ] twinmind_api_latency_seconds
- [ ] Dashboards created
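The custom metrics listed above can be exposed in Prometheus text exposition format. This sketch skips a client library for brevity; in production you would likely use prom-client or a Datadog agent instead:

```typescript
// Minimal counter that renders Prometheus text exposition format.
class PromCounter {
  constructor(public name: string, public help: string, private value = 0) {}

  inc(by = 1): void {
    this.value += by;
  }

  // HELP and TYPE lines followed by the sample, per the exposition format.
  expose(): string {
    return `# HELP ${this.name} ${this.help}\n# TYPE ${this.name} counter\n${this.name} ${this.value}`;
  }
}

const transcriptionsTotal = new PromCounter(
  "twinmind_transcriptions_total",
  "Completed transcriptions"
);
const errorsTotal = new PromCounter("twinmind_errors_total", "Failed TwinMind operations");
```

The duration and latency metrics in the checklist would be histograms rather than counters; the registration pattern is the same.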
## Alerting
- [ ] Alert rules configured:
- [ ] Error rate > 5%
- [ ] P95 latency > 5s
- [ ] Rate limit warnings
- [ ] API availability
- [ ] On-call rotation set up
- [ ] Escalation policy defined
## Logging
- [ ] Structured logging implemented
- [ ] Log levels configured (INFO in prod)
- [ ] Log aggregation set up
- [ ] Log-based alerts configured
## Tracing
- [ ] Distributed tracing
TwinMind Rate Limits
Overview
Handle TwinMind rate limits gracefully with exponential backoff and request optimization.
Prerequisites
- TwinMind API access (Pro/Enterprise)
- Understanding of async/await patterns
- Familiarity with rate limiting concepts
Instructions
Step 1: Understand Rate Limit Tiers
| Tier | Audio Hours/Month | API Requests/Min | Concurrent Transcriptions | Burst |
| --- | --- | --- | --- | --- |
| Free | Unlimited | 30 | 1 | 5 |
| Pro ($10/mo) | Unlimited | 60 | 3 | 15 |
| Enterprise | Unlimited | 300 | 10 | 50 |
Key Limits:
- Transcription: Based on audio duration ($0.23/hour with Ear-3)
- AI Operations: Token-based (2M context for Pro)
- Summarization: 10/minute (Free), 30/minute (Pro)
- Memory Search: 60/minute (Free), 300/minute (Pro)
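Given the $0.23/hour Ear-3 figure above, a quick cost estimate is simple arithmetic:

```typescript
// Back-of-envelope transcription cost from the per-hour rate above.
const EAR3_USD_PER_AUDIO_HOUR = 0.23;

function monthlyTranscriptionCost(audioHoursPerMonth: number): number {
  return audioHoursPerMonth * EAR3_USD_PER_AUDIO_HOUR;
}

// e.g. a team recording 200 hours of meetings a month:
// monthlyTranscriptionCost(200) -> 46 (USD)
```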
Step 2: Implement Exponential Backoff with Jitter
// src/twinmind/rate-limit.ts
interface RateLimitConfig {
  maxRetries: number;
  baseDelayMs: number;
  maxDelayMs: number;
  jitterMs: number;
}

const defaultConfig: RateLimitConfig = {
  maxRetries: 5,
  baseDelayMs: 1000, // 1 second
  maxDelayMs: 60000, // Cap backoff at 1 minute
  jitterMs: 500, // Up to 500ms of random jitter
};

export async function withRateLimit<T>(
  operation: () => Promise<T>,
  config: Partial<RateLimitConfig> = {}
): Promise<T> {
  const { maxRetries, baseDelayMs, maxDelayMs, jitterMs } = {
    ...defaultConfig,
    ...config,
  };
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error: any) {
      if (attempt === maxRetries) throw error;
      const status = error.response?.status;
      if (status !== 429 && status !== 503) throw error; // Only retry on 429/503
      // Honor the Retry-After header when present
      const retryAfter = error.response?.headers?.['retry-after'];
      let delay: number;
      if (retryAfter) {
        delay = parseInt(retryAfter, 10) * 1000; // Retry-After is in seconds
      } else {
        // Exponential backoff with jitter
        const exponential = baseDelayMs * Math.pow(2, attempt);
        const jitter = Math.random() * jitterMs;
        delay = Math.min(exponential + jitter, maxDelayMs);
      }
      console.log(`Rate limited (attempt ${attempt + 1}). Waiting ${delay}ms...`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Max retries exceeded');
}
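Backoff reacts to 429s after the fact. A complementary, self-contained sketch is a client-side sliding-window limiter that avoids exceeding the per-minute quota in the first place (quotas per the tier table above):

```typescript
// Track request timestamps and report how long to wait before the
// next request would stay within N requests per rolling window.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(private maxPerWindow: number, private windowMs = 60_000) {}

  // Milliseconds to wait before the next request is allowed (0 = go now).
  delayUntilAllowed(now: number = Date.now()): number {
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length < this.maxPerWindow) return 0;
    return this.timestamps[0] + this.windowMs - now;
  }

  recordRequest(now: number = Date.now()): void {
    this.timestamps.push(now);
  }
}
```

Pairing this with `withRateLimit` gives both proactive pacing and reactive retry.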
Step 3: Implement Request Queue
// src/twinmind/queue.ts
import PQueue from 'p-queue';
TwinMind Reference Architecture
Overview
Production architecture for meeting AI systems using TwinMind: transcription pipeline, memory vault, action item workflow, and calendar integration. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
// TwinMind configuration
const config = {
  apiKey: process.env.TWINMIND_API_KEY,
  model: "ear-3", // Transcription model
  aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Implementation
// TwinMind Reference Architecture implementation
// Core TwinMind integration
const twinmind = {
  transcriptionModel: "ear-3",
  languages: ["en", "es", "ko", "ja", "fr"],
  features: ["transcription", "summary", "action-items"],
  privacyMode: "on-device", // Audio never stored
};

// Check transcription capabilities
async function verify() {
  const health = await fetch("https://api.twinmind.com/v1/health");
  console.log("TwinMind status:", await health.json());
}
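The pipeline this architecture describes (transcript in, summary and action items out, then into the memory vault) can be sketched as composable transforms over a meeting record. The stage implementations below are placeholders, not TwinMind SDK calls:

```typescript
// A meeting record flows through an ordered list of stages.
interface MeetingRecord {
  meetingId: string;
  transcript: string;
  summary?: string;
  actionItems?: string[];
}

type Stage = (m: MeetingRecord) => MeetingRecord;

function runPipeline(record: MeetingRecord, stages: Stage[]): MeetingRecord {
  return stages.reduce((acc, stage) => stage(acc), record);
}

// Example stages: a naive summarizer and a naive action-item extractor.
const summarize: Stage = m => ({ ...m, summary: m.transcript.slice(0, 80) });
const extractActions: Stage = m => ({
  ...m,
  actionItems: m.transcript.split("\n").filter(line => line.startsWith("TODO:")),
});
```

Keeping stages pure makes each step independently testable and lets you swap the naive extractors for AI-backed ones without touching the orchestration.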
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Reference Architecture configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
TwinMind SDK Patterns
Overview
Production patterns for TwinMind's AI memory and meeting intelligence REST API. TwinMind captures, organizes, and retrieves contextual memories from conversations and meetings.
Prerequisites
- TwinMind API key configured
- Understanding of REST API patterns
- Familiarity with memory/context retrieval concepts
Instructions
Step 1: Client Wrapper with Authentication
import requests
import os

class TwinMindClient:
    def __init__(self, api_key: str = None, base_url: str = "https://api.twinmind.com/v1"):
        self.api_key = api_key or os.environ["TWINMIND_API_KEY"]
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers.update({
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json"
        })

    def _request(self, method: str, path: str, **kwargs):
        response = self.session.request(method, f"{self.base_url}{path}", **kwargs)
        response.raise_for_status()
        return response.json()
Step 2: Memory Storage and Retrieval
from datetime import datetime

class TwinMindClient:
    # ... (continued from Step 1)
    def store_memory(self, content: str, context: dict = None, tags: list = None) -> dict:
        return self._request("POST", "/memories", json={
            "content": content,
            "context": context or {},
            "tags": tags or [],
            "timestamp": datetime.utcnow().isoformat()
        })

    def search_memories(self, query: str, limit: int = 10, tags: list = None) -> list:
        params = {"q": query, "limit": limit}
        if tags:
            params["tags"] = ",".join(tags)
        return self._request("GET", "/memories/search", params=params)

    def get_memory(self, memory_id: str) -> dict:
        return self._request("GET", f"/memories/{memory_id}")
Step 3: Meeting Context Integration
class TwinMindClient:
    # ... (continued from Step 2)
    def create_meeting_context(self, meeting_id: str, transcript: str, participants: list) -> dict:
        return self._request("POST", "/contexts/meeting", json={
            "meeting_id": meeting_id,
            "transcript": transcript,
            "participants": participants,
            "extract_action_items": True,
            "extract_decisions": True
        })

    def get_meeting_insights(self, meeting_id: str) -> dict:
        return self._request("GET", f"/contexts/meeting/{meeting_id}/insights")
Step 4: Batch Operations with Rate Limiting
TwinMind Security Basics
Overview
Security best practices for TwinMind: on-device audio processing, encrypted cloud backups, microphone permissions, and data privacy controls. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
// TwinMind configuration
const config = {
  apiKey: process.env.TWINMIND_API_KEY,
  model: "ear-3", // Transcription model
  aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Implementation
// TwinMind Security Basics implementation
// Core TwinMind integration
const twinmind = {
  transcriptionModel: "ear-3",
  languages: ["en", "es", "ko", "ja", "fr"],
  features: ["transcription", "summary", "action-items"],
  privacyMode: "on-device", // Audio never stored
};

// Check transcription capabilities
async function verify() {
  const health = await fetch("https://api.twinmind.com/v1/health");
  console.log("TwinMind status:", await health.json());
}
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Security Basics configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |
TwinMind Upgrade & Migration
Current State
!npm list 2>/dev/null | head -20
!pip freeze 2>/dev/null | head -20
Overview
Guide for upgrading TwinMind tiers and migrating configurations between environments.
Prerequisites
- Active TwinMind account
- Admin access for billing changes
- Backup of current configurations
Plan Comparison
| Feature | Free | Pro ($10/mo) | Enterprise (Custom) |
| --- | --- | --- | --- |
| Transcription | Unlimited | Unlimited | Unlimited |
| Languages | 140+ | 140+ (Premium Ear-3) | 140+ (Premium) |
| AI Models | Basic | GPT-4, Claude, Gemini | Custom + Fine-tuned |
| Context Tokens | 500K | 2M | Unlimited |
| API Access | No | Yes | Yes + Priority |
| Rate Limits | 30/min | 60/min | 300/min |
| Concurrent Jobs | 1 | 3 | 10+ |
| Support | Community | 24-hour | Dedicated |
| SSO/SAML | No | No | Yes |
| On-Premise | No | No | Yes |
| Custom Models | No | No | Yes |
| SLA | None | 99.5% | 99.9% |
Instructions
Step 1: Audit Current Usage
// scripts/usage-audit.ts
import { getTwinMindClient } from '../src/twinmind/client';

async function auditUsage() {
  const client = getTwinMindClient();

  // Get current plan info
  const account = await client.get('/account');
  console.log('Current Plan:', account.data.plan);
  console.log('Plan Started:', account.data.plan_started_at);

  // Get usage statistics
  const usage = await client.get('/usage', {
    params: {
      period: 'last_30_days',
    },
  });
  console.log('\n=== Usage Summary (Last 30 Days) ===');
  console.log(`Transcription Hours: ${usage.data.transcription_hours}`);
  console.log(`API Requests: ${usage.data.api_requests}`);
  console.log(`AI Tokens Used: ${usage.data.ai_tokens_used}`);
  console.log(`Storage Used: ${usage.data.storage_mb} MB`);

  // Check if hitting limits
  const limits = await client.get('/account/limits');
  console.log('\n=== Current Limits ===');
  console.log(`Rate Limit: ${limits.data.rate_limit_per_minute}/min`);
  console.log(`Concurrent Jobs: ${limits.data.concurrent_transcriptions}`);
  console.log(`Context Tokens: ${limits.data.context_tokens}`);

  // Recommendations
  console.log('\n=== Upgrade Recommendations ===');
if (usage.data.api_requests > 0 && account.data.plan === &
TwinMind Webhooks & Events
Overview
Handle TwinMind meeting events including transcription completion, action item extraction, and calendar sync notifications. TwinMind uses the Ear-3 speech model (5.26% WER, 3.8% DER) for transcription, with GPT-4, Claude, and Gemini for AI summarization.
Prerequisites
- TwinMind account (Free, Pro $10/mo, or Enterprise)
- Chrome extension installed and authenticated
- Understanding of TwinMind workflow
Instructions
Step 1: Setup
TwinMind operates as a Chrome extension and mobile app with optional API access for Pro/Enterprise users.
// TwinMind configuration
const config = {
  apiKey: process.env.TWINMIND_API_KEY,
  model: "ear-3", // Transcription model
  aiModels: ["gpt-4", "claude", "gemini"], // Summary models
};
Step 2: Implementation
// TwinMind Webhooks & Events implementation
// Core TwinMind integration
const twinmind = {
  transcriptionModel: "ear-3",
  languages: ["en", "es", "ko", "ja", "fr"],
  features: ["transcription", "summary", "action-items"],
  privacyMode: "on-device", // Audio never stored
};

// Check transcription capabilities
async function verify() {
  const health = await fetch("https://api.twinmind.com/v1/health");
  console.log("TwinMind status:", await health.json());
}
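Once events arrive and their signatures are verified, a typed dispatcher keeps handling explicit. The event names and payload shapes below are assumptions for illustration, not confirmed TwinMind webhook types:

```typescript
// Discriminated union over the event kinds this section covers.
// Names and fields are assumptions, not documented TwinMind payloads.
type TwinMindEvent =
  | { type: "transcription.completed"; meetingId: string; transcript: string }
  | { type: "action_items.extracted"; meetingId: string; items: string[] }
  | { type: "calendar.synced"; calendarId: string };

// Returns a description of the side effect a real handler would perform.
function handleEvent(event: TwinMindEvent): string {
  switch (event.type) {
    case "transcription.completed":
      return `store transcript for ${event.meetingId}`;
    case "action_items.extracted":
      return `create ${event.items.length} tasks for ${event.meetingId}`;
    case "calendar.synced":
      return `refresh calendar ${event.calendarId}`;
  }
}
```

The exhaustive switch means the compiler flags any event type you forget to handle when the union grows.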
Step 3: Verification
# Verify TwinMind integration
curl -H "Authorization: Bearer $TWINMIND_API_KEY" https://api.twinmind.com/v1/health | jq .
Key TwinMind Specifications
| Feature | Specification |
| --- | --- |
| Transcription model | Ear-3 (5.26% WER) |
| Speaker diarization | 3.8% DER |
| Languages | 140+ supported |
| Audio processing | On-device (no recordings stored) |
| AI models | GPT-4, Claude, Gemini (auto-routed) |
| Platforms | Chrome extension, iOS, Android |
| Pricing | Free / Pro $10/mo / Enterprise custom |
Output
- TwinMind Webhooks & Events configured and verified
- TwinMind integration operational
- Meeting transcription workflow ready
Error Handling
| Error | Cause | Solution |
| --- | --- | --- |
| Microphone access denied | Browser permissions not granted | Enable in Chrome settings |
| Transcription not starting | Audio source not detected | Check microphone selection |
| API key invalid | Incorrect or expired key | Regenerate in TwinMind dashboard |