Claude Code skill pack for AssemblyAI (18 skills)
Installation
Open Claude Code and run this command:
/plugin install assemblyai-pack@claude-code-plugins-plus
Use --global to install for all projects, or --project for current project only.
What It Does
> Claude Code skill pack for AssemblyAI speech-to-text, LeMUR, and streaming transcription (18 skills)
What it does: Gives Claude Code deep knowledge of the AssemblyAI API — async transcription with audio intelligence (speaker diarization, sentiment, entities, PII redaction), real-time streaming via WebSocket, and LeMUR for LLM-powered audio analysis (summarization, Q&A, action items). Every skill uses the real assemblyai npm package with real SDK methods.
Links: AssemblyAI Docs | Node SDK | API Reference | Pricing
Skills (18)
Configure AssemblyAI CI/CD integration with GitHub Actions and testing.
AssemblyAI CI Integration
Overview
Set up CI/CD pipelines for AssemblyAI transcription projects with unit tests (mocked), integration tests (live API), and cost-controlled test strategies.
Prerequisites
- GitHub repository with Actions enabled
- AssemblyAI API key for testing
- npm/pnpm project configured
Instructions
Step 1: GitHub Actions Workflow
# .github/workflows/assemblyai-tests.yml
name: AssemblyAI Tests
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- run: npm test -- --coverage
# Unit tests use mocked AssemblyAI client — no API key needed
integration-tests:
runs-on: ubuntu-latest
# Only run on main branch to limit API costs
if: github.ref == 'refs/heads/main'
env:
ASSEMBLYAI_API_KEY: ${{ secrets.ASSEMBLYAI_API_KEY }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- run: npm run test:integration
timeout-minutes: 5
Step 2: Configure Secrets
# Store API key as GitHub secret
gh secret set ASSEMBLYAI_API_KEY --body "your-test-api-key"
# Use a separate test key with lower quota to control costs
# Get one from https://www.assemblyai.com/app/account
Step 3: Unit Tests (Mocked — Free, Fast)
// tests/unit/transcription.test.ts
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { AssemblyAI } from 'assemblyai';
vi.mock('assemblyai', () => ({
AssemblyAI: vi.fn().mockImplementation(() => ({
transcripts: {
transcribe: vi.fn().mockResolvedValue({
id: 'mock-transcript-id',
status: 'completed',
text: 'Hello world, this is a test transcription.',
audio_duration: 5.2,
words: [
{ text: 'Hello', start: 0, end: 300, confidence: 0.99 },
{ text: 'world', start: 310, end: 600, confidence: 0.98 },
],
utterances: [
{ speaker: 'A', text: 'Hello world, this is a test transcription.', start: 0, end: 5200 },
],
}),
get: vi.fn(),
list: vi.fn().mockResolvedValue({
transcripts: [{ id: 'mock-1', status: 'completed' }],
}),
delete: vi.fn().mockResolvedValue({}),
},
lemur: {
task: vi.fn().mockResolvedValue({
request_id: 'mock-lemur-id',
response: 'This is a greeting recording.',
}),
},
})),
}));

Diagnose and fix AssemblyAI common errors and exceptions.
AssemblyAI Common Errors
Overview
Quick reference for the most common AssemblyAI errors across transcription, streaming, and LeMUR APIs with real error messages and solutions.
Prerequisites
- assemblyai package installed
- API key configured
- Access to application logs or console
Instructions
Error 1: Authentication Failed
Error: Authentication error: Invalid API key
Status: 401
Cause: API key is missing, invalid, or revoked.
Solution:
# Verify key is set
echo $ASSEMBLYAI_API_KEY
# Test directly
curl -H "Authorization: $ASSEMBLYAI_API_KEY" \
https://api.assemblyai.com/v2/transcript \
-X GET
Error 2: Transcription Status Error
{ "status": "error", "error": "Download error: unable to download..." }
Cause: The audio URL is not publicly accessible, has expired, or returned non-audio content.
Solution:
// Verify URL is accessible
const response = await fetch(audioUrl, { method: 'HEAD' });
console.log('Content-Type:', response.headers.get('content-type'));
console.log('Status:', response.status);
// Content-Type should be audio/* or video/*
// For private files, upload directly
const transcript = await client.transcripts.transcribe({
audio: './local-file.mp3', // SDK handles upload
});
Error 3: Could Not Process Audio
{ "status": "error", "error": "Audio file could not be processed" }
Cause: Corrupted file, unsupported codec, file too short (<200ms), or audio is entirely silent.
Solution:
# Check file with ffprobe
ffprobe -v quiet -print_format json -show_format -show_streams input.mp3
# Convert to a known-good format
ffmpeg -i input.unknown -ar 16000 -ac 1 -f wav output.wav
Error 4: Rate Limit Exceeded
Error: Rate limit exceeded
Status: 429
Header: Retry-After: 30
Cause: Too many concurrent requests. Free tier: 5 streams/min. Paid: 100 streams/min (auto-scales).
Solution:
import { AssemblyAI } from 'assemblyai';
async function transcribeWithBackoff(audioUrl: string, retries = 3) {
const client = new AssemblyAI({ apiKey: process.env.ASSEMBLYAI_API_KEY! });
for (let attempt = 0; attempt <= retries; attempt++) {
try {
return await client.transcripts.transcribe({ audio: audioUrl });
} catch (err: any) {
if (err.status !== 429 || attempt === retries) throw err;
const delayMs = 2 ** attempt * 1000; // 1s, 2s, 4s, ...
await new Promise((resolve) => setTimeout(resolve, delayMs));
}
}
}

Execute AssemblyAI primary workflow: async transcription with audio intelligence.
AssemblyAI Core Workflow A — Async Transcription
Overview
Primary money-path workflow: submit audio for async transcription with audio intelligence features. The SDK handles file upload (for local files), queues the transcription job, and polls until completion.
Prerequisites
- assemblyai package installed
- API key configured in ASSEMBLYAI_API_KEY
Instructions
Step 1: Basic Async Transcription
import { AssemblyAI } from 'assemblyai';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
// Remote URL — SDK queues and polls automatically
const transcript = await client.transcripts.transcribe({
audio: 'https://example.com/meeting-recording.mp3',
});
console.log(transcript.text);
console.log(`Duration: ${transcript.audio_duration}s`);
console.log(`Words: ${transcript.words?.length}`);
Step 2: Local File Upload
// The SDK uploads the file and transcribes in one call
const transcript = await client.transcripts.transcribe({
audio: './recordings/interview.wav',
});
// Or from a buffer/stream
import fs from 'fs';
const buffer = fs.readFileSync('./recordings/interview.wav');
const transcript2 = await client.transcripts.transcribe({
audio: buffer,
});
Step 3: Speaker Diarization
const transcript = await client.transcripts.transcribe({
audio: audioUrl,
speaker_labels: true,
speakers_expected: 3, // Optional: hint for expected speaker count
});
// Utterances are grouped by speaker
for (const utterance of transcript.utterances ?? []) {
console.log(`Speaker ${utterance.speaker}: ${utterance.text}`);
// Speaker A: Good morning, thanks for joining.
// Speaker B: Happy to be here.
}
Step 4: Full Audio Intelligence Stack
const transcript = await client.transcripts.transcribe({
audio: audioUrl,
// Speaker identification
speaker_labels: true,
// Content analysis
sentiment_analysis: true,
entity_detection: true,
auto_highlights: true,
iab_categories: true, // Topic detection (IAB taxonomy)
content_safety: true, // Flag sensitive content
summarization: true,
summary_model: 'informative',
summary_type: 'bullets',
// Formatting
punctuate: true,
format_text: true,
language_code: 'en',
// Word boost for domain terms
word_boost: ['AssemblyAI', 'LeMUR', 'transcription'],
boost_param: 'high',
});
// --- Access results ---
// Sentiment per sentence
for (const s of transcript.sentiment_analysis_results ?? []) {
console.log(`[${s.sentiment}] ${s.text}`);
// [POSITIVE] I really enjoyed working on this project.
}
// Named entities
for (const e of transcript.entities ?? []) {
console.log(`${e.entity_type}: ${e.text}`);
}

Execute AssemblyAI streaming transcription and LeMUR workflows.
AssemblyAI Core Workflow B — Streaming & LeMUR
Overview
Two advanced workflows: (1) real-time streaming transcription via WebSocket for live captioning and voice agents, and (2) LeMUR for applying LLMs to transcripts — summarization, Q&A, action items, and custom tasks.
Prerequisites
- assemblyai package installed (npm install assemblyai)
- API key configured in ASSEMBLYAI_API_KEY
- For streaming: microphone or audio stream source
Part 1: Real-Time Streaming Transcription
Step 1: Basic Streaming Setup
import { AssemblyAI } from 'assemblyai';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
const transcriber = client.streaming.createService({
// Model options: 'nova-3' (default), 'nova-3-pro' (highest accuracy)
speech_model: 'nova-3',
sample_rate: 16000,
});
transcriber.on('open', ({ sessionId }) => {
console.log('Session opened:', sessionId);
});
transcriber.on('transcript', (message) => {
// message_type: 'PartialTranscript' or 'FinalTranscript'
if (message.message_type === 'FinalTranscript') {
console.log('[Final]', message.text);
} else {
process.stdout.write(`\r[Partial] ${message.text}`);
}
});
transcriber.on('error', (error) => {
console.error('Streaming error:', error);
});
transcriber.on('close', (code, reason) => {
console.log('Session closed:', code, reason);
});
await transcriber.connect();
// Send audio chunks (16-bit PCM, 16kHz mono)
// transcriber.sendAudio(audioBuffer);
// When done:
// await transcriber.close();
Step 2: Stream from Microphone (Node.js)
import { AssemblyAI } from 'assemblyai';
import { spawn } from 'child_process';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
const transcriber = client.streaming.createService({
speech_model: 'nova-3',
sample_rate: 16000,
});
transcriber.on('transcript', (msg) => {
if (msg.message_type === 'FinalTranscript' && msg.text) {
console.log(msg.text);
}
});
await transcriber.connect();
// Use SoX to capture microphone audio as raw PCM
const mic = spawn('sox', [
'-d', // default audio device
'-t', 'raw', // raw PCM output
'-b', '16', // 16-bit
'-r', '16000', // 16kHz sample rate
'-c', '1', // mono
'-e', 'signed-integer',
'-', // pipe to stdout
]);
mic.stdout.on('data', (chunk: Buffer) => {
transcriber.sendAudio(chunk);
});
mic.on('close', async () => {
await transcriber.close();
});

Optimize AssemblyAI costs through model selection, feature budgeting, and usage monitoring.
AssemblyAI Cost Tuning
Overview
Optimize AssemblyAI costs through model selection, feature-aware billing, and usage monitoring. AssemblyAI charges per audio hour with add-on pricing for intelligence features.
Prerequisites
- Access to AssemblyAI billing dashboard at https://www.assemblyai.com/app
- Understanding of current usage patterns
Actual Pricing (Pay-As-You-Go)
Speech-to-Text (Async)
| Model | Price per Hour | Best For |
|---|---|---|
| Best (Universal-3) | $0.37/hr | Highest accuracy, production |
| Nano | $0.12/hr | High volume, cost-sensitive |
Streaming Speech-to-Text
| Model | Price per Hour |
|---|---|
| Universal Streaming | $0.47/hr |
Audio Intelligence Add-Ons
| Feature | Additional Cost per Hour |
|---|---|
| Speaker Diarization | $0.02/hr |
| Sentiment Analysis | $0.02/hr |
| Entity Detection | $0.08/hr |
| Auto Highlights | Included |
| Content Safety | $0.02/hr |
| IAB Categories | $0.02/hr |
| Summarization | Included (uses LeMUR) |
| PII Redaction | $0.02/hr |
| PII Audio Redaction | +processing time |
LeMUR
| Model | Price per Input Token | Price per Output Token |
|---|---|---|
| Default | ~$0.003/1K tokens | ~$0.015/1K tokens |
Instructions
Step 1: Cost Estimation Calculator
interface CostEstimate {
baseTranscriptionCost: number;
featuresCost: number;
totalCost: number;
breakdown: Record<string, number>;
}
function estimateTranscriptionCost(
audioHours: number,
options: {
model?: 'best' | 'nano';
speakerLabels?: boolean;
sentimentAnalysis?: boolean;
entityDetection?: boolean;
contentSafety?: boolean;
iabCategories?: boolean;
piiRedaction?: boolean;
} = {}
): CostEstimate {
const model = options.model ?? 'best';
const baseRate = model === 'best' ? 0.37 : 0.12;
const baseCost = audioHours * baseRate;
const breakdown: Record<string, number> = {
[`transcription (${model})`]: baseCost,
};
let featuresCost = 0;
if (options.speakerLabels) {
const cost = audioHours * 0.02;
breakdown['speaker_labels'] = cost;
featuresCost += cost;
}
if (options.sentimentAnalysis) {
const cost = audioHours * 0.02;
breakdown['sentiment_analysis'] = cost;
featuresCost += cost;
}
// Remaining features (entity detection, content safety, IAB, PII) follow the same pattern
return {
baseTranscriptionCost: baseCost,
featuresCost,
totalCost: baseCost + featuresCost,
breakdown,
};
}

Collect AssemblyAI debug evidence for support tickets and troubleshooting.
AssemblyAI Debug Bundle
Overview
Collect all diagnostic information needed to resolve AssemblyAI issues — SDK version, transcript status, API connectivity, and configuration — packaged for support tickets.
Prerequisites
- assemblyai package installed
- Access to application logs
- Failed transcript ID (if applicable)
Instructions
Step 1: Create Debug Bundle Script
#!/bin/bash
# assemblyai-debug-bundle.sh
set -euo pipefail
BUNDLE_DIR="assemblyai-debug-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BUNDLE_DIR"
echo "=== AssemblyAI Debug Bundle ===" > "$BUNDLE_DIR/summary.txt"
echo "Generated: $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$BUNDLE_DIR/summary.txt"
echo "" >> "$BUNDLE_DIR/summary.txt"
# Environment
echo "--- Runtime ---" >> "$BUNDLE_DIR/summary.txt"
node --version >> "$BUNDLE_DIR/summary.txt" 2>&1 || echo "Node.js not found" >> "$BUNDLE_DIR/summary.txt"
echo "Platform: $(uname -s) $(uname -m)" >> "$BUNDLE_DIR/summary.txt"
echo "ASSEMBLYAI_API_KEY: ${ASSEMBLYAI_API_KEY:+[SET (${#ASSEMBLYAI_API_KEY} chars)]}" >> "$BUNDLE_DIR/summary.txt"
echo "" >> "$BUNDLE_DIR/summary.txt"
# SDK version
echo "--- SDK Version ---" >> "$BUNDLE_DIR/summary.txt"
npm list assemblyai 2>/dev/null >> "$BUNDLE_DIR/summary.txt" || echo "assemblyai not in node_modules" >> "$BUNDLE_DIR/summary.txt"
echo "" >> "$BUNDLE_DIR/summary.txt"
# API connectivity
echo "--- API Connectivity ---" >> "$BUNDLE_DIR/summary.txt"
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
-H "Authorization: ${ASSEMBLYAI_API_KEY:-none}" \
https://api.assemblyai.com/v2/transcript 2>/dev/null || echo "FAILED")
echo "GET /v2/transcript: HTTP $HTTP_CODE" >> "$BUNDLE_DIR/summary.txt"
STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
https://api.assemblyai.com/v2 2>/dev/null || echo "FAILED")
echo "GET /v2: HTTP $STATUS_CODE" >> "$BUNDLE_DIR/summary.txt"
echo "" >> "$BUNDLE_DIR/summary.txt"
# AssemblyAI service status
echo "--- Service Status ---" >> "$BUNDLE_DIR/summary.txt"
curl -s https://status.assemblyai.com/api/v2/status.json 2>/dev/null \
| python3 -m json.tool 2>/dev/null >> "$BUNDLE_DIR/summary.txt" \
|| echo "Could not fetch status" >> "$BUNDLE_DIR/summary.txt"
# Package bundle
tar -czf "$BUNDLE_DIR.tar.gz" "$BUNDLE_DIR"
rm -rf "$BUNDLE_DIR"
echo ""
echo "Bundle created: $BUNDLE_DIR.tar.gz"

Deploy AssemblyAI integrations to Vercel, Cloud Run, and Fly.io.
AssemblyAI Deploy Integration
Overview
Deploy AssemblyAI-powered transcription services to Vercel (serverless), Google Cloud Run (containers), and Fly.io with proper secrets management and webhook configuration.
Prerequisites
- AssemblyAI API key for production
- Platform CLI installed (vercel, gcloud, or fly)
- Application with working AssemblyAI integration
Instructions
Vercel Deployment (Serverless)
# Add secrets
vercel env add ASSEMBLYAI_API_KEY production
vercel env add ASSEMBLYAI_WEBHOOK_SECRET production
# Deploy
vercel --prod
API Route for Transcription:
// app/api/transcribe/route.ts (Next.js App Router)
import { AssemblyAI } from 'assemblyai';
import { NextRequest, NextResponse } from 'next/server';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
export async function POST(req: NextRequest) {
const { audioUrl, features } = await req.json();
if (!audioUrl) {
return NextResponse.json({ error: 'audioUrl required' }, { status: 400 });
}
// Use submit() + webhook for production (non-blocking)
const transcript = await client.transcripts.submit({
audio: audioUrl,
webhook_url: `${process.env.NEXT_PUBLIC_APP_URL}/api/webhooks/assemblyai`,
webhook_auth_header_name: 'X-Webhook-Secret',
webhook_auth_header_value: process.env.ASSEMBLYAI_WEBHOOK_SECRET!,
speaker_labels: features?.speakerLabels ?? false,
sentiment_analysis: features?.sentiment ?? false,
});
return NextResponse.json({
transcriptId: transcript.id,
status: transcript.status,
});
}
Webhook Handler:
// app/api/webhooks/assemblyai/route.ts
import { AssemblyAI } from 'assemblyai';
import { NextRequest, NextResponse } from 'next/server';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
export async function POST(req: NextRequest) {
// Verify webhook authenticity
const secret = req.headers.get('x-webhook-secret');
if (secret !== process.env.ASSEMBLYAI_WEBHOOK_SECRET) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
}
const { transcript_id, status } = await req.json();
if (status === 'completed') {
const transcript = await client.transcripts.get(transcript_id);
// Store transcript, notify user, trigger downstream processing
console.log(`Transcript ${transcript_id} completed: ${transcript.text?.length} chars`);
} else if (status === 'error') {
console.error(`Transcript ${transcript_id} failed`);
}
return NextResponse.json({ received: true });
}
Vercel config:
{
"functions": {
"app/api/transcribe/route.ts": { "maxDuration": 60 }
}
}

Create a minimal working AssemblyAI transcription example.
AssemblyAI Hello World
Overview
Minimal working examples demonstrating AssemblyAI's three core capabilities: async transcription, audio intelligence features, and LeMUR (LLM-powered analysis).
Prerequisites
- Completed assemblyai-install-auth setup
- Valid API key configured in ASSEMBLYAI_API_KEY
Instructions
Step 1: Basic Transcription (Remote URL)
import { AssemblyAI } from 'assemblyai';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
async function transcribeUrl() {
const transcript = await client.transcripts.transcribe({
audio: 'https://storage.googleapis.com/aai-web-samples/5_common_sports_702.wav',
});
if (transcript.status === 'error') {
throw new Error(`Transcription failed: ${transcript.error}`);
}
console.log('Transcript:', transcript.text);
console.log('Duration:', transcript.audio_duration, 'seconds');
console.log('Word count:', transcript.words?.length);
}
transcribeUrl().catch(console.error);
Step 2: Transcribe a Local File
async function transcribeLocal() {
// The SDK handles upload automatically when you pass a local path
const transcript = await client.transcripts.transcribe({
audio: './recording.mp3',
});
console.log('Transcript:', transcript.text);
// Access word-level timestamps
for (const word of transcript.words ?? []) {
console.log(`[${word.start}ms - ${word.end}ms] ${word.text} (${word.confidence})`);
}
}
Step 3: Enable Audio Intelligence Features
async function transcribeWithIntelligence() {
const transcript = await client.transcripts.transcribe({
audio: 'https://storage.googleapis.com/aai-web-samples/5_common_sports_702.wav',
speaker_labels: true, // Who said what
auto_highlights: true, // Key phrases extraction
sentiment_analysis: true, // Sentiment per sentence
entity_detection: true, // Named entities (people, orgs, locations)
summarization: true, // Auto-summary
summary_model: 'informative',
summary_type: 'bullets',
});
// Speaker diarization
for (const utterance of transcript.utterances ?? []) {
console.log(`Speaker ${utterance.speaker}: ${utterance.text}`);
}
// Key phrases
for (const result of transcript.auto_highlights_result?.results ?? []) {
console.log(`Key phrase: "${result.text}" (mentioned ${result.count} times)`);
}
// Sentiment analysis
for (const result of transcript.sentiment_analysis_results ?? []) {
console.log(`${result.sentiment}: "${result.text}"`);
}
// Summary
console.log('Summary:', transcript.summary);
}
Step 4: LeMUR
Install and configure AssemblyAI SDK authentication.
AssemblyAI Install & Auth
Overview
Install the assemblyai npm package and configure API key authentication for transcription, LeMUR, and streaming APIs.
Prerequisites
- Node.js 18+ or Python 3.10+
- Package manager (npm, pnpm, yarn, or pip)
- AssemblyAI account — sign up at https://www.assemblyai.com/dashboard/signup
- API key from https://www.assemblyai.com/app/account
Instructions
Step 1: Install the SDK
# Node.js (official SDK)
npm install assemblyai
# Python
pip install assemblyai
Step 2: Configure API Key
# Set environment variable (recommended)
export ASSEMBLYAI_API_KEY="your-api-key-here"
# Or add to .env file
echo 'ASSEMBLYAI_API_KEY=your-api-key-here' >> .env
Add to .gitignore:
.env
.env.local
.env.*.local
Step 3: Initialize the Client
// src/assemblyai/client.ts
import { AssemblyAI } from 'assemblyai';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
export default client;
Step 4: Verify Connection
// verify-connection.ts
import { AssemblyAI } from 'assemblyai';
const client = new AssemblyAI({
apiKey: process.env.ASSEMBLYAI_API_KEY!,
});
async function verify() {
// Transcribe a short public audio to confirm everything works
const transcript = await client.transcripts.transcribe({
audio: 'https://storage.googleapis.com/aai-web-samples/5_common_sports_702.wav',
});
if (transcript.status === 'error') {
console.error('Transcription failed:', transcript.error);
process.exit(1);
}
console.log('Connection verified. Transcript ID:', transcript.id);
console.log('Status:', transcript.status);
console.log('Text preview:', transcript.text?.slice(0, 100));
}
verify().catch(console.error);
Python Setup
import assemblyai as aai
import os
# Configure globally
aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]
# Or pass per-client
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://storage.googleapis.com/aai-web-samples/5_common_sports_702.wav"
)
print(transcript.text)
Output
- Installed assemblyai package in node_modules or site-packages
- API key stored in an environment variable or .env file
- Client initialized and connection verified with a test transcription
Error Handling
| Error | Cause | Solution |
|---|---|---|
| Authentication error | Invalid or missing API key | Verify the key is set and valid; regenerate it in the dashboard if needed |

Configure AssemblyAI local development with hot reload and testing.
AssemblyAI Local Dev Loop
Overview
Set up a fast, reproducible local development workflow for AssemblyAI transcription and LeMUR projects with mocking, caching, and hot reload.
Prerequisites
Instructions
Step 1: Project Structure
Step 2: Dev Scripts
Step 3: Singleton Client with Env Loading
Step 4: Cache Transcription Results for Fast Iteration

Optimize AssemblyAI API performance with caching, parallel processing, and webhook-based architectures.
AssemblyAI Performance Tuning
Overview
Optimize AssemblyAI transcription performance through model selection, parallel processing, caching, and webhook-based architectures.
Prerequisites
Latency Benchmarks (Actual)
Async Transcription
Streaming
Model Speed vs. Accuracy
Instructions
Step 1: Choose the Right Model
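Step 1 mostly comes down to the async API's `speech_model` option, which accepts 'best' or 'nano'. The selection helper below is hypothetical.

```typescript
// Hypothetical helper: trade accuracy for cost per the pricing table above
export function pickSpeechModel(costSensitive: boolean): 'best' | 'nano' {
  return costSensitive ? 'nano' : 'best';
}

// Usage with the SDK (sketch):
// const transcript = await client.transcripts.transcribe({
//   audio: audioUrl,
//   speech_model: pickSpeechModel(true), // 'nano' for high-volume workloads
// });
```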
Step 2: Parallel Batch Processing

Execute AssemblyAI production deployment checklist and rollback procedures.
AssemblyAI Production Checklist
Overview
Complete checklist for deploying AssemblyAI-powered transcription services to production with health checks, monitoring, and rollback procedures.
Prerequisites
Instructions
Pre-Deployment Checklist
API Key & Auth
Code Quality
Error Handling
Performance
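The API-key item in the pre-deployment checklist above can be automated with a small validation helper; the helper name is mine.

```typescript
// Hypothetical pre-deploy check: report required env vars that are missing or blank
export function validateEnv(env: Record<string, string | undefined>): string[] {
  const required = ['ASSEMBLYAI_API_KEY'];
  return required.filter((name) => !env[name] || env[name]!.trim() === '');
}

// Usage at process startup:
// const missing = validateEnv(process.env);
// if (missing.length > 0) {
//   throw new Error(`Missing required env vars: ${missing.join(', ')}`);
// }
```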
Health Check Implementation
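One way to sketch the health check is to inject the API call, so the probe logic stays unit-testable. The Express wiring in the trailing comment is illustrative, not part of the probe.

```typescript
// Health probe sketch: the API call is injected so it can be mocked
export async function probeAssemblyAI(
  listFn: () => Promise<unknown>
): Promise<{ ok: boolean; error?: string }> {
  try {
    await listFn(); // any cheap authenticated call works as a liveness check
    return { ok: true };
  } catch (err) {
    return { ok: false, error: String(err) };
  }
}

// Wiring (illustrative; assumes Express and an initialized client):
// app.get('/health', async (_req, res) => {
//   const result = await probeAssemblyAI(() => client.transcripts.list({ limit: 1 }));
//   res.status(result.ok ? 200 : 503).json(result);
// });
```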
Webhook-Based Processing (Recommended for Production)

Implement AssemblyAI rate limiting, backoff, and queue-based throttling.
AssemblyAI Rate Limits
Overview
Handle AssemblyAI rate limits with exponential backoff, queue-based throttling, and concurrency management. AssemblyAI auto-scales limits for paid users.
Prerequisites
Rate Limit Tiers (Actual)
Async Transcription API
Streaming (WebSocket)
LeMUR
Note: AssemblyAI auto-scales paid limits. At 70%+ utilization, the new-session rate limit increases by 10% every 60 seconds with no ceiling cap.
Instructions
Step 1: Exponential Backoff with Jitter
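Step 1 can be sketched as follows. The 429 status check mirrors the rate-limit error shape shown earlier; `withBackoff` and `backoffDelayMs` are names of my choosing.

```typescript
// Exponential backoff with full jitter (sketch)
export function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling; // full jitter: uniform in [0, ceiling)
}

export async function withBackoff<T>(fn: () => Promise<T>, retries = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Only retry rate-limit responses, and only up to the retry budget
      if (err?.status !== 429 || attempt >= retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

Full jitter spreads retries out so many clients hitting the same limit do not retry in lockstep.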
Step 2: Queue-Based Concurrency Control

Implement AssemblyAI reference architecture with best-practice project structure.
AssemblyAI Reference Architecture
Overview
Production-ready architecture for AssemblyAI-powered transcription services with layered design, webhook-driven processing, and LeMUR analysis pipelines.
Prerequisites
Project Structure
Architecture Layers
Instructions
Step 1: Client Layer
Step 2: Transcription Service

Apply production-ready AssemblyAI SDK patterns for TypeScript and Python.
AssemblyAI SDK Patterns
Overview
Production-ready patterns for the assemblyai SDK in TypeScript and Python.
Prerequisites
Instructions
Step 1: Type-Safe Singleton Client
Step 2: Transcription Service Wrapper
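Step 2's wrapper can be sketched against a minimal structural interface rather than the full SDK type, which keeps it easy to mock; `client.transcripts` satisfies the interface at the call site.

```typescript
// Minimal structural interface so the wrapper is easy to mock in tests
interface TranscriptsApi {
  transcribe(params: {
    audio: string;
  }): Promise<{ status: string; error?: string | null; text?: string | null }>;
}

export class TranscriptionService {
  constructor(private readonly api: TranscriptsApi) {}

  // Normalize SDK results into an ok/error union
  async transcribe(audio: string) {
    const t = await this.api.transcribe({ audio });
    if (t.status === 'error') {
      return { ok: false as const, error: t.error ?? 'unknown error' };
    }
    return { ok: true as const, text: t.text ?? '' };
  }
}

// Usage (sketch): new TranscriptionService(client.transcripts)
```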
Step 3: Error Handling Wrapper

Apply AssemblyAI security best practices for API keys, PII, and access control.
AssemblyAI Security Basics
Overview
Security best practices for AssemblyAI: API key management, temporary tokens for browser clients, PII redaction, and data retention policies.
Prerequisites
Instructions
Step 1: API Key Management
Step 2: Temporary Tokens for Browser Streaming
Never expose your API key in frontend code. Use temporary tokens for browser-side streaming:
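A sketch of a server-side token-minting helper using the REST endpoint. The endpoint path and `expires_in` payload are based on the public API; verify them against the current streaming docs, as this surface has changed between versions.

```typescript
// Build the token request separately so it can be unit-tested (sketch;
// verify the endpoint and payload against current AssemblyAI docs)
export function tokenRequest(apiKey: string) {
  return {
    url: 'https://api.assemblyai.com/v2/realtime/token',
    init: {
      method: 'POST',
      headers: { Authorization: apiKey, 'Content-Type': 'application/json' },
      body: JSON.stringify({ expires_in: 300 }), // token lifetime in seconds
    },
  };
}

// Server-side only: the long-lived API key must never reach the browser
export async function mintStreamingToken(apiKey: string): Promise<string> {
  const { url, init } = tokenRequest(apiKey);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Token request failed: HTTP ${res.status}`);
  const { token } = (await res.json()) as { token: string };
  return token;
}
```

The browser client then connects to the streaming WebSocket with the short-lived token instead of the API key.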
Step 3: PII Redaction in Transcripts
Step 4: Data Retention and Deletion

Analyze, plan, and execute AssemblyAI SDK upgrades with breaking-change analysis.
AssemblyAI Upgrade & Migration
Overview
Guide for upgrading the assemblyai SDK, including the migration to the current v4.x line.
Prerequisites
Instructions
Step 1: Check Current Version
Step 2: Review Changelog
Step 3: Create Upgrade Branch
Step 4: Major Migration — Old SDK to Current (v4.x)
If migrating from an older major version:
Key changes in the migration:
Step 5: Transcription Method Changes

Implement AssemblyAI webhook handling for transcription completion events.
AssemblyAI Webhooks & Events
Overview
Handle AssemblyAI webhooks for transcription completion. When you submit a transcript with a webhook_url, AssemblyAI calls it once processing finishes.
Prerequisites
How AssemblyAI Webhooks Work
Key difference from other APIs: AssemblyAI webhooks are per-transcript (set at submission time), not a global webhook registration. There are no event types to subscribe to — you get one callback per transcript.
Instructions
Step 1: Submit Transcription with Webhook
Step 2: Webhook Endpoint (Express.js)
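For Step 2, one approach is to keep the Express route thin and put the decision logic in a pure function; `handleWebhookPayload` is a name of my choosing, and the wiring comment assumes the express package.

```typescript
// Pure decision logic (sketch), kept separate from the Express wiring so it can be unit-tested
export function handleWebhookPayload(
  payload: { transcript_id?: string; status?: string },
  secretHeader: string | undefined,
  expectedSecret: string
): { status: number; body: Record<string, unknown> } {
  if (secretHeader !== expectedSecret) {
    return { status: 401, body: { error: 'unauthorized' } };
  }
  if (!payload.transcript_id) {
    return { status: 400, body: { error: 'missing transcript_id' } };
  }
  // 'completed' / 'error' handling (fetch transcript, store, notify) goes here
  return { status: 200, body: { received: true } };
}

// Express wiring (illustrative; assumes the express package):
// app.post('/api/webhooks/assemblyai', express.json(), (req, res) => {
//   const result = handleWebhookPayload(
//     req.body,
//     req.header('x-webhook-secret'),
//     process.env.ASSEMBLYAI_WEBHOOK_SECRET!
//   );
//   res.status(result.status).json(result.body);
// });
```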
Tags
assemblyai, saas, sdk, integration