Claude Code skill pack for Snowflake (30 skills)
Installation
Open Claude Code and run this command:
/plugin install snowflake-pack@claude-code-plugins-plus
Use --global to install for all projects, or --project for current project only.
Skills (30)
Apply Snowflake advanced debugging techniques for hard-to-diagnose issues.
Snowflake Advanced Troubleshooting
Overview
Deep debugging techniques for complex Snowflake issues that resist standard troubleshooting.
Prerequisites
- Access to production logs and metrics
- kubectl access to clusters
- Network capture tools available
- Understanding of distributed tracing
Evidence Collection Framework
Comprehensive Debug Bundle
#!/bin/bash
# advanced-snowflake-debug.sh
BUNDLE="snowflake-advanced-debug-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BUNDLE"/{logs,metrics,network,config,traces}
# 1. Extended logs (1 hour window)
kubectl logs -l app=snowflake-integration --since=1h > "$BUNDLE/logs/pods.log"
journalctl -u snowflake-service --since "1 hour ago" > "$BUNDLE/logs/system.log"
# 2. Metrics dump
curl -s 'localhost:9090/api/v1/query?query=snowflake_requests_total' > "$BUNDLE/metrics/requests.json"
curl -s 'localhost:9090/api/v1/query?query=snowflake_errors_total' > "$BUNDLE/metrics/errors.json"
# 3. Network capture (30 seconds)
timeout 30 tcpdump -i any port 443 -w "$BUNDLE/network/capture.pcap" &
# 4. Distributed traces
curl -s localhost:16686/api/traces?service=snowflake > "$BUNDLE/traces/jaeger.json"
# 5. Configuration state
kubectl get cm snowflake-config -o yaml > "$BUNDLE/config/configmap.yaml"
kubectl get secret snowflake-secrets -o yaml | sed -E 's/^( +[^:]+): .+/\1: ***REDACTED***/' > "$BUNDLE/config/secrets-redacted.yaml" # mask values so secrets never land in the bundle
wait # let the backgrounded 30-second tcpdump capture finish before packaging
tar -czf "$BUNDLE.tar.gz" "$BUNDLE"
echo "Advanced debug bundle: $BUNDLE.tar.gz"
Systematic Isolation
Layer-by-Layer Testing
// Test each layer independently
async function diagnoseSnowflakeIssue(): Promise<DiagnosisReport> {
const results: DiagnosisResult[] = [];
// Layer 1: Network connectivity
results.push(await testNetworkConnectivity());
// Layer 2: DNS resolution
results.push(await testDNSResolution('api.snowflake.com'));
// Layer 3: TLS handshake
results.push(await testTLSHandshake('api.snowflake.com'));
// Layer 4: Authentication
results.push(await testAuthentication());
// Layer 5: API response
results.push(await testAPIResponse());
// Layer 6: Response parsing
results.push(await testResponseParsing());
return { results, firstFailure: results.find(r => !r.success) };
}
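Each layer test above returns the same result shape; a minimal sketch of that shape and a wrapper that converts a thrown error into a failed result (the `DiagnosisResult` fields and `runLayer` helper are assumptions, not SDK types):

```typescript
interface DiagnosisResult {
  layer: string;
  success: boolean;
  detail?: string;
}

// Run one layer's check, trapping errors so a failing layer
// produces a result instead of aborting the whole diagnosis.
async function runLayer(
  layer: string,
  check: () => Promise<void>
): Promise<DiagnosisResult> {
  try {
    await check();
    return { layer, success: true };
  } catch (error) {
    return { layer, success: false, detail: (error as Error).message };
  }
}
```

Each `testXxx` helper can then be expressed as `runLayer('dns', ...)` and so on, and `firstFailure` pinpoints the lowest failing layer.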
Minimal Reproduction
// Strip down to absolute minimum
async function minimalRepro(): Promise<void> {
// 1. Fresh client, no customization
const client = new SnowflakeClient({
apiKey: process.env.SNOWFLAKE_API_KEY!,
});
// 2. Simplest possible call
try {
const result = await client.ping();
console.log('Ping successful:', result);
} catch (error) {
console.error('Ping failed:', error);
}
}
Choose and implement Snowflake validated architecture blueprints for different scales.
Snowflake Architecture Variants
Overview
Three validated architecture blueprints for Snowflake integrations.
Prerequisites
- Understanding of team size and DAU requirements
- Knowledge of deployment infrastructure
- Clear SLA requirements
- Growth projections available
Variant A: Monolith (Simple)
Best for: MVPs, small teams, < 10K daily active users
my-app/
├── src/
│ ├── snowflake/
│ │ ├── client.ts # Singleton client
│ │ ├── types.ts # Types
│ │ └── middleware.ts # Express middleware
│ ├── routes/
│ │ └── api/
│ │ └── snowflake.ts # API routes
│ └── index.ts
├── tests/
│ └── snowflake.test.ts
└── package.json
Key Characteristics
- Single deployment unit
- Synchronous Snowflake calls in request path
- In-memory caching
- Simple error handling
Code Pattern
// Direct integration in route handler
app.post('/api/create', async (req, res) => {
try {
const result = await snowflakeClient.create(req.body);
res.json(result);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
Variant B: Service Layer (Moderate)
Best for: Growing startups, 10K-100K DAU, multiple integrations
my-app/
├── src/
│ ├── services/
│ │ ├── snowflake/
│ │ │ ├── client.ts # Client wrapper
│ │ │ ├── service.ts # Business logic
│ │ │ ├── repository.ts # Data access
│ │ │ └── types.ts
│ │ └── index.ts # Service exports
│ ├── controllers/
│ │ └── snowflake.ts
│ ├── routes/
│ ├── middleware/
│ ├── queue/
│ │ └── snowflake-processor.ts # Async processing
│ └── index.ts
├── config/
│ └── snowflake/
└── package.json
Key Characteristics
- Separation of concerns
- Background job processing
- Redis caching
- Circuit breaker pattern
- Structured error handling
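The circuit breaker listed above can be sketched as a small state machine; the threshold, reset window, and injectable clock below are illustrative choices, not SDK settings:

```typescript
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private failures = 0;
  private state: BreakerState = 'closed';
  private openedAt = 0;

  constructor(
    private threshold = 5,
    private resetMs = 30_000,
    private now: () => number = Date.now
  ) {}

  async exec<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'open') {
      if (this.now() - this.openedAt < this.resetMs) {
        throw new Error('Circuit open: skipping Snowflake call');
      }
      this.state = 'half-open'; // allow a single probe request
    }
    try {
      const result = await fn();
      this.failures = 0;
      this.state = 'closed';
      return result;
    } catch (error) {
      this.failures += 1;
      if (this.failures >= this.threshold) {
        this.state = 'open';
        this.openedAt = this.now();
      }
      throw error;
    }
  }
}
```

Wrapping every Snowflake call in `breaker.exec(...)` stops a failing upstream from tying up request threads while it recovers.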
Code Pattern
// Service layer abstraction
class SnowflakeService {
constructor(
private client: SnowflakeClient,
private cache: CacheService,
private queue: QueueService
) {}
async createResource(data: CreateInput): Promise<Resource> {
// Business logic before API call
const validated = this.validate(data);
// Check cache (key derived from the validated input)
const cacheKey = `snowflake:create:${JSON.stringify(validated)}`;
const cached = await this.cache.get(cacheKey);
if (cached) return cached;
// API call with retry
const result = await this.withRetry(() =>
this.client.create(validated)
);
// Cache result
await this.cache.set(cacheKey, result, 300);
// Async follow-up
await this.queue.enqueue('snowflake.post-create', result);
return result;
}
}
Configure Snowflake CI/CD integration with GitHub Actions and testing.
Snowflake CI Integration
Overview
Set up CI/CD pipelines for Snowflake integrations with automated testing.
Prerequisites
- GitHub repository with Actions enabled
- Snowflake test API key
- npm/pnpm project configured
Instructions
Step 1: Create GitHub Actions Workflow
Create .github/workflows/snowflake-integration.yml:
name: Snowflake Integration Tests
on:
push:
branches: [main]
pull_request:
branches: [main]
env:
SNOWFLAKE_API_KEY: ${{ secrets.SNOWFLAKE_API_KEY }}
jobs:
test:
runs-on: ubuntu-latest
env:
SNOWFLAKE_API_KEY: ${{ secrets.SNOWFLAKE_API_KEY }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- run: npm test -- --coverage
- run: npm run test:integration
Step 2: Configure Secrets
gh secret set SNOWFLAKE_API_KEY --body "sk_test_***"
Step 3: Add Integration Tests
describe('Snowflake Integration', () => {
it.skipIf(!process.env.SNOWFLAKE_API_KEY)('should connect', async () => {
const client = getSnowflakeClient();
const result = await client.healthCheck();
expect(result.status).toBe('ok');
});
});
Output
- Automated test pipeline
- PR checks configured
- Coverage reports uploaded
- Release workflow ready
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Secret not found | Missing configuration | Add secret via gh secret set |
| Tests timeout | Network issues | Increase timeout or mock |
| Auth failures | Invalid key | Check secret value |
Examples
Release Workflow
on:
push:
tags: ['v*']
jobs:
release:
runs-on: ubuntu-latest
env:
SNOWFLAKE_API_KEY: ${{ secrets.SNOWFLAKE_API_KEY_PROD }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- run: npm ci
- name: Verify Snowflake production readiness
run: npm run test:integration
- run: npm run build
- run: npm publish
Branch Protection
required_status_checks:
- "test"
- "snowflake-integration"
Resources
Next Steps
For dep
Diagnose and fix Snowflake common errors and exceptions.
Snowflake Common Errors
Overview
Quick reference for the top 10 most common Snowflake errors and their solutions.
Prerequisites
- Snowflake SDK installed
- API credentials configured
- Access to error logs
Instructions
Step 1: Identify the Error
Check error message and code in your logs or console.
Step 2: Find Matching Error Below
Match your error to one of the documented cases.
Step 3: Apply Solution
Follow the solution steps for your specific error.
Output
- Identified error cause
- Applied fix
- Verified resolution
Error Handling
Authentication Failed
Error Message:
Authentication error: Invalid API key
Cause: API key is missing, expired, or invalid.
Solution:
# Verify API key is set
echo $SNOWFLAKE_API_KEY
Rate Limit Exceeded
Error Message:
Rate limit exceeded. Please retry after X seconds.
Cause: Too many requests in a short period.
Solution:
Implement exponential backoff. See snowflake-rate-limits skill.
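A minimal backoff sketch; the retry count, base delay, and the `status` field on the error object are assumptions about the error shape, not documented SDK behavior:

```typescript
// Retry fn with exponential backoff plus jitter, but only for
// rate-limit (429) errors; anything else is rethrown immediately.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 200
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      if (attempt >= maxRetries || error?.status !== 429) throw error;
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

The jitter term spreads retries out so many clients recovering at once do not re-synchronize into another burst.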
Network Timeout
Error Message:
Request timeout after 30000ms
Cause: Network connectivity or server latency issues.
Solution:
// Increase timeout
const client = new Client({ timeout: 60000 });
Examples
Quick Diagnostic Commands
# Check Snowflake status
curl -s https://status.snowflake.com
# Verify API connectivity
curl -I https://api.snowflake.com
# Check local configuration
env | grep SNOWFLAKE
Escalation Path
- Collect evidence with snowflake-debug-bundle
- Check Snowflake status page
- Contact support with request ID
Resources
Next Steps
For comprehensive debugging, see snowflake-debug-bundle.
Execute Snowflake primary workflow: Core Workflow A.
Snowflake Core Workflow A
Overview
Primary money-path workflow for Snowflake. This is the most common use case.
Prerequisites
- Completed snowflake-install-auth setup
- Understanding of Snowflake core concepts
- Valid API credentials configured
Instructions
Step 1: Initialize
// Step 1 implementation
Step 2: Execute
// Step 2 implementation
Step 3: Finalize
// Step 3 implementation
Output
- Completed Core Workflow A execution
- Expected results from Snowflake API
- Success confirmation or error details
Error Handling
| Error | Cause | Solution |
|---|---|---|
| Error 1 | Cause | Solution |
| Error 2 | Cause | Solution |
Examples
Complete Workflow
// Complete workflow example
Common Variations
- Variation 1: Description
- Variation 2: Description
Resources
Next Steps
For secondary workflow, see snowflake-core-workflow-b.
Execute Snowflake secondary workflow: Core Workflow B.
Snowflake Core Workflow B
Overview
Secondary workflow for Snowflake. Complements the primary workflow.
Prerequisites
- Completed snowflake-install-auth setup
- Familiarity with snowflake-core-workflow-a
- Valid API credentials configured
Instructions
Step 1: Setup
// Step 1 implementation
Step 2: Process
// Step 2 implementation
Step 3: Complete
// Step 3 implementation
Output
- Completed Core Workflow B execution
- Results from Snowflake API
- Success confirmation or error details
Error Handling
| Aspect | Workflow A | Workflow B |
|---|---|---|
| Use Case | Primary | Secondary |
| Complexity | Medium | Lower |
| Performance | Standard | Optimized |
Examples
Complete Workflow
// Complete workflow example
Error Recovery
// Error handling code
Resources
Next Steps
For common errors, see snowflake-common-errors.
Optimize Snowflake costs through tier selection, sampling, and usage monitoring.
Snowflake Cost Tuning
Overview
Optimize Snowflake costs through smart tier selection, sampling, and usage monitoring.
Prerequisites
- Access to Snowflake billing dashboard
- Understanding of current usage patterns
- Database for usage tracking (optional)
- Alerting system configured (optional)
Pricing Tiers
| Tier | Monthly Cost | Included | Overage |
|---|---|---|---|
| Free | $0 | 1,000 requests | N/A |
| Pro | $99 | 100,000 requests | $0.001/request |
| Enterprise | Custom | Unlimited | Volume discounts |
Cost Estimation
interface UsageEstimate {
requestsPerMonth: number;
tier: string;
estimatedCost: number;
recommendation?: string;
}
function estimateSnowflakeCost(requestsPerMonth: number): UsageEstimate {
if (requestsPerMonth <= 1000) {
return { requestsPerMonth, tier: 'Free', estimatedCost: 0 };
}
if (requestsPerMonth <= 100000) {
return { requestsPerMonth, tier: 'Pro', estimatedCost: 99 };
}
const proOverage = (requestsPerMonth - 100000) * 0.001;
const proCost = 99 + proOverage;
return {
requestsPerMonth,
tier: 'Pro (with overage)',
estimatedCost: proCost,
recommendation: proCost > 500
? 'Consider Enterprise tier for volume discounts'
: undefined,
};
}
Usage Monitoring
class SnowflakeUsageMonitor {
private requestCount = 0;
private bytesTransferred = 0;
private alertThreshold: number;
constructor(monthlyBudget: number) {
this.alertThreshold = monthlyBudget * 0.8; // 80% warning
}
track(request: { bytes: number }) {
this.requestCount++;
this.bytesTransferred += request.bytes;
if (this.estimatedCost() > this.alertThreshold) {
this.sendAlert('Approaching Snowflake budget limit');
}
}
estimatedCost(): number {
return estimateSnowflakeCost(this.requestCount).estimatedCost;
}
private sendAlert(message: string) {
// Send to Slack, email, PagerDuty, etc.
}
}
Cost Reduction Strategies
Step 1: Request Sampling
function shouldSample(samplingRate = 0.1): boolean {
return Math.random() < samplingRate;
}
// Use for non-critical telemetry
if (shouldSample(0.1)) { // 10% sample
await snowflakeClient.trackEvent(event);
}
Step 2: Batching Requests
// Instead of N individual calls
await Promise.all(ids.map(id => snowflakeClient.get(id)));
// Use batch endpoint (1 call)
await snowflakeClient.batchGet(ids);
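Batch endpoints typically cap how many IDs one call may carry (the cap is hypothetical here); a chunk helper keeps large ID lists under that limit:

```typescript
// Split a list into fixed-size chunks so each batch call
// stays under the (assumed) per-request ID limit.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// e.g. for (const ids of chunk(allIds, 100)) await snowflakeClient.batchGet(ids);
```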
Step 3: Caching (from P16)
- Cache frequently accessed data
Implement Snowflake PII handling, data retention, and GDPR/CCPA compliance patterns.
Snowflake Data Handling
Overview
Handle sensitive data correctly when integrating with Snowflake.
Prerequisites
- Understanding of GDPR/CCPA requirements
- Snowflake SDK with data export capabilities
- Database for audit logging
- Scheduled job infrastructure for cleanup
Data Classification
| Category | Examples | Handling |
|---|---|---|
| PII | Email, name, phone | Encrypt, minimize |
| Sensitive | API keys, tokens | Never log, rotate |
| Business | Usage metrics | Aggregate when possible |
| Public | Product names | Standard handling |
PII Detection
const PII_PATTERNS = [
{ type: 'email', regex: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g },
{ type: 'phone', regex: /\b\d{3}[-.]?\d{3}[-.]?\d{4}\b/g },
{ type: 'ssn', regex: /\b\d{3}-\d{2}-\d{4}\b/g },
{ type: 'credit_card', regex: /\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b/g },
];
function detectPII(text: string): { type: string; match: string }[] {
const findings: { type: string; match: string }[] = [];
for (const pattern of PII_PATTERNS) {
const matches = text.matchAll(pattern.regex);
for (const match of matches) {
findings.push({ type: pattern.type, match: match[0] });
}
}
return findings;
}
Data Redaction
function redactPII(data: Record<string, any>): Record<string, any> {
const sensitiveFields = ['email', 'phone', 'ssn', 'password', 'apiKey'];
const redacted = { ...data };
for (const field of sensitiveFields) {
if (redacted[field]) {
redacted[field] = '[REDACTED]';
}
}
return redacted;
}
// Use in logging
console.log('Snowflake request:', redactPII(requestData));
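`redactPII` only covers known field names; for free-form strings (log messages, stack traces) the regex patterns above can be applied directly. This sketch re-declares two of them so it stands alone:

```typescript
const TEXT_PII_PATTERNS = [
  { type: 'email', regex: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g },
  { type: 'ssn', regex: /\b\d{3}-\d{2}-\d{4}\b/g },
];

// Replace every pattern match with a placeholder before logging.
function redactText(text: string): string {
  let out = text;
  for (const { regex } of TEXT_PII_PATTERNS) {
    out = out.replace(regex, '[REDACTED]');
  }
  return out;
}
```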
Data Retention Policy
Retention Periods
| Data Type | Retention | Reason |
|---|---|---|
| API logs | 30 days | Debugging |
| Error logs | 90 days | Root cause analysis |
| Audit logs | 7 years | Compliance |
| PII | Until deletion request | GDPR/CCPA |
Automatic Cleanup
async function cleanupSnowflakeData(retentionDays: number): Promise<void> {
const cutoff = new Date();
cutoff.setDate(cutoff.getDate() - retentionDays);
await db.snowflakeLogs.deleteMany({
createdAt: { $lt: cutoff },
type: { $nin: ['audit', 'compliance'] },
});
}
// Schedule daily cleanup
cron.schedule('0 3 * * *', () => cleanupSnowflakeData(30));
Collect Snowflake debug evidence for support tickets and troubleshooting.
Snowflake Debug Bundle
Overview
Collect all necessary diagnostic information for Snowflake support tickets.
Prerequisites
- Snowflake SDK installed
- Access to application logs
- Permission to collect environment info
Instructions
Step 1: Create Debug Bundle Script
#!/bin/bash
# snowflake-debug-bundle.sh
BUNDLE_DIR="snowflake-debug-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BUNDLE_DIR"
echo "=== Snowflake Debug Bundle ===" > "$BUNDLE_DIR/summary.txt"
echo "Generated: $(date)" >> "$BUNDLE_DIR/summary.txt"
Step 2: Collect Environment Info
# Environment info
echo "--- Environment ---" >> "$BUNDLE_DIR/summary.txt"
node --version >> "$BUNDLE_DIR/summary.txt" 2>&1
npm --version >> "$BUNDLE_DIR/summary.txt" 2>&1
echo "SNOWFLAKE_API_KEY: ${SNOWFLAKE_API_KEY:+[SET]}" >> "$BUNDLE_DIR/summary.txt"
Step 3: Gather SDK and Logs
# SDK version
npm list @snowflake/sdk 2>/dev/null >> "$BUNDLE_DIR/summary.txt"
# Recent logs (redacted)
grep -i "snowflake" ~/.npm/_logs/*.log 2>/dev/null | tail -50 >> "$BUNDLE_DIR/logs.txt"
# Configuration (redacted - secrets masked)
echo "--- Config (redacted) ---" >> "$BUNDLE_DIR/summary.txt"
cat .env 2>/dev/null | sed 's/=.*/=***REDACTED***/' >> "$BUNDLE_DIR/config-redacted.txt"
# Network connectivity test
echo "--- Network Test ---" >> "$BUNDLE_DIR/summary.txt"
echo -n "API Health: " >> "$BUNDLE_DIR/summary.txt"
curl -s -o /dev/null -w "%{http_code}" https://api.snowflake.com/health >> "$BUNDLE_DIR/summary.txt"
echo "" >> "$BUNDLE_DIR/summary.txt"
Step 4: Package Bundle
tar -czf "$BUNDLE_DIR.tar.gz" "$BUNDLE_DIR"
echo "Bundle created: $BUNDLE_DIR.tar.gz"
Output
snowflake-debug-YYYYMMDD-HHMMSS.tar.gz archive containing:
- summary.txt - Environment and SDK info
- logs.txt - Recent redacted logs
- config-redacted.txt - Configuration (secrets removed)
Error Handling
| Item | Purpose | Included |
|---|---|---|
| Environment versions | Compatibility check | ✓ |
| SDK version | Version-specific bugs | ✓ |
| Error logs (redacted) | Root cause analysis | ✓ |
| Config (redacted) | Configuration issues | ✓ |
| Role | Permissions | Use Case |
|---|---|---|
| Admin | Full access | Platform administrators |
| Developer | Read/write, no delete | Active development |
| Viewer | Read-only | Stakeholders, auditors |
| Service | API access only | Automated systems |
Role Implementation
enum SnowflakeRole {
Admin = 'admin',
Developer = 'developer',
Viewer = 'viewer',
Service = 'service',
}
interface SnowflakePermissions {
read: boolean;
write: boolean;
delete: boolean;
admin: boolean;
}
const ROLE_PERMISSIONS: Record<SnowflakeRole, SnowflakePermissions> = {
admin: { read: true, write: true, delete: true, admin: true },
developer: { read: true, write: true, delete: false, admin: false },
viewer: { read: true, write: false, delete: false, admin: false },
service: { read: true, write: true, delete: false, admin: false },
};
function checkPermission(
role: SnowflakeRole,
action: keyof SnowflakePermissions
): boolean {
return ROLE_PERMISSIONS[role][action];
}
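A route-level guard can be built on the same permission table; this sketch re-declares a trimmed copy so it runs standalone (the error wording is illustrative):

```typescript
type Action = 'read' | 'write' | 'delete' | 'admin';

const PERMS: Record<string, Record<Action, boolean>> = {
  developer: { read: true, write: true, delete: false, admin: false },
  viewer: { read: true, write: false, delete: false, admin: false },
};

// Throw on a denied action so callers can map the error to HTTP 403.
function assertPermission(role: string, action: Action): void {
  if (!PERMS[role]?.[action]) {
    throw new Error(`role "${role}" is not allowed to ${action}`);
  }
}
```

Putting the check in one function keeps the role table the single source of truth instead of scattering boolean checks across handlers.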
SSO Integration
SAML Configuration
// Snowflake SAML setup
const samlConfig = {
entryPoint: 'https://idp.company.com/saml/sso',
issuer: 'https://snowflake.com/saml/metadata',
cert: process.env.SAML_CERT,
callbackUrl: 'https://app.yourcompany.com/auth/snowflake/callback',
};
// Map IdP groups to Snowflake roles
const groupRoleMapping: Record<string, SnowflakeRole> = {
'Engineering': SnowflakeRole.Developer,
'Platform-Admins': SnowflakeRole.Admin,
'Data-Team': SnowflakeRole.Viewer,
};
OAuth2/OIDC Integration
import { OAuth2Client } from '@snowflake/sdk';
const oauthClient = new OAuth2Client({
clientId: process.env.SNOWFLAKE_OAUTH_CLIENT_ID!,
clientSecret: process.env.SNOWFLAKE_OAUTH_CLIENT_SECRET!,
redirectUri: 'https://app.yourcompany.com/auth/snowflake/callback',
scopes: ['read', 'write'],
});
Organization Management
interface SnowflakeOrganization {
id: string;
name: string;
ssoEnabled: boolean;
enforceSso: boolean;
allowedDomains: string[];
defaultRole: SnowflakeRole;
}
async function createOrganization(
config: SnowflakeOrganization
): Promise<SnowflakeOrganization> {
// Provision the organization and apply SSO defaults
}
Create a minimal working Snowflake example.
Snowflake Hello World
Overview
Minimal working example demonstrating core Snowflake functionality.
Prerequisites
- Completed snowflake-install-auth setup
- Valid API credentials configured
- Development environment ready
Instructions
Step 1: Create Entry File
Create a new file for your hello world example.
Step 2: Import and Initialize Client
import { SnowflakeClient } from '@snowflake/sdk';
const client = new SnowflakeClient({
apiKey: process.env.SNOWFLAKE_API_KEY,
});
Step 3: Make Your First API Call
async function main() {
// Your first API call here
}
main().catch(console.error);
Output
- Working code file with Snowflake client initialization
- Successful API response confirming connection
- Console output showing:
Success! Your Snowflake connection is working.
Error Handling
| Error | Cause | Solution |
|---|---|---|
| Import Error | SDK not installed | Verify with npm list or pip show |
| Auth Error | Invalid credentials | Check environment variable is set |
| Timeout | Network issues | Increase timeout or check connectivity |
| Rate Limit | Too many requests | Wait and retry with exponential backoff |
Examples
TypeScript Example
import { SnowflakeClient } from '@snowflake/sdk';
const client = new SnowflakeClient({
apiKey: process.env.SNOWFLAKE_API_KEY,
});
async function main() {
// Your first API call here
}
main().catch(console.error);
Python Example
from snowflake import SnowflakeClient
client = SnowflakeClient()
# Your first API call here
Resources
Next Steps
Proceed to snowflake-local-dev-loop for development workflow setup.
Execute Snowflake incident response procedures with triage, mitigation, and postmortem.
Snowflake Incident Runbook
Overview
Rapid incident response procedures for Snowflake-related outages.
Prerequisites
- Access to Snowflake dashboard and status page
- kubectl access to production cluster
- Prometheus/Grafana access
- Communication channels (Slack, PagerDuty)
Severity Levels
| Level | Definition | Response Time | Examples |
|---|---|---|---|
| P1 | Complete outage | < 15 min | Snowflake API unreachable |
| P2 | Degraded service | < 1 hour | High latency, partial failures |
| P3 | Minor impact | < 4 hours | Webhook delays, non-critical errors |
| P4 | No user impact | Next business day | Monitoring gaps |
Quick Triage
# 1. Check Snowflake status
curl -s https://status.snowflake.com | jq
# 2. Check our integration health
curl -s https://api.yourapp.com/health | jq '.services.snowflake'
# 3. Check error rate (last 5 min)
curl -s 'localhost:9090/api/v1/query?query=rate(snowflake_errors_total[5m])'
# 4. Recent error logs
kubectl logs -l app=snowflake-integration --since=5m | grep -i error | tail -20
Decision Tree
Snowflake API returning errors?
├─ YES: Is status.snowflake.com showing incident?
│ ├─ YES → Wait for Snowflake to resolve. Enable fallback.
│ └─ NO → Our integration issue. Check credentials, config.
└─ NO: Is our service healthy?
├─ YES → Likely resolved or intermittent. Monitor.
└─ NO → Our infrastructure issue. Check pods, memory, network.
Immediate Actions by Error Type
401/403 - Authentication
# Verify API key is set
kubectl get secret snowflake-secrets -o jsonpath='{.data.api-key}' | base64 -d
# Check if key was rotated
# → Verify in Snowflake dashboard
# Remediation: Update secret and restart pods
kubectl create secret generic snowflake-secrets --from-literal=api-key=NEW_KEY --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart deployment/snowflake-integration
429 - Rate Limited
# Check rate limit headers
curl -v https://api.snowflake.com 2>&1 | grep -i rate
# Enable request queuing
kubectl set env deployment/snowflake-integration RATE_LIMIT_MODE=queue
# Long-term: Contact Snowflake for limit increase
500/503 - Snowflake Errors
# Enable graceful degradation
kubectl set env deployment/snowflake-integration SNOWFLAKE_FALLBACK=true
# Notify users of degraded service
# Update status page
# Monitor Snowflake status for resolution
Communication Templates
Internal (Slack)
🔴 P1 INCIDENT: Snowflake Integration
Install and configure Snowflake SDK/CLI authentication.
Snowflake Install & Auth
Overview
Set up Snowflake SDK/CLI and configure authentication credentials.
Prerequisites
- Node.js 18+ or Python 3.10+
- Package manager (npm, pnpm, or pip)
- Snowflake account with API access
- API key from Snowflake dashboard
Instructions
Step 1: Install SDK
# Node.js
npm install @snowflake/sdk
# Python
pip install snowflake
Step 2: Configure Authentication
# Set environment variable
export SNOWFLAKE_API_KEY="your-api-key"
# Or create .env file
echo 'SNOWFLAKE_API_KEY=your-api-key' >> .env
Step 3: Verify Connection
// Test connection code here
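One way to fill in the verification step, assuming the SDK's `healthCheck()` call used in the CI section; taking the client as a parameter keeps the check testable with a stub:

```typescript
interface HealthClient {
  healthCheck(): Promise<{ status: string }>;
}

// Returns true when the API answers with status 'ok'.
async function verifyConnection(client: HealthClient): Promise<boolean> {
  try {
    const result = await client.healthCheck();
    return result.status === 'ok';
  } catch {
    return false;
  }
}
```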
Output
- Installed SDK package in node_modules or site-packages
- Environment variable or .env file with API key
- Successful connection verification output
Error Handling
| Error | Cause | Solution |
|---|---|---|
| Invalid API Key | Incorrect or expired key | Verify key in Snowflake dashboard |
| Rate Limited | Exceeded quota | Check quota at https://docs.snowflake.com |
| Network Error | Firewall blocking | Ensure outbound HTTPS allowed |
| Module Not Found | Installation failed | Run npm install or pip install again |
Examples
TypeScript Setup
import { SnowflakeClient } from '@snowflake/sdk';
const client = new SnowflakeClient({
apiKey: process.env.SNOWFLAKE_API_KEY,
});
Python Setup
from snowflake import SnowflakeClient
client = SnowflakeClient(
api_key=os.environ.get('SNOWFLAKE_API_KEY')
)
Resources
Next Steps
After successful auth, proceed to snowflake-hello-world for your first API call.
Identify and avoid Snowflake anti-patterns and common integration mistakes.
Snowflake Known Pitfalls
Overview
Common mistakes and anti-patterns when integrating with Snowflake.
Prerequisites
- Access to Snowflake codebase for review
- Understanding of async/await patterns
- Knowledge of security best practices
- Familiarity with rate limiting concepts
Pitfall #1: Synchronous API Calls in Request Path
❌ Anti-Pattern
// User waits for Snowflake API call
app.post('/checkout', async (req, res) => {
const payment = await snowflakeClient.processPayment(req.body); // 2-5s latency
const notification = await snowflakeClient.sendEmail(payment); // Another 1-2s
res.json({ success: true }); // User waited 3-7s
});
✅ Better Approach
// Return immediately, process async
app.post('/checkout', async (req, res) => {
const jobId = await queue.enqueue('process-checkout', req.body);
res.json({ jobId, status: 'processing' }); // 50ms response
});
// Background job
async function processCheckout(data) {
const payment = await snowflakeClient.processPayment(data);
await snowflakeClient.sendEmail(payment);
}
Pitfall #2: Not Handling Rate Limits
❌ Anti-Pattern
// Blast requests, crash on 429
for (const item of items) {
await snowflakeClient.process(item); // Will hit rate limit
}
✅ Better Approach
import pLimit from 'p-limit';
const limit = pLimit(5); // Max 5 concurrent
const rateLimiter = new RateLimiter({ tokensPerSecond: 10 });
for (const item of items) {
await rateLimiter.acquire();
await limit(() => snowflakeClient.process(item));
}
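The `RateLimiter` used above is not shown; a token-bucket sketch with an injectable clock (class name, options, and defaults are all assumptions):

```typescript
class RateLimiter {
  private tokens: number;
  private last: number;

  constructor(
    private opts: { tokensPerSecond: number },
    private now: () => number = Date.now
  ) {
    this.tokens = opts.tokensPerSecond; // start with a full bucket
    this.last = this.now();
  }

  // Resolve once a token is available, refilling continuously over time.
  async acquire(): Promise<void> {
    for (;;) {
      const t = this.now();
      const elapsed = (t - this.last) / 1000;
      this.tokens = Math.min(
        this.opts.tokensPerSecond,
        this.tokens + elapsed * this.opts.tokensPerSecond
      );
      this.last = t;
      if (this.tokens >= 1) {
        this.tokens -= 1;
        return;
      }
      await new Promise((resolve) => setTimeout(resolve, 50));
    }
  }
}
```

Combined with `p-limit` as shown, this bounds both concurrency and sustained request rate.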
Pitfall #3: Leaking API Keys
❌ Anti-Pattern
// In frontend code (visible to users!)
const client = new SnowflakeClient({
apiKey: 'sk_live_ACTUAL_KEY_HERE', // Anyone can see this
});
// In git history
git commit -m "add API key" // Exposed forever
✅ Better Approach
// Backend only, environment variable
const client = new SnowflakeClient({
apiKey: process.env.SNOWFLAKE_API_KEY,
});
// Use .gitignore
.env
.env.local
.env.*.local
Pitfall #4: Ignoring Idempotency
❌ Anti-Pattern
// Network error on response = duplicate charge!
try {
await snowflakeClient.charge(order);
} catch (error) {
if (error.code === 'NETWORK_ERROR') {
await snowflakeClient.charge(order); // Charged twice!
}
}
✅ Better Approach
const idempotencyKey = `order-${order.id}`; // derived from the order, not the clock, so retries reuse the same key
await snowflakeClient.charge(order, {
idempotencyKey, // Safe to retry
});
Implement Snowflake load testing, auto-scaling, and capacity planning strategies.
Snowflake Load & Scale
Overview
Load testing, scaling strategies, and capacity planning for Snowflake integrations.
Prerequisites
- k6 load testing tool installed
- Kubernetes cluster with HPA configured
- Prometheus for metrics collection
- Test environment API keys
Load Testing with k6
Basic Load Test
// snowflake-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';
export const options = {
stages: [
{ duration: '2m', target: 10 }, // Ramp up
{ duration: '5m', target: 10 }, // Steady state
{ duration: '2m', target: 50 }, // Ramp to peak
{ duration: '5m', target: 50 }, // Stress test
{ duration: '2m', target: 0 }, // Ramp down
],
thresholds: {
http_req_duration: ['p(95)<500'],
http_req_failed: ['rate<0.01'],
},
};
export default function () {
const response = http.post(
'https://api.snowflake.com/v1/resource',
JSON.stringify({ test: true }),
{
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${__ENV.SNOWFLAKE_API_KEY}`,
},
}
);
check(response, {
'status is 200': (r) => r.status === 200,
'latency < 500ms': (r) => r.timings.duration < 500,
});
sleep(1);
}
Run Load Test
# Install k6
brew install k6 # macOS
# or: sudo apt install k6 # Linux
# Run test
k6 run --env SNOWFLAKE_API_KEY=${SNOWFLAKE_API_KEY} snowflake-load-test.js
# Run with output to InfluxDB
k6 run --out influxdb=http://localhost:8086/k6 snowflake-load-test.js
Scaling Patterns
Horizontal Scaling
# kubernetes HPA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: snowflake-integration-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: snowflake-integration
minReplicas: 2
maxReplicas: 20
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Pods
pods:
metric:
name: snowflake_queue_depth
target:
type: AverageValue
averageValue: 100
Connection Pooling
import { createPool } from 'generic-pool';
const snowflakePool = createPool(
{
create: async () =>
new SnowflakeClient({
apiKey: process.env.SNOWFLAKE_API_KEY!,
}),
destroy: async (client) => {
await client.close();
},
},
{
max: 20,
min: 5,
idleTimeoutMillis: 30000,
}
);
async function withSnowflakeClient<T>(
fn: (client: SnowflakeClient) => Promise<T>
): Promise<T> {
const client = await snowflakePool.acquire();
try {
return await fn(client);
} finally {
await snowflakePool.release(client);
}
}
Configure Snowflake local development with hot reload and testing.
Snowflake Local Dev Loop
Overview
Set up a fast, reproducible local development workflow for Snowflake.
Prerequisites
- Completed snowflake-install-auth setup
- Node.js 18+ with npm/pnpm
- Code editor with TypeScript support
- Git for version control
Instructions
Step 1: Create Project Structure
my-snowflake-project/
├── src/
│ ├── snowflake/
│ │ ├── client.ts # Snowflake client wrapper
│ │ ├── config.ts # Configuration management
│ │ └── utils.ts # Helper functions
│ └── index.ts
├── tests/
│ └── snowflake.test.ts
├── .env.local # Local secrets (git-ignored)
├── .env.example # Template for team
└── package.json
Step 2: Configure Environment
# Copy environment template
cp .env.example .env.local
# Install dependencies
npm install
# Start development server
npm run dev
Step 3: Setup Hot Reload
{
"scripts": {
"dev": "tsx watch src/index.ts",
"test": "vitest",
"test:watch": "vitest --watch"
}
}
Step 4: Configure Testing
import { describe, it, expect, vi } from 'vitest';
import { SnowflakeClient } from '../src/snowflake/client';
describe('Snowflake Client', () => {
it('should initialize with API key', () => {
const client = new SnowflakeClient({ apiKey: 'test-key' });
expect(client).toBeDefined();
});
});
Output
- Working development environment with hot reload
- Configured test suite with mocking
- Environment variable management
- Fast iteration cycle for Snowflake development
Error Handling
| Error | Cause | Solution |
|---|---|---|
| Module not found | Missing dependency | Run npm install |
| Port in use | Another process | Kill process or change port |
| Env not loaded | Missing .env.local | Copy from .env.example |
| Test timeout | Slow network | Increase test timeout |
Examples
Mock Snowflake Responses
vi.mock('@snowflake/sdk', () => ({
SnowflakeClient: vi.fn().mockImplementation(() => ({
// Mock methods here
})),
}));
Debug Mode
# Enable verbose logging
DEBUG=snowflake:* npm run dev
Resources
- Snowflake SDK Reference
- Vitest Documentation
Execute Snowflake major re-architecture and migration strategies with strangler fig pattern.
Snowflake Migration Deep Dive
Overview
Comprehensive guide for migrating to or from Snowflake, or major version upgrades.
Prerequisites
- Current system documentation
- Snowflake SDK installed
- Feature flag infrastructure
- Rollback strategy tested
Migration Types
| Type | Complexity | Duration | Risk |
|---|---|---|---|
| Fresh install | Low | Days | Low |
| From competitor | Medium | Weeks | Medium |
| Major version | Medium | Weeks | Medium |
| Full replatform | High | Months | High |
Pre-Migration Assessment
Step 1: Current State Analysis
# Document current implementation
find . -name "*.ts" -o -name "*.py" | xargs grep -l "snowflake" > snowflake-files.txt
# Count integration points
wc -l snowflake-files.txt
# Identify dependencies
npm list | grep snowflake
pip freeze | grep snowflake
Step 2: Data Inventory
interface MigrationInventory {
  dataTypes: string[];
  recordCounts: Record<string, number>;
  dependencies: string[];
  integrationPoints: string[];
  customizations: string[];
}
async function assessSnowflakeMigration(): Promise<MigrationInventory> {
  return {
    dataTypes: await getDataTypes(),
    recordCounts: await getRecordCounts(),
    dependencies: await analyzeDependencies(),
    integrationPoints: await findIntegrationPoints(),
    customizations: await documentCustomizations(),
  };
}
Migration Strategy: Strangler Fig Pattern
Phase 1: Parallel Run
┌─────────────┐      ┌─────────────┐
│    Old      │      │    New      │
│   System    │ ──▶  │  Snowflake  │
│   (100%)    │      │    (0%)     │
└─────────────┘      └─────────────┘

Phase 2: Gradual Shift
┌─────────────┐      ┌─────────────┐
│    Old      │      │    New      │
│   (50%)     │ ──▶  │   (50%)     │
└─────────────┘      └─────────────┘

Phase 3: Complete
┌─────────────┐      ┌─────────────┐
│    Old      │      │    New      │
│    (0%)     │ ──▶  │   (100%)    │
└─────────────┘      └─────────────┘
Implementation Plan
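The gradual-shift phase needs a deterministic way to send a fixed percentage of traffic to the new system. One sketch (function names are my own, not part of any SDK): hash a stable key such as a tenant id, so a given caller always lands on the same side and can be shifted by simply raising the percentage.

```typescript
// FNV-1a 32-bit hash of a stable routing key, mapped to 0–99.
function hashToPercent(key: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % 100;
}

// True when this tenant should hit the new Snowflake-backed path.
function routeToNewSystem(tenantId: string, rolloutPercent: number): boolean {
  return hashToPercent(tenantId) < rolloutPercent;
}
```

Raising `rolloutPercent` from 0 → 50 → 100 walks through the three phases above without moving any tenant back and forth.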
Phase 1: Setup (Week 1-2)
# Install Snowflake SDK
npm install @snowflake/sdk
# Configure credentials
cp .env.example .env.snowflake
# Edit with new credentials
# Verify connectivity
node -e "require('@snowflake/sdk').ping()"
Phase 2: Adapter Layer (Week 3-4)
// src/adapters/snowflake.ts
interface ServiceAdapter {
  create(data: CreateInput): Promise<Resource>;
  read(id: string): Promise<Resource>;
  update(id: string, data: UpdateInput): Promise<Resource>;
}
Configure Snowflake across development, staging, and production environments.
Snowflake Multi-Environment Setup
Overview
Configure Snowflake across development, staging, and production environments.
Prerequisites
- Separate Snowflake accounts or API keys per environment
- Secret management solution (Vault, AWS Secrets Manager, etc.)
- CI/CD pipeline with environment variables
- Environment detection in application
Environment Strategy
| Environment | Purpose | API Keys | Data |
|---|---|---|---|
| Development | Local dev | Test keys | Sandbox |
| Staging | Pre-prod validation | Staging keys | Test data |
| Production | Live traffic | Production keys | Real data |
Configuration Structure
config/
├── snowflake/
│ ├── base.json # Shared config
│ ├── development.json # Dev overrides
│ ├── staging.json # Staging overrides
│ └── production.json # Prod overrides
base.json
{
"timeout": 30000,
"retries": 3,
"cache": {
"enabled": true,
"ttlSeconds": 60
}
}
development.json
{
"apiKey": "${SNOWFLAKE_API_KEY}",
"baseUrl": "https://api-sandbox.snowflake.com",
"debug": true,
"cache": {
"enabled": false
}
}
staging.json
{
"apiKey": "${SNOWFLAKE_API_KEY_STAGING}",
"baseUrl": "https://api-staging.snowflake.com",
"debug": false
}
production.json
{
"apiKey": "${SNOWFLAKE_API_KEY_PROD}",
"baseUrl": "https://api.snowflake.com",
"debug": false,
"retries": 5
}
Environment Detection
// src/snowflake/config.ts
import baseConfig from '../../config/snowflake/base.json';
type Environment = 'development' | 'staging' | 'production';
function detectEnvironment(): Environment {
const env = process.env.NODE_ENV || 'development';
const validEnvs: Environment[] = ['development', 'staging', 'production'];
return validEnvs.includes(env as Environment)
? (env as Environment)
: 'development';
}
export function getSnowflakeConfig() {
const env = detectEnvironment();
const envConfig = require(`../../config/snowflake/${env}.json`);
return {
...baseConfig,
...envConfig,
environment: env,
};
}
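The JSON files above embed `${VAR}` placeholders, which `require()` will not expand on its own. A sketch of the interpolation step, assuming the `${VAR}` syntax shown (the helper name is illustrative):

```typescript
// Resolve "${VAR}" placeholders in a loaded config object against an env map.
function resolvePlaceholders<T>(
  config: T,
  env: Record<string, string | undefined> = process.env
): T {
  const json = JSON.stringify(config).replace(
    /\$\{([A-Z0-9_]+)\}/g,
    (_m: string, name: string) => {
      const value = env[name];
      if (value === undefined) {
        throw new Error(`Missing environment variable: ${name}`);
      }
      // Escape the value for its JSON string context
      return JSON.stringify(value).slice(1, -1);
    }
  );
  return JSON.parse(json) as T;
}
```

Failing fast on a missing variable surfaces misconfigured environments at startup rather than as a 401 at request time.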
Secret Management by Environment
Local Development
# .env.local (git-ignored)
Set up comprehensive observability for Snowflake integrations with metrics, traces, and alerts.
Snowflake Observability
Overview
Set up comprehensive observability for Snowflake integrations.
Prerequisites
- Prometheus or compatible metrics backend
- OpenTelemetry SDK installed
- Grafana or similar dashboarding tool
- AlertManager configured
Metrics Collection
Key Metrics
| Metric | Type | Description |
|---|---|---|
| snowflake_requests_total | Counter | Total API requests |
| snowflake_request_duration_seconds | Histogram | Request latency |
| snowflake_errors_total | Counter | Error count by type |
| snowflake_rate_limit_remaining | Gauge | Rate limit headroom |
Prometheus Metrics
import { Registry, Counter, Histogram, Gauge } from 'prom-client';
const registry = new Registry();
const requestCounter = new Counter({
name: 'snowflake_requests_total',
help: 'Total Snowflake API requests',
labelNames: ['method', 'status'],
registers: [registry],
});
const requestDuration = new Histogram({
name: 'snowflake_request_duration_seconds',
help: 'Snowflake request duration',
labelNames: ['method'],
buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5],
registers: [registry],
});
const errorCounter = new Counter({
name: 'snowflake_errors_total',
help: 'Snowflake errors by type',
labelNames: ['error_type'],
registers: [registry],
});
Instrumented Client
async function instrumentedRequest<T>(
method: string,
operation: () => Promise<T>
): Promise<T> {
const timer = requestDuration.startTimer({ method });
try {
const result = await operation();
requestCounter.inc({ method, status: 'success' });
return result;
} catch (error: any) {
requestCounter.inc({ method, status: 'error' });
errorCounter.inc({ error_type: error.code || 'unknown' });
throw error;
} finally {
timer();
}
}
Distributed Tracing
OpenTelemetry Setup
import { trace, SpanStatusCode } from '@opentelemetry/api';
const tracer = trace.getTracer('snowflake-client');
async function tracedSnowflakeCall<T>(
operationName: string,
operation: () => Promise<T>
): Promise<T> {
return tracer.startActiveSpan(`snowflake.${operationName}`, async (span) => {
try {
const result = await operation();
span.setStatus({ code: SpanStatusCode.OK });
return result;
} catch (error: any) {
span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
span.recordException(error);
throw error;
} finally {
span.end();
}
});
}
Optimize Snowflake API performance with caching, batching, and connection pooling.
Snowflake Performance Tuning
Overview
Optimize Snowflake API performance with caching, batching, and connection pooling.
Prerequisites
- Snowflake SDK installed
- Understanding of async patterns
- Redis or in-memory cache available (optional)
- Performance monitoring in place
Latency Benchmarks
| Operation | P50 | P95 | P99 |
|---|---|---|---|
| Read | 50ms | 150ms | 300ms |
| Write | 100ms | 250ms | 500ms |
| List | 75ms | 200ms | 400ms |
Caching Strategy
Response Caching
import { LRUCache } from 'lru-cache';
const cache = new LRUCache<string, any>({
max: 1000,
ttl: 60000, // 1 minute
updateAgeOnGet: true,
});
async function cachedSnowflakeRequest<T>(
key: string,
fetcher: () => Promise<T>,
ttl?: number
): Promise<T> {
const cached = cache.get(key);
if (cached) return cached as T;
const result = await fetcher();
cache.set(key, result, { ttl });
return result;
}
Redis Caching (Distributed)
import Redis from 'ioredis';
const redis = new Redis(process.env.REDIS_URL);
async function cachedWithRedis<T>(
key: string,
fetcher: () => Promise<T>,
ttlSeconds = 60
): Promise<T> {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
const result = await fetcher();
await redis.setex(key, ttlSeconds, JSON.stringify(result));
return result;
}
Request Batching
import DataLoader from 'dataloader';
const snowflakeLoader = new DataLoader<string, any>(
async (ids) => {
// Batch fetch from Snowflake
const results = await snowflakeClient.batchGet(ids);
return ids.map(id => results.find(r => r.id === id) || null);
},
{
maxBatchSize: 100,
batchScheduleFn: callback => setTimeout(callback, 10),
}
);
// Usage - automatically batched
const [item1, item2, item3] = await Promise.all([
snowflakeLoader.load('id-1'),
snowflakeLoader.load('id-2'),
snowflakeLoader.load('id-3'),
]);
Connection Optimization
import { Agent } from 'https';
// Keep-alive connection pooling
const agent = new Agent({
keepAlive: true,
maxSockets: 10,
maxFreeSockets: 5,
timeout: 30000,
});
const client = new SnowflakeClient({
apiKey: process.env.SNOWFLAKE_API_KEY!,
httpAgent: agent,
});
Pagination Optimization
async function* paginatedSnowflakeList<T>(
  fetcher: (cursor?: string) => Promise<{ data: T[]; nextCursor?: string }>
): AsyncGenerator<T> {
  let cursor: string | undefined;
  do {
    const page = await fetcher(cursor);
    yield* page.data;
    cursor = page.nextCursor;
  } while (cursor);
}
Implement Snowflake lint rules, policy enforcement, and automated guardrails.
Snowflake Policy & Guardrails
Overview
Automated policy enforcement and guardrails for Snowflake integrations.
Prerequisites
- ESLint configured in project
- Pre-commit hooks infrastructure
- CI/CD pipeline with policy checks
- TypeScript for type enforcement
ESLint Rules
Custom Snowflake Plugin
// eslint-plugin-snowflake/rules/no-hardcoded-keys.js
module.exports = {
meta: {
type: 'problem',
docs: {
description: 'Disallow hardcoded Snowflake API keys',
},
fixable: 'code',
},
create(context) {
return {
Literal(node) {
if (typeof node.value === 'string') {
if (node.value.match(/^sk_(live|test)_[a-zA-Z0-9]{24,}/)) {
context.report({
node,
message: 'Hardcoded Snowflake API key detected',
});
}
}
},
};
},
};
ESLint Configuration
// .eslintrc.js
module.exports = {
plugins: ['snowflake'],
rules: {
'snowflake/no-hardcoded-keys': 'error',
'snowflake/require-error-handling': 'warn',
'snowflake/use-typed-client': 'warn',
},
};
Pre-Commit Hooks
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: snowflake-secrets-check
        name: Check for Snowflake secrets
        entry: bash -c 'git diff --cached --name-only | xargs grep -l "sk_live_" && exit 1 || exit 0'
        language: system
        pass_filenames: false
      - id: snowflake-config-validate
        name: Validate Snowflake configuration
        entry: node scripts/validate-snowflake-config.js
        language: node
        files: '\.snowflake\.json$'
TypeScript Strict Patterns
// Enforce typed configuration
interface SnowflakeStrictConfig {
apiKey: string; // Required
environment: 'development' | 'staging' | 'production'; // Enum
timeout: number; // Required number, not optional
retries: number;
}
// Disallow `any` in Snowflake code — it bypasses the config contract:
// const client = new SnowflakeClient(config as any); // forbidden
// Prefer compile-time validation:
const client = new SnowflakeClient(config satisfies SnowflakeStrictConfig);
Architecture Decision Records
ADR Template
# ADR-001: Snowflake Client Initialization
## Status
Accepted
## Context
We need to decide how to initialize the Snowflake client across our application.
## Decision
We will use the singleton pattern with lazy initialization.
## Consequences
- Pro: Single client instance, connection reuse
- Pro: Easy to mock in tests
- Con: Global state requires careful lifecycle management
Execute Snowflake production deployment checklist and rollback procedures.
Snowflake Production Checklist
Overview
Complete checklist for deploying Snowflake integrations to production.
Prerequisites
- Staging environment tested and verified
- Production API keys available
- Deployment pipeline configured
- Monitoring and alerting ready
Instructions
Step 1: Pre-Deployment Configuration
- [ ] Production API keys in secure vault
- [ ] Environment variables set in deployment platform
- [ ] API key scopes are minimal (least privilege)
- [ ] Webhook endpoints configured with HTTPS
- [ ] Webhook secrets stored securely
Step 2: Code Quality Verification
- [ ] All tests passing (npm test)
- [ ] No hardcoded credentials
- [ ] Error handling covers all Snowflake error types
- [ ] Rate limiting/backoff implemented
- [ ] Logging is production-appropriate
Step 3: Infrastructure Setup
- [ ] Health check endpoint includes Snowflake connectivity
- [ ] Monitoring/alerting configured
- [ ] Circuit breaker pattern implemented
- [ ] Graceful degradation configured
Step 4: Documentation Requirements
- [ ] Incident runbook created
- [ ] Key rotation procedure documented
- [ ] Rollback procedure documented
- [ ] On-call escalation path defined
Step 5: Deploy with Gradual Rollout
# Pre-flight checks
curl -f https://staging.example.com/health
curl -s https://status.snowflake.com
# Gradual rollout - start with canary (10%)
kubectl apply -f k8s/production.yaml
kubectl set image deployment/snowflake-integration app=image:new --record
kubectl rollout pause deployment/snowflake-integration
# Monitor canary traffic for 10 minutes
sleep 600
# Check error rates and latency before continuing
# If healthy, continue rollout to 50%
kubectl rollout resume deployment/snowflake-integration
kubectl rollout pause deployment/snowflake-integration
sleep 300
# Complete rollout to 100%
kubectl rollout resume deployment/snowflake-integration
kubectl rollout status deployment/snowflake-integration
Output
- Deployed Snowflake integration
- Health checks passing
- Monitoring active
- Rollback procedure documented
Error Handling
| Alert | Condition | Severity |
|---|---|---|
| API Down | 5xx errors > 10/min | P1 |
| High Latency | p99 > 5000ms | P2 |
| Rate Limited | 429 errors > 5/min | P2 |
| Auth Failures | 401/403 errors > 0 | P1 |
Examples
Health Check Implementation
async function healthCheck(): Promise<{ status: string; snowflake: boolean }> {
  try {
    await snowflakeClient.ping(); // any cheap authenticated read works here
    return { status: 'ok', snowflake: true };
  } catch {
    return { status: 'degraded', snowflake: false };
  }
}
Implement Snowflake rate limiting, backoff, and idempotency patterns.
Snowflake Rate Limits
Overview
Handle Snowflake rate limits gracefully with exponential backoff and idempotency.
Prerequisites
- Snowflake SDK installed
- Understanding of async/await patterns
- Access to rate limit headers
Instructions
Step 1: Understand Rate Limit Tiers
| Tier | Requests/min | Requests/day | Burst |
|---|---|---|---|
| Free | 60 | 1,000 | 10 |
| Pro | 300 | 10,000 | 50 |
| Enterprise | 1,000 | 100,000 | 200 |
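A client-side token bucket can keep you under these limits before the API ever returns a 429. A minimal sketch — the tier numbers come from the table above, while the class itself is a generic pattern with an injectable clock for testing:

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,        // burst size
    private refillPerSecond: number, // sustained rate (requests/min ÷ 60)
    private now: () => number = Date.now
  ) {
    this.tokens = capacity;
    this.lastRefill = this.now();
  }

  // Returns true if a request may proceed now; false means back off.
  tryAcquire(): boolean {
    const elapsedSec = (this.now() - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
    this.lastRefill = this.now();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Free tier: 60 requests/min sustained, burst of 10
const freeTierLimiter = new TokenBucket(10, 60 / 60);
```

Pair this with the backoff in Step 2: the bucket avoids most 429s, and backoff handles the ones that slip through.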
Step 2: Implement Exponential Backoff with Jitter
async function withExponentialBackoff<T>(
operation: () => Promise<T>,
config = { maxRetries: 5, baseDelayMs: 1000, maxDelayMs: 32000, jitterMs: 500 }
): Promise<T> {
for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
try {
return await operation();
} catch (error: any) {
if (attempt === config.maxRetries) throw error;
const status = error.status || error.response?.status;
if (status !== 429 && (status < 500 || status >= 600)) throw error;
// Exponential delay with jitter to prevent thundering herd
const exponentialDelay = config.baseDelayMs * Math.pow(2, attempt);
const jitter = Math.random() * config.jitterMs;
const delay = Math.min(exponentialDelay + jitter, config.maxDelayMs);
console.log(`Rate limited. Retrying in ${delay.toFixed(0)}ms...`);
await new Promise(r => setTimeout(r, delay));
}
}
throw new Error('Unreachable');
}
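When the API includes a Retry-After header on 429 responses, honoring it beats guessing a delay. A small parser for both RFC 9110 forms, delta-seconds and HTTP-date (whether Snowflake sends this header is an assumption to check against the API docs):

```typescript
// Parse a Retry-After header value into a wait in milliseconds.
// Returns undefined when the header is absent or unparseable,
// in which case callers fall back to exponential backoff.
function parseRetryAfterMs(
  header: string | undefined,
  now: () => number = Date.now
): number | undefined {
  if (!header) return undefined;
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000); // delta-seconds form
  const dateMs = Date.parse(header); // HTTP-date form
  if (!Number.isNaN(dateMs)) return Math.max(0, dateMs - now());
  return undefined;
}
```

Inside `withExponentialBackoff`, a defined result from this parser would override the computed `delay` for 429 responses.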
Step 3: Add Idempotency Keys
import { v4 as uuidv4 } from 'uuid';
import crypto from 'crypto';
// Generate deterministic key from operation params (for safe retries)
function generateIdempotencyKey(operation: string, params: Record<string, any>): string {
const data = JSON.stringify({ operation, params });
return crypto.createHash('sha256').update(data).digest('hex');
}
async function idempotentRequest<T>(
client: SnowflakeClient,
params: Record<string, any>,
idempotencyKey?: string // Pass existing key for retries
): Promise<T> {
// Use provided key (for retries) or generate deterministic key from params
const key = idempotencyKey || generateIdempotencyKey(params.method || 'POST', params);
return client.request({
...params,
headers: { 'Idempotency-Key': key, ...params.headers },
});
}
Output
- Reliable API calls with automatic retry
- Idempotent requests preventing duplicates
- Rate limit headers properly handled
Error Handling
| Header | Description |
|---|---|
Implement Snowflake reference architecture with best-practice project layout.
Snowflake Reference Architecture
Overview
Production-ready architecture patterns for Snowflake integrations.
Prerequisites
Project Structure
Layer Architecture
Key Components
Step 1: Client Wrapper
Step 2: Error Boundary
Implement Snowflake reliability patterns including circuit breakers, idempotency, and graceful degradation.
Snowflake Reliability Patterns
Overview
Production-grade reliability patterns for Snowflake integrations.
Prerequisites
Circuit Breaker
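A minimal state-machine sketch of the pattern (closed → open → half-open); the thresholds and injectable clock are illustrative choices, not SDK API:

```typescript
type CircuitState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private state: CircuitState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 5,     // consecutive failures before opening
    private resetTimeoutMs = 30_000,  // how long to stay open
    private now: () => number = Date.now // injectable clock for testing
  ) {}

  getState(): CircuitState {
    // After the reset timeout, allow a single trial request (half-open)
    if (this.state === 'open' && this.now() - this.openedAt >= this.resetTimeoutMs) {
      this.state = 'half-open';
    }
    return this.state;
  }

  canRequest(): boolean {
    return this.getState() !== 'open';
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = 'closed';
  }

  recordFailure(): void {
    this.failures += 1;
    // A failed trial in half-open, or too many failures, re-opens the circuit
    if (this.state === 'half-open' || this.failures >= this.failureThreshold) {
      this.state = 'open';
      this.openedAt = this.now();
    }
  }
}
```

Wrap each Snowflake call: check `canRequest()` first, then `recordSuccess()` / `recordFailure()` after the call; when open, fail fast or serve a cached/degraded response.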
Idempotency Keys
Bulkhead Pattern
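The bulkhead caps concurrent in-flight Snowflake calls so a slow dependency cannot exhaust every worker. A dependency-free sketch (class and option names are my own):

```typescript
class Bulkhead {
  private active = 0;
  private queue: Array<() => void> = [];

  constructor(private maxConcurrent: number, private maxQueue = Infinity) {}

  get inFlight(): number {
    return this.active;
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.maxConcurrent) {
      if (this.queue.length >= this.maxQueue) {
        throw new Error('Bulkhead queue full — rejecting to protect the service');
      }
      // Wait for a slot; the releasing task hands its slot to us directly.
      await new Promise<void>((resolve) => this.queue.push(resolve));
    } else {
      this.active += 1;
    }
    try {
      return await task();
    } finally {
      const next = this.queue.shift();
      if (next) next(); // slot passes to the next waiter; active count unchanged
      else this.active -= 1;
    }
  }
}
```

A small `maxQueue` is usually better than `Infinity` in production: shedding load early keeps latency bounded instead of letting a backlog grow.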
Timeout Hierarchy
Apply production-ready Snowflake SDK patterns for TypeScript and Python.
Snowflake SDK Patterns
Overview
Production-ready patterns for Snowflake SDK usage in TypeScript and Python.
Prerequisites
Instructions
Step 1: Implement Singleton Pattern (Recommended)
Step 2: Add Error Handling Wrapper
Step 3: Implement Retry Logic
Output
Error Handling
Examples
Factory Pattern (Multi-tenant)
Apply Snowflake security best practices for secrets and access control.
Snowflake Security Basics
Overview
Security best practices for Snowflake API keys, tokens, and access control.
Prerequisites
Instructions
Step 1: Configure Environment Variables
Step 2: Implement Secret Rotation
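One zero-downtime approach is to keep the current and next keys live simultaneously during the overlap window. A sketch (the `_NEXT` environment variable name is an assumption for illustration):

```typescript
// Return API keys in preference order for zero-downtime rotation.
// During rotation both keys are configured; callers try them in order
// and fall back to the next key when the old one starts returning 401.
function getActiveApiKeys(
  env: Record<string, string | undefined> = process.env
): string[] {
  const keys = [env.SNOWFLAKE_API_KEY, env.SNOWFLAKE_API_KEY_NEXT].filter(
    (k): k is string => typeof k === 'string' && k.length > 0
  );
  if (keys.length === 0) throw new Error('No Snowflake API key configured');
  return keys;
}
```

After the cutover, the new key is promoted to `SNOWFLAKE_API_KEY` and the `_NEXT` slot is cleared until the next rotation.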
Step 3: Apply Least Privilege
Output
Error Handling
Examples
Service Account Pattern
Webhook Signature Verification
Security Checklist
Audit Logging
Analyze, plan, and execute Snowflake SDK upgrades with breaking change detection.
Snowflake Upgrade & Migration
Overview
Guide for upgrading Snowflake SDK versions and handling breaking changes.
Prerequisites
Instructions
Step 1: Check Current Version
Step 2: Review Changelog
Step 3: Create Upgrade Branch
Step 4: Handle Breaking Changes
Update import statements, configuration, and method signatures as needed.
Output
Error Handling
Examples
Import Changes
Configuration Changes
Rollback Procedure
Deprecation Handling
Resources
Next Steps
For CI integration during upgrades, see the related CI skill.
Implement Snowflake webhook signature validation and event handling.
Snowflake Webhooks & Events
Overview
Securely handle Snowflake webhooks with signature validation and replay protection.
Prerequisites
Webhook Endpoint Setup
Express.js
Signature Verification
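Assuming an HMAC-SHA256 hex signature header (verify the exact header name and signing scheme against the provider's webhook docs), verification with a constant-time comparison looks like:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Verify an HMAC-SHA256 webhook signature against the raw request body.
function verifyWebhookSignature(
  payload: string,
  signatureHex: string,
  secret: string
): boolean {
  const expected = createHmac('sha256', secret).update(payload).digest();
  const received = Buffer.from(signatureHex, 'hex');
  // timingSafeEqual throws on length mismatch, so check lengths first
  if (received.length !== expected.length) return false;
  return timingSafeEqual(expected, received);
}
```

Always verify against the raw body bytes, before any JSON parsing middleware re-serializes the payload, and reject with 401 on mismatch.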
Event Handler Pattern
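A small dispatch map keeps one handler per event type and acknowledges unknown events instead of failing, so upstream can add new event types without triggering webhook retries (event names here are illustrative):

```typescript
type WebhookEvent = { type: string; data: unknown };
type Handler = (event: WebhookEvent) => Promise<void> | void;

const handlers = new Map<string, Handler>();

// Register a handler for one event type.
function on(eventType: string, handler: Handler): void {
  handlers.set(eventType, handler);
}

// Dispatch an event; returns false for unknown types (ack and ignore).
async function dispatch(event: WebhookEvent): Promise<boolean> {
  const handler = handlers.get(event.type);
  if (!handler) return false;
  await handler(event);
  return true;
}
```

In the endpoint, call `dispatch` after signature verification and respond 200 regardless of whether the type was known; only handler errors should produce a retryable status.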
Tags
snowflake, saas, sdk, integration