palantir-observability

Set up comprehensive observability for Palantir integrations with metrics, traces, and alerts. Use when implementing monitoring for Palantir operations, setting up dashboards, or configuring alerting for Palantir integration health. Trigger with phrases like "palantir monitoring", "palantir metrics", "palantir observability", "monitor palantir", "palantir alerts", "palantir tracing".

Allowed Tools

Read, Write, Edit

Provided by Plugin

palantir-pack

Claude Code skill pack for Palantir (24 skills)

saas packs v1.0.0

Installation

This skill is included in the palantir-pack plugin:

/plugin install palantir-pack@claude-code-plugins-plus

Instructions

Palantir Observability

Overview

Set up comprehensive observability for Palantir integrations: Prometheus metrics, OpenTelemetry traces, structured logs, and alerting on error rate and latency.

Prerequisites

  • Prometheus or compatible metrics backend
  • OpenTelemetry SDK installed
  • Grafana or similar dashboarding tool
  • AlertManager configured

Metrics Collection

Key Metrics

Metric                              Type       Description
palantir_requests_total             Counter    Total API requests
palantir_request_duration_seconds   Histogram  Request latency
palantir_errors_total               Counter    Error count by type
palantir_rate_limit_remaining       Gauge      Rate limit headroom

Prometheus Metrics


import { Registry, Counter, Histogram, Gauge } from 'prom-client';

const registry = new Registry();

const requestCounter = new Counter({
  name: 'palantir_requests_total',
  help: 'Total Palantir API requests',
  labelNames: ['method', 'status'],
  registers: [registry],
});

const requestDuration = new Histogram({
  name: 'palantir_request_duration_seconds',
  help: 'Palantir request duration',
  labelNames: ['method'],
  buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5],
  registers: [registry],
});

const errorCounter = new Counter({
  name: 'palantir_errors_total',
  help: 'Palantir errors by type',
  labelNames: ['error_type'],
  registers: [registry],
});
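
The table above also lists a rate-limit gauge, which the snippet omits. A minimal definition follows; how it gets populated depends on your client, e.g. from a rate-limit response header (the header name below is an assumption):

const rateLimitGauge = new Gauge({
  name: 'palantir_rate_limit_remaining',
  help: 'Remaining Palantir API rate limit',
  registers: [registry],
});

// Example update, assuming the API returns an X-RateLimit-Remaining header:
// rateLimitGauge.set(Number(response.headers['x-ratelimit-remaining']));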

Instrumented Client


async function instrumentedRequest<T>(
  method: string,
  operation: () => Promise<T>
): Promise<T> {
  const timer = requestDuration.startTimer({ method });

  try {
    const result = await operation();
    requestCounter.inc({ method, status: 'success' });
    return result;
  } catch (error: any) {
    requestCounter.inc({ method, status: 'error' });
    errorCounter.inc({ error_type: error.code || 'unknown' });
    throw error;
  } finally {
    timer();
  }
}
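
Usage is a matter of wrapping each client call; `client` and `getDataset` below are placeholders for whatever Palantir client you use:

const dataset = await instrumentedRequest('getDataset', () =>
  client.getDataset('ri.foundry.main.dataset.example')
);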

Distributed Tracing

OpenTelemetry Setup


import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('palantir-client');

async function tracedPalantirCall<T>(
  operationName: string,
  operation: () => Promise<T>
): Promise<T> {
  return tracer.startActiveSpan(`palantir.${operationName}`, async (span) => {
    try {
      const result = await operation();
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (error: any) {
      span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
      span.recordException(error);
      throw error;
    } finally {
      span.end();
    }
  });
}
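
Note that the snippet above uses only the OpenTelemetry API; spans are exported by whichever SDK is registered at process startup. A minimal Node SDK bootstrap, assuming an OTLP collector at the default HTTP endpoint:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  serviceName: 'palantir-client',
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces', // default OTLP/HTTP endpoint
  }),
});

sdk.start();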

Logging Strategy

Structured Logging


import pino from 'pino';

const logger = pino({
  name: 'palantir',
  level: process.env.LOG_LEVEL || 'info',
});

function logPalantirOperation(
  operation: string,
  data: Record<string, any>,
  duration: number
) {
  logger.info({
    service: 'palantir',
    operation,
    duration_ms: duration,
    ...data,
  });
}
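
For example, timing an operation and logging it (`client.listDatasets` is a hypothetical method):

const start = Date.now();
const datasets = await client.listDatasets();
logPalantirOperation('listDatasets', { count: datasets.length }, Date.now() - start);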

Alert Configuration

Prometheus Alerting Rules


# palantir_alerts.yaml
groups:
  - name: palantir_alerts
    rules:
      - alert: PalantirHighErrorRate
        expr: |
          rate(palantir_errors_total[5m]) /
          rate(palantir_requests_total[5m]) > 0.05
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Palantir error rate > 5%"

      - alert: PalantirHighLatency
        expr: |
          histogram_quantile(0.95,
            rate(palantir_request_duration_seconds_bucket[5m])
          ) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Palantir P95 latency > 2s"

      - alert: PalantirDown
        expr: up{job="palantir"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Palantir integration is down"
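
These rules are evaluated by Prometheus itself; Alertmanager only routes the alerts that fire. A minimal routing sketch (the receiver name and webhook URL are placeholders):

# alertmanager.yml
route:
  receiver: 'palantir-oncall'
  group_by: ['alertname', 'severity']

receivers:
  - name: 'palantir-oncall'
    webhook_configs:
      - url: 'http://localhost:5001/alerts'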

Dashboard

Grafana Panel Queries


{
  "panels": [
    {
      "title": "Palantir Request Rate",
      "targets": [{
        "expr": "rate(palantir_requests_total[5m])"
      }]
    },
    {
      "title": "Palantir Latency P50/P95/P99",
      "targets": [
        { "expr": "histogram_quantile(0.50, rate(palantir_request_duration_seconds_bucket[5m]))" },
        { "expr": "histogram_quantile(0.95, rate(palantir_request_duration_seconds_bucket[5m]))" },
        { "expr": "histogram_quantile(0.99, rate(palantir_request_duration_seconds_bucket[5m]))" }
      ]
    }
  ]
}

Implementation Steps

Step 1: Set Up Metrics Collection

Implement Prometheus counters, histograms, and gauges for key operations.

Step 2: Add Distributed Tracing

Integrate OpenTelemetry for end-to-end request tracing.

Step 3: Configure Structured Logging

Set up JSON logging with consistent field names.

Step 4: Create Alert Rules

Define Prometheus alerting rules for error rates and latency.

Output

  • Metrics collection enabled
  • Distributed tracing configured
  • Structured logging implemented
  • Alert rules deployed

Error Handling

Issue              Cause                 Solution
Missing metrics    No instrumentation    Wrap client calls
Trace gaps         Missing propagation   Check context headers (see sketch below)
Alert storms       Wrong thresholds      Tune alert rules
High cardinality   Too many labels       Reduce label values
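
For the trace-gap row: spans from separate services only join into one trace if context is propagated on outgoing requests. A sketch using the OpenTelemetry propagation API, assuming you can attach headers to the outgoing call:

import { context, propagation } from '@opentelemetry/api';

// Inject the active trace context (traceparent/tracestate) into outgoing headers.
const headers: Record<string, string> = {};
propagation.inject(context.active(), headers);
// Attach `headers` to the outgoing HTTP request to the downstream service.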

Examples

Quick Metrics Endpoint


import express from 'express';

const app = express();

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', registry.contentType);
  res.send(await registry.metrics());
});
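
The PalantirDown alert matches up{job="palantir"}, so the scrape job name must agree with it. A minimal scrape config, assuming the app above listens on port 3000:

# prometheus.yml
scrape_configs:
  - job_name: 'palantir'
    static_configs:
      - targets: ['localhost:3000']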

Next Steps

For incident response, see palantir-incident-runbook.
