coreweave-install-auth

Configure CoreWeave Kubernetes Service (CKS) access with kubeconfig

5 Tools · coreweave-pack Plugin · saas packs Category

Allowed Tools

Read, Write, Edit, Bash(kubectl:*), Grep

Provided by Plugin

coreweave-pack: Claude Code skill pack for CoreWeave (24 skills)

saas packs v1.0.0

Installation

This skill is included in the coreweave-pack plugin:

/plugin install coreweave-pack@claude-code-plugins-plus


Instructions

CoreWeave Install & Auth

Overview

Set up access to CoreWeave Kubernetes Service (CKS). CKS runs bare-metal Kubernetes with NVIDIA GPUs -- no hypervisor overhead. Access is via standard kubeconfig with CoreWeave-issued credentials.

Prerequisites

  • CoreWeave account at https://cloud.coreweave.com
  • kubectl v1.28+ installed
  • Kubernetes namespace provisioned by CoreWeave

Instructions

Step 1: Download Kubeconfig

  1. Log in to https://cloud.coreweave.com
  2. Navigate to API Access > Kubeconfig
  3. Download the kubeconfig file

# Save kubeconfig
mkdir -p ~/.kube
cp ~/Downloads/coreweave-kubeconfig.yaml ~/.kube/coreweave

# Set as active context
export KUBECONFIG=~/.kube/coreweave

# Verify connection
kubectl get nodes
kubectl get namespaces
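If you already manage other clusters, you don't have to overwrite anything: KUBECONFIG accepts a colon-separated list of files, which kubectl merges left to right. A sketch (paths are examples matching the steps above):

```shell
# Keep the CoreWeave kubeconfig alongside an existing ~/.kube/config.
# kubectl merges all files listed in KUBECONFIG, left to right.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/coreweave"

# Optionally flatten the merged view into a single file:
# kubectl config view --flatten > ~/.kube/merged && export KUBECONFIG=~/.kube/merged

# Switch between clusters by context:
# kubectl config get-contexts
# kubectl config use-context <coreweave-context-name>
```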

Step 2: Configure API Token


# CoreWeave API token for programmatic access
export COREWEAVE_API_TOKEN="your-api-token"

# Store securely; write $HOME rather than ~, since many .env loaders
# treat the value literally and never expand the tilde
echo "COREWEAVE_API_TOKEN=${COREWEAVE_API_TOKEN}" >> .env
echo "KUBECONFIG=$HOME/.kube/coreweave" >> .env
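Once the token is in .env, it can be loaded back into a fresh shell session. A sketch; the placeholder value is only written if no token line exists yet, and `chmod 600` keeps the file private:

```shell
# Ensure .env exists and is readable only by the owner
touch .env && chmod 600 .env

# Placeholder token, written only if .env has no token line yet (assumption:
# in real use the value comes from the CoreWeave console, not this script)
grep -q '^COREWEAVE_API_TOKEN=' .env || \
  printf 'COREWEAVE_API_TOKEN=%s\n' "placeholder-token" >> .env

# Load every variable from .env into the current shell environment
set -a        # export all variables defined while sourcing
. ./.env
set +a
```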

Step 3: Verify GPU Access


# List available GPU nodes with their GPU class. Label keys contain dots
# that must be escaped for custom-columns, so quote the spec to keep the
# backslashes from being eaten by the shell:
kubectl get nodes -l gpu.nvidia.com/class \
  -o custom-columns='NAME:.metadata.name,GPU:.metadata.labels.gpu\.nvidia\.com/class'

# Or simply append the label as an extra column:
kubectl get nodes -L gpu.nvidia.com/class

# Check GPU allocatable resources
kubectl describe nodes | grep -A5 "Allocatable:" | grep nvidia
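For a cluster-wide GPU total rather than per-node detail, the per-node counts can be summed. A sketch assuming the standard `nvidia.com/gpu` allocatable resource name; the kubectl line is shown as usage since it needs a live cluster:

```shell
# Sum the second (tab-separated) column of "node<TAB>gpu-count" lines
sum_gpus() {
  awk -F'\t' '{ total += $2 } END { print total + 0 }'
}

# Usage against a live cluster:
# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.nvidia\.com/gpu}{"\n"}{end}' | sum_gpus
```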

Step 4: Test with a Simple GPU Pod


# test-gpu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-test
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.nvidia.com/class
                operator: In
                values: ["A100_PCIE_80GB"]

kubectl apply -f test-gpu.yaml
kubectl logs gpu-test  # Should show nvidia-smi output
kubectl delete pod gpu-test
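The apply/logs/delete sequence above has a race: `kubectl logs` may run before the pod has finished. A sketch that waits for completion first, assuming kubectl v1.23+ for the `--for=jsonpath` form of `kubectl wait`:

```shell
# Apply the manifest, wait for the pod to complete, print logs, clean up
run_gpu_test() {
  kubectl apply -f test-gpu.yaml || return 1
  # Block until the pod phase reaches Succeeded (or time out after 5 min)
  kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/gpu-test --timeout=300s
  kubectl logs gpu-test
  kubectl delete pod gpu-test
}
```

Run `run_gpu_test` once the kubeconfig from Step 1 is active.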

Error Handling

| Error | Cause | Solution |
|---|---|---|
| `Unable to connect to the server` | Wrong kubeconfig | Verify the KUBECONFIG path |
| `Forbidden` | Missing namespace permissions | Contact CoreWeave support |
| No GPU nodes found | Wrong node labels | Check `gpu.nvidia.com/class` labels |
| Pod stuck `Pending` | GPU capacity exhausted | Try a different GPU type or region |
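For the `Pending` case, the scheduler explains itself in the pod's events (e.g. insufficient `nvidia.com/gpu`). A small diagnostic helper, as a sketch:

```shell
# Print scheduling events for a pod stuck in Pending
why_pending() {
  [ -n "$1" ] || { echo "usage: why_pending <pod-name>" >&2; return 1; }
  # Events section of describe shows why the scheduler rejected each node
  kubectl describe pod "$1" | sed -n '/^Events:/,$p'
  # The same events, sorted chronologically
  kubectl get events --field-selector "involvedObject.name=$1" \
    --sort-by=.lastTimestamp
}
```

Example: `why_pending gpu-test`.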


Next Steps

Proceed to coreweave-hello-world to deploy your first inference service.
