Launch, manage, and scale OCI compute instances with capacity retry logic.
Tools: Read, Write, Edit, Bash(pip:*), Grep
OCI Compute — Launch, Manage & Scale
Overview
Provision and manage OCI compute instances using the Python SDK. Compute is the entry point for most OCI workloads, but "out of host capacity" errors, shape selection confusion (Flex vs Standard, AMD vs ARM vs Intel), and boot volume management make it harder than AWS EC2. This skill covers shape selection, launch with capacity retry across availability domains, instance lifecycle actions, and boot volume management.
Purpose: Launch reliable compute instances with retry logic that survives capacity shortages.
Prerequisites
- OCI Python SDK — pip install oci
- Config file at ~/.oci/config with fields: user, fingerprint, tenancy, region, key_file
- IAM policy — Allow group Developers to manage instances in compartment
- Python 3.8+
- A VCN with at least one subnet (see oraclecloud-core-workflow-b)
Instructions
Step 1: Understand Shape Options
| Shape | Arch | Flex? | OCPUs | Use Case |
| --- | --- | --- | --- | --- |
| VM.Standard.A1.Flex | ARM (Ampere) | Yes | 1-80 | Always Free eligible, best price/perf |
| VM.Standard.E5.Flex | AMD | Yes | 1-94 | General purpose, broadest availability |
| VM.Standard3.Flex | Intel | Yes | 1-32 | Intel-specific workloads |
| VM.Standard.E4.Flex | AMD | Yes | 1-64 | Previous gen, still available |
Key rule: Always use Flex shapes. They let you set exact OCPU and memory. Standard (non-Flex) shapes have fixed sizes and are being phased out.
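Encoding the table's OCPU ranges as a pre-launch check catches a bad shape config before the API call ever fires. A minimal sketch; the limits mirror the table above and should be re-verified for your region and tenancy:

```python
# OCPU ranges per Flex shape, mirroring the table above (re-verify per region)
SHAPE_OCPU_LIMITS = {
    "VM.Standard.A1.Flex": (1, 80),
    "VM.Standard.E5.Flex": (1, 94),
    "VM.Standard3.Flex": (1, 32),
    "VM.Standard.E4.Flex": (1, 64),
}

def validate_shape_config(shape: str, ocpus: int) -> None:
    """Raise early, before a launch call, if the OCPU count is out of range."""
    if "Flex" not in shape:
        raise ValueError(f"{shape} is not a Flex shape; prefer Flex shapes")
    lo, hi = SHAPE_OCPU_LIMITS[shape]
    if not lo <= ocpus <= hi:
        raise ValueError(f"{shape} supports {lo}-{hi} OCPUs, got {ocpus}")

validate_shape_config("VM.Standard.A1.Flex", 4)  # passes silently
```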
Step 2: List Available Shapes and Images
import oci

config = oci.config.from_file("~/.oci/config")
compute = oci.core.ComputeClient(config)
identity = oci.identity.IdentityClient(config)

# Get availability domains
ads = identity.list_availability_domains(compartment_id=config["tenancy"]).data

# List shapes in each AD
for ad in ads:
    shapes = compute.list_shapes(
        compartment_id=config["tenancy"],
        availability_domain=ad.name
    ).data
    flex_shapes = [s for s in shapes if "Flex" in s.shape]
    print(f"\n{ad.name}:")
    for s in flex_shapes:
        print(f"  {s.shape} | OCPUs: {s.ocpu_options.min}-{s.ocpu_options.max}")
Step 3: Launch with Capacity Retry
The most common OCI error is 500 InternalError with message "Out of host capacity." The fix is to retry across availability domains.
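A wrapper along these lines can implement the cross-AD retry. This is a sketch under assumed names (`launch_details` is a prepared `LaunchInstanceDetails`; in practice the caught exception is `oci.exceptions.ServiceError`), with the capacity check factored out so the logic is testable without the SDK:

```python
import time

def is_out_of_capacity(status, message):
    """Matches OCI's 'Out of host capacity' ServiceError (HTTP 500 InternalError)."""
    return status == 500 and "out of host capacity" in (message or "").lower()

def launch_across_ads(compute, identity, tenancy_id, launch_details,
                      rounds=5, delay_s=30):
    """Try every availability domain each round; sleep between rounds."""
    ads = identity.list_availability_domains(compartment_id=tenancy_id).data
    for rnd in range(rounds):
        for ad in ads:
            launch_details.availability_domain = ad.name
            try:
                return compute.launch_instance(launch_details).data
            except Exception as e:  # oci.exceptions.ServiceError in practice
                if not is_out_of_capacity(getattr(e, "status", None),
                                          getattr(e, "message", str(e))):
                    raise
                print(f"Round {rnd + 1}: no capacity in {ad.name}")
        time.sleep(delay_s)
    raise RuntimeError("No capacity in any AD; try again later or another region")
```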
Build OCI networking from scratch — VCN, subnets, gateways, and security rules.
Tools: Read, Write, Edit, Bash(pip:*), Grep
OCI Networking — VCN, Subnets & Security Rules
Overview
Build a working OCI network from scratch using the Python SDK. OCI networking (VCN, subnets, security lists, NSGs, gateways) has more moving parts than AWS VPC. A misconfigured security list silently drops traffic with no error — just timeouts. This skill creates a complete network topology with public and private subnets, internet and NAT gateways, route tables, and Network Security Groups (NSGs).
Purpose: Build a production-ready VCN with proper routing and security rules that actually works on first deploy.
Prerequisites
- OCI Python SDK — pip install oci
- Config file at ~/.oci/config with fields: user, fingerprint, tenancy, region, key_file
- IAM policy — Allow group Developers to manage virtual-network-family in compartment
- Python 3.8+
Instructions
Step 1: Create the VCN
import oci

config = oci.config.from_file("~/.oci/config")
network = oci.core.VirtualNetworkClient(config)

vcn = network.create_vcn(
    oci.core.models.CreateVcnDetails(
        compartment_id=config["tenancy"],
        display_name="app-vcn",
        cidr_blocks=["10.0.0.0/16"],
        dns_label="appvcn",
    )
).data
print(f"VCN created: {vcn.id}")
Step 2: Create Internet Gateway and NAT Gateway
The internet gateway handles inbound/outbound traffic for public subnets. The NAT gateway gives private subnets outbound-only internet access.
# Internet Gateway (for public subnets)
igw = network.create_internet_gateway(
    oci.core.models.CreateInternetGatewayDetails(
        compartment_id=config["tenancy"],
        vcn_id=vcn.id,
        display_name="app-igw",
        is_enabled=True,
    )
).data

# NAT Gateway (for private subnets — outbound only)
nat = network.create_nat_gateway(
    oci.core.models.CreateNatGatewayDetails(
        compartment_id=config["tenancy"],
        vcn_id=vcn.id,
        display_name="app-nat",
    )
).data
print(f"IGW: {igw.id}\nNAT: {nat.id}")
Step 3: Create Route Tables
# Public route table — all traffic via internet gateway
public_rt = network.create_route_table(
    oci.core.models.CreateRouteTableDetails(
        compartment_id=config["tenancy"],
        vcn_id=vcn.id,
        display_name="public-rt",
        route_rules=[
            oci.core.models.RouteRule(
                network_entity_id=igw.id,
                destination="0.0.0.0/0",
                destination_type="CIDR_BLOCK",
            )
        ],
    )
).data
print(f"Public route table: {public_rt.id}")
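A private subnet needs a matching route table that targets the NAT gateway instead. A sketch, assuming the `network` client, `vcn`, and `nat` objects created in the previous steps:

```python
# Private route table — outbound-only traffic via the NAT gateway
private_rt = network.create_route_table(
    oci.core.models.CreateRouteTableDetails(
        compartment_id=config["tenancy"],
        vcn_id=vcn.id,
        display_name="private-rt",
        route_rules=[
            oci.core.models.RouteRule(
                network_entity_id=nat.id,
                destination="0.0.0.0/0",
                destination_type="CIDR_BLOCK",
            )
        ],
    )
).data
print(f"Private route table: {private_rt.id}")
```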
Track OCI spend with the Usage API and set up budget alerts.
Tools: Read, Write, Edit, Bash(pip:*), Grep
Oracle Cloud Cost Tuning
Overview
Track OCI spending programmatically using the Usage API and set up budget alerts before Universal Credits run out unexpectedly. OCI pricing varies by shape, region, and commitment level, and the Cost Analysis tool in the Console is buried and confusing. This skill uses the Usage API to query spend by compartment, service, and shape, creates budgets with alert rules, and covers optimization strategies including Always Free tier resources, preemptible instances, and reserved capacity.
Purpose: Get visibility into OCI spending through code, set proactive budget alerts, and identify cost optimization opportunities.
Prerequisites
- OCI tenancy with an API signing key in
~/.oci/config
- Python 3.8+ with
pip install oci
- Tenancy OCID (root compartment) for tenancy-wide cost queries
- IAM policy granting
read usage-reports in the tenancy
- Notification topic OCID for budget alert delivery (see
oraclecloud-observability)
Instructions
Step 1: Query Usage with the Usage API
The Usage API returns cost and usage data broken down by configurable dimensions:
import oci
from datetime import datetime, timedelta

config = oci.config.from_file("~/.oci/config")
usage_api = oci.usage_api.UsageapiClient(config)

# Query last 30 days of spend by service
response = usage_api.request_summarized_usages(
    oci.usage_api.models.RequestSummarizedUsagesDetails(
        tenant_id=config["tenancy"],
        time_usage_started=(datetime.utcnow() - timedelta(days=30)).isoformat() + "Z",
        time_usage_ended=datetime.utcnow().isoformat() + "Z",
        granularity="DAILY",
        query_type="COST",
        group_by=["service"]
    )
)

total_cost = 0.0
for item in response.data.items:
    cost = item.computed_amount or 0
    total_cost += cost
    if cost > 0:
        print(f"{item.service}: ${cost:.2f} ({item.currency})")
print(f"\nTotal 30-day spend: ${total_cost:.2f}")
Step 2: Break Down Cost by Compartment and Shape
Identify which compartments and shapes are driving your bill:
# Cost by compartment
response = usage_api.request_summarized_usages(
    oci.usage_api.models.RequestSummarizedUsagesDetails(
        tenant_id=config["tenancy"],
        time_usage_started=(datetime.utcnow() - timedelta(days=30)).isoformat() + "Z",
        time_usage_ended=datetime.utcnow().isoformat() + "Z",
        granularity="MONTHLY",
        query_type="COST",
        group_by=["compartmentName", "skuName"]
    )
)

for item in response.data.items:
    cost = item.computed_amount or 0
    if cost > 0:
        print(f"{item.compartment_name} / {item.sku_name}: ${cost:.2f}")
Manage OCI Object Storage — buckets, uploads, PARs, and lifecycle policies.
Tools: Read, Write, Edit, Bash(pip:*), Grep
OCI Object Storage — Buckets, PARs & Lifecycle
Overview
Manage OCI Object Storage using the Python SDK. Object Storage is OCI's S3 equivalent, but PAR (Pre-Authenticated Request) URLs expire silently with no error — the URL just returns 404. Multipart uploads over 50GB require manual part management. Lifecycle policies can delete data unexpectedly if misconfigured. This skill covers the safe patterns for all of these operations.
Purpose: Upload, download, and share objects safely with proper PAR expiry management and lifecycle policy configuration.
Prerequisites
- OCI Python SDK — pip install oci
- Config file at ~/.oci/config with fields: user, fingerprint, tenancy, region, key_file
- IAM policy — Allow group Developers to manage objects in compartment
- Python 3.8+
Instructions
Step 1: Discover Namespace and Create a Bucket
Every OCI tenancy has a unique Object Storage namespace. You must discover it before any operation.
import oci
from datetime import datetime, timedelta

config = oci.config.from_file("~/.oci/config")
storage = oci.object_storage.ObjectStorageClient(config)

# Namespace is tenancy-specific — discover it, never hardcode
namespace = storage.get_namespace().data
print(f"Namespace: {namespace}")

# Create bucket
bucket = storage.create_bucket(
    namespace_name=namespace,
    create_bucket_details=oci.object_storage.models.CreateBucketDetails(
        compartment_id=config["tenancy"],
        name="app-data-bucket",
        storage_tier="Standard",
        public_access_type="NoPublicAccess",
        versioning="Enabled",  # Protect against accidental deletes
    ),
).data
print(f"Bucket created: {bucket.name}")
Step 2: Upload Objects (Simple and Multipart)
Use simple upload for files under 50MB. For larger files, use the UploadManager which handles multipart automatically.
# Simple upload (< 50MB)
with open("report.csv", "rb") as f:
    storage.put_object(
        namespace_name=namespace,
        bucket_name="app-data-bucket",
        object_name="reports/2026/report.csv",
        put_object_body=f,
        content_type="text/csv",
    )
print("Simple upload complete")

# Multipart upload for large files (UploadManager handles chunking)
from oci.object_storage import UploadManager

upload_manager = UploadManager(storage)
response = upload_manager.upload_file(
    namespace_name=namespace,
    bucket_name="app-data-bucket",
    object_name="backups/large-dump.tar.gz",
    file_path=
Collect OCI instance diagnostics — serial console, cloud-init logs, metadata, and VCN flow logs — into a single debug bundle.
Tools: Read, Write, Edit, Bash(oci:*), Bash(python3:*), Grep
Oracle Cloud Debug Bundle
Overview
Collect comprehensive diagnostics from an unresponsive OCI compute instance without touching the OCI Console. When an instance reports "unavailable due to an issue with the underlying infrastructure" or cloud-init failures, you need serial console output, cloud-init logs, instance metadata, and VCN flow logs — all gathered via CLI commands into a single tar archive for root-cause analysis or support ticket attachment.
Purpose: Generate a self-contained debug bundle (.tar.gz) with all the data OCI Support will ask for, collected in under 60 seconds.
Prerequisites
- OCI CLI installed and configured — oci --version returns 3.x+, ~/.oci/config is valid (see oraclecloud-install-auth)
- Python 3.8+ with the OCI SDK — pip install oci
- Compartment OCID — the compartment containing the target instance
- Instance OCID — format: ocid1.instance.oc1.{region}.aaaa...
- IAM policies granting inspect instance-console-histories, read instances, read vcn-flow-logs in the target compartment
Instructions
Step 1: Set Target Variables
export INSTANCE_OCID="ocid1.instance.oc1.iad.YOUR_INSTANCE_OCID"
export COMPARTMENT_OCID="ocid1.compartment.oc1..YOUR_COMPARTMENT_OCID"
export BUNDLE_DIR="oci-debug-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BUNDLE_DIR"
Step 2: Capture Instance Metadata
oci compute instance get \
--instance-id "$INSTANCE_OCID" \
--query 'data.{state:"lifecycle-state",shape:shape,ad:"availability-domain",created:"time-created",fault:"fault-domain"}' \
--output json > "$BUNDLE_DIR/instance-metadata.json"
echo "Instance state: $(jq -r '.state' "$BUNDLE_DIR/instance-metadata.json")"
Step 3: Retrieve Serial Console History
The serial console captures kernel panics, boot failures, and cloud-init output — even when SSH is unreachable:
# Create a console history capture
CAPTURE_ID=$(oci compute instance-console-history capture \
--instance-id "$INSTANCE_OCID" \
--query 'data.id' --raw-output)
echo "Console history capture: $CAPTURE_ID"
# Wait for capture to complete, then download
sleep 10
oci compute instance-console-history get-content \
--instance-console-history-id "$CAPTURE_ID" \
> "$BUNDLE_DIR/serial-console.log"
echo "Serial console: $(wc -l < "$BUNDLE_DIR/serial-console.log") lines"
Step 4: Extract Cloud-Init Logs via Instance
Deploy containers to OCI using OKE (Kubernetes) or Container Instances.
Tools: Read, Write, Edit, Bash(pip:*), Bash(kubectl:*), Bash(docker:*), Grep
Oracle Cloud Deploy Integration
Overview
Deploy containerized applications to OCI using either OKE (Oracle Kubernetes Engine) or Container Instances. OKE provides full Kubernetes but requires 4x more config than EKS — you need a VCN, subnet, node pool, OCIR registry, and IAM policies before a single pod runs. Container Instances offer a simpler serverless alternative for workloads that don't need Kubernetes orchestration.
Purpose: Get containers running on OCI through both the full Kubernetes path (OKE) and the simpler Container Instances path, with working manifests and registry auth.
Prerequisites
- OCI tenancy with an API signing key in ~/.oci/config
- Python 3.8+ with pip install oci for SDK-based provisioning
- Docker installed for building and pushing images
- kubectl installed for OKE cluster interaction
- Compartment OCID where resources will be created
- VCN with subnets — at least one public and one private subnet for OKE
Instructions
Step 1: Push Container Image to OCIR
Oracle Cloud Infrastructure Registry (OCIR) is OCI's Docker-compatible registry. Auth uses an OCI auth token, not your API key:
# Generate an auth token: Console > Profile > Auth Tokens > Generate Token
# Save the token — it's only shown once
# Login to OCIR (format: {region-key}.ocir.io/{namespace})
docker login us-ashburn-1.ocir.io
# Username: {tenancy-namespace}/oracleidentitycloudservice/{email}
# Password: your auth token
# Tag and push
docker tag myapp:latest us-ashburn-1.ocir.io/{namespace}/myapp:latest
docker push us-ashburn-1.ocir.io/{namespace}/myapp:latest
Step 2: Create OKE Cluster via Python SDK
Use the OCI Python SDK to provision an OKE cluster programmatically:
import oci

config = oci.config.from_file("~/.oci/config")
container_engine = oci.container_engine.ContainerEngineClient(config)

# Create cluster — the call is async; the response carries a work request ID
create_cluster_response = container_engine.create_cluster(
    oci.container_engine.models.CreateClusterDetails(
        name="my-oke-cluster",
        compartment_id="ocid1.compartment.oc1..example",
        vcn_id="ocid1.vcn.oc1..example",
        kubernetes_version="v1.28.2",
        options=oci.container_engine.models.ClusterCreateOptions(
            service_lb_subnet_ids=["ocid1.subnet.oc1..example-public"],
            kubernetes_network_config=oci.container_engine.models.KubernetesNetworkConfig(
                pods_cidr="10.244.0.0/16",
                services_cidr="10.96.0.0/16"
            )
        )
    )
)
work_request_id = create_cluster_response.headers["opc-work-request-id"]
print(f"Cluster creation requested; work request: {work_request_id}")
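Once the cluster reaches ACTIVE (track the work request above), kubectl needs a kubeconfig. A CLI sketch; the cluster OCID and region are placeholders:

```shell
# Generate a kubeconfig for the new cluster (fill in your cluster OCID)
oci ce cluster create-kubeconfig \
  --cluster-id "ocid1.cluster.oc1.iad.YOUR_CLUSTER_OCID" \
  --file ~/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0

# Verify nodes have joined
kubectl get nodes
```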
Design OCI compartment hierarchies, dynamic groups, and cross-tenancy access patterns.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Enterprise RBAC
Overview
OCI compartments are powerful but the inheritance model is confusing. Policies at root vs compartment level behave differently, dynamic groups enable compute-to-service auth without API keys, and cross-tenancy access requires matching policies on both sides. Most teams get this wrong and over-permission everything with manage all-resources in tenancy. This skill designs proper compartment hierarchies with least-privilege access.
Purpose: Build a scalable, least-privilege OCI organization structure using compartments, policy inheritance, dynamic groups, and tag-based access control.
Prerequisites
- OCI Python SDK — pip install oci
- OCI config file at ~/.oci/config with valid credentials (user, fingerprint, tenancy, region, key_file)
- Tenancy administrator access — compartment and policy creation requires root-level permissions
- Familiarity with OCI IAM basics (see oraclecloud-security-basics for policy syntax)
- Python 3.8+
Instructions
Step 1: Design the Compartment Hierarchy
OCI compartments are nested organizational units. Unlike AWS accounts, they share a single tenancy with inherited policies. A standard enterprise layout:
Root (Tenancy)
├── shared-infra ← DNS, networking hub, shared services
├── security ← Vault, audit logs, Cloud Guard
├── dev
│ ├── dev-compute ← Dev instances, OKE clusters
│ └── dev-data ← Dev databases, object storage
├── staging
│ ├── staging-compute
│ └── staging-data
└── prod
├── prod-compute
└── prod-data
Create this hierarchy programmatically:
import oci

config = oci.config.from_file("~/.oci/config")
identity = oci.identity.IdentityClient(config)
tenancy_id = config["tenancy"]

def create_compartment(parent_id, name, description):
    """Create a compartment and return its OCID."""
    result = identity.create_compartment(
        oci.identity.models.CreateCompartmentDetails(
            compartment_id=parent_id,
            name=name,
            description=description
        )
    )
    print(f"Created: {name} ({result.data.id})")
    return result.data.id

# Top-level compartments
shared = create_compartment(tenancy_id, "shared-infra", "Shared infrastructure services")
security = create_compartment(tenancy_id, "security", "Security and audit resources")
dev = create_compartment(tenancy_id, "dev", "Development environment")
staging = create_compartment(tenancy_id, "staging", "Staging environment")
prod = create_compartment(tenancy_id, "prod", "Production environment")
# N
Launch your first OCI compute instance with capacity retry logic.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Hello World
Overview
Launch, list, and manage your first OCI compute instance. The most common blocker for new OCI users is the Out of host capacity error when launching Always Free ARM shapes (VM.Standard.A1.Flex). This error means the data center has no available hosts — it is not a permissions issue. The solution is a retry loop that polls until capacity becomes available.
Purpose: Get a running compute instance on OCI, including the capacity retry pattern that makes Always Free ARM shapes actually usable.
Prerequisites
- Completed oraclecloud-install-auth — valid ~/.oci/config with API key authentication
- Python 3.8+ with pip install oci installed
- A subnet OCID in your tenancy (VCN > Subnets in the Console, or use the default VCN)
- An image OCID for your region (Compute > Custom Images, or list platform images via API)
- An SSH public key at ~/.ssh/id_rsa.pub (for instance access)
Instructions
Step 1: List Existing Instances
import oci

config = oci.config.from_file("~/.oci/config")
compute = oci.core.ComputeClient(config)

instances = compute.list_instances(compartment_id=config["tenancy"])
for inst in instances.data:
    print(f"{inst.display_name:<30} {inst.lifecycle_state:<12} {inst.shape}")
Step 2: List Available Shapes and Images
# List shapes available in your tenancy
shapes = compute.list_shapes(compartment_id=config["tenancy"])
for s in shapes.data:
    ocpus = getattr(s, "ocpus", "fixed")
    print(f"{s.shape:<35} OCPUs: {ocpus}")

# List platform images (Oracle Linux)
images = compute.list_images(
    compartment_id=config["tenancy"],
    operating_system="Oracle Linux",
    sort_by="TIMECREATED",
    sort_order="DESC",
    limit=5
)
for img in images.data:
    print(f"{img.display_name:<60} {img.id[:40]}...")
Step 3: Launch an Instance (Standard)
import os

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id=config["tenancy"],
    availability_domain="Uocm:US-ASHBURN-AD-1",  # Change for your region
    display_name="hello-oci",
    shape="VM.Standard.E4.Flex",
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
        ocpus=1, memory_in_gbs=8
    ),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1.iad.aaaa...",  # Your image OCID
        boot_volume_size_in_gbs=50
    ),
    create_vnic_details=oci.core.models.CreateVnicDetail
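The overview promises a retry loop that polls until capacity frees up. A generic sketch (names assumed; wire `launch_fn` to `compute.launch_instance(launch_details).data` and the predicate to `oci.exceptions.ServiceError` attributes):

```python
import time

def retry_until_capacity(launch_fn, is_capacity_error, max_attempts=60, delay_s=60):
    """Call launch_fn until it succeeds; sleep and retry on capacity errors only."""
    for attempt in range(1, max_attempts + 1):
        try:
            return launch_fn()
        except Exception as e:
            if not is_capacity_error(e):
                raise
            print(f"Attempt {attempt}: out of host capacity, retrying in {delay_s}s")
            time.sleep(delay_s)
    raise RuntimeError(f"No capacity after {max_attempts} attempts")

# Usage sketch:
# instance = retry_until_capacity(
#     lambda: compute.launch_instance(launch_details).data,
#     lambda e: getattr(e, "status", None) == 500 and "capacity" in str(e).lower(),
# )
```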
Self-service incident runbook for OCI outages — health probes, instance recovery, cross-AD/region failover.
Tools: Read, Write, Edit, Bash(oci:*), Bash(python3:*), Grep
Oracle Cloud Incident Runbook
Overview
Self-service runbook for when OCI instances go down and the status page stays green. OCI's status page has a history of not acknowledging outages in real time (London Jan 2026 — 502s and instances disappearing for 10 minutes with no status update). OCI Support response times average 4+ hours for Sev-1 tickets. This runbook gives you health probes, automated instance recovery, cross-AD failover, and cross-region failover — all executable without waiting on Oracle.
Purpose: Detect OCI service degradation independently, recover instances automatically, and fail over to alternate availability domains or regions when the primary is impacted.
Prerequisites
- OCI CLI installed and configured — ~/.oci/config validated (see oraclecloud-install-auth)
- Python 3.8+ with the OCI SDK — pip install oci
- Pre-configured resources: at least one compute instance, a VCN with subnets in multiple ADs
- IAM policies: manage instances, manage volumes, inspect work-requests in the target compartment
- Boot volume backups enabled (recovery depends on having a recent backup)
Instructions
Step 1: Independent Health Probes
Do not trust the OCI status page alone. Run your own health checks against the OCI API:
import oci
import time

config = oci.config.from_file("~/.oci/config")

def probe_oci_health(config):
    """Probe OCI API endpoints independently of the status page."""
    results = {}
    # Probe 1: Identity service (lightest call)
    try:
        start = time.time()
        identity = oci.identity.IdentityClient(config)
        identity.list_regions()
        results["identity"] = {"status": "healthy", "latency_ms": int((time.time() - start) * 1000)}
    except oci.exceptions.ServiceError as e:
        results["identity"] = {"status": "degraded", "error": str(e.status)}
    # Probe 2: Compute service
    try:
        start = time.time()
        compute = oci.core.ComputeClient(config)
        compute.list_instances(compartment_id=config["tenancy"], limit=1)
        results["compute"] = {"status": "healthy", "latency_ms": int((time.time() - start) * 1000)}
    except oci.exceptions.ServiceError as e:
        results["compute"] = {"status": "degraded", "error": str(e.status)}
    # Probe 3: Networking service
    try:
        start = time.time()
        network = oci.core.VirtualNetworkClient(config)
        network.list_vcns(compartment_id=config["tenancy"], limit=1)
        results["n
Install and configure Oracle Cloud Infrastructure (OCI) SDK and CLI authentication.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Install & Auth
Overview
Configure API key authentication for Oracle Cloud Infrastructure (OCI). OCI auth requires a ~/.oci/config file with five mandatory fields — user OCID, fingerprint, tenancy OCID, region, and the path to an RSA private key. One wrong field produces the cryptic ConfigFileNotFound or InvalidKeyFilePath error with no hint about which field failed.
Purpose: Produce a validated ~/.oci/config file, generate an RSA key pair, upload the public key to OCI, and verify connectivity with both the Python SDK and OCI CLI.
Prerequisites
- OCI account with an active tenancy — sign up at https://cloud.oracle.com
- Python 3.8+ (the OCI Python SDK is the most mature SDK)
- OpenSSL installed (for RSA key generation)
- Your user OCID (Profile > User Settings in the OCI Console) — format: ocid1.user.oc1..aaaa...
- Your tenancy OCID (Administration > Tenancy Details) — format: ocid1.tenancy.oc1..aaaa...
- Your home region (e.g., us-ashburn-1, eu-frankfurt-1)
Instructions
Step 1: Install the OCI Python SDK and CLI
pip install oci oci-cli
Step 2: Generate an RSA Key Pair
mkdir -p ~/.oci
openssl genrsa -out ~/.oci/oci_api_key.pem 2048
chmod 600 ~/.oci/oci_api_key.pem
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
Step 3: Get the Key Fingerprint
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c
# Output: ab:cd:ef:12:34:56:78:90:ab:cd:ef:12:34:56:78:90
Step 4: Upload Public Key to OCI Console
Navigate to: Profile (top-right) > User Settings > API Keys > Add API Key > Paste Public Key
Paste the contents of ~/.oci/oci_api_key_public.pem. The console shows the fingerprint — it must match Step 3.
Step 5: Create the Config File
cat > ~/.oci/config << 'EOF'
[DEFAULT]
user=ocid1.user.oc1..aaaa_YOUR_USER_OCID
fingerprint=ab:cd:ef:12:34:56:78:90:ab:cd:ef:12:34:56:78:90
tenancy=ocid1.tenancy.oc1..aaaa_YOUR_TENANCY_OCID
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
EOF
chmod 600 ~/.oci/config
All five fields are required. The key_file must point to the private key (not the public .pem).
Step 6: Verify with the Python SDK
import oci

config = oci.config.from_file("~/.oci/config")
oci.config.validate_config(config)
identity = oci.identity.IdentityClient(config)
user = identity.get_user(config["user"]).data
print(f"Authenticated as: {user.name}")
Set up a productive local OCI development workflow using CLI and SDK instead of the web console.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Local Dev Loop
Overview
The OCI web console is slow, hard to navigate, and requires dozens of clicks for common operations. A local dev workflow using the OCI CLI and Python SDK replaces the console for everything: listing resources, launching instances, managing object storage, and checking service health. Profile switching lets you target dev/staging/prod from the same terminal.
Purpose: Set up a complete local OCI development environment with CLI profiles, shell aliases, environment variable management, and common workflow scripts that eliminate the need for the web console.
Prerequisites
- Completed oraclecloud-install-auth — valid ~/.oci/config with at least one profile
- Python 3.8+ with pip install oci oci-cli
- Bash or Zsh shell
- OCIDs for your compartments (Governance > Compartments in the Console — last time you need it)
Instructions
Step 1: Install and Verify the OCI CLI
pip install oci-cli
# Verify installation
oci --version
# Quick connectivity test
oci iam region list --output table
Step 2: Set Up Multiple Profiles
Edit ~/.oci/config with profiles for each environment:
[DEFAULT]
user=ocid1.user.oc1..aaaa_YOUR_USER
fingerprint=ab:cd:ef:12:34:56:78:90:ab:cd:ef:12:34:56:78:90
tenancy=ocid1.tenancy.oc1..aaaa_PROD_TENANCY
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
[dev]
user=ocid1.user.oc1..aaaa_YOUR_USER
fingerprint=ab:cd:ef:12:34:56:78:90:ab:cd:ef:12:34:56:78:90
tenancy=ocid1.tenancy.oc1..aaaa_DEV_TENANCY
region=us-phoenix-1
key_file=~/.oci/oci_api_key_dev.pem
[staging]
user=ocid1.user.oc1..aaaa_YOUR_USER
fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef
tenancy=ocid1.tenancy.oc1..aaaa_STAGING_TENANCY
region=eu-frankfurt-1
key_file=~/.oci/oci_api_key_staging.pem
Switch profiles with the --profile flag or the OCI_CLI_PROFILE env var:
# CLI flag
oci compute instance list --compartment-id <OCID> --profile dev
# Environment variable (applies to all commands in session)
export OCI_CLI_PROFILE=dev
oci compute instance list --compartment-id <OCID>
Step 3: Environment Variables and .env File
Create a project .env file for compartment OCIDs and region defaults:
# .env — OCI project configuration (NEVER commit this file)
OCI_COMPARTMENT_ID="ocid1.compartment.oc1..aaaa_YOUR_COMPARTMENT"
OCI_TENANCY_ID="ocid1.tenancy.oc1..aaaa_YOUR_TENANCY"
OCI_REGION="us-ashburn-1"
OCI_CLI_PROFILE="DEFAULT"
# Add to .gitignore
echo ".env" >> .gitignore
# Source in your
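To load these variables into a shell session, `set -a` exports every assignment the sourced file makes. A self-contained sketch with placeholder values:

```shell
# Create a sample .env (normally written once per project and git-ignored)
cat > .env <<'EOF'
OCI_COMPARTMENT_ID="ocid1.compartment.oc1..aaaa_YOUR_COMPARTMENT"
OCI_REGION="us-ashburn-1"
OCI_CLI_PROFILE="DEFAULT"
EOF

# set -a marks every variable assigned while sourcing for export
set -a
. ./.env
set +a

echo "compartment=$OCI_COMPARTMENT_ID region=$OCI_REGION profile=$OCI_CLI_PROFILE"
```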
Migrate workloads from AWS or Azure to OCI — IAM translation, networking mapping, compute image import, and data migration.
Tools: Read, Write, Edit, Bash(oci:*), Bash(python3:*), Bash(terraform:*), Grep
Oracle Cloud Migration Deep Dive
Overview
Migrating to OCI from AWS or Azure requires translating IAM concepts (roles to policies, accounts to compartments), networking (VPC to VCN, Security Groups to NSGs), and compute (AMI to custom image). OCI's migration tooling is underdocumented compared to AWS Migration Hub or Azure Migrate. This skill provides comprehensive concept mapping tables, custom image import procedures, network topology translation, IAM policy translation, and data migration patterns — everything needed for a controlled cloud migration.
Purpose: Translate AWS/Azure architecture into OCI equivalents and execute the migration using OCI CLI and Python SDK, with verification at each step.
Prerequisites
- OCI account with an active tenancy — https://cloud.oracle.com
- OCI CLI installed and configured — ~/.oci/config validated (see oraclecloud-install-auth)
- Python 3.8+ with the OCI SDK — pip install oci
- Source cloud CLI — aws CLI or az CLI for exporting resources
- Object Storage bucket in OCI for staging image imports
- IAM policies: manage objects in compartment, manage custom-images in compartment, manage virtual-network-family in compartment
Instructions
Step 1: AWS-to-OCI Concept Mapping
| AWS Concept | OCI Equivalent | Key Differences |
| --- | --- | --- |
| Account | Tenancy | One tenancy = one billing entity, use compartments for isolation |
| Organization OU | Compartment | Compartments are hierarchical, up to 6 levels deep |
| IAM Role | IAM Policy | OCI policies use allow group X to verb resource in compartment Y syntax |
| IAM User | IAM User | Same concept, but OCI uses API key auth (not access keys) |
| VPC | VCN | VCN subnets are regional (not AZ-scoped like AWS) |
| Security Group | Network Security Group (NSG) | NSGs attach to VNICs, not instances. Also have Security Lists (subnet-level) |
| Route Table | Route Table | Similar, but OCI route rules target gateway OCIDs |
| Internet Gateway | Internet Gateway | Identical concept |
| NAT Gateway | NAT Gateway | Identical concept |
| VPC Endpoint | Service Gateway | Service Gateway routes to OCI services without internet |
| VPC Peering | LPG / DRG | LPG for same-region, DRG for cross-region or on-premises |
| AMI | Custom Image | Export as VMDK |
Configure multi-environment OCI workflows with config profiles and compartment-per-environment patterns.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Multi-Environment Setup
Overview
OCI has no "accounts" like AWS — you use compartments plus OCI config profiles for dev/staging/prod separation. But profile switching is manual, compartment OCIDs are easy to confuse, and one wrong --compartment-id deploys to production. This skill sets up safe multi-environment workflows with named profiles, compartment aliasing, environment validation, and deployment guardrails.
Purpose: Configure safe, repeatable multi-environment OCI workflows that prevent accidental cross-environment operations.
Prerequisites
- OCI Python SDK — pip install oci
- OCI CLI — installed and configured (oci setup config)
- Separate API keys per environment (recommended) or a single key with cross-compartment policies
- Compartment OCIDs for each environment (dev, staging, prod)
- Python 3.8+
Instructions
Step 1: Configure Multi-Profile ~/.oci/config
The OCI config file supports named profiles. Each profile can point to different tenancies, regions, or use different API keys:
# ~/.oci/config
[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
[DEV]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1
key_file=~/.oci/oci_api_key_dev.pem
[STAGING]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-phoenix-1
key_file=~/.oci/oci_api_key_staging.pem
[PROD]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=ff:ee:dd:cc:bb:aa:00:99:88:77:66:55:44:33:22:11
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1
key_file=~/.oci/oci_api_key_prod.pem
Best practice: Use different API key pairs per environment. If the dev key is compromised, prod is unaffected.
Step 2: Create an Environment Configuration Module
Centralize compartment OCIDs and profile mappings to prevent OCID confusion:
import oci
import os

# Environment configuration — single source of truth for OCIDs
ENVIRONMENTS = {
    "dev": {
        "profile": "DEV",
        "compartment_id": "ocid1.compartment.oc1..dev_example",
        "region": "us-ashburn-1",
        "allow_destructive": True,
    },
    "staging": {
        "profile": "STAGING",
        "compartment_id": "ocid1.compartment.oc1..staging_example",
        "region": "us-phoenix-1",
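On top of an environment map like the one above, a small guardrail function (illustrative names, not part of the SDK) can refuse destructive operations wherever they are not explicitly allowed:

```python
# Illustrative guardrail built on an ENVIRONMENTS-style map
ENVS = {
    "dev": {"profile": "DEV", "allow_destructive": True},
    "staging": {"profile": "STAGING", "allow_destructive": False},
    "prod": {"profile": "PROD", "allow_destructive": False},
}

def guard(env_name, operation, destructive=False):
    """Validate the target environment before running an operation; return its profile."""
    env = ENVS.get(env_name)
    if env is None:
        raise ValueError(f"Unknown environment: {env_name!r}")
    if destructive and not env["allow_destructive"]:
        raise PermissionError(
            f"Refusing destructive operation {operation!r} in {env_name}"
        )
    return env["profile"]

profile = guard("dev", "terminate_instance", destructive=True)  # allowed in dev
```

Call `guard(...)` at the top of every script, then build clients from the returned profile, so a wrong compartment OCID can never reach production.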
Set up programmatic monitoring, logging, and alarms for OCI resources.
Tools: Read, Write, Edit, Bash(pip:*), Grep
Oracle Cloud Observability
Overview
Set up programmatic monitoring for OCI infrastructure using the Monitoring, Logging, and Notifications services. The OCI Console buries these features behind nested menus, and the status page has historically failed to acknowledge outages (e.g., London region, January 2026). This skill builds monitoring you control through code — metric queries, alarm rules, custom metric publishing, and log searches — so you are never surprised by an outage you should have caught.
Purpose: Create a code-driven observability stack that queries metrics, fires alarms, publishes custom metrics, and searches logs without depending on the OCI Console.
Prerequisites
- OCI tenancy with an API signing key in
~/.oci/config
- Python 3.8+ with
pip install oci
- Compartment OCID containing the resources to monitor
- IAM policies granting
manage alarms and read metrics in the target compartment
- Notification topic created for alarm destinations (or create one in Step 4)
Instructions
Step 1: Query Metrics with MonitoringClient
OCI publishes built-in metrics for compute, networking, block storage, and more. Query them programmatically:
import oci
from datetime import datetime, timedelta
config = oci.config.from_file("~/.oci/config")
monitoring = oci.monitoring.MonitoringClient(config)
# Query CPU utilization for all instances in a compartment
response = monitoring.summarize_metrics_data(
    compartment_id="ocid1.compartment.oc1..example",
    summarize_metrics_data_details=oci.monitoring.models.SummarizeMetricsDataDetails(
        namespace="oci_computeagent",
        query='CpuUtilization[5m]{availabilityDomain = "Uocm:US-ASHBURN-AD-1"}.mean()',
        start_time=(datetime.utcnow() - timedelta(hours=1)).isoformat() + "Z",
        end_time=datetime.utcnow().isoformat() + "Z"
    )
)

for metric in response.data:
    for dp in metric.aggregated_datapoints:
        print(f"{dp.timestamp}: {dp.value:.1f}% CPU")
Step 2: Create Alarm Rules
Alarms trigger when a metric crosses a threshold. Create them via SDK so they survive Console UI changes:
monitoring.create_alarm(
    oci.monitoring.models.CreateAlarmDetails(
        display_name="High CPU Alert",
        compartment_id="ocid1.compartment.oc1..example",
        metric_compartment_id="ocid1.compartment.oc1..example",
        namespace="oci_computeagent",
        query='CpuUtilization[5m].mean() > 80',
        severity="CRITICAL",
        body="CPU utilization exceeded 80% for 5 minutes.",
        destinations=["ocid1.onstopic.oc1..example"],  # ONS topic that receives the alarm
        pending_duration="PT5M",  # breach must persist 5 minutes before firing
        is_enabled=True,
    )
)
Optimize OCI compute shapes, block volume tiers, and network throughput.
Tools: Read, Write, Edit, Bash(pip:*), Grep
Oracle Cloud Performance Tuning
Overview
Navigate OCI's opaque shape naming, block volume performance tiers, and shape-dependent network bandwidth. OCI shapes like VM.Standard.E5.Flex, VM.Standard3.Flex, and VM.Standard.A1.Flex look similar but have wildly different performance profiles. Block volume tiers (Balanced, Higher Performance, Ultra High Performance) have different IOPS and throughput limits that are easy to get wrong. This skill maps performance characteristics to shapes and storage tiers so you can make informed infrastructure decisions.
Purpose: Choose the right compute shape and storage tier for your workload by understanding OCI's performance characteristics, and monitor those resources programmatically.
Prerequisites
- OCI tenancy with an API signing key in
~/.oci/config
- Python 3.8+ with
pip install oci
- Compartment OCID for querying available shapes and metrics
- Basic understanding of IOPS, throughput, and OCPU concepts
Instructions
Step 1: Understand Shape Naming
OCI shape names encode processor generation, type, and flexibility:
| Shape | Processor | OCPUs | Network Gbps per OCPU | Best For |
|---|---|---|---|---|
| VM.Standard.E5.Flex | AMD EPYC 9J14 (Genoa) | 1–94 | 1 Gbps | General workloads (latest gen) |
| VM.Standard.E4.Flex | AMD EPYC 7J13 (Milan) | 1–64 | 1 Gbps | General workloads |
| VM.Standard3.Flex | Intel Xeon (Ice Lake) | 1–32 | 1 Gbps | Intel-optimized software |
| VM.Standard.A1.Flex | Ampere Altra (ARM) | 1–80 | 1 Gbps | ARM-native, cost-efficient |
| VM.Optimized3.Flex | Intel Xeon (Ice Lake) | 1–18 | 4 Gbps | HPC, network-intensive |
| BM.Standard.E5.192 | AMD EPYC 9J14 | 192 | 100 Gbps total | Bare metal, full isolation |
Key insight: Flex shapes let you choose OCPU count and memory independently. Memory can be set between 1 GB and 64 GB per OCPU (exact bounds vary by shape). Network bandwidth scales linearly with OCPU count up to the shape maximum.
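To see what "choose OCPU and memory independently" means in practice, here is a hedged sketch of validating a Flex request before sending it. The returned dict mirrors the ocpus/memory_in_gbs fields of the SDK's LaunchInstanceShapeConfigDetails model; the 1–64 GB-per-OCPU envelope is an assumption that varies by shape:

```python
def validate_flex(ocpus, memory_gb, max_ocpus, min_gb_per_ocpu=1, max_gb_per_ocpu=64):
    """Check a Flex shape request against the shape's configurable envelope."""
    if not (1 <= ocpus <= max_ocpus):
        raise ValueError("OCPU count out of range for this shape")
    if not (min_gb_per_ocpu * ocpus <= memory_gb <= max_gb_per_ocpu * ocpus):
        raise ValueError("memory outside the per-OCPU envelope")
    # These two keys map onto LaunchInstanceShapeConfigDetails(ocpus=..., memory_in_gbs=...)
    return {"ocpus": ocpus, "memory_in_gbs": memory_gb}

print(validate_flex(4, 32, max_ocpus=94))  # an E5.Flex-style request
```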
Step 2: Query Available Shapes
Discover what shapes are available in your tenancy and region:
import oci
config = oci.config.from_file("~/.oci/config")
compute = oci.core.ComputeClient(config)
shapes = compute.list_shapes(
    compartment_id="ocid1.compartment.oc1..example"
).data

for shape in shapes:
    print(
        f"{shape.shape}: "
        f"{shape.ocpus} OCPUs, "
        f"{shape.memory_in_gbs} GB RAM, "
        f"{shape.networking_bandwidth_in_gbps} Gbps network"
    )
Pre-production readiness checklist for OCI — backup policies, security audit, key rotation, encryption, and Cloud Guard.
Tools: Read, Write, Edit, Bash(oci:*), Bash(python3:*), Grep
Oracle Cloud Production Checklist
Overview
OCI has no "Well-Architected Review" equivalent to AWS. This is the pre-production gate: a comprehensive checklist covering backup policies, security list audit, API key rotation, compartment isolation, boot volume encryption, OS Management agent, Cloud Guard, and Vulnerability Scanning. Every item is verifiable via CLI or Python SDK — no subjective assessments, only pass/fail checks.
Purpose: Validate that an OCI environment meets production-grade security, resilience, and operational standards before going live.
Prerequisites
- OCI CLI installed and configured —
~/.oci/config validated (see oraclecloud-install-auth)
- Python 3.8+ with the OCI SDK —
pip install oci
- Administrator-level IAM policies — the checks require
inspect and read across most service families
- Target compartment OCID — the compartment being audited
- Cloud Guard must be enabled at the tenancy level (Administration > Cloud Guard)
Instructions
Step 1: Compartment Isolation Audit
Production workloads must be in a dedicated compartment, not the root:
# List compartments — production should NOT be the root compartment
oci iam compartment list \
--compartment-id "$TENANCY_OCID" \
--query 'data[].{name:name, id:id, state:"lifecycle-state"}' \
--output table
# Verify prod compartment has policies restricting access
oci iam policy list \
--compartment-id "$PROD_COMPARTMENT_OCID" \
--query 'data[].{name:name, statements:statements}' \
--output json
Pass criteria: Production compartment is NOT the root tenancy. Policies follow least-privilege (no manage all-resources in tenancy).
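One part of the pass criteria is mechanical: in OCI, the root compartment's OCID is the tenancy OCID itself, so a simple equality test flags workloads parked in the root:

```python
def is_root_compartment(compartment_ocid: str, tenancy_ocid: str) -> bool:
    # The root compartment's OCID IS the tenancy OCID, so a production
    # compartment passes only if its OCID differs from the tenancy's.
    return compartment_ocid == tenancy_ocid

# Example values are placeholders, not real OCIDs
print(is_root_compartment("ocid1.compartment.oc1..prod", "ocid1.tenancy.oc1..t"))  # False
```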
Step 2: Backup Policy Verification
import oci
config = oci.config.from_file("~/.oci/config")
blockstorage = oci.core.BlockstorageClient(config)
# List all boot volumes in prod compartment
boot_volumes = blockstorage.list_boot_volumes(
    compartment_id="PROD_COMPARTMENT_OCID",
    availability_domain="AD-1",
).data

for vol in boot_volumes:
    # Check backup policy assignment
    try:
        assignments = blockstorage.get_volume_backup_policy_asset_assignment(
            asset_id=vol.id
        ).data
        if assignments:
            print(f"PASS: {vol.display_name} — backup policy assigned")
        else:
            print(f"FAIL: {vol.display_name} — no backup policy")
    except oci.exceptions.ServiceError:
        print(f"FAIL: {vol.display_name} — cannot check backup policy")
Pass criteria: Every boot volume has a backup policy assigned.
Query OCI metrics with MQL and create monitoring alarms via the Python SDK.
Tools: Read, Write, Edit, Bash(pip:*), Grep
OCI Monitoring — MQL Queries & Alarms
Overview
Query OCI metrics using MQL (Monitoring Query Language) and create alarms via the Python SDK. MQL is underdocumented and the console query builder is buggy — it often generates invalid syntax or silently returns empty results. This skill provides working MQL queries for the metrics you actually need (CPU, memory, network, disk) via the SDK, bypassing console issues entirely.
Purpose: Retrieve infrastructure metrics programmatically and set up alerting without relying on the OCI Console query builder.
Prerequisites
- OCI Python SDK —
pip install oci
- Config file at
~/.oci/config with fields: user, fingerprint, tenancy, region, key_file
- IAM policies:
Allow group Developers to read metrics in compartment
Allow group Developers to manage alarms in compartment
Allow group Developers to manage ons-topics in compartment (for alarm notifications)
- Python 3.8+
- Running compute instances or other resources emitting metrics
Instructions
Step 1: Understand MQL Syntax
MQL queries follow this pattern:
MetricName[interval]{dimensionKey = "value"}.groupingFunction.statistic
Key components:
- MetricName — e.g.,
CpuUtilization, MemoryUtilization, NetworkBytesIn
- Interval — data granularity:
1m, 5m, 1h (minimum depends on metric)
- Dimensions — filters in curly braces:
{resourceId = "ocid1.instance..."}
- Grouping —
.groupBy(dimension) to split results
- Statistic —
.mean(), .max(), .min(), .sum(), .count(), .percentile(0.95)
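For reference, a handful of MQL query strings assembled from the components above. Metric and dimension names follow the oci_computeagent namespace; verify them against the metrics your tenancy actually emits before relying on them:

```python
# Common MQL queries, ready to drop into SummarizeMetricsDataDetails(query=...)
MQL_QUERIES = {
    "cpu_mean":      'CpuUtilization[5m].mean()',
    "cpu_p95":       'CpuUtilization[5m].percentile(0.95)',
    "memory_mean":   'MemoryUtilization[5m].mean()',
    "net_in_sum":    'NetworkBytesIn[1m].sum()',
    "disk_read_max": 'DiskBytesRead[5m].max()',
    "per_instance":  'CpuUtilization[5m].groupBy(resourceDisplayName).mean()',
    "one_instance":  'CpuUtilization[1m]{resourceId = "ocid1.instance.oc1..example"}.mean()',
}

for name, q in MQL_QUERIES.items():
    print(f"{name}: {q}")
```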
Step 2: Query CPU Utilization
import oci
from datetime import datetime, timedelta
config = oci.config.from_file("~/.oci/config")
monitoring = oci.monitoring.MonitoringClient(config)
# CPU utilization across all instances (last 1 hour, 5-minute intervals)
response = monitoring.summarize_metrics_data(
    compartment_id=config["tenancy"],
    summarize_metrics_data_details=oci.monitoring.models.SummarizeMetricsDataDetails(
        namespace="oci_computeagent",
        query='CpuUtilization[5m].mean()',
        start_time=datetime.utcnow() - timedelta(hours=1),
        end_time=datetime.utcnow(),
    ),
)

for metric in response.data:
    resource = metric.dimensions.get("resourceDisplayName", "unknown")
    for dp in metric.aggregated_datapoints:
        print(f"{resource} @ {dp.timestamp}: {dp.value:.1f}% CPU")
Handle OCI API rate limits with defensive retry patterns and known limits by service.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Rate Limits
Overview
OCI API rate limits vary by service and are not well documented. A 429 TooManyRequests response kills your automation, and unlike AWS or Azure, OCI does not return a Retry-After header. This skill maps known limits by service, implements exponential backoff with jitter, and provides circuit breaker patterns for bulk operations.
Purpose: Build resilient OCI API clients that handle throttling gracefully without data loss.
Prerequisites
- OCI Python SDK —
pip install oci
- OCI config file at
~/.oci/config with valid credentials (user, fingerprint, tenancy, region, key_file)
- Python 3.8+
- Understanding of which OCI service you are calling (limits vary per service)
Instructions
Step 1: Know the Limits
OCI publishes some rate limits, but many are undocumented. Here are the known limits observed in production:
| Service | Endpoint Type | Observed Limit | Notes |
|---|---|---|---|
| Compute | List/Get | ~20 req/sec | Per-tenancy, not per-user |
| Compute | Create/Update/Delete | ~10 req/sec | Stricter for mutating operations |
| Object Storage | List/Get | ~100 req/sec | Per-bucket namespace |
| Object Storage | Put/Delete | ~50 req/sec | Varies by region load |
| Identity | List/Get | ~10 req/sec | Tenancy-wide shared limit |
| Identity | Create/Update | ~5 req/sec | Very conservative |
| Database | All operations | ~10 req/sec | Shared across DB family |
| Networking (VCN) | All operations | ~20 req/sec | Per-compartment |
| Monitoring | Post metrics | ~50 req/sec | Per-metric namespace |
| Events | Rule CRUD | ~10 req/sec | Per-compartment |
Critical: These are observed limits, not guaranteed SLAs. OCI may throttle lower under load.
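The backoff-with-jitter loop that Step 2 wires into the SDK can be demonstrated in isolation. ThrottleError below is a stand-in for oci.exceptions.ServiceError with status 429:

```python
import random
import time

class ThrottleError(Exception):
    """Stand-in for oci.exceptions.ServiceError with status == 429."""

def call_with_backoff(fn, max_retries=5, base_delay=0.01):
    # Exponential backoff with jitter: delay doubles each attempt,
    # plus a random component so parallel callers don't retry in lockstep.
    for attempt in range(max_retries):
        try:
            return fn()
        except ThrottleError:
            if attempt == max_retries - 1:
                raise  # retries exhausted — surface the throttle
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Simulate an API that throttles twice, then succeeds
calls = {"n": 0}
def flaky_list_instances():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottleError("429 TooManyRequests")
    return ["instance-1", "instance-2"]

result = call_with_backoff(flaky_list_instances)
print(result)  # → ['instance-1', 'instance-2']
```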
Step 2: Implement Exponential Backoff with Jitter
OCI returns no Retry-After header on 429 responses, so you must implement your own backoff. The SDK's built-in retry handles some cases, but for bulk operations you need explicit control:
import oci
import time
import random

config = oci.config.from_file("~/.oci/config")

def call_with_retry(fn, max_retries=5, base_delay=1.0):
    """Call an OCI SDK function with exponential backoff and jitter.

    OCI returns 429 TooManyRequests with NO Retry-After header,
    so we implement our own backoff: exponential delay plus random jitter.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except oci.exceptions.ServiceError as e:
            if e.status != 429 or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
Standard 3-tier OCI reference architecture with VCN, subnets, gateways, load balancer, compute, and Autonomous DB.
Tools: Read, Write, Edit, Bash(oci:*), Bash(python3:*), Bash(terraform:*), Grep
Oracle Cloud Reference Architecture
Overview
OCI architecture has more moving parts than AWS or Azure. Where AWS has VPC + subnets + internet gateway, OCI has VCN + regional subnets + Internet Gateway + NAT Gateway + Service Gateway + DRG (Dynamic Routing Gateway) + LPG (Local Peering Gateway) — and getting the routing tables wrong means silent packet drops with no error. This provides the standard 3-tier architecture (web/app/db) with every OCI-specific component wired correctly, plus Terraform code to deploy it.
Purpose: Produce a production-ready 3-tier OCI architecture with correctly configured networking, gateways, security rules, and compute/database tiers — deployable via Terraform.
Prerequisites
- OCI account with an active tenancy — https://cloud.oracle.com
- OCI CLI installed and configured —
~/.oci/config validated (see oraclecloud-install-auth)
- Python 3.8+ with the OCI SDK —
pip install oci
- Terraform 1.5+ with the OCI provider — https://registry.terraform.io/providers/oracle/oci/latest/docs
- Compartment OCID for the target environment
- Familiarity with CIDR notation for subnet planning
Instructions
Step 1: Architecture Overview
┌─────────────────────────── OCI Region (us-ashburn-1) ───────────────────────────┐
│ │
│ ┌────────────────────────── VCN (10.0.0.0/16) ──────────────────────────────┐ │
│ │ │ │
│ │ ┌─── Internet GW ───┐ ┌─── NAT GW ───┐ ┌─── Service GW ───┐ │ │
│ │ └────────┬───────────┘ └──────┬────────┘ └───────┬──────────┘ │ │
│ │ │ │ │ │ │
│ │ ┌────────▼──────────────────────────────────────────────────────────┐ │ │
│ │ │ Public Subnet (10.0.1.0/24) — Web Tier │ │ │
│ │ │ Load Balancer (public) → routes to App Tier │ │ │
│ │ │ Bastion Host (optional) │ │ │
│ │ └──────────────────────┬───────────────────────────────────────────┘ │ │
│ │ │ │ │
│ │ ┌──────────────────────▼───────────────────────────────────────────┐ │ │
│ │ │ Private Subnet (10.0.2.0/24) — App Tier │ │ │
│ │ │ Compute Instances (VM.Standard.E4.Flex) │ │ │
│ │ │ → NAT GW for outbound internet (patching, APIs) │ │ │
│ │ │ → Service GW for OCI services (Object Storage, etc.) │ │ │
│  │  └──────────────────────┬───────────────────────────────────────────┘  │ │
│  │                         │                                               │ │
│  │  ┌──────────────────────▼───────────────────────────────────────────┐  │ │
│  │  │  Private Subnet (10.0.3.0/24) — DB Tier                          │  │ │
│  │  │  Autonomous Database (private endpoint, mTLS)                    │  │ │
│  │  └──────────────────────────────────────────────────────────────────┘  │ │
│  └────────────────────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────────────────┘
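The subnet plan in the diagram can be derived, and sanity-checked for containment and overlap, with the standard library; the DB-tier CIDR of 10.0.3.0/24 is an assumption continuing the sequence:

```python
import ipaddress

# Carve the tier subnets out of the VCN CIDR from the diagram
vcn = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vcn.subnets(new_prefix=24))
web, app, db = subnets[1], subnets[2], subnets[3]  # 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24

# Sanity checks OCI will not do for you: containment and no overlap
assert all(s.subnet_of(vcn) for s in (web, app, db))
assert not web.overlaps(app) and not app.overlaps(db)

print(web, app, db)
```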
Migrate to OCI Autonomous Database — wallet setup, mTLS, Data Pump, and python-oracledb.
Tools: Read, Write, Edit, Bash(pip:*), Grep
OCI Autonomous Database — Migration & Connection
Overview
Migrate to and connect with OCI Autonomous Database (ADB) using the Python SDK and python-oracledb. Autonomous Database is OCI's crown jewel but migrating to it from standard Oracle DB or other databases is full of gotchas — wallet downloads require SDK calls (not just console clicks), mTLS is mandatory by default, connection strings use a different format than standard Oracle, and Data Pump exports need specific parameter adjustments for ADB compatibility.
Purpose: Provision an Autonomous Database, download the wallet, establish a connection, and migrate data using Data Pump.
Prerequisites
- OCI Python SDK —
pip install oci
- Oracle DB driver —
pip install oracledb
- Config file at
~/.oci/config with fields: user, fingerprint, tenancy, region, key_file
- IAM policy —
Allow group Developers to manage autonomous-databases in compartment
- Python 3.8+
- For Data Pump: access to the source Oracle database with DBA privileges
Instructions
Step 1: Provision an Autonomous Database
import oci
import base64
import zipfile
import os
config = oci.config.from_file("~/.oci/config")
db_client = oci.database.DatabaseClient(config)
# Create Autonomous Database (Transaction Processing workload)
adb = db_client.create_autonomous_database(
    oci.database.models.CreateAutonomousDatabaseDetails(
        compartment_id=config["tenancy"],
        display_name="app-adb",
        db_name="appadb",
        cpu_core_count=1,                  # 1 OCPU (Always Free eligible)
        data_storage_size_in_tbs=1,        # 1 TB (Always Free: 20 GB)
        admin_password="SecureP@ss123!",   # must meet complexity requirements
        db_workload="OLTP",                # OLTP, DW, AJD, or APEX
        is_free_tier=True,                 # Always Free if eligible
        is_mtls_connection_required=True,  # default — use mTLS
    )
).data

print(f"ADB provisioning: {adb.id}")
print(f"State: {adb.lifecycle_state}")
Step 2: Wait for Provisioning and Download Wallet
The wallet contains certificates and connection descriptors needed for mTLS. You must download it via the SDK — the wallet password is set at download time, not during provisioning.
# Wait for ADB to become AVAILABLE
waiter = oci.wait_until(
    db_client,
    db_client.get_autonomous_database(adb.id),
    "lifecycle_state",
    "AVAILABLE",
    max_wait_seconds=600,
)
print(f"ADB ready: {waiter.data.lifecycle_state}")

# Download wallet
wallet_response = db_client.generate_autonomous_database_wallet(
    autonomous_database_id=adb.id,
    generate_autonomous_database_wallet_details=oci.database.models.GenerateAutonomousDatabaseWalletDetails(
        password="WalletP@ss123!"  # wallet password is set here, at download time
    ),
)
with open("wallet.zip", "wb") as f:
    f.write(wallet_response.data.content)
Production-grade OCI SDK patterns for client lifecycle, retry logic, and memory leak avoidance.
Tools: Read, Write, Edit, Bash(pip:*), Grep
Oracle Cloud SDK Patterns
Overview
Production patterns for the OCI Python SDK that avoid the most common pitfalls: memory leaks from Instance Principal authentication (~10 MiB/hour if clients are recreated per request), missing retry logic for 429/500 errors, and timeout misconfiguration across different service clients. The OCI SDK has different timeout defaults depending on the service (Compute: 60s, Object Storage: 300s for uploads), and none of them set connection timeouts by default.
Purpose: Provide correct client lifecycle (create once, reuse, close), exponential backoff retry, singleton patterns that prevent the Instance Principal memory leak, and per-service timeout configuration.
Prerequisites
- Completed
oraclecloud-install-auth — valid ~/.oci/config
- Python 3.8+ with
pip install oci
- Familiarity with OCI service clients (
ComputeClient, ObjectStorageClient, etc.)
Instructions
Step 1: Singleton Client Pattern (Avoids Memory Leak)
Instance Principal authentication allocates new security tokens on each client instantiation. Creating clients per-request leaks ~10 MiB/hour. Use a singleton:
import oci
import threading
class OCIClients:
    """Thread-safe singleton for OCI service clients.

    Prevents the Instance Principal memory leak by reusing clients
    instead of creating new ones per request.
    """

    _lock = threading.Lock()
    _instance = None

    def __init__(self):
        self._config = oci.config.from_file("~/.oci/config")
        oci.config.validate_config(self._config)
        # Create clients once — reuse everywhere
        self._compute = None
        self._network = None
        self._object_storage = None
        self._identity = None

    @classmethod
    def get(cls):
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance

    @property
    def config(self):
        return self._config

    @property
    def compute(self):
        if self._compute is None:
            self._compute = oci.core.ComputeClient(
                self._config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY
            )
        return self._compute

    @property
    def network(self):
        if self._network is None:
            self._network = oci.core.VirtualNetworkClient(
                self._config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY
            )
        return self._network

    @property
    def object_storage(self):
        if self._object_storage is None:
            self._object_storage = oci.object_storage.ObjectStorageClient(
                self._config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY
            )
        return self._object_storage

    @property
    def identity(self):
        if self._identity is None:
            self._identity = oci.identity.IdentityClient(
                self._config, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY
            )
        return self._identity
Master OCI IAM policy syntax, common policy patterns, and API key management.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Security Basics
Overview
OCI IAM policy syntax (Allow group X to manage Y in compartment Z) is the number one enterprise complaint. One wrong policy locks you out of your own resources. One missing verb and your automation silently fails with a 404 NotAuthorizedOrNotFound that looks like a missing resource. This skill is the IAM policy cheat sheet with tested patterns for common access scenarios.
Purpose: Write correct IAM policies, manage API keys securely, and understand the OCI permission model.
Prerequisites
- OCI Python SDK —
pip install oci
- OCI config file at
~/.oci/config with valid credentials (user, fingerprint, tenancy, region, key_file)
- Tenancy administrator access (to create policies) or membership in a group with
manage policies permission
- Python 3.8+
Instructions
Step 1: Understand the Policy Verb Hierarchy
OCI uses four verbs in ascending order of privilege. Each higher verb includes all lower verbs:
| Verb | Capabilities | Typical Use Case |
|---|---|---|
| inspect | List resources, get metadata only | Auditors, read-only dashboards |
| read | Inspect + get full resource details/contents | Monitoring tools, reporting |
| use | Read + act on existing resources (start/stop, attach) | Developers, operators |
| manage | Use + create, delete, move resources | Admins, automation service accounts |
Critical: use does NOT include create or delete. This trips up every new OCI team.
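Because the verbs form a strict ladder, "does this grant cover that action?" can be answered mechanically:

```python
# The four verbs in ascending order of privilege
VERBS = ("inspect", "read", "use", "manage")

def grant_covers(granted: str, needed: str) -> bool:
    # A grant covers a need if it sits at or above it on the ladder
    return VERBS.index(granted) >= VERBS.index(needed)

print(grant_covers("use", "read"))     # True — use includes read
print(grant_covers("use", "manage"))   # False — use does NOT include create/delete
```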
Step 2: IAM Policy Syntax
Every OCI policy statement follows this exact structure:
Allow <subject> to <verb> <resource-type> in <location> [where <conditions>]
Subject types:
group — IAM user group
dynamic-group — resource principals (instances, functions)
any-user — every authenticated user (use with extreme caution)
Location types:
tenancy — entire tenancy (root-level policy only)
compartment — specific compartment
compartment id — by OCID (for automation)
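Statements are plain strings, so a small builder — a convenience sketch, not an SDK API — keeps the syntax exact and rejects invalid verbs:

```python
VERBS = ("inspect", "read", "use", "manage")

def statement(group: str, verb: str, resource: str, compartment: str) -> str:
    """Build one policy statement in the exact shape OCI parses."""
    if verb not in VERBS:
        raise ValueError(f"unknown verb: {verb!r}")
    return f"Allow group {group} to {verb} {resource} in compartment {compartment}"

print(statement("Developers", "manage", "instances", "Dev"))
# → Allow group Developers to manage instances in compartment Dev
```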
Step 3: Common Policy Patterns
Copy these tested patterns directly. Replace group names and compartment names with your values:
import oci
config = oci.config.from_file("~/.oci/config")
identity = oci.identity.IdentityClient(config)
Safely upgrade OCI Python SDK and Terraform provider — version pinning, breaking change detection, and rollback.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Bash(terraform:*), Bash(python3:*), Grep
Oracle Cloud Upgrade & Migration
Overview
OCI Terraform provider and Python SDK break backwards compatibility more often than AWS equivalents. The Terraform provider has had provider crashes on terraform plan after upgrades, deprecated resource types removed without migration paths, and schema changes that silently alter behavior. The Python SDK has had memory leak fixes that changed object lifecycle semantics and authentication class renames between minor versions. This skill tracks known breaking changes and provides safe upgrade patterns with version pinning, pre-upgrade testing, and rollback procedures.
Purpose: Upgrade OCI Python SDK and Terraform provider versions safely, detect breaking changes before they hit production, and roll back cleanly if an upgrade fails.
Prerequisites
- Python 3.8+ with the current OCI SDK installed —
pip show oci
- Terraform 1.5+ with the OCI provider —
terraform version
- OCI CLI installed —
oci --version
- Git for version control of infrastructure code
- A test environment — never upgrade directly in production
- Current
~/.oci/config validated (see oraclecloud-install-auth)
Instructions
Step 1: Audit Current Versions
# Python SDK version
pip show oci | grep -E "^(Name|Version|Location)"
# Example: Version: 2.125.0
# OCI CLI version
oci --version
# Example: 3.41.0
# Terraform provider version
grep -A2 'oracle/oci' .terraform.lock.hcl 2>/dev/null || echo "No lock file found"
terraform providers
# Example: oracle/oci v5.46.0
import oci
print(f"OCI SDK version: {oci.__version__}")
Step 2: Check for Known Breaking Changes
Python SDK known breaking changes:
| Version | Breaking Change | Impact | Mitigation |
|---|---|---|---|
| 2.120.0+ | oci.retry module refactored | Custom retry strategies may break | Update to oci.retry.retry.RetryStrategyBuilder |
| 2.115.0+ | oci.config.validate_config() stricter | Rejects configs with extra fields | Remove non-standard fields from ~/.oci/config |
| 2.105.0+ | Composite operations return type changed | .data attribute structure changed | Check .data type assertions in your code |
| 2.90.0+ | wait_for_state deprecated on some clients | Direct get_* polling required | Use oci.wait_until() helper instead |
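A minimal pre-flight gate for the SDK version can run before any infrastructure code; parse() is a deliberately naive major.minor.patch comparison, sufficient for the oci package's versioning scheme, and MIN_TESTED is a placeholder for whatever version you last validated:

```python
from importlib.metadata import PackageNotFoundError, version

def parse(v: str) -> tuple:
    # Naive "major.minor.patch" parse — enough for the oci SDK's scheme
    return tuple(int(part) for part in v.split(".")[:3])

MIN_TESTED = "2.120.0"  # oldest version your code is known to work with

try:
    installed = version("oci")
    if parse(installed) < parse(MIN_TESTED):
        print(f"WARNING: oci {installed} is older than tested {MIN_TESTED}")
except PackageNotFoundError:
    print("oci SDK not installed in this interpreter")
```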
Wire up event-driven workflows with OCI Events, Notifications, and Functions.
Tools: Read, Write, Edit, Bash(pip:*), Bash(oci:*), Grep
Oracle Cloud Webhooks & Events
Overview
Build event-driven workflows using the OCI Events service, Oracle Notification Service (ONS), and OCI Functions. OCI Events monitors resource state changes across your tenancy and fires rules that route events to ONS topics, streaming, or Functions. This skill covers event rule creation, ONS topic/subscription setup, event pattern matching syntax, and Functions integration.
Purpose: Create reliable event-driven pipelines that react to OCI resource changes in real time.
Prerequisites
- OCI Python SDK —
pip install oci
- OCI config file at
~/.oci/config with valid credentials (user, fingerprint, tenancy, region, key_file)
- IAM policies granting access to Events, ONS, and Functions:
Allow group EventAdmins to manage cloudevents-rules in compartment
Allow group EventAdmins to manage ons-topics in compartment
Allow group EventAdmins to use fn-function in compartment
- Compartment OCID for the target compartment
- Python 3.8+
Instructions
Step 1: Create an ONS Topic and Subscription
Create a notification topic that will receive events, then subscribe an endpoint (email, HTTPS, PagerDuty, or Slack via HTTPS):
import oci
config = oci.config.from_file("~/.oci/config")
ons_control = oci.ons.NotificationControlPlaneClient(config)
ons_data = oci.ons.NotificationDataPlaneClient(config)
# Create a topic
topic_response = ons_control.create_topic(
    oci.ons.models.CreateTopicDetails(
        name="infra-alerts",
        compartment_id="ocid1.compartment.oc1..example",
        description="Infrastructure lifecycle alerts"
    )
)
topic_id = topic_response.data.topic_id
print(f"Topic created: {topic_id}")

# Subscribe an HTTPS endpoint (e.g., Slack incoming webhook)
subscription = ons_data.create_subscription(
    oci.ons.models.CreateSubscriptionDetails(
        topic_id=topic_id,
        compartment_id="ocid1.compartment.oc1..example",
        protocol="HTTPS",
        endpoint="https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
    )
)
print(f"Subscription: {subscription.data.id} ({subscription.data.lifecycle_state})")
Step 2: Create an Events Rule
Events rules use a condition block that matches on eventType, compartmentId, and optional attribute filters. The condition syntax is JSON, not HCL:
events_client = oci.events.EventsClient(config)

rule = events_client.create_rule(
    oci.events.models.CreateRuleDetails(
        display_name="instance-state-changes",
        compartment_id="ocid1.compartment.oc1..example",
        description="Notify on compute instance lifecycle changes",
        is_enabled=True,
        condition='{"eventType": ["com.oraclecloud.computeapi.launchinstance.end", "com.oraclecloud.computeapi.terminateinstance.end"]}',
        actions=oci.events.models.ActionDetailsList(
            actions=[
                oci.events.models.CreateNotificationServiceActionDetails(
                    action_type="ONS",
                    is_enabled=True,
                    topic_id=topic_id,
                )
            ]
        ),
    )
)
print(f"Rule: {rule.data.id}")
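The matching semantics of the condition block can be illustrated without calling OCI at all. This is a simplified model covering only the eventType list, not the optional attribute filters:

```python
import json

# A rule fires when the event's eventType appears in the condition's list
condition = json.loads(
    '{"eventType": ["com.oraclecloud.computeapi.launchinstance.end"]}'
)

def rule_matches(cond: dict, event: dict) -> bool:
    return event.get("eventType") in cond.get("eventType", [])

launch = {"eventType": "com.oraclecloud.computeapi.launchinstance.end"}
other = {"eventType": "com.oraclecloud.computeapi.instanceaction.end"}

print(rule_matches(condition, launch))  # True
print(rule_matches(condition, other))   # False
```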