guidewire-webhooks-integrations
Consume Guidewire App Events into downstream systems (SQS/SNS, Kafka, webhooks) and survive the event-side failures — events not firing because Gosu registration was missed, duplicates from queue redelivery, out-of-order arrival on the same resource, replay from a checkpoint for backfill, and back-pressure when consumers cannot keep up with producers. Use when registering App Events in Gosu, building an event-consumer service, or recovering from a missed-event window. Trigger with "guidewire app events", "guidewire webhooks", "guidewire event consumer", "guidewire event replay", "guidewire idempotent consumer".
Allowed Tools
Provided by Plugin
guidewire-pack
Claude Code skill pack for Guidewire InsuranceSuite (24 skills)
Installation
This skill is included in the guidewire-pack plugin:
/plugin install guidewire-pack@claude-code-plugins-plus
Instructions
Guidewire Webhooks and Event Integrations
Overview
Wire Guidewire's event system into downstream consumers — analytics warehouses, fraud-detection services, broker-portal cache invalidators, customer-notification services. Guidewire emits App Events (typed business events fired on entity-state transitions); they are configured server-side in Gosu and routed to a destination (SQS, Kafka, or webhook URL). The consumer side has its own production failure modes that this skill addresses.
Five production failures this skill prevents:
- Event registered nowhere — code subscribes to a `claim.bound` event that was never registered in Gosu; the consumer waits forever, and no error surfaces because there is nothing to error on.
- Duplicate processing — the queue redelivers a message after a consumer ack timeout; the consumer creates a duplicate downstream record (duplicate notification, duplicate analytics row).
- Out-of-order arrival — events for the same claim arrive in the wrong order (`claim.reserve.changed` before `claim.created`); the consumer rejects the reserve event because the parent claim does not yet exist locally.
- Quiet event loss — the consumer was down for 30 minutes; events that fired during that window are gone; nobody notices until a downstream report is missing rows.
- Back-pressure cascade — the producer (Guidewire) emits faster than the consumer drains the queue; queue depth grows; eventually the destination's queue retention expires and events are dropped.
Prerequisites
- A working integration per `guidewire-install-auth` and `guidewire-sdk-patterns`
- Access to the InsuranceSuite config zone for editing Gosu / messaging-destination XML
- A destination ready to receive events (AWS SQS queue + dead-letter queue, an HTTPS webhook endpoint, or a Kafka topic)
- The consumer service has its own datastore for idempotency keys (Redis with TTL ≥ 7d works; a database table also works)
Instructions
Build the integration in this order. Each step targets one of the five production failures listed in Overview.
1. Register the App Event in Gosu
Events that are not registered never fire. The registration lives in `gw.api.messaging.MessageEvents` (or a carrier-customized equivalent) and pairs an event code with a Gosu callback that decides whether to emit and what payload to send.
// modules/configuration/gsrc/com/acme/messaging/ClaimEventBuilder.gs
package com.acme.messaging

uses gw.api.messaging.MessageContext
uses entity.Claim

// Builds the JSON payload for a claim.status.changed App Event.
// messageId is a fresh UUID per message so consumers can dedup on it (step 2).
class ClaimEventBuilder {
  static function buildClaimStatusChangedEvent(ctx: MessageContext, claim: Claim): String {
    return new gw.api.web.json.JsonObject() {{
      put("eventType", "claim.status.changed")
      put("messageId", java.util.UUID.randomUUID().toString())
      put("eventTime", java.time.Instant.now().toString())
      put("claimId", claim.PublicID)
      put("claimNumber", claim.ClaimNumber)
      put("oldStatus", ctx.PreviousValue?.toString())
      put("newStatus", claim.State.Code)
      put("policyNumber", claim.Policy.PolicyNumber)
    }}.toString()
  }
}
Register the destination in config/Messaging.xml so the InsuranceSuite messaging engine knows which channel (SQS, webhook, Kafka) routes the event. Without that XML entry, the Gosu callback exists but never fires.
2. Idempotent consumer keyed on messageId
Every event payload includes a messageId (a UUID generated by the producer). The consumer dedups on it before processing. The dedup window must exceed the queue's max-redelivery-window — for SQS with 24-hour message retention, dedup TTL ≥ 7 days is safe.
import Redis from "ioredis"; // any Redis client with atomic SET NX works

type SqsMessage = { Body: string }; // minimal message shape used here
const redis = new Redis(); // connection details elided

async function handleEvent(msg: SqsMessage): Promise<void> {
  const event = JSON.parse(msg.Body);
  // SET NX returns "OK" if the key was newly set, null if it already existed,
  // so a truthy result means this is the first delivery of this messageId.
  const isNew = await redis.set(`evt:${event.messageId}`, "1", "EX", 7 * 86400, "NX");
  if (!isNew) {
    return; // duplicate redelivery; ack and skip
  }
  await processEvent(event); // your business logic
}
SET ... NX (set-if-not-exists) makes the dedup atomic — concurrent workers cannot both decide a duplicate is novel.
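If Redis is not available, the Prerequisites note that a database table also works. A minimal sketch of the same dedup riding on a unique constraint, assuming a Postgres `processed_events` table and the `pg` client (table and column names are illustrative):

import { Pool } from "pg";

const pool = new Pool(); // connection settings taken from PG* env vars

// Returns true when this messageId has never been seen before.
// The UNIQUE constraint on message_id makes the check atomic, exactly like
// Redis SET NX: concurrent workers race on the insert and exactly one wins.
async function claimMessageId(messageId: string): Promise<boolean> {
  const res = await pool.query(
    `INSERT INTO processed_events (message_id, processed_at)
     VALUES ($1, now())
     ON CONFLICT (message_id) DO NOTHING`,
    [messageId],
  );
  return res.rowCount === 1; // 1 = inserted (new), 0 = conflict (duplicate)
}

A periodic DELETE of rows older than the dedup window plays the role of the Redis TTL.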
3. Out-of-order tolerance via state-machine validation
Events for the same claim can arrive in arbitrary order; the consumer must tolerate this rather than reject.
async function processEvent(event: Event): Promise<void> {
const local = await getLocalClaim(event.claimId);
switch (event.eventType) {
case "claim.created":
if (!local) await createLocalClaim(event);
break;
case "claim.status.changed":
if (!local) {
await deferEvent(event, "waiting-on-claim-created");
return;
}
await applyStatusChange(local, event);
break;
}
}
The deferEvent helper writes the event to a holding table; a periodic re-processor retries deferred events when their dependencies might have arrived. Events older than a TTL (e.g., 24h) escalate to manual review — a deferred event still missing dependencies after a day indicates a real producer bug.
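A minimal sketch of that holding table and re-processor, reusing the Postgres pool from step 2; the `deferred_events` table and the `escalateToManualReview` hook are illustrative assumptions:

// Park an event whose dependency (e.g., the parent claim) has not arrived yet.
async function deferEvent(event: Event, reason: string): Promise<void> {
  await pool.query(
    `INSERT INTO deferred_events (message_id, payload, reason, deferred_at)
     VALUES ($1, $2, $3, now())
     ON CONFLICT (message_id) DO NOTHING`,
    [event.messageId, JSON.stringify(event), reason],
  );
}

// Periodic sweep: retry deferred events whose dependencies may have arrived.
async function reprocessDeferred(): Promise<void> {
  const { rows } = await pool.query(`SELECT message_id, payload FROM deferred_events`);
  for (const row of rows) {
    const event = JSON.parse(row.payload);
    // Escalate on the producer's eventTime: still blocked a day after the event
    // fired points at a real producer bug, not ordinary reordering.
    if (Date.now() - new Date(event.eventTime).getTime() > 24 * 3600 * 1000) {
      await escalateToManualReview(event); // hypothetical alerting hook
      continue;
    }
    // Delete first, then re-run: if the dependency is still missing,
    // processEvent calls deferEvent again and the row is re-created.
    await pool.query(`DELETE FROM deferred_events WHERE message_id = $1`, [row.message_id]);
    await processEvent(event);
  }
}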
4. Checkpoint and replay for backfill / recovery
If the consumer goes down or a downstream system needs to be rebuilt, replay events from a checkpoint. Guidewire's messaging system retains events server-side per the configured retention; in addition, the consumer should persist its own checkpoint (last-processed eventTime per event type).
// After each successfully processed batch, advance the checkpoint.
await db.upsert("event_checkpoint", {
  consumer: "broker-portal-cache",
  event_type: "policy.bound",
  last_event_time: maxEventTimeInBatch,
  updated_at: new Date(),
});

// Replay: pull events from the server-side retention window and push them
// back through the normal (idempotent) handler.
async function replay(consumer: string, eventType: string, fromTime: Date): Promise<void> {
  const res = await fetch(`${BASE}/cc/rest/v1/events?eventType=${eventType}&since=${fromTime.toISOString()}`);
  const events = await res.json(); // array of event payloads
  for (const e of events) {
    await handleEvent({ Body: JSON.stringify(e) }); // reuse the dedup path from step 2
  }
}
Replay must be idempotent — that is why the consumer's messageId dedup must outlive the replay window.
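One ordering detail is worth pinning down in code: flush the checkpoint only after the batch has fully processed (the Error Handling table below flags the failure mode when this is reversed). A minimal sketch, reusing `handleEvent` and the `db.upsert` helper from above:

// Process the batch, then advance the checkpoint; never the other way around.
// If the worker dies mid-batch, the next run resumes from the old checkpoint
// and the messageId dedup from step 2 absorbs the redelivered events.
async function processBatch(consumer: string, eventType: string, batch: SqsMessage[]): Promise<void> {
  let maxEventTime = new Date(0);
  for (const msg of batch) {
    await handleEvent(msg); // dedup + business logic
    const t = new Date(JSON.parse(msg.Body).eventTime);
    if (t > maxEventTime) maxEventTime = t;
  }
  await db.upsert("event_checkpoint", {
    consumer,
    event_type: eventType,
    last_event_time: maxEventTime,
    updated_at: new Date(),
  });
}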
5. Back-pressure handling
If queue depth grows past a threshold, the consumer is losing ground. Three responses, pre-baked rather than improvised at 3am:
# CloudWatch alarm: SQS queue depth > 10000 for 15min
on-alarm:
- autoscale: increase consumer replicas to 4x
- if not catching up after 15min more:
- emit metric `consumer-saturation` to incident pipeline
- on-call paged
- if queue retention near expiry:
- last-resort: emergency cap on producer-side rate limit
The autoscale path handles transient bursts; the cap path is for sustained saturation that needs a producer-side conversation.
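The alarm itself can live in CloudWatch, but a consumer-side depth probe is cheap to run and feeds the same incident pipeline. A minimal sketch using the AWS SDK v3 SQS client; the threshold constant and the `emitMetric` hook are assumptions:

import { SQSClient, GetQueueAttributesCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const DEPTH_THRESHOLD = 10_000; // matches the alarm above; tune per queue

// Poll queue depth and flag saturation before retention expiry becomes a risk.
async function checkQueueDepth(queueUrl: string): Promise<void> {
  const { Attributes } = await sqs.send(
    new GetQueueAttributesCommand({
      QueueUrl: queueUrl,
      AttributeNames: ["ApproximateNumberOfMessages"],
    }),
  );
  const depth = Number(Attributes?.ApproximateNumberOfMessages ?? 0);
  if (depth > DEPTH_THRESHOLD) {
    await emitMetric("consumer-saturation", { queueUrl, depth }); // incident-pipeline hook
  }
}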
Output
A production-grade event integration ships with all of the following:
- App Events registered in Gosu and `Messaging.xml` for every business event the consumer needs; absent registrations explicitly documented as out-of-scope.
- Consumer-side idempotency keyed on `messageId` with TTL ≥ the queue retention window.
- Out-of-order tolerance: the consumer processes events that arrive in any order without rejecting; deferred events held in a queue with TTL escalation.
- Per-consumer checkpoint persisted; replay tooling that consumes from the checkpoint forward and is safe under retry.
- Back-pressure response: queue-depth alarm wired to autoscale; saturation playbook documented; emergency producer cap available.
Examples
Example 1 — Gosu event builder for a policy renewal event
package com.acme.messaging

uses gw.api.messaging.MessageContext
uses entity.Policy

// Payload for a policy.renewed App Event; same shape conventions as step 1.
class RenewalEventBuilder {
  static function build(ctx: MessageContext, policy: Policy): String {
    return new gw.api.web.json.JsonObject() {{
      put("eventType", "policy.renewed")
      put("messageId", java.util.UUID.randomUUID().toString())
      put("eventTime", java.time.Instant.now().toString())
      put("policyNumber", policy.PolicyNumber)
      put("renewedFrom", policy.RenewedFromPolicy?.PolicyNumber)
      put("effectiveDate", policy.EffectiveDate.toString())
      put("totalPremium", policy.TotalPremium.Amount.toString())
    }}.toString()
  }
}
Example 2 — Out-of-order handling for claim events
case "claim.payment.created":
const claim = await getLocalClaim(event.claimId);
if (!claim) {
await deferEvent(event, "missing claim parent");
return;
}
if (!claim.exposures.find(e => e.id === event.exposureId)) {
await deferEvent(event, "missing exposure parent");
return;
}
await applyPayment(claim, event);
break;
Example 3 — Checkpoint-based replay
# Replay all policy.bound events since last successful checkpoint
LAST=$(psql -tAc "SELECT last_event_time FROM event_checkpoint WHERE consumer='broker-portal' AND event_type='policy.bound'")
node scripts/replay.js --consumer=broker-portal --type=policy.bound --since="$LAST"
Error Handling
| Symptom | Cause | Solution |
|---|---|---|
| Event subscription set up but no events arriving | Gosu registration missing or `Messaging.xml` entry missing | Confirm both; the Gosu callback alone does not route |
| Same downstream record created twice | Consumer not deduping on `messageId` | Wire the Redis SET NX dedup; backfill cleanup of duplicates is painful |
| Consumer rejects event with "parent not found" | Out-of-order arrival; the parent event has not been processed yet | Use the deferred-events queue; do not reject |
| Events lost during consumer outage | No replay tooling | Implement checkpoint + replay; without it, outages are data-loss events |
| Queue depth growing 24/7 | Producer faster than consumer | Scale the consumer; if scaling does not help, partition by entity ID |
| Replay creates duplicates downstream | Consumer dedup TTL too short, or checkpoint not flushed atomically | Extend the dedup TTL; flush the checkpoint only after the batch fully processes |
| Webhook endpoint returning 5xx for valid events | Endpoint capacity or bug | Guidewire retries with backoff and eventually routes to the DLQ; investigate the endpoint |
| Same `messageId` showing different payloads in DLQ | Producer bug; `messageId` is supposed to be unique per message | Escalate to Guidewire support / config team; the consumer cannot fix this |
For deeper coverage (Kafka partitioning strategies, exactly-once semantics across boundaries, schema evolution for event payloads, multi-tenant event fan-out), see implementation guide and API reference.
See Also
- `guidewire-install-auth` — auth between Guidewire and the messaging destination if it requires bearer tokens
- `guidewire-core-workflow-a` — the bind/issue/renewal events this skill consumes are emitted by that workflow
- `guidewire-core-workflow-b` — the FNOL/reserve/payment events this skill consumes
- `guidewire-observability-and-incident-response` — the queue-depth and saturation alerts that drive this skill's back-pressure response