Advanced Workflow Groovy, HANA SQL, Oracle PL/SQL, and REST API snippets. Every entry has one rule: it must save a consultant at least 30 minutes.
Routes the approval request to the correct manager tier based on the participant's territory region. Handles null territory assignment gracefully instead of throwing a NullPointerException.
participant.territory returns null for participants assigned to the root org. Add the null check before any comparisons or you'll see silent failures.

```groovy
// Conditional approval routing by region
// Use in: Approval Workflow Step > Script
def territory = participant.getTerritory()
def region = territory?.getRegion() ?: "DEFAULT"

switch (region) {
    case "APAC":
        route.toPosition("APAC_DIRECTOR")
        break
    case "EMEA":
        route.toPosition("EMEA_DIRECTOR")
        break
    default:
        route.toDirect(participant.getManager())
}
```
Validates that a plan submission meets required criteria before routing to approvers. Returns a descriptive error message that appears in the UI instead of a generic failure.
Return false with a message object, not a plain string. Plain string returns cause the error to display as "null" in some AW versions.

```groovy
// Pre-submit validation script
// Use in: Workflow > Before Submit action
def target = plan.getQuotaTarget()
def effectiveDate = plan.getEffectiveDate()

if (target == null || target <= 0) {
    return [valid: false, message: "Quota target must be greater than 0"]
}
if (effectiveDate.before(new Date())) {
    return [valid: false, message: "Effective date cannot be in the past"]
}
return [valid: true]
```
Re-routes to the skip-level manager when an approval step has been sitting idle for more than three days (the snippet counts calendar days). Falls back to the COMP_ADMIN position if the skip-level is null.
getCreatedDate() returns a java.sql.Timestamp. Convert with .toLocalDate() before calling ChronoUnit.DAYS.between() or you will get a type-mismatch at runtime.

```groovy
// SLA escalation to skip-level manager
// Use in: Workflow > Step Timeout action
import java.time.LocalDate
import java.time.temporal.ChronoUnit

def created = workflowItem.getCreatedDate().toLocalDate()
def days = ChronoUnit.DAYS.between(created, LocalDate.now())

if (days > 3) {
    def mgr = participant.getManager()
    def skip = mgr?.getManager()
    if (skip) {
        route.toPosition(skip.getPositionId())
        notification.send(
            to: skip.getEmail(),
            subject: "SLA breach: ${participant.getName()}",
            body: "Pending ${days} days. Escalated to you."
        )
    } else {
        route.toPosition("COMP_ADMIN")
    }
}
```
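The escalation above counts calendar days via ChronoUnit.DAYS. If your SLA is actually defined in business days, the weekday-only count can be sketched like this (Python used for illustration; `business_days_between` is a hypothetical helper, not a Commissions API):

```python
from datetime import date, timedelta

def business_days_between(start, end):
    """Count weekdays strictly after `start`, up to and including `end`."""
    count, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            count += 1
    return count
```

The same loop translates to Groovy almost line for line if you prefer to keep it inside the timeout script.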
Fires after an approval step completes. Sends the participant (and their manager in CC) a notification that includes the actual calculated payout for the period — not just a generic "your plan is approved" message.
getManager()?.getEmail() can still return null if the manager record has no email stored. Pass it through a null check before adding to CC or the entire notification call fails silently.

```groovy
// Post-approval notification with payout amount
// Use in: Workflow > After Approve action
def p = workflowItem.getParticipant()
def period = workflowItem.getPeriod()
def payout = results.getEarnings(p.getId(), period.getId())

def ccList = []
if (p.getManager()?.getEmail()) {
    ccList << p.getManager().getEmail()
}

notification.send(
    to: p.getEmail(),
    cc: ccList,
    subject: "Plan approved — ${period.getName()}",
    body: """Hi ${p.getFirstName()},

Your ${period.getName()} plan is approved.
Payout: ${payout.getCurrencyCode()} ${payout.getAmount()}"""
)
```
Runs as a Stage Hook on the Input Stage and rejects any transaction whose External ID has already been processed. Prevents double-pay in re-run scenarios where the source file contains previously loaded records.
```groovy
// Duplicate transaction guard — Stage Hook
// Use in: Pipeline > Input Stage > Before Step
def extId = transaction.getExternalId()
def dup = dataService.query(
    "SELECT COUNT(*) FROM CSC_PROCESSED_TXN " +
    "WHERE EXTERNAL_ID = ?", [extId])

if (dup > 0) {
    transaction.reject(
        reason: "Duplicate: ${extId} already processed",
        code: "DUPLICATE_TXN"
    )
    return
}

// Mark as seen for future runs
dataService.execute(
    "INSERT INTO CSC_PROCESSED_TXN " +
    "(EXTERNAL_ID, PROCESSED_AT) " +
    "VALUES (?, CURRENT_TIMESTAMP)", [extId])
```
Returns actual vs quota with attainment percentage for all participants in a given plan and period range. Includes handling for zero-quota rows to avoid divide-by-zero errors.
CSC_QUOTA.QUOTA_AMOUNT can be 0 for newly assigned participants. Use NULLIF to avoid division errors.

```sql
SELECT
    p.NAME AS PARTICIPANT,
    r.PERIOD_ID,
    r.RESULT_AMOUNT,
    q.QUOTA_AMOUNT,
    ROUND(
        r.RESULT_AMOUNT / NULLIF(q.QUOTA_AMOUNT, 0) * 100, 2
    ) AS ATTAINMENT_PCT
FROM CSC_RESULTS r
JOIN CSC_PARTICIPANT p
    ON r.PARTICIPANT_ID = p.ID
LEFT JOIN CSC_QUOTA q
    ON q.PARTICIPANT_ID = p.ID
    AND q.PERIOD_ID = r.PERIOD_ID
WHERE r.PLAN_ID = :plan_id
  AND r.PERIOD_ID BETWEEN :start AND :end
ORDER BY p.NAME, r.PERIOD_ID;
```
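The NULLIF guard is worth replicating anywhere you post-process the extract. A minimal Python mirror, with a function name of my own choosing:

```python
def attainment_pct(result_amount, quota_amount):
    """Mirror of the SQL NULLIF guard: return None instead of
    raising ZeroDivisionError when quota is zero or missing."""
    if not quota_amount:
        return None
    return round(result_amount / quota_amount * 100, 2)
```

Returning None (rather than 0) keeps zero-quota rows distinguishable from genuine 0% attainment in downstream reports, matching the SQL behavior.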
Shows each step in a pipeline run with start time, end time, row count processed, and status. Essential for diagnosing slow pipeline runs before go-live.
CSC_PIPELINE_STEP_LOG is only retained for 90 days by default. Run this within the retention window or you'll get 0 rows.

```sql
SELECT
    s.STEP_NAME,
    s.STATUS,
    s.START_TIME,
    s.END_TIME,
    SECONDS_BETWEEN(s.START_TIME, s.END_TIME) AS DURATION_SEC,
    s.ROWS_PROCESSED
FROM CSC_PIPELINE_STEP_LOG s
JOIN CSC_PIPELINE_RUN r
    ON r.ID = s.PIPELINE_RUN_ID
WHERE r.ID = :run_id
ORDER BY s.START_TIME ASC;
```
LEFT JOIN pattern that surfaces all transactions that entered the pipeline but produced no result record. Run this immediately after a pipeline execution to catch silent exclusions before they become payroll discrepancies.
Not every missing result is a bug — check the row's status in CSC_TRANSACTION_STATUS before raising an incident.

```sql
-- Transactions loaded but absent from pipeline results
-- Run after every pipeline execution
SELECT
    t.TRANSACTION_ID,
    t.EXTERNAL_ID,
    t.PARTICIPANT_ID,
    t.AMOUNT,
    t.PERIOD_ID
FROM CSC_TRANSACTION t
LEFT JOIN CSC_RESULTS r
    ON r.TRANSACTION_ID = t.TRANSACTION_ID
    AND r.PIPELINE_RUN_ID = :run_id
WHERE t.PERIOD_ID = :period_id
  AND r.TRANSACTION_ID IS NULL
ORDER BY t.PARTICIPANT_ID;
```
Groups all credit assignments by transaction and flags any where the split percentages do not sum to exactly 100. A single bad row here will cause under- or over-payment that is very hard to trace after finalize.
Compare with ABS(SUM(...) - 100) > 0.01 to allow for sub-cent rounding differences.

```sql
-- Credit split validation: flag non-100% totals
SELECT
    c.TRANSACTION_ID,
    COUNT(*) AS SPLIT_COUNT,
    SUM(c.SPLIT_PCT) AS TOTAL_SPLIT
FROM CSC_CREDIT c
JOIN CSC_TRANSACTION t
    ON t.TRANSACTION_ID = c.TRANSACTION_ID
WHERE t.PERIOD_ID = :period_id
GROUP BY c.TRANSACTION_ID
HAVING ABS(SUM(c.SPLIT_PCT) - 100) > 0.01
ORDER BY c.TRANSACTION_ID;
```
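The same tolerance check is useful as a pre-load validation on the source file, before the credits ever reach the database. A Python sketch (names are mine, not a Commissions API):

```python
def flag_bad_splits(splits, tol=0.01):
    """splits: dict of transaction_id -> list of split percentages.
    Returns the transaction ids whose splits do not sum to 100
    within `tol`, mirroring the HAVING clause above."""
    return [txn for txn, pcts in splits.items()
            if abs(sum(pcts) - 100) > tol]
```

Running this against the staging file turns a post-finalize payout investigation into a pre-load reject.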
Uses a HANA window function to produce a running YTD earnings total per participant across all periods in a given year. Far cleaner than a self-join approach and reads correctly in Crystal Reports exports.
The YEAR() function works on standard dates, but if your implementation uses fiscal years, switch to CSC_PERIOD.FISCAL_YEAR to match how SAP calculates YTD internally.

```sql
-- YTD running total per participant
-- Replace :year with e.g. '2025'
SELECT
    p.NAME AS PARTICIPANT,
    r.PERIOD_ID,
    r.RESULT_AMOUNT,
    SUM(r.RESULT_AMOUNT) OVER (
        PARTITION BY r.PARTICIPANT_ID
        ORDER BY r.PERIOD_ID
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS YTD_EARNINGS
FROM CSC_RESULTS r
JOIN CSC_PARTICIPANT p ON p.ID = r.PARTICIPANT_ID
JOIN CSC_PERIOD pd ON pd.ID = r.PERIOD_ID
WHERE YEAR(pd.START_DATE) = :year
ORDER BY r.PARTICIPANT_ID, r.PERIOD_ID;
```
Walks the entire Callidus participant tree using a recursive CTE. Returns all nodes with their level depth. Critical first step in any Oracle → HANA migration — you need to know your hierarchy structure before mapping to HANA.
Circular parent references will recurse forever, so keep the WHERE LVL < 20 as a safety net.

```sql
-- Recursive participant hierarchy
-- Run on legacy Callidus Commissions Oracle DB
WITH HIER AS (
    -- Anchor: top-level participants
    SELECT ID, PARENT_ID, NAME, POSITION_NAME, 1 AS LVL
    FROM CS_PARTICIPANT
    WHERE PARENT_ID IS NULL
    UNION ALL
    -- Recursive: children
    SELECT p.ID, p.PARENT_ID, p.NAME, p.POSITION_NAME, h.LVL + 1
    FROM CS_PARTICIPANT p
    JOIN HIER h ON h.ID = p.PARENT_ID
    WHERE h.LVL < 20
)
SELECT LVL, ID, PARENT_ID, NAME, POSITION_NAME
FROM HIER
ORDER BY LVL, NAME;
```
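To sanity-check the CTE output against a flat extract, the same walk can be reproduced in Python. A sketch assuming rows of (id, parent_id) tuples with parent_id None at the roots (names are mine):

```python
def hierarchy_levels(rows, max_depth=20):
    """rows: iterable of (id, parent_id) tuples; parent_id is None
    for roots. Returns {id: level} with roots at level 1, matching
    the CTE's LVL column. The depth cap doubles as a cycle guard."""
    children = {}
    for node_id, parent_id in rows:
        children.setdefault(parent_id, []).append(node_id)
    levels, frontier, depth = {}, children.get(None, []), 1
    while frontier and depth <= max_depth:
        for node_id in frontier:
            levels[node_id] = depth
        frontier = [c for n in frontier for c in children.get(n, [])]
        depth += 1
    return levels
```

Any id present in the extract but missing from the returned dict is either orphaned or below the depth cap, which is exactly what you want to know before mapping to HANA.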
Extracts all compensation statement line items for a given period from the legacy Callidus Commissions Oracle schema. Formatted for easy comparison against SAP HANA during post-migration validation.
CS_COMP_STATEMENT_LINE stores amounts in the plan's native currency. Join to CS_CURRENCY if you need converted values for cross-currency validation.

```sql
SELECT
    p.NAME AS PARTICIPANT,
    s.PERIOD_ID,
    sl.LINE_TYPE,
    sl.DESCRIPTION,
    sl.AMOUNT,
    sl.CURRENCY_CODE
FROM CS_COMP_STATEMENT s
JOIN CS_COMP_STATEMENT_LINE sl ON sl.STATEMENT_ID = s.ID
JOIN CS_PARTICIPANT p ON p.ID = s.PARTICIPANT_ID
WHERE s.PERIOD_ID = &period_id
  AND s.STATUS = 'APPROVED'
ORDER BY p.NAME, sl.LINE_TYPE;
```
Updates quota targets in bulk using BULK COLLECT + FORALL — the correct Oracle pattern for batch updates on large Callidus quota tables. Row-by-row UPDATE inside a cursor loop on this table will time out for period reforecasting runs with > 5 000 rows.
```sql
-- BULK COLLECT quota target uplift (10%)
-- Adjust the multiplier for other reforecast %
DECLARE
    TYPE t_id  IS TABLE OF CS_QUOTA.ID%TYPE;
    TYPE t_amt IS TABLE OF CS_QUOTA.QUOTA_AMOUNT%TYPE;
    v_ids  t_id;
    v_amts t_amt;
BEGIN
    SELECT ID, QUOTA_AMOUNT * 1.10
    BULK COLLECT INTO v_ids, v_amts
    FROM CS_QUOTA
    WHERE PERIOD_ID = &period_id
      AND PLAN_ID = &plan_id
      AND STATUS = 'ACTIVE';

    FORALL i IN 1..v_ids.COUNT
        UPDATE CS_QUOTA
        SET QUOTA_AMOUNT = v_amts(i),
            LAST_UPDATED = SYSDATE
        WHERE ID = v_ids(i);

    COMMIT;
    DBMS_OUTPUT.PUT_LINE('Updated: ' || v_ids.COUNT || ' rows');
END;
/
```
Finds credit rows in the Callidus schema whose PARTICIPANT_ID has no matching row in CS_PARTICIPANT. This silently zeroes out earnings for those credits in the pipeline. Common after bulk deletes or failed participant imports.
To also flag credits assigned to deactivated participants, add AND p.STATUS != 'INACTIVE' to the inner WHERE clause.

```sql
-- Orphaned credits: no matching participant
-- Run before pipeline to catch silent zero-pays
SELECT
    c.CREDIT_ID,
    c.TRANSACTION_ID,
    c.PARTICIPANT_ID,
    c.AMOUNT,
    c.PERIOD_ID
FROM CS_CREDIT c
WHERE NOT EXISTS (
    SELECT 1
    FROM CS_PARTICIPANT p
    WHERE p.ID = c.PARTICIPANT_ID
)
ORDER BY c.PERIOD_ID, c.PARTICIPANT_ID;
```
Compares approved compensation statements between two periods and surfaces participants whose payout changed by more than 20%. The standard first check HR runs after every close — save them the Excel pivot.
```sql
-- Period-over-period payout variance
-- Flags >20% swings for review
SELECT
    p.NAME AS PARTICIPANT,
    cur.AMOUNT AS CURRENT_PERIOD,
    prv.AMOUNT AS PRIOR_PERIOD,
    ROUND(
        (cur.AMOUNT - prv.AMOUNT)
        / NULLIF(prv.AMOUNT, 0) * 100, 1
    ) AS VARIANCE_PCT
FROM CS_COMP_STATEMENT cur
JOIN CS_PARTICIPANT p ON p.ID = cur.PARTICIPANT_ID
LEFT JOIN CS_COMP_STATEMENT prv
    ON prv.PARTICIPANT_ID = cur.PARTICIPANT_ID
    AND prv.PERIOD_ID = &prior_period_id
WHERE cur.PERIOD_ID = &current_period_id
  AND cur.STATUS = 'APPROVED'
  AND ABS(
      (cur.AMOUNT - prv.AMOUNT)
      / NULLIF(prv.AMOUNT, 0) * 100
  ) > 20
ORDER BY ABS(cur.AMOUNT - prv.AMOUNT) DESC;
```
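The guarded variance calculation is also handy for spreadsheet-free spot checks of individual participants. A Python mirror of the NULLIF logic (hypothetical helper name):

```python
def variance_pct(current, prior):
    """NULLIF-style guard: return None when the prior-period
    amount is zero or missing, else the percentage change
    rounded to one decimal, matching VARIANCE_PCT above."""
    if not prior:
        return None
    return round((current - prior) / prior * 100, 1)
```

New hires with no prior statement come back as None rather than a spurious infinite swing, which is how the SQL also treats them.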
Gets a bearer token for SAP Commissions API calls. Includes token caching to avoid hitting the rate limit on high-frequency automation scripts.
```python
import requests, time

_token_cache = {"token": None, "exp": 0}

def get_token(client_id, secret, tenant):
    if time.time() < _token_cache["exp"] - 60:
        return _token_cache["token"]
    r = requests.post(
        f"https://{tenant}.calliduscloud.com/oauth/token",
        data={"grant_type": "client_credentials",
              "client_id": client_id,
              "client_secret": secret})
    r.raise_for_status()  # fail fast on bad credentials
    data = r.json()
    _token_cache["token"] = data["access_token"]
    _token_cache["exp"] = time.time() + data["expires_in"]
    return _token_cache["token"]
```
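The caching pattern generalizes beyond this one endpoint. A sketch that wraps any token fetcher behind the same expiry logic (the factory name is mine; `fetch` is assumed to return a token plus its TTL in seconds):

```python
import time

def make_cached_token_getter(fetch, skew=60):
    """fetch() -> (token, ttl_seconds). Returns a zero-arg getter
    that re-fetches only when the cached token is within `skew`
    seconds of expiring."""
    cache = {"token": None, "exp": 0.0}
    def get():
        if time.time() < cache["exp"] - skew:
            return cache["token"]
        token, ttl = fetch()
        cache["token"], cache["exp"] = token, time.time() + ttl
        return token
    return get
```

Wrapping the fetcher this way also makes the expiry logic unit-testable without any network calls, which the module-level `_token_cache` version is not.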
Two-step connection check: confirms the OAuth token endpoint is reachable and the credentials are valid, then makes a minimal read call to verify API scope is actually granted. Use this at the start of every automation script.
A valid token alone proves nothing about permissions — the real test is the GET /api/v2/participants?pageSize=1 call; a 403 there means your service account is missing read permissions.

```python
import requests, time

def check_connection(client_id, secret, tenant):
    base = f"https://{tenant}.calliduscloud.com"
    t0 = time.time()
    try:
        r = requests.post(
            base + "/oauth/token", timeout=10,
            data={"grant_type": "client_credentials",
                  "client_id": client_id,
                  "client_secret": secret})
        ms = round((time.time() - t0) * 1000)
        if r.status_code != 200:
            return {"ok": False, "stage": "auth",
                    "http": r.status_code}
        token = r.json()["access_token"]
        # Verify read scope
        rr = requests.get(
            base + "/api/v2/participants",
            headers={"Authorization": f"Bearer {token}"},
            params={"pageSize": 1}, timeout=10)
        return {"ok": rr.status_code == 200,
                "stage": "read",
                "http": rr.status_code,
                "latency_ms": ms}
    except requests.Timeout:
        return {"ok": False, "stage": "timeout"}
```
Loops through a list of participant dicts and PUTs each one, tracking created vs updated counts and collecting errors without stopping the batch. Includes a sleep to stay inside the API's per-minute rate limit.
Keep the time.sleep(0.7) inside the loop for batches larger than 80 participants or you will receive 429s mid-run.

```python
import requests, time

def upsert_participants(token, tenant, participants):
    """
    participants: list of dicts with keys
    externalId, firstName, lastName, email
    """
    base = f"https://{tenant}.calliduscloud.com/api/v2"
    hdrs = {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}
    out = {"created": 0, "updated": 0, "errors": []}
    for p in participants:
        r = requests.put(
            f"{base}/participants/{p['externalId']}",
            json=p, headers=hdrs)
        if r.status_code == 201:
            out["created"] += 1
        elif r.status_code == 200:
            out["updated"] += 1
        else:
            out["errors"].append({
                "id": p["externalId"],
                "http": r.status_code,
                "body": r.text[:200]})
        time.sleep(0.7)  # stay under rate limit
    return out
```
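If a 429 does slip through despite the sleep, a small backoff wrapper keeps the rest of the batch alive instead of burning a retry of the whole run. A sketch (names are mine); `call` is any zero-argument callable returning a requests-style response:

```python
import time

def call_with_retry(call, max_retries=3, base_delay=1.0):
    """Retry `call()` with exponential backoff while it returns a
    response whose status_code is 429. Returns the last response."""
    for attempt in range(max_retries + 1):
        r = call()
        if r.status_code != 429 or attempt == max_retries:
            return r
        time.sleep(base_delay * (2 ** attempt))
```

Usage inside the upsert loop would be `r = call_with_retry(lambda: requests.put(url, json=p, headers=hdrs))`, leaving the created/updated/error accounting unchanged.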
Fires a pipeline run via the API and then polls every 15 seconds until the run reaches COMPLETED, FAILED, or CANCELLED. Designed for use in automated post-load pipelines where you need to gate the next step on pipeline completion.
Set timeout conservatively high — a 10-minute cap will cause false failures.

```python
import requests, time

def run_pipeline(token, tenant, pipeline_name,
                 period_id, timeout=3600):
    base = f"https://{tenant}.calliduscloud.com/api/v2"
    hdrs = {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}
    r = requests.post(
        f"{base}/pipelines/run", headers=hdrs,
        json={"pipelineName": pipeline_name,
              "periodId": period_id})
    r.raise_for_status()
    run_id = r.json()["runId"]

    deadline = time.time() + timeout
    terminal = {"COMPLETED", "FAILED", "CANCELLED"}
    while time.time() < deadline:
        s = requests.get(
            f"{base}/pipelines/runs/{run_id}",
            headers=hdrs).json().get("status")
        if s in terminal:
            return {"runId": run_id, "status": s}
        time.sleep(15)
    raise TimeoutError(
        f"Pipeline {run_id} not done in {timeout}s")
```
Pages through the compensation statements endpoint using page+pageSize parameters and writes everything to an in-memory CSV. Handles the API's page exhaustion correctly so you never get truncated exports in automated month-end reports.
The items key is empty — not absent — on the last page. Checking len(items) < page_size as your break condition is safer than relying on a totalCount field, which some API versions omit entirely.

```python
import requests, csv, io

def export_statements_csv(token, tenant, period_id):
    base = f"https://{tenant}.calliduscloud.com/api/v2"
    hdrs = {"Authorization": f"Bearer {token}"}
    rows, page, pgsz = [], 0, 200
    while True:
        r = requests.get(
            f"{base}/compensationStatements",
            headers=hdrs,
            params={"periodId": period_id,
                    "page": page, "pageSize": pgsz})
        r.raise_for_status()
        items = r.json().get("items", [])
        if not items:
            break
        rows.extend(items)
        page += 1
        if len(items) < pgsz:
            break
    if not rows:
        return ""
    out = io.StringIO()
    w = csv.DictWriter(out, fieldnames=rows[0].keys())
    w.writeheader()
    w.writerows(rows)
    return out.getvalue()
```
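The two break conditions generalize to any page+pageSize endpoint in the API. A transport-agnostic sketch (names are mine) that is easy to unit-test without network access:

```python
def fetch_all_pages(fetch_page, page_size=200):
    """fetch_page(page, page_size) -> list of items for that page.
    Stops on an empty page or a short page, mirroring the export
    loop's double break condition."""
    rows, page = [], 0
    while True:
        items = fetch_page(page, page_size)
        if not items:
            break
        rows.extend(items)
        page += 1
        if len(items) < page_size:
            break
    return rows
```

Passing a lambda that wraps `requests.get` gives you the original behavior; passing a list-backed fake gives you a regression test for the truncated-export bug.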
Found something useful, or want to discuss the SAP ICM space? Reach out or stay in the loop.