Advanced Workflow Groovy, HANA SQL, Oracle PL/SQL, and REST API snippets. Every entry has one rule: it must save a practitioner at least 30 minutes.
Routes the approval request to the correct manager tier based on the participant's territory region. Handles null territory assignment gracefully instead of throwing a NullPointerException.
participant.territory returns null for participants assigned to the root org. Add the null check before any comparisons or you'll see silent failures.

```groovy
// Conditional approval routing by region
// Use in: Approval Workflow Step > Script
// Note: verify territory/region property names against your tenant's IntelliSense
def territory = participant?.territory
def region = territory?.region ?: "DEFAULT"

switch (region) {
    case "APAC":
        route.setApproverByRole("APAC_DIRECTOR")
        break
    case "EMEA":
        route.setApproverByRole("EMEA_DIRECTOR")
        break
    default:
        route.setApprover(participant?.manager?.id ?: "DEFAULT_APPROVER")
}
```
Validates that a plan submission meets required criteria before routing to approvers. Returns a descriptive error message that appears in the UI instead of a generic failure.
Return false with a message object, not a plain string. Plain string returns cause the error to display as "null" in some AW versions.

```groovy
// Pre-submit validation script
// Use in: Workflow > Before Submit action
def target = plan.getQuotaTarget()
def effectiveDate = plan.getEffectiveDate()

if (target == null || target <= 0) {
    return [valid: false, message: "Quota target must be greater than 0"]
}
if (effectiveDate.before(new Date())) {
    return [valid: false, message: "Effective date cannot be in the past"]
}
return [valid: true]
```
Re-routes to the skip-level manager when an approval step has been sitting idle for more than three business days. Falls back to the COMP_ADMIN position if the skip-level is null.
getCreatedDate() returns a java.sql.Timestamp. Convert with .toLocalDate() before calling ChronoUnit.DAYS.between() or you will get a type mismatch at runtime.

```groovy
// SLA escalation to skip-level manager
// Use in: Workflow > Step Timeout action
import java.time.LocalDate
import java.time.temporal.ChronoUnit

def created = workflowItem.getCreatedDate().toLocalDate()
def days = ChronoUnit.DAYS.between(created, LocalDate.now())

if (days > 3) {
    def mgr = participant?.manager
    def skip = mgr?.manager
    if (skip?.id) {
        route.setApprover(skip.id)
        notification.setSubject("SLA breach: ${participant.name}")
        notification.setBody("Pending ${days} days. Escalated to you.")
        notification.addRecipient(skip.id)
        notification.send()
    } else {
        route.setApproverByRole("COMP_ADMIN")
    }
}
```
Fires after an approval step completes. Sends the participant (and their manager in CC) a notification that includes the actual calculated payout for the period — not just a generic "your plan is approved" message.
getManager()?.getEmail() can still return null if the manager record has no email stored. Pass it through a null check before adding to CC or the entire notification call fails silently.

```groovy
// Post-approval notification with payout amount
// Use in: Workflow > After Approve action
def p = workflowItem.getParticipant()
def period = workflowItem.getPeriod()
def payout = results.getEarnings(p.getId(), period.getId())

def ccList = []
if (p.getManager()?.getEmail()) {
    ccList << p.getManager().getEmail()
}

// notification API uses 4 methods: setSubject, setBody, addRecipient(s), send()
notification.setSubject("Plan approved — ${period.getName()}")
notification.setBody("""Hi ${p.getFirstName()},

Your ${period.getName()} plan is approved.
Payout: ${payout.getCurrencyCode()} ${payout.getAmount()}""")
notification.addRecipient(p.getEmail())
ccList.each { notification.addRecipient(it) }
notification.send()
```
Returns calculated result amounts for all participants in a given plan and period range. Note: CS_QUOTA stores quota definitions (QUOTASEQ, NAME, DESCRIPTION) — actual quota values are held in CS_QUOTA_VARIABLES via a variable assignment chain and require a multi-table join to retrieve.
```sql
-- Result amounts by participant and period
-- CS_QUOTA stores definitions only (QUOTASEQ, NAME, DESCRIPTION).
-- Quota values require joining through CS_QUOTA_VARIABLES.
SELECT
    p.NAME AS PARTICIPANT,
    r.PERIOD_ID,
    r.RESULT_AMOUNT
FROM CS_PAYMENT r
JOIN CS_PARTICIPANT p
    ON r.PARTICIPANT_ID = p.ID
WHERE r.PLAN_ID = :plan_id
  AND r.PERIOD_ID BETWEEN :start AND :end
ORDER BY p.NAME, r.PERIOD_ID;
```
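The quota-value join through CS_QUOTA_VARIABLES mentioned above is not shown in the query. A hypothetical sketch follows — only the CS_QUOTA and CS_QUOTA_VARIABLES table names and the QUOTASEQ key come from the description; every other column name is an assumption to verify against your schema before use, since the variable assignment chain varies by release:

```sql
-- HYPOTHETICAL sketch: quota values via the variable chain.
-- Verify column names (VALUE, PERIOD_ID) against your tenant's schema;
-- they are guesses, not confirmed CS_QUOTA_VARIABLES columns.
SELECT
    q.NAME   AS QUOTA_NAME,
    qv.VALUE AS QUOTA_VALUE,   -- assumed value column
    qv.PERIOD_ID               -- assumed period column
FROM CS_QUOTA q
JOIN CS_QUOTA_VARIABLES qv
    ON qv.QUOTASEQ = q.QUOTASEQ   -- join key per the CS_QUOTA definition
WHERE q.NAME = :quota_name;
```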
LEFT JOIN pattern that surfaces all transactions that entered the pipeline but produced no result record. Run this immediately after a pipeline execution to catch silent exclusions before they become payroll discrepancies.
```sql
-- Transactions loaded but absent from pipeline results
-- Run after every pipeline execution
SELECT
    t.TRANSACTION_ID,
    t.EXTERNAL_ID,
    t.PARTICIPANT_ID,
    t.AMOUNT,
    t.PERIOD_ID
FROM CS_SALESTRANSACTION t
LEFT JOIN CS_PAYMENT r
    ON r.TRANSACTION_ID = t.TRANSACTION_ID
    AND r.PIPELINE_RUN_ID = :run_id
WHERE t.PERIOD_ID = :period_id
  AND r.TRANSACTION_ID IS NULL
ORDER BY t.PARTICIPANT_ID;
```
Groups all credit assignments by transaction and flags any where the split percentages do not sum to exactly 100. A single bad row here will cause under- or over-payment that is very hard to trace after finalize.
The HAVING clause uses ABS(SUM(...) - 100) > 0.01 rather than an exact comparison to allow for sub-cent rounding differences.

```sql
-- Credit split validation: flag non-100% totals
SELECT
    c.TRANSACTION_ID,
    COUNT(*) AS SPLIT_COUNT,
    SUM(c.SPLIT_PCT) AS TOTAL_SPLIT
FROM CS_CREDIT c
JOIN CS_SALESTRANSACTION t
    ON t.TRANSACTION_ID = c.TRANSACTION_ID
WHERE t.PERIOD_ID = :period_id
GROUP BY c.TRANSACTION_ID
HAVING ABS(SUM(c.SPLIT_PCT) - 100) > 0.01
ORDER BY c.TRANSACTION_ID;
```
Uses a HANA window function to produce a running YTD earnings total per participant across all periods in a given year. Far cleaner than a self-join approach and reads correctly in Crystal Reports exports.
The YEAR() function works on standard dates, but if your implementation uses fiscal years, switch to CS_PERIOD.FISCAL_YEAR to match how SAP calculates YTD internally.

```sql
-- YTD running total per participant
-- Replace :year with e.g. '2025'
SELECT
    p.NAME AS PARTICIPANT,
    r.PERIOD_ID,
    r.RESULT_AMOUNT,
    SUM(r.RESULT_AMOUNT) OVER (
        PARTITION BY r.PARTICIPANT_ID
        ORDER BY r.PERIOD_ID
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS YTD_EARNINGS
FROM CS_PAYMENT r
JOIN CS_PARTICIPANT p ON p.ID = r.PARTICIPANT_ID
JOIN CS_PERIOD pd ON pd.ID = r.PERIOD_ID
WHERE YEAR(pd.START_DATE) = :year
ORDER BY r.PARTICIPANT_ID, r.PERIOD_ID;
```
Walks the entire Callidus participant tree using a recursive CTE. Returns all nodes with their level depth. Critical first step in any Oracle → HANA migration — you need to know your hierarchy structure before mapping to HANA.
A cycle in the hierarchy will make the recursion run away, so cap the depth with WHERE LVL < 20 as a safety net. Note that Oracle requires the column alias list on a recursive WITH clause (omitting it raises ORA-32039).

```sql
-- Recursive participant hierarchy
-- Run on legacy CallidusCloud Commissions Oracle DB
WITH HIER (ID, PARENT_ID, NAME, LVL) AS (
    -- Anchor: top-level participants
    SELECT ID, PARENT_ID, NAME, 1 AS LVL
    FROM CS_PARTICIPANT
    WHERE PARENT_ID IS NULL
    UNION ALL
    -- Recursive: children
    SELECT p.ID, p.PARENT_ID, p.NAME, h.LVL + 1
    FROM CS_PARTICIPANT p
    JOIN HIER h ON h.ID = p.PARENT_ID
    WHERE h.LVL < 20
)
SELECT LVL, ID, PARENT_ID, NAME
FROM HIER
ORDER BY LVL, NAME;
```
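On the HANA side of the migration, recursive CTEs are not available; HANA 2.0 SPS03 and later instead ship a built-in HIERARCHY generator function. A sketch under that assumption, reusing the same CS_PARTICIPANT columns — confirm the function is available on your HANA version before relying on it:

```sql
-- HANA-side equivalent (assumes HANA 2.0 SPS03+).
-- The SOURCE subquery must expose node_id and parent_id columns.
SELECT hierarchy_level, node_id, parent_id, NAME
FROM HIERARCHY (
    SOURCE (
        SELECT ID AS node_id, PARENT_ID AS parent_id, NAME
        FROM CS_PARTICIPANT
    )
)
ORDER BY hierarchy_level, NAME;
```

hierarchy_level here plays the same role as the hand-rolled LVL column in the Oracle query.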
Finds credit rows in the Callidus schema whose PARTICIPANT_ID has no matching row in CS_PARTICIPANT. This silently zeroes out earnings for those credits in the pipeline. Common after bulk deletes or failed participant imports.
To also flag credits assigned to deactivated participants, add AND p.STATUS != 'INACTIVE' to the inner WHERE clause.

```sql
-- Orphaned credits: no matching participant
-- Run before pipeline to catch silent zero-pays
SELECT
    c.CREDIT_ID,
    c.TRANSACTION_ID,
    c.PARTICIPANT_ID,
    c.AMOUNT,
    c.PERIOD_ID
FROM CS_CREDIT c
WHERE NOT EXISTS (
    SELECT 1
    FROM CS_PARTICIPANT p
    WHERE p.ID = c.PARTICIPANT_ID
)
ORDER BY c.PERIOD_ID, c.PARTICIPANT_ID;
```
Gets a bearer token for SAP Incentive Management API calls. Includes token caching to avoid hitting the rate limit on high-frequency automation scripts.
```python
import requests, time

_token_cache = {"token": None, "exp": 0}

def get_token(client_id, secret, tenant):
    # Reuse the cached token until 60s before expiry
    if time.time() < _token_cache["exp"] - 60:
        return _token_cache["token"]
    r = requests.post(
        f"https://{tenant}.calliduscloud.com/oauth/token",
        data={"grant_type": "client_credentials",
              "client_id": client_id,
              "client_secret": secret})
    r.raise_for_status()  # fail loudly on bad credentials
    data = r.json()
    _token_cache["token"] = data["access_token"]
    _token_cache["exp"] = time.time() + data["expires_in"]
    return _token_cache["token"]
```
Two-step connection check: confirms the OAuth token endpoint is reachable and the credentials are valid, then makes a minimal read call to verify API scope is actually granted. Use this at the start of every automation script.
The scope check is the GET /api/v2/participants?pageSize=1 call — a 403 there means your service account is missing read permissions.

```python
import requests, time

def check_connection(client_id, secret, tenant):
    base = f"https://{tenant}.calliduscloud.com"
    t0 = time.time()
    try:
        r = requests.post(
            base + "/oauth/token", timeout=10,
            data={"grant_type": "client_credentials",
                  "client_id": client_id,
                  "client_secret": secret})
        ms = round((time.time() - t0) * 1000)
        if r.status_code != 200:
            return {"ok": False, "stage": "auth", "http": r.status_code}
        token = r.json()["access_token"]
        # Verify read scope with a minimal one-record read
        rr = requests.get(
            base + "/api/v2/participants",
            headers={"Authorization": f"Bearer {token}"},
            params={"pageSize": 1}, timeout=10)
        return {"ok": rr.status_code == 200, "stage": "read",
                "http": rr.status_code, "latency_ms": ms}
    except requests.Timeout:
        return {"ok": False, "stage": "timeout"}
```
Loops through a list of participant dicts and PUTs each one, tracking created vs updated counts and collecting errors without stopping the batch. Includes a sleep to stay inside the API's per-minute rate limit.
Keep the time.sleep(0.7) inside the loop for batches larger than 80 participants or you will receive 429s mid-run.

```python
import requests, time

def upsert_participants(token, tenant, participants):
    """participants: list of dicts with keys
    externalId, firstName, lastName, email"""
    base = f"https://{tenant}.calliduscloud.com/api/v2"
    hdrs = {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}
    out = {"created": 0, "updated": 0, "errors": []}
    for p in participants:
        r = requests.put(
            f"{base}/participants/{p['externalId']}",
            json=p, headers=hdrs)
        if r.status_code == 201:
            out["created"] += 1
        elif r.status_code == 200:
            out["updated"] += 1
        else:
            # Collect the failure and keep going; don't stop the batch
            out["errors"].append({
                "id": p["externalId"],
                "http": r.status_code,
                "body": r.text[:200]})
        time.sleep(0.7)  # stay under the per-minute rate limit
    return out
```
Fires a pipeline run via the API and then polls every 15 seconds until the run reaches COMPLETED, FAILED, or CANCELLED. Designed for use in automated post-load pipelines where you need to gate the next step on pipeline completion.
Set the timeout conservatively — a 10-minute cap will cause false failures.

```python
import requests, time

def run_pipeline(token, tenant, pipeline_name, period_id, timeout=3600):
    base = f"https://{tenant}.calliduscloud.com/api/v2"
    hdrs = {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}
    # Kick off the run
    r = requests.post(
        f"{base}/pipelines/run", headers=hdrs,
        json={"pipelineName": pipeline_name, "periodId": period_id})
    r.raise_for_status()
    run_id = r.json()["runId"]
    # Poll every 15s until a terminal status or the deadline
    deadline = time.time() + timeout
    terminal = {"COMPLETED", "FAILED", "CANCELLED"}
    while time.time() < deadline:
        s = requests.get(
            f"{base}/pipelines/runs/{run_id}",
            headers=hdrs).json().get("status")
        if s in terminal:
            return {"runId": run_id, "status": s}
        time.sleep(15)
    raise TimeoutError(f"Pipeline {run_id} not done in {timeout}s")
```
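The poll-until-terminal loop is worth factoring out so it can be unit-tested without waiting on a live tenant. A minimal sketch with injected clock and sleep — the helper name and signature are mine, not part of the SAP API:

```python
import time

def poll_until_terminal(get_status, terminal, interval=15,
                        timeout=3600, clock=time.time, sleep=time.sleep):
    """Call get_status() every `interval` seconds until it returns a
    value in `terminal`; raise TimeoutError once the deadline passes."""
    deadline = clock() + timeout
    while clock() < deadline:
        status = get_status()
        if status in terminal:
            return status
        sleep(interval)
    raise TimeoutError(f"no terminal status within {timeout}s")
```

In a script you would call it as `poll_until_terminal(lambda: fetch_status(run_id), {"COMPLETED", "FAILED", "CANCELLED"})`; in tests you pass a fake clock and a no-op sleep.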
Pages through the compensation statements endpoint using page+pageSize parameters and writes everything to an in-memory CSV. Handles the API's page exhaustion correctly so you never get truncated exports in automated month-end reports.
The items key is empty — not absent — on the last page. Checking len(items) < page_size as your break condition is safer than relying on a totalCount field, which some API versions omit entirely.

```python
import requests, csv, io

def export_statements_csv(token, tenant, period_id):
    base = f"https://{tenant}.calliduscloud.com/api/v2"
    hdrs = {"Authorization": f"Bearer {token}"}
    rows, page, pgsz = [], 0, 200
    while True:
        r = requests.get(
            f"{base}/compensationStatements", headers=hdrs,
            params={"periodId": period_id,
                    "page": page, "pageSize": pgsz})
        r.raise_for_status()
        items = r.json().get("items", [])
        if not items:
            break
        rows.extend(items)
        page += 1
        if len(items) < pgsz:
            break  # short page means we just read the last one
    if not rows:
        return ""
    out = io.StringIO()
    w = csv.DictWriter(out, fieldnames=rows[0].keys())
    w.writeheader()
    w.writerows(rows)
    return out.getvalue()
```
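The same page-exhaustion loop, factored into a standalone helper you can unit-test against a fake fetcher. The name and signature are mine, not part of the API:

```python
def fetch_all_pages(get_page, page_size=200):
    """get_page(page, page_size) -> list of items for that page.
    Stops on an empty page or a short page, matching the export above."""
    rows, page = [], 0
    while True:
        items = get_page(page, page_size)
        if not items:
            break
        rows.extend(items)
        page += 1
        if len(items) < page_size:
            break
    return rows
```

Wiring it up is one lambda: `fetch_all_pages(lambda pg, sz: get_items(pg, sz))`, where `get_items` wraps the requests call and returns the parsed items list.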
Found something useful, or want to discuss the SAP IM space? Reach out or stay in the loop.