What Is the Pipeline?
The pipeline is the heart of Callidus Commissions. It's a staged workflow that transforms raw sales transactions into compensation payouts. Every transaction that enters the system goes through the pipeline. Every participant's compensation is calculated by the pipeline. It's not a single process — it's a sequence of transformations, each with its own SQL logic, error handling, and monitoring.
Think of it like an assembly line. A transaction enters as a raw credit (a sale, a deal booking, a renewal). At each stage, the pipeline enriches, validates, and transforms it. By the end, the participant has a calculated incentive amount ready for payout. If something goes wrong at any stage, the transaction gets stuck in an error state, and you have to investigate and reprocess it.
The pipeline is driven by a scheduler (internal to Callidus or via external job runner). You can kick off a pipeline run manually, or it can run on a schedule (nightly, after month-end close, etc.). Each run processes a defined set of data for a defined period. The run itself is logged in the database, so you can audit what happened.
The Six Pipeline Stages
Every pipeline run executes these six stages in order. Each stage has an entry condition (data must exist in table X), logic (transformations and validations), and an exit condition (data written to table Y). Here's the flow:
Stage 1: Import
Raw transactions arrive in the system from upstream sources: CRM (Salesforce), ERP (SAP, Oracle), or manual uploads. The system loads them into the staging table CS_TRANSACTION. This is the raw data before any processing. Fields include the transaction amount, date, source reference (deal ID, booking ID, etc.), and any metadata from the source.
The import stage is usually the fastest. It's just a bulk insert. No validation happens here — garbage in, garbage out. If the transaction is malformed (missing required fields), it still imports but will fail in later stages.
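Because no validation happens at import, a quick spot-check of the staging table can surface malformed rows before they fail in later stages. A sketch, assuming hypothetical LOAD_DATE, SOURCE_REF, and TXN_DATE columns on CS_TRANSACTION (verify against your actual table layout):

```sql
-- Spot-check the most recent import batch in the staging table
SELECT source_ref, amount, txn_date, load_date
FROM   cs_transaction
WHERE  load_date >= TRUNC(SYSDATE)
ORDER  BY load_date DESC
FETCH  FIRST 20 ROWS ONLY;

-- Rows missing required fields import fine but fail downstream,
-- so it pays to count them now
SELECT COUNT(*) AS malformed_rows
FROM   cs_transaction
WHERE  load_date >= TRUNC(SYSDATE)
AND    (amount IS NULL OR source_ref IS NULL);
```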
Stage 2: Classify
Now the system matches each transaction to a participant and a plan. It looks up the transaction's source reference (e.g., a deal ID) in the source system, finds the owner (salesperson), and identifies which compensation plan they're on. If the participant isn't found, or if they're not assigned to an active plan, the transaction gets flagged as 'UNMATCHED' and moves to the error state. Classification is where 80% of pipeline failures happen.
Classified transactions are still staging data. They're written to a temporary working table or marked in CS_TRANSACTION. The system is just saying: "This transaction belongs to Rep John Smith on the EMEA Sales Commission plan."
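To see how classification went, count transactions by status once the stage finishes. This sketch assumes the staging table carries STATUS and PERIOD_ID columns that classification updates, consistent with the flagging described above but worth verifying against your schema:

```sql
-- Count classification outcomes for a period; a spike in UNMATCHED
-- usually points at a participant or plan-assignment data problem
SELECT status, COUNT(*) AS txn_count
FROM   cs_transaction
WHERE  period_id = 5001
GROUP  BY status
ORDER  BY txn_count DESC;
```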
Stage 3: Credit
Once classified, the transaction becomes a credit — a record in the CS_CREDIT table. A credit is the system's internal representation of a transaction. It has the transaction amount, the participant, the plan, the period, and the credit date (when the transaction occurred). The CS_CREDIT table is the permanent record of all transaction-level activity in the system.
At this stage, the system validates that the credit can be processed: the plan exists, the period is open for calculations, the participant is active in that plan. If any validation fails, the credit's STATUS is set to 'ERROR' (or sometimes 'HOLD' for manual review). If all checks pass, STATUS is set to 'CREDITED' and the credit is ready for the next stage.
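The three checks can also be run proactively, before the pipeline. The sketch below flags credits that would fail them; every table and column outside CS_CREDIT follows the naming pattern of this lesson and is an assumption to verify:

```sql
-- Credits that would fail the credit-stage checks: inactive plan,
-- closed period, or no plan assignment for the participant
SELECT c.credit_id, c.participant_id, c.plan_id,
       p.status          AS plan_status,
       pe.status         AS period_status,
       pp.participant_id AS assignment_exists
FROM   cs_credit c
LEFT JOIN cs_comp_plan        p  ON p.plan_id = c.plan_id
LEFT JOIN cs_plan_period      pe ON pe.period_id = c.period_id
LEFT JOIN cs_plan_participant pp ON pp.plan_id = c.plan_id
                                AND pp.participant_id = c.participant_id
WHERE  c.period_id = 5001
AND    (p.status <> 'Active'
        OR pe.status <> 'Open'
        OR pp.participant_id IS NULL);
```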
Stage 4: Calculate
The calculation engine reads all credits for a participant in a period and runs the compensation plan's rules. It applies the plan logic: commission percentage, quota attainment curves, SPIFs (Sales Performance Incentive Funds), overrides, adjustments, etc. The output is written to the CS_RESULTS table, which contains one row per participant per period with the calculated incentive amount.
This is the most complex stage. It invokes custom calculation logic, handles edge cases (zero quota, 200% attainment, etc.), and flags errors if data is missing (e.g., the participant has no quota). If the calculation fails, the result record is marked 'ERROR' and a diagnostic message is logged.
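To inspect what the engine produced for one participant, query CS_RESULTS directly. INCENTIVE_AMOUNT and ERROR_MESSAGE are assumed column names here; the diagnostic message on failed rows is your starting point for debugging:

```sql
-- Calculated results for one participant in a period,
-- including the diagnostic message on any failed rows
SELECT participant_id, plan_id, period_id,
       incentive_amount, status, error_message
FROM   cs_results
WHERE  participant_id = 2001
AND    period_id = 5001;
```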
Stage 5: Summarize
Once all participants are calculated, the system rolls up the results into the CS_COMP_SUMMARY table. This table aggregates by period: total incentives paid, by plan, by department, etc. It's a read-only summary for reporting and audit. The Summarize stage is straightforward SQL aggregation — it rarely fails unless there's a data consistency issue.
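A simple reconciliation confirms the rollup: detail totals in CS_RESULTS should equal the aggregates in CS_COMP_SUMMARY for the same period. Column names (INCENTIVE_AMOUNT, TOTAL_INCENTIVE) are illustrative:

```sql
-- Plans where the summary total disagrees with the detail total;
-- any row returned signals a data consistency issue
SELECT r.plan_id,
       SUM(r.incentive_amount) AS detail_total,
       MAX(s.total_incentive)  AS summary_total
FROM   cs_results r
JOIN   cs_comp_summary s ON s.plan_id = r.plan_id
                        AND s.period_id = r.period_id
WHERE  r.period_id = 5001
GROUP  BY r.plan_id
HAVING SUM(r.incentive_amount) <> MAX(s.total_incentive);
```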
Stage 6: Statement
Finally, compensation statements are generated from the results. Statements are records in CS_STATEMENT — one per participant per period, containing the breakdown of earnings, adjustments, and net payout. Statements are the legal record of what a participant earned. They go to finance for payroll, to the participant for visibility, and to management for audit.
The Statement stage often includes formatting and validation: are all results covered? Are there duplicates? Are amounts sensible? If the system is connected to a payroll system (SAP, ADP), the statements might auto-feed there, or they might be exported manually.
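Two of those statement-stage checks can be expressed directly in SQL: coverage (every non-error result has a statement) and duplicates. Column names are assumed from the pattern in this lesson:

```sql
-- Coverage check: participants with a calculated result but no statement
SELECT r.participant_id, r.period_id
FROM   cs_results r
WHERE  r.period_id = 5001
AND    r.status <> 'ERROR'
AND    NOT EXISTS (SELECT 1
                   FROM   cs_statement s
                   WHERE  s.participant_id = r.participant_id
                   AND    s.period_id = r.period_id);

-- Duplicate check: more than one statement per participant per period
SELECT participant_id, period_id, COUNT(*) AS stmt_count
FROM   cs_statement
WHERE  period_id = 5001
GROUP  BY participant_id, period_id
HAVING COUNT(*) > 1;
```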
Pipeline Run Concepts: Full vs. Incremental
Two run modes control what data the pipeline processes:
- Full Run: Reprocesses all transactions for the entire period. All existing credits and results for that period are cleared (or archived), and the pipeline starts from scratch. Use this when you discover data errors and need to recalculate everything. Full runs are slow on large datasets (millions of transactions) but are the safest option.
- Incremental Run: Processes only new or changed transactions since the last run. The system tracks which transactions have been processed (via a watermark or hash) and only feeds new ones through the pipeline. Existing credits and results are left alone. Use this for daily or weekly runs when you only have a small delta of new transactions. Much faster than full runs.
Your choice depends on the scenario:
- Running nightly with new CRM deals? Use incremental.
- Running at month-end close when all transactions are finalized? Use full run to ensure accuracy.
- Found a bug in the classification logic for a specific product line? Fix the bug, then run full run for that product to recalculate.
- A large transaction was rejected and now needs to be reprocessed? Use incremental with manual transaction insertion.
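The trade-off between the two modes is visible in the run log itself. Assuming START_TIME and END_TIME on CS_PIPELINE_RUN are DATE columns (so subtraction yields days), this compares average durations by run type:

```sql
-- Average duration of completed runs, full vs. incremental
SELECT run_type,
       COUNT(*) AS runs,
       ROUND(AVG((end_time - start_time) * 1440), 1) AS avg_mins
FROM   cs_pipeline_run
WHERE  status = 'COMPLETED'
GROUP  BY run_type;
```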
Monitoring Pipeline Runs with Oracle SQL
Every pipeline run creates a record in the CS_PIPELINE_RUN table. This table is your audit log and monitoring dashboard. Key fields:
- RUN_ID — unique identifier for the run
- PLAN_ID — which plan was processed (can be NULL for all-plan runs)
- PERIOD_ID — which period
- RUN_TYPE — FULL or INCREMENTAL
- STATUS — STARTED, IN_PROGRESS, COMPLETED, FAILED, PARTIAL
- START_TIME, END_TIME — run duration
- TOTAL_ROWS, PROCESSED_ROWS, ERROR_ROWS — counts at each stage
- ERROR_MESSAGE — if STATUS = FAILED, the reason
```sql
-- Check the status of the last 10 pipeline runs
SELECT run_id, plan_id, period_id, run_type, status,
       start_time, end_time,
       ROUND((end_time - start_time) * 1440, 2) AS duration_mins,
       total_rows, processed_rows, error_rows
FROM   cs_pipeline_run
ORDER  BY start_time DESC
FETCH  FIRST 10 ROWS ONLY;

-- Find all failed runs in the last 7 days
SELECT run_id, plan_id, period_id, status, error_message,
       start_time, end_time
FROM   cs_pipeline_run
WHERE  status = 'FAILED'
AND    start_time >= TRUNC(SYSDATE - 7)
ORDER  BY start_time DESC;

-- Get a summary of the last full run for plan 1001
SELECT run_id, status, total_rows, processed_rows, error_rows,
       ROUND((1 - error_rows / NULLIF(total_rows, 0)) * 100, 2) AS success_rate_pct
FROM   cs_pipeline_run
WHERE  plan_id = 1001
AND    run_type = 'FULL'
ORDER  BY start_time DESC
FETCH  FIRST 1 ROW ONLY;
```
Finding and Diagnosing Failed Records
When a transaction fails in the pipeline, it's marked with a STATUS and usually an error code. The primary place to look is the CS_CREDIT table, which holds all transaction records with their statuses. A credit in error state has never made it to CS_RESULTS, so the participant won't get paid on that transaction.
```sql
-- List all credits with errors in a specific period
SELECT credit_id, participant_id, plan_id, amount, credit_date,
       status, error_message, created_date
FROM   cs_credit
WHERE  period_id = 5001
AND    status IN ('ERROR', 'HOLD')
ORDER  BY created_date DESC;

-- Count errors by type
SELECT error_message, COUNT(*) AS count
FROM   cs_credit
WHERE  period_id = 5001
AND    status = 'ERROR'
GROUP  BY error_message
ORDER  BY count DESC;

-- Find unmatched transactions (participant not found)
SELECT credit_id, participant_id, plan_id, amount, source_ref
FROM   cs_credit
WHERE  status = 'UNMATCHED'
AND    period_id = 5001
AND    participant_id IS NULL;
```
Common Pipeline Failures
The same issues come up repeatedly. Here's what to watch for:
Participant Not Found (Unmatched Transactions)
The classification stage tries to look up the transaction owner in CS_PARTICIPANT and can't find them. Causes: the participant hasn't been hired yet (HR sync hasn't run), the participant was terminated and removed from the system, or the source reference (deal ID) is corrupted and doesn't map to anyone. Fix: verify the participant exists in CS_PARTICIPANT, check that the source system ID matches the mapping in CS_TRANSACTION, trigger HR sync if needed, or manually insert the missing participant record (only as an emergency measure).
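A quick existence check, assuming STATUS, HIRE_DATE, and TERMINATION_DATE columns on CS_PARTICIPANT and a SOURCE_REF column on the staging table (adjust to your layout):

```sql
-- Does the participant exist, and are they active?
SELECT participant_id, status, hire_date, termination_date
FROM   cs_participant
WHERE  participant_id = 2001;

-- If that returns nothing, see which source references are failing
-- to map to anyone, and how often
SELECT source_ref, COUNT(*) AS txn_count
FROM   cs_transaction
WHERE  status = 'UNMATCHED'
GROUP  BY source_ref
ORDER  BY txn_count DESC;
```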
Plan Not Active
The transaction is matched to a plan, but the plan's status is 'Inactive' or the plan's EFFECTIVE_END date has passed. The credit stage rejects it. Fix: activate the plan (if it should be active) or update the EFFECTIVE_END date. If the plan is correctly inactive, you need to reassign the participant to a different active plan.
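To confirm the diagnosis, check the plan's status and effective window first. The UPDATE below is a sketch of the fix only: the column names are assumptions, the date is illustrative, and a change like this belongs under change control with a backup taken first.

```sql
-- Inspect the plan's status and effective dates
SELECT plan_id, status, effective_start, effective_end
FROM   cs_comp_plan
WHERE  plan_id = 1002;

-- If the plan should still be live, extend the end date
-- (illustrative value; set the real plan end date before running)
UPDATE cs_comp_plan
SET    effective_end = DATE '2025-03-31'
WHERE  plan_id = 1002;
COMMIT;
```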
Period Not Open
The transaction is credited to a period, but the period status is 'Locked' or 'Final'. No new transactions can be added once a period is locked. Fix: change the period status back to 'Open', reprocess the transaction in an incremental run, or create an adjustment in a future period.
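Checking the period follows the same pattern. CS_PLAN_PERIOD and its STATUS column are assumed names here; note that locked periods often require an administrative unlock through the UI rather than a direct UPDATE, so confirm the supported procedure before touching the table:

```sql
-- Confirm the period's current status before deciding how to fix it
SELECT period_id, status
FROM   cs_plan_period
WHERE  period_id = 5001;
```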
Missing or Zero Quota
The participant is assigned to the plan but has no quota (NULL or 0 in CS_PLAN_PARTICIPANT). The calculation stage fails because many formulas divide by quota (e.g., % of quota). Fix: assign a quota via CS_PLAN_PARTICIPANT or check if the participant should even be in this plan. Some plans have zero-quota handling (flat bonus, no attainment curve), so check the plan's calculation rules.
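Catching missing quotas before the run avoids these calculation failures entirely. QUOTA is an assumed column name on CS_PLAN_PARTICIPANT:

```sql
-- Participants on a plan with no usable quota
SELECT participant_id, plan_id, quota
FROM   cs_plan_participant
WHERE  plan_id = 1001
AND    (quota IS NULL OR quota = 0);
```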
Divide-by-Zero in Rate Tables
The calculation engine applies a rate table (e.g., commission % for 0-50% attainment, 50-100%, >100%). If the attainment denominator is zero (no sales activity), division fails. This is usually handled with NULLIF or CASE logic, but custom calculation code might not. Fix: check the rate table definition and calculation logic, add NULL safety, or mark the result as zero.
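A NULL-safe attainment calculation looks like this: NULLIF converts a zero quota to NULL, so the division returns NULL instead of raising ORA-01476 (divisor is equal to zero), and COALESCE maps the missing result to 0% attainment. The join and column names are assumptions for illustration:

```sql
-- Attainment % per participant, safe against zero or NULL quotas
SELECT c.participant_id,
       COALESCE(SUM(c.amount) / NULLIF(pp.quota, 0), 0) * 100 AS attainment_pct
FROM   cs_credit c
JOIN   cs_plan_participant pp
       ON  pp.participant_id = c.participant_id
       AND pp.plan_id = c.plan_id
WHERE  c.period_id = 5001
AND    c.status = 'CREDITED'
GROUP  BY c.participant_id, pp.quota;
```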
Performance Bottlenecks (Full Runs on Large Data)
A full run processes millions of transactions. The pipeline can take hours or time out. Causes: missing database indexes on CS_CREDIT.participant_id or CS_CREDIT.plan_id, slow join to CS_PARTICIPANT or CS_COMP_PLAN, unoptimized calculation SQL. Fix: run an incremental run instead (much faster), add indexes to the staging tables, profile the calculation query, or run the pipeline in smaller batches (by plan, by business unit).
```sql
-- Check how many credits are in error state vs. processed
SELECT status, COUNT(*) AS count
FROM   cs_credit
WHERE  period_id = 5001
GROUP  BY status
ORDER  BY count DESC;

-- Check whether participant/plan keys are indexed
SELECT index_name, column_name
FROM   all_ind_columns
WHERE  table_name = 'CS_CREDIT'
AND    column_name IN ('PARTICIPANT_ID', 'PLAN_ID')
ORDER  BY index_name;
```
Diagnostic Workflow
When the pipeline fails, follow this systematic approach:
- Check CS_PIPELINE_RUN. Look at the last run's status. Is it FAILED or PARTIAL? What's the error message? How many rows processed vs. errors?
- Check CS_CREDIT errors. Query all credits with STATUS = 'ERROR' or 'HOLD' in the affected period. Group by error message to see if it's a systematic issue or isolated transactions.
- Check master data. If the error is "participant not found," verify the participant exists in CS_PARTICIPANT and is Active. If "plan not active," check CS_COMP_PLAN status. If "period locked," check CS_PLAN_PERIOD.
- Check the source data. If the classification failed, verify the source reference (deal ID, booking ID) exists in the source system and maps to the correct participant. Check for data quality issues in the staging table.
- Check calculation logic. If the error is in CS_RESULTS, examine the plan's calculation rules for edge cases (zero quota, missing rate table, etc.). Run the calculation manually on a small sample to isolate the issue.
- Reprocess. Once you've fixed the underlying issue, either reprocess the specific failed credits (if the system supports targeted reprocessing) or run a full pipeline run for the period.
Real-World Example: A Failed Month-End Run
It's April 1st. You run the pipeline for March (period 5001). You expect 50,000 credits to process. The run completes, but the status is PARTIAL. You see: 50,000 total rows, 47,500 processed, 2,500 errors. Finance is breathing down your neck — they need statements for payroll.
You query for error credits. The error message is: "Plan 1002 not active." You check CS_COMP_PLAN for plan 1002: status is 'Inactive,' EFFECTIVE_END was March 15. That explains it — 2,500 transactions came in for plan 1002 after the plan ended. The plan should have extended to March 31. You update the EFFECTIVE_END date, run an incremental pipeline run, and the 2,500 credits now process cleanly. Finance has their statements on time.
The lesson: always validate your plan dates before a period closes. Build a pre-pipeline checklist: plans active? Participants assigned? Quotas set? Periods open? Catch these issues before the run, not after.
Monitoring Best Practices
- Set up alerts. Log into the Callidus UI or write a query to check CS_PIPELINE_RUN status daily. Alert your team if a run fails or if error_rows > 1% of total_rows.
- Archive old runs. Keep CS_PIPELINE_RUN lean. Archive or purge runs older than 12 months to keep query performance snappy.
- Track error rates. Build a dashboard showing error_rows / total_rows over time. A trending increase signals a data quality issue.
- Test in staging. Before running the pipeline on production data, test the plan logic and configuration in a staging environment. Run a test pipeline with historical data to validate.
- Document your run schedule. Is the pipeline full or incremental? When does it run? Who monitors it? Document it so the next person doesn't guess.
What Comes Next
Now you understand the pipeline as a workflow. The next lesson dives deep into the CS_ table schema: what tables exist, how they relate, and how to reverse-engineer the data model when SAP's documentation is sparse. You'll learn to query and join CS_ tables like a veteran Callidus DBA.