Packages: Organising PL/SQL Code
A package is a container for related procedures, functions, and variables. It has two parts: a spec (specification) and a body (implementation).
Think of the spec as a contract: "I provide these procedures and functions." The body is the implementation: "Here's how they work." Code that calls the package only sees the spec — it doesn't know or care about the implementation details.
Callidus Commissions uses packages extensively. You'll see packages like CS_CALC_PKG (calculation logic), CS_EXTRACT_PKG (data extraction), CS_VALIDATE_PKG (data validation). Each package groups related compensation logic.
Creating a Package: Spec and Body
Here's a simple package for commission utilities:
```sql
-- PACKAGE SPEC: Public interface
CREATE OR REPLACE PACKAGE cs_commission_utils AS
    -- Public procedure: calculate commissions for a period
    PROCEDURE calc_period_commissions (p_period_id IN VARCHAR2, p_count OUT NUMBER);

    -- Public function: get tier based on quota attainment
    FUNCTION get_tier (p_part_id IN NUMBER, p_period IN VARCHAR2) RETURN VARCHAR2;

    -- Public variable: package-level constant
    c_max_commission CONSTANT NUMBER := 500000;
END cs_commission_utils;
/

-- PACKAGE BODY: Implementation
CREATE OR REPLACE PACKAGE BODY cs_commission_utils AS
    -- Private procedure (not visible to callers)
    PROCEDURE log_calculation (p_msg IN VARCHAR2) AS
    BEGIN
        DBMS_OUTPUT.PUT_LINE('[CS_COMMISSION] ' || p_msg);
    END log_calculation;

    -- Implement the public procedure
    PROCEDURE calc_period_commissions (p_period_id IN VARCHAR2, p_count OUT NUMBER) AS
        -- local variables and implementation
    BEGIN
        DBMS_OUTPUT.PUT_LINE('Calculating commissions for ' || p_period_id);
        p_count := 100;  -- Simplified
        log_calculation('Done');
    END calc_period_commissions;

    -- Implement the public function
    FUNCTION get_tier (p_part_id IN NUMBER, p_period IN VARCHAR2) RETURN VARCHAR2 AS
    BEGIN
        RETURN 'GOLD';  -- Simplified
    END get_tier;
END cs_commission_utils;
/

-- CALL the public procedure
DECLARE
    v_count NUMBER;
BEGIN
    cs_commission_utils.calc_period_commissions('2026-Q1', v_count);
    DBMS_OUTPUT.PUT_LINE('Processed: ' || v_count);
END;
/
```
Key points:
- Package spec: Declares procedures, functions, and public variables. Callers see only these.
- Package body: Contains the implementation and can also have private procedures/functions/variables (declared only in the body, not in the spec). Callers cannot see or call private code.
- Procedure invocation: Call public procedures with `package_name.procedure_name(params)`.
- Package constants: Define them in the spec with `CONSTANT`. They're visible to all callers.
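Since `c_max_commission` lives in the spec, any caller can read it by its qualified name — a minimal sketch against the `cs_commission_utils` package above:

```sql
-- Reference a public package constant from calling code
DECLARE
    v_proposed NUMBER := 600000;  -- hypothetical commission amount
BEGIN
    IF v_proposed > cs_commission_utils.c_max_commission THEN
        DBMS_OUTPUT.PUT_LINE(
            'Exceeds the cap of ' || cs_commission_utils.c_max_commission
        );
    END IF;
END;
/
```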
Benefits of Packages
- Encapsulation: Hide implementation details. Change the body without affecting callers.
- Namespace: Group related code. `cs_commission_utils.get_tier()` is clearer than just `get_tier()`.
- Performance: The whole package is loaded into memory on first reference and stays cached, so subsequent calls to any of its subprograms avoid a reload.
- Maintainability: Large systems (like Callidus) use packages to organise thousands of lines of code.
BULK COLLECT: Fetch Multiple Rows Efficiently
BULK COLLECT fetches multiple rows into a collection (array) in one database round-trip, instead of row-by-row with a cursor. This is 10x–100x faster for large datasets.
```sql
DECLARE
    -- Define collection types: arrays to hold IDs and amounts
    TYPE t_part_ids IS TABLE OF CS_PARTICIPANT.ID%TYPE;
    TYPE t_amounts  IS TABLE OF CS_RESULTS.RESULT_AMOUNT%TYPE;

    -- Declare collection variables
    v_part_ids t_part_ids;
    v_amounts  t_amounts;
BEGIN
    -- Fetch all Q1 results into arrays (one database round-trip)
    SELECT PARTICIPANT_ID, RESULT_AMOUNT
    BULK COLLECT INTO v_part_ids, v_amounts
    FROM CS_RESULTS
    WHERE PERIOD_ID = '2026-Q1';

    -- Note: SELECT ... BULK COLLECT never raises NO_DATA_FOUND;
    -- an empty result simply leaves the collections with COUNT = 0
    IF v_part_ids.COUNT = 0 THEN
        DBMS_OUTPUT.PUT_LINE('No results found');
        RETURN;
    END IF;

    DBMS_OUTPUT.PUT_LINE('Fetched ' || v_part_ids.COUNT || ' rows');

    -- Iterate the arrays (no more database calls)
    FOR i IN 1..v_part_ids.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE(
            'Part ' || v_part_ids(i) || ': $' || v_amounts(i)
        );
    END LOOP;
END;
/
```
Key concepts:
- `TYPE ... IS TABLE OF`: Define a collection type (an array).
- `BULK COLLECT INTO`: Fetch all rows from a query into arrays in one round-trip.
- `.COUNT`: Get the number of elements in the array.
- Access elements with parentheses: `v_part_ids(i)`, `v_amounts(i)`.
FORALL: Bulk DML Operations
FORALL executes DML (UPDATE, INSERT, DELETE) operations in bulk. Combined with BULK COLLECT, it's the performance pattern for processing thousands of rows.
```sql
DECLARE
    -- Collections: IDs to update and new amounts
    TYPE t_ids     IS TABLE OF CS_RESULTS.ID%TYPE;
    TYPE t_amounts IS TABLE OF CS_RESULTS.RESULT_AMOUNT%TYPE;

    v_ids     t_ids;
    v_amounts t_amounts;
BEGIN
    -- Step 1: fetch IDs and amounts with BULK COLLECT
    SELECT ID, RESULT_AMOUNT
    BULK COLLECT INTO v_ids, v_amounts
    FROM CS_RESULTS
    WHERE PERIOD_ID = '2026-Q1'
      AND STATUS = 'DRAFT';

    DBMS_OUTPUT.PUT_LINE('Fetched ' || v_ids.COUNT || ' rows');

    -- Step 2: update all rows in bulk with FORALL
    FORALL i IN 1..v_ids.COUNT
        UPDATE CS_RESULTS
        SET RESULT_AMOUNT  = v_amounts(i) * 1.05,  -- 5% increase
            STATUS         = 'PROCESSED',
            PROCESSED_DATE = SYSDATE
        WHERE ID = v_ids(i);

    -- After FORALL, SQL%ROWCOUNT is the total across all iterations
    DBMS_OUTPUT.PUT_LINE('Updated ' || SQL%ROWCOUNT || ' rows');
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
/
```
Why FORALL is fast: A cursor loop switches between the PL/SQL engine and the SQL engine once per row (10,000 context switches for 10,000 rows). FORALL binds the entire collection and hands the DML to the SQL engine in a single call. One bulk call instead of 10,000 individual UPDATE statements is where the speedup comes from.
The Ultimate Pattern: BULK COLLECT + FORALL
The performance sweet spot is BULK COLLECT (fetch) + FORALL (DML). Here's a realistic Callidus scenario:
```sql
-- OLD WAY: Cursor loop (SLOW on large datasets)
DECLARE
    CURSOR cur_results IS
        SELECT ID, RESULT_AMOUNT
        FROM CS_RESULTS
        WHERE PERIOD_ID = '2026-Q1';
BEGIN
    FOR rec IN cur_results LOOP
        UPDATE CS_RESULTS
        SET RESULT_AMOUNT = rec.RESULT_AMOUNT * 1.05
        WHERE ID = rec.ID;  -- 1 database call per row!
    END LOOP;
END;
/

-- NEW WAY: BULK COLLECT + FORALL (FAST on large datasets)
DECLARE
    TYPE t_ids     IS TABLE OF CS_RESULTS.ID%TYPE;
    TYPE t_amounts IS TABLE OF CS_RESULTS.RESULT_AMOUNT%TYPE;

    v_ids     t_ids;
    v_amounts t_amounts;
    v_start   NUMBER;
    v_end     NUMBER;
BEGIN
    v_start := DBMS_UTILITY.GET_TIME;

    -- Fetch all at once
    SELECT ID, RESULT_AMOUNT
    BULK COLLECT INTO v_ids, v_amounts
    FROM CS_RESULTS
    WHERE PERIOD_ID = '2026-Q1';

    -- Update all at once (one bulk call, not row-by-row)
    FORALL i IN 1..v_ids.COUNT
        UPDATE CS_RESULTS
        SET RESULT_AMOUNT = v_amounts(i) * 1.05
        WHERE ID = v_ids(i);

    COMMIT;

    v_end := DBMS_UTILITY.GET_TIME;
    DBMS_OUTPUT.PUT_LINE(
        'Processed ' || v_ids.COUNT || ' rows in ' ||
        (v_end - v_start) || ' centiseconds'
    );
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
/
```
On a typical Callidus dataset with 10,000 Q1 results, the cursor loop approach takes 30–60 seconds. The BULK COLLECT + FORALL approach takes 2–5 seconds. That's a 10–20x speedup.
SAVE EXCEPTIONS: Partial Failure Handling
If a FORALL operation hits an error (e.g., a unique constraint violation), the entire FORALL stops. SAVE EXCEPTIONS allows partial failure — continue processing rows even if some fail, then report which ones failed.
```sql
DECLARE
    TYPE t_part_ids IS TABLE OF NUMBER;
    TYPE t_tiers    IS TABLE OF VARCHAR2(20);

    v_part_ids t_part_ids := t_part_ids(1001, 1002, 1003, 1004, 1005);
    v_tiers    t_tiers    := t_tiers('GOLD', 'SILVER', 'BRONZE', 'GOLD', 'SILVER');

    v_error_count PLS_INTEGER := 0;
    v_sql_code    NUMBER;
    v_error_msg   VARCHAR2(500);

    -- FORALL ... SAVE EXCEPTIONS raises ORA-24381 when any row failed
    e_bulk_errors EXCEPTION;
    PRAGMA EXCEPTION_INIT(e_bulk_errors, -24381);
BEGIN
    -- FORALL with SAVE EXCEPTIONS: continue even if some rows fail
    FORALL i IN 1..v_part_ids.COUNT SAVE EXCEPTIONS
        UPDATE CS_COMP_SUMMARY
        SET COMMISSION_TIER = v_tiers(i)
        WHERE PARTICIPANT_ID = v_part_ids(i);

    DBMS_OUTPUT.PUT_LINE('Updated ' || SQL%ROWCOUNT || ' rows');
    COMMIT;
EXCEPTION
    -- Catch the bulk-error exception raised after the FORALL finishes
    WHEN e_bulk_errors THEN
        -- Loop through the saved exceptions and report each failure
        FOR i IN 1..SQL%BULK_EXCEPTIONS.COUNT LOOP
            v_error_count := v_error_count + 1;
            v_sql_code    := SQL%BULK_EXCEPTIONS(i).ERROR_CODE;
            v_error_msg   := SQLERRM(-v_sql_code);
            DBMS_OUTPUT.PUT_LINE(
                'Error updating participant ' ||
                v_part_ids(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX) ||
                ': ' || v_error_msg
            );
        END LOOP;
        DBMS_OUTPUT.PUT_LINE('Total errors: ' || v_error_count);
        COMMIT;  -- Commit the rows that succeeded
END;
/
```
How it works:
- `SAVE EXCEPTIONS` tells FORALL to keep going when an UPDATE fails. Each error is recorded, and ORA-24381 is raised once the FORALL finishes.
- After FORALL completes, `SQL%BULK_EXCEPTIONS` is a collection of the errors. Each entry has an ERROR_CODE and an ERROR_INDEX (which iteration failed).
- Loop through the exceptions to log or handle each failure. Commit the successful rows even if some failed.
Collection Limits: LIMIT Clause
For very large datasets, fetching all 100,000 rows at once into memory can be inefficient. Use LIMIT to fetch in batches:
```sql
DECLARE
    CURSOR cur_all_results IS
        SELECT ID, RESULT_AMOUNT
        FROM CS_RESULTS
        WHERE PERIOD_ID = '2026-Q1';

    TYPE t_ids     IS TABLE OF CS_RESULTS.ID%TYPE;
    TYPE t_amounts IS TABLE OF CS_RESULTS.RESULT_AMOUNT%TYPE;

    v_ids        t_ids;
    v_amounts    t_amounts;
    v_batch_size PLS_INTEGER := 1000;  -- Process 1000 rows at a time
    v_total_rows PLS_INTEGER := 0;
BEGIN
    OPEN cur_all_results;
    LOOP
        -- Fetch up to 1000 rows at a time
        FETCH cur_all_results
        BULK COLLECT INTO v_ids, v_amounts
        LIMIT v_batch_size;

        -- Exit if no more rows
        EXIT WHEN v_ids.COUNT = 0;

        -- Process this batch
        FORALL i IN 1..v_ids.COUNT
            UPDATE CS_RESULTS
            SET RESULT_AMOUNT = v_amounts(i) * 1.05,
                STATUS        = 'PROCESSED'
            WHERE ID = v_ids(i);

        COMMIT;  -- Commit this batch before fetching the next

        v_total_rows := v_total_rows + v_ids.COUNT;
        DBMS_OUTPUT.PUT_LINE('Processed ' || v_total_rows || ' rows so far');
    END LOOP;
    CLOSE cur_all_results;

    DBMS_OUTPUT.PUT_LINE('Total: ' || v_total_rows || ' rows');
EXCEPTION
    WHEN OTHERS THEN
        IF cur_all_results%ISOPEN THEN
            CLOSE cur_all_results;
        END IF;
        DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
/
```
LIMIT controls memory usage. Even processing 100,000 rows, you only hold 1,000 in memory at a time. This is the pattern for processing huge Callidus data volumes without exhausting memory or hitting session timeouts.
Comparison: Performance Benchmarks
| Approach | Dataset Size | Time (approx) | Best For |
|---|---|---|---|
| Cursor loop | 1,000 rows | 3–5 sec | Small, complex per-row logic |
| Cursor loop | 10,000 rows | 30–60 sec | (Avoid if possible) |
| BULK COLLECT + FORALL | 1,000 rows | 0.5 sec | Fast, batch processing |
| BULK COLLECT + FORALL | 10,000 rows | 2–5 sec | 10–20x faster than cursors |
| BULK COLLECT with LIMIT | 100,000 rows | 20–30 sec | Very large datasets, managed memory |
| Set-based SQL (pure UPDATE) | 10,000+ rows | < 1 sec | Simple transformations (fastest) |
Key Takeaways
- Packages organise related procedures and functions into a single unit. Use specs (public interface) and bodies (implementation).
- BULK COLLECT fetches multiple rows into arrays in one database round-trip. Use it whenever you need to process multiple rows.
- FORALL performs DML operations in bulk. Combined with BULK COLLECT, it's 10x–100x faster than cursor loops on large datasets.
- SAVE EXCEPTIONS allows partial failure — continue processing rows even if some fail.
- LIMIT on BULK COLLECT manages memory when processing huge datasets. Fetch and process in batches.
- In Callidus Commissions, if you're processing > 1,000 rows, use BULK COLLECT + FORALL. If processing > 100,000 rows, use LIMIT to batch.
- When possible, use set-based SQL (one UPDATE statement) instead of PL/SQL loops. It's often 10x faster.
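The set-based row in the benchmark table has no example above, so here is a minimal sketch: the same 5% uplift from the earlier examples done in one statement, assuming the CS_RESULTS columns used throughout this section.

```sql
-- Set-based alternative: one UPDATE, no PL/SQL loop at all.
-- The SQL engine processes every matching row in a single statement.
UPDATE CS_RESULTS
SET RESULT_AMOUNT  = RESULT_AMOUNT * 1.05,
    STATUS         = 'PROCESSED',
    PROCESSED_DATE = SYSDATE
WHERE PERIOD_ID = '2026-Q1'
  AND STATUS = 'DRAFT';

COMMIT;
```

Prefer this form whenever the per-row logic can be expressed in SQL; reach for BULK COLLECT + FORALL only when it can't.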