TL;DR
Most type mappings are straightforward: VARCHAR2 → NVARCHAR, NUMBER → DECIMAL. One gotcha dominates: Oracle DATE has a time component, HANA DATE does not. This causes silent data loss on migrated data.
Character encoding: Oracle is locale-specific (NLS_CHARACTERSET). HANA is always Unicode (CESU-8). Implicit conversions differ. Data validation is critical before cutover.
Test your data migration script on a production copy first. Don't discover type conversion issues on day 1 of cutover.

Complete Type Mapping Table

Oracle Type | HANA Equivalent | Size / Precision | Notes
VARCHAR2(n) | NVARCHAR(n) | n characters (Unicode) | HANA is always Unicode. VARCHAR2 sizes are bytes by default (NLS_LENGTH_SEMANTICS); NVARCHAR sizes are characters. One Euro symbol is 1 character in NVARCHAR but multiple bytes in a byte-semantics VARCHAR2.
CHAR(n) | NCHAR(n) | Fixed-length | Rarely used in modern systems. Prefer variable-length.
NUMBER(p,s) | DECIMAL(p,s) | Precision p, scale s | Direct equivalent. DECIMAL is fixed-point (accurate for money).
NUMBER (no p,s) | DECIMAL(38,10) or BIGINT | See notes | If used for integers, use BIGINT. If used for decimals, choose the DECIMAL size from a data audit.
INTEGER | INTEGER | -2^31 to 2^31-1 (HANA) | Not identical: Oracle's INTEGER is an ANSI alias for NUMBER(38); HANA's is a true 32-bit integer. Audit the value range first.
NUMBER(19,0) | BIGINT | -2^63 to 2^63-1 | Oracle has no native BIGINT keyword; map large-integer NUMBER columns here.
FLOAT | REAL or DOUBLE | Floating-point | Avoid for money. Use DECIMAL instead.
DATE | DATE or TIMESTAMP | See gotcha below | CRITICAL: Oracle DATE has time; HANA DATE does not.
TIMESTAMP | TIMESTAMP | Date and time | Identical concept. Be aware of timezone handling.
CLOB | NCLOB | Large text | Oracle CLOB can be byte or character; HANA NCLOB is always character (Unicode).
BLOB | BLOB | Large binary | Identical.
BOOLEAN | TINYINT (workaround) | 0 or 1 | HANA does have a BOOLEAN type, but client/tool support for it is uneven. Common portable workarounds: TINYINT, CHAR(1), or INTEGER.
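The bytes-vs-characters distinction in the VARCHAR2 row is easy to verify outside the database. A minimal Python sketch (illustrative only; the actual semantics belong to Oracle's NLS_LENGTH_SEMANTICS and HANA's NVARCHAR, not to Python):

```python
# Character length vs UTF-8 byte length: the distinction behind
# VARCHAR2(n BYTE) sizing in Oracle vs NVARCHAR(n) sizing in HANA.
samples = ["abc", "€", "Müller", "李"]
for s in samples:
    chars = len(s)                       # what NVARCHAR(n) counts
    bytes_utf8 = len(s.encode("utf-8"))  # what byte-semantics VARCHAR2(n) counts
    print(f"{s!r}: {chars} chars, {bytes_utf8} UTF-8 bytes")

# '€' is 1 character but 3 UTF-8 bytes: a VARCHAR2(1 BYTE) column
# cannot hold it, while NVARCHAR(1) can.
```

This is why "same length" in a mapping script is not automatically "same capacity".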

The DATE Gotcha: Oracle vs HANA

This is the single most important type difference. It causes silent data loss if not handled correctly.

What's the Difference?

Oracle DATE: Stores both date and time (hour, minute, second). Resolution: 1 second.

HANA DATE: Stores only date (year, month, day). Resolution: 1 day. No time component.

Real-World Example

Callidus commission_date column: you store a transaction timestamp like "2026-04-08 14:30:45" (April 8, 2:30:45 PM).

Oracle: commission_date DATE
-- Insert
INSERT INTO cs_commissions VALUES (1, TO_DATE('2026-04-08 14:30:45', 'YYYY-MM-DD HH24:MI:SS'));

-- Query returns
SELECT commission_date FROM cs_commissions;
-- Result: 2026-04-08 14:30:45

When you migrate this to HANA and use DATE type:

HANA: commission_date DATE — DATA LOSS
-- Insert (time component is silently dropped)
INSERT INTO cs_commissions VALUES (1, TO_DATE('2026-04-08 14:30:45', 'YYYY-MM-DD HH24:MI:SS'));

-- Query returns
SELECT commission_date FROM cs_commissions;
-- Result: 2026-04-08 (no time!)

The time (14:30:45) is gone. If you only ever aggregate by calendar date, that may not matter. But if you calculate time differences or filter by specific times, you've lost data.

The Solution

Audit your data first. For each column that's DATE in Oracle, ask: does it store time information that matters?

Data audit query (Oracle)
-- Check if commission_date has non-zero time components
SELECT COUNT(*)
FROM cs_commissions
WHERE TRUNC(commission_date) != commission_date;
-- If count > 0, the column has time data that you'll lose on migration

If the result is > 0, map that column to TIMESTAMP in HANA instead of DATE:

HANA: commission_date TIMESTAMP — PRESERVES TIME
INSERT INTO cs_commissions VALUES (1, TO_TIMESTAMP('2026-04-08 14:30:45', 'YYYY-MM-DD HH24:MI:SS'));

SELECT commission_date FROM cs_commissions;
-- Result: 2026-04-08 14:30:45 (time preserved)
⚠️ Silent data loss: The migration script won't fail. It will run successfully, and you won't notice the problem until business users report "the payout times are wrong" or "aggregations don't match". Always audit column by column before deciding between DATE and TIMESTAMP.
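The failure mode is easy to reproduce in miniature: converting a timestamp to a date-only value succeeds without any error, which is exactly why the migration stays green. A Python analogy (datetime → date plays the role of Oracle DATE → HANA DATE; not actual migration code):

```python
from datetime import datetime

ts = datetime(2026, 4, 8, 14, 30, 45)  # Oracle DATE: date + time
d = ts.date()                          # HANA DATE: date only
print(d)                               # 2026-04-08, with no error and no warning

# A round-trip comparison is the programmatic version of the TRUNC audit:
# rebuilding a timestamp from the date alone no longer matches the original.
assert datetime.combine(d, datetime.min.time()) != ts
```

If your validation only compares dates, it will never catch this; compare full timestamps on a sample of rows.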

NUMBER Precision and Scale Issues

Oracle's NUMBER(p,s) is flexible — you can omit s (scale) and it becomes a variable-precision decimal. HANA's DECIMAL requires both p (precision) and s (scale).

Oracle: NUMBER with ambiguous scale
CREATE TABLE payroll (
  total_comp NUMBER(12, 2),
  -- What about this one? Maximum precision, no scale specified
  employee_id NUMBER,
  tax_rate NUMBER(5)
);

In Oracle:

  • NUMBER(12,2) = max 12 digits, 2 after decimal. Maximum value: 9,999,999,999.99
  • NUMBER = variable precision: up to 38 significant digits, with scale anywhere from -84 to 127.
  • NUMBER(5) = 5 digits, no scale. So 0-99999 (used as integer).

In HANA, you must specify both:

HANA: DECIMAL with explicit scale
CREATE TABLE payroll (
  total_comp DECIMAL(12, 2),
  employee_id BIGINT,  -- Large integer, not DECIMAL
  tax_rate DECIMAL(5, 3)  -- 0.000-99.999; narrower than Oracle's NUMBER(5) (integers up to 99999), chosen from the audited data, not the declared type
);
📊 Audit recommendation: For every NUMBER column in Callidus, run: SELECT MAX(column_name), MIN(column_name) FROM table WHERE column_name IS NOT NULL to understand the actual range. Then choose DECIMAL(p,s) or BIGINT based on the data, not just the Oracle definition.
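One way to turn that audit into a type decision: look at the observed values and derive DECIMAL(p,s) or BIGINT from them. A hypothetical sizing helper (a sketch only; choose_hana_type and its rule of thumb are not part of Callidus or HANA):

```python
from decimal import Decimal

def choose_hana_type(values):
    """Rule of thumb: all-integer data fits BIGINT; otherwise size
    DECIMAL(p,s) from the widest observed value and fractional part."""
    decs = [Decimal(str(v)) for v in values]
    if all(d == d.to_integral_value() for d in decs):
        return "BIGINT"
    # scale = most digits seen after the decimal point
    scale = max(max(0, -d.as_tuple().exponent) for d in decs)
    # integer digits = most digits seen before the decimal point (at least 1)
    int_digits = max(
        max(1, len(d.as_tuple().digits) + d.as_tuple().exponent)
        for d in decs
    )
    return f"DECIMAL({int_digits + scale},{scale})"

print(choose_hana_type([1, 42, 99999]))  # BIGINT
print(choose_hana_type([1234.56, 7.1]))  # DECIMAL(6,2)
```

In practice you would feed it the MIN/MAX (or a sampled distribution) from the audit query rather than the full column.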

Character Encoding: NLS_CHARACTERSET vs Unicode

Oracle's Character Set

Oracle respects the database NLS_CHARACTERSET parameter. Common values:

  • US7ASCII: 7-bit ASCII (no accents, no international characters).
  • WE8MSWIN1252: Windows Western European (covers most Latin-based languages).
  • UTF8: Unicode, but technically CESU-8 (supplementary characters are stored as 6-byte surrogate pairs).
  • AL32UTF8: "All Languages" true UTF-8 (up to 4 bytes per character); the recommended modern choice.

Your Callidus instance probably uses UTF8 or AL32UTF8 (modern default). But if it's older, it might be WE8MSWIN1252 or US7ASCII.

HANA's Character Encoding

HANA is always Unicode (CESU-8 variant). All string types (NVARCHAR, NCHAR) are Unicode-native.

Implications

If your Callidus database is US7ASCII, special characters (é, ñ, ü, Chinese, Arabic, etc.) could never have been stored, so your data doesn't contain them. Migration is straightforward.

If your Callidus database is UTF8 or AL32UTF8, you have international characters. HANA's Unicode encoding is compatible, so migration is usually fine. But test with real data (names, descriptions, addresses with special characters).

💡 Test case: If your Callidus data includes non-ASCII names (François, José, Müller, 李, محمد), run a migration of a small batch and validate that special characters are preserved. Don't assume — test.
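A round-trip check of exactly that kind can be scripted before touching the database: encode each sample as the target would store it and compare. Python sketch (utf-8 stands in for HANA's Unicode storage, 7-bit ASCII for a US7ASCII source; the names are sample data, not from Callidus):

```python
names = ["François", "José", "Müller", "李", "محمد"]

# Unicode target: every name survives an encode/decode round trip.
for n in names:
    assert n.encode("utf-8").decode("utf-8") == n

# A 7-bit ASCII source could never have stored these in the first place:
ascii_safe = [n for n in names if n.isascii()]
print(ascii_safe)  # [] since none are representable in US7ASCII
```

The same loop, pointed at a sampled batch of real rows pulled through your migration path, is a cheap pre-cutover smoke test.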

Implicit Type Conversions

Oracle is forgiving with type conversions. HANA is stricter. This can cause migration queries to fail.

String to Number

Oracle: implicit conversion works
SELECT '100' + 50 FROM dual;  -- Converts '100' to 100, returns 150
HANA: may fail, explicit conversion needed
SELECT CAST('100' AS INTEGER) + 50 FROM DUMMY;  -- Explicit conversion

String to Date

Oracle: implicit NLS_DATE_FORMAT conversion
SELECT * FROM cs_transactions
WHERE created_date = '2026-04-08';  -- Converted using NLS_DATE_FORMAT
HANA: explicit conversion recommended
SELECT * FROM cs_transactions
WHERE created_date = TO_DATE('2026-04-08', 'YYYY-MM-DD');  -- Explicit

BOOLEAN: Limited Native Support

HANA does have a BOOLEAN type, but support for it across client drivers and reporting tools is uneven, so portable schemas usually fall back to a workaround:

Workaround | Pros | Cons
TINYINT (0/1) | Storage-efficient. Numeric comparisons work. | Not semantically clear (is 0 false or a numeric zero?). Allows invalid values like 2.
CHAR(1) ('Y'/'N') | Semantically clear. Human-readable. | Slower comparisons (string vs numeric).
INTEGER (0/1) | Clear. 32-bit standard integer. | Wastes space compared to TINYINT.

Recommendation: use TINYINT with CHECK constraint:

HANA: BOOLEAN workaround with constraint
CREATE TABLE cs_participants (
  id INTEGER,
  is_active TINYINT CHECK (is_active IN (0, 1))
);

Data Migration Script Pattern

When migrating schema, use this pattern to handle type conversions explicitly:

Data migration with explicit type conversion
-- Step 1: Create staging table with HANA types
CREATE TABLE cs_commissions_staging (
  id INTEGER,
  salesperson_id INTEGER,
  commission_amount DECIMAL(10,2),
  created_date TIMESTAMP,  -- Not DATE (preserves time)
  status NVARCHAR(20),     -- Not VARCHAR2
  is_approved TINYINT       -- Not BOOLEAN
);

-- Step 2: Insert from Oracle with explicit conversions
INSERT INTO cs_commissions_staging
SELECT
  id,
  salesperson_id,
  CAST(commission_amount AS DECIMAL(10,2)),
  CAST(created_date AS TIMESTAMP),  -- created_date is already a DATE; no character-format TO_TIMESTAMP needed
  CAST(status AS NVARCHAR(20)),
  CASE WHEN is_approved = 'Y' THEN 1 ELSE 0 END
FROM oracle_link.cs_commissions;

Migration Checklist: Data Types

  1. Audit all DATE columns: For each DATE in Oracle, run the "TRUNC != column" query to detect time components.
  2. Decide DATE vs TIMESTAMP: If time data exists and matters, use TIMESTAMP in HANA.
  3. Audit NUMBER columns: Understand max/min values to choose DECIMAL(p,s) or BIGINT.
  4. Map VARCHAR2 to NVARCHAR: Specify the same length, but understand it's now character-based (not byte-based).
  5. Handle BOOLEAN: Choose TINYINT, CHAR(1), or INTEGER. Add CHECK constraints.
  6. Validate character encoding: Test migration with data containing non-ASCII characters (if applicable).
  7. Write explicit conversion queries: Don't rely on implicit conversions. Use CAST, TO_DATE, TO_TIMESTAMP.
  8. Test data migration on copy: Migrate a production copy to HANA test environment. Validate row counts, sample values, and calculated fields match Oracle.
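Steps 1 and 3 of the checklist are mechanical enough to script: read the column catalog, emit one audit query per column. A sketch of such a generator (the emitted strings target Oracle; the helper functions and the sample table/column names are hypothetical, not a Callidus tool):

```python
def date_audit_sql(table, column):
    # Detects DATE columns whose time component is not midnight.
    return (f"SELECT COUNT(*) FROM {table} "
            f"WHERE TRUNC({column}) != {column}")

def number_audit_sql(table, column):
    # Reveals the actual value range behind a NUMBER declaration.
    return (f"SELECT MIN({column}), MAX({column}) FROM {table} "
            f"WHERE {column} IS NOT NULL")

# One audit statement per (table, column) pair, e.g. pulled from
# Oracle's USER_TAB_COLUMNS where DATA_TYPE is 'DATE' or 'NUMBER':
for tbl, col in [("cs_commissions", "commission_date")]:
    print(date_audit_sql(tbl, col))
```

Feeding the generator from USER_TAB_COLUMNS guarantees no column is skipped, which is the whole point of "audit column by column".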

Post-Migration Validation

After migrating data, run these queries to validate:

Row count validation
-- Oracle
SELECT COUNT(*) FROM cs_commissions;  -- e.g., 5000000

-- HANA
SELECT COUNT(*) FROM cs_commissions;  -- Must be 5000000
Aggregate validation (sample dates)
-- Oracle (TRUNC strips the time component; a bare BETWEEN would miss
-- Jan 31 rows whose stored time is after midnight)
SELECT COUNT(*) FROM cs_commissions
WHERE TRUNC(created_date) BETWEEN TO_DATE('2026-01-01', 'YYYY-MM-DD')
  AND TO_DATE('2026-01-31', 'YYYY-MM-DD');

-- HANA — should match
SELECT COUNT(*) FROM cs_commissions
WHERE CAST(created_date AS DATE)
  BETWEEN TO_DATE('2026-01-01', 'YYYY-MM-DD')
  AND TO_DATE('2026-01-31', 'YYYY-MM-DD');
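The two-sided counts above generalize into a reconciliation step: collect the same metrics from both systems and diff them. A minimal sketch (the helper and the metric names/values are illustrative, not Callidus output):

```python
def reconcile(oracle_stats, hana_stats):
    """Return the metric names whose Oracle and HANA values disagree."""
    return sorted(k for k in oracle_stats if oracle_stats[k] != hana_stats.get(k))

# Hypothetical metrics gathered by running the validation queries on each side:
oracle = {"rows_total": 5_000_000, "rows_2026_01": 412_338}
hana   = {"rows_total": 5_000_000, "rows_2026_01": 412_300}
print(reconcile(oracle, hana))  # ['rows_2026_01']: investigate before cutover
```

An empty result is your go/no-go signal; any mismatch points straight at the table and date range to re-audit.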