TDM enables users to provision tables in a TDM task. To do this, users can select one of two options:
Either store the tables in Fabric for later use, or set the task's retention period to Do not retain to load the tables directly to the target environment without saving them to Fabric.
Each table is stored in Fabric, within the TDM_TableLevel LU, as a separate LU Instance (LUI). Each task execution creates a separate LUI, i.e., a separate data snapshot. For example, running two executions of a task that extracts the Product_Mapping table creates two LUIs in the TDM_TableLevel LU in Fabric.
The LUI format is as follows:
[source environment name]|[DB interface name]|[DB schema name]|[table name]|[task execution id]
Examples:
SRC|CRM_DB|public|case_note|102822
SRC|CRM_DB|public|case_note|102826
SRC|BILLING_DB|public|contract_offer_mapping|102826
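The LUI name is simply the pipe-delimited concatenation of the five components above. As an illustrative sketch (plain Python, not a TDM API), the key can be built and parsed like this:

```python
# Illustrative sketch only: composing and parsing the TDM_TableLevel LUI key.
# The field order follows the documented format; none of this is TDM code.

def build_lui(env: str, interface: str, schema: str, table: str, execution_id: int) -> str:
    """Join the five LUI components with the '|' delimiter."""
    return "|".join([env, interface, schema, table, str(execution_id)])

def parse_lui(lui: str) -> dict:
    """Split a LUI key back into its named components."""
    env, interface, schema, table, execution_id = lui.split("|")
    return {"env": env, "interface": interface, "schema": schema,
            "table": table, "execution_id": int(execution_id)}

lui = build_lui("SRC", "CRM_DB", "public", "case_note", 102822)
print(lui)  # SRC|CRM_DB|public|case_note|102822
```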
Each LUI contains the following tables:
TDM_REF_ROOT
A dynamic SQLite table, which has the same structure as the source table and contains the extracted table records. The naming convention for this table is:
__t_<source table name>
For example, the insert statement into the dynamic table:
/*sqlite*/ insert into TDM_TableLevel.__t_case_note (case_id,note_id,note_date,note_text) values(?,?,?,?);
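Outside Fabric, the same dynamic-table pattern can be sketched with plain SQLite. The table and column names below mirror the case_note example above; this is a standalone illustration, not Fabric code:

```python
import sqlite3

# Standalone sketch of the dynamic-table pattern using plain SQLite.
# In Fabric the table lives inside a TDM_TableLevel LUI; here we only
# mimic the __t_<source table name> naming and the parameterized insert.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE __t_case_note (
        case_id   INTEGER,
        note_id   INTEGER,
        note_date TEXT,
        note_text TEXT
    )
""")
rows = [(1, 10, "2024-01-01", "first note"),
        (1, 11, "2024-01-02", "second note")]
conn.executemany(
    "INSERT INTO __t_case_note (case_id, note_id, note_date, note_text) VALUES (?,?,?,?)",
    rows)
count = conn.execute("SELECT COUNT(*) FROM __t_case_note").fetchone()[0]
print(count)  # 2
```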
A TDM table-level implementation contains the following steps:
Import the TDM_TableLevel LU and deploy it.
This step is required for Entities & referential data tasks. The list of available referential tables for a TDM task that contains a Business Entity and referential data is populated in the RefList MTable object. Populate the RefList with a list of all available related tables for each LU. The following settings should be populated for each record:
lu_name — populated by the LU name to allow selection of the related table in a TDM task based on the task's LUs.
id — an incrementing number.
reference_table_name — populated with the table name in the source environment.
schema_name — populated with the name of the source DB schema that stores the table.
interface_name — the table's source interface.
target_ref_table_name — an optional parameter. Populate it when the table names in the source and target differ. If not provided, the target table name is taken from the reference_table_name field.
target_schema_name — populated with the name of the target DB that stores the table.
target_interface_name — the table's target interface.
table_pk_list — an optional setting. Populated with the list of the target table's PK fields in the RefList object. These fields can be used later for customizing the load flow to run an Upsert on the target table.
truncate_indicator — by default, TDM runs a delete on the table in the target environment before loading it. If you have permission to run a truncate on the target table and need to use the truncate instead of delete (e.g., the target DB is Cassandra), set this indicator to true.
count_indicator — set to true by default, so that TDM counts the records in the source and target as a way to monitor the task execution. Set this indicator to false, if required, to disable counting records in the target.
Note that from TDM V9.3.1 onwards, the schema_name and target_schema_name fields can be populated with either of the following options:
Schema name.
Global name. Add a @ sign before and after the Global name to indicate that the schema name should be taken from the Global's value, for example: @CUSTOMER_SCHEMA_NAME@. Using a Global to populate the schema is useful when different environments have different schema names.
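To illustrate, a hypothetical RefList record might look as follows. The column names follow the settings listed above; the CSV layout, LU name, interface names, and table name are illustrative only, not the exact MTable file format:

```csv
lu_name,id,reference_table_name,schema_name,interface_name,target_ref_table_name,target_schema_name,target_interface_name,table_pk_list,truncate_indicator,count_indicator
CRM_LU,1,case_note,public,CRM_DB,,@CUSTOMER_SCHEMA_NAME@,CRM_DB_TARGET,"case_id,note_id",false,true
```

Here target_ref_table_name is left empty, so the target table name is taken from reference_table_name, and the target schema is resolved from the CUSTOMER_SCHEMA_NAME Global per environment.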
TDM V9.1 enables adding tables to the RefList MTable in order to support different interface, schema, or table names in the source and target environments for table-level tasks. To configure such different settings, set the lu_name to TDM_TableLevel.
The TDM table flow uses Fabric Catalog masking. You can edit the PII settings in the Catalog when required.
Run the Discovery job on the table's interfaces. Following the job completion, the interface metadata is retrieved from the Catalog.
The TableLevelInterfaces MTable enables implementors to control which tables can be selected in a task and to define special handling rules for a given DB.
By default, this MTable is populated with the 'TDM' and 'POSTGRESQL_ADMIN' interfaces in order to prevent the TDM tasks from selecting the TDM tables. It is possible to populate additional DB interfaces, either to exclude them from table selection in the TDM task or to apply special handling to their tables. Each DB interface requires its own record. The following settings should be populated for each record:
interface_name — the interface name defined in the TDM project implementation.
suppress_indicator — when set to true, the DB tables are excluded from the selection of tables in a TDM task; when set to false, the interface's tables can be selected in a TDM task.
truncate_indicator — by default, TDM runs a delete on the table in the target environment before loading it. If you have permission to run a truncate on the target table and need to use the truncate instead of delete (e.g., the target DB is Cassandra), set this indicator to true.
count_indicator — set to true by default, so that TDM counts the records in the source and target as a way to monitor the task execution. Set this indicator to false, if required, to disable counting records in the target.
order_flow — an optional setting. Populate this setting to run a project's Broadway flow. The flow defines customized logic for determining the table execution order. The order flow must have an external output Map named result, which contains the list of tables and their order. An example of an output Map with the execution order:
{
"customer": 0,
"address": 1
}
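The order flow itself is a Broadway flow, but the ordering logic it implements can be sketched in plain Python. In this sketch (not Broadway code), hypothetical FK dependencies between tables are resolved with a topological sort to produce the result Map:

```python
# Sketch of ordering logic an order_flow might implement (not Broadway code).
# 'depends_on' maps each table to the tables that must be loaded before it;
# the table names and dependencies are hypothetical.
from graphlib import TopologicalSorter

depends_on = {
    "customer": [],            # no parents: load first
    "address": ["customer"],   # FK to customer: load after it
}
order = {table: position
         for position, table in enumerate(TopologicalSorter(depends_on).static_order())}
print(order)  # {'customer': 0, 'address': 1}
```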
TDM V9.1 introduces the TableLevelDefinitions MTable, which enables setting a customized logic for selected tables.
A customized flow can be added to a table's extract, load, or delete processes. The implementor can apply it to all activities (extract, delete, and load) or only to specific ones. This feature provides the following capabilities:
Custom masking for selected fields (not Catalog-based).
Extracting or loading large volumes of data that require third-party tools, such as DB2move.
Impacting the table execution order.
The following settings should be populated for each record:
interface_name — the interface name defined in the TDM project implementation.
schema_name — the DB schema. Can be populated with either of the following options:
Schema name.
Global name. Add a @ sign before and after the Global name to indicate that the schema name should be taken from the Global's value, for example: @CUSTOMER_SCHEMA_NAME@. Using a Global to populate the schema is useful when different environments have different schema names.
table_name — populated with the table name. If this setting is empty, the customized flows run on all the tables in the interface and schema.
extract_flow — populated with the customized extract flow.
table_order — populated with a number. The table order defined in the TableLevelDefinitions MTable has the highest priority and can override the order defined in the TableLevelInterfaces MTable.
delete_flow — populated with the customized delete flow.
load_flow — populated with the load flow.
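As an illustration, a hypothetical TableLevelDefinitions record could look like this. The column names follow the settings above; the CSV layout, interface name, and flow names are illustrative assumptions, not the exact MTable file format:

```csv
interface_name,schema_name,table_name,extract_flow,table_order,delete_flow,load_flow
CRM_DB,@CUSTOMER_SCHEMA_NAME@,case_note,CustomExtractCaseNote,1,,CustomLoadCaseNote
```

This record would run the customized extract and load flows on the case_note table only, leave the delete process with its default behavior, and force case_note to run first.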
The installation of a K2exchange connector adds a dedicated TableLevelDefinitions file for the connector.
Example — TableLevelDefinitions___mongodb:
Note that when the data source is based on a connector, you must set the task's retention period to Do not retain. This ensures that the tables are loaded directly to the target environment, without being saved to Fabric.
The customized table flows are Broadway flows. These flows must be added under the Shared Objects in the Project tree.
The Catalog Masking Actor is invoked after the extract flow execution.
Setting customized masking logic on tables:
The image below (not reproduced here) depicts an example flow that loops over the selected address records and applies the customized masking logic to each record.
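The custom masking capability can be sketched in plain Python. This is not a Broadway Actor; the field and record names are hypothetical, and deterministic hashing is just one possible masking strategy (it keeps equal inputs mapped to equal masked values, which preserves joinability across tables):

```python
import hashlib

# Plain-Python sketch of non-Catalog field masking (not a Broadway Actor).
# Deterministic hashing maps equal inputs to equal masked tokens; the salt,
# field names, and record layout are hypothetical.
def mask_value(value: str, salt: str = "tdm-demo-salt") -> str:
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return digest[:12]  # short, stable token in place of the real value

record = {"case_id": 1, "note_text": "customer phoned about invoice 42"}
record["note_text"] = mask_value(record["note_text"])
print(record["note_text"])
```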