The K2View TDM has the following components:
The TDM web application offers self-service implementation of the following activities:
TDM settings and tasks are kept in the TDM PostgreSQL DB. Both the TDM GUI and Fabric connect to the TDM DB to get or update TDM settings or tasks.
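As a rough illustration of this split, the sketch below shows an external client reading task definitions directly from the TDM PostgreSQL DB over JDBC. The JDBC URL, credentials, table name (tdm_tasks) and column names are illustrative assumptions only and do not reflect the actual TDM schema.

```java
// Minimal sketch of a client reading task definitions from the TDM PostgreSQL DB.
// The connection details and the tdm_tasks table/columns are assumptions for
// illustration; consult the TDM DB schema for the real structure.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TdmDbReader {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://tdm-db-host:5432/TDMDB"; // assumed host and DB name
        try (Connection conn = DriverManager.getConnection(url, "tdm_user", "secret");
             Statement stmt = conn.createStatement();
             // Hypothetical table and columns - the real TDM schema may differ.
             ResultSet rs = stmt.executeQuery(
                     "SELECT task_id, task_title, task_status FROM tdm_tasks")) {
            while (rs.next()) {
                System.out.printf("Task %d (%s): %s%n",
                        rs.getLong("task_id"),
                        rs.getString("task_title"),
                        rs.getString("task_status"));
            }
        }
    }
}
```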
Fabric acts as a staging DB for the provisioned entities and as an ETL layer for extracting data from the data sources and loading it into the target environment.
When a TDM task runs, data for the selected entities is stored and synchronized in Fabric according to the LU definitions. Fabric creates and maintains a separate MicroDB for each entity (LUI). This has several advantages (a short query sketch follows the list below):
Convenience - Encapsulates the data of a business entity in one place so that it can be queried by consumers (many business entities have data residing in multiple data sources).
Security - Individual encryption at the MicroDB or field level enables more robust security.
Masking capabilities - Sensitive data can be masked when entities are stored.
Flexibility - Sync policies can be defined flexibly, based on business needs.
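The sketch below illustrates how a consumer might query a single entity's MicroDB through Fabric's JDBC interface. The connection string, LU name (Customer), instance ID and table name are assumptions for illustration; consult the Fabric documentation for the exact driver, URL format and command syntax.

```java
// Illustrative sketch: load one entity (LUI) into its MicroDB and query it via
// Fabric's JDBC interface. The Fabric JDBC driver is assumed to be on the
// classpath; the endpoint, LU name, instance ID and table are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MicroDbQuery {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:fabric://fabric-host:5124", "user", "password"); // assumed URL format
             Statement stmt = conn.createStatement()) {
            // Sync the requested entity into its MicroDB (assumed command syntax).
            stmt.execute("GET Customer.123456");
            // Query one of the entity's tables (assumed table name).
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM Customer.ORDERS")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```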
Reference or operational tables that need to be copied as-is can be extracted from the source environment and saved to Cassandra under the k2view_tdm keyspace. These tables can later be loaded into selected target environments.
For more information, see TDM Reference Handling.
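To make the keyspace concrete, here is a minimal sketch that lists the reference tables captured under the k2view_tdm keyspace using the DataStax Java driver. The contact point, port and datacenter name are assumptions; only the keyspace name comes from the text above.

```java
// Minimal sketch: list the tables stored under the k2view_tdm keyspace.
// Host, port and datacenter name are assumed values for illustration.
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;
import java.net.InetSocketAddress;

public class ReferenceTableList {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("cassandra-host", 9042)) // assumed host
                .withLocalDatacenter("datacenter1")                             // assumed DC name
                .build()) {
            ResultSet rs = session.execute(
                    "SELECT table_name FROM system_schema.tables WHERE keyspace_name = 'k2view_tdm'");
            for (Row row : rs) {
                System.out.println(row.getString("table_name"));
            }
        }
    }
}
```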
In general, data provisioning can be divided into two main stages: task creation and task execution.
The following diagram displays the TDM task creation and execution processes:
Fabric runs a batch process that executes the pending execution requests.
A dedicated Fabric process checks for completed executions and updates the TDM DB with each execution's status and statistics. In addition, Fabric receives information and statistics on executed tasks and saves them in the Fabric TDM LU.
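The following sketch illustrates the polling pattern described above: pick up pending execution requests from the TDM DB, run them, and write back status and statistics. It is a conceptual example only; the table and column names (tdm_task_executions, status, stats) are hypothetical, and the real Fabric batch process works differently.

```java
// Conceptual sketch of the execution-polling pattern: read pending requests,
// run them, and report status/statistics back to the TDM DB.
// All table and column names below are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ExecutionPoller {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://tdm-db-host:5432/TDMDB"; // assumed host and DB name
        try (Connection conn = DriverManager.getConnection(url, "tdm_user", "secret")) {
            // 1. Fetch pending execution requests (hypothetical query).
            try (PreparedStatement select = conn.prepareStatement(
                         "SELECT execution_id FROM tdm_task_executions WHERE status = 'PENDING'");
                 ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    long executionId = rs.getLong("execution_id");
                    // 2. Trigger the actual provisioning work here (omitted).
                    // 3. Report completion status and statistics back to the TDM DB.
                    try (PreparedStatement update = conn.prepareStatement(
                            "UPDATE tdm_task_executions SET status = ?, stats = ? WHERE execution_id = ?")) {
                        update.setString(1, "COMPLETED");
                        update.setString(2, "{\"copied_entities\": 0}"); // placeholder statistics
                        update.setLong(3, executionId);
                        update.executeUpdate();
                    }
                }
            }
        }
    }
}
```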