The K2view TDM has the following components:
The TDM web application is pre-integrated into the Fabric Web Framework and offers a self-service implementation of the following activities:
TDM settings and tasks are kept in the TDM PostgreSQL DB. Both TDM layers, the backend and frontend, connect to the TDM DB to get or update TDM settings or tasks.
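The read/update pattern against the TDM DB can be sketched as follows. This is an illustrative sketch only: the real TDM DB is PostgreSQL and its schema is product-defined, so the `tdm_task_execution` table and its columns below are hypothetical, and SQLite stands in for PostgreSQL to keep the example self-contained.

```python
import sqlite3

# Hypothetical stand-in for the TDM PostgreSQL DB; table and column names
# are illustrative, not the actual TDM schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tdm_task_execution (
        task_id      INTEGER,
        status       TEXT,     -- e.g. 'pending', 'running', 'completed'
        entity_count INTEGER
    )
""")
conn.execute("INSERT INTO tdm_task_execution VALUES (101, 'pending', 0)")

# Both layers (backend and frontend) would issue reads/updates like these:
pending = conn.execute(
    "SELECT task_id FROM tdm_task_execution WHERE status = 'pending'"
).fetchall()

conn.execute(
    "UPDATE tdm_task_execution SET status = 'running' WHERE task_id = ?",
    (pending[0][0],),
)
status = conn.execute(
    "SELECT status FROM tdm_task_execution WHERE task_id = 101"
).fetchone()[0]
print(status)  # running
```

In the actual deployment the same reads and updates would go through a PostgreSQL client, with both the web application and the Fabric backend sharing the single TDM DB as the source of truth for settings and task state.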
Fabric acts as a staging DB for the provisioned entities and as an ETL layer for extracting data from data sources and loading it into the target environment.
Additionally, the TDM backend APIs and processes, which are included in the TDM library, are defined and executed within Fabric.
When running a TDM task, data from the selected entities is stored and synchronized in Fabric according to the LU definitions. Fabric creates and maintains a separate MicroDB for each entity (LUI). This has several advantages:
A TDM task can provision selected tables with or without Business Entities. The tables are extracted from the source environment and can be stored in Fabric; stored tables can later be loaded into selected target environments.
Click here for more information about TDM Tables implementation.
An organization's systems and environments can be located in different geographic locations. This topology requires data transmission between distant locations.
Example:
One of the main challenges in transmitting data over a network is maintaining performance: obtaining data from a remote location can be time-consuming.
K2view TDM architecture ensures efficient and fast data transmission between different locations. The following diagram is an example of the TDM architecture in a multi-DC topology:
In general, data provisioning is divided into two main sections:
The following diagram displays the TDM task creation and execution processes:
Fabric runs a batch process that executes pending execution requests for TDM tasks. A separate batch process is initiated for each of the task's LUs and for its post-execution process.
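The fan-out described above can be sketched as follows. This is a simplified illustration of the pattern, not the actual Fabric implementation: the request structure, LU names, and the `run_lu_batch` / `run_post_execution` helpers are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pending execution requests: each TDM task may span several
# LUs plus a post-execution step, and a separate batch runs for each.
pending_requests = [
    {"task_id": 7, "lus": ["Customer", "Orders"], "post_execution": True},
]

def run_lu_batch(task_id, lu):
    # Placeholder for the real per-LU batch (extract, sync, load).
    return f"task {task_id}: LU {lu} done"

def run_post_execution(task_id):
    # Placeholder for the task's post-execution process.
    return f"task {task_id}: post-execution done"

results = []
with ThreadPoolExecutor() as pool:
    for req in pending_requests:
        # One batch per LU, run in parallel...
        futures = [pool.submit(run_lu_batch, req["task_id"], lu)
                   for lu in req["lus"]]
        results += [f.result() for f in futures]
        # ...then the post-execution step once the LU batches finish.
        if req["post_execution"]:
            results.append(run_post_execution(req["task_id"]))

print(results)
```

The point of the sketch is the structure: the LU batches of one task are independent of each other and can proceed concurrently, while the post-execution step waits for all of them to complete.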
Click here for more information about how the TDM generates entity lists on the task's LUs.
A dedicated Fabric process checks for completed executions and updates the execution status and statistics in the TDM DB accordingly. Additionally, Fabric receives information and statistics about executed tasks and saves them in the Fabric TDM LU.
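The status-update step can be sketched as follows. Again, this is a simplified illustration: the execution records, field names, and the in-memory dictionary standing in for the TDM DB are hypothetical, not the actual Fabric or TDM interfaces.

```python
# Hypothetical execution records as a monitoring process might see them.
fabric_executions = [
    {"task_id": 7, "state": "completed",
     "stats": {"entities": 120, "failed": 2}},
    {"task_id": 8, "state": "running", "stats": {}},
]

# In-memory stand-in for the corresponding TDM DB rows.
tdm_db = {7: {"status": "running"}, 8: {"status": "running"}}

def sync_completed(executions, db):
    """Copy status and statistics of finished executions into the TDM DB."""
    for ex in executions:
        if ex["state"] == "completed":
            db[ex["task_id"]].update(status="completed", stats=ex["stats"])
    return db

sync_completed(fabric_executions, tdm_db)
```

Only completed executions are written back; tasks still running are left untouched until a later polling cycle picks them up.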