Two types of transactions can be differentiated when updating a commonDB reference table:
Each transaction follows one of two flows, depending on whether the update content exceeds 1000 rows.
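The size-based routing described above can be sketched as follows. This is a minimal, hypothetical illustration: the 1000-row threshold is taken from the text, while the `route_update` helper and the returned dictionary shape are assumptions, not Fabric's actual API.

```python
ROW_THRESHOLD = 1000  # cutoff between the two flows, per the text above

def route_update(rows):
    """Hypothetical sketch of the flow decision.

    Small updates travel inline in the Kafka message payload; large
    ones are written to Cassandra, and the Kafka message carries only
    a notification that references the stored content.
    """
    if len(rows) <= ROW_THRESHOLD:
        return {"transport": "kafka", "payload": rows}
    return {"transport": "cassandra", "payload_ref": f"{len(rows)}-row bulk"}

print(route_update([("id1", "v1")])["transport"])  # short content: kafka
print(route_update([None] * 2500)["transport"])    # long content: cassandra
```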
The following illustration shows how a Synchronisation Job (Sync Job 2) publishes an update notification, together with a short message content, on the Kafka Queue dedicated to Table 1. All listening nodes in the cluster then write the update directly from Kafka to their own SQLite CommonDB copy.
The following illustration shows how a Synchronisation Job (Sync Job 1) publishes an update message on the Kafka Queue dedicated to Table T and writes the long message content to Cassandra. Each listening node in the cluster then writes the update's content directly from Cassandra into its own SQLite CommonDB copy.
Any transaction involving the common table is performed in asynchronous mode: the updated data cannot be seen until it has been committed and Fabric has updated the relevant commonDB table. Moreover, each node performs the update in its own time.
The transaction message is sent to Kafka, while its content is saved either in Kafka (within the message payload) or in a Cassandra keyspace, depending on its size.
Regardless of the synchronization type (background or on-demand), Fabric provides two different modes for synchronizing Reference table data.
This mode is selected by default when any row update to the reference table is needed. In this mode, updates are performed as Create/Update/Delete SQL queries directly on the table itself. Each node executes the change locally on its SQLite commonDB copy as a single transaction.
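The single-transaction behaviour on each node can be sketched with SQLite's own transaction handling. This is an illustrative sketch, not Fabric code: the `apply_update` helper, the table name `ref_table`, and the statement list are all hypothetical.

```python
import sqlite3

def apply_update(conn, statements):
    """Apply a batch of DML statements as one local transaction
    (hypothetical sketch of a node updating its commonDB copy)."""
    with conn:  # commits on success, rolls back the whole batch on error
        for sql, params in statements:
            conn.execute(sql, params)

# Each node would run this against its own SQLite commonDB copy.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ref_table (id INTEGER PRIMARY KEY, val TEXT)")
apply_update(conn, [
    ("INSERT INTO ref_table VALUES (?, ?)", (1, "a")),
    ("UPDATE ref_table SET val = ? WHERE id = ?", ("b", 1)),
])
print(conn.execute("SELECT val FROM ref_table WHERE id = 1").fetchone()[0])  # b
```

Wrapping the statements in `with conn:` means a failing statement rolls back the entire batch, matching the "single transaction" semantics described above.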
For example, assume an update consists of running 2500 insert commands. The inserts are divided into bulks of up to 1000 commands each, giving three bulks of 1000, 1000 and 500, and each bulk is written to Cassandra. One transaction message is then sent to Kafka per table and per transaction.
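The bulk division in this example can be reproduced with a short sketch. The `split_into_bulks` helper is hypothetical; only the 1000-command bulk size and the 2500-insert example come from the text.

```python
BULK_SIZE = 1000  # maximum commands per bulk, per the example above

def split_into_bulks(commands, bulk_size=BULK_SIZE):
    """Split a list of insert commands into bulks of at most bulk_size
    (hypothetical sketch of the bulk division described above)."""
    return [commands[i:i + bulk_size]
            for i in range(0, len(commands), bulk_size)]

bulks = split_into_bulks([f"INSERT #{n}" for n in range(2500)])
print([len(b) for b in bulks])  # [1000, 1000, 500]
```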
A snapshot will only be published once one of the following actions is triggered:
Each node performs the following snapshot synchronization: